\section{Introduction}
This paper studies the stability of traveling wave solutions to scalar hyperbolic equations of the form
\begin{equation}
\label{hypAC}
\tau u_{tt} + g(u,\tau) u_t = u_{xx} + f(u),
\end{equation}
where $u$ is a scalar, $x \in \mathbb{R}$, $t > 0$, and $\tau \geq 0$ is a constant. Note that \eqref{hypAC} is a nonlinear wave equation with a ``damping term'', $g$, and a nonlinear reaction term $f$. Hyperbolic equations of this form often support traveling wave solutions, also called traveling fronts, which are special solutions describing coherent structures that propagate along a particular direction with a certain wave speed. In a previous contribution \cite{LMPS16}, we analyzed the existence and stability of propagating fronts for a one-dimensional model which is a particular case of equation \eqref{hypAC}, called the \textit{Allen-Cahn equation with relaxation}. The motivation for the present study is to explore both the existence and the stability of such configurations for a wider class of equations, which arises in other contexts.
We make the following assumptions. First, the reaction function $f :
\mathbb{R} \to \mathbb{R}$ is supposed to be of \textit{bistable type}\footnote{also called of Nagumo \cite{NAY62,McKe70}, or
Allen-Cahn \cite{AlCa79} type.}, that is, $f \in C^2([0,1];\mathbb{R})$ has two stable
equilibria at $u=0, u=1$, and one unstable equilibrium point at $u = \alpha \in (0,1)$, more precisely,
\begin{equation}
\label{H1}
\tag{H1}
\begin{aligned}
&f(0)=f(\alpha)=f(1)=0,
&\qquad &f'(0), f'(1)<0,\quad f'(\alpha)>0,\\
&f(u)>0\textrm{ for all } \, u \in(\alpha,1),
&\qquad &f(u)<0\textrm{ for all } \, u \in (0,\alpha),
\end{aligned}
\end{equation}
for a certain $\alpha \in (0,1)$. A well-known example is the cubic polynomial
\begin{equation}
\label{cubicf}
f(u)= u(1-u)(u-\alpha),
\end{equation}
with $\alpha \in (0,1)$.
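Although our setting is purely analytical, hypotheses \eqref{H1} are easy to verify numerically for the cubic nonlinearity. The following minimal sketch (with the illustrative value $\alpha=0.25$, not taken from the text) checks the zeros and the sign conditions using a centered finite difference for $f'$:

```python
# Sanity check (illustrative only) that the cubic f(u) = u(1-u)(u-alpha)
# satisfies the structural hypotheses (H1).
def f(u, alpha=0.25):
    return u * (1.0 - u) * (u - alpha)

def fprime(u, alpha=0.25, h=1e-6):
    # centered finite difference approximation of f'
    return (f(u + h, alpha) - f(u - h, alpha)) / (2.0 * h)

alpha = 0.25
# three equilibria
assert f(0.0, alpha) == f(alpha, alpha) == f(1.0, alpha) == 0.0
# u = 0 and u = 1 are stable, u = alpha is unstable
assert fprime(0.0, alpha) < 0 and fprime(1.0, alpha) < 0
assert fprime(alpha, alpha) > 0
# sign conditions: f > 0 on (alpha, 1), f < 0 on (0, alpha)
assert all(f(u, alpha) > 0 for u in [0.3, 0.6, 0.9])
assert all(f(u, alpha) < 0 for u in [0.05, 0.15, 0.2])
```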
Reaction functions of bistable type arise in many models of natural phenomena, such as kinetics of
biomolecular reactions (cf. Mikha{\u\i}lov \cite{Mik94}), nerve conduction (see, e.g.,
Lieberstein \cite{Lbr67a}, McKean \cite{McKe70}) and electrothermal instability (cf. Iz\'us \textit{et al.}
\cite{IDRWZB95}). In terms of continuous descriptions of the spread of biological populations, it is
often applied to kinetics exhibiting positive growth rate for population densities over a threshold
value ($u > \alpha$), and decay for densities below such value ($u < \alpha$). The latter is often
described as the \textit{Allee effect}, in which aggregation can improve the survival rate of
individuals (see Murray \cite{MurI3ed}).
Secondly, we are going to assume that the damping coefficient $g = g(u,\tau)$ in equation
\eqref{hypAC} is regular enough and strictly positive. More precisely, we suppose that for some fixed value
$\tau_m>0$, there holds
\begin{equation}
\label{H2}
\tag{H2}
g \in C^1(\mathbb{R} \times [0, \tau_m]), \quad \text{and} \quad \inf\left\{g(u, \tau) : u\in\mathbb{R},\;
\tau\in (0,\tau_m)\right\}\geq \delta_0>0,
\end{equation}
for some $\delta_0 > 0$ independent of $\tau_m$.
Assumption \eqref{H2} is an extension of the previously studied case of the Allen-Cahn model with relaxation
\cite{LMPS16}, where
\begin{equation}
\label{ACrelax}
g(u,\tau) = 1 - \tau f'(u),
\end{equation}
and with $\tau > 0$ bounded above by the characteristic relaxation time associated to the reaction,
\[
0 \leq \tau < \tau_m := \frac{1}{\max_{u \in [0,1]} |f'(u)|},
\]
for which, clearly, $g(u,\tau) > 0$. If $F$ is an antiderivative such that $F'=-f$ with $F(0)=0$, that is,
\begin{equation*}
F(u):=-\int_{0}^{u} f(v)\,dv,
\end{equation*}
then $F$ can be interpreted as the Allen-Cahn two-well potential (see Figure \ref{figpotAC}).
\begin{figure}[htb]
\begin{center}
\includegraphics[width=5.75cm]{bistable}
\includegraphics[width=5.75cm]{twowells}
\end{center}
\caption{\footnotesize The bistable cubic function $f(u)=u(1-u)(u-0.4)$ (left)
and the corresponding two-well potential $F$ (right).\label{figpotAC}}
\end{figure}
Another example of interest is the \textit{nonlinear telegrapher's
equation} \cite{Holm1}, where
\begin{equation}
\label{telegr}
g(u,\tau) \equiv 1,
\end{equation}
for all $u \in \mathbb{R}$ and $\tau \geq 0$.
\begin{remark}
There exist situations where the appearance of a diffusion coefficient $\varepsilon > 0$ in \eqref{hypAC},
\[
\tau u_{tt} + g(u,\tau) u_t = \varepsilon u_{xx} + f(u),
\]
is important, for example, in the study of slow motion of solutions or their \textit{metastability}
\cite{FLM17,Fol17}, when $0 < \varepsilon \ll 1$ is supposed to be small. For the problem of existence and stability of fronts, however,
the size of
$\varepsilon$ plays no role, and by rescaling the space variable, $x\mapsto x/\sqrt{\varepsilon}$, we recover
equation
\eqref{hypAC}. Therefore, our analysis also applies to the more general model with arbitrary (constant)
diffusion and we can work with equation \eqref{hypAC} directly without loss of generality.
\end{remark}
In this paper we establish the spectral stability of traveling fronts for \eqref{hypAC} under the sole structural assumptions \eqref{H1} and \eqref{H2}, which include many models in population dynamics, microstructures and relaxation mechanisms, among others. In Section \ref{secexist} we prove that traveling fronts exist and describe their most important features and properties. Section \ref{secperturb} contains the perturbation problem and describes how to formulate a natural spectral problem (after linearization of the equation around the front), whose analysis encodes the most fundamental stability properties. We show that there exist two different but equivalent ways to formulate the spectral problem. In Section \ref{secess} we analyze the asymptotic systems associated to the perturbed equations and locate the essential spectrum. Section \ref{secptsp} contains the proof that the point spectrum is stable (via energy estimates in the frequency regime), the proof of simplicity of the eigenvalue zero associated to translations, as well as the statement of our main result (see Theorem \ref{mainthm}). Finally, in Section \ref{secdisc} we make some concluding remarks.
\section{Structure of traveling fronts}
\label{secexist}
In this section we review the existence theory and structural properties of front solutions to equations of
the form \eqref{hypAC}. In a recent contribution, Gilding and Kersner \cite{GiKe15} established the necessary
and sufficient conditions for the existence of traveling wave solutions to equation \eqref{hypAC} with
reaction function of bistable type under the assumption of positive damping $g > 0$. The authors make use of an
integral equation approach. For completeness, in this section we present an existence result which applies a
different technique based on the computation of the index of a rotating vector field of the dynamical system
with respect to the velocity (in the sense of Perko \cite{Per1}); this proof resembles our previous analysis
in the particular case of the relaxed Allen-Cahn model \cite{LMPS16}. With this approach we are able to
derive further structural properties, such as the exponential decay of the solutions and a variational formula for the
(unique) wave speed, which are not available from the integral formulation in \cite{GiKe15}.
\subsection{Existence}
\label{sect:existence}
We look for solutions to \eqref{hypAC} of the form
\begin{equation*}
u(x,t)=U(\xi)\quad\textrm{with}\quad\xi=x-ct,
\qquad\textrm{and}\qquad
U(-\infty)=0,\quad
U(+\infty)=1.
\end{equation*}
Substituting into \eqref{hypAC}, we obtain the equation
\begin{equation}
\label{twode0}
(1-c^2\tau)U''+c\,g(U,\tau)U'+f(U)=0,
\end{equation}
where $' \, := d/d\xi$.
\begin{proposition}\label{prop:properties}
Let assumptions \eqref{H1} and \eqref{H2} be satisfied,
and let $U=U(\xi)$ be a solution to \eqref{twode0} together with the asymptotic
conditions $U(-\infty)=0$ and $U(+\infty)=1$. Then,
\par
\textit{(i)} (speed sign) the velocity $c$ has the same sign as $-\int_{0}^{1} f(u)\,du$;\par
\textit{(ii)} (subcharacteristic condition) the velocity $c$ necessarily satisfies
\begin{equation}
\label{subchar}
c^2\tau < 1.
\end{equation}
\end{proposition}
\begin{proof}
\smartqed
(i) Multiplying equation \eqref{twode0} by $U'$ and integrating over $\mathbb{R}$, we obtain
\begin{equation*}
c\int_{\mathbb{R}} g(U,\tau)\left|U'\right|^2\,d\xi=F(1)-F(0),
\end{equation*}
where $F'=-f$. Thus, since $g(U,\tau) > 0$, we have $\mathrm{sgn}(c) = \mathrm{sgn}(F(1) - F(0)) = \mathrm{sgn}\bigl(-\int_0^1 f(u)\,du\bigr)$.
(ii) The case $c = 0$ is trivial. Suppose $c > 0$ and multiply equation \eqref{twode0} by $U'$. This yields
\[
(1-c^2 \tau)U'' U' + cg(U,\tau)|U'|^2 + f(U) U' = 0.
\]
Since $f = -F'$, the last equation is equivalent to
\begin{equation}
\label{eqfive}
\Big( \tfrac{1}{2}(1-c^2 \tau) |U'|^2 - F(U) \Big)' + cg(U,\tau) |U'|^2 = 0.
\end{equation}
Integrate equation \eqref{eqfive} in $(\xi,+\infty)$, to
obtain
\begin{equation}
\label{launo}
\tfrac{1}{2}(1-c^2 \tau) |U'(\xi)|^2 = F(U(\xi)) - F(1) + c \int_{\xi}^{+\infty} g(U(s),\tau)|U'(s)|^2
\, ds,
\end{equation}
and choose $\xi \gg 1$, large enough so that $U(\xi) \in (\alpha, 1)$ (as $U(+\infty) = 1$).
Since $f(u) > 0$ for $u \in (\alpha, 1)$ and $U(\xi) \in (\alpha, 1)$, clearly
\[
F(U(\xi)) - F(1) = \int_{U(\xi)}^1 f(s) \, ds > 0.
\]
Since we are assuming $c > 0$ and since $g(U,\tau) > 0$, the right-hand side of \eqref{launo} is positive, yielding $1 > c^2 \tau$. The case $c<0$ can be treated similarly.
\qed \end{proof}
\begin{remark}
Notice that if $F(0)=F(1)$, then the speed $c$ is necessarily zero and the equation for the profile reduces
to the one for traveling waves for the parabolic Allen-Cahn equation.
\end{remark}
We now prove an auxiliary result.
\begin{proposition}\label{prop:auxode}
Let assumptions \eqref{H1} - \eqref{H2} be satisfied.
Then there exists a unique value $\gamma\in\mathbb{R}$, denoted by $\gamma_\ast=\gamma_\ast(\tau)$,
such that the equation
\begin{equation}\label{auxode}
V''+\gamma\,g(V,\tau)V'+f(V)=0
\end{equation}
has a monotone increasing solution, $V=V(\xi)$ with asymptotic limits $V(-\infty)=0$ and $V(+\infty)=1$.
\end{proposition}
The proof of Proposition \ref{prop:auxode} consists of showing that there exists a heteroclinic
connection between the singular points $(V,V')=(0,0)$ and $(V,V')=(1,0)$.
We follow a standard shooting argument starting from the local analysis near the asymptotic
states, and use the special dependence with respect to the parameter $\gamma$ to
show that there is a single value $\gamma_\ast$ for which there exists a connecting orbit.
The strategy closely resembles the one presented in H\"arterich and Mascia \cite{HaeMa1}.
For brevity, we drop the dependence of $g$ on $\tau$.
\begin{proof}[of Proposition \ref{prop:auxode}]
\smartqed
The second order differential equation \eqref{auxode} can be rewritten as
\begin{equation}\label{firstorder}
\left\{\begin{aligned}
V'&=\Phi(V,W;\gamma):=W,\\
W'&=\Psi(V,W;\gamma):=-f(V)-\gamma\,g(V)\,W,
\end{aligned}\right.
\end{equation}
possessing the singular points $(0,0)$, $(\alpha,0)$ and $(1,0)$.
1. Linearizing at $(\bar u,0)$, with $\bar u \in \{0,1\}$, we obtain the matrix
\begin{equation*}
\begin{pmatrix}
\partial_V \Phi & &\partial_W \Phi\\
\partial_V \Psi & &\partial_W \Psi
\end{pmatrix}
=\begin{pmatrix}
0 & &1\\
-f'(\bar u) & & \, -\gamma\,g(\bar u)
\end{pmatrix}.
\end{equation*}
In particular, since $f'(0)$ and $f'(1)$ are negative, $(0,0)$ and $(1,0)$
are saddles for \eqref{firstorder}.
The positive eigenvalue $\mu^+_0$ at $(0,0)$ and the negative eigenvalue
$\mu^-_1$ at $(1,0)$ are
\begin{equation*}
\begin{aligned}
\mu^+_0&=\frac12\left(\sqrt{(\gamma\,g(0))^2-4f'(0)}-\gamma\,g(0)\right),
\\
\mu^-_1&=-\frac12\left(\sqrt{(\gamma\,g(1))^2-4f'(1)}+\gamma\,g(1)\right).
\end{aligned}
\end{equation*}
We denote by $\mathcal{U}_0(\gamma)$ the intersection of the unstable manifold of $(0,0)$
with the set $\{(V,W)\,:\,W>0\}$, and by $\mathcal{S}_1(\gamma)$ the intersection of the
stable manifold of $(1,0)$ with the same set.
2. Let $\gamma<0$ and $\hat W>M/(c_0|\gamma|)$, where $M:=\max\{f(u)\,:\,u\in(\alpha,1)\}$ and
$c_0:=\inf\{g(u)\,:\,u\in\mathbb{R}\}\geq\delta_0>0$, by \eqref{H2}.
The solution trajectory passing through $(\alpha,\hat W)$ is the graph of the solution
$\omega=\omega(V)$ to the Cauchy problem
\begin{equation}\label{trajeq}
\frac{d\omega}{dV}=-\frac{f(V)}{\omega}-\gamma\,g(V),
\end{equation}
with initial condition $\omega(\alpha)=\hat W$.
Denote its interval of maximal existence by $I$, and observe that $\omega$ is strictly increasing in $I\cap
[\alpha,1]$.
Indeed, since $d\omega/dV(\alpha)=-\gamma\,g(\alpha)>0$, the function $\omega$ is strictly
increasing for $V\in(\alpha,\alpha+\delta)$ for some $\delta>0$.
Moreover, if $\omega>\hat W$ and $V\in[\alpha,1]$ there holds
\begin{equation*}
\frac{d\omega}{dV}\geq -\frac{M}{\hat W}+c_0|\gamma|>0,
\end{equation*}
and the claim follows from a standard continuation argument.
As a consequence, the derivative of $\omega$ is a priori bounded and
the interval $I$ contains the interval $[\alpha,1]$.
Since the vector field $(\Phi,\Psi)$ points downward along the segment $(\alpha,1)\times\{0\}$,
the curve $\mathcal{S}_1(\gamma)$ intersects the line $V=\alpha$ at some value $W_1(\gamma)\geq 0$
for $\gamma<0$.
Similar arguments show that $\mathcal{U}_0(\gamma)$ intersects the line
$V=\alpha$ at some $W_0(\gamma)$ for $\gamma>0$.
3. Since
\begin{equation*}
\det\begin{pmatrix}
\Phi & &\Psi\\
\partial_\gamma\Phi & & \partial_\gamma \Psi
\end{pmatrix}
=\det\begin{pmatrix}
W & & \, -f-\gamma\,g\,W\\
0 & & \, -g\,W
\end{pmatrix}=-g\,W^2\leq -c_0 W^2\leq 0,
\end{equation*}
the vector field defining the differential system is a \textit{rotated vector field}
with respect to the parameter $\gamma$ (see Perko \cite{Per1}).
As a consequence, the graphs $\mathcal{U}_0(\gamma)$ and $\mathcal{S}_1(\gamma)$
rotate clockwise as the parameter $\gamma$ increases.
Therefore, the map $W_{0}=W_{0}(\gamma)$ is monotone decreasing in $(0,+\infty)$
and the map $W_{1}=W_{1}(\gamma)$ is monotone increasing in $(-\infty,0)$.
4. If $\bar V$ is a relative maximum point for a solution $\omega$ to \eqref{trajeq},
then
\begin{equation*}
|\omega(\bar V)|=\frac{|f(\bar V)|}{|\gamma|\,g(\bar V)}\leq \frac{M}{c_0|\gamma|},
\end{equation*}
where now $M$ denotes the maximum of $|f|$ on $[0,1]$.
Thus, $W_0(\gamma)\to 0$ as $\gamma\to+\infty$
and $W_1(\gamma)\to 0$ as $\gamma\to-\infty$.
Following Hadeler \cite{Had1}, one can also prove that there exist values
$\gamma_0, \gamma_1$ with $\gamma_0<0<\gamma_1$ such that $W_0(\gamma_0)=0$ and
$W_1(\gamma_1)=0$.
Then, for any $\gamma\geq \gamma_0$ the trajectory $\mathcal{U}_0(\gamma)$
describes a heteroclinic connection between $0$ and $\alpha$;
similarly, for any $\gamma\leq \gamma_1$ the trajectory $\mathcal{S}_1(\gamma)$
describes a heteroclinic connection between $\alpha$ and $1$.
5. From monotonicity of $W_{0}$ and $W_{1}$, we infer that they both have limits as $\gamma\to 0$.
Additionally, the trajectory equation \eqref{trajeq} shows that such limiting values $W_0(0)$ and
$W_{1}(0)$ are finite and can be computed explicitly, taking advantage of the conserved quantity
$W^2-2F(V)$, yielding
\begin{equation*}
W_0(0)=\sqrt{2\bigl(F(\alpha)-F(0)\bigr)}
\qquad\textrm{and}\qquad
W_1(0)=\sqrt{2\bigl(F(\alpha)-F(1)\bigr)}.
\end{equation*}
Since the solution depends continuously on the parameter $\gamma$,
there exist $\gamma_0, \gamma_1$ with $-\infty\leq \gamma_0<0<\gamma_1\leq +\infty$,
such that $W_0$ is defined (and monotone decreasing) in $(\gamma_0,+\infty)$ and $W_1$
is defined (and monotone increasing) in $(-\infty,\gamma_1)$.
If $\gamma_0$ is finite, $W_0\to+\infty$ as $\gamma\to\gamma_0^+$;
similarly, if $\gamma_1$ is finite, $W_1\to+\infty$ as $\gamma\to\gamma_1^-$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=9cm, height=6.5cm]{GraphWsxWdx1}
\end{center}
\caption{\footnotesize Graphs of the curves $\mathcal{U}_0(\gamma)$ (left) and $\mathcal{S}_1(\gamma)$ (right)
in the plane $(V,W)$ for different values of $\gamma$:
dashed line $\gamma=0$; dotted line $\gamma=-0.40$; and continuous line $\gamma=-0.32$. Here we considered the case of a cubic nonlinearity $f(u)=u(1-u)(u-\alpha)$ with $\alpha=0.3$, and damping term of Cattaneo-Maxwell type, $g(u;\tau)=1-\tau f'(u)$, where $\tau=1$.
}
\end{figure}
6. Let us consider the difference function $h:=W_1-W_0$ defined in $(\gamma_0,\gamma_1)$.
As a consequence of the properties of $W_0$ and $W_1$, we infer that $h$ is continuous,
monotone increasing and such that
\begin{equation*}
\liminf_{\gamma\to\gamma_0^+} h(\gamma)<0,\quad
\liminf_{\gamma\to\gamma_1^-} h(\gamma)>0.
\end{equation*}
In particular, there exists a unique $\gamma_\ast$ such that
$W_0(\gamma_\ast)=W_1(\gamma_\ast)$.
For such a critical value, the conjunction of the curves $\mathcal{U}_0(\gamma_\ast)$ and
$\mathcal{S}_1(\gamma_\ast)$ gives the desired connection.
Uniqueness of the wave speed $\gamma_\ast$ follows from the monotonicity
of the functions $W_0$ and $W_1$.
\qed \end{proof}
\begin{remark}
Equation \eqref{auxode} arises also in the case of reaction-diffusion equations
with density-dependent diffusion
\begin{equation*}
w_t = \varphi(w)_{xx} + f(w),
\end{equation*}
where $\varphi$ is a strictly increasing function. Inserting the traveling wave profile \textit{ansatz}
$w(x,t)=W(x-\gamma t)$ and setting $V:=\varphi(W)$ yields
\begin{equation*}
\frac{d^2V}{d\eta^2}+\gamma \psi'(V)\frac{dV}{d\eta}+f(\psi(V))=0,
\end{equation*}
where $\psi$ is the inverse function of $\varphi$.
In fact, existence of heteroclinic solutions for \eqref{auxode} could be also proved by
appropriately changing the dependent variable $V$
and applying the general result proved by Engler \cite{Eng2} that relates the existence
of traveling wave solutions of reaction-diffusion equations with constant diffusion
coefficient to the ones of the density-dependent diffusion coefficient case.
\end{remark}
\begin{example}
\label{exAllenCahn}
In the special case of a nonlinear telegrapher's equation with cubic reaction function, namely,
\begin{equation}\label{fbist_g1}
g(u,\tau)=1,\qquad f(u)=\kappa\,u(1-u)(u-\alpha),
\end{equation}
we can look for the phase-plane trajectory $W = V'$ in the form $W(V) = AV(1-V)$, where $A > 0$ is a constant to be determined.
Inserting this ansatz in \eqref{auxode}, we deduce the following constraints on $A$ and $\gamma$:
\begin{equation*}
A^2+\gamma\,A-\kappa\,\alpha=0,\qquad 2A^2-\kappa=0,
\end{equation*}
giving the explicit formulas $A=\sqrt{\kappa/2}$ and
\begin{equation}\label{gast_fbist_g1}
\gamma_\ast=\gamma_{{}_{\textrm{\tiny AC}}}:=\sqrt{2\kappa}\left(\alpha-\frac12\right),
\end{equation}
which is the speed of propagation for the (parabolic) Allen--Cahn equation.
In the relaxation case $g(u,\tau)=1-\tau f'(u)$, the same simplification does not hold and
an analogous explicit formula for the critical speed $\gamma_\ast$ is not available. However, as in the case
of the standard Allen--Cahn equation, it is possible to establish
a min-max variational characterization for the critical speed $\gamma_\ast$ (cf. Hamel \cite{Hame99}; see also
\cite{MFF1}).
\end{example}
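The shooting strategy behind Proposition \ref{prop:auxode} can also be turned into a simple numerical experiment. The sketch below (illustrative only, with the hypothetical choices $\kappa=1$, $\alpha=0.3$) integrates the trajectory equation $d\omega/dV=-f(V)/\omega-\gamma$ for $g\equiv 1$, starting on the unstable manifold of $(0,0)$ with slope $\mu_0^+$, and bisects on $\gamma$ between overshoot (the trajectory survives up to $V=1$) and undershoot (it crashes to $\omega=0$); the result is compared with the value of $\gamma_\ast$ obtained from the constraints $2A^2-\kappa=0$ and $A^2+\gamma A-\kappa\alpha=0$ of the example above.

```python
# Numerical sketch (not part of the analysis): approximate gamma_* by
# shooting/bisection on the trajectory equation, for g == 1 and cubic f.
def make_f(kappa, alpha):
    return lambda u: kappa * u * (1.0 - u) * (u - alpha)

def shoot(gamma, f, mu0, V0=1e-6, Vend=1.0 - 1e-3, n=5000):
    """RK4-integrate d(omega)/dV = -f(V)/omega - gamma from the unstable
    manifold of (0,0); return final omega, or None if the orbit crashes."""
    h = (Vend - V0) / n
    V, w = V0, mu0 * V0            # local slope mu0 of the unstable manifold
    rhs = lambda V, w: -f(V) / w - gamma
    for _ in range(n):
        k1 = rhs(V, w)
        w2 = w + 0.5 * h * k1
        if w2 <= 0.0: return None
        k2 = rhs(V + 0.5 * h, w2)
        w3 = w + 0.5 * h * k2
        if w3 <= 0.0: return None
        k3 = rhs(V + 0.5 * h, w3)
        w4 = w + h * k3
        if w4 <= 0.0: return None
        k4 = rhs(V + h, w4)
        w += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        V += h
        if w <= 0.0: return None   # undershoot: trajectory crashed
    return w

def critical_speed(kappa, alpha, lo=-2.0, hi=0.0, iters=40):
    f = make_f(kappa, alpha)
    fp0 = -kappa * alpha           # f'(0) for the cubic nonlinearity
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        mu0 = 0.5 * ((mid * mid - 4.0 * fp0) ** 0.5 - mid)  # mu_0^+ with g = 1
        if shoot(mid, f, mu0) is not None:
            lo = mid               # overshoot: gamma below gamma_*
        else:
            hi = mid               # undershoot: gamma above gamma_*
    return 0.5 * (lo + hi)

kappa, alpha = 1.0, 0.3
A = (kappa / 2.0) ** 0.5                    # from 2A^2 - kappa = 0
gamma_exact = (kappa * alpha - A * A) / A   # from A^2 + gamma A - kappa alpha = 0
gamma_num = critical_speed(kappa, alpha)
assert abs(gamma_num - gamma_exact) < 2e-2  # shooting matches the explicit value
```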
\begin{proposition}[variational formula for the speed]
\label{prop:minmax}
Let assumptions \eqref{H1} - \eqref{H2} be satisfied.
Set
\begin{equation*}
\mathcal{W}:=\{W\in C^2(\mathbb{R})\,:\, W(x)\in(0,1),\; W'(x)>0\ \text{ for all } x\in\mathbb{R}\}.
\end{equation*}
Then the speed $\gamma_\ast$ defined in Proposition \ref{prop:auxode} is such that
\begin{equation}\label{minmax}
\gamma_\ast=-\inf_{W\in \mathcal W} \sup_{x\in\mathbb{R}}\frac{W''+f(W)}{g(W)\,W'}
=-\sup_{W\in \mathcal W} \inf_{x\in\mathbb{R}}\frac{W''+f(W)}{g(W)\,W'}.
\end{equation}
\end{proposition}
\begin{proof}
\smartqed
We give a sketch of the proof.
Denote by $V$ the traveling profile given by Proposition \ref{prop:auxode};
then there holds $\gamma_\ast=-\bigl(V''+f(V)\bigr)/\bigl(g(V)\,V'\bigr)$, identically in $x$.
Since $V\in\mathcal W$, we infer the inequalities
\begin{equation*}
\underline{\gamma}:= \inf_{W\in \mathcal W} \sup_{x\in\mathbb{R}}\frac{-\bigl(W''+f(W)\bigr)}{g(W)\,W'}
\leq \gamma_\ast
\leq \overline{\gamma}:=\sup_{W\in \mathcal W} \inf_{x\in\mathbb{R}}\frac{-\bigl(W''+f(W)\bigr)}{g(W)\,W'}.
\end{equation*}
If $\gamma_\ast<\overline{\gamma}$ then for any $\gamma\in(\gamma_\ast,\overline{\gamma})$,
there exists a function $W\in\mathcal W$ such that
\begin{equation*}
\inf_{x\in\mathbb{R}}\frac{-\bigl(W''+f(W)\bigr)}{g(W)\,W'}\geq \gamma.
\end{equation*}
As a consequence, we deduce
\begin{equation*}
W''+\gamma\,g(W)\,W'+f(W)\leq 0\leq
(\gamma-\gamma_\ast) g(V)\,V'=V''+\gamma\,g(V)\,V'+f(V),
\end{equation*}
showing that $W$ and $V$ are, respectively, a super- and a subsolution for
\begin{equation}\label{elliptic}
U''+\gamma\,g(U)\,U'+f(U)=0.
\end{equation}
Invoking a monotonicity argument \cite{ProWe84}, we deduce the existence of a solution
$U$ to \eqref{elliptic}
such that $V\leq U\leq W$,
thus satisfying, in particular, the asymptotic conditions $U(-\infty)=0$ and
$U(+\infty)=1$.
This contradicts the uniqueness of the speed $\gamma_\ast$
given in Proposition \ref{prop:auxode}.
Thus, $\gamma_\ast=\overline{\gamma}$. Proving in an analogous manner the equality
$\gamma_\ast=\underline{\gamma}$,
we deduce formula \eqref{minmax}.
\qed \end{proof}
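The quotient appearing in \eqref{minmax} can be illustrated concretely: in the telegrapher's case of Example \ref{exAllenCahn} (with the hypothetical choices $\kappa=1$, $\alpha=0.3$), the trajectory $V'=AV(1-V)$ integrates to the logistic profile $V(x)=1/(1+e^{-Ax})$, and along this exact profile the quotient $-(V''+f(V))/(g(V)V')$ is constant and equal to $\gamma_\ast$, as the proof above uses. A minimal numerical check:

```python
import math

# Illustration: for g == 1 and f(V) = V(1-V)(V-alpha) with kappa = 1,
# the exact front is logistic and the min-max quotient is constant.
alpha, A = 0.3, math.sqrt(0.5)          # A from 2A^2 - kappa = 0
f = lambda V: V * (1.0 - V) * (V - alpha)

def quotient(x):
    V = 1.0 / (1.0 + math.exp(-A * x))  # logistic profile
    Vp = A * V * (1.0 - V)              # V'
    Vpp = A * (1.0 - 2.0 * V) * Vp      # V''
    return -(Vpp + f(V)) / Vp           # quotient with g == 1

gamma_star = (alpha - A * A) / A        # from A^2 + gamma A - alpha = 0
# the quotient is constant in x, equal to the critical speed
assert all(abs(quotient(x) - gamma_star) < 1e-9
           for x in [-5.0, -1.0, 0.0, 2.0, 7.0])
```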
Independently of the variational characterization of the wave speed, the existence of a solution
for \eqref{twode0} with appropriate asymptotic values is a straightforward
consequence of Proposition \ref{prop:auxode}.
The relation between the speed $\gamma$ of Proposition \ref{prop:auxode} and $c$ for
\eqref{twode0} guarantees the uniqueness of the speed for the hyperbolic
Allen--Cahn equation.
\begin{theorem}[existence of a traveling front]
\label{theoexists}
Under assumptions \eqref{H1} - \eqref{H2} there exists a unique value $c\in\mathbb{R}$, denoted by
$c_\ast=c_\ast(\tau)$,
such that the equation
\begin{equation}\label{twode}
(1-c^2\tau)U''+c\,g(U,\tau)U'+f(U)=0
\end{equation}
has a monotone increasing front solution $U=U(\xi)$ with $U(-\infty)=0$ and $U(+\infty)=1$. The value
$c_\ast=c_\ast(\tau)$ is related to $\gamma_\ast=\gamma_\ast(\tau)$
of Proposition \ref{prop:auxode} by the relation
\begin{equation}\label{castgast}
c_\ast=\frac{\gamma_\ast}{\sqrt{1+\tau\,\gamma_\ast^2}}.
\end{equation}
\end{theorem}
\begin{proof}
\smartqed
Thanks to the subcharacteristic condition \eqref{subchar}, we can
restrict our attention to
$c\in(-1/\sqrt{\tau},1/\sqrt{\tau})$.
By applying the change of variables
\begin{equation*}
\sqrt{1-c^2\tau}\frac{d}{d\xi}=\frac{d}{d\eta},
\end{equation*}
and setting $\gamma=\gamma(c)=c/\sqrt{1-c^2\tau}$,
equation \eqref{twode} transforms into \eqref{auxode}.
Then the profile existence and uniqueness statement follows since
$\gamma=\gamma(c)$ is increasing and $\gamma(\pm 1 /\sqrt{\tau})=\pm\infty$.
Relation \eqref{castgast} is obtained by inverting the function $\gamma = \gamma(c)$.
\qed \end{proof}
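The change of variables in the proof can be double-checked numerically: the maps $\gamma(c)=c/\sqrt{1-c^2\tau}$ and \eqref{castgast} are mutually inverse, and the resulting speed automatically satisfies the subcharacteristic condition \eqref{subchar}. A short sketch with arbitrary sample values:

```python
import math

def c_from_gamma(gamma, tau):
    # relation (castgast): c_* = gamma_* / sqrt(1 + tau gamma_*^2)
    return gamma / math.sqrt(1.0 + tau * gamma * gamma)

def gamma_from_c(c, tau):
    # change of variables in the proof: gamma = c / sqrt(1 - c^2 tau)
    return c / math.sqrt(1.0 - c * c * tau)

for gamma in (-3.0, -0.28, 0.5, 10.0):
    for tau in (0.0, 0.1, 1.0, 5.0):
        c = c_from_gamma(gamma, tau)
        assert c * c * tau < 1.0                            # subcharacteristic
        assert abs(gamma_from_c(c, tau) - gamma) < 1e-9     # maps are inverse
```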
\subsection{Exponential decay}\label{secexpdecay}
As a consequence of the analysis in Proposition \ref{prop:auxode} and Theorem \ref{theoexists}, the profile function decays to its asymptotic limits exponentially fast.
\begin{lemma}[exponential decay of the profile]
\label{lemexpdecay}
For each $\tau \geq 0$ the front solution and its derivatives satisfy
\begin{equation}
\label{expdecay}
|\partial_\xi^j(U(\xi) - U_\pm)| \leq C e^{-\eta|\xi|},
\end{equation}
for all $\xi \in \mathbb{R}$, $j = 0,1,2$, with uniform constants $C > 0$ and $\eta > 0$.
\end{lemma}
\begin{proof}
\smartqed
Suppose that $U = U(\xi)$ is the profile function of Theorem \ref{theoexists}, traveling with speed $c = c_*(\tau)$. As before, $\xi = x - ct$ and $\, ' = d/d\xi$. If we denote $V = U'$, then $(U,V) = (U,V)(\xi)$ is a heteroclinic connection between the rest points
\[
(U_+, V_+) = (1,0) \quad \text{and} \quad (U_-,V_-) = (0,0),
\]
\]
as $\xi \to \pm \infty$, of the first order system
\begin{equation}
\label{firstODE}
\begin{pmatrix}U \\ V
\end{pmatrix}' = \begin{pmatrix} V \\ - (1-c^2 \tau)^{-1} (f(U) +c g(U,\tau)V)
\end{pmatrix} =: \begin{pmatrix} \hat{\Phi} \\ \hat{\Psi}\end{pmatrix} (U,V).
\end{equation}
Linearizing around the asymptotic rest states we obtain
\[
\frac{D(\hat{\Phi}, \hat{\Psi})}{D(U,V)}(U_\pm, V_\pm) = \begin{pmatrix} 0 & & 1 \\ (1-c^2\tau)^{-1} |a_\pm| & & \, -(1-c^2\tau)^{-1}c b_\pm \end{pmatrix},
\]
where, in view of assumptions \eqref{H1} and \eqref{H2}, we have denoted $a_\pm = f'(U_\pm) < 0$ and $b_\pm = g(U_\pm,\tau) > 0$. Its eigenvalues are
\[
\mu_{1,2}^\pm = - \tfrac{1}{2} cb_\pm (1-c^2 \tau)^{-1} \pm \tfrac{1}{2} \sqrt{c^2 b_{\pm}^2(1-c^2
\tau)^{-2} + 4(1-c^2 \tau)^{-1}|a_\pm|},
\]
which are real and the asymptotic states are non-degenerate hyperbolic points. The positive eigenvalue at $(U_-,V_-) = (0,0)$ is
\[
\mu_2^- = - \tfrac{1}{2}cb_-(1-c^2 \tau)^{-1} + \tfrac{1}{2} \sqrt{c^2 b_{-}^2(1-c^2 \tau)^{-2} +
4(1-c^2 \tau)^{-1}|a_-|},
\]
and the orbit decays to $(U_-,V_-) = (0,0)$ with exponential rate $|(U,V)(\xi)| \leq C e^{\mu_2^- \xi}$ as
$\xi \to -\infty$ for some uniform $C > 0$. The negative eigenvalue at $(U_+,V_+) = (1,0)$ is
\[
\mu_1^+ = - \tfrac{1}{2}cb_+(1-c^2 \tau)^{-1} - \tfrac{1}{2} \sqrt{c^2 b_{+}^2(1-c^2 \tau)^{-2} +
4(1-c^2 \tau)^{-1}|a_+|},
\]
and the orbit decays as $|(U,V)(\xi) - (1,0)| \leq C e^{-|\mu_1^+|\xi}$, when $\xi \to +\infty$. Thus, if we define $\eta = \min \{\mu_2^-, |\mu_1^+|\} > 0$ we obtain the result. Notice that $\eta = \eta(\tau) > 0$ for each fixed $\tau \geq 0$ and that $V' = U''$ also decays exponentially fast.
\qed
\end{proof}
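The eigenvalue formulas in the proof are elementary to evaluate; the sketch below (with hypothetical sample values $a_\pm<0$, $b_\pm>0$ compatible with \eqref{H1}-\eqref{H2} and $c^2\tau<1$, not taken from the text) checks that both rest states are saddles and that the resulting decay rate $\eta=\min\{\mu_2^-,|\mu_1^+|\}$ is positive:

```python
import math

def rest_state_eigs(c, tau, a, b):
    """Eigenvalues mu_1 < 0 < mu_2 of the linearized profile ODE at a rest
    state, with a = f'(U_pm) < 0, b = g(U_pm, tau) > 0 and c^2 tau < 1."""
    s = 1.0 - c * c * tau                      # subcharacteristic factor
    disc = math.sqrt(c * c * b * b / (s * s) + 4.0 * abs(a) / s)
    return -0.5 * c * b / s - 0.5 * disc, -0.5 * c * b / s + 0.5 * disc

# hypothetical sample values compatible with (H1)-(H2)
c, tau = -0.27, 0.8
mu1m, mu2m = rest_state_eigs(c, tau, a=-0.3, b=1.1)    # at (0,0)
mu1p, mu2p = rest_state_eigs(c, tau, a=-0.7, b=1.3)    # at (1,0)
assert mu1m < 0 < mu2m and mu1p < 0 < mu2p             # both are saddles
eta = min(mu2m, abs(mu1p))                             # decay rate of the lemma
assert eta > 0
```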
\section{Perturbation equations and the stability problem}
\label{secperturb}
In this section we derive the equation for a perturbation of the traveling front, linearize it around the
wave, and set up the associated spectral
problem.
For fixed $\tau > 0$ let $c = c_*(\tau) \in (-1/\sqrt{\tau},1/\sqrt{\tau})$ be the unique wave speed of the
traveling front of Theorem \ref{theoexists}. We then recast equation \eqref{hypAC} in the moving coordinate
frame and, with a slight abuse of notation, make the transformation $x \to x-ct$ so that the model equation
\eqref{hypAC} now reads
\begin{equation}
\label{newhypAC}
\tau u_{tt} - 2c\tau u_{xt} + g(u,\tau) u_t = (1-c^2 \tau) u_{xx} + c g(u,\tau) u_x + f(u).
\end{equation}
From this point on and for the rest of the paper $x$ will denote the (Galilean) moving variable and
the front profile $U = U(x)$ is now a stationary solution to \eqref{newhypAC}, satisfying
\begin{equation}
\label{nprofileq}
(1-c^2\tau) U_{xx} + cg(U,\tau) U_x + f(U) = 0.
\end{equation}
As before, the asymptotic limits are $U_+ = U(+\infty) = 1$ and $U_- = U(-\infty) = 0$. In view of Lemma
\ref{lemexpdecay} the convergence of $U$ to its asymptotic limits is exponential,
\begin{equation}
\label{expdec}
|\partial_x^j(U - U_\pm) (x)| \leq C e^{- \eta |x|},
\end{equation}
as $x \to \pm \infty$ and for some $C, \eta > 0$.
\begin{remark}
\label{remUreg}
By regularity of the profile and its exponential decay, it is clear that $U_x \in H^1(\mathbb{R})$.
Apply a bootstrapping argument to verify that, in fact, $U_x \in H^3(\mathbb{R})$. Details are left to the reader.
\end{remark}
\subsection{Equations for the perturbation and the spectral problem}
Let us consider solutions to \eqref{newhypAC} of the form $u(x,t) + U(x)$, where now $u = u(x,t)$ stands for
a perturbation of the front. Upon substitution, we obtain the following nonlinear equation for the
perturbation,
\begin{equation}
\label{nlpert}
\begin{aligned}
\tau u_{tt} - 2c\tau u_{xt} &+ g(u+U,\tau) u_t = \\ &=(1-c^2 \tau) u_{xx} + (1-c^2 \tau) U_{xx} + c
g(u+U,\tau)(
u_x + U_x) + f(u+U).
\end{aligned}
\end{equation}
Expand the nonlinear terms in Taylor series around $U$ and use the profile equation \eqref{nprofileq} to
write equation \eqref{nlpert} as
\[
\begin{aligned}
\tau u_{tt} - 2c\tau u_{xt} + g(U,\tau) u_t &= (1-c^2 \tau) u_{xx} + c g(U,\tau) u_x + (c
g_u(U,\tau)U_x + f'(U))u +\\ & \, + O(|uu_t|) + O(|uu_x|) + O(|u|^2).
\end{aligned}
\]
Let us define
\[
a(x) := c\, g_u(U,\tau)\, U_x + f'(U), \qquad b(x) := g(U,\tau) \, > \, 0.
\]
Dropping the nonlinear terms we arrive at the following linearized equation for the perturbation
\begin{equation}
\label{linequ}
\tau u_{tt} - 2c\tau u_{xt} + b(x) u_t = (1-c^2 \tau) u_{xx} + c b(x) u_x + a(x) u.
\end{equation}
Let us specialize the linear problem to solutions of the form $u(x,t) = e^{\lambda t} v(x)$, where $\lambda
\in \mathbb{C}$ is the spectral parameter and $v$ belongs to an appropriate Banach space $X$. The result is the
following spectral equation for $v$,
\begin{equation}
\label{specprob}
\lambda^2 \tau v - 2c \lambda \tau v_x + \lambda b(x) v = (1-c^2 \tau) v_{xx} + cb(x) v_x + a(x) v,
\end{equation}
for some $v \in X$, $\lambda \in \mathbb{C}$.
In this analysis we choose the perturbation space to be $X = L^2(\mathbb{R};\mathbb{C})$, and the domain of solutions to
\eqref{specprob} to be $\mathcal{D} = H^2(\mathbb{R};\mathbb{C})$. In the sequel, $L^2$ and $H^m$, with $m > 0$, will denote the complex spaces $L^2(\mathbb{R};\mathbb{C})$ and $H^m(\mathbb{R};\mathbb{C})$, respectively, except where it is explicitly stated otherwise.
\begin{remark}
\label{rempencil}
Notice that the spectral equation \eqref{specprob} is quadratic in $\lambda$. Under the substitution $\lambda
= i \zeta$ equation \eqref{specprob} can be written in terms of a
\textit{quadratic operator pencil} $\tilde {\mathcal{A}}(\zeta)$ (cf. Markus \cite{Markus88}), given by
\[
\tilde {\mathcal{A}} (\zeta) = \tilde {\mathcal{A}}_0 + \zeta \tilde {\mathcal{A}}_1 + \zeta^2 \tilde {\mathcal{A}}_2,
\]
with
\[
\begin{aligned}
\tilde {\mathcal{A}}_0 &= (1-c^2 \tau) \frac{d^2}{dx^2} + cb(x) \frac{d}{dx} + a(x),\\
\tilde {\mathcal{A}}_1 &= 2ic\tau \frac{d}{dx} - ib(x),\\
\tilde {\mathcal{A}}_2 &= \tau.
\end{aligned}
\]
It is easy to see that \eqref{specprob} is equivalent to $\tilde {\mathcal{A}}(\zeta) v = 0$. The transformation
$v_1 = v$, $v_2 = \lambda v - cv_x$ defines an
appropriate Cartesian product of the base space which allows us to write equation \eqref{specprob} as a
genuine eigenvalue problem in the form
\begin{equation}
\label{defcLtau}
\lambda \begin{pmatrix}
v_1 \\ v_2
\end{pmatrix} = \begin{pmatrix}
c \partial_x & & 1 \\ \tau^{-1} (\partial_x^2 + a(x)) & & \, c \partial_x - \tau^{-1}b(x)
\end{pmatrix}\begin{pmatrix}v_1 \\ v_2 \end{pmatrix} =: {\mathcal{L}}^\tau \begin{pmatrix}v_1 \\
v_2\end{pmatrix}.
\end{equation}
The linear operator ${\mathcal{L}}^\tau$ (densely defined in $L^2 \times L^2$ with domain
${\mathcal{D}}({\mathcal{L}}^\tau) = H^2 \times H^1$ for $\tau > 0$) is often called the \textit{companion matrix} to the pencil $\tilde
{\mathcal{A}}$ (see \cite{BrJoK14,KoMi14,LaSu1} for further information).
\end{remark}
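The equivalence between the companion-matrix formulation \eqref{defcLtau} and the scalar spectral equation \eqref{specprob} can be verified at the symbol level: freezing the coefficients $a$, $b$ at constant (hypothetical) values and replacing $\partial_x\mapsto ik$, the eigenvalues of the resulting $2\times 2$ matrix must satisfy the scalar dispersion relation. A minimal consistency sketch:

```python
import cmath

def dispersion(lam, k, c, tau, a, b):
    """Scalar symbol of (specprob), frozen coefficients, d/dx -> ik."""
    return (tau * lam ** 2 - 2j * c * tau * k * lam + b * lam
            + (1.0 - c * c * tau) * k ** 2 - 1j * c * b * k - a)

def companion_eigs(k, c, tau, a, b):
    """Eigenvalues of the frozen-coefficient symbol of L^tau."""
    m11, m12 = 1j * c * k, 1.0
    m21, m22 = (-k ** 2 + a) / tau, 1j * c * k - b / tau
    tr, det = m11 + m22, m11 * m22 - m12 * m21
    d = cmath.sqrt(tr ** 2 - 4.0 * det)          # quadratic formula
    return (tr + d) / 2.0, (tr - d) / 2.0

# arbitrary sample values with a < 0, b > 0 and c^2 tau < 1
c, tau, a, b = 0.3, 0.5, -1.0, 1.2
for k in (0.0, 0.7, -2.0):
    for lam in companion_eigs(k, c, tau, a, b):
        # each eigenvalue of the companion symbol solves the scalar relation
        assert abs(dispersion(lam, k, c, tau, a, b)) < 1e-9
```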
\subsection{Reformulation as a first order system}
According to custom in the literature of stability of nonlinear waves \cite{AGJ90,KaPro13}, we now recast the
spectral problem \eqref{specprob} as a first order system in the frequency regime of the form
\begin{equation}
\label{Wsystem}
W_x = \mathbb{A}^\tau(x,\lambda) W,
\end{equation}
where $\lambda \in \mathbb{C}$ is a parameter and $\tau > 0$ is fixed. Indeed, making
\[
W= \begin{pmatrix}
v \\ v_x
\end{pmatrix},
\]
and noticing that because of the subcharacteristic condition (see Proposition \ref{prop:properties}
(ii)) there holds $1 - c^2 \tau > 0$, we obtain a first order ODE system of the form \eqref{Wsystem}
with coefficient matrix given by
\begin{equation}
\label{coeffA}
\mathbb{A}^\tau(x,\lambda) = (1 - c^2 \tau)^{-1}\begin{pmatrix}
0 & & 1-c^2\tau \\ \tau \lambda^2 + \lambda b(x) - a(x) & & \, -c(b(x) + 2
\tau \lambda)
\end{pmatrix}.
\end{equation}
Since $U(x) \to U_\pm$ as $x \to \pm \infty$, with $U_- = 0$, $U_+ = 1$, let us denote
\[
\begin{aligned}
a_\pm &= \lim_{x \to \pm \infty} a(x) = \lim_{x \to \pm \infty} \big( f'(U) + c\,g_u(U,\tau)U_x \big) =
f'(U_\pm)
< 0,\\
b_\pm &= \lim_{x \to \pm \infty} b(x) = \lim_{x \to \pm \infty} g(U,\tau) = g(U_\pm,\tau) > 0,
\end{aligned}
\]
because $U_x \to 0$, $f'(1)$, $f'(0) < 0$ and $g(U,\tau) > 0$, by hypotheses \eqref{H1} and \eqref{H2}. In
this fashion, we denote the asymptotic coefficient matrices as
\begin{equation}
\label{asympcoeff}
\begin{aligned}
\mathbb{A}^\tau_\pm (\lambda) &:= \lim_{x \to \pm \infty} \mathbb{A}^\tau(x,\lambda) \\&= (1 - c^2\tau)^{-1} \begin{pmatrix}
0 & & 1 - c^2 \tau \\
\tau \lambda^2 + \lambda b_\pm + |a_\pm| & & \, -c(b_\pm + 2 \tau \lambda)
\end{pmatrix},
\end{aligned}
\end{equation}
for each $\tau \geq 0$, $\lambda \in \mathbb{C}$.
It is convenient to define the spectra and
resolvent of the spectral problem \eqref{specprob} in terms of the first order systems \eqref{Wsystem}.
Consider the following family of linear,
closed, densely defined operators
\[
{\mathcal{T}}^{\tau}(\lambda) : \bar {\mathcal{D}} \to L^2 \times L^2,
\]
\[
{\mathcal{T}}^{\tau}(\lambda) := \partial_x - \mathbb{A}^{\tau}(x,\lambda),
\]
with domain $\bar {\mathcal{D}} = H^1 \times H^1$, indexed by $\tau \geq 0$ and parametrized by $\lambda \in \mathbb{C}$. With
a
slight abuse of notation we call $W \in H^1 \times H^1$ an \textit{eigenfunction} associated to the
eigenvalue
$\lambda \in \mathbb{C}$ provided $W$ is a bounded solution to the equation
\[
{\mathcal{T}}^{\tau}(\lambda) W = W_x - \mathbb{A}^{\tau}(x,\lambda) W = 0.
\]
\begin{definition}[resolvent and spectra]
\label{defsigmatwo}
For fixed $\tau \geq 0$ we define,
\[
\begin{aligned}
\rho &:= \{\lambda \in \mathbb{C} \, : \, {\mathcal{T}}^{\tau}(\lambda) \,\text{ is injective and onto, and }
{\mathcal{T}}^{\tau}(\lambda)^{-1} \, \text{is bounded} \, \},\\
\sigma_\mathrm{\tiny{pt}} &:= \{ \lambda \in \mathbb{C}\,: \; {\mathcal{T}}^{\tau}(\lambda) \,\text{ is Fredholm with index zero and has a} \\
& \qquad \qquad \qquad \text{non-trivial kernel} \},\\
\sigma_\mathrm{\tiny{ess}} &:= \{ \lambda \in \mathbb{C}\,: \; {\mathcal{T}}^{\tau}(\lambda) \,\text{ is either not Fredholm or has index
different } \\
& \qquad \qquad \qquad \text{from zero} \}.
\end{aligned}
\]
The spectrum $\sigma$ of problem \eqref{specprob} is defined as $\sigma = \sigma_\mathrm{\tiny{ess}} \cup
\sigma_\mathrm{\tiny{pt}}$. Since ${\mathcal{T}}^{\tau}(\lambda)$ is closed, we know that $\rho = \mathbb{C} \backslash \sigma$ (cf.
Kato \cite{Kat80}).
\end{definition}
\begin{remark}
This definition of spectrum is due to Weyl \cite{We10}, making $\sigma_\mathrm{\tiny{ess}}$ a large set but easy to compute,
whereas $\sigma_\mathrm{\tiny{pt}}$ is a discrete set of isolated eigenvalues with finite multiplicity (see Remark 2.2.4
in \cite{KaPro13}). We remind the reader that a closed operator $\mathcal{L}$ is said to be
Fredholm if its range $\mathcal{R(L)}$ is closed, and both its nullity, $\text{nul}\,\mathcal{L} = \dim \ker
\mathcal{L}$, and its deficiency, $\mathrm{def} \,\mathcal{L} = \mathrm{codim} \, \mathcal{R(L)}$, are
finite. In such a case the index of ${\mathcal{L}}$ is defined as
$\text{ind}\, \mathcal{L} = \text{nul}\, \mathcal{L} - \mathrm{def} \, \mathcal{L}$ (cf. \cite{Kat80}).
\end{remark}
For each $\tau \geq 0$ we can write the coefficients as
\[
\mathbb{A}^\tau(x,\lambda) = \mathbb{A}^\tau_0(x) + \lambda \mathbb{A}^\tau_1(x) + \lambda^2 \mathbb{A}^\tau_2(x),
\]
where
\[
\mathbb{A}^\tau_0(x) = (1-c^2\tau)^{-1}\begin{pmatrix}
0 & & 1 - c^2\tau \\ -a(x) & & -cb(x)
\end{pmatrix},
\]
\[
\mathbb{A}^\tau_1(x) = (1-c^2\tau)^{-1}\begin{pmatrix}
0 & & 0 \\ b(x) & & -2c\tau
\end{pmatrix},
\]
\[
\mathbb{A}^\tau_2(x) = (1-c^2\tau)^{-1}\begin{pmatrix}
0 & & 0 \\ \tau & & 0
\end{pmatrix}.
\]
Therefore, we may compute
\begin{equation}
\label{derivlamA}
\partial_\lambda \mathbb{A}^\tau(x,\lambda) = \mathbb{A}^\tau_1(x) + 2 \lambda \mathbb{A}^\tau_2(x).
\end{equation}
Furthermore, if we regard the coefficients \eqref{coeffA} as functions from $(\lambda,\tau)$ into $L^\infty$
then they are analytic in $\lambda$ (quadratic polynomial) and continuous in $\tau$.
We also define the algebraic and geometric multiplicities of the elements in the point spectrum as
follows.
\begin{definition}
\label{defmult}
Assume $\lambda \in \sigma_\mathrm{\tiny{pt}}$. Its geometric multiplicity (\textit{g.m.}) is the maximal number of linearly
independent elements in $\ker {\mathcal{T}}^{\tau}(\lambda)$. Suppose $\lambda \in \sigma_\mathrm{\tiny{pt}}$ has $g.m. = 1$, so that
$\ker {\mathcal{T}}^{\tau}(\lambda) =$ span $\{W_0\}$. We say $\lambda$ has algebraic multiplicity (\textit{a.m.})
equal to $m$ if we can solve
\[
{\mathcal{T}}^{\tau}(\lambda) W_j = \partial_\lambda \mathbb{A}^\tau(x,\lambda) W_{j-1},
\]
for each $j = 1, \ldots, m-1$, with $W_j \in H^1 \times H^1$, but there is no solution $W \in H^1 \times H^1$ to
\[
{\mathcal{T}}^{\tau}(\lambda) W = \partial_\lambda \mathbb{A}^\tau(x,\lambda) W_{m-1}.
\]
For an arbitrary eigenvalue $\lambda \in \sigma_\mathrm{\tiny{pt}}$ with $g.m.= l$, the algebraic multiplicity is defined as the
sum $\sum_{k=1}^l m_k$ of the multiplicities of a maximal set of linearly independent elements spanning $\ker
{\mathcal{T}}^{\tau}(\lambda) = $ span $\{W_1, \ldots, W_l\}$.
\end{definition}
\begin{remark}
Notice that, unlike the operator defined in \eqref{defcLtau}, the spectral problem formulated as a first order system is well defined also for $\tau = 0$, as
\begin{equation}
\label{coeffA0}
\mathbb{A}^0(x,\lambda) = \begin{pmatrix}
0 & & 1 \\ \lambda b(x) - a(x) & & \, -c b(x)
\end{pmatrix},
\end{equation}
where the coefficients $a(x) = f'(U) + g_u(U,0)U_x$, $b(x) = g(U,0)$, and the speed $c = c(0)$ are evaluated at
$\tau = 0$.
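Explicitly, writing out $W_x = \mathbb{A}^0(x,\lambda)W$ componentwise with $W = (v, v_x)^\top$ yields the scalar equation
\[
v_{xx} + c\,b(x)\, v_x + \big( a(x) - \lambda b(x) \big) v = 0,
\]
which is the spectral problem one obtains formally by setting $\tau = 0$ in \eqref{specprob}.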
\end{remark}
Finally we remark that, due to translation invariance, $\lambda = 0$ belongs to the point spectrum.
\begin{lemma}
\label{lemzeroeigenv}
For each $\tau \geq 0$, $0 \in \sigma_\mathrm{\tiny{pt}}$, with associated eigenfunction $\Phi = (U_x, U_{xx})^\top \in H^1
\times H^1$.
\end{lemma}
\begin{proof}
\smartqed
Follows by a direct calculation using the profile equation \eqref{nprofileq}.
Notice that $U_{x} \in H^2$ (see Remark \ref{remUreg}), so that $\Phi = (U_x, U_{xx})^\top \in \ker
{\mathcal{T}}^\tau(0) \subset H^1 \times H^1$ is indeed an eigenfunction.
\qed \end{proof}
\subsection{Spectral equivalence}
\label{secequiv}
The seasoned reader might rightfully ask about the relation between the spectrum of Definition \ref{defsigmatwo} and the standard spectrum of the family of operators ${\mathcal{L}}^\tau$ defined in \eqref{defcLtau} (see Remark \ref{rempencil}). Just like in the relaxed Allen-Cahn case (see Section 3 of \cite{LMPS16}), we shall prove that there is a one-to-one correspondence between the two sets, both in location and in multiplicities.
First observe that the family of operators ${\mathcal{L}}^\tau$ in \eqref{defcLtau} is defined for parameter values of $\tau > 0$ only, whereas the first order systems \eqref{Wsystem} are well defined for $\tau = 0$ as well. (This happens because the hyperbolic equation \eqref{hypAC} actually degenerates into a parabolic equation when $\tau \to 0^+$.) Thus, we shall prove the spectral equivalence between the two spectral problems assuming that $\tau > 0$. Notice that for each $\tau > 0$ the operator ${\mathcal{L}}^\tau : L^2 \times L^2 \to L^2 \times L^2$ is a closed, densely defined linear operator with domain ${\mathcal{D}}({\mathcal{L}}^\tau) = H^2 \times H^1$.
\begin{lemma}
\label{lemQinv}
For each $\lambda \in \mathbb{C}$ and $\tau > 0$, the mapping
\[
\begin{aligned}
{\mathcal{K}} : \ker ({\mathcal{L}}^\tau - \lambda) &\subset H^2 \times H^1 \, \longrightarrow \ker {\mathcal{T}}^\tau(\lambda) \subset H^1 \times H^1,\\
{\mathcal{K}} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} &:= \begin{pmatrix} v_1 \\ \partial_x v_1 \end{pmatrix}, \qquad \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} \in \ker ({\mathcal{L}}^\tau - \lambda),
\end{aligned}
\]
is one-to-one and onto.
\end{lemma}
\begin{proof}
\smartqed
First we check that $(v_1,v_2)^\top \in \ker ({\mathcal{L}}^\tau - \lambda)$ implies that ${\mathcal{K}} (v_1,v_2)^\top \in \ker {\mathcal{T}}^\tau(\lambda)$. In that case we have the system
\[
\begin{aligned}
c \partial_x v_1 + v_2 &= \lambda v_1 \\
\tau^{-1} (\partial_x^2 + a(x)) v_1 + (c \partial_x - \tau^{-1} b(x)) v_2 &= \lambda v_2.
\end{aligned}
\]
Labeling $v := v_1$ and substituting the first equation into the second we immediately arrive at equation \eqref{specprob}, with $(v, v_x) \in H^1 \times H^1$. This shows that ${\mathcal{K}} (v_1, v_2)^\top = (v, v_x)^\top \in \ker {\mathcal{T}}^\tau(\lambda)$.
Now suppose that $(v, v_x)^\top \in \ker {\mathcal{T}}^\tau(\lambda) \subset H^1 \times H^1$. Then clearly $v \in H^2$ and let us define $v_1 := v$, $v_2 := \lambda v - cv_x$. It is then easy to verify that
\[
c \partial_x v_1 + v_2 = \lambda v = \lambda v_1, \qquad \text{and}
\]
\[
\tau^{-1} (\partial_x^2 + a(x)) v_1 + (c\partial_x - \tau^{-1} b(x)) v_2 = \tau^{-1} (\lambda^2 \tau v - c \lambda \tau v_x) = \lambda (\lambda v - cv_x) = \lambda v_2.
\]
This yields $(v_1, v_2)^\top \in \ker ({\mathcal{L}}^\tau - \lambda)$. Thus, for each element $(v, v_x)^\top \in \ker {\mathcal{T}}^\tau(\lambda)$ there exists $(v_1, v_2)^\top \in \ker ({\mathcal{L}}^\tau - \lambda)$ such that $(v, v_x)^\top = {\mathcal{K}} (v_1, v_2)^\top$, and we verify that ${\mathcal{K}}$ is onto.
Finally, suppose that ${\mathcal{K}} (u_1, u_2)^\top = {\mathcal{K}}(v_1, v_2)^\top$ for $(u_1, u_2)$, $(v_1, v_2) \in \ker ({\mathcal{L}}^\tau - \lambda)$. This means that $(u_1, \partial_x u_1) = (v_1, \partial_x v_1)$ a.e. in $H^2 \times H^1$. But this implies that $v_2 = \lambda v_1 - c\partial_x v_1 = \lambda u_1 - c \partial_x u_1 = u_2$ a.e. in $H^1$ and we conclude that the mapping ${\mathcal{K}}$ is one-to-one.
\qed
\end{proof}
An immediate consequence of the one-to-one correspondence between the kernels of ${\mathcal{L}}^\tau - \lambda$ and ${\mathcal{T}}^\tau(\lambda)$ is that the Fredholm properties of both operators are the same (see, e.g., Sandstede \cite{San02}, section 3.3). Therefore, if we naturally adopt Weyl's definition of spectra and define
\[
\begin{aligned}
\sigma_\mathrm{\tiny{pt}}({\mathcal{L}}^\tau) &:= \{ \lambda \in \mathbb{C}\,: \; {\mathcal{L}}^\tau - \lambda \,\text{ is Fredholm with index zero and has a} \\
& \qquad \qquad \qquad \text{non-trivial kernel} \},\\
\sigma_\mathrm{\tiny{ess}}({\mathcal{L}}^\tau) &:= \{ \lambda \in \mathbb{C}\,: \; {\mathcal{L}}^\tau - \lambda \,\text{ is either not Fredholm or has index
different } \\
& \qquad \qquad \qquad \text{from zero} \},
\end{aligned}
\]
with $\rho({\mathcal{L}}^\tau) = \mathbb{C} \backslash (\sigma_\mathrm{\tiny{pt}}({\mathcal{L}}^\tau) \cup \sigma_\mathrm{\tiny{ess}}({\mathcal{L}}^\tau))$, then we obtain the following
\begin{corollary}
\label{corsamespec}
For each $\tau > 0$,
\[
\sigma_\mathrm{\tiny{pt}} = \sigma_\mathrm{\tiny{pt}}({\mathcal{L}}^\tau), \quad \sigma_\mathrm{\tiny{ess}} = \sigma_\mathrm{\tiny{ess}}({\mathcal{L}}^\tau), \quad \rho = \rho({\mathcal{L}}^\tau),
\]
where the sets on the left hand sides of the above equalities are, of course, the sets of Definition \ref{defsigmatwo}.
\end{corollary}
For $\lambda$ in the point spectrum, it is clear from Lemma \ref{lemQinv} that the dimensions of the finite-dimensional kernels are the same and, hence, the geometric multiplicity of $\lambda$ remains the same. Moreover, the mapping ${\mathcal{K}}$ can also be used to show that the Jordan block structures of ${\mathcal{L}}^\tau - \lambda$ and ${\mathcal{T}}^\tau(\lambda)$ coincide, that is, the algebraic multiplicity (the length of each maximal Jordan chain) is the same whether computed for one operator or for the other.
\begin{proposition}
\label{propJordan}
The mapping ${\mathcal{K}}$ induces a one-to-one correspondence between the Jordan chains of ${\mathcal{L}}^\tau - \lambda$ and those of ${\mathcal{T}}^\tau(\lambda)$.
\end{proposition}
\begin{proof}
\smartqed
Suppose $(\varphi, \psi)^\top \in \ker ({\mathcal{L}}^\tau - \lambda)$. This implies the following system of equations,
\[
\begin{aligned}
c \varphi_x + \psi &= \lambda \varphi,\\
\tau^{-1} (\partial_x^2 + a(x)) \varphi + (c \partial_x - \tau^{-1} b(x)) \psi &= \lambda \psi.
\end{aligned}
\]
Take the next element in a Jordan chain, say, $(v_1, v_2)^\top \in H^2 \times H^1$ such that
\[
({\mathcal{L}}^\tau - \lambda) \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} \varphi \\ \psi \end{pmatrix}.
\]
This yields
\[
\begin{aligned}
c \partial_x v_1 + v_2 - \lambda v_1 &= \varphi,\\
\tau^{-1} (\partial_x^2 + a(x)) v_1 + (c\partial_x - \tau^{-1} b(x)) v_2 - \lambda v_2 &= \psi.
\end{aligned}
\]
Notice that ${\mathcal{K}} (v_1, v_2)^\top = (v_1, \partial_x v_1)^\top$, ${\mathcal{K}}(\varphi, \psi)^\top = (\varphi, \varphi_x)^\top$. Now substitute $\psi = \lambda \varphi - c \varphi_x$ and $v_2 = \varphi + \lambda v_1 - c\partial_x v_1$ in order to obtain a scalar equation for $v_1$ and $\varphi$. The result is
\[
\tau^{-1} (\partial_x^2 + a(x)) v_1 + (c\partial_x - \tau^{-1} b(x) - \lambda) (\varphi + \lambda v_1 - c\partial_x v_1) = \lambda \varphi - c \varphi_x.
\]
Setting $v := v_1$, the last equation reads
\[
(1-c^2 \tau) v_{xx} + c(b(x) + 2 \tau \lambda) v_x - \big(\tau \lambda^2 + \lambda b(x) - a(x)\big)v = (b(x) + 2 \tau \lambda) \varphi - 2c\tau \varphi_x,
\]
which is equivalent to
\[
(\partial_x - \mathbb{A}^\tau(x,\lambda)) \begin{pmatrix} v \\ v_x \end{pmatrix} = \Big( \mathbb{A}_1^\tau(x) + 2 \lambda \mathbb{A}^\tau_2(x) \Big) \begin{pmatrix} \varphi \\ \varphi_x \end{pmatrix}.
\]
Generalizing this procedure, we observe that solutions to
\[
({\mathcal{L}}^\tau - \lambda) \begin{pmatrix} v_1^j \\ v_2^j \end{pmatrix} = \begin{pmatrix} v_1^{j-1} \\ v_2^{j-1} \end{pmatrix},
\]
for some $j \geq 1$, are in one-to-one correspondence to solutions to
\[
{\mathcal{T}}^\tau(\lambda) {\mathcal{K}} \begin{pmatrix} v_1^j \\ v_2^j \end{pmatrix} = (\partial_\lambda \mathbb{A}^\tau(x,\lambda)) {\mathcal{K}} \begin{pmatrix} v_1^{j-1} \\ v_2^{j-1} \end{pmatrix}.
\]
We conclude that a Jordan chain for the operator ${\mathcal{L}}^\tau - \lambda$ induces a Jordan chain for ${\mathcal{T}}^\tau(\lambda)$ with the same block structure and length.
\qed
\end{proof}
\begin{corollary}
\label{corsameptsp}
Assume $\tau > 0$. Then for any complex number $ \lambda \in \mathbb{C}$ there holds
\[
\lambda \in \sigma_\mathrm{\tiny{pt}} \quad \text{if and only if} \quad \lambda \in \sigma_\mathrm{\tiny{pt}}({\mathcal{L}}^\tau),
\]
with the same algebraic and geometric multiplicities (here $\sigma_\mathrm{\tiny{pt}}$ is the set in Definition \ref{defsigmatwo}).
\end{corollary}
\begin{remark}
The results of Corollary \ref{corsamespec} and Proposition \ref{propJordan} generalize the spectral equivalence proved in the relaxed Allen-Cahn case (see Section 3 in \cite{LMPS16}). It is remarkable, however, that for the Allen-Cahn model with relaxation the associated matrix ${\mathcal{L}}^\tau$ is a first order differential operator, whereas in the present (general) case the operator is of second order.
\end{remark}
\section{Asymptotic limits and the essential spectrum}
\label{secess}
In this section we analyze the asymptotic equations
\begin{equation}
\label{Wasymp}
W_x = \mathbb{A}^\tau_\pm(\lambda) W,
\end{equation}
where the asymptotic coefficient matrices are defined in \eqref{asympcoeff}. This analysis
will allow us, in turn, to
locate the essential spectrum of the problem.
\subsection{The asymptotic equations}
Consider the asymptotic coefficients \eqref{asympcoeff}, and denote the characteristic polynomial of
$\mathbb{A}^\tau_\pm(\lambda)$ as
\begin{equation}
\label{defcharpol}
p_\pm^\tau(\mu) = \det \big( \mathbb{A}_\pm^\tau (\lambda) - \mu I\big).
\end{equation}
Notice that $\mu$ is a root of $p_\pm^\tau(\mu) = 0$ if and only if $\kappa = (1-c^2 \tau) \mu$ is a root of
\[
\begin{aligned}
\det \big(\kappa I - (1-c^2 \tau) \mathbb{A}_\pm^\tau (\lambda)\big) &= \det \begin{pmatrix}
\kappa & & -(1-c^2\tau) \\ - \tau
\lambda^2 - \lambda b_\pm - |a_\pm| & & \, \kappa + c(b_\pm + 2 \tau \lambda)
\end{pmatrix}\\
&= \kappa^2 + \kappa c(b_\pm + 2 \tau \lambda) - (1 - c^2 \tau)(\tau \lambda^2 + \lambda b_\pm + |a_\pm|) \\
&= 0.
\end{aligned}
\]
Suppose that $\kappa = i \xi$, with $\xi \in \mathbb{R}$. Then the $\lambda$-roots of the equation
\begin{equation}
\label{disprel}
\xi^2 - ic\xi (b_\pm + 2 \tau \lambda) + (1-c^2\tau)(\tau \lambda^2 + b_\pm \lambda + |a_\pm|) = 0,
\end{equation}
define algebraic curves in the complex plane, bounding the essential spectrum. We denote these curves as
\begin{equation}
\label{algcurves}
\lambda = \lambda_{1,2}^\pm(\xi), \qquad \xi \in \mathbb{R}.
\end{equation}
Equation \eqref{disprel} is the \textit{dispersion relation} for the wave solutions to the constant
coefficient asymptotic equations.
\begin{remark}
It is clear that $\lambda = 0$ does not belong to any of the algebraic curves \eqref{algcurves}, inasmuch as
$\xi^2 - ic\xi b_\pm + (1-c^2\tau) |a_\pm|$ has strictly positive real part for all $\xi \in \mathbb{R}$.
\end{remark}
\subsubsection{The case $\tau = 0$}
We first analyze these curves in the case when $\tau = 0$. Then the dispersion relation \eqref{disprel} reads
\[
\xi^2 - ic\xi b_\pm + b_\pm \lambda + |a_\pm| = 0,
\]
and the single root is simply
\begin{equation}
\label{algcurvtau0}
\lambda_0^\pm(\xi) = -b_\pm^{-1}\big(|a_\pm| + \xi^2\big) + ic\xi,
\end{equation}
for all $\xi \in \mathbb{R}$. These curves lie in the stable half plane $\{\Re \lambda < 0\}$. In fact, defining
\begin{equation}
\chi_0^\pm = \tfrac{1}{2}b_\pm^{-1}|a_\pm|, \qquad
\chi_0 = \min \{\chi_0^+, \chi_0^-\} > 0,
\end{equation}
there holds
\[
\Re \lambda_0^\pm(\xi) < - \chi_0^\pm \leq -\chi_0 < 0, \qquad \text{for all} \; \xi \in \mathbb{R}.
\]
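This bound can be read off directly from \eqref{algcurvtau0}: the $\xi$-dependent contribution to the real part is nonpositive, so for every $\xi \in \mathbb{R}$,
\[
\Re \lambda_0^\pm(\xi) \leq \Re \lambda_0^\pm(0) = -b_\pm^{-1}|a_\pm| = -2\chi_0^\pm < -\chi_0^\pm.
\]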
In other words, there is a \textit{spectral gap}.
\subsubsection{The case $\tau > 0$}
We now examine the case when $\tau > 0$. Recall that $0 < \tau < 1/c^2$ thanks to the subcharacteristic
condition. Let us suppose that $\lambda(\xi)$ belongs to one of the curves \eqref{algcurves} and let
$\eta(\xi) = \Re \lambda(\xi)$, $\beta(\xi) = \Im \lambda(\xi)$. Then, take the real and imaginary parts of
the dispersion relation \eqref{disprel} to obtain
\begin{equation}
\label{realp}
\xi^2 + 2c\tau \xi \beta + (1-c^2\tau)\big( \tau(\eta^2 - \beta^2) + \eta b_\pm + |a_\pm|\big) = 0,
\end{equation}
and
\begin{equation}
\label{imagp}
-c\xi b_\pm - 2c\tau \xi \eta + (1-c^2\tau)\big( 2\tau \eta\beta + b_\pm \beta\big) = 0.
\end{equation}
\begin{remark}
Upon inspection of \eqref{realp} and \eqref{imagp} we notice that if we assume that $\eta = \Re \lambda = 0$
for some $\xi \in \mathbb{R}$ then $-c\xi b_\pm + (1-c^2\tau) \beta b_\pm = 0$. Since $b_\pm > 0$ this implies that
$\beta = c\xi/(1-c^2\tau)$. Substituting into \eqref{realp} we obtain $\xi^2 + \tau c^2 \xi^2 /(1-c^2\tau) +
|a_\pm| = 0$, which is a contradiction with $|a_\pm| > 0$, $\tau > 0$, $1-c^2\tau > 0$. This shows that the
algebraic curves never cross the imaginary axis; they remain in either the stable or the unstable complex half
plane.
\end{remark}
Notice that equation \eqref{imagp} factors as
\[
(1-c^2\tau)\big( \beta - \frac{c\xi}{1-c^2\tau}\big)(b_\pm + 2\tau \eta) = 0.
\]
Thus, either
\begin{align}
\eta(\xi) &= - \frac{b_\pm}{2\tau}, \label{casei}\\
\textrm{or, } \;\; \beta(\xi) &= \frac{c\xi}{1-c^2\tau}. \label{caseii}
\end{align}
First, let us consider case \eqref{casei}. Substituting into \eqref{realp} yields
\begin{equation}
\label{eqforbeta}
\tau \beta^2 - \big( \frac{2c\tau \xi}{1-c^2\tau}\big) \beta - |a_\pm| + \frac{b_\pm^2}{4\tau} -
\frac{\xi^2}{1-c^2\tau} = 0.
\end{equation}
This equation has real solutions $\beta$ provided that
\[
\Delta_1 := \frac{4c^2 \tau^2 \xi^2}{(1-c^2 \tau)^2} - 4\tau \Big( -|a_\pm| + \frac{b_\pm^2}{4\tau} -
\frac{\xi^2}{1-c^2\tau}\Big) \geq 0,
\]
or equivalently,
\begin{equation}
\label{star}
\xi^2 (1-c^2 \tau)^{-2} + |a_\pm| \geq \frac{b_\pm^2}{4 \tau}.
\end{equation}
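For the reader's convenience, the equivalence follows from the simplification of the coefficient of $\xi^2$ in $\Delta_1$:
\[
\frac{c^2\tau^2}{(1-c^2\tau)^2} + \frac{\tau}{1-c^2\tau} = \frac{\tau\big( c^2\tau + (1-c^2\tau) \big)}{(1-c^2\tau)^2} = \frac{\tau}{(1-c^2\tau)^2}.
\]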
On the other hand, if we consider case \eqref{caseii} then after substituting into \eqref{realp} we obtain
\begin{equation}
\label{eqforeta}
\tau \eta^2 + b_\pm \eta + |a_\pm| + \frac{\xi^2}{(1-c^2\tau)^2} = 0.
\end{equation}
The last equation has real solutions $\eta$ if and only if
\[
\Delta_2 := b_\pm^2 - 4\tau \Big( |a_\pm| + \frac{\xi^2}{(1-c^2\tau)^2} \Big) \geq 0,
\]
that is, when
\begin{equation}
\label{dstar}
\xi^2(1-c^2\tau)^{-2} + |a_\pm| \leq \frac{b_\pm^2}{4\tau}.
\end{equation}
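In fact, a direct computation shows that the two discriminants are exact negatives of each other:
\[
\Delta_1 = 4\tau \Big( \frac{\xi^2}{(1-c^2\tau)^2} + |a_\pm| \Big) - b_\pm^2 = -\Delta_2,
\]
so that conditions \eqref{star} and \eqref{dstar} are complementary, with equality holding simultaneously.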
Therefore, $\mathrm{sgn}\, \Delta_2 = - \mathrm{sgn}\, \Delta_1$. We consider two cases:\\
\noindent \textit{Case (I):} Suppose that for a certain parameter value $\tau > 0$ there holds
\begin{equation}
\label{taularge}
\frac{b_\pm^2}{4\tau} < |a_\pm|,
\end{equation}
which means that, for at least one of the asymptotic states, $\tau > 0$ is sufficiently large for
\eqref{taularge} to hold.
\begin{remark}
Observe that this case occurs in the example $g \equiv 1$, $f(u) = u(1-u)(u-1/2)$ if we
take $\tau = 1$, which yields $b_\pm = 1$ and $|a_\pm| = 1/2$.
\end{remark}
Hence, if \eqref{taularge} holds, then condition \eqref{dstar} is never satisfied and \eqref{star} always
holds. Therefore \eqref{eqforbeta} has only real solutions for $\beta$, inasmuch as $\Delta_1 > 0$ for
all $\xi \in \mathbb{R}$. This implies that the only algebraic curve solutions $\lambda = \lambda(\xi)$ to
\eqref{disprel} are
\begin{equation}
\label{etabeta}
\begin{aligned}
\Re \lambda (\xi) = \eta (\xi) &= - \frac{b_\pm}{2\tau}, \\
\Im \lambda (\xi) = \beta(\xi) &= \frac{c\xi}{1-c^2\tau} \pm \frac{1}{2\tau} \sqrt{\Delta_1(\xi)},
\end{aligned}
\end{equation}
for all $\xi \in \mathbb{R}$. Notice that there exists $\chi_1^\pm(\tau) := b_\pm/(4\tau) > 0$ such that there is a
spectral gap:
\[
\Re \lambda(\xi) < - \chi_1^\pm < 0, \qquad \xi \in \mathbb{R}.
\]
\noindent \textit{Case (II):} Now suppose that for certain parameter values
\begin{equation}
\label{tausmall}
\frac{b_\pm^2}{4\tau} \geq |a_\pm|.
\end{equation}
\begin{remark}
Notably, this case occurs for systems of Cattaneo-Maxwell type with $f(u) = u(1-u)(u-\alpha)$, $g(u,\tau) = 1
- \tau f'(u)$, $\alpha \in (0,1)$. Here $g(u,\tau) > 0$ provided that
\[
0 < \tau < \frac{3}{1-\alpha + \alpha^2},
\]
as the reader may easily verify. (This guarantees that hypothesis \eqref{H2} holds.) Since $g(0,\tau) = b_- = 1 +
\tau \alpha > 0$ and $g(1,\tau) = b_+ = 1+\tau(1-\alpha) > 0$, there holds
\begin{align*}
\frac{b_-^2}{4\tau} &= \frac{(1+\alpha\tau)^2}{4\tau} \geq \alpha = |a_-|,\\
\frac{b_+^2}{4\tau} &= \frac{(1+(1-\alpha)\tau)^2}{4\tau} \geq 1-\alpha = |a_+|,
\end{align*}
because $(1+\alpha\tau)^2 - 4\alpha\tau = (1-\alpha\tau)^2 \geq 0$ and $(1+(1-\alpha)\tau)^2 - 4(1-\alpha)\tau = (1-(1-\alpha)\tau)^2 \geq 0$. This verifies condition \eqref{tausmall} at both end states.
\end{remark}
Assuming \eqref{tausmall}, let $\xi_0^\pm \geq 0$ be the nonnegative solution to
\[
(\xi_0^\pm)^2 = (1-c^2\tau)^2\big( \frac{b_\pm^2}{4\tau} - |a_\pm|\big).
\]
Hence, for every $\xi \in (-\xi_0^\pm,\xi_0^\pm)$ we have that
\[
\xi^2 < (1-c^2\tau)^2\big( \frac{b_\pm^2}{4\tau} - |a_\pm|\big),
\]
condition \eqref{dstar} is satisfied, and consequently, $\Delta_2(\xi) > 0$. In that range for $\xi$ the
solutions for $\beta$ and $\eta$ are thus given by
\[
\beta(\xi) = \frac{c\xi}{1-c^2\tau}, \qquad \xi \in (-\xi_0^\pm,\xi_0^\pm),
\]
and by
\begin{equation}
\label{etagood}
\eta(\xi) = \frac{1}{2\tau} \big( {-b_\pm} \pm \sqrt{\Delta_2(\xi)}\big), \qquad \xi \in (-\xi_0^\pm,\xi_0^\pm),
\end{equation}
respectively. Observe, however, that $\Delta_1(\xi), \Delta_2(\xi) \to 0$ as $|\xi| \uparrow \xi_0^\pm$; that
$\beta(\xi) \to \pm c\xi_0^\pm/(1-c^2\tau)$ as $\xi \to \pm \xi_0^\pm$, $|\xi| < \xi_0^\pm$; and that $\eta(\xi)
\to -b_\pm/2\tau$ as $|\xi| \uparrow \xi_0^\pm$. This behavior guarantees the continuity of the algebraic
curves at $|\xi| = \xi_0^\pm$, because the roots of equation \eqref{eqforbeta} at $|\xi|=\xi_0^\pm$ are
\[
\beta(\pm\xi_0^\pm) = \frac{\pm c\xi_0^\pm}{1-c^2\tau}
\]
(as $\Delta_1(\xi_0^\pm) = 0$), while $\eta$ is constant, given by $\eta = -b_\pm/2\tau$. Therefore, for values
$|\xi| \geq \xi_0^\pm$, $\Delta_1$ and $\Delta_2$ switch signs, $\Delta_1$ becomes nonnegative, and the solutions
for $\eta$ and $\beta$ are given by formulas \eqref{etabeta}.
Closer inspection of \eqref{etagood} reveals that
\[
\eta(\xi) = - \frac{b_\pm}{2\tau} \pm \sqrt{\frac{b_\pm^2}{4\tau^2} - \frac{1}{\tau}\Big( |a_\pm| +
\frac{\xi^2}{(1-c^2\tau)^2}\Big)} \; \leq \, - \frac{b_\pm}{2\tau} + \sqrt{\frac{b_\pm^2}{4\tau^2} -
\frac{|a_\pm|}{\tau}} \; < 0,
\]
for all $|\xi| \leq \xi_0^\pm$. Therefore, in case (II) there exists
\[
\chi_2^\pm(\tau) = \frac{b_\pm}{4\tau} - \frac{1}{2} \sqrt{\frac{b_\pm^2}{4\tau^2} - \frac{|a_\pm|}{\tau}}
>0,
\]
such that
\[
\Re \lambda(\xi) < - \chi_2^\pm < 0, \qquad |\xi| \leq \xi_0^\pm,
\]
and there is also a spectral gap.
Under these considerations we now define, for each fixed $\tau \geq 0$,
\begin{equation}
\label{defspectralgap}
0 < \chi_0^\pm (\tau) := \begin{cases}
\tfrac{1}{2}b_\pm^{-1}|a_\pm|, & \text{if } \; \tau = 0,\\
\displaystyle{\tfrac{1}{2}\Big( \frac{b_\pm}{2\tau} - \sqrt{\frac{b_\pm^2}{4\tau^2} -
\frac{|a_\pm|}{\tau}} \, \Big)}, & \text{if } \; b_\pm^2 \geq 4\tau |a_\pm|, \, \tau > 0,\\
\displaystyle{\frac{b_\pm}{4\tau}}, & \text{otherwise.}
\end{cases}
\end{equation}
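Notice that the three expressions in \eqref{defspectralgap} are mutually consistent: at the borderline $b_\pm^2 = 4\tau|a_\pm|$ the square root vanishes and the second expression reduces to
\[
\tfrac{1}{2}\, \frac{b_\pm}{2\tau} = \frac{b_\pm}{4\tau},
\]
matching the third; and, as $\tau \to 0^+$, rationalizing the second expression shows that it converges to $\tfrac{1}{2} b_\pm^{-1}|a_\pm|$, matching the first.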
Thus we have proved the following
\begin{lemma}[spectral gap]
\label{lemspectralgap}
For each $\tau \geq 0$, there exists a uniform
\begin{equation}
\label{defchi0}
\chi_0(\tau) = \min \{\chi_0^+(\tau), \chi_0^-(\tau)\} > 0,
\end{equation}
(where $\chi_0^\pm (\tau)$ are defined in \eqref{defspectralgap}) such that the algebraic curves $\lambda =
\lambda_{1,2}^\pm(\xi)$, $\xi \in \mathbb{R}$, solutions to the dispersion relations \eqref{disprel}, satisfy
\begin{equation}
\label{spectralgapeq}
\mathrm{Re}\, \lambda_{1,2}^\pm(\xi) < - \chi_0(\tau) < 0, \qquad \xi \in \mathbb{R}.
\end{equation}
\end{lemma}
\begin{remark}
The significance of Lemma \ref{lemspectralgap} is that there is no accumulation of essential spectrum at the
eigenvalue $\lambda = 0$, which is an isolated eigenvalue with finite multiplicity (see Lemma \ref{lemalgm}
below).
Notice that for each
finite $\tau \geq 0$, the bound $\chi_0(\tau)$ is positive. Accumulation of the essential
spectrum could occur in the limit $\tau \to +\infty$ (in which case it may happen that $\chi_0(\tau) \to 0$), but that
scenario is precluded by our hypothesis \eqref{H2}, which imposes the upper bound $\tau < \tau_m < +\infty$. In the case
of the relaxation model with Cattaneo-Maxwell transfer law (see equation \eqref{ACrelax}), the parameter
values are bounded by a characteristic relaxation time associated to the reaction, $\tau_m = 1/\max_{u \in
[0,1]} |f'(u)|$.
\end{remark}
\subsection{Hyperbolicity and consistent splitting}
For a given $\tau \geq 0$, we define the following open, connected region of the complex plane,
\begin{equation}
\label{defOmega}
\Omega := \{\lambda \in \mathbb{C} \, : \, \Re \lambda > - \chi_0(\tau)\}.
\end{equation}
It properly contains the unstable complex half plane $\mathbb{C}_+ = \{ \Re \lambda > 0\}$. This is called the
region of consistent splitting \cite{San02}. Denote by
$S^\tau_\pm(\lambda)$ and $U^\tau_\pm(\lambda)$ the stable and unstable eigenspaces of
$\mathbb{A}^\tau_\pm(\lambda)$, respectively.
\begin{lemma}\label{lemconsplit}
Given $\tau \geq 0$, for all $\lambda \in \Omega$ the coefficient matrices $\mathbb{A}^\tau_\pm(\lambda)$ have no
center eigenspace and, moreover,
\[
\dim S^\tau_\pm(\lambda) = \dim U^\tau_\pm(\lambda) = 1.
\]
\end{lemma}
\begin{proof}
\smartqed
Take $\lambda \in \Omega$ and suppose $\kappa = i \xi$, with $\xi \in \mathbb{R}$, is an eigenvalue of
$\mathbb{A}_\pm^\tau(\lambda)$. Then $\lambda$ belongs to one of the algebraic curves \eqref{algcurves}. But
\eqref{spectralgapeq} yields a contradiction with $\lambda \in \Omega$. Therefore, the matrices
$\mathbb{A}_\pm^\tau(\lambda)$ have no center eigenspace.
Since $\Omega$ is a connected region of the complex plane, it suffices to compute the dimensions of
$S_\pm^\tau(\lambda)$ and $U_\pm^\tau(\lambda)$ when $\lambda = \eta \in \mathbb{R}_+$ is sufficiently large. A number $\mu$ is
a root of $p_\pm^\tau(\mu) = \det (\mathbb{A}_\pm^\tau(\lambda) - \mu I) = 0$ if and only if $\kappa = (1-c^2 \tau)
\mu$ is a solution to
\begin{equation}
\label{eqkappa}
\kappa^2 + \kappa c(b_\pm + 2 \tau \lambda) - (1 - c^2 \tau)(\tau \lambda^2 + \lambda b_\pm + |a_\pm|) = 0.
\end{equation}
Assuming $\lambda = \eta \in \mathbb{R}_+$, the roots are
\[
\kappa = - \frac{c}{2}(b_\pm + 2 \tau \eta) \pm \frac{1}{2} \sqrt{c^2 (b_\pm + 2\tau \eta)^2 +
4(1-c^2\tau)(\tau \eta^2 + \eta b_\pm + |a_\pm|)}.
\]
Since the product of these roots equals $-(1-c^2\tau)(\tau \eta^2 + \eta b_\pm + |a_\pm|) < 0$, for each $\eta > 0$ one of the roots is positive and the other is negative. This proves the lemma.
\qed \end{proof}
The most important consequence of the last lemma is the following
\begin{corollary}[stability of the essential spectrum]\label{corstabess}
For each $\tau \geq 0$, the essential spectrum is contained in the stable half-plane. More precisely,
\[
\sigma_\mathrm{\tiny{ess}} \subset \{\lambda \in \mathbb{C} \, : \, \mathrm{Re} \, \lambda \leq - \chi_0(\tau) < 0\}.
\]
\end{corollary}
\begin{proof}
\smartqed
The proof follows standard arguments \cite{KaPro13}. Fix $\lambda \in \Omega$. Since $\mathbb{A}_\pm^\tau(\lambda)$
are hyperbolic, by exponential dichotomies
theory (cf. Coppel \cite{Cop78}, Sandstede
\cite{San02}) the asymptotic systems $W_x = \mathbb{A}_\pm^\tau(\lambda)W$ have exponential dichotomies in $x \in
\mathbb{R}_+ = (0,+\infty)$ and in $x \in \mathbb{R}_- = (-\infty,0)$, respectively,
with Morse indices
\[
\begin{aligned}
i_+(\lambda) &= \dim U_+^\tau(\lambda) = 1, \\
i_-(\lambda) &= \dim U_-^\tau(\lambda) = 1.
\end{aligned}
\]
This implies (cf. Palmer \cite{Pal1,Pal2}, Sandstede \cite{San02}), that the variable coefficient operators
${\mathcal{T}}^\tau(\lambda)$ are Fredholm as well, with index
\[
\text{ind}\, {\mathcal{T}}^\tau(\lambda) = i_+(\lambda) - i_-(\lambda) =
0,
\]
showing that $\Omega \subset \mathbb{C}\backslash \sigma_\mathrm{\tiny{ess}}$, or equivalently, that $\sigma_\mathrm{\tiny{ess}} \subset \mathbb{C}\backslash
\Omega = \{\Re
\lambda \leq - \chi_0(\tau)\}$, as claimed.
\qed \end{proof}
\begin{corollary}\label{corsplit}
For every $\lambda \in \Omega$, the eigenvalues of the asymptotic coefficients \eqref{asympcoeff} are given
by
\begin{equation}
\label{evaluesmu}
\mu^\pm_{1,2}(\lambda) = - \frac{c}{2(1-c^2 \tau)}(b_\pm + 2 \tau \lambda) + \omega^\pm_{1,2}(\lambda),
\end{equation}
where
\[
\omega_1^\pm (\lambda) := - \frac{1}{2} \Theta_{\pm}(\lambda)^{1/2}, \qquad \omega_2^\pm (\lambda) :=
\frac{1}{2} \Theta_{\pm}(\lambda)^{1/2},
\]
and,
\[
\Theta_{\pm}(\lambda) = (1-c^2\tau)^{-2} \Big( c^2 b_\pm^2 + 4(\tau \lambda^2 + b_\pm \lambda + (1-c^2
\tau)|a_\pm|) \Big).
\]
Moreover, for every $\lambda \in \Omega$,
\[
\mathrm{Re} \, \mu_1^\pm(\lambda) < 0 < \mathrm{Re} \, \mu_2^\pm(\lambda),
\]
that is, $\mu_1^+(\lambda)$ is the decaying mode at $+\infty$, and $\mu_2^-(\lambda)$ is the decaying mode at
$-\infty$.
\end{corollary}
\begin{proof}
\smartqed
Since $p_\pm^\tau(\mu) = 0$ if and only if $\kappa = (1-c^2 \tau)\mu$ is a root of the characteristic
equation \eqref{eqkappa}, then it is clear that for each $\lambda \in\Omega$ the eigenvalues of
$\mathbb{A}_\pm^\tau(\lambda)$ are given by \eqref{evaluesmu}. A little algebra yields the expression for the
discriminant $\Theta_\pm(\lambda)$, an analytic function of $\lambda$. From the proof of Lemma
\ref{lemconsplit}, we know that, for $\lambda \in \mathbb{R}$ and $\lambda \gg 1$, the only eigenvalue with
negative real part is $\mu_1^\pm(\lambda)$. Since $\Omega$ is connected and the eigenvalues are continuous
(analytic) in $\lambda$, we conclude that $\Re \mu_1^\pm(\lambda) < 0$ for all $\lambda \in \Omega$
(otherwise, the hyperbolicity, and consequently the consistent splitting, would be violated). The same argument
applies to $\mu_2^\pm(\lambda)$ and the conclusion follows.
\qed
\end{proof}
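\begin{remark}
For the reader's convenience, we record the cancellation behind the formula for $\Theta_\pm$. Expanding the discriminant of the quadratic equation \eqref{eqkappa} gives
\[
c^2(b_\pm + 2\tau\lambda)^2 + 4(1-c^2\tau)\big( \tau\lambda^2 + b_\pm \lambda + |a_\pm| \big) = c^2 b_\pm^2 + 4\big( \tau\lambda^2 + b_\pm\lambda + (1-c^2\tau)|a_\pm| \big),
\]
because the cross terms $4c^2\tau\lambda b_\pm + 4c^2\tau^2\lambda^2$ produced by the square cancel exactly with $-4c^2\tau\lambda b_\pm - 4c^2\tau^2\lambda^2$ coming from the second summand. Dividing by $(1-c^2\tau)^2$ (recall $\kappa = (1-c^2\tau)\mu$) yields the expression for $\Theta_\pm(\lambda)$.
\end{remark}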
\section{Point spectral stability}
\label{secptsp}
This section is devoted to showing that the point spectrum is stable. The proof presented here makes use of
energy estimates and contrasts with the one reported in \cite{LMPS16} for the particular case of the
Allen-Cahn model with relaxation. The latter was based on a perturbation argument in the vicinity of
$\tau = 0$ and a further extension to the whole parameter domain. In contrast, here we perform
energy estimates in the frequency regime, which require applying a transformation
to the $H^2$-eigenfunction. Thanks to its decaying behaviour, the transformed eigenfunction also belongs to
$H^2$ and we are able to perform the energy estimates on the new spectral equation. We close the section by
showing that the eigenvalue $\lambda = 0$ is simple and by stating the main result of the paper.
\subsection{Decay of solutions to spectral equations}
\begin{lemma}
\label{goodw}
Suppose $v \in H^2$ is a solution to the spectral equation \eqref{specprob} for some $\lambda \in
\sigma_\mathrm{\tiny{pt}}$ with $\mathrm{Re}\, \lambda \geq 0$ and $\lambda \in \Omega$. If we define
\begin{equation}
\label{transfw}
w(x) = \exp \left( \frac{c}{2(1-c^2 \tau)} \int_{x_0}^x b(s) \, ds \right) v(x), \qquad x \in \mathbb{R},
\end{equation}
then $w \in H^2$. Here $x_0 \in \mathbb{R}$ is fixed but arbitrary.
\end{lemma}
\begin{proof}
\smartqed
Since $\lambda \in \sigma_\mathrm{\tiny{pt}}$ there exists $W = (v,v_x)^\top \in H^1 \times H^1$ such that ${\mathcal{T}}^\tau(\lambda) W
= 0$. This implies, in turn, that $v \in H^2$ is a solution to the spectral equation \eqref{specprob}. To
analyze the decaying properties of $v$ (equivalently, of $W$) we invoke the Gap Lemma \cite{GZ98,KS98}, which
relates the decaying properties of the solutions to the variable coefficient system \eqref{Wsystem} to those
of the solutions of the constant coefficient systems \eqref{Wasymp}, provided that $\mathbb{A}^\tau(x,\lambda)$
approaches $\mathbb{A}_\pm^\tau(\lambda)$ exponentially fast as $x \to \pm \infty$. For the precise statement of the
Gap Lemma we refer the reader to Lemma A.11 in \cite{Zum04}, or Appendix C in \cite{MZ02}.
Suppose that $c > 0$. Since $b > 0$, it is clear that if $x < x_0$ then $|w| \leq |v|$, so that $w$ decays like
$v$ as $x \to - \infty$. Thus, we need to make precise the decaying behaviour of $v$ as $x \to +\infty$. By
exponential decay of the profile \eqref{expdec}, it is clear that
\[
|\mathbb{A}^\tau(x,\lambda) - \mathbb{A}_\pm^\tau(\lambda)| \leq C e^{-\nu |x|},
\]
as $x \to \pm \infty$, for some $C, \nu > 0$, uniformly in $\lambda$. Then, applying the Gap Lemma and
Corollary \ref{corsplit}, the decaying solution $W$ at $+\infty$ to the variable coefficient equation behaves
as
\[
W(x,\lambda) = e^{\mu_1^+(\lambda) x} \Big( V_1^+(\lambda) + O(e^{-\nu|x|} |V_1^+(\lambda)|) \Big), \quad x > 0,
\]
where $V_1^+(\lambda)$ is the eigenvector of $\mathbb{A}_+^\tau(\lambda)$ associated to the eigenmode
$\mu_1^+(\lambda)$. This implies that $v$ and $v_x$ decay, at most, as
\[
|v|, |v_x| \leq C e^{\Re \mu_1^+(\lambda) x},
\]
as $x \to +\infty$. We then readily see, from Corollary \ref{corsplit}, that
\[
\begin{aligned}
|w| &\leq C \exp \Big( \frac{c}{2(1-c^2 \tau)} \int_{x_0}^x |b(s) - b_+| \, ds \Big) \times \\ & \qquad
\times \exp \Big( \Big( - \frac{c \tau \Re \lambda}{2(1-c^2 \tau)} - \frac{1}{2 \sqrt{2}} \sqrt{\Re
\Theta_+(\lambda) + |\Theta_+(\lambda)|}\Big) x \Big) \\
&\leq C \exp \Big( - \frac{c \tau (\Re \lambda)x}{2(1-c^2 \tau)}\Big) \exp \Big( - \frac{x}{2 \sqrt{2}}
\sqrt{\Re \Theta_+(\lambda) +
|\Theta_+(\lambda)|} \Big) \, \to 0,
\end{aligned}
\]
as $x \to +\infty$, thanks to exponential decay of the profile, which yields
\[
\exp \Big( \frac{c}{2(1-c^2 \tau)} \int_{x_0}^x |b(s) - b_+| \, ds \Big) \leq \exp \Big( C \big( e^{-\nu x_0} - e^{-\nu x} \big) \Big) \leq C.
\]
This shows that $w$ decays exponentially fast as $x \to +\infty$. Since $v_x$ decays at the same rate as $v$,
it is easy to verify that $w_x$ also decays exponentially fast at $+\infty$. We conclude that $w \in H^1$.
Upon differentiation one can prove that, in fact, $w \in H^2$, as $w_{xx}$ decays exponentially fast as well
at $+\infty$. Details are left to the dedicated reader.
The case $c < 0$ can be treated similarly, inasmuch as the decay at $-\infty$ of the eigenfunction $W =
(v,v_x)^\top$ is determined by the eigenmode $\mu_2^-(\lambda)$; an analogous argument applies. This
concludes the proof of the lemma.
\qed \end{proof}
\subsection{Energy estimates}
Suppose that $\lambda \in \sigma_\mathrm{\tiny{pt}}$, with $\Re \lambda \geq 0$ (and consequently, $\lambda \in \Omega$). Then
there exists $W = (v, v_x)^\top \in H^1 \times H^1$ such that
${\mathcal{T}}^\tau (\lambda) W = 0$. This is tantamount to having an $H^2$ solution $v$ to the spectral equation
\eqref{specprob}. Consider the transformation
\[
v(x) = w(x) e^{\theta(x)},
\]
where the function $\theta = \theta(x)$ is to be determined. Upon substitution into
\eqref{specprob} we obtain
\[
\begin{aligned}
\lambda^2 \tau w - 2c\lambda \tau (w_x + \theta_x w) + \lambda b(x) w &= (1 - c^2 \tau) w_{xx} + \big(
2(1-c^2 \tau) \theta_x + c b(x) \big) w_x + \\
& \; + \big( (1-c^2 \tau) (\theta_x^2 + \theta_{xx}) + cb(x) \theta_x + a(x) \big) w.
\end{aligned}
\]
Choose $\theta$ such that
\[
\theta_x = - \frac{c}{2(1-c^2\tau)} b(x).
\]
This yields
\begin{equation}
\label{eqsix}
\lambda^2 \tau w - 2c \lambda \tau w_x + \frac{\lambda b(x)}{1-c^2 \tau} w = (1 - c^2 \tau) w_{xx} + H(x) w,
\end{equation}
where
\[
H(x) := a(x) - \frac{c^2 b(x)^2}{4(1-c^2 \tau)} - \tfrac{1}{2} c b'(x).
\]
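For the reader's convenience, the cancellations leading to \eqref{eqsix} can be checked term by term: with the above choice of $\theta$,
\[
\begin{aligned}
2(1-c^2\tau)\,\theta_x + c\,b(x) &= -c\,b(x) + c\,b(x) = 0, \\
\lambda\, b(x) - 2c\lambda\tau\,\theta_x &= \lambda\, b(x) + \frac{c^2 \tau \lambda\, b(x)}{1-c^2\tau} = \frac{\lambda\, b(x)}{1-c^2\tau},\\
(1-c^2\tau)(\theta_x^2 + \theta_{xx}) + c\,b(x)\,\theta_x + a(x) &= a(x) - \frac{c^2 b(x)^2}{4(1-c^2\tau)} - \tfrac{1}{2} c\, b'(x) = H(x).
\end{aligned}
\]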
If we apply the same procedure to the eigenfunction $U_x \in H^2$ associated to the eigenvalue $\lambda = 0
\in \sigma_\mathrm{\tiny{pt}}$, denoting $U_x = \psi e^\theta$ we arrive at
\begin{equation}
\label{eqsixpsi}
0 = (1 - c^2 \tau) \psi_{xx} + H(x) \psi.
\end{equation}
By monotonicity of the profile, $U_x > 0$, we know that $\psi > 0$ and we can solve for $H$ in
\eqref{eqsixpsi}, yielding
\[
H(x) = - (1-c^2 \tau) \frac{\psi_{xx}}{\psi}.
\]
Substituting back into \eqref{eqsix} we obtain
\begin{equation}
\label{eqsix2}
\lambda^2 \tau w - 2c \lambda \tau w_x + \frac{\lambda b(x)}{1-c^2 \tau} w = (1 - c^2 \tau) \Big( w_{xx} -
\frac{\psi_{xx}}{\psi} w \Big).
\end{equation}
Notice that, thanks to Lemma \ref{goodw}, this is a spectral equation for $w \in H^2$. We
perform standard energy estimates on equation \eqref{eqsix2}. Multiply by $\overline{w}$ and integrate by
parts in $\mathbb{R}$. The result is,
\[
\begin{aligned}
\lambda^2 \tau \| w \|_{L^2}^2 - 2c \lambda \tau \int_{\mathbb{R}} \overline{w} w_x \, dx &+ \frac{\lambda}{1-c^2
\tau} \int_{\mathbb{R}} b(x) |w|^2 \, dx = \\ &= (1-c^2 \tau) \left( - \int_{\mathbb{R}} |w_x|^2 \, dx + \int_{\mathbb{R}} \psi_x
\partial_x \Big( \frac{|w|^2}{\psi} \Big) \, dx \right)
\end{aligned}
\]
Using the identity
\begin{equation*}
\psi^{2}\left|\left(\frac{w}{\psi} \right)_x \right|^{2} = - \left( \psi_x \left(\frac{|w|^{2}}{\psi}
\right)_x-|w_x|^{2} \right),
\end{equation*}
and substituting, we obtain the estimate
\begin{equation}
\label{basicee}
\begin{aligned}
\lambda^2 \tau \| w \|_{L^2}^2 - 2c \lambda \tau \int_{\mathbb{R}} \overline{w} w_x \, dx &+ \frac{\lambda}{1-c^2
\tau} \int_{\mathbb{R}} b(x) |w|^2 \, dx = \\ &= - (1-c^2 \tau) \int_{\mathbb{R}} \psi^2 \left| \partial_x \Big(
\frac{w}{\psi}\Big) \right|^2 \, dx.
\end{aligned}
\end{equation}
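The identity used above can be verified directly: writing $w = \rho\,\psi$ with $\rho := w/\psi$, a direct computation gives
\[
|w_x|^2 = \psi^2 |\rho_x|^2 + 2 \psi \psi_x \,\Re ( \overline{\rho}\, \rho_x ) + \psi_x^2 |\rho|^2, \qquad
\psi_x \Big( \frac{|w|^2}{\psi} \Big)_x = 2 \psi \psi_x \,\Re ( \overline{\rho}\, \rho_x ) + \psi_x^2 |\rho|^2,
\]
whose difference is exactly $\psi^2 |\rho_x|^2 = \psi^2 \big| (w/\psi)_x \big|^2$.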
\begin{lemma}[point spectral stability]\label{lemptsp}
Suppose $\tau \geq 0$. If $\lambda \in \sigma_\mathrm{\tiny{pt}} \cap \Omega$ then either $\lambda = 0$, or $\mathrm{Re} \,
\lambda \leq -
\chi_1(\tau) < 0$, for some uniform $\chi_1(\tau) > 0$.
\end{lemma}
\begin{proof}
\smartqed
The result is a consequence of the basic energy estimate \eqref{basicee}. Indeed, suppose that $\lambda \in
\sigma_\mathrm{\tiny{pt}}$ and $\Re \lambda \geq 0$ (and consequently, $\lambda \in \Omega$). Then after the transformation, $w =
e^{-\theta} v \in H^2$ satisfies \eqref{basicee}. Notice that
\[
\Re \int_{\mathbb{R}} \overline{w} w_x \, dx = \tfrac{1}{2} \int_{\mathbb{R}} \partial_x \big( |w|^2\big) \, dx = 0.
\]
First, let us assume that $\tau > 0$. For shortness, we denote,
\[
\begin{aligned}
k_0 &:= (1-c^2 \tau) \int_{\mathbb{R}} \psi^2 \left| \partial_x \Big( \frac{w}{\psi}\Big) \right|^2 \, dx &\geq 0,\\
k_1 &:= (1-c^2 \tau)^{-1} \int_{\mathbb{R}} b(x) |w|^2 \, dx &> 0,\\
k_2 &:= \tau \| w \|_{L^2}^2 &> 0,\\
i k_3 &:= \int_{\mathbb{R}} \overline{w} w_x \, dx,
\end{aligned}
\]
with $k_j \in \mathbb{R}$. Notice that $k_1, k_2 > 0$ because $v$ is an eigenfunction, $\tau > 0$, and because
of \eqref{H2}.
Let us denote $\zeta = \Re \lambda$, $\beta = \Im \lambda$. Therefore, taking the real and imaginary parts of
\eqref{basicee} yields
\[
\begin{aligned}
(\zeta^2 - \beta^2) k_2 + 2c\tau \beta k_3 + \zeta k_1 + k_0 &= 0,\\
2 \zeta \beta k_2 - 2c\tau \zeta k_3 + \beta k_1 &= 0.
\end{aligned}
\]
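These two equations follow from taking real and imaginary parts in \eqref{basicee} with $\lambda = \zeta + i\beta$; explicitly,
\[
\Re (\lambda^2) = \zeta^2 - \beta^2, \qquad \Im (\lambda^2) = 2 \zeta \beta, \qquad \Re (i\lambda) = -\beta, \qquad \Im (i\lambda) = \zeta,
\]
so that the term $-2c\lambda \tau \int_{\mathbb{R}} \overline{w} w_x \, dx = -2c\tau \lambda \, (i k_3)$ contributes $2c\tau \beta k_3$ to the real part and $-2 c \tau \zeta k_3$ to the imaginary part.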
Multiply the first equation by $\zeta$, the second by $\beta$, and add them up. The result is,
\[
(\zeta^2 + \beta^2) (k_1 + \zeta k_2) + \zeta k_0 = 0,
\]
or, equivalently,
\[
|\lambda|^2 k_1 + (\Re \lambda) \big( k_0 + |\lambda|^2 k_2 \big) = 0.
\]
Since $k_0 \geq 0$ and $k_1, k_2 > 0$, this implies that $\Re \lambda \leq 0$.
Now, if we assume that $\zeta = \Re \lambda = 0$, from the equations we have that $\beta^2 k_1 = 0$. Since
$k_1 > 0$ we conclude that $\beta = 0$ and this implies that $\lambda = 0$. On the other hand, if we assume
that $\beta = \Im \lambda = 0$, then from the first equation we obtain,
\[
k_2 \zeta^2 + k_1 \zeta + k_0 = 0,
\]
or,
\[
\zeta = \Re \lambda = - \frac{k_1}{2 k_2} \pm \frac{1}{2 k_2} \Big( k_1^2 - 4 k_2 k_0 \Big)^{1/2}.
\]
Since $k_1 > 0$ and $k_0, k_2 \geq 0$, both roots satisfy $\zeta \leq 0$; moreover, $\zeta = 0$ forces $k_0 = 0$, which corresponds to $\lambda = 0$. Hence for $\lambda \neq 0$ we have $\Re \lambda = \zeta < 0$, a contradiction.
We conclude that the only
eigenvalue with $\Re \lambda = 0$
is $\lambda = 0$ and that, for any other eigenvalue with $\lambda \neq 0$ in $\Omega$, there holds
\[
\Re \lambda \leq - \chi_1(\tau) < 0,
\]
for some $\chi_1(\tau) > 0$. This holds because the set $\sigma_\mathrm{\tiny{pt}}$ comprises isolated eigenvalues with finite multiplicity; here $-\chi_1(\tau) < 0$ is the real part of the first (isolated) eigenvalue different from zero. In other words, there is a spectral gap.
In the case where $\tau = 0$, the basic energy estimate \eqref{basicee} yields
\[
\lambda \int_{\mathbb{R}} b(x) |w|^2 \, dx = - \int_{\mathbb{R}} \psi^2 \left| \partial_x \Big(
\frac{w}{\psi}\Big) \right|^2 \, dx,
\]
which implies, in turn, that $\lambda \in \mathbb{R}$ and $\lambda \leq 0$.
Finally, notice that $\lambda = 0$ if and only if
$\partial_x (w/\psi) = 0$ a.e., which is tantamount to $v$ being a constant multiple of $U_x$. This concludes the proof of the lemma.
\qed \end{proof}
As a consequence of the proof of Lemma \ref{lemptsp} we have the following immediate
\begin{corollary}
\label{corgm}
$\lambda = 0$ is an eigenvalue with geometric multiplicity equal to one.
\end{corollary}
\subsection{Simple translation eigenvalue}
We now show that the eigenvalue $\lambda = 0$ is a simple eigenvalue.
\begin{lemma}\label{lemalgm}
The algebraic multiplicity of $\lambda = 0 \in \sigma_\mathrm{\tiny{pt}}$ is equal to one.
\end{lemma}
\begin{proof}
\smartqed
From Corollary \ref{corgm}, we know that $\Phi = (U_x,U_{xx})^\top \in H^1 \times H^1$ is, up to constant
multiples, the unique eigenfunction associated to $\lambda = 0$. Let us denote, for simplicity, $\phi = U_x \in H^2$,
so that $\Phi = (\phi, \phi_x)^\top$. Clearly, because of equation \eqref{nprofileq}, $\phi \in H^2$ is a
solution to
\[
{\mathcal{A}} \phi := (1-c^2 \tau) \phi_{xx} + cb(x) \phi_x + a(x) \phi = 0.
\]
This holds upon differentiation of \eqref{nprofileq} with respect to $x$. The auxiliary operator, ${\mathcal{A}} : L^2 \to
L^2$ defined above, with domain ${\mathcal{D}}({\mathcal{A}}) = H^2$, has a formal adjoint, ${\mathcal{A}}^* : L^2 \to L^2$, given by
\[
{\mathcal{A}}^* \psi = (1-c^2 \tau) \psi_{xx} - cb(x) \psi_x + (a(x) -cb'(x)) \psi, \qquad \psi \in {\mathcal{D}}({\mathcal{A}}^*) = H^2 \subset
L^2.
\]
Now, for any $\lambda \in \sigma_\mathrm{\tiny{pt}}$, the operator ${\mathcal{T}}^\tau(\lambda)$ is Fredholm with index zero. Therefore, by
properties of closed operators \cite{Kat80}, we have that
\[
\dim \ker {\mathcal{T}}^\tau(\lambda)^* = \dim {\mathcal{R}}({\mathcal{T}}^\tau(\lambda))^\perp = \mathrm{codim} \, {\mathcal{R}}({\mathcal{T}}^\tau(\lambda))
= \dim \ker {\mathcal{T}}^\tau(\lambda).
\]
Since $\dim \ker {\mathcal{T}}^\tau(0) = 1$ we conclude that there exists a unique bounded solution $\Psi =
(y,z)^\top \in H^1 \times H^1$ to the adjoint equation
\[
{\mathcal{T}}^\tau(0)^* \Psi = - \big( \partial_x + \mathbb{A}^\tau(x,0)^*\big)\Psi = 0.
\]
From the expression for $\mathbb{A}^\tau(x,0)$ we observe that $(y,z)^\top \in H^1 \times H^1$ is a solution to the
system
\begin{equation}
\label{yzsyst}
\begin{aligned}
-a(x)z + (1-c^2 \tau) y_x &= 0,\\
(1-c^2 \tau) y - cb(x) z + (1-c^2\tau) z_x &= 0.
\end{aligned}
\end{equation}
Since the coefficients are bounded and $y, z \in H^1$, by a bootstrapping argument we can verify from the
system of equations that actually $y, z \in H^2$. Thus, upon differentiation of the second equation and
substitution into the first one we obtain
\[
{\mathcal{A}}^* z = (1-c^2 \tau) z_{xx} - cb(x) z_x + (a(x) - cb'(x)) z = 0.
\]
We conclude that $z = z(x)$ is the only bounded $H^2$-solution to ${\mathcal{A}}^* z = 0$.
Now, like in \cite{LMPS16}, let us define the Melnikov integral
\[
\Gamma := \langle \Psi, \big( \partial_\lambda \mathbb{A}^\tau(x,\lambda) \big)_{|\lambda = 0} \Phi \rangle_{L^2
\times L^2}.
\]
It is well-known (see section 4.2.1 in \cite{San02}) that $\Gamma$ decides whether $\lambda = 0$ is a simple
eigenvalue: if $\Gamma \neq 0$ then its algebraic multiplicity is equal to one (see also \cite{LMPS16} and the
discussion therein). From \eqref{derivlamA} we observe that $\big( \partial_\lambda \mathbb{A}^\tau(x,\lambda)
\big)_{|\lambda = 0} = \mathbb{A}_1^\tau(x)$, and therefore we arrive at
\[
\begin{aligned}
\Gamma = \langle \Psi, \mathbb{A}_1^\tau(x) \Phi \rangle_{L^2 \times L^2} &= \int_{\mathbb{R}} \begin{pmatrix}
y \\ z
\end{pmatrix}^* \mathbb{A}_1^\tau(x) \begin{pmatrix}
\phi \\ \phi_x
\end{pmatrix} \, dx \\
&= (1-c^2 \tau)^{-1} \int_{\mathbb{R}} \overline{z} \big( b(x) \phi - 2c\tau \phi_x \big) \, dx.
\end{aligned}
\]
As in the argument leading to the proof of Lemma 3.2 in \cite{LMPS16}, a direct computation allows one to
verify that the only bounded solution to ${\mathcal{A}}^* z = 0$ is given by $z =\phi/h^2$, where $h$ is a solution to
\[
h_x = - \frac{c b(x)}{2 (1- c^2 \tau)} h,
\]
that is, $h(x) = e^{\theta(x)}$ as in the previous section. By the arguments of Lemma \ref{goodw} it is easy
to verify that $z \in H^2$ inasmuch as $\phi \in H^2$. Thus, a direct computation yields
\[
{\mathcal{A}}^* z = \frac{1}{h^2}\Big( (1-c^2 \tau) \phi_{xx} + cb(x) \phi_x + a(x) \phi \Big) = \frac{1}{h^2} {\mathcal{A}}
\phi = 0,
\]
as claimed. Whence, substituting $z = \phi / h^2$ into the expression for $\Gamma$ we obtain
\[
\begin{aligned}
(1-c^2 \tau) \Gamma &= \int_{\mathbb{R}} \frac{\overline{\phi}}{h^2} \big( b(x) \phi - 2c\tau \phi_x \big) \, dx \\
&= \int_{\mathbb{R}} \frac{b(x)}{h^2} |\phi|^2 - \frac{c\tau}{h^2} \partial_x ( |\phi|^2 ) \, dx \\
&= (1 - c^2 \tau)^{-1} \int_{\mathbb{R}} \frac{b(x)}{h^2} |\phi|^2 \, dx,
\end{aligned}
\]
after integration by parts and substitution of the equation for $h$, which gives $\partial_x (h^{-2}) = \frac{c b(x)}{1-c^2\tau}\, h^{-2}$ and hence $-\int_{\mathbb{R}} \frac{c\tau}{h^2} \partial_x ( |\phi|^2 ) \, dx = \frac{c^2 \tau}{1-c^2\tau} \int_{\mathbb{R}} \frac{b(x)}{h^2} |\phi|^2 \, dx$. We observe that
\[
\Gamma = (1 - c^2 \tau)^{-2} \int_{\mathbb{R}} \frac{b(x)}{h^2} |\phi|^2 \, dx > 0,
\]
and the conclusion follows.
\qed
\end{proof}
\subsection{Main result}
We conclude this section by stating our main theorem.
\begin{theorem}[spectral stability with spectral gap]
\label{mainthm}
Under assumptions \eqref{H1} and \eqref{H2}, for each $\tau \in [0,\tau_m)$ fixed let $U = U(x)$ be the
monotone traveling front solution to \eqref{hypAC}. Then this front is spectrally stable with spectral gap,
more precisely, there exists a uniform $\chi(\tau) > 0$ such that
\[
\sigma \subset \{ \lambda \in \mathbb{C} \, : \, \mathrm{Re} \, \lambda \leq - \chi(\tau) < 0 \} \cup \{ 0 \}.
\]
Moreover, $\lambda = 0$ is a simple isolated eigenvalue (with algebraic multiplicity equal to one) associated
to translation invariance.
\end{theorem}
\begin{proof}
\smartqed
The conclusion follows directly by collecting the results of Corollary \ref{corstabess} and Lemmata \ref{lemptsp}
and \ref{lemalgm}. The spectral gap is given by
\[
\chi(\tau) := \min \{ \chi_0(\tau), \chi_1(\tau) \} > 0,
\]
for each fixed $\tau \in [0, \tau_m)$, where $\chi_0$ is defined in \eqref{defchi0} and $\chi_1$ is the gap defined in Lemma \ref{lemptsp}.
\qed
\end{proof}
Notice that if $\tau > 0$ then the statement of Theorem \ref{mainthm} can be recast in terms of the spectrum of the operators ${\mathcal{L}}^\tau$ defined in \eqref{defcLtau}. Indeed, corollaries \ref{corsamespec} and \ref{corsameptsp} imply that spectral stability with a spectral gap also holds for the matrix operators ${\mathcal{L}}^\tau$ when $\tau > 0$. Thus, we can state the following
\begin{theorem}
Under assumptions \eqref{H1} and \eqref{H2}, and for each fixed $0 < \tau < \tau_m$ there holds
\[
\sigma({\mathcal{L}}^\tau) \subset \{ \lambda \in \mathbb{C} \, : \, \Re \lambda < - \chi(\tau) < 0\} \cup \{ 0 \},
\]
for some uniform $\chi(\tau) > 0$. Moreover, $\lambda = 0$ is a simple isolated eigenvalue of ${\mathcal{L}}^\tau$ with associated eigenfunction $(U_x, -cU_{xx}) \in \ker {\mathcal{L}}^\tau$.
\end{theorem}
\section{Discussion}
\label{secdisc}
In this paper we established the spectral stability with spectral gap of a family of traveling fronts for nonlinear wave equations of the form \eqref{hypAC} when the reaction function is of bistable type. The equations under consideration are endowed with a positive ``damping" term, $g > 0$, which generalizes the previously studied case of the Allen-Cahn equation with relaxation. To that end, we revisited the existence theory using a dynamical systems approach, more in the spirit of our previous work \cite{LMPS16}. Even though existence results are available in the literature \cite{GiKe15}, here we presented a different construction which allows us to derive a variational formula for the unique wave speed and to establish exponential decay of the profile function. Both features play a role in the stability analysis: the uniqueness of the speed is related to the algebraic multiplicity of the zero eigenvalue of the linearized problem around the front, whereas the exponential decay is crucial to locate the essential spectrum.
Our main result establishes that the spectrum of the linearized problem around the front is located in the complex half plane with negative real part, except for the translation zero eigenvalue, which is isolated with finite multiplicity. This property is also known as \textit{spectral stability with spectral gap} and prevents the accumulation of essential spectrum around zero. In this fashion, we generalize the analysis performed in \cite{LMPS16} for a particular case (the Allen-Cahn equation with relaxation) to a wider class of equations. It is important to remark that this result is more general not only in applicability but also in methodology. Indeed, the present proof makes use of energy estimates and works for the whole parameter regime, whereas the previous argument is of a perturbative nature, with an extension to further relaxation times. In our opinion, the method presented here is more direct.
The establishment of spectral stability is a first step in a more general program which includes the nonlinear stability analysis of the fronts under small perturbations. Thanks to the location of the spectrum in the complex plane, we conjecture that the linearized operator around the wave is the infinitesimal generator of a $C_0$-semigroup. The generation of such a semigroup and its decaying properties are a matter of future investigation. (As additional information, in the Appendix we show how to establish resolvent estimates in the case of stationary fronts with $c = 0$, yielding the generation of the semigroup via the Lumer--Phillips theorem.) Such analysis, also called \textit{linearized stability} in the literature \cite{KaPro13,San02}, plays a key role in the proof of nonlinear stability. There exist results in the literature which guarantee nonlinear stability under the assumption of spectral stability (see, e.g., Rottmann-Matthes \cite{Rott11,Rott12a}), but they are not applicable to the generic class of equations considered here, as they are restricted to hyperbolic systems with constant coefficient first order operators. We regard the nonlinear stability of the hyperbolic fronts of equations of the form \eqref{hypAC} as an important open problem which warrants attention from the nonlinear wave propagation community.
\begin{acknowledgement}
R. G. Plaza is grateful to the Department of Information Engineering, Computer Science and Mathematics of the
University of L'Aquila, for their hospitality during the Fall of 2017, when this research was carried out.
This work was partially supported by the EU Project ModComShock G.A. N.
642768.
\end{acknowledgement}
\section*{Appendix: Resolvent estimates for stationary fronts}
\addcontentsline{toc}{section}{Appendix}
Fix $\tau > 0$ and consider the space $\mathcal{X} := H^1 \times L^2$ endowed with the scalar product
\begin{equation*}
\langle (u_1,v_1),(u_2,v_2)\rangle_{{}_{\mathcal{X}}}
:=\Re \langle u_1,u_2\rangle_{{}_{L^2}}
+\tau^{-1}\Re \langle \partial_x u_1, \partial_x u_2\rangle_{{}_{L^2}}
+\Re \langle v_1,v_2\rangle_{{}_{L^2}},
\end{equation*}
and corresponding norm
\begin{equation*}
\|(u,v)\|_{{}_{\mathcal{X}}}= \Big( \|u\|_{{}_{L^2}}^2+\tau^{-1}\|u_x \|_{{}_{L^2}}^2 + \|v\|_{{}_{L^2}}^2 \Big)^{1/2}.
\end{equation*}
Then, for simplicity drop the $\tau > 0$ from the notation and consider the operator defined in \eqref{defcLtau},
\[
{\mathcal{L}} \begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} c \partial_x & & 1 \\ \tau^{-1} (\partial_x^2 + a(x)) & & \; c \partial_x - \tau^{-1} b(x) \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix},
\]
as a closed, densely defined operator on ${\mathcal{X}}$ with domain ${\mathcal{D}} = H^2 \times H^1$. This operator can be conveniently written as
\[
{\mathcal{L}} = {\mathcal{L}}_0 + {\mathcal{B}},
\]
where
\[
{\mathcal{L}}_0 := \begin{pmatrix} c \partial_x & & 1 \\ \tau^{-1} \partial_x^2 -1 & & c \partial_x - \tau^{-1}b(x) \end{pmatrix}, \qquad {\mathcal{B}} := \begin{pmatrix} 0 & & 0 \\ \tau^{-1} a(x) + 1 & & 0 \end{pmatrix}.
\]
We first observe that the operator ${\mathcal{L}}_0$ is dissipative on $\mathcal{X}$ since for any $\mathbf{w} = (u,v)^\top \in {\mathcal{D}}$,
\begin{equation*}
\begin{aligned}
\langle \mathbf{w}, {\mathcal{L}}_0 \mathbf{w} \rangle_{{}_{\mathcal{X}}} &= \Re \langle u, c u_x + v \rangle_{{}_{L^2}}
+ \tau^{-1} \Re \langle u_x, cu_{xx} + v_x \rangle_{{}_{L^2}} + \\ & \;\;\; + \Re \langle v, \tau^{-1} u_{xx} -u + cv_x - \tau^{-1} b(x) v \rangle_{{}_{L^2}} \\
&= - \tau^{-1} \Re \langle v, b(x)v \rangle_{{}_{L^2}} \leq 0,\\
\end{aligned}
\end{equation*}
in view of Hypothesis \eqref{H2} and having used the fact that $\Re \langle f, f_x\rangle_{{}_{L^2}} = 0$ for any $f \in H^1$. Since ${\mathcal{D}}$ is dense in ${\mathcal{X}}$ and by dissipativity, thanks to the Lumer--Phillips theorem (see, e.g., Theorem 12.22 in \cite{ReRo04}) it suffices to show that ${\mathcal{L}}_0 - \lambda$ is onto for real $\lambda$ sufficiently large to conclude that ${\mathcal{L}}_0$ is the infinitesimal generator of a $C_0$-semigroup of contractions, $e^{t {\mathcal{L}}_0}$, satisfying $\| e^{t {\mathcal{L}}_0}\| \leq 1$. Clearly, ${\mathcal{B}}$ is a bounded operator and $\|{\mathcal{B}}\| = O(1 + \tau^{-1} \| a \|_{{}_{L^\infty}})$; since ${\mathcal{L}}$ is a bounded perturbation of ${\mathcal{L}}_0$, it is also the infinitesimal generator of a quasi-contractive $C_0$-semigroup, ${\mathcal{S}}(t)$, such that
\[
\| {\mathcal{S}}(t) \| \leq e^{t \| {\mathcal{B}} \|} = e^{tC(1 + \tau^{-1} \| a \|_{{}_{L^\infty}})},
\]
for some $C > 0$ (see Theorem 1.1 in Pazy \cite{Pazy83}, chapter 3).
We illustrate how to prove that ${\mathcal{L}}_0 - \lambda$ is onto for $\lambda$ real and large in the case of a stationary front with $c = 0$ by establishing a resolvent estimate.
First, note that if $c = 0$ then the operator ${\mathcal{L}}_0$ reduces to
\begin{equation}
\label{defopc0}
{\mathcal{L}}_0 \begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} 0 & & 1 \\ \tau^{-1} \partial_x^2 - 1 & & \; - \tau^{-1} b(x) \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix}.
\end{equation}
For given $(\phi, \psi)^\top \in {\mathcal{X}}$ suppose that $(u,v)^\top \in {\mathcal{D}}$ is a solution to the resolvent equation
\[
(\lambda - {\mathcal{L}}_0) \begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} \phi \\ \psi / \tau \end{pmatrix},
\]
for some $\lambda \in \mathbb{C}$. This yields the system of equations
\begin{equation}\label{reszerospeed}
\lambda u-v=\phi,\qquad
\tau\lambda v-u_{xx} + \tau u+b(x)v=\psi.
\end{equation}
\begin{lemma}\label{lemma:vuprime}
Let $b(x)\geq b_0>0$ for any $x\in\mathbb{R}$.
Given $(\phi,\psi)\in {\mathcal{X}} = H^1 \times L^2$, let $(u,v)\in {\mathcal{D}} = H^2 \times H^1$
be a solution to system \eqref{reszerospeed}.
Then for any $r>0$, there exists a constant $C>0$ (depending on $\tau, b_0$
and $r$) such that
\begin{equation}\label{vuprime}
\|v\|_{{}_{L^2}}+\|u_x\|_{{}_{L^2}}
\leq C\left( \|\psi\|_{{}_{L^2}}+\|\phi_x\|_{{}_{L^2}}+\|u\|_{{}_{L^2}}\right)
\end{equation}
for any $\lambda$ with $|\lambda|\geq r>0$ and $\Re \lambda\geq 0$.
\end{lemma}
\begin{proof}
\smartqed
Multiplying the second equation by $\bar v$ we obtain
\begin{equation*}
\bigl(\tau\lambda+b(x)\bigr) |v|^2-(u_x \bar v)_x + u_x \bar{v}_x + \tau u \bar{v}=\psi \,\bar{v}.
\end{equation*}
Since $v_x = \lambda u_x - \phi_x$, there holds
\begin{equation*}
\bigl(\tau\lambda+b(x)\bigr) |v|^2+\bar \lambda |u_x|^2 + \tau u\,\bar v-(u_x \bar{v})_x
=\psi\,\bar{v}+\bar{\phi}_x \,u_x.
\end{equation*}
Integrating in $\mathbb{R}$ and separating real and imaginary parts, we infer
\begin{equation*}
\begin{aligned}
&\bigl(\tau\Re \lambda+b_0\bigr) \|v\|_{{}_{L^2}}^2+\Re \lambda \|u_x\|_{{}_{L^2}}^2
\leq \|\psi\|_{{}_{L^2}}\|v\|_{{}_{L^2}}+\|\phi_x\|_{{}_{L^2}}\|u_x\|_{{}_{L^2}}
+\tau \|u\|_{{}_{L^2}}\|v\|_{{}_{L^2}},\\
&\bigl|\Im\lambda\bigr|\,\Bigl|\tau\|v\|_{{}_{L^2}}^2-\|u_x\|_{{}_{L^2}}^2\Bigr|
\leq \|\psi\|_{{}_{L^2}}\|v\|_{{}_{L^2}}+\|\phi_x\|_{{}_{L^2}}\|u_x\|_{{}_{L^2}}
+ \tau \|u\|_{{}_{L^2}}\|v\|_{{}_{L^2}}.\\
\end{aligned}
\end{equation*}
Applying Young's inequality, we deduce
\begin{equation*}
\begin{aligned}
&\bigl(\tau \, \Re\lambda+b_0\bigr) \|v\|_{{}_{L^2}}^2+\Re \lambda \|u_x\|_{{}_{L^2}}^2
\leq \frac{1}{b_0}\|\psi\|_{{}_{L^2}}^2+\frac{\tau^2}{b_0}\|u\|_{{}_{L^2}}^2
+\|\phi_x\|_{{}_{L^2}}\|u_x\|_{{}_{L^2}}+\frac{b_0}{2}\|v\|_{{}_{L^2}}^2.
\end{aligned}
\end{equation*}
Hence, the following two estimates hold for any choice of $\lambda$ such that
$\Re\lambda\geq 0$,
\begin{equation}\label{forvuprime}
\begin{aligned}
&\frac12\,b_0\|v\|_{{}_{L^2}}^2+\Re \lambda \|u_x\|_{{}_{L^2}}^2
\leq \frac{1}{b_0}\|\psi\|_{{}_{L^2}}^2+\frac{\tau^2}{b_0}\|u\|_{{}_{L^2}}^2
+\|\phi_x\|_{{}_{L^2}}\|u_x\|_{{}_{L^2}},\\
&\bigl|\Im\lambda\bigr|\,\Bigl|\tau\|v\|_{{}_{L^2}}^2-\|u_x\|_{{}_{L^2}}^2\Bigr|
\leq \|\psi\|_{{}_{L^2}}\|v\|_{{}_{L^2}}+ \tau \|u\|_{{}_{L^2}}\|v\|_{{}_{L^2}}
+\|\phi_x\|_{{}_{L^2}}\|u_x\|_{{}_{L^2}}.
\end{aligned}
\end{equation}
For $\Re\lambda\geq c_0>0$, there holds
\begin{equation*}
\begin{aligned}
\tfrac12\, b_0 \|v\|_{{}_{L^2}}^2+c_0\|u_x\|_{{}_{L^2}}^2
&\leq \frac{1}{b_0}\|\psi\|_{{}_{L^2}}^2+\frac{\tau^2}{b_0}\|u\|_{{}_{L^2}}^2
+\|\phi_x\|_{{}_{L^2}}\|u_x\|_{{}_{L^2}}\\
&\leq \frac{1}{b_0}\|\psi\|_{{}_{L^2}}^2+\frac{1}{2c_0} \|\phi_x\|_{{}_{L^2}}^2
+\frac{\tau^2}{b_0}\|u\|_{{}_{L^2}}^2+\frac{c_0}{2}\|u_x\|_{{}_{L^2}}^2.
\end{aligned}
\end{equation*}
Thus, we deduce
\begin{equation*}
\|v\|_{{}_{L^2}}^2+\|u_x\|_{{}_{L^2}}^2
\leq C\left(\|\psi\|_{{}_{L^2}}^2+\|\phi_x\|_{{}_{L^2}}^2+\|u\|_{{}_{L^2}}^2\right),
\end{equation*}
for some strictly positive constant $C$ depending on $b_0, \tau$ and $c_0$.
Next, let $\lambda$ be such that $|\Im\lambda|\geq \theta_0>0$.
Then, from the second bound in \eqref{forvuprime}, it follows
\begin{equation*}
\begin{aligned}
\theta_0\|u_x\|_{{}_{L^2}}^2
&\leq \|\psi\|_{{}_{L^2}}\|v\|_{{}_{L^2}}+\|\phi_x\|_{{}_{L^2}}\|u_x\|_{{}_{L^2}}
+\tau\|u\|_{{}_{L^2}}\|v\|_{{}_{L^2}}+\theta_0\tau\|v\|_{{}_{L^2}}^2\\
&\leq \|\psi\|_{{}_{L^2}}\|v\|_{{}_{L^2}}
+\frac{1}{2\theta_0}\|\phi_x\|_{{}_{L^2}}^2+\frac{\theta_0}{2}\|u_x\|_{{}_{L^2}}^2
+\tau \|u\|_{{}_{L^2}}\|v\|_{{}_{L^2}}+\theta_0\tau\|v\|_{{}_{L^2}}^2,
\end{aligned}
\end{equation*}
again thanks to Young's inequality, so that
\begin{equation}\label{uprime}
\|u_x\|_{{}_{L^2}}^2
\leq C\left(\|\psi\|_{{}_{L^2}}^2+\|\phi_x\|_{{}_{L^2}}^2
+\|u\|_{{}_{L^2}}^2+\|v\|_{{}_{L^2}}^2\right),
\end{equation}
for some strictly positive constant depending on $\tau$ and $\theta_0$.
Hence, from the first estimate in \eqref{forvuprime},
we deduce for $\Re\lambda\geq 0$ and $|\Im\lambda|\geq \theta_0>0$, that
\begin{equation*}
\begin{aligned}
\frac12\,b_0\|v\|_{{}_{L^2}}^2
&\leq \frac{1}{b_0}\|\psi\|_{{}_{L^2}}^2+\frac{\tau^2}{b_0}\|u\|_{{}_{L^2}}^2
+\|\phi_x\|_{{}_{L^2}}\|u_x\|_{{}_{L^2}}\\
&\leq \frac{1}{b_0}\|\psi\|_{{}_{L^2}}^2+\frac{\tau^2}{b_0}\|u\|_{{}_{L^2}}^2
+\frac{1}{2\eta}\|\phi_x\|_{{}_{L^2}}^2+\frac{\eta}{2}\|u_x\|_{{}_{L^2}}^2,
\end{aligned}
\end{equation*}
for any $\eta>0$. By choosing $\eta$ sufficiently small and taking advantage of \eqref{uprime},
we deduce
\begin{equation*}
\|v\|_{{}_{L^2}}^2
\leq C\left(\|\psi\|_{{}_{L^2}}^2+\|\phi_x\|_{{}_{L^2}}^2+\|u\|_{{}_{L^2}}^2\right),
\end{equation*}
for some strictly positive constant $C>0$ depending on $\tau, b_0$ and $\theta_0$.
\qed
\end{proof}
Thanks to Lemma \ref{lemma:vuprime}, it is enough to estimate $u$ in $L^2$.
To this aim, we state and prove the following elementary result.
\begin{lemma}
Let $0\leq A\leq B$ with $B>0$.
Given $y_0>0$, set $\Sigma_0:=\{(x,y)\,:\,x\geq 0,\ |y|\geq y_0\}$. Then
\begin{equation}\label{somesup}
\sup_{(x,y)\in\Sigma_0} \frac{1+A\sqrt{x^2+y^2}}{\bigl(1+B\,x\bigr)|y|}
\leq A+\frac{1}{y_0}.
\end{equation}
\end{lemma}
\begin{proof}
\smartqed
Fix $y_0>0$ and let $y$ be such that $|y|\geq y_0$. Set $M:=A+1/y_0$.
We want to prove that
\begin{equation*}
F(x):=M\bigl(1+B\,x\bigr)|y|-A\sqrt{x^2+y^2}\geq 1, \qquad\qquad \forall x\geq 0.
\end{equation*}
Since the function $F$ is concave, it is enough to require that the condition $F(x)\geq 1$
is satisfied at $x=0$ and as $x\to+\infty$.
The former condition is satisfied if $M\geq A+1/y_0$;
the latter, if $M\geq A/By_0$.
Since $A\leq B$, the first condition implies the second.
\qed
\end{proof}
\begin{lemma}\label{lemma:uell2}
Let $0<b_0\leq b(x)\leq b_1$ for any $x\in\mathbb{R}$.
Given $(\phi,\psi)\in {\mathcal{X}} = H^1 \times L^2$, let $(u,v)\in {\mathcal{D}} = H^2 \times H^1$
be such that \eqref{reszerospeed} holds.
Then there exists $M>0$ such that for any $\theta_0>0$, there exists a constant
$C>0$ (depending on $\tau, b_0, M$ and $\theta_0$) such that
\begin{equation}\label{uell2}
\|u\|_{{}_{L^2}}\leq C\left(\|\phi\|_{{}_{L^2}}+\|\psi\|_{{}_{L^2}}\right),
\end{equation}
for any $\lambda$ with either $\Re\lambda\geq M$ or $|\Im\lambda|\geq \theta_0>0$.
\end{lemma}
\begin{proof}
\smartqed
Multiplying the second equation by $\bar u$ we obtain
\begin{equation*}
\bigl(\tau\lambda^2 +\lambda b(x) + \tau \bigr)|u|^2+|u_x|^2-(u_x \bar{u})_x
=(b(x)+\tau\lambda)\bar u\phi+\bar u\psi.
\end{equation*}
Integrating in $\mathbb{R}$ and taking real and imaginary parts, we infer
\begin{equation}\label{realandimag}
\begin{aligned}
\bigl(\tau(\Re\lambda)^2-\tau(\Im\lambda)^2
+b_0\Re\lambda + \tau\bigr)\|u\|_{{}_{L^2}}
&\leq (b_1+\tau|\lambda|)\|\phi\|_{{}_{L^2}}+\|\psi\|_{{}_{L^2}},\\
|\Im\lambda|\bigl(2\tau\Re\lambda+b_0\bigr)\|u\|_{{}_{L^2}}
&\leq (b_1+\tau|\lambda|)\|\phi\|_{{}_{L^2}}+\|\psi\|_{{}_{L^2}}.
\end{aligned}
\end{equation}
Applying \eqref{somesup}, from the second inequality in \eqref{realandimag}, we infer
\begin{equation}\label{estimate1}
\|u\|_{{}_{L^2}}
\leq \frac{1}{b_0}\left(\tau+\frac{b_1}{\theta_0}\right)\|\phi\|_{{}_{L^2}}
+\frac{1}{b_0\,\theta_0}\,\|\psi\|_{{}_{L^2}},
\end{equation}
for any $\lambda$ such that $|\Im\lambda|\geq \theta_0>0$ and $\Re\lambda\geq 0$.
Using relations \eqref{realandimag}, we deduce
\begin{equation*}
\begin{aligned}
\bigl(\tau(\Re\lambda)^2 +b_0\Re\lambda + \tau\bigr)\|u\|_{{}_{L^2}}
&\leq (b_1+\tau|\lambda|)\|\phi\|_{{}_{L^2}}+\|\psi\|_{{}_{L^2}}
+\tau(\Im\lambda)^2\|u\|_{{}_{L^2}}\\
&\leq \frac{2\tau\Re\lambda+\tau|\Im\lambda|+b_0}{2\tau\Re\lambda+b_0}
\left((b_1+\tau|\lambda|)\|\phi\|_{{}_{L^2}}+\|\psi\|_{{}_{L^2}}\right).
\end{aligned}
\end{equation*}
For $\Re\lambda$ large and $|\Im\lambda|\leq m_0|\Re\lambda|$, there holds
\begin{equation*}
\|u\|_{{}_{L^2}}\leq \frac{C}{\Re\lambda}\left(\|\phi\|_{{}_{L^2}}
+\frac{1}{\Re\lambda}\|\psi\|_{{}_{L^2}}\right)
\end{equation*}
for some constant $C>0$.
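Indeed, in this sector $|\lambda|\leq(1+m_0)\Re\lambda$ and
\begin{equation*}
\frac{2\tau\Re\lambda+\tau|\Im\lambda|+b_0}{2\tau\Re\lambda+b_0}\leq 1+\frac{m_0}{2},
\end{equation*}
so the right-hand side of the previous chain of inequalities is bounded by a constant multiple of $\Re\lambda\,\|\phi\|_{{}_{L^2}}+\|\psi\|_{{}_{L^2}}$, while the factor on the left-hand side grows like $\tau(\Re\lambda)^2$.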
\qed
\end{proof}
Collecting the statements contained in Lemma \ref{lemma:vuprime} and Lemma \ref{lemma:uell2},
we deduce the following result.
\begin{proposition}
Given $0<b_0\leq b(x)\leq b_1$ for any $x\in\mathbb{R}$,
let ${\mathcal{L}}_0$ be the operator defined in \eqref{defopc0} on the space ${\mathcal{X}} = H^1 \times L^2$ with
dense domain ${\mathcal{D}} = H^2 \times H^1$. Then,\par
{(i) } there exists $M>0$ such that
\begin{equation*}
\{\lambda \in \mathbb{C} \,:\,\Re\lambda\geq 0\}\setminus[0,M]\subseteq\rho({\mathcal{L}}_0),
\end{equation*}
where $\rho({\mathcal{L}}_0)$ is the resolvent set of ${\mathcal{L}}_0$; and, \par
{(ii) } for any $\theta_0>0$, there exists a constant $C>0$ for which
\[
\|(\lambda-{\mathcal{L}}_0)^{-1}\|\leq C
\]
for any $\lambda$ such that either $\Re\lambda\geq M$ or $|\Im\lambda|\geq \theta_0$.
\end{proposition}
\section{Preliminaries}\label{Prelims}
Throughout the paper we let $(M, g)$ denote a compact Riemann surface with a smooth metric and let $(X, d)$ denote a compact locally CAT(1) space. We refer the reader to \cite[Section 2.2]{Banff1} for background on CAT(1) spaces. A metric space $(X,d)$ is said to be \emph{locally} CAT(1) if every point of $X$ has a geodesically convex CAT(1) neighborhood. Note that for a compact locally CAT(1) space, there exists a radius $r(X)>0$ such that for all $P \in X$, $\overline {\mathcal B_{r(X)}(P)}$ is a compact CAT(1) space. Let
\[
\tau(X):= \min\{r(X), \pi/4\}
\]and let $\mathrm{inj}(M)$ denote the injectivity radius of $M$.
For $r \in (0, \mathrm{inj}(M))$, $t \in (0, \tau(X))$, we denote geodesic disks and balls in their respective domains as $D_r(x) \subset M$ and $\mathcal B_t(P) \subset X$. We also frequently consider geodesic disks with respect to the metric induced by the pullback of the exponential map and use the same notation, $D_r(0) \subset T_xM =\mathbb R^2$.
Following the definition in \cite{korevaar-schoen1}, the Sobolev space $W^{1,2}(M,X)$ is the space of finite energy maps. That is, $u \in W^{1,2}(M,X)$ if its energy density function (as defined in \cite{korevaar-schoen1}) $|\nabla u|^2 \in L^1(M)$. The total energy of the map $u$ is given by
\[
E[u]:=\int_M|\nabla u|^2 d\mu_g
\]and we denote the energy on subsets $\Omega \subset M$ by
\[
E[u,\Omega]:= \int_\Omega |\nabla u|^2 d\mu_g.
\]Given any $h \in W^{1,2}(\Omega,X)$ we define
\[
W^{1,2}_h(\Omega,X):=\{ f \in W^{1,2}(\Omega,X): Tr(f) = Tr(h)\}
\]where $Tr(u) \in L^2(\partial \Omega,X)$ denotes the trace map (see \cite{korevaar-schoen1}).
\begin{defn}\label{def:min}
A map $u \in W^{1,2}(M,X)$ is \emph{harmonic} if it is locally energy minimizing. In particular, for each $x \in M$ there exist $r_x>0$, $\rho \in (0, \tau(X))$, and $P \in X$ such that $u(D_{r_x}(x)) \subset \mathcal B_\rho(P)$ and $h:=u|_{D_{r_x}(x)}$ has finite energy and minimizes energy among all maps in $W^{1,2}_h(D_{r_x}(x),\overline{ \mathcal B_\rho(P)})$.
\end{defn} The existence and uniqueness of Dirichlet solutions follow from \cite[Lemma B.2]{Banff2} and \cite{serbinowski}. We will also need the regularity of such solutions.
\begin{theorem}[Lemma 1.3, \cite{Banff1}]
Suppose that $u:D_r \to \mathcal B_{\tau(X)}(P) \subset X$ is an energy minimizing map. Then $u$ is Lipschitz continuous on $D_{r/2}$ with Lipschitz constant depending only on $E[u, D_r]$ and $g$.
\end{theorem}
Let $|u_*(Z)|^2$ denote the directional energy density function for $Z \in \Gamma( TM)$, where $\Gamma(TM)$ is the space of Lipschitz vector fields on $M$ (see \cite[Section 1.8]{korevaar-schoen1}). For any finite energy map $u:(M,g) \to (X,d)$, define
\[
\pi:\Gamma(TM) \times \Gamma(TM) \to L^1(M)
\]by
\[
\pi(Z,W):= \frac 14 \left|u_*(Z+W)\right|^2 - \frac 14\left|u_*(Z-W)\right|^2.
\]By \cite[Lemma 3.5]{Banff1}, $\pi$ is a continuous, symmetric, bilinear, non-negative tensorial operator.
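In particular, taking $W=Z$ recovers the directional energy density:
\begin{equation*}
\pi(Z,Z)=\tfrac14\left|u_*(2Z)\right|^2=|u_*(Z)|^2.
\end{equation*}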
Let
\begin{equation*}
\Phi_u = \pi \left( {\partial_x}, {\partial_x} \right) - \pi\left({\partial_y},{\partial_y} \right) - 2 \mathbf{\textit{i}}\pi \left({\partial_x} , {\partial_y}\right)
\end{equation*}
denote the \emph{Hopf function} for $u$. As in the smooth setting, when $u$ is harmonic, $\Phi_u$ is holomorphic (see \cite[Lemma 3.7]{Banff2}).
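As in the classical setting, $u$ is conformal precisely when $\Phi_u\equiv 0$, that is, when
\begin{equation*}
\pi(\partial_x,\partial_x)=\pi(\partial_y,\partial_y)
\qquad\text{and}\qquad
\pi(\partial_x,\partial_y)=0
\end{equation*}
almost everywhere; this characterization is used repeatedly below.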
\section{Analogues of Classical Results} \label{Tools}
In the smooth setting, the compactness follows from four properties of harmonic maps (see \cite[Proposition 1.1]{Parker}). We state an analogous proposition for harmonic maps into compact locally CAT(1) spaces. Note that the uniform convergence statement is not as strong as Parker's; we can get only $C^0$ uniform convergence. Nevertheless, we are still able to prove Theorem \ref{MAIN}.
\begin{prop}\label{BTtools}
There exist positive constants $C', \epsilon'>0$ depending only on $(M, g)$ and $(X,d)$ such that the following hold:
\begin{enumerate}
\item (Sup Estimate) Let $u:D_r\to X$ be a harmonic map with $E\left[ u,D_r \right]< \epsilon'$ and $0<r<\epsilon'$. Then
\[
\max_{0 \le \sigma \le r} \sigma^2 \sup_{D_{r - \sigma}} |\nabla u|^2 \le C'.
\]In particular for all $x \in D_{3r/4}$,
\[
|\nabla u|^2(x) \leq \frac {C'}{r^2}.
\]
\item (Energy Gap) If $(M,g)=(\mathbb S^2, g_0)$, where $g_0$ is the standard metric on the sphere, and $u:\mathbb S^2 \to X$ is a conformal harmonic map with $E\left[ u,\mathbb S^2 \right] < \epsilon'$, then $u$ is a constant map.
\item (Uniform Convergence) Let $u_k:D_r \to X$ be a sequence of harmonic maps with $E\left[ u_k,D_r \right]<\epsilon'$. Then a subsequence of $\{u_k\}$ converges in $C^0$ uniformly to a harmonic map $u$ on $D_{r/2}$.
\item (Removable Singularity) Let $u:D_r \backslash \{0\} \to X$ be a finite energy harmonic map. Then $u$ extends to a locally Lipschitz harmonic map $u:D_r \to X$.
\end{enumerate}
\end{prop}
The entirety of this section is devoted to proving each of these results. The results are listed in the order in which they are proven and each subsection contains the proof of a single item.
In the smooth setting, the proofs of these results rely on the Euler-Lagrange equation of the (perturbed) energy functional. Lacking such an equation, we instead exploit weak differential inequalities which follow from the locally minimizing property of harmonic maps coupled with the local convexity of the target space.
\subsection{Sup Estimate}
Following the now classical methods of \cite{Choi-Schoen}, we use a monotonicity formula and scale invariance to prove a pointwise gradient bound for harmonic maps with small energy.
\begin{prop}\label{GradientProp}
Suppose $u : D_r \to X$, $r \leq 1$, is a finite energy harmonic map. There exists an $\epsilon_0 >0$, depending only on the metric $g$, such that if $E\left[ u , D_r \right] < \epsilon_0$ and $r < \epsilon_0$, then
\begin{equation}
\max_{0 \le \sigma \le r} \sigma^2 \sup_{D_{r - \sigma}} |\nabla u|^2 \le C_0^2 ,
\end{equation} where $C_0$ depends only on the metric $g$.
\end{prop}
Before proceeding with the proof, we point out an important subharmonicity estimate that we will need. The result follows from a local Bochner type inequality (see \cite{FZ}).
\begin{prop}\label{SubharmProp}
Let $u : (D_{2r},g) \to X$ be a harmonic map with finite energy and let $g$ be a metric with bounded curvature. Then for all $\eta \in C_0^\infty(D_{r})$,
\begin{equation}
-\int_{D_r} \nabla |\nabla u|^2 \cdot \nabla \eta \ge -C' \int_{D_r} \eta |\nabla u|^2 \left( 1+|\nabla u|^2\right)
\end{equation}where $C'>0$ depends only on the curvature of the domain.
\end{prop}
\begin{proof}
For each $x \in \overline {D_r}$, let $s_x:= \sup\{s>0: u(D_s(x)) \subset \mathcal B_{\tau(X)}(u(x))\}$. For the open cover $\{D_{s_x}(x)\}_{x \in \overline{D_r}}$, consider a finite subcover $\left\{D_{s_i}(x_i)\right\}_{1\le i \le m}$ and denote $s:= \min_i\{s_{i}\}$. By \cite{FZ}, there exists $C'>0$ depending on the curvature of $M$ such that for each $x_i$,
\begin{equation}
-\int_{D_s(x_i)} \nabla |\nabla u|^2 \cdot \nabla \eta \ge -C' \int_{D_s(x_i)} \eta |\nabla u|^2 \left( 1+|\nabla u|^2\right).
\end{equation}
Now let $\{\phi_i\}$ be a smooth partition of unity subordinate to the covering. Then $\phi_i \in C^\infty_0 (D_s(x_i))$ for each $i$. Moreover, $\sum_i \phi_i \equiv 1$, $\sum_i \nabla \phi_i \equiv 0$. Therefore for any test function $\eta \in C^1_0 (D_r)$,
\begin{eqnarray*}
-\int_{D_r} \nabla |\nabla u|^2 \cdot \nabla \eta &=&- \int_{D_r} \nabla |\nabla u|^2 \cdot \nabla \left(\eta \left(\sum_i \phi _i\right)\right) \\ &=& -\sum_i \int_{D_s(x_i)\cap D_r} \nabla |\nabla u|^2 \cdot \nabla (\eta \phi_i ) \\ &\ge& -C' \sum_i \int_{D_s(x_i)\cap D_r} \eta \phi_i |\nabla u|^2 \left( 1+|\nabla u|^2\right) \\
&=& -C' \int_{D_r} \; \eta|\nabla u|^2 \left( 1+|\nabla u|^2\right).
\end{eqnarray*}
\end{proof}
\begin{proof}[Proof of Proposition \ref{GradientProp}]
Choose $\sigma_0 \in (0 , r] $ and $x_0 \in \overline D_{r - \sigma_0}$ so that
\begin{equation*}
\sigma_0^2 \sup_{D_{r - \sigma_0}} |\nabla u|^2 = \max_{\sigma \in (0,r]} \sigma^2 \sup_{D_{r - \sigma}} |\nabla u|^2,
\end{equation*}
and
\begin{equation*}
|\nabla u|^2 (x_0) \geq \frac 12 \sup_{D_{r - \sigma_0}} |\nabla u|^2.
\end{equation*}
We deduce that
\begin{equation*}
\sup_{D_{\frac{\sigma_0}{2}}(x_0)} |\nabla u|^2 \le 8 |\nabla u|^2 (x_0).
\end{equation*}
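Indeed, $D_{\sigma_0/2}(x_0)\subset D_{r-\sigma_0/2}$ and, by the maximality of $\sigma_0$ together with the choice of $x_0$,
\begin{equation*}
\sup_{D_{\sigma_0/2}(x_0)}|\nabla u|^2
\leq \sup_{D_{r-\sigma_0/2}}|\nabla u|^2
\leq 4\sup_{D_{r-\sigma_0}}|\nabla u|^2
\leq 8\,|\nabla u|^2(x_0).
\end{equation*}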
Notice that if $\sigma_0^2 |\nabla u|^2 (x_0) \le 4$ then the desired result holds. So suppose instead that $ |\nabla u|^2 (x_0) \ge 4 \sigma_0^{-2}$. Let $\tilde{u}: D_1 \to X $ be given by
\begin{equation*}
\tilde{u}(x) = u \left( x_0 +|\nabla u|^{-1} (x_0) x \right).
\end{equation*}
Then
\begin{equation*}
\sup_{D_1} \left| \nabla \tilde{u} \right|^2 \le 8 \quad \text{and} \quad \left| \nabla \tilde{u} \right|^2(0) = 1.
\end{equation*}
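The bounds on $\tilde u$ follow from the scaling relation
\begin{equation*}
\left|\nabla\tilde u\right|^2(x)=|\nabla u|^{-2}(x_0)\,|\nabla u|^2\!\left(x_0+|\nabla u|^{-1}(x_0)\,x\right),
\end{equation*}
since $|\nabla u|^{-1}(x_0)\leq \sigma_0/2$ guarantees that the argument stays in $D_{\sigma_0/2}(x_0)$, where $|\nabla u|^2\leq 8\,|\nabla u|^2(x_0)$.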
By Proposition \ref{SubharmProp}, for all $\eta \in C_0^\infty(D_1)$,
\begin{equation*}
-\int_{D_1} \nabla \left| \nabla \tilde{u} \right|^2 \cdot \nabla \eta \ge -C'\int_{D_1}\eta |\nabla \tilde u|^2 \left( 1+|\nabla \tilde u|^2\right) \geq -9C' \int_{D_1}\eta \left|\nabla \tilde u\right|^2.
\end{equation*} Finally, Morrey's mean value inequality and the scale invariance of the energy imply that, for $c$ depending only on the domain metric $g$,
\begin{equation*}
1 = \left| \nabla \tilde{u} \right|^2 (0) \le c \int_{D_1} \left| \nabla \tilde{u} \right|^2 \leq c\, \epsilon_0.
\end{equation*}For $\epsilon_0$ sufficiently small, we get a contradiction.
\end{proof}
\subsection{Energy Gap}
\begin{prop}[Energy Gap]\label{GapThm}There exists $\epsilon_{\mathrm{gap}}>0$ depending only on $g_0, (X,d)$ (where $g_0$ is the standard metric on $\mathbb S^2$) such that the following holds:
Let $u:\mathbb S^2 \to X$ be a conformal, harmonic map such that $E\left[ u,\mathbb S^2 \right] < \epsilon_{\mathrm{gap}}$. Then $u$ is a constant map.
\end{prop}
\begin{proof}
Suppose first that $u(\mathbb S^2) \subset \mathcal B_{\tau(X)} (P)$ for some $P \in X$. Then, by \cite[Lemma 4.3]{Banff1}, $\Delta d^2(u(x),P) \geq \frac 12 |\nabla u|^2 \geq 0$ holds weakly on all of $\mathbb S^2$. Since $\mathbb S^2$ is closed, $d^2(u(x),P)$ is constant, hence $|\nabla u|^2 \equiv 0$, i.e. $u$ is constant.
Now suppose that $u(\mathbb S^2)$ is not contained in $\mathcal B_{\tau(X)}(P)$ for all $P \in X$. Then, $\mathrm{diam}(u(\mathbb S^2)) > \tau(X)$. By the Monotonicity Formula of \cite[Theorem 3.4]{Banff2}, there exists $C>0$, independent of $u$ and $p \in \mathbb S^2$ such that
\[
E\left[ u,\mathbb S^2 \right] \geq E\left[ u, u^{-1}(\mathcal B_{\tau(X)}(u(p))) \right] \geq C\tau(X)^2.
\]
Thus, choosing $\epsilon_{\mathrm{gap}} <C \tau(X)^2$ implies the result.
\end{proof}
\subsection{Uniform Convergence}
\begin{prop}There exists $\epsilon_2>0$, depending only on $g, (X,d)$ such that the following holds:
Let $u_k:D_r \to X$ be a sequence of harmonic maps with $E\left[ u_k,D_r \right]<\epsilon_2$. Then a subsequence $u_k$ converges in $C^0$ uniformly to a harmonic map $u$ on $D_{r/2}$.
\end{prop}
\begin{proof}Let $\epsilon_0, C_0$ be as in Proposition \ref{GradientProp}. Set $0<\epsilon_2 \leq \epsilon_0$. Then for all $x,y \in D_{3r/4}$ with $d_g(x,y)<\frac{\tau(X)}{C_0}r$ and all $k$, $d(u_k(x), u_k(y)) < \tau(X)$. Set $s:= \min\left\{\frac{\tau(X)}{C_0}r, \frac r{16}\right\}$. Cover $D_{r/2}$ by disks $\{D_{s/4}(x_j)\}$. By \cite[Remark 3.2]{Banff2}, for all $k$, $u_k|_{D_{s}(x_i)}$ is energy minimizing. By \cite[Theorem 1.3]{Banff1}, the $u_k$ are equicontinuous on the cover $\{D_{s/2}(x_j)\}$. Therefore a subsequence $u_k \to u$ uniformly on every ball in this cover and thus on $D_{r/2}$. Applying \cite[Theorem 2.3]{Banff2} to each disk $D_{s/2}(x_j)$, we see that $u$ is energy minimizing on each disk $D_{s/4}(x_j)$. It follows that $u$ is harmonic on $D_{r/2}$.
\end{proof}
\subsection{Removable singularity theorem}
Notice that the work of this subsection extends the result of \cite[Theorem 3.6]{Banff2}, where a removable singularity theorem is proven for conformal harmonic maps.
\begin{theorem}\label{RSThm}
Let $u:D_1 \backslash \{0\} \to X$ be a finite energy harmonic map. Then $u$ extends to a locally Lipschitz harmonic map $u:D_1 \to X$.
\end{theorem}
\begin{proof}
Since $u$ has finite energy, the Hopf function $\Phi_u\in L^1(D_1 \backslash \{0\},\mathbb C)$ and therefore $\Phi_u$ can have at worst a simple pole at the origin. Without loss of generality, assume that $\Phi_u$ is nowhere zero on $D_1\backslash \{0\}$.
We now follow the ideas of Schoen \cite[Theorem 10.4]{Schoen-Analytic} to define a conformal harmonic map. Schoen's argument involves taking the square root of $-\Phi_u$, and since in his case the domain is a disk and the image does not contain the origin the square root function is well-defined. We build an admissible cell complex $W$ (see \cite[Section 2.1]{Banff1}, \cite[Section 2.2]{daskal-meseCAG}) such that $W \setminus W^{(0)}$ will be the double cover of $\mathbb C \setminus \{0\}$. We then lift the map $\Phi_u$ to be defined from this double cover, allowing us to take its square root.
Let $H_j:=\{z \in \mathbb C: \im(z) \geq 0\}$, $j=1, \dots, 4$ denote four $2$-cells and let $z_j = x_j+ i y_j$ denote the coordinates in the $2$-cell $H_j$. Define the $2$-complex $W:= \bigsqcup_{j=1}^4H_j / \sim$ where the similarity relations determine the gluing of $1$-cell boundaries and are non-empty relations only in the following cases:
\[
\left\{ \begin{array}{ll}
z_1 \sim z_2 &\text{ iff } \re(z_1)=\re(z_2) \leq 0, \im(z_1)=\im(z_2)=0,\\
z_2\sim z_3 &\text{ iff } \re(z_2)=\re(z_3) \geq 0, \im(z_2)=\im(z_3)=0,\\
z_3 \sim z_4 &\text{ iff } \re(z_3)=\re(z_4) \leq 0,\im(z_3)=\im(z_4)=0,\\
z_4 \sim z_1 &\text{ iff } \re(z_4)=\re(z_1) \geq 0,\im(z_4)=\im(z_1)=0.
\end{array}\right.
\]It is straightforward to see that $W\backslash W^{(0)}$ is a double cover of $\mathbb C\backslash \{0\}$. We will associate each $p \in W$ with a projection onto $\mathbb C$ using isometries of half-spaces.
Let $\psi_j:H_j \to \{z \in \mathbb C: \im(z) \geq 0\}$, $ \psi_j^-:H_j \to \{z \in \mathbb C: \im(\overline z) \geq 0\}$ denote the natural Euclidean isometries. For $p \in W$, we define $\re, \im:W \to \mathbb R$ such that
\[
\re(p):=
\re(\psi_j(z_j)) \text{ if } p=z_j, j=1,\dots, 4,
\]
\[
\im(p):=\left\{ \begin{array}{ll}
\im(\psi_j(z_j)) &\text{ if } p=z_j, j=1,3,\\
\im(\psi_j^-(z_j)) & \text{ if } p = z_j, j=2,4.
\end{array}\right.
\]Let $\Pi: W \to \mathbb C$ such that
$\Pi(p):= \re(p) + i \im(p)$.
We define $\underline u: W \backslash W^{(0)} \to X$ and $\underline \Phi_u: W \backslash W^{(0)} \to \mathbb C \backslash \{0\}$ such that
\[
\underline u(p) := u \circ \Pi(p); \quad \quad \underline \Phi_u(p):= \Phi_u \circ \Pi (p).
\]Note that $(\Phi_u)_*(\pi_1(D_1 \backslash \{0\})) = n \mathbb Z$ for some $n \in \mathbb N$. It follows that $(\underline \Phi_u)_*(\pi_1(W \backslash W^{(0)}))=2n\mathbb Z \subset 2\mathbb Z$. Therefore, there exists a map $\Psi_u:W \backslash W^{(0)}\to \mathbb C \backslash \{0\}$ such that $\Psi_u^2(p) = -\underline \Phi_u(p)$.
Define $v:W \backslash W^{(0)} \to \mathbb R$ such that
\[
v(p):= \re \int_{p_0}^p\Psi_u(\zeta) d\zeta
\]where $p_0 \in W \backslash W^{(0)}$. By construction, $v$ is a well-defined, real-valued harmonic function which is minimizing on every compact subset of $W \backslash W^{(0)}$.
Since $v=\re \int_{p_0}^p \Psi_u(\zeta)\, d\zeta$ and $\Psi_u$ is holomorphic, we compute
\begin{equation*}
\frac{\partial v}{ \partial z}(p) = \frac{1}{2} \Psi_u(p).
\end{equation*} It follows that $E[v] \leq C\int_{D_1\backslash \{0\}} |\Phi_u| d\mu_g< \infty$ and thus $v$ has finite energy. Let $\tilde{u} :W \backslash W^{(0)}\to X \times \mathbb R$ where
\[
\tilde{u}(p) := \left(\underline u(p) , v(p) \right).
\] By definition, $\tilde u$ is a finite energy harmonic map and
the Hopf differential of $\tilde{u}$ satisfies
\begin{equation*}
{\Phi_{\tilde u}} (p) = \underline\Phi_u(p) + 4 \left(\frac{\partial v}{\partial z} \right)^2 (p) \equiv 0.
\end{equation*}
Therefore, $\tilde{u} :W \backslash W^{(0)} \to X \times \mathbb R$ is a \emph{conformal} harmonic map. We apply \cite[Theorem 3.6]{Banff2} to prove the removable singularity result for $\tilde u$. Observe that the hypothesis of the cited theorem states that the target space is \emph{compact} locally CAT(1) and that the domain is a Riemann surface. Nevertheless, the theorem can still be applied.
While the metric space $(X \times \mathbb R, d \times \delta)$ is obviously not compact, it remains a locally CAT(1) space. Moreover, for each $P \in X \times \mathbb R$, the closed geodesic ball $\overline{\mathcal B_{\tau(X)}(P)}\subset X \times \mathbb R$ is a compact locally CAT(1) space. It follows that for any $\rho \in (0, \tau(X))$ and any $P \in X \times \mathbb R$, $\tilde u$ is energy minimizing on the domain $\tilde u^{-1}(\mathcal B_{\rho}(P))$. The removable singularity theorem for conformal harmonic maps does not in fact require compactness of the target space, but it does require a uniform radius in the target such that harmonic maps are energy minimizing on balls of that radius. (This uniform radius is needed in order to appeal to the monotonicity formula.) Our target admits such a uniform radius, namely $\tau(X)$.
Moreover, while our domain here is a cell complex, away from $W^{(0)}$ the complex is a Riemannian manifold with a smooth Riemannian metric.
Therefore, wherever we apply the arguments of \cite[Theorems 3.4 and 3.6]{Banff2}, the fact that the domain is a cell complex is irrelevant. It follows that $\tilde u$ extends as a locally Lipschitz harmonic map $\tilde u:W \to X \times \mathbb R$, and thus so does $u$.
\end{proof}
\section{Isoperimetric Inequality for Minimal Surfaces with Small Area} \label{IsoperimetricSection}
We prove an isoperimetric inequality for minimal surfaces with small area in a CAT(1) metric space. By a minimal surface we mean a conformal harmonic map $u : \left( \Sigma , g \right) \to X$ which is minimizing in the sense of Definition \ref{def:min}. For such a map $u$, we define the area of its image by integrating the conformal factor $\lambda= \frac 12|\nabla u|^2$:
\begin{equation*}
\mathrm{Area} \left( \mathrm{image}(u) \right) = \int_{\Sigma}\; \lambda \; d\mu_g.
\end{equation*}
To prove the isoperimetric inequality we follow the classical arguments of Hoffman-Spruck \cite{HoffmanSpruck} who prove the result by first proving a Sobolev inequality for $C^1$ functions.
We begin by improving the weak differential inequality satisfied by $d^2(u(x),Q)$ for some fixed $Q\in X$.
\begin{lemma}\label{lem:2.5-modified}
Given a geodesic triangle $\triangle PQS\subset X$ and $0 \leq \eta, \eta' \leq 1$, let $P_{\eta'}:= (1-\eta')P + \eta'Q$ and $S_\eta:=(1-\eta)S + \eta Q$. Then\begin{align*}
d^2 \left( P_{\eta'} , S_\eta \right) &\le \left( 1 - 2 \eta d_{QS} \cot d_{QS} \right) d^2_{PS} - 2 \left( \eta - \eta' \right) \left( d_{QS} - d_{QP} \right) d_{QS} + (\eta' - \eta)^2 d_{QS}^2 \\
& \quad+\mathrm{Quad}(\eta, \eta') \mathrm{Quad}(d_{PS}, d_{QS}-d_{QP})+ \mathrm{Cub}\left( d_{PS}, d_{QS}-d_{QP}, \eta-\eta' \right).
\end{align*}
\end{lemma}
\begin{proof}
The proof follows from \cite[Lemmas 2.4 and 2.5]{Banff1} by keeping and expanding the equality
\[
\frac{\sin^2((1-\eta) d_{QS})}{\sin^2 d_{QS}} = \left(1 - \eta \frac {d_{QS}}{\sin d_{QS}} \cos d_{QS} +O(\eta^2)\right)^2
\]rather than getting an upper estimate.
\end{proof}
We now prove a modification of \cite[Lemma 4.3]{Banff1}, which implies almost subharmonicity for $d(Q,u(x))$.
\begin{lemma}\label{lem:div-thm}
Let $0 < t < \tau(X)$ and $u : \left( D_r , g \right) \to \mathcal{B}_t (P) \subset X$ be an energy minimizing map. For a fixed $Q \in \mathcal{B}_t (P)$, any $\eta \in C^{\infty}_c(D_\sigma)$ with $0\le \eta \le 1$, and all $0 < \sigma \le r$,
\begin{equation*}
\int_{D_\sigma} \; 2 \eta \; \hat{d} \; \cot \hat{d} \; \left| \nabla u \right|^2 d\mu_g \le -\int_{D_\sigma} \; \left< \nabla \eta , \nabla \hat{d}^2 \right> d\mu_g ;
\end{equation*}
where $\hat{d} (x) := d \left( Q , u(x) \right)$.
\end{lemma}
\begin{proof}
Define
$u_{\eta}:(D_\sigma,g) \rightarrow X$
by setting
\[
u_{\eta}(x)=(1-\eta(x)) u(x)+\eta(x) Q
\]
for $\eta \in C^{\infty}_c (D_\sigma)$.
Letting $S = u(x),P = u(y), \eta' =\eta(y)$, we use the estimate of Lemma \ref{lem:2.5-modified} to observe that for $\hat d(x):= d(Q, u(x))$,
\begin{align*}
d^2(u_\eta(y), u_\eta(x)) &\leq (1- 2\eta(x)\hat d(x)\cot(\hat d(x)))d^2(u(x),u(y)) \\& \quad
-2(\eta(x)-\eta(y))(\hat d(x)-\hat d(y))\hat d(x)\\
& \quad +(\eta(y)-\eta(x))^2\hat d^2(x) + \eta^2(x)\mathrm{Quad}(d(u(x),u(y)), \hat d(x)-\hat d(y)) \\
&\quad + \mathrm{Cub}\left( d(u(x),u(y)), \hat d(x)-\hat d(y), \eta(x)-\eta(y) \right).
\end{align*} The rest of the proof is identical to the rest of the proof of \cite[Lemma 4.3]{Banff1}.
\end{proof}
\begin{lemma}
Let $u: \Sigma\to X$ be a conformal harmonic map. Suppose $\xi \in C^1 \left( -\infty , \infty \right)$ is a non-decreasing function such that $\xi(t) = 0$ for $t\le 0$, $h \in C_0^1(\Sigma)$ is a non-negative function, and $\xi h \in [0,1]$. For $x_0 \in \Sigma$ and $0<\rho<\tau(X)$, define
\begin{equation*}
\phi_{x_0} (\rho) := \int_\Sigma \; h(x) \; \xi (\rho - r(x)) \; \lambda(x) d\mu_g;
\end{equation*}
and
\begin{equation*}
\psi_{x_0}(\rho) := \int_\Sigma \; \left| \nabla h \right|(x) \; \xi (\rho - r(x)) \; \lambda^{\frac{1}{2}}(x) d\mu_g
\end{equation*}
where $r(x) := d\left( u(x) , u(x_0) \right)$. Then the following differential inequality holds weakly:
\begin{equation}\label{eq:iso-1}
- \frac{d}{d \rho} \left( \frac{\phi_{x_0}(\rho)}{\sin^2 \rho} \right) \le \frac{\psi_{x_0}(\rho)}{\sin^2\rho}.
\end{equation}
\end{lemma}
\begin{proof}
First note that, since $-\frac{d}{d\rho}\left(\frac{\phi_{x_0}(\rho)}{\sin^2\rho}\right)=\frac{2\cot\rho\,\phi_{x_0}(\rho)-\phi'_{x_0}(\rho)}{\sin^2\rho}$, inequality \eqref{eq:iso-1} is equivalent to
\begin{equation}\label{form2}
2 \cot \rho\, \phi_{x_0}(\rho) \le \psi_{x_0}(\rho) + \phi'_{x_0}(\rho).
\end{equation} By Lemma~\ref{lem:div-thm}, for any test function $\Psi$ with $0\le \Psi \le 1$ and any $x_0 \in \Sigma$, we have that
\begin{equation*}
\int_{\Omega} \; 2 \Psi r \cot r \left| \nabla u \right|^2 d\mu_g \le - \int_{\Omega} \; \left< \nabla \Psi , \nabla r^2 \right> d\mu_g
\end{equation*}
where $\Omega:= u^{-1} \left( \mathcal B_\rho (u(x_0)) \right) $.
Let $\Psi (x) = h(x) \xi \left( \rho - r(x) \right)$ so that
\begin{equation*}
\nabla \Psi (x) = -h(x) \xi'\left( \rho - r(x) \right) \nabla r(x) + \xi \left( \rho - r(x) \right) \nabla h(x).
\end{equation*}
By conformality and given the support of $\xi, \xi'$, it follows that
\begin{eqnarray}
2 \rho \cot \rho \; \int_{\Sigma} \; \Psi \; \lambda d\mu_g &\le& \int_{\Sigma} \; \Psi r \cot r \; |\nabla u|^2 d\mu_g \notag \\ &\le& \int_{\Sigma} \; r(x) h(x) \xi'\left( \rho - r(x) \right) \left|\nabla r(x) \right|^2 \; d\mu_g \notag \\ && - \int_{\Sigma} \; r(x) \xi \left( \rho - r(x) \right)\langle \nabla h(x) , \nabla r(x) \rangle\; d\mu_g \notag \\ &\le& \notag \int_{\Sigma} \; r(x) h(x) \xi'\left( \rho - r(x) \right) \; \lambda d\mu_g \notag \\ && + \int_{\Sigma} \; r(x) \xi \left( \rho - r(x) \right) \left| \nabla h(x) \right| \lambda^{\frac{1}{2}} \; d\mu_g \notag \\ &\le& \rho \int_{\Sigma} \; h(x) \xi'\left( \rho - r(x) \right) \; \lambda d\mu_g \notag \\ && + \rho \int_{\Sigma} \; \xi \left( \rho - r(x) \right) \left| \nabla h(x) \right| \lambda^{\frac{1}{2}} \; d\mu_g. \notag
\end{eqnarray}
Note that the string of inequalities implies that \eqref{form2} holds weakly.
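Indeed, after dividing by $\rho$, the first integral on the right is the weak derivative of $\phi_{x_0}$, since
\begin{equation*}
\phi'_{x_0}(\rho)=\int_\Sigma h(x)\,\xi'(\rho-r(x))\,\lambda(x)\, d\mu_g
\end{equation*}
holds weakly in $\rho$, while the second integral is exactly $\psi_{x_0}(\rho)$; this is precisely \eqref{form2}.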
\end{proof}
\begin{lemma}
Let $u: \left(\Sigma , g \right) \to X$ be a minimal surface. Let $x_0 \in \Sigma$ with $h(x_0) \ge 1$. Let $\alpha$ and $t$ satisfy $0< \alpha < 1 \le t$. Set
\begin{equation*}
\rho_0 := \sin^{-1} \left( \frac{\int_\Sigma h(x) \; \lambda (x) \; d \mu_g}{\pi (1 - \alpha)} \right)^{\frac 12} ,
\end{equation*}
\begin{equation*}
\overline{\phi}_{x_0} (\rho) := \int_{S_\rho (x_0)} h(x) \lambda(x) d\mu_g ,
\end{equation*}
and
\begin{equation*}
\overline{\psi}_{x_0}(\rho) := \int_{S_\rho (x_0)} \left| \nabla h(x) \right| \lambda^{\frac{1}{2}}(x) d\mu_g
\end{equation*} where
\[
S_\rho(x_0):=\{ x \in \Sigma: d(u(x), u(x_0))< \rho\}.
\]
Then there exists $\rho$ with $0 < \rho < \rho_0$ such that
\begin{equation*}
\overline{\phi}_{x_0} (t \rho) \le \alpha^{-1} \rho_0 \overline{\psi}_{x_0}(\rho);
\end{equation*}
provided that
\begin{equation*}
\frac{\int_\Sigma h(x) \; \lambda (x) \; d \mu_g}{\pi (1 - \alpha)} \le 1
\end{equation*}
and
\begin{equation*}
t \rho_0 \le \tau(X).
\end{equation*}
\end{lemma}
\begin{proof}
The proof follows exactly the outline of \cite[Lemma 4.2]{HoffmanSpruck}, taking advantage of the differential inequality in \eqref{eq:iso-1} to establish a contradiction.
\end{proof}
An argument similar to the covering argument used in \cite[Theorem 2.1]{HoffmanSpruck} (see also \cite{MichaelSimon}) immediately implies the following lemma.
\begin{lemma}\label{lem:sob-type}
Let $u: \left( \Sigma,g \right) \to X$ be a conformal harmonic map with $\mathrm{image}(u) \subset \mathcal B_{\tau(X)} (P)$. If $ \mathrm{Area}\left[ u \left( \Sigma \right) \right] \le \frac{\pi}{3} $,
then for any $h \in C^1(\Sigma)$,
\begin{equation*}
\left( \int_\Sigma \; h^2(x) \; \lambda(x) \; d \mu_g \right)^{\frac{1}{2}} \le \left( \frac{27 \pi}{4} \right)^{\frac{1}{2}} \int_\Sigma \; \left| \nabla h \right|(x) \; \lambda^{\frac{1}{2}}\; d \mu_g.
\end{equation*}
\end{lemma}
Using the Sobolev type inequality of Lemma \ref{lem:sob-type} and an argument adapted from \cite{mese-iso}, we prove the isoperimetric inequality.
\begin{theorem}\label{thm:isoperimetric}
Let $u: \left( \Sigma ,g \right) \to X$ be a conformal harmonic map with $\mathrm{image}(u) \subset \mathcal B_{\tau(X)} (P)$. If $ \mathrm{Area}\left[ u \left( \Sigma \right) \right] \le \frac{\pi}{3} $,
then
\begin{equation*}
\frac{1}{2} E[u] = \mathrm{Area} \left[ u\left( \Sigma \right) \right] \le \left( \frac{27 \pi}{4} \right) \mathrm{length}^2 \left[ u\left( \partial \Sigma \right) \right].
\end{equation*}
\end{theorem}
\begin{proof}
Since $u$ is uniformly continuous, for any $\epsilon>0$ we can pick a family of Lipschitz closed curves $\Gamma_\epsilon$ that approximate $\partial \Sigma$, i.e.\ with
\begin{equation*}
\left| \mathrm{length} \left[ u(\Gamma_\epsilon) \right] - \mathrm{length} \left[ u\left( \partial \Sigma \right) \right] \right| < \left(\frac 4{27\pi}\right)^{\frac 12} \epsilon,
\end{equation*}
and such that
\begin{equation*}
\mathrm{Area}^{\frac 12}\left[ u\left( \Sigma \right) \right] < \epsilon + \mathrm{Area}^{\frac 12}\left[ u\left( \Sigma_\epsilon \right) \right]
\end{equation*}
where $\Sigma_\epsilon$ is the connected component of $\Sigma \backslash \Gamma_\epsilon$ which is disjoint from $\partial \Sigma$. By \cite[(1.9xvi)]{korevaar-schoen1}, for any Lipschitz closed curve $\Gamma \subset \Sigma$,
\[
\mathrm{length}[u(\Gamma)] = \int_\Gamma \lambda^{\frac 12} d\sigma_\Gamma.
\]
Following the proof of \cite[Theorem 6.1]{meseCAG}, let $\lambda^\sigma:=e^{(\log \lambda)_\sigma}$, where $(\log \lambda)_\sigma$ is a symmetric mollification of $\log \lambda$. Then, $\lambda^\sigma \ge \lambda$ and $\sqrt{\lambda^\sigma} \to \sqrt{\lambda}$ in $W^{1,2}_{loc}(\Sigma)$. Then for any $h \in C^\infty_0(\Sigma)$ with $\|h\|_{L^\infty}<1$ and $\sigma>0$ sufficiently small,
\[
\int_\Sigma |\nabla h| \lambda^{\frac 12} d\mu_g \leq \int_\Sigma |\nabla h| (\lambda^\sigma)^{\frac 12}d\mu_g,
\]
\[
\int_\Sigma h^2 \lambda^\sigma d\mu_g \leq \int_\Sigma h^2 \lambda d\mu_g + \int_\Sigma(\lambda^\sigma - \lambda) d\mu_g.
\]By Lemma \ref{lem:sob-type},
\[
\left(\int_\Sigma h^2 \lambda^\sigma d\mu_g \right)^{\frac 12} \leq \left(\frac{27\pi}4\right)^{\frac 12} \int_\Sigma |\nabla h|(\lambda^\sigma)^{\frac 12}d\mu_g + O(\sigma).
\]
Using smooth approximations of the cutoff function on $\Sigma_\epsilon$, we observe that
\[
\left(\int_{\Sigma_\epsilon} \lambda^\sigma d\mu_g \right)^{\frac 12} \leq \left(\frac{27\pi}4\right)^{\frac 12} \int_{\Gamma_\epsilon}(\lambda^\sigma)^{\frac 12}d\sigma_{\Gamma_\epsilon} + O(\sigma),
\]and letting $\sigma \to 0$ we see that
\[
\left(\int_{\Sigma_\epsilon} \lambda d\mu_g \right)^{\frac 12} \leq \left(\frac{27\pi}4\right)^{\frac 12} \int_{\Gamma_\epsilon}\lambda^{\frac 12}d\sigma_{\Gamma_\epsilon}.
\]By the choice of $\Gamma_\epsilon$,
\[
\mathrm{Area}^{\frac 12}\left[ u\left( \Sigma \right) \right] \leq \left( \frac{27 \pi}{4} \right)^{\frac{1}{2}} \int_{\partial \Sigma} \lambda^{\frac{1}{2}} d\sigma_{\partial \Sigma} + 2 \epsilon,
\]which implies the result.
\end{proof}
\section{Proof of the Main Theorem}\label{BTC}
This section consists of three subsections. In Section \ref{ConvSec}, we prove convergence results that produce the limit map and are applied iteratively to produce the bubble maps. Section \ref{MapsSec} contains a description of the bubble tree and the bubble maps. Finally, in Section \ref{NecSec} we prove the no-neck property and energy quantization result, which finishes the proof of Theorem \ref{MAIN}.
\subsection{Convergence Results}\label{ConvSec}
\begin{lemma}\label{BT1}
Let $u_k:(M,g) \to (X,d)$ be a sequence of harmonic maps such that $E\left[ u_k,M \right]<\Lambda<\infty$ and let $\epsilon_{\mathrm{gap}}$ be as in Proposition \ref{GapThm}. Then there exists a subsequence $\{u_k\}$ and a set of points $\{x_1, \dots, x_\ell\}$ with corresponding masses $\{m_1, \dots, m_\ell\}$ where $\ell \leq \Lambda/\epsilon_{\mathrm{gap}}$, and a harmonic map $u:M \to X$ such that
\begin{enumerate}
\item \label{bct1}$u_k \to u$ in $C^0$ uniformly on compact sets in $M \backslash \{x_1, \dots, x_\ell\}$.
\item \label{bct2}For any open subset $\Omega$ with $\overline{\Omega} \subset M \backslash \{x_1, \dots, x_\ell\}$,
\[
\lim_{k \to \infty}E\left[ u_k, \Omega \right]=E\left[ u,\Omega \right].
\]
\item \label{bct3} For all $i \in \{1, \dots, \ell\}$,
\[
\lim_{r \to 0} \; \lim_{k \to \infty}E\left[ u_k, D_r(x_i) \right]:=m_i \geq \epsilon_{\mathrm{gap}}.
\]
\item \label{bct4} The energies satisfy the relation
\[
\lim_{k \to \infty} E[u_k] = E[u] + \sum_{i=1}^\ell m_i.
\]
\end{enumerate}
\end{lemma}
\begin{proof}
Items \eqref{bct1} and \eqref{bct3} follow by standard arguments using Proposition \ref{BTtools}.
For \eqref{bct2}, choose $r'>0$ such that for all $x \in M$, $u(D_{r'}(x)) \subset \mathcal B_{\tau(X)/2}(u(x))$. Let $d_\Omega:= d_g(\partial \Omega, \{x_1,\dots, x_\ell\})$. There exists $K_{\Omega} \in \mathbb N$ such that for all $k \geq K_{\Omega}$, $x \in \Omega$, and $0<r<d_\Omega/2$, $u_k(D_r(x)) \subset \mathcal B_{3\tau(X)/4}(u(x))$. Suppose to the contrary that the energy drops in the limit. Then there exist $y \in \Omega$ and $0<t<d_\Omega/2$ such that $\liminf_{k\to \infty}E\left[ u_k, D_t(y) \right]> E\left[ u,D_t(y) \right]$. But this contradicts the proof of compactness of minimizers in \cite[Lemma 2.3]{Banff1}, and thus the claim holds.
Finally, \eqref{bct4} follows immediately from \eqref{bct2} and standard arguments.
\end{proof}
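To indicate how \eqref{bct4} follows from \eqref{bct2} and \eqref{bct3}, one splits $M$ into the region away from the bubble points and small disks around them; the following is only a sketch of the standard argument, recorded for the reader's convenience:

```latex
% For fixed small r > 0, additivity of the energy and item (2) give
\lim_{k \to \infty} E[u_k]
  = \lim_{k \to \infty} E\Big[ u_k,\; M \setminus \bigcup_{i=1}^{\ell} D_r(x_i) \Big]
    + \sum_{i=1}^{\ell} \lim_{k \to \infty} E\big[ u_k, D_r(x_i) \big]
  = E\Big[ u,\; M \setminus \bigcup_{i=1}^{\ell} D_r(x_i) \Big]
    + \sum_{i=1}^{\ell} \lim_{k \to \infty} E\big[ u_k, D_r(x_i) \big].
```

Letting $r \to 0$, the first term converges to $E[u]$ while each summand converges to $m_i$ by \eqref{bct3}.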
Fix a constant
\begin{equation}\label{eq:CR}
0<C_R \leq \min\left\{\frac \pi 3, \frac{\epsilon_{\mathrm{gap}}}2, C_{\mathrm{mon}}\frac{ \tau^2(X)}{16}\right\}.
\end{equation}
Here $C_{\mathrm{mon}}$ is the monotonicity constant given in \cite[Theorem 3.4]{Banff2}.
To understand the importance of the constants chosen in the following lemma, we provide a brief outline of their significance going forward. Let $r= \min\{d_g(x_i, x_j): i \neq j\}$, for $x_i$ from Lemma \ref{BT1}. In each ball $B_r(x_i)$, there are three regions of interest. In fact, these regions lie in the pull-back of this ball to $T_{x_i}M$ by the exponential map. We refer to this domain as $B_r(0) \subset T_{x_i}M$.
In the next lemma, we choose these regions by choosing $\epsilon_{k,i}, \lambda_{k,i}, c_{k,i}$ where
\[
B_{k \lambda_{k,i}}(c_{k,i}) \subset B_{\epsilon_{k,i}}(c_{k,i}) \subset B_r(0).
\]By construction $\epsilon_{k,i}/(k\lambda_{k,i}) \to \infty$, so the annular scale is not uniform. The choice of these scales is central to the proof of our main theorem. On the outer region $B_r(0) \backslash B_{\epsilon_{k,i}}(c_{k,i})$, $u_k \to u$ uniformly in $C^0$, where $u$ is a harmonic map. On the inner region $B_{k \lambda_{k,i}}(c_{k,i})$, after an appropriate conformal transformation of the domain, the new maps $\overline u_{k,i}$ will converge to a ``bubble map'': a harmonic map from $\mathbb S^2$. The intermediate region $B_{\epsilon_{k,i}}(c_{k,i})\backslash B_{k \lambda_{k,i}}(c_{k,i})$ is called the ``neck region''. The behavior of the sequence $u_k$ on this region is not captured by $u$ or by any of the bubble maps. Thus, the main objective is to determine whether any of the limiting information about $u_k$ escapes in the neck regions. Our main theorem demonstrates that no energy or image is lost in these necks.
\begin{lemma}\label{BT2}Consider a single bubble point $x_i$ with mass $m_i$. For simplicity we denote them in what follows as $x, m$.
There exist a further subsequence and constants $\epsilon_k \searrow 0$ and $C>0$ such that, identifying $u_k$ with $\exp_x^*u_k$ and $D_k:= D_{2\epsilon_k}(0) \subset T_xM$,
\begin{enumerate}
\item \label{BT21}$E\left[ u_k, D_k \backslash D_{\epsilon_k/8k^2}(0) \right] \to 0$.
\item $u_k(\partial D_k) \subset \mathcal B_{C/k}(u(x))$.
\item for $c_k:=(c_k^1,c_k^2)$ where
\[
c_k^j= \frac {\int_{D_k} x^j |\nabla u_k|^2dx}{\int_{D_k}|\nabla u_k|^2 dx}, \quad \quad |c_k| \leq \epsilon_k/2k^2.
\]
\item \label{BT24} for
\[
\lambda_k := \min\{\lambda: \int_{D_{\epsilon_k}(0)\backslash D_{\lambda}(c_k)} |\nabla u_k|^2 dx \leq C_R\},\quad \quad \lambda_k \leq \epsilon_k/k^2.
\]
\end{enumerate}
\end{lemma}
Note that the proof of this lemma will not require $C^1$ convergence of $u_k$ to $u$, but instead uses the weaker convergence given by items \eqref{bct2}, \eqref{bct3} in Lemma \ref{BT1}.
\begin{proof}Let $\rho_0:=\frac 12 \min\{\mathrm{dist}(x_j, x): j \in \{1, \dots, \ell\}, x_j \neq x\}$. Choose $\epsilon_k \leq \min\{\frac 1k, \rho_0, \mathrm{inj}(M)\}$ to be the largest number such that
\[
\int_{D_k}|\nabla u|^2 \leq \frac m{16k^2}.
\]
We determine the subsequence inductively. For each $k \geq 1$, let $A_k:= D_k \backslash D_{\epsilon_k/8k^2}(0)$. With $\Omega:= \overline A_k$ fixed, items \eqref{bct2}, \eqref{bct3} of Lemma \ref{BT1} imply that there exists $N_k$ such that for all $n \geq N_k$,
\begin{equation}\label{eq:annular}
\int_{A_k}|\nabla u_n|^2 \leq 2\int_{A_k}|\nabla u|^2< \frac m{8k^2}
\end{equation}and
\begin{equation}\label{eq:disk}
\frac{8k^2-1}{8k^2}m\leq \int_{D_{\epsilon_k/8k^2}(0)}|\nabla u_n|^2 \leq \frac{8k^2+1}{8k^2}m.
\end{equation}Moreover, we may increase $N_k$ if necessary so that for all $n \geq N_k$,
\begin{equation}\label{eq:boundarydist}
\sup_{y \in \partial D_k} d(u_n(y), u(y)) \leq \frac 1k.
\end{equation}Set $n_k = \max\{N_k , 1+n_{k-1}\}$. Then the first item follows from \eqref{eq:annular}. The existence of $C$ such that the second item holds follows from the Lipschitz regularity of $u$ combined with \eqref{eq:boundarydist}. The estimates on $c_k, \lambda_k$ follow from \eqref{eq:annular}, \eqref{eq:disk} and their definitions (cf. \cite[Section 6]{Parker}).
\end{proof}
We will need a conformal transformation of $D_k$ onto $S_k \subset \mathbb S^2$ such that $c_k \mapsto (0,0,1)$ and $\partial D_{\lambda_k}(c_k)$ maps to the equator. Let $\pi_{S^2}: \mathbb S^2 \left(\subset \mathbb R^3\right) \to \mathbb R^2 \cong T_x M$ be a fixed stereographic projection that maps the equator to the unit circle and the north pole to the origin. Let $\Psi_k: \mathbb R^2 \to \mathbb R^2$ be given by
\[
\Psi_k (x) := \lambda_k x + c_k.
\]
Then, the map $\Theta_k := \left( \Psi_k \circ \pi_{S^2} \right)^{-1} = \pi_{S^2}^{-1} \circ \Psi_k^{-1}$ is a conformal transformation under which $c_k \mapsto (0,0,1)$ and $\partial D_{ \lambda_k}(c_k)$ maps to the equator. Define $S_k := \Theta_k \left( D_{2\epsilon_k} (0) \right)$. Now let
\begin{equation}\label{eq:bar-u}
\overline u_k:S_k \to X
\end{equation}
be defined as $\overline u_k \circ \Theta_k (x) = u_k (x)$ for all $x \in D_k$. Applying Proposition \ref{BTtools} to the maps $\overline u_k$ we obtain a result analogous to Lemma \ref{BT1} for maps from domains exhausting $\mathbb S^2$. For ease of notation, let $D_r^{\mathbb S^2}(y)$ denote a geodesic disk in $\mathbb S^2$ of radius $r$ and centered at $y \in \mathbb S^2$.
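Since $\Theta_k$ is conformal and the two-dimensional Dirichlet energy is conformally invariant, the maps $\overline u_k$ and $u_k$ have matching energies on corresponding sets. This standard identity underlies the energy computations below:

```latex
% Conformal invariance of energy in two dimensions:
E\left[ \overline u_k, \Theta_k(\Omega) \right] = E\left[ u_k, \Omega \right]
\qquad \text{for every open } \Omega \subset D_k.
```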
\begin{lemma}\label{BT3}Let $S^-_k$ represent the portion of $S_k$ in the southern hemisphere, $p^-$ denote the south pole, and $\epsilon_{\mathrm{gap}}$ be as in Proposition \ref{GapThm}.
There exist a further subsequence $\{\overline u_k\}$ of harmonic maps with $\overline u_k:S_k \to X$, a harmonic map $\overline u:\mathbb S^2 \to X$, and a collection of points $\{y_1, \dots, y_l\}\subset \mathbb S^2$ with corresponding masses $\{m_1, \dots, m_l\}$ such that
\begin{enumerate}
\item \label{BT31} $\overline u_k \to \overline u$ in $C^0$ uniformly on compact subsets of $\mathbb S^2 \backslash \{y_1, \dots, y_l, p^-\}$.
\item \label{BT33} $\lim_{k \to \infty}E\left[ \overline u_k,S_k \right] = m$.
\item \label{BT32} $\lim_{k \to \infty}E\left[ \overline u_k, S_k^- \right]=C_R$.
\item \label{BT34} for any open set $\Omega$ with $\overline{\Omega} \subset \mathbb S^2 \backslash \{y_1, \dots, y_l, p^-\}$,
\[\lim_{k \to \infty}E\left[ \overline u_k, \Omega \right] = E\left[ \overline u, \Omega \right].
\]
\item \label{BT35} for each $j \in \{1, \dots, l\}$,
\[
\lim_{r \to 0} \; \lim_{k \to \infty} E\left[ \overline u_k, D_r^{\mathbb S^2}(y_j) \right] =:m_j \geq \epsilon_{\mathrm{gap}}.
\]
\item \label{BT35b} there exists $\tau(p^-) \geq 0$ such that
\[
\lim_{k \to \infty} E[\overline u_k,S_k] = E[\overline u,\mathbb S^2] + \tau(p^-)+ \sum_{i=1}^l m_i
\]
\item \label{BT35b2} the measure $|\nabla \overline u|^2\, d\mu_g + \tau(p^-)\delta_{p^-} + \sum_{i=1}^l m_i\delta_{y_i}$ has center of mass on the $z$-axis.
\item \label{BT35c} if $E[\overline u,\mathbb S^2]<\epsilon_{\mathrm{gap}}$ then $E[\overline u,\mathbb S^2]=0$. In this case, $l>0$ and if $l=1$ then $\tau(p^-)=C_R$.
\item \label{BT36} $\overline u_k(\partial \Theta_k(D_{k\lambda_k}(c_k))) \subset \mathcal B_{C/k}(\overline u(p^-))$.
\item \label{BT37}
$ E \left[ \overline{u}_k , \Theta_k \left( D_{2k\lambda_k}(c_k) \backslash D_{k\lambda_k}(c_k) \right) \right] \to 0$.
\end{enumerate}
\end{lemma}
\begin{remark}
Following the usual convention, if $E[\overline u,\mathbb S^2]=0$ then we say that $\overline u$ is a \emph{ghost bubble}.
\end{remark}
\begin{proof}Item \eqref{BT31} follows from arguments as in Lemma \ref{BT1} and item \eqref{BT33} follows from the choice of $D_k$ earlier. Observe also that
\begin{align*}
E\left[ \overline u_k, S_k^- \right] = E\left[ u_k, D_k \backslash D_{\lambda_k}(c_k) \right]
=E\left[ u_k, D_k \backslash D_{\epsilon_k}(0) \right] + E\left[ u_k, D_{\epsilon_k}(0)\backslash D_{\lambda_k}(c_k) \right].
\end{align*} Item \eqref{BT32} now follows from Lemma \ref{BT1} \eqref{bct2} and Lemma \ref{BT2} \eqref{BT21}, \eqref{BT24}. Items \eqref{BT34} -- \eqref{BT35b} follow by the same logic as in Lemma \ref{BT1}, though we must include $\tau(p^-)$ in \eqref{BT35b} since energy may concentrate at $p^-$.
Item \eqref{BT35b2} holds since for $f \in C^\infty(\mathbb S^2, \mathbb R)$, by approximating by characteristic functions and appealing to the logic that gives \eqref{BT35b}, we conclude that
\[
\lim_{k \to \infty}\int_{\mathbb S^2 \cap S_k} f|\nabla \overline u_k|^2 d\mu_g = \int_{\mathbb S^2} f|\nabla \overline u|^2 d\mu_g + f(p^-) \tau(p^-)+ \sum_{i=1}^l f(y_i)m_i.
\]
The first part of item \eqref{BT35c} is immediate from the gap theorem. In that case, $l>0$ by items \eqref{BT33} and \eqref{BT32} and the fact that the $y_j$'s are in the northern hemisphere. When $l=1$, items \eqref{BT32} and \eqref{BT35b2} and the fact that $y_1$ must be in the northern hemisphere imply the result on $\tau(p^-)$. Item \eqref{BT36} follows as in Lemma \ref{BT2}. For item \eqref{BT37}, first notice that
\[
E \left[ \overline{u} , \Theta_k \left( D_{2k\lambda_k}(c_k) \right) \right] \to 0 \quad \text{as} \quad k \to \infty.
\]
By item \eqref{BT34}, for each fixed $k\geq 1$ we can choose $N_k$ such that for all $n \ge N_k$,
\[
\left| E \left[ \overline{u}_n , \Theta_k \left( D_{2k\lambda_k} (c_k) \backslash D_{k\lambda_k}(c_k) \right) \right] - E \left[ \overline{u} , \Theta_k \left( D_{2k\lambda_k}(c_k) \backslash D_{k\lambda_k} (c_k)\right) \right] \right|< \frac{1}{k}.
\]
Letting $n_k := \max\{n_{k-1}+1 , N_k\}$, we see that
\[
\lim_{k \to \infty}E \left[ \overline{u}_{n_k} , \Theta_k \left( D_{2k\lambda_k} (c_k) \backslash D_{k\lambda_k} (c_k)\right) \right] = 0.
\]
Renaming the sequence implies item \eqref{BT37}.
\end{proof}
\subsection{The bubble tree}\label{MapsSec}
Given a bubble point $x_i \in M$ from Lemma \ref{BT1}, by Lemma \ref{BT3} the maps $\overline u_{k,i}:S_{k,i} \to X$ converge to a map $\overline u_i:\mathbb S^2 \to X$ away from $\{y_{i1}, \dots, y_{il_i}, p^-\}$. The process then iterates at each $y_{ij}$, which allows us to obtain bubbles on bubbles. By Lemma \ref{BT3}, item \eqref{BT35c}, there can be at most $\min\{\Lambda/C_R, \log_2(\Lambda/\epsilon_{\mathrm{gap}})\}$ ghost bubbles. Since every non-ghost bubble carries at least $\epsilon_{\mathrm{gap}}$ of energy, the process terminates.
Prior to constructing the bubble tree, we prove two technical facts. First, given image curves $\gamma (\partial D_r)$ with small length and energy bounded by $C\delta/r$, there exists a coning off of $\gamma$ in $X$ which has energy bounded by $C\delta$. Second, the sequence of maps $u_k$ possesses such curves on the scales $\epsilon_k$ and $k\lambda_k$. These technical facts will be useful in the construction of our bubble tree.
\subsubsection{Coning off a curve}
We first demonstrate the existence of a coning off of a Lipschitz curve with energy depending on the energy of the curve.
\begin{definition}\label{def:coneoff}
Let $\gamma: \partial D_r \to \mathcal B_{\tau(X)}(P) \subset X$ be a Lipschitz map. We define the \emph{cone extension map} $\mathrm{Cone}(\gamma_{\partial D_r}): D_r \to X$ such that
\[
\mathrm{Cone}(\gamma_{\partial D_r})(s,\theta) = \eta_\theta \left( \frac{s}{r} \right),
\] where $\eta_\theta:[0,1] \to X$ is the constant speed geodesic connecting $c_\gamma$ to $\gamma(\theta)$ and $c_\gamma$ is the circumcenter of $\gamma$.
\end{definition}
\begin{lemma}\label{lem:coneoff}
Let $\left( D_{2r} , ds^2 + \mu^2(s,\theta)s^2 d\theta^2 \right)$ be a smooth disk such that $s^{-2}|1-\mu^2| + s^{-1} |\partial \mu^2| + |\partial^2 \mu^2| \leq c< \frac 14$ and $r<1$. Let $X$ be a compact locally CAT(1) space with injectivity radius $\tau(X)$. There exist $C, \delta >0$ depending on $X$ such that the following holds: for any Lipschitz loop $\gamma: \partial D_r \to X$ with $E[\gamma, \partial D_r] < \frac{\delta}{r} $, the cone extension map $\mathrm{Cone}(\gamma_{\partial D_r}): D_r \to X$ exists and satisfies $E[\mathrm{Cone}(\gamma_{\partial D_r}),D_r] \le C\delta$.
\end{lemma}
\begin{proof}By the Cauchy-Schwarz inequality,
\[
\mathrm{Length}^2(\gamma) \leq 2\pi \int_0^{2\pi} \left|\frac{\partial \gamma}{\partial \theta}\right|^2 d\theta \leq 2\pi \mu(r, \theta) \delta< \frac{\tau^2(X)}2
\]for sufficiently small $\delta$. Thus $\gamma \subset \mathcal B_{C\delta^{1/2}}(c_\gamma)$ and $\mathrm{Cone}(\gamma_{\partial D_r})(D_r) \subset \mathcal B_{C\delta^{1/2}}(c_\gamma)$. For convenience, let $u:=\mathrm{Cone}(\gamma_{\partial D_r})$.
By the $L-$convexity of CAT(1) spaces \cite[Definition 2.6 and Proposition 3.1]{ohta}, we deduce that
\[
d\left( \eta_{\theta_1}(t) , \eta_{\theta_2}(t) \right) \le \left( 1 + C \delta \right) t\, d \left( \gamma \left( \theta_1 \right) , \gamma\left( \theta_2 \right) \right).
\]
Now we estimate the directional derivative $\left| u_*\left( {\partial_\theta} \right) \right|\left( s,\theta_0 \right) $. Let $\mu := \mu (s , \theta_0)$ and let $\| \cdot \|$ denote the distance to $0$ with respect to the metric $g = ds^2 + \mu^2s^2d\theta^2$. Then by \cite[Section 1.9]{korevaar-schoen1}, for a.e. $(s, \theta_0) \in D_r$,
\begin{align*}
\left| u_*\left( {\partial}_{ \theta} \right) \right|\left( s,\theta_0 \right) &= \lim_{h \to 0}\frac{d\left( u\left( s,\theta_0 \right) , u\left( \mathrm{exp}_{(s,\theta_0)}\left(h\partial_\theta \right) \right) \right) }{|h|} \notag \\
&\le \lim_{h \to 0} \frac{d\left( u\left( s,\theta_0 \right) , u\left( \|\mathrm{exp}_{(s,\theta_0)}\left({h\partial_\theta} \right) \| , \theta_0 \right) \right)}{|h|} \notag \\
&+ \lim_{h \to 0} \frac{d\left( u \left(\|\mathrm{exp}_{(s,\theta_0)}\left(h\partial_\theta\right) \|,\theta_0 \right) , u \left(\|\mathrm{exp}_{(s,\theta_0)}\left(h{\partial_\theta} \right) \|, \mathrm{arg}_\theta\left( \mathrm{exp}_{(s,\theta_0)}\left(h{\partial_\theta} \right) \right) \right) \right) }{|h|}. \notag
\end{align*}
As the radial geodesics are constant speed, and using the $L-$convexity estimate, we deduce that
\begin{align*}
\left| u_*\left({\partial_\theta} \right) \right| \left( s,\theta_0 \right) & \le \frac{d\left(c_\gamma , \gamma\left( \theta_0 \right) \right) }r \lim_{h \to 0} \frac{\| \mathrm{exp}_{(s,\theta_0)}\left(h{\partial_\theta}\right) \| - s}{|h|} \notag \\ &+ \lim_{h \to 0}\left( 1 + C \delta \right) \frac{ \| \mathrm{exp}_{(s,\theta_0)}\left(h{\partial_\theta}\right) \| }{r}\; \frac{d \left( \gamma\left( \theta_0 \right) , \gamma \left( \mathrm{arg}_\theta\left( \mathrm{exp}_{(s,\theta_0)}\left(h{\partial_\theta}\right) \right) \right) \right) }{ |h|}. \notag
\end{align*}
Since ${\partial_s}$ and ${\partial_\theta}$ are perpendicular, the first variation formula implies that
\[
\lim_{h \to 0} \frac{\| \mathrm{exp}_{(s,\theta_0)}\left(h{\partial_\theta} \right) \| - s}{ |h|} = 0.
\]
Thus,
\[
\| \mathrm{exp}_{(s,\theta_0)}\left(h{\partial_\theta}\right) \| = s + o(|h|).
\]Let $\Delta \theta(h):= \mathrm{arg}_\theta\left( \mathrm{exp}_{(s,\theta_0)}\left(h{\partial_\theta}\right) \right)-\theta_0$. Then,
\begin{align*}
\lim_{h \to 0} \frac{d \left( \gamma\left( \theta_0 \right) , \gamma \left( \theta_0+ \Delta \theta(h) \right) \right) }{h} &= \lim_{h \to 0} \frac{d \left( \gamma\left( \theta_0 \right) , \gamma \left( \theta_0+ \Delta \theta(h) \right) \right) }{\Delta \theta(h)} \cdot \lim_{h \to 0} \frac{\Delta \theta(h)}{h}\\
&= \left| \frac{d \gamma}{d\theta} \right|_{\theta = \theta_0} \cdot \mu.
\end{align*}
Therefore, the directional derivative $\left| u_*\left( \frac{1}{\mu s} \frac{\partial}{\partial \theta} \right) \right|$ satisfies
\begin{equation*}
\frac{1}{\mu s}\left|u_*(\partial_\theta)\right| = \left| u_*\left( \frac{1}{\mu s}{\partial_\theta} \right) \right|\left( s,\theta_0 \right) \le \frac{1 + C \delta}r\left| \frac{d \gamma}{d\theta} \right|_{\theta = \theta_0} .
\end{equation*}
Moreover, one easily calculates, using the constant speed of $\eta$,
\[
\left| u_*(\partial_s) \right| \left( s , \theta_0 \right) = \frac{1}{r} d\left( c_\gamma , \gamma\left(\theta_0 \right) \right) \le \frac{C\delta^{1/2}}{r}.
\]
It follows that
\begin{align*}
\left| \nabla u \right|^2\left( s , \theta_0\right)
&=\left| u_*({\partial_s} )\right|^2(s,\theta_0)+ \frac {1}{\mu^2 s^2} \left| u_*\left( {\partial_\theta} \right) \right|^2(s,\theta_0)\\
&\leq \frac {C^2\delta}{r^2} + \frac{( 1 + C \delta)^2}{r^2}\left| \frac{d \gamma}{d\theta}\right|^2(\theta_0).
\end{align*}
Increasing $C$ as necessary,
\begin{align*}
E[u,D_r] &= \int_{D_r} \left| \nabla u \right|^2\left( s , \theta\right) \mu(s,\theta) s ds d\theta\\
& \leq \frac{C^2\delta}{r^2}\int_{D_r} \mu(s,\theta)\, s\,ds d\theta + \frac{\left( 1 + C\delta \right)^2}{r^2}\int_{D_r} \left| \frac{d \gamma}{d\theta} \right|^2 \mu(s,\theta)\, s\, ds\, d\theta \\
&\le 2\pi C^2 \delta(1+cr^2) + (1+C\delta)^2 (1+cr^2)^2r E[\gamma, \partial D_r]\\
& \leq C \delta.
\end{align*}
\end{proof}
\subsubsection{Curves with small length}
In Section \ref{NecSec}, we prove the no-neck property and energy quantization using the isoperimetric inequality, which applies to conformal harmonic maps. Thus, we want to find scales with small image length for a conformal suspension of $u_k$, which in turn implies small image length for the original maps $u_k$.
We begin by recalling a modification of the previous conformalization scheme (see \cite[Theorem 2.3.4]{jost}).
\begin{lemma}\label{lem:conformal-suspension-2}
Let $u: D_1 \to X$ be a harmonic map with $E\left[ u , D_1 \right] \le \Lambda$. Then there exist a conformal harmonic map $\tilde{u}: D_1 \to X \times \mathbb{C}$ associated to $u$ and a universal constant $c_1>0$ such that for all $x \in D_{1/2}$,
\begin{equation*}
\left| \nabla \tilde{u} (x)\right|^2 \le \left| \nabla u (x)\right|^2+ 1 + c_1^2 \Lambda^2.
\end{equation*}
\end{lemma}
\begin{proof}
We construct $\nu: D_1 \to \mathbb C$ to satisfy
\begin{equation*}
\begin{cases}
\partial_{{z}} \overline \nu = 1 & \quad \text{in} \quad D_1,\\
\partial_z \nu = -\frac{1}{4}\Phi_u & \quad \text{in} \quad D_1, \\
\Delta \nu = 0 & \quad \text{in} \quad D_1.
\end{cases}
\end{equation*}
To do this, let $\Psi$ be a holomorphic function with $\partial_z \Psi = -\frac{1}{4} \Phi_u$, where $\Phi_u$ is the Hopf function. Since $\Phi_u$ is holomorphic, $\nu(z) := \overline{z} + \Psi(z) $ satisfies the above conditions. Moreover, $\Phi_\nu = 4 \partial_z \nu \partial_z \overline{\nu} = - \Phi_u$. Define $\tilde u:D_1 \to X \times \mathbb C$ by $\tilde u(x):= (u(x), \nu(x))$. By construction,
\begin{equation*}
\Phi_{\tilde{u}} = \Phi_u + \Phi_\nu = \Phi_u - \Phi_u = 0
\end{equation*}and thus $\tilde u$ is conformal.
Since $\Phi_u \in L^1\left( D_1 \right)$ is holomorphic on $D_1$, using the Cauchy integral formula there exists $c_1>0$ such that for all $x \in D_{1/2}$,
\begin{equation*}
\left| \Phi_u(x) \right| \le 4c_1 \Lambda.
\end{equation*}
Therefore, for all $x \in D_{1/2}$,
\begin{equation*}
\left| \nabla \tilde{u} \right|^2 = \left| \nabla u \right|^2 + \left| \nabla \nu \right|^2 = \left| \nabla u \right|^2 + 1 + \frac{\left| \Phi_u \right|^2}{16} \le \left| \nabla u \right|^2 + 1 + {c_1^2}\Lambda^2.
\end{equation*}
\end{proof}
\begin{definition}
Henceforth we refer to the above constructed $\tilde u$ as the \emph{conformal suspension} of $u$.
\end{definition}
\begin{lemma}\label{lem:susp-area}
Let $u$ and $\tilde{u}$ be as in Lemma~\ref{lem:conformal-suspension-2}. For any $ \Omega \subset D_{1/2}$ and any $0 < r <\frac 12$,
\begin{equation*}
2 \mathrm{Area} \left[ \tilde{u} \left(\Omega \right) \right] = E\left[ \tilde{u} , \Omega \right] \le E\left[ u ,\Omega \right] + \mathrm{Area}\left[\Omega\right] \left( 1 + {c_1^2}\Lambda^2 \right)
\end{equation*}
and
\begin{equation*}
\mathrm{length} \left[ \tilde{u} (\partial D_{r}) \right] \le \mathrm{length} \left[u (\partial D_{r}) \right] + \mathrm{length}[\partial D_r] \left( 1 + {c_1}\Lambda \right).
\end{equation*}Note that length and area of domain regions are taken with respect to the metric $g$.
\end{lemma}
\begin{proof}
Since $\tilde{u}$ is conformal, twice its area coincides with its total energy. Therefore,
\begin{align*}
2 \mathrm{Area} \left[ \tilde{u} \left(\Omega \right) \right] &= E\left[ \tilde{u} , \Omega \right] = \int_{\Omega} \; \left| \nabla \tilde{u} \right|^2 d\mu_g\\ &\le \int_{\Omega} \;\left( \left| \nabla u \right|^2+ 1 + {c_1^2}\Lambda^2 \right) d\mu_g\\ &= E\left[ u ,\Omega \right] + \mathrm{Area}\left[\Omega \right] \left( 1 + {c_1^2}\Lambda^2 \right).
\end{align*}
Similarly, letting $d\sigma_r$ denote the length measure on $\partial D_r$,
\begin{align*}
\mathrm{length} \left[ \tilde{u} (\partial D_{r}) \right] &= \int_{\partial D_{r}} \; \left| \partial_\theta \tilde{u} \right| \; d\sigma_r \le \int_{\partial D_{r}} \; \left| \partial_\theta u \right| + \left| \partial_\theta \nu \right| \; d\sigma_r \\ &\le \mathrm{length} \left[ u (\partial D_{r}) \right] + \int_{\partial D_r} \; 1 + {c_1} \Lambda \; d\sigma_r \\ &= \mathrm{length} \left[ u (\partial D_{r}) \right] + \mathrm{length}[\partial D_r]\left( 1 + {c_1}\Lambda \right).
\end{align*}
\end{proof}
Let $x$ be a fixed bubble point and choose $\rho_0<\frac 12$ so that $D_{2 \rho_0}(x)$ does not contain any other bubble points. Let $\tilde u_k$ denote the conformal suspension of each $u_k|_{D_{2\rho_0}(x)}$ as in Lemma \ref{lem:conformal-suspension-2}. Recall that $\epsilon_k, \lambda_k$ are chosen in Lemma \ref{BT2} and an outline of their significance is given in the paragraph preceding that lemma. The next lemma provides precise scales, comparable to $\epsilon_k$ and $k \lambda_k$, on which we can apply our cone extension lemma. These two scales will determine the boundary of the neck region.
\begin{lemma}\label{lem:length}There exist sequences $r_k \in [\epsilon_k/4, \epsilon_k/2]$ and $s_k \in [k\lambda_k, 2 k \lambda_k]$ such that
\[
\lim_{k \to \infty}{r_k}E[\tilde u_k, \partial D_{r_k}(c_k)] = 0,
\]
\[
\lim_{k \to \infty}s_k E[\tilde u_k, \partial D_{s_k}(c_k)] = 0.
\]As a consequence,
\[
\lim_{k \to \infty}\mathrm{length}[\tilde u_k(\partial D_{r_k}(c_k))] =0,
\]
\[
\lim_{k \to \infty}\mathrm{length}[\tilde u_k(\partial D_{s_k}(c_k))] =0.
\]
\end{lemma}
\begin{proof}
As $\epsilon_k \to 0$, for each map $\tilde u_k$ we consider the metric in the tangent space $(D_{2\epsilon_k}(0), ds^2 + \mu^2_k(s,\theta)s^2 d\theta^2)$, where $s^{-2}|1- \mu_k^2|+s^{-1} |\partial \mu_k^2| + |\partial^2 \mu_k^2| \leq \alpha_k$ with $\alpha_k \to 0$. Let $r_k \in [\epsilon_k/4, \epsilon_k/2]$ be such that $E[\tilde u_k, \partial D_{r_k}(c_k)] = \min_{r \in [\epsilon_k/4, \epsilon_k/2]}E[\tilde u_k, \partial D_{r}(c_k)] $. Then
\begin{align*}
\frac {\epsilon_k}4E[\tilde u_k, \partial D_{r_k}(c_k)] &\leq \int_{\epsilon_k/4}^{\epsilon_k/2}\int_0^{2\pi} \frac 1{s \mu_k}\left|\frac{\partial \tilde u_k}{\partial \theta}\right|^2 d\theta ds \\
&\leq E\left[ \tilde u_k, D_{\epsilon_k/2}(c_k) \backslash D_{\epsilon_k/4}(c_k) \right]\\
& \leq E\left[ u_k, D_k \backslash D_{\epsilon_k/8k^2}(0) \right] + \mathrm{Area}\left[ D_k \right] \left(1+ {c_1^2}\Lambda^2 \right)
\end{align*}where the last inequality follows from Lemma \ref{lem:susp-area} and the fact that $ D_{\epsilon_k/2}(c_k) \backslash D_{\epsilon_k/4}(c_k)\subset D_k \backslash D_{\epsilon_k/8k^2}(0)$. By Item \eqref{BT21} of Lemma \ref{BT2} and the fact that $\mathrm{Area}\left[ D_k \right] \leq c \epsilon_k^2$, the final expression tends to zero in $k$. Since $r_k/2 \leq {\epsilon_k}/4$, the desired result follows.
To find the $s_k$'s we use item \eqref{BT37} of Lemma~\ref{BT3} in place of item \eqref{BT21} of Lemma \ref{BT2} and follow a similar reasoning as above.
The length estimates then follow immediately from Cauchy-Schwarz.
\end{proof}
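The Cauchy--Schwarz step above can be spelled out as follows; here $d\sigma$ denotes arc length on $\partial D_{r_k}(c_k)$, and we use that this circle has length comparable to $2\pi r_k$ since $\alpha_k \to 0$:

```latex
% Cauchy--Schwarz on the boundary circle:
\mathrm{length}^2\big[ \tilde u_k(\partial D_{r_k}(c_k)) \big]
  = \left( \int_{\partial D_{r_k}(c_k)} \Big| \frac{\partial \tilde u_k}{\partial \sigma} \Big|\, d\sigma \right)^{2}
  \le \mathrm{length}\big[ \partial D_{r_k}(c_k) \big]
      \int_{\partial D_{r_k}(c_k)} \Big| \frac{\partial \tilde u_k}{\partial \sigma} \Big|^2 d\sigma
  \le C\, r_k\, E\big[ \tilde u_k, \partial D_{r_k}(c_k) \big] \longrightarrow 0,
```

and the same computation applies with $r_k$ replaced by $s_k$.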
\subsubsection{The base, neck, and bubble maps}
Around each bubble point $x_i \in \{x_1, \dots, x_\ell\}$, there are three domains of interest. In $D_k(x_i) = D_{2\epsilon_{k,i}}(0)\subset T_{x_i}M$, we consider the disks $D_{r_{k,i}}(c_{k,i}), D_{s_{k,i}}(c_{k,i})$ and the annulus between them
\[
A_{k,i}':=D_{r_{k,i}}(c_{k,i})\backslash D_{s_{k,i}}(c_{k,i}).
\]
Here $\epsilon_{k,i}, \lambda_{k,i}, c_{k,i}$ are given by Lemma \ref{BT2} and $r_{k,i}, s_{k,i}$ are given by Lemma \ref{lem:length}.
We define the \emph{neck maps} $u_{k,i}|_{ A_{k,i}'}:A_{k,i}' \to X$. To define the \emph{extended base maps}, let
\[
\underline u_k(x):= \left\{ \begin{array}{ll} u_k(x) & \text{if } x \in M \backslash \cup_{i=1}^\ell \mathrm{exp}(D_{r_{k,i}}(c_{k,i}))\\
\mathrm{Cone}(u_k|_{\partial D_{r_{k,i}}(c_{k,i})}) & \text{if } x \in \mathrm{exp}(D_{r_{k,i}}(c_{k,i})).
\end{array}\right.
\]By Lemmas \ref{BT1}, \ref{lem:length}, and \ref{lem:coneoff}, $\underline u_k \to u$ in $C^0$ uniformly on $M$ and
\[
\lim_{k \to \infty} E[\underline u_k, M] = E[u, M].
\]
Similarly, the \emph{extended bubble maps} will cone off the maps $\overline u_{k,i}$. Let $\underline{\overline u}_{k,i}:\mathbb S^2 \to X$ such that
\[
\underline{\overline u}_{k,i}(x):= \left\{ \begin{array}{ll}
\overline u_{k,i}(x) & \text{if } x \in \Theta_{k,i} (D_{s_{k,i}}(c_{k,i}))\\
\mathrm{Cone}(\overline u_{k,i}|_{\Theta_{k,i} ( \partial D_{s_{k,i}}(c_{k,i}))})(x) &\text{otherwise}.
\end{array}\right.
\]
By Lemmas \ref{BT3}, \ref{lem:length}, and \ref{lem:coneoff}, for each $i \in \{1 , \dots, \ell\}$, $\underline{\overline u}_{k,i} \to {\overline u}_{i}$ uniformly in $C^0$ on $\mathbb S^2 \backslash \{y_{i1}, \dots y_{il_i}\}$ and
\[
\lim_{k \to \infty} E[\underline{\overline u}_{k,i} ,\mathbb S^2] = E[ {\overline u}_i,\mathbb S^2] + \sum_{j=1}^{l_i} m_{ij}.
\]
Note that the term $\tau_i(p^-)$ is absent from the above limit and the uniform convergence holds across $p^-$. This occurs since we have removed the neck map portion from the extended bubble maps and replaced it by the coning off, which has energy and diameter converging to zero. Moreover, the energy contained in the neck maps is exactly
\[
\tau_i(p^-) = \lim_{k \to \infty} E[u_{k,i}, A_{k,i}'].
\] We will show in the next subsection that $\tau_i(p^-)=0$ and $\mathrm{diam}(u_{k,i}(A_{k,i}')) \to 0$ and thus the $C^0$ limit and the limit of the energies of the extended bubble maps are the same as the limit for the original maps.
As mentioned previously, the extended bubble map process now iterates and we construct maps $\underline {\overline u}_{k,ij}:\mathbb S^2 \to X$ where $j \in \{1, \dots, l_i\}$. These maps converge, away from finitely many points $y_{ijm}$, $m \in \{1, \dots, l_{ij}\}$, to some map $ {\overline u}_{ij}:\mathbb S^2 \to X$ with an analogous energy limit to what we saw above.
\subsubsection{Constructing the bubble tree}\label{BTconst}
We now construct the bubble tree and the bubble domain. The bubble tree consists of vertices and edges where each vertex represents a harmonic map and each edge represents a bubble point. The base vertex of the tree is the map $u:M \to X$ and the $\ell$ edges emanating from the base vertex are the points $x_i$. The edges $x_i$ connect to the vertices $ {\overline u}_{i}:\mathbb S^2 \to X$ and the edges emanating from each of these vertices are the bubble points $y_{ij}$, $j \in \{1, \dots, l_i\}$. The tree is finite as the process terminates.
The bubble tower is the disjoint union of $M$ and a collection of $\mathbb S^2$'s, where each $\mathbb S^2$ is associated with a vertex in the bubble tree. Indeed, following \cite{Parker}, we may consider a \emph{bubble tower} $T$ in the following manner. Let $SM$ be an $\mathbb S^2$ bundle over $M$. Compactifying the vertical tangent space of $S M \to M$ yields an $\mathbb S^2$ bundle $S^2M$ over $SM$. Iterating this process then yields a tower of $\mathbb S^2$ fibrations. For clarity, the point $y_{i_1i_2\dots i_n}$ lies in $S^{n-1}M$.
A \emph{bubble domain} at level $n$ is a fiber $F$ of $ S^nM \to S^{n-1}M$ and a \emph{bubble tower} is a finite union of bubble domains such that the projection of $T \cap S^nM$ lies in $T \cap S^{n-1}M$.
Given the sequence $u_k:M \to X$, applying Lemmas \ref{BT1}, \ref{BT2}, \ref{BT3} determines a unique bubble tower $T= M \bigcup \left(\cup_I \mathbb S^2_I\right)$, where $I$ ranges over all of the bubble points in the process. We define a sequence of bubble tower maps $\overline{\underline u}_{k,I}:T \to X$ such that ${\underline u}_k:M \to X$ and $\overline{\underline u}_{k,I}:\mathbb S^2_I \to X$. Letting $u, \overline {\underline u}_I$ denote the limit maps, observe that
\begin{equation}
\lim_{k \to \infty}E[\overline{\underline u}_{k,I},T] = E[\overline{\underline u}_I,T]
\end{equation}and $\overline{\underline u}_{k,I} \to \overline{\underline u}_I$ in $C^0$ uniformly on $T$.
\subsection{Energy quantization and the no neck property}\label{NecSec}
In this subsection, we study the neck maps and use the isoperimetric inequality to prove that the energy of the neck maps vanishes in the limit. Then, by monotonicity, the diameter of the neck maps must also vanish. Taken with the previous subsections, these results immediately imply Theorem \ref{MAIN}.
Consider a single neck map $u_k: A_k' \to X$ where $A_k':=D_{r_k}(c_k) \backslash D_{s_k}(c_k)$.
\begin{lemma}[Vanishing Neck Energy and Diameter] The following holds:
\[
\limsup_{k \to \infty} E\left[ u_k, A_k' \right] =0,
\]
\[
\limsup_{k \to \infty}\mathrm{diam}\left[u_k(A_k')\right] =0.
\]
\end{lemma}
\begin{proof} Let $\tilde u_k$ denote the conformal suspension of each $u_k|_{D_{2\rho_0}(x)}$ as in Lemma \ref{lem:conformal-suspension-2}.
By \eqref{eq:CR} and Lemma \ref{lem:susp-area}, for all sufficiently large $k$, $\mathrm{Area}\left[\tilde{u}_k(A_k') \right]< \frac \pi 3$.
By Lemma \ref{lem:length}, for any $0<\delta\leq \tau(X)/4$ there exists a $K$ such that for all $k \geq K$ there exist points $P_k, Q_k \in X \times \mathbb C$ such that $\tilde u_k(\partial A_k') \subset \mathcal B_\delta(P_k) \cup \mathcal B_\delta(Q_k)$. Now suppose that there exists $R_k \in \tilde u_k(A_k')$ such that $R_k \notin
\mathcal B_{2\delta}(P_k) \cup \mathcal B_{2\delta}(Q_k)$. Then, applying the monotonicity formula of \cite[Theorem 3.4]{Banff2} to $\tilde u_k(A_k') \cap \mathcal B_\delta(R_k)$,
\[
C_{\mathrm{mon}} \delta^2 \leq \mathrm{Area}\left[\tilde u_k(A_k')\right] .
\]On the other hand, by Lemma \ref{lem:susp-area}, using the fact that $A_k' \subset D_{\epsilon_k}(0)\backslash D_{\lambda_k}(c_k)$, and recalling the definition of $C_R$ from \eqref{eq:CR},
\[
2\mathrm{Area}\left[\tilde u_k(A_k')\right] \leq E[u_k, D_{\epsilon_k}(0)\backslash D_{\lambda_k}(c_k)]+ \mathrm{Area}[A_k'](1+c_1^2\Lambda^2)\leq C_{\mathrm{mon}}\frac{\tau^2(X)}{16}+ \frac {C}{k^2}(1+c_1^2\Lambda^2).\]
This implies a contradiction for $\delta = \tau(X)/4$ and $k$ sufficiently large. It follows that for $k$ large enough, $\tilde u_k( A_k') \subset \mathcal B_{\tau(X)}(P_k)$. Thus each $\tilde u_k:A_k' \to X$ satisfies the hypotheses of the isoperimetric inequality, Theorem \ref{thm:isoperimetric}. By Lemma \ref{lem:length}, it thus follows that
\[
E\left[ u_k, A_k' \right] \leq E\left[ \tilde u_k,A_k' \right] = 2\mathrm{Area}\left[\tilde u_k(A_k')\right] \leq \frac{27\pi}2 \mathrm{length}^2\left[\tilde u_k(\partial A_k')\right]\to 0.
\]
With this improvement on the area estimate, for any fixed $\delta>0$ we may choose $N$ large enough so that for all $k \geq N$,
$\mathrm{Area}\left[\tilde u_k(A_k')\right] < C_{\mathrm{mon}}\delta^2/2$ and there exist points $P_k, Q_k \in X \times \mathbb C$ such that $\tilde u_k(\partial A_k') \subset \mathcal B_\delta(P_k) \cup \mathcal B_\delta(Q_k)$. If there exists $R_k \in \tilde u_k(A_k')$ such that $R_k \notin
\mathcal B_{2\delta}(P_k) \cup \mathcal B_{2\delta}(Q_k)$ then by the same argument as above, the monotonicity formula implies a contradiction. Therefore, $\tilde u_k(A_k') \subset \mathcal B_{4\delta}(P_k)$. It follows that
\[
\lim_{k \to \infty} \mathrm{diam}\left[u_k(A_k')\right] \leq \lim_{k \to \infty} \mathrm{diam}\left[\tilde u_k(A_k')\right]=0.
\]
\end{proof}
\section{I. Introduction}
The conformal bootstrap is the idea that a conformally invariant quantum field theory is completely characterized by its spectrum of anomalous dimensions and operator product expansion coefficients \cite{Polyakov}. In $D=2$ dimensions, implementation of the bootstrap is hardly necessary since the conformal symmetry becomes the infinite dimensional Virasoro symmetry, which leads to powerful methods such as Coulomb gas techniques, current algebra and their cosets, etc. \cite{CFTbook}.
Remarkably, it has recently been demonstrated that the conformal bootstrap can provide accurate results in higher dimensions \cite{Rattazzi}. In particular, for the $D=3$ Ising model,
the best results on anomalous dimensions are currently based on the bootstrap \cite{Kos2}. For reviews see \cite{SimmonsDuffin,Rychkov}.
In this paper we explore the power, or possible limitations, of the bootstrap for two conformal theories that are as important as the Ising model, namely percolation and polymers.
The latter is commonly referred to as the self-avoiding walk (SAW). These theories present several interesting challenges in the context of the conformal bootstrap. First of all, they are not unitary. Furthermore, they are very closely related in that they share some anomalous dimensions, and in $D=2$ they have the same Virasoro central charge $c=0$. It should be mentioned that some important problems in Anderson localization, such as the critical point in quantum Hall transitions for non-interacting fermions, are also expected
to be described by $D=2$, $c=0$ conformal field theories, many of whose descriptions remain unknown. In contrast, the Ising model is essentially unique: in $D=2$ it is the only unitary theory with central charge $c=1/2$, which makes it easier to locate. In light of these comments, the main goal of this article is to explore whether the conformal bootstrap can distinguish between percolation and the SAW in any dimension $D$. As we will argue, the answer is affirmative. Our goal is not to provide highly accurate numerical results for conformal exponents, but rather to argue that the bootstrap is powerful enough to locate these two theories, albeit in a subtle way. We provide numerical estimates of exponents based on our proposal which are reasonably good, though not as accurate as those obtained by other methods such as the $\epsilon$-expansion or Monte Carlo simulations; our results can probably be improved with more extensive numerical studies.
In order to describe the problem, and establish notation, let us consider the $D=2$ case where exact results are known. The unitary minimal models have
central charge
\begin{equation}
\label{c}
c = 1 - \frac{6}{p(p+1)} \geq 1/2
\end{equation}
They contain primary fields $\Phi_{r,s}$, with $1\leq s \leq p$, $1\leq r \leq p-1$ with scaling dimension
\begin{equation}
\label{Deltars}
\Delta_{r,s}= 2 h_{r,s}= \frac{ \left( (p+1) r - p s\right)^2 -1}{2 p (p+1)}
\end{equation}
For concreteness consider the Ising model at $p=3$ with $c=1/2$. The model can be perturbed away from its critical point either by changing the temperature away from the critical temperature $T_c$ or by turning on a magnetic field. One is thus led to consider the action
\begin{equation}
\label{action}
S = S_{\rm cft} + \int d^D x \Bigl( g_t \, \epsilon (x) + g_m \, \sigma (x) \Bigr)
\end{equation}
where $S_{\rm cft}$ is formally the action for the conformal field theory, $\epsilon (x)$ is the energy operator, $\sigma (x)$ is the spin field, and the $g$'s are couplings,
where $g_t = T- T_c$. It is well-known that the energy operator corresponds to $(r,s) = (2,1)$ with
$\Delta_\epsilon = 1$. The spin field corresponds to $(r,s) = (1,2)$ with $\Delta_\sigma = 1/8$.
They satisfy the fusion rule
\begin{equation}
\label{fusion}
[\sigma] \times [\sigma] = [1]+ [\epsilon]
\end{equation}
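As a quick consistency check on \eqref{c} and \eqref{Deltars}, the short Python snippet below (our own illustration, using exact rational arithmetic; not part of any bootstrap package) recovers the Ising values just quoted.

```python
from fractions import Fraction

def central_charge(p):
    # Eq. (c): c = 1 - 6 / (p (p + 1))
    return 1 - Fraction(6, p * (p + 1))

def delta(r, s, p):
    # Eq. (Deltars): Delta_{r,s} = (((p+1) r - p s)^2 - 1) / (2 p (p+1))
    return (((p + 1) * r - p * s) ** 2 - 1) / Fraction(2 * p * (p + 1))

# Ising model, p = 3:
assert central_charge(3) == Fraction(1, 2)
assert delta(2, 1, 3) == 1                # energy operator, Delta_epsilon = 1
assert delta(1, 2, 3) == Fraction(1, 8)   # spin field, Delta_sigma = 1/8
```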
An important exponent is the correlation length exponent $\nu$. The dimension of the coupling $g_t$ is $D-\Delta_\epsilon$, therefore
$\xi = (g_t)^{-1/(D-\Delta_\epsilon)}$ has units of length and diverges as $g_t \to 0$, thus
\begin{equation}
\label{nudef}
\nu = \frac{1}{D - \Delta_\epsilon}
\end{equation}
For the Ising model, $\nu = 1$.
Consider now lowering $p$ by $1$ to $p=2$ where one encounters the first non-unitary theories at $c=0$. The space of $c=0$ theories is vast; in fact it is infinite.
For instance current algebras based on the super Lie algebras $gl(n|n)$ or $osp(2n|2n)$ all have $c=0$ and have important applications to disordered systems.
In order to limit our attention to percolation and the SAW, we can view them as continuous limits of other models that pass through the Ising model.
The SAW is known to correspond to the $O(N)$ model as $N\to 0$, where Ising is $N=1$. On the other hand, percolation is the $q\to 1$ limit of the $q$-state Potts model,
where the Ising model is $q=2$. Due to these limits, both these theories have an energy operator and spin field. These $D=2$ theories have been extensively studied, for instance in
\cite{CardyPerc,DotsenkoFateev,Saleur,Delfino,Delfino2,Dotsenko}. It is known that for both theories, the spin field corresponds
to $(r,s) = (3/2, 3/2)$ with $\Delta_\sigma = 5/48$. Thus, percolation and the SAW differ in the energy sector. For the SAW, the energy operator
corresponds to $(r,s) = (1,3)$ with $\Delta_\epsilon = 2/3$, which gives $\nu = 3/4$. On the other hand, for percolation it is $(r,s)= (2,1)$ with dimension $\Delta_\epsilon = 5/4$ which leads to $\nu = 4/3$.
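The exponents just quoted follow directly from \eqref{nudef}; a minimal check in Python (our own, using exact rational arithmetic):

```python
from fractions import Fraction

def nu(D, delta_eps):
    # Eq. (nudef): nu = 1 / (D - Delta_epsilon)
    return 1 / (D - delta_eps)

# D = 2 values quoted in the text:
assert nu(2, 1) == 1                            # Ising, Delta_epsilon = 1
assert nu(2, Fraction(2, 3)) == Fraction(3, 4)  # SAW, Delta_epsilon = 2/3
assert nu(2, Fraction(5, 4)) == Fraction(4, 3)  # percolation, Delta_epsilon = 5/4
```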
The above discussion leads to some interesting questions. First of all, both percolation and the SAW have the same fusion rule \eqref{fusion} and same central charge $c=0$.
Can the conformal bootstrap deal with these important non-unitary theories? Can it distinguish between percolation and the SAW? Finally, how well does it work in dimensions $D \geq 2$?
Based on the above discussion, we expect them to differ in the energy sector, namely which descendants are included in $[1] + [\epsilon]$.
In the sequel, we will propose some selection rules that appear to answer these questions.
It should be mentioned that a detailed study of the difference between percolation and SAW in two dimensions was carried out by Gurarie and Ludwig \cite{Gurarie}.
It is known that if $ \phi (z)$ is the holomorphic part of a primary field of weight $\Delta = 2h$, then one has the operator product expansion (OPE)
\begin{equation}
\label{catas}
\phi (z) \phi (0) = \frac{1}{z^{2h}} \left( 1 + \frac{2h}{c} z^2 T(0) + \ldots \right)
\end{equation}
where $T(z)$ is the stress energy tensor.
Note the ``catastrophe'' for $c=0$ \cite{Cardy}. It was proposed that this can be resolved by the existence of another field $t(z)$ of weight $2$ which is the logarithmic partner to $T(z)$.
For our purposes, these facts will in part motivate our selection rules for the bootstrap, in particular for the descendants of the identity, like $T(z)$. However we will not incorporate potential constraints from the structure of logarithmic conformal field theories in the bootstrap.
There is another important and subtle point in trying to bootstrap these theories. For both theories in 2D, the identity decouples exactly when $q=1$ or $N=0$ \cite{Delfino3,Dotsenko,Cardy}, altering the fusion rule \eqref{fusion} to
\begin{equation}
\label{newfuse}
[\sigma] \times [\sigma] = [\epsilon].
\end{equation}
This is not surprising, since when $q=1$ or $N=0$, the spin field does not formally exist, which is consistent with the fact that the fusion rule \eqref{newfuse} implies that the two point function of spin fields formally vanishes.
Taking percolation for example, this can be understood by noting that the probability $P$ that two sites are both contained in the same connected cluster is given by
\begin{equation}
P=\lim_{q\rightarrow 1}(q-1)^{-1}\langle \sigma(z_1)\sigma(z_2)\rangle
\end{equation} \cite{Delfino3,Cardy}. Since $P$ must be finite, the two-point function must be proportional to $(q-1)$ and therefore goes to zero at $q=1$. Furthermore, the vanishing of the identity channel is demonstrated by the more sophisticated calculation of Dotsenko \cite{Dotsenko}, through a careful renormalization procedure within the Coulomb gas formalism.
In particular, Dotsenko had to introduce a small parameter $\epsilon$, where $c\propto \epsilon$. Only after he renormalized the 4-point function in a particular manner did the identity channel vanish as $\epsilon \to 0$. In contrast, since we don't rigorously impose $c=0$, our proposed fusion rule for percolation necessarily includes the identity operator. The justification and consequences of this decision are explored in Appendix A.
This paper is organized as follows. In the next section we review some standard methods of the conformal bootstrap. The following two sections treat percolation and the SAW separately, where we provide numerical evidence for our choice of selection rules for $2<D <6$.
\section{II. Conformal Bootstrap}
At the heart of the conformal bootstrap is the notion that constraints on the four-point functions of a CFT, namely conformal invariance, crossing symmetry, and unitarity, are sufficient to restrict, or even completely fix, the spectrum of allowed scaling dimensions of a theory. Conformal invariance constrains the four-point function of a scalar field $\sigma(x)$ in a CFT to take the form
\begin{equation}
\langle \sigma(x_1)\sigma(x_2) \sigma(x_3)\sigma(x_4)\rangle = \frac{\sum_{\Delta,l}p_{\Delta,l}G_{\Delta,l}(u,v)}{|x_{12}|^{2\Delta_{\sigma}}|x_{34}|^{2\Delta_{\sigma}}},
\label{fourpoint}
\end{equation} with $x_{ij} \equiv x_i-x_j$ and $\Delta_\sigma$ the scaling dimension of $\sigma$. The coefficients $p_{\Delta,l}$ are the square of the $\sigma(x_i)\sigma(x_j)$ OPE coefficients $\lambda_{\sigma \sigma \mathcal{O}}$, with $\mathcal{O}$ signifying a global primary operator of dimension $\Delta$ and conformal spin $l$. $G_{\Delta,l}(u,v)$ are global conformal blocks, which are functions of the conformally invariant cross ratios $u=\frac{x_{12}^2x_{34}^2}{x_{13}^2x_{24}^2}$ and $v=\frac{x_{14}^2x_{23}^2}{x_{13}^2x_{24}^2}$. Crossing symmetry is imposed by considering the transformation of \eqref{fourpoint} under $x_1 \leftrightarrow x_3$. Defining
\begin{equation}
F_{\Delta_{\sigma},\Delta,l} \equiv v^{\Delta_\sigma}G_{\Delta,l}(u,v)-u^{\Delta_\sigma}G_{\Delta,l}(v,u)
\end{equation} crossing symmetry is respected if
\begin{equation}
\sum_{\Delta,l}p_{\Delta,l}F_{\Delta_{\sigma},\Delta,l} = 0.
\label{crossing}
\end{equation}
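The underlying symmetry of \eqref{crossing} is that the exchange $x_1 \leftrightarrow x_3$ swaps $u$ and $v$; this can be verified numerically. The snippet below is an illustration we supply (hypothetical sample points, not part of any bootstrap code):

```python
import numpy as np

def cross_ratios(x1, x2, x3, x4):
    # u = x12^2 x34^2 / (x13^2 x24^2),  v = x14^2 x23^2 / (x13^2 x24^2)
    s = lambda a, b: float(np.sum((a - b) ** 2))
    denom = s(x1, x3) * s(x2, x4)
    return s(x1, x2) * s(x3, x4) / denom, s(x1, x4) * s(x2, x3) / denom

rng = np.random.default_rng(0)
x1, x2, x3, x4 = (rng.normal(size=3) for _ in range(4))
u, v = cross_ratios(x1, x2, x3, x4)
us, vs = cross_ratios(x3, x2, x1, x4)   # exchange x1 <-> x3
assert abs(us - v) < 1e-9 and abs(vs - u) < 1e-9
```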
In unitary theories, the coefficients $p_{\Delta,l}$ are strictly positive due to reality of $\lambda_{\sigma \sigma \mathcal{O}}$. The contemporary conformal bootstrap \cite{Rattazzi} only took shape after crucial advances in the study of conformal blocks \cite{Dolan1,Dolan2}. It has since been refined and applied most notably to the $O(N)$ models \cite{Gopakumar,Shimada, ElShowk1,ElShowk2,ElShowk3,Kos1,Kos2,Kos3,Kos4,Rattazzi2,Rattazzi3,Iliesiu,Alday1,Alday2}. In this approach, a functional $\Lambda$ is sought such that $\Lambda(F_{\Delta_{\sigma},\Delta,l}) \geq 0$. When this condition is satisfied, it contradicts the crossing relation \eqref{crossing} since $p_{\Delta,l}>0$. Therefore regions of parameter space where such a $\Lambda$ exists cannot correspond to a physical CFT, and bounds can be placed on the possible scaling dimensions.
In the absence of unitarity, an alternate formulation of the conformal bootstrap which does not rely on the positivity of $p_{\Delta,l}$ is required. In the determinant or ``Gliozzi'' conformal bootstrap method \cite{Gliozzi1,Gliozzi2}, this requirement is eliminated at the expense of generality. Rather than searching the space of all possible CFTs for bounds which are independent of a specific theory, a particular CFT must be chosen beforehand by specifying the dimensions and conformal spins of the first $N$ operators that appear in the crossing relation. This method has been applied to the Yang-Lee edge singularity \cite{Gliozzi1,Gliozzi2,Hikami1} and polymers \cite{Hikami2}. To set up this approach, we perform the standard variable change $v = ((2-a)^2-b)/4$, $u=(a^2-b)/4$ and Taylor expand \eqref{crossing} around $a=1,b=0$, generating the homogeneous system
\begin{equation}
\sum_{\Delta,l}p_{\Delta,l} F^{(m,n)}_{\Delta_{\sigma},\Delta,l}=0 \qquad (m,n \in \mathbb{N},m \, \text{odd})
\label{constraint}
\end{equation} where
\begin{equation}
F^{(m,n)}_{\Delta_{\sigma},\Delta,l} = \partial^m_a \partial^n_b \left(v^{\Delta_\sigma}G_{\Delta,l}(u,v)-u^{\Delta_\sigma}G_{\Delta,l}(v,u)\right)|_{a,b=1,0}.
\label{F}
\end{equation} Note that even $m$ are excluded because the two terms of \eqref{F} cancel identically in that case. Truncating the sum to the first $N$ operators appearing in the OPE and taking $M\geq N$ derivatives, where each $M$ signifies a distinct $(m,n) = \partial^m _a \partial^n _b$ pair, gives a homogeneous system of $M$ equations in $N$ unknowns, which has a nontrivial solution only if all $\binom{M}{N}$ minors of order $N$ vanish. Instead of searching for intersections of vanishing minors, we adopt the equivalent condition \cite{Esterlis} that the $M \times N$ matrix $\mathbf{F}$, with elements $F^{(m,n)}_{\Delta_\sigma,\Delta,l}$, must have at least one vanishing singular value.
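Numerically, detecting a vanishing singular value of $\mathbf{F}$ is a one-line SVD computation. The sketch below uses a random rank-deficient matrix as a stand-in for $\mathbf{F}$ (hypothetical data, not actual conformal-block derivatives):

```python
import numpy as np

def smallest_singular_value(F):
    # numpy returns singular values in descending order; the last one is z.
    return np.linalg.svd(F, compute_uv=False)[-1]

# Stand-in for an M x N matrix F (M = 6, N = 5) whose fifth column is a linear
# combination of the first two, so the homogeneous system F p = 0 admits a
# nontrivial solution p and the smallest singular value vanishes.
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 4))
F = np.hstack([A, A[:, :1] + A[:, 1:2]])
assert smallest_singular_value(F) < 1e-12
# A full-rank, well-conditioned matrix has no small singular value:
assert abs(smallest_singular_value(np.eye(6, 5)) - 1.0) < 1e-12
```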
As more operators are kept in the truncation of \eqref{crossing}, additional derivatives must be added. For smaller matrices the set of derivatives chosen can greatly influence the bootstrapped scaling dimensions, as explored in the appendix. This appears to be an inherent ambiguity in the determinant conformal bootstrap method (hereafter referred to simply as the conformal bootstrap), so a method of choosing derivatives objectively must be settled on. We find using only longitudinal derivatives $(m,0)$ to be most effective. Calculation of $F^{(m,n)}_{\Delta_\sigma,\Delta,l}$ is performed with the numerical bootstrap package JuliBootS \cite{Juliboots}, which implements a partial fraction representation of conformal blocks \cite{ElShowk2} and recursively calculates their derivatives \cite{Hogervorst}. Finally, before describing how the conformal bootstrap is applied to percolation and the SAW, we note that while in this work no attempt is made to calculate the error introduced in truncating \eqref{crossing}, a recent study \cite{Li} has taken steps to formalize an error estimation procedure.
\section{III. Percolation}
Let us first provide some arguments for the selection rules we will impose on the operator content of the conformal bootstrap. It is not difficult to show that there is a null state at level $2$ for a primary field with conformal weights $h$, $\overline{h}$ if the following equation is satisfied
\begin{equation}
\label{null}
c = \frac{2h(5-8h)}{(2h+1)}
\end{equation}
(See for instance \cite{Ginsparg}.)
For $c=0$, this null state occurs at $h=5/8$ and $h=0$. Since $h=5/8$ corresponds to the energy operator, this suggests we discard its level 2 descendant, $[\Delta_\epsilon+2, 2 ]$. Here we introduce the notation $[\Delta,l]$ to represent an operator with dimension $\Delta = h+\overline{h}$ and conformal spin $l=h-\overline{h}$.
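The roots of \eqref{null} at $c=0$ can be checked directly; the snippet below (exact arithmetic, our own illustration) also recovers the Ising value $c=1/2$ at the spin-field weight $h=1/16$:

```python
from fractions import Fraction

def null2_c(h):
    # Eq. (null): central charge at which a weight-h primary
    # has a level-2 null state, c = 2h(5 - 8h) / (2h + 1)
    h = Fraction(h)
    return 2 * h * (5 - 8 * h) / (2 * h + 1)

assert null2_c(0) == 0                              # identity
assert null2_c(Fraction(5, 8)) == 0                 # percolation energy, h = 5/8
assert null2_c(Fraction(1, 16)) == Fraction(1, 2)   # Ising spin field
```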
The $c=0$ catastrophe discussed in relation to equation \eqref{catas} also suggests we discard $[D,2]$ and its descendants, based on the null state at $h=0$.
One can also interpret this as effectively setting $T=0$ in \eqref{catas} to avoid the $c=0$ catastrophe. This motivates a fusion rule consisting of the identity operator and Virasoro descendants of $\epsilon$:
\begin{equation}
[\Delta_\sigma,0] \times [\Delta_\sigma,0] = [0,0] + [\Delta_\epsilon,0] + [\Delta_\epsilon+4,4] + [\Delta_\epsilon+6,6] + [\Delta_\epsilon+8,8]+ \dots
\label{perc_spectrum}
\end{equation}
Curiously, when constructing $\mathbf{F}$ with the above operators we observe a noticeable increase in the accuracy of the bootstrapped 2D percolation scaling dimensions if the $(m,n)=(1,0)$ constraint is avoided. For consistency we also omit the $(1,0)$ derivative constraint in higher dimensions, as well as in our treatment of the self-avoiding walk. In all dimensions considered for percolation, the $M$ rows of $\mathbf{F}$ are labeled by the $M$ lowest order longitudinal derivatives with $m\geq 3$, and the $N$ columns are labeled by the first $N$ operators present in the trial spectrum \eqref{perc_spectrum}. A discussion of the decision to use only longitudinal derivatives with $m\geq 3$ is provided in Appendix B.
Bootstrapping in $D=2$ dimensions with the above operators for fixed $\Delta_\sigma=5/48$ gives a vanishing singular value at $\Delta_\epsilon=1.255$, in agreement with the exact $\Delta_\epsilon=5/4$. Varying the spin field scaling dimension and minimizing $z$, the smallest singular value of $\mathbf{F}$, as a function of both $\Delta_\epsilon$ and $\Delta_\sigma$ finds $\Delta_\sigma = 0.101, \Delta_\epsilon=1.235$, as shown in Figure \ref{2DP}. In $D=4$ the presence of the free field theory with scaling dimensions $\Delta_\sigma=1$ and $\Delta_\epsilon=2$ makes it difficult to minimize in $\Delta_\sigma$, and the omission of the $(1,0)$ derivative constraint only compounds the problem. All higher order derivatives of the convolved vacuum conformal block $F_{\Delta_\sigma,0,0}$ quickly tend to zero as the free field $\Delta_\sigma$ is reached, since $F^{(1,0)}_{\Delta_\sigma,0,0}$ becomes linear as $\Delta_\sigma \rightarrow 1$. Thus with our approach a trivial vanishing singular value near $\Delta_\sigma=1$ is unavoidable in four dimensions. Nevertheless, minimizing the smallest singular value of $\mathbf{F}$ gives $\Delta_\sigma=0.997, \Delta_\epsilon=2.557$. This solution is depicted in Figure \ref{4DP}, where we actually work with the scaled matrix $\mathbf{F}/F^{(3,0)}_{\Delta_\sigma,0,0}$. This is purely for visual convenience; it smooths the precipitous dip in $z$ near $\Delta_\sigma=1$ but has no bearing on the bootstrapped scaling dimensions. Our bootstrapped $\Delta_\epsilon$ corresponds to a correlation length critical exponent $\nu=0.693$, which compares favorably with $\nu=0.6920$, obtained by a four-loop calculation \cite{Gracey}.
Applying the bootstrap to percolation's upper critical dimension $D=6$ with the same OPE truncation as in two and four dimensions is unsuccessful. No vanishing singular values of $\mathbf{F}$ are found when $M>N$, which for our minimal set of operators appears to be necessary in order to restrict both $\Delta_\sigma$ and $\Delta_\epsilon$. In some sense it is surprising this problem does not arise in three or four dimensions. Our postulated fusion rule, which is clearly reliant on Virasoro symmetry, is likely no more than a very rough approximation to the true spectrum of low-lying percolation operators in $D > 2$. Even without finding a solution to \eqref{constraint} in $6D$, there is still a signature of the free field result. In Figure \ref{6DP}, $\log(z)$ curves flatten as $\Delta_\sigma=2,\Delta_\epsilon=4$ is approached. The diminishing peaks can be viewed as a lesser violation of crossing symmetry, with the smallest such violation (peak) occurring when $\Delta_\sigma=2.002$ (red curve in Figure \ref{6DP}). A plot of $z$ at fixed $\Delta_\sigma=2.002$ exhibits a slight but well-defined dip at $\Delta_\epsilon=4.003$, as shown in Figure \ref{6DP_2}.
Unlike in even spatial dimensions, in $D=3$ and $D=5$ the fusion rule \eqref{perc_spectrum} is not adequate to distinguish both the spin and energy field scaling dimensions. In $3D$, for any given $\Delta_\sigma$ a vanishing singular value is present, but no clear minimal $z$ is found as a function of $\Delta_\sigma$ and $\Delta_\epsilon$. This may be due to the similarity in operator content and close proximity of percolation, SAW, and the Ising model, as all three theories have spin field scaling dimensions clustered near $\Delta_\sigma = 0.5$ in three dimensions. In $5D$ no vanishing singular values are present when $M>N$. In both cases we can still bootstrap one of the scaling dimensions given the other is held fixed. Taking $\Delta_{\sigma,3D}=0.4765$ and $\Delta_{\sigma,5D}=1.4718$ \cite{Gracey}, $\Delta_{\epsilon,3D}=1.615$ and $\Delta_{\epsilon,5D}=3.416$ are obtained using $N=5, M=6$ and $N=M=7$, respectively. Compiled in Table \ref{ptable} are all of our bootstrapped scaling dimensions for percolation.
As a final note before moving on to the SAW, we mention the work of \cite{Flohr}, which argues that many of the relevant observables of $2D$ percolation can be obtained within a conformal field theory with $c=-24$. Without restating their argument, they find all the weights in the Kac table shift by $-1$, implying
\begin{eqnarray*}
\Delta_\sigma &=& 5/48 \;\rightarrow\; -91/48, \\
\Delta_\epsilon &=& 5/4 \;\rightarrow\; -3/4.
\end{eqnarray*}
Bootstrapping with longitudinal derivatives and \eqref{perc_spectrum} with scaling dimensions shifted accordingly, for fixed $\Delta_\sigma = -91/48$ we obtain a clear solution at $\Delta_\epsilon = -0.728$ with $N=M=6$. The general agreement with $\Delta_\epsilon = -3/4$ lends further evidence that our fusion rule is not just coincidentally successful.
\begin{table}
\begin{center}
\caption{Percolation scaling dimensions. Bold values are calculated with the bootstrap, and adjacent values in parenthesis are either exact results ($D=2,D=6$) or calculated by Pad\'e approximant at four loops ($D=3, D=4, D=5$) \cite{Gracey}. In odd spatial dimensions we are unable to determine both $\Delta_\sigma$ and $\Delta_\epsilon$, and instead bootstrap with the referenced value of $\Delta_\sigma$.}
\label{ptable}
\begin{tabular}{ c c c}
\hline \hline
$D$ & $\Delta_\sigma$ &$\Delta_\epsilon$ \\
\hline
$2$ & $\mathbf{0.101} \, (5/48)$ &$\mathbf{1.235}\,(5/4)$ \\
$3$ &-\quad $(0.4765)$ &$\mathbf{1.615}\, (1.8849)$ \\
$4$ & $\mathbf{0.997} \, (0.9523)$ &$\mathbf{2.557}\, (2.5549)$ \\
$5$ & -\quad $(1.4718)$ &$\mathbf{3.416}\, (3.2597)$ \\
$6$ & $\mathbf{2.002} \, (2)$ &$\mathbf{4.003} \,(4)$ \\
\hline \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=0.65\textwidth]{2DP.pdf}
\caption{{\bf 2D Percolation.} Logarithm of the smallest singular value $z$ of $\mathbf{F}$ with $N=5, M=6$ as a function of $\Delta_\sigma$ and $\Delta_\epsilon$. Each curve corresponds to a distinct value of $\Delta_\sigma$, linearly spaced from $\Delta_\sigma = 4/48$ (left-most dip) to $\Delta_\sigma =6/48$ (right-most dip). The minimal $\log(z)$ occurs at $\Delta_\sigma = 0.101, \Delta_\epsilon=1.235$. }
\label{2DP}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.65\textwidth]{4DP.pdf}
\caption{{\bf 4D Percolation.} Logarithm of the smallest singular value $z$ of the matrix $\mathbf{F}/F^{(3,0)}_{\Delta_\sigma,0,0}$ with $N=5, M=6$ as a function of $\Delta_\sigma$ and $\Delta_\epsilon$. Each curve corresponds to a distinct value of $\Delta_\sigma$, linearly spaced from $\Delta_\sigma = 0.98$ (left-most dip) to $\Delta_\sigma =1.02$ (right-most dip). The minimal $\log(z)$ occurs at $\Delta_\sigma = 0.997, \Delta_\epsilon=2.557$. }
\label{4DP}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.65\textwidth]{6DP.pdf}
\caption{{\bf 6D Percolation.} Logarithm of the smallest singular value $z$ of the matrix $\mathbf{F}/F^{(3,0)}_{\Delta_\sigma,0,0}$ with $N=6, M=8$ as a function of $\Delta_\sigma$ and $\Delta_\epsilon$. Each curve corresponds to a distinct value of $\Delta_\sigma$, linearly spaced from $\Delta_\sigma = 1.8$ (left) to $\Delta_\sigma =2.2$ (right). Near $\Delta_\epsilon=4$ the curves flatten. The red curve corresponding to $\Delta_\sigma=2.002$ has the smallest peak and a minimum at $\Delta_\epsilon=4.003$. As in $4D$, using $\mathbf{F}/F^{(3,0)}_{\Delta_\sigma,0,0}$ rather than $\mathbf{F}$ has no bearing on the determination of the spin and energy operator scaling dimensions.}
\label{6DP}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.65\textwidth]{6DP_2.pdf}
\caption{{\bf 6D Percolation.} Smallest singular value $z$ of $\mathbf{F}$ at fixed $\Delta_\sigma=2.002$ (red curve from Figure \ref{6DP}) achieves its minimal value at $\Delta_\epsilon=4.003$.}
\label{6DP_2}
\end{center}
\end{figure}
\section{IV. Self-Avoiding Walk}
The energy operator for the 2D SAW corresponds to the primary field $\Phi_{1,3}$ with $h_{1,3}=1/3$, and it has a null state at level 3 rather than level 2 as in percolation. Therefore one difference in operator content which may distinguish the two $c=0$ theories is the inclusion of the $[\Delta_\epsilon+2,2]$ descendant. Another is the inclusion of the lowest lying $O(N)$ symmetric tensor $[\Delta_T,2]$, whose dimension $\Delta_T \rightarrow \Delta_\epsilon$ as $N\rightarrow 0$ \cite{Shimada}. We find $T$ essential in applying the bootstrap to the SAW. The primary purpose of this operator is to input $O(N)$ symmetry. Secondarily it fulfills the role the identity operator did for percolation: it introduces an OPE coefficient independent of the energy sector, which can roughly account for the ignorance of logarithmic features. Retaining the identity operator in the presence of $T$ is therefore redundant; we find $2D$ scaling dimensions change by less than $5\%$ if the identity operator is also included. As in percolation
$[D, 2]$ and other descendants of the identity are discarded to avoid the $c=0$ catastrophe. For the SAW in $2 \leq D \leq 4$ we thus construct $\mathbf{F}$ with the operators
\begin{equation}
[\Delta_\sigma,0] \times [\Delta_\sigma,0] =[\Delta_\epsilon,0] +[\Delta_T,2] + [\Delta_\epsilon+2,2] + [\Delta_T+2,4] + [\Delta_\epsilon+4,4]+ [\Delta_\epsilon+6,6] + \dots
\label{SAW_spectrum}
\end{equation} and the $M$ lowest order longitudinal derivatives of $F_{\Delta_\sigma,\Delta,l}$ with $m\geq 3$. Bootstrapping in $2D$ with the above spectrum, we are unable to distinguish a solution with $\Delta_T$, $\Delta_\epsilon$, and $\Delta_\sigma$ all left arbitrary. The SAW is more difficult to isolate than percolation due to the collision of $\Delta_\epsilon$ and $\Delta_T$. Taking $N=M=6$ and fixing both $\Delta_T=0.667$ and $\Delta_\sigma=5/48$ finds $\Delta_\epsilon=0.666$. With just a single scaling dimension fixed, the minimization procedure is not as reliable as in the percolation case, often getting caught in a local rather than a global minimum. Fixing only $\Delta_T=0.667$ tentatively finds $\Delta_\epsilon = 0.666$, $\Delta_\sigma = 0.101$.
In three and four dimensions the free theory obscures the SAW solution, due to the $[\Delta_\epsilon,0]$ and $[\Delta_T,2]$ operators. $\mathbf{F}$ always has a vanishing singular value as $\Delta_T \rightarrow \Delta_\epsilon$ because $G_{\Delta,2} \simeq G_{\Delta,0}$ near $\Delta=D-2$, where the scalar and spin two conformal blocks become degenerate. For the $3D$ SAW there is again difficulty in determining all three scaling dimensions using our proposed fusion rule \eqref{SAW_spectrum}. As we did for $3D$ percolation, we fix $\Delta_\sigma=0.514$ \cite{Shimada} and bootstrap the remaining scaling dimensions using $N=5,M=6$, finding $\Delta_T=1.326$ and $\Delta_\epsilon=1.326$ (Fig. \ref{3DSAW}). In $4D$, with $\Delta_T$, $\Delta_\epsilon$, and $\Delta_\sigma$ all arbitrary, two solutions are present. One corresponds to $\Delta_\epsilon=\Delta_T$ and is independent of $\Delta_\sigma$. The second varies with $\Delta_\sigma$. Minimizing the smallest singular value of $\mathbf{F}$ as a function of $\Delta_T,\Delta_\epsilon$, and $\Delta_\sigma$ finds $\Delta_T=1.999, \Delta_\epsilon = 1.999, \Delta_\sigma = 0.999$, where the two solutions converge, as shown in Figure \ref{4DSAW_T}. This is expected, since the upper critical dimension for the self-avoiding walk is $D=4$. All bootstrapped SAW scaling dimensions are collected in Table \ref{stable}.
\begin{table}
\begin{center}
\caption{Polymer scaling dimensions. Bold values are calculated with the bootstrap, and adjacent values in parenthesis are either exact results ($D=2,D=4$), computed by $\epsilon$-expansion ($\Delta_\epsilon$ in $D=3$) \cite{Wilson}, or Borel summation ($\Delta_\sigma$ in $D=3$) \cite{Zinn}. }
\label{stable}
\begin{tabular}{ c c c c}
\hline \hline
$D$ & $\Delta_\sigma$ & $\Delta_T$ &$\Delta_\epsilon$ \\
\hline
$2$ & $\mathbf{0.101} \, (5/48)$& - \, $(2/3)$ &$\mathbf{0.666}\,(2/3)$ \\
$3$ &-\, $(0.514)$ & $\mathbf{1.326}$\,$(1.336)$ &$\mathbf{1.326}$ \,$(1.336)$ \\
$4$ & $\mathbf{0.999} \, (1)$ & $\mathbf{1.999} \, (2)$&$\mathbf{1.999}\, (2)$ \\
\hline \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=0.65\textwidth]{3DSAW_New.pdf}
\caption{{\bf 3D SAW.} Logarithm of the smallest singular value $z$ of $\mathbf{F}$ with $N=5, M=6$ as a function of $\Delta_\sigma$ and $\Delta_\epsilon$ for fixed $\Delta_\sigma=0.514$. Each curve corresponds to a distinct value of $\Delta_T$, linearly spaced from $\Delta_T = 1.28$ (left) to $\Delta_T=1.38$ (right). $\log(z)$ has a minimum at $\Delta_T = 1.326, \Delta_\epsilon=1.326$. }
\label{3DSAW}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.65\textwidth]{4DSAW_T_New.pdf}
\caption{{\bf 4D SAW.} Logarithm of the smallest singular value $z$ of $\mathbf{F}$ with $N=6, M=8$ as a function of $\Delta_\sigma$ and $\Delta_\epsilon$ at $\Delta_T=1.999$. Each curve undergoes two dips in $\log(z)$, with one fixed at $\Delta_\epsilon=\Delta_T$ and the second shifting with $\Delta_\sigma$, which varies linearly from $ \Delta_\sigma=0.95$ (left) to $\Delta_\sigma=1.05$ (right). The two solutions coincide and achieve a minimal $\log(z)$ at $\Delta_T=1.999, \Delta_\epsilon = 1.999, \Delta_\sigma = 0.999$.}
\label{4DSAW_T}
\end{center}
\end{figure}
\vfill\eject
\section{V. Summary}
The primary purpose of this work was to determine whether or not percolation and the self-avoiding walk could be distinguished with the conformal bootstrap. Though both theories share the same fusion algebra, central charge, and spin field scaling dimension in $D=2$, we have shown that they can be isolated. Using a simplistic spectrum of operators based on Virasoro symmetry, and excluding descendants of the identity to indirectly specify $c=0$, the identity operator and a pair of spin $2$ operators -- a descendant of $\epsilon$ at level $2$ and an $O(N)$ symmetric tensor operator whose scaling dimension becomes degenerate with that of $\epsilon$ as $N\rightarrow 0$ -- can be used to discriminate between percolation and the SAW in any $D$.
For percolation in two and four spatial dimensions, our bootstrapped scaling dimensions agree relatively well with established results. In particular, in $4D$ our determination of the correlation length critical exponent $\nu$, obtained with only $N=5$ operators, is within about $0.1\%$ of the value obtained by an involved four-loop calculation \cite{Gracey}. For the upper critical dimension in $6D$, while no rigorous bootstrapped solution is found, we do see evidence of the anticipated free field solution. Bootstrapping percolation in odd $D$ is not as robust; to obtain a solution with our particular set of selection rules, $\Delta_\sigma$ must be used as input. We point out that while this is the first treatment of percolation in $D>2$ with the conformal bootstrap, a similar implementation has been used to extract the structure constants of $2D$ percolation \cite{Picco}. Applying the bootstrap to the SAW, for the upper critical dimension $4D$ we easily recover the expected scaling dimensions of the free theory. However, in $D=2$ and $D=3$ additional input is required to find solutions; namely, at least one of the three independent scaling dimensions appearing in the truncated spectrum must be held fixed. To conclude, while more accurate results are surely possible by using larger, more complicated spectra, percolation and the self-avoiding walk are clearly distinguishable with the conformal bootstrap.
Encouraged by these results, it would be interesting to use the conformal bootstrap to explore the space of $c=0$ theories in a systematic manner, since many such theories are expected to have important physical applications. In particular very interesting problems in Anderson localization, such as the elusive critical point for transitions in the integer quantum Hall effect, are expected to be described by a $c=0$ CFT in 2D \cite{Mirlin}.
\section{Acknowledgments}
We thank Tom Hartman, David Poland, and Gesualdo Delfino for encouraging discussions.
\vfill\eject
\section{A. Percolation Fusion Rule}
A potential criticism of our work is that, to be in accordance with the exact fusion rule $[ \sigma ] \times [\sigma] = [\epsilon]$, the identity operator's contribution should vanish at a solution if that solution is to truly represent percolation. In practice we instead find that the OPE coefficient of the identity, though minimized at a solution, is larger than that of the energy operator. In this appendix we posit that, while physically the identity operator should decouple, its inclusion (a) is a numerical necessity when treating percolation with global conformal blocks in the Gliozzi bootstrap, and (b) does not alter the bootstrapped scaling dimensions.
To show this, we'll consider $2D$ percolation. In $2D$ the 4-pt function can be written in terms of the Virasoro conformal blocks
\begin{equation}
\langle \mathcal{O}(\infty) \mathcal{O}(1) \mathcal{O}(z) \mathcal{O}(0) \rangle = \sum_p a_p|\mathcal{F}(c,h,h_p,z)|^2.
\end{equation} Here $a_p$ are the OPE coefficients squared (note in general $a_p \neq p_{\Delta,l}$ \cite{Perlmutter}), $\mathcal{F}$ the Virasoro conformal blocks, and the sum runs over Virasoro primaries. The utility of the Virasoro blocks for our purposes is twofold. First, each block contains all contributions to the four point function from a given conformal family, leading to simplification of the bootstrap equations for fusion rules containing just one Virasoro primary. Second, they're a function of $c$ and thus $c = 0$ can be implemented directly.
To bootstrap with the Virasoro blocks, the analogues of the formulas provided in section II of the main text are required. These are provided in \cite{Esterlis}, for example, and restated here.
Crossing symmetry is respected if
\begin{equation}
\sum_p a_p\left[\mathcal{F}(c,h,h_p,z)\overline{\mathcal{F}}(c,h,\overline{h}_p,\overline{z})-\mathcal{F}(c,h,h_p,1-z)\overline{\mathcal{F}}(c,h,\overline{h}_p,1-\overline{z})\right]=0.
\end{equation} Expanding around $z=\overline{z}=1/2$ generates the homogeneous system
\begin{equation}
\sum_p a_p \, g_{h,h_p}^{(m,n)} = 0
\label{vircrossing}
\end{equation} with
\begin{equation}
g_{h,h_p}^{(m,n)} = \partial_z^m \partial_{\overline{z}}^n \left[\mathcal{F}(c,h,h_p,z)\overline{\mathcal{F}}(c,h,\overline{h}_p,\overline{z})-\mathcal{F}(c,h,h_p,1-z)\overline{\mathcal{F}}(c,h,\overline{h}_p,1-\overline{z})\right]|_{z=\overline{z}=1/2}.
\label{g}
\end{equation} Note $m+n$ must be odd or else $g_{h,h_p}^{(m,n)}$ is trivially zero. For the fusion rule $[\sigma] \times [\sigma] = [\epsilon]$ the homogeneous system becomes
\begin{equation}
\partial_z^m \mathcal{F}(c,h_\sigma,h_\epsilon,z)|_{z=1/2} = 0 \quad \text{or}\quad \partial_z^n \mathcal{F}(c,h_\sigma,h_\epsilon,z)|_{z=1/2} = 0.
\label{simple_crossing}
\end{equation} As argued in \cite{Esterlis}, since $m+n$ is odd, either all even or all odd derivatives vanish at a solution to the crossing equation.
The argument above implies a simple way to determine whether or not it is even possible to use the Gliozzi bootstrap to find a solution with the correct OPE coefficients for $2D$ percolation: since either all odd derivatives or all even derivatives must vanish at a solution, if $\partial_z^1 \mathcal{F}(c,h_\sigma,h_\epsilon,z)|_{z=1/2} \neq 0$ and $\partial_z^2 \mathcal{F}(c,h_\sigma,h_\epsilon,z)|_{z=1/2} \neq 0$ as $c\rightarrow 0$ near $(h_\sigma = 5/96, h_\epsilon = 5/8)$, then percolation cannot be correctly found by the conformal bootstrap without treating the logarithmic CFT aspects more carefully.
The results (Fig. \ref{app_perc_clims}) are unfortunately not so clear. With $h_\epsilon = 5/8$ fixed, for $c>0$ no solution is found regardless of how close $c$ is to zero. For $c<0$, both even and odd derivatives vanish at two points equidistant from $h_\sigma = 5/96$. The two solutions converge as $c\rightarrow 0$, as shown in Fig. \ref{app_clim} for $\partial_z^1 \mathcal{F}(c,h_\sigma, 5/8,z)$. This structure is present only very near to $h_\epsilon=5/8$. This is expected; away from $q=1$ the fusion rule becomes $[\sigma] \times [\sigma] = [1]+[\epsilon]$.
The minima (maxima) of the $c>0$ ($c<0$) curves in Fig. \ref{app_perc_clims} all occur exactly at $h_\sigma = 5/96$, and clearly should correspond to $\partial_z^m \mathcal{F}(c,h_\sigma,h_\epsilon,z)|_{z=1/2}=0$ since percolation should be a solution. The shift above $0$ (which does not change as $|c| \rightarrow 0$) might represent the error in ignoring logarithmic terms in the OPE, which would be compounded in higher order derivatives. Including the identity operator in the fusion rule appears to correct the shift shown in Fig. \ref{app_perc_clims} at the cost of no longer obtaining the correct OPE coefficients. In this case the sum in \eqref{vircrossing} contains $N=2$ terms: the $\epsilon$ block and the identity block. The latter is given by the Virasoro vacuum block, truncated to include only the lowest order contribution
\begin{equation}
\mathcal{F}(c,h,0,z) = 1/z^{2h}.
\end{equation}
\begin{figure}
\centering
\subfigure[\,$c=10^{-6}$]{\includegraphics[width=.49\textwidth]{posc.pdf}}\hfill
\subfigure[\,$c=-10^{-6}$]{\includegraphics[width=.49\textwidth]{negc.pdf}}
\caption{$\partial_z^m \mathcal{F}(c,h_\sigma,5/8,z)|_{z=1/2}$ for $m=1,2,3$ (solid blue, green dash-dot, dashed red)}
\label{app_perc_clims}%
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.65\textwidth]{clim.pdf}
\caption{$h_\sigma$ vs $\partial_z^1 \mathcal{F}(c,h,5/8,z)|_{z=1/2}$. The two solutions converge towards $h_\sigma = 5/96$ as $c\rightarrow 0$. $c$ values: $-10^{-5}$ (solid blue), $-10^{-6}$ (dashed green), $-10^{-7}$ (red circles), $-10^{-8}$ (cyan dash-dot), $-10^{-9}$ (magenta dots).}
\label{app_clim}
\end{center}
\end{figure}
Since now two blocks are included in the fusion rule, the bootstrap must be performed with \eqref{g} rather than \eqref{simple_crossing}. With $m+n$ necessarily odd, we take $M=2$ derivatives and
\begin{equation}
d_{23}=\begin{pmatrix}
g_{h,0}^{(2,1)} & g_{h,5/8}^{(2,1)} \\
g_{h,0}^{(3,0)} & g_{h,5/8}^{(3,0)}
\end{pmatrix}
\label{d23}
\end{equation} with $c=-10^{-6}$ in order to make the closest possible comparison to the green and red curves of Fig. \ref{app_perc_clims}b. The smallest singular value of $d_{23}$ is found to vanish at $h_\sigma=0.0519 \approx 5/96$. Thus the two solutions equidistant from $h_\sigma = 5/96$ found with the exact fusion rule are replaced by a single solution at the proper value solely by including the identity operator. The exact fusion rule is sufficient to find percolation only if $c\rightarrow 0^-$, which is unenforceable when bootstrapping with global blocks as in the main text. Keeping the identity operator in our fusion rule is essentially a numerical crutch: a way of correcting for the use of scalar rather than logarithmic conformal blocks.
The drawback of retaining the identity operator in our fusion rule comes in the form of inaccurate OPE coefficients. With $a_\epsilon$ normalized to unity, inserting our solution ($h_\sigma = 0.0519, h_\epsilon = 5/8$) into the linear system associated with $d_{23}$ yields $a_\mathds{1} = 0.453$. As the magnitude of $c$ is further decreased this OPE coefficient grows, becoming larger than $a_\epsilon$. For example, generating $d_{23}$ with $c=-10^{-7}$ instead leads to an approximate solution at $h_\sigma = 0.0521$ and $a_\mathds{1} \approx 4.3$. It is encouraging that even as $a_\mathds{1}$ increases, $h_\sigma$ remains relatively unperturbed. Including the identity operator here, and in the main text, does not drive the solution away from the percolation critical point. In this case its non-vanishing contribution, and more specifically $a_\mathds{1} > a_\epsilon$, appears to be a signature of bootstrapping very close to $c=0$. This analysis suggests that the deviation from the known exact $h_\sigma, h_\epsilon$ values of $2D$ percolation has more to do with the truncation of the $\epsilon$ block than with the presence of the identity operator. Also playing a role is the choice of derivative constraints used to construct the homogeneous system of equations in the bootstrap, which is the subject of Appendix B.
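The search procedure used throughout this appendix -- scan a scaling dimension, minimize the smallest singular value of the truncated crossing matrix, and read relative OPE coefficients off the corresponding null vector -- can be sketched as follows. The $2\times 2$ matrix below is a hypothetical toy stand-in for $d_{23}$ (we do not evaluate Virasoro blocks here); it is constructed, purely for illustration, to become singular at $h_\sigma = 5/96$.

```python
# Toy sketch of the Gliozzi-style search: scan h_sigma, minimize the
# smallest singular value of a truncated crossing matrix, then read
# relative OPE coefficients off the associated null vector.
# NOTE: this 2x2 matrix is NOT the Virasoro-block matrix d_23; it is a
# hypothetical stand-in built to become singular at h_sigma = 5/96.
import numpy as np

H_STAR = 5.0 / 96.0

def crossing_matrix(h):
    # illustrative matrix with det = 2*(H_STAR - h), singular at h = H_STAR
    return np.array([[1.0, 2.0],
                     [h, 2.0 * H_STAR]])

# Scan h and locate the minimum of the smallest singular value,
# as done for d_23 in the text.
grid = np.linspace(0.0, 0.1, 2001)
svals = [np.linalg.svd(crossing_matrix(h), compute_uv=False)[-1] for h in grid]
h_found = grid[int(np.argmin(svals))]

# At the solution, the right-singular vector of the (near-)zero singular
# value plays the role of the vector of OPE coefficients
# (for this toy matrix the exact null vector is proportional to (2, -1)).
null_vec = np.linalg.svd(crossing_matrix(h_found))[2][-1]
ratio = null_vec[0] / null_vec[1]
```

In the same way, the null vector of $d_{23}$ at the located minimum is what yields the coefficients $a_\mathds{1}$ and $a_\epsilon$ quoted above.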
\section{B. Derivatives}
If a theory admits a good truncation, the choice of Taylor expansion terms used to create $\mathbf{F}$ should not strongly influence the outcome of the bootstrap. With the small number of operators kept in this work, however, significant volatility in convergence is observed as the chosen set of derivatives is changed. This also arises in \cite{Hikami2}, where for the $3D$ self-avoiding walk $\Delta_\epsilon = 1.325$ is found with just one of the four $3 \times 3$ minors considered.
To illustrate we consider the spectrum
\begin{equation}
[\Delta_\sigma,0] \times [\Delta_\sigma,0] = [0,0] + [\Delta_\epsilon,0] + [\Delta_\epsilon+4,4] + [\Delta_\epsilon+6,6] + [\Delta_\epsilon+8,8]+ \dots
\label{appendix_spectrum}
\end{equation} in $2D$. Aside from the identity these operators are all present in both the SAW and percolation. With this fusion rule and fixed $\Delta_\sigma=5/48$, we report in Table \ref{deriv_table} the bootstrapped value of $\Delta_\epsilon$, located by minimizing the smallest singular value of the crossing matrix as a function of $\Delta_\epsilon$, for three different methods of choosing derivative constraints. For the natural choice $m\geq n$ (i.e. the $(m,n)$ sequence $(1,0), (1,1), (3,0),(3,1) \dots$) a solution which converges to the $2D$ self-avoiding walk value $\Delta_\epsilon = 2/3$ is found. On the other hand, employing only longitudinal derivatives and excluding the $(m,n)=(1,0)$ constraint (i.e. the sequence $(3,0),(5,0),(7,0)\dots$) finds a $\Delta_\epsilon$ consistent with percolation, as shown in Figure \ref{2Dfixed}.
Thus there is evidence that polymers and percolation can be distinguished without appealing to the $O(N)$ symmetry of the self-avoiding walk as done in the main text, but instead by being selective with the Taylor expansion terms used to construct $\mathbf{F}$. While this may appear to be just a trivial tuning of the system of equations to achieve a known result, using the same set of operators \eqref{appendix_spectrum} and the derivatives from column 2 (column 3) of Table \ref{deriv_table} also picks out percolation (SAW) in 4D, as shown in Figure \ref{4DP} (Figure \ref{4DSAW}).
The decision to exclude transverse derivatives in the main text was initially made out of convenience; evaluating longitudinal derivatives of conformal blocks is less computationally intensive than evaluating their transverse counterparts. However, it is clear that setting $n=0$ and using only longitudinal derivatives is more successful at bootstrapping $2D$ percolation. Presumably this variance in outcome, shown in Table \ref{deriv_table}, is evidence that our spectrum of operators \eqref{perc_spectrum} is not comprehensive. With an exact, complete set of operators one would anticipate the results of the bootstrap being more robust. Indeed, when the fusion rule \eqref{appendix_spectrum} is expanded to include all descendants of the energy operator, which are inherently present in the $\epsilon$ Virasoro blocks making up $d_{23}$ in the previous appendix, utilizing the $(m,n)=(2,1)$ constraint is not a problem. Appendix A also sheds some light on why accuracy is improved if the $(m,n)=(1,0)$ term is avoided. In Fig. \ref{app_perc_clims}b the curves corresponding to $m=2$ and $m=3$ have solutions which exactly coincide, while those of $m=1$ deviate further from $h_\sigma = 5/96$. This discrepancy is eliminated as $c\rightarrow 0^-$, but this is not enforceable with global conformal blocks. In principle, implementing logarithmic conformal blocks \cite{Hogervorst2} along with increasing the number of retained operators should eliminate any need to worry about which derivative constraints are chosen.
\begin{table}
\begin{center}
\caption{Comparison of possible truncations of the crossing equation in two dimensions with fixed $\Delta_\sigma = 5/48$ and square matrices ($N=M$). In each successive row of the table, the lowest-dimension operator from \eqref{appendix_spectrum} and the lowest-order derivative available are added, and the bootstrapped $\Delta_\epsilon$ is reported. Three possible methods of choosing derivatives are considered.}
\label{deriv_table}
\begin{tabular}{ c c c c }
\hline \hline
& $m\geq 1$ \quad & $m\geq 3$ \quad & $m\geq n$ \\
$N$ &$(m,0)$ &$(m,0)$ &$(m,n)$ \\
\hline
2 & 1.221 &1.321 & - \\
3 & 1.216 &1.250 & 0.705 \\
4 & 1.216 &1.260& 0.681 \\
5 & 1.216 &1.255 & 0.672\\
6 & 1.215 &1.255 & 0.667\\
\hline \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\centering
\subfigure[\, $m \geq n$ derivative prescription finds a solution at $\Delta_\epsilon=0.667$, consistent with $2D$ SAW.]{\includegraphics[width=.49\textwidth]{2DSAWfixed.pdf}}\hfill
\subfigure[\, $m \geq 3$ derivative prescription finds a solution at $\Delta_\epsilon=1.255$, consistent with $2D$ percolation.]{\includegraphics[width=.49\textwidth]{2DPercfixed.pdf}}
\caption{Logarithm of the smallest singular value $z$ of $\mathbf{F}$ for $N=M=6$ and fixed $\Delta_\sigma=5/48$.}
\label{2Dfixed}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.65\textwidth]{4DSAW_app.pdf}
\caption{ Logarithm of the smallest singular value $z$ of $\mathbf{F}$ with $N=6, M=7$ as a function of $\Delta_\sigma$ and $\Delta_\epsilon$, with the derivative prescription for 2D SAW. Each curve corresponds to a distinct value of $\Delta_\sigma$, linearly spaced from $\Delta_\sigma = 0.9$ (left) to $\Delta_\sigma =1.1$ (right). Minimizing $\log(z)$ finds the solution $\Delta_\sigma = 1.000, \Delta_\epsilon=2.000$ as anticipated for the $4D$ SAW.}
\label{4DSAW}
\end{center}
\end{figure}
\vfill\eject
\section{Mathematical setting}
In this section we introduce the basic tools.
\subsection{The spaces}
For a complex number $b=\Re b+i\Im b$ we denote by $\overline b$ the complex conjugate
($\overline b=\Re b-i \Im b$)
and by
$|b|$ the absolute value ($|b|=\sqrt{(\Re b)^2+(\Im b)^2}$).
We consider the following subsets of $\mathbb Z^2$:
\[
\mathbb Z^2_0
=\{k=(k^{(1)},k^{(2)})\in \mathbb Z^2: k\neq 0\}
\]
\[
{\mathbb Z ^2_+}
=\{k=(k^{(1)},k^{(2)})\in \mathbb Z^2_0: k^{(1)}>0\}\cup \{k=(k^{(1)},k^{(2)})\in \mathbb Z^2_0: k^{(1)}=0, k^{(2)}>0\}
\]
and
\[
\mathbb Z^2_-=\mathbb Z^2_0\setminus \mathbb Z^2_+
\]
When $k=(k^{(1)},k^{(2)})\in \mathbb Z^2$, we denote by $|k|$
the absolute value ($|k|=\sqrt {(k^{(1)})^2+(k^{(2)})^2}$).
We consider the separable Hilbert space $\mathcal H^0$ which is the $L^2$-closure of the space of smooth vector fields which are
periodic, have zero mean value
and are divergence free.
Let $\{h_k\}_{k}$ be the basis for $\mathcal H^0$, given by
$h_k(\xi)=\frac 1{2\pi} \frac {k^\perp}{|k|}e^{i k \cdot \xi}$ for $k \in \mathbb Z^2_0$
and $\xi \in \mathbb T$.
Notice that, for any $k \in \mathbb Z^2_0$,
$h_{-k}(\xi)=-\overline{h}_k(\xi)$
and $\Delta h_k=-|k|^2 h_k$. Therefore
\[
\mathcal H^0=\{v(\xi)=\sum_{k\in {\mathbb Z ^2_0}} v_k h_k(\xi): v_{-k}=-\overline v_k\ \forall k,
\sum_{k\in{\mathbb Z ^2_0}} |v_k|^2<\infty\}
\]
Notice that the complex coefficients $v_k$ must satisfy
$v_{-k}=-\overline v_k$ in order to get a real vector $v$.
More generally, for $r\in \mathbb R$ we define
\[
\mathcal H^r=\{v(\xi)=\sum_{k\in {\mathbb Z ^2_0}} v_k h_k(\xi): v_{-k}=-\overline v_k\ \forall k,
\sum_{k\in{\mathbb Z ^2_0}} |k|^{2r}|v_k|^2<\infty\}.
\]
This is a Hilbert space with scalar product
\[
(u,v)_{\mathcal H^r}=\sum_{k\in {\mathbb Z ^2_0}} |k|^{2r}u_k \overline v_k .
\]
Following \cite{BL},
we define the periodic divergence-free vector Sobolev spaces ($r\in
\mathbb R, 1\le p\le \infty$)
$$
{\mathcal H}^r_p=\{ \;
v =\sum_{k \in {\mathbb Z ^2_0}} v_k h_k:
\sum_{k\in{\mathbb Z ^2_0}} v_k |k|^r h_k \in [L^p(\mathbb T)]^2\ \}
$$
and the periodic divergence-free vector Besov spaces as real interpolation
spaces
\begin{equation*}
\begin{split}
\mathcal B^{r}_{p\,q}=(\mathcal H^{r_0}_p,
\mathcal H^{r_1}_p)_{\theta,q}
, \qquad &r \in \mathbb R, 1\leq p,q\le \infty
\\
& r=(1-\theta)r_0+\theta r_1,
\qquad
0<\theta<1
\\
\end{split}
\end{equation*}
In particular $\mathcal B^r_{2\,2}=\mathcal H^r_2 =\mathcal H^r$. Moreover (see \cite{BL})
\[
\begin{split}
\|v\|_{\mathcal B^s_{p\, q_1}} &\le \|v\|_{\mathcal B^s_{p\, q_2}} \qquad \text{ for } q_2\le q_1\\
\|v\|_{\mathcal B^{s_1}_{p\, q}} &\le \|v\|_{\mathcal B^{s_2}_{p\, q}} \qquad \text{ for } s_1\le s_2\\
\|v\|_{\mathcal B^{s_1}_{p_1\, q}} &\le C\|v\|_{\mathcal B^{s_2}_{p_2\, q}} \qquad \text{ for } s_1-\frac 2{p_1}=s_2-\frac2{p_2}
\end{split}
\]
Here $C$ is a generic constant. We make the convention to denote
different constants by the same symbol $C$, unless we want to mark them
for further reference.
A useful product estimate in Besov spaces is the following, due to
Chemin (see Corollary 1.3.1 in \cite{Ch}):
\begin{equation}\label{chemin}
\|v_1 v_2\|_{\mathcal B^s_{pq}}\le
\frac{C^{s_1+s_2}}{s_1+s_2}\|v_1\|_{\mathcal B^{s_1}_{pq}} \|v_2\|_{\mathcal B^{s_2}_{pq}}
\end{equation}
if
\[
s_1+s_2>0, \qquad
s_1<\frac 2p, \qquad s_2<\frac 2p, \qquad s=s_1+s_2-\frac 2p
\]
and $p,q \in [1,\infty]$.
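For orientation (a direct specialization, added here as an illustrative remark), taking $p=q=2$ and $s_1=s_2=s'\in(0,1)$, all the conditions above are satisfied with $s=2s'-1$; recalling that $\mathcal B^{s}_{2\,2}=\mathcal H^{s}$, estimate \eqref{chemin} then reads
\[
\|v_1 v_2\|_{\mathcal H^{2s'-1}}\le
\frac{C^{2s'}}{2s'}\|v_1\|_{\mathcal H^{s'}} \|v_2\|_{\mathcal H^{s'}},
\]
a familiar bilinear estimate in the Sobolev scale.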
\subsection{The abstract equation}
Let us set the viscosity $\nu=1$ in system \eqref{eq-initial}. Then we
write the evolution in abstract form as
\begin{equation}\label{NS}
dv(t)=Av(t)\ dt -B(v(t),v(t))\ dt +dw^H(t)
\end{equation}
with the operators formally defined as $A=\Delta$ and $B(u,v)=P[(u\cdot\nabla)v]$, where
$P$ is the projector operator onto the space of divergence free vector fields.
We can represent the stochastic forcing term as
\begin{equation}\label{noise}
w^H(t,\xi)=\sum_{k\in{\mathbb Z ^2_0}}h_k(\xi) b^H_k(t), \qquad (t, \xi) \in \mathbb R \times\mathbb T
\end{equation}
where $\{b^H_k\}_{k \in \mathbb Z^2_+}$ is a sequence of i.i.d. complex fractional Brownian motions
defined on a complete probability space $(\Omega,\mathbb F, \mathbb P)$ with filtration
$\{\mathbb F_t\}_{t \in \mathbb R}$, and $b^H_{-k}=\overline{b^H_k}$ for all $k \in {\mathbb Z ^2_+}$.
We denote by $\mathbb E$ the mathematical expectation with respect to $\mathbb P$.
This means that
$b^H_k(t)=\Re b_k^H(t)+i\Im b_k^H(t)$ and $\{\Re b^H_k, \Im b^H_k\}_{k \in \mathbb Z^2_+}$
is a sequence of i.i.d. standard real fractional Brownian motions (fBm) with Hurst parameter $H$.
Each element of the sequence is a centered Gaussian process whose covariance
is
\[
C(t,s)=\frac 12 (|t|^{2H}+|s|^{2H}-|t-s|^{2H})
\]
$w^H$ is called an $\mathcal H^0$-cylindrical fractional Brownian motion and one can prove that the series \eqref{noise} converges in any space $U$ with
continuous embedding $\mathcal H^0\subset U$ of Hilbert-Schmidt type.
Now let us define rigorously the operators $A$ and $B$.
The Stokes operator $A$, as a linear operator in $\mathcal B^{s}_{pq}$ with domain $\mathcal B^{s+2}_{pq}$,
generates an analytic semigroup $\{e^{tA}\}_{t\ge 0}$ in $\mathcal B^{s}_{pq}$ and
\begin{equation}\label{stima-expA}
\|e^{tA}v\|_{\mathcal B^{s_1}_{pq}} \le \frac C{t^{\frac{s_1-s_2}2}} \|v\|_{\mathcal B^{s_2}_{pq}}
\end{equation}
for any $t> 0$, $s_1>s_2$.
As far as the bilinear term $B(u,v)=P[(u\cdot \nabla)v]$ is concerned, we recall some basic properties
(see \cite{Temam}). Let $\langle\cdot,\cdot\rangle$ denote the
$\mathcal H^{-r}-\mathcal H^r$ duality bracket. One checks by integration by parts that
\begin{equation}\label{tril}
\langle B(u_1,u_2),u_3\rangle=-\langle B(u_1,u_3),u_2\rangle
\end{equation}
and taking $u_2=u_3$
\begin{equation}\label{Bnull}
\langle B(u_1,u_2),u_2\rangle=0 .
\end{equation}
These relations hold for regular entries and are then extended to
more general vector fields by density.
A basic estimate is (see \cite{Giga}, Lemma 2.2)
\begin{equation}\label{stimeGiga}
\|B(u,v)\|_{\mathcal H^{-\delta}}\le C\|u\|_{\mathcal H^{\theta}} \|v\|_{\mathcal H^{\rho}}
\end{equation}
when
\[
0\le \delta<2,\qquad \rho>0,\qquad \theta>0
\]
\[
\rho+\delta>1,\qquad
\theta+\rho+\delta\ge 2
\]
Other estimates have been given before in \eqref{chemin}; indeed, by the divergence free condition we have
$B(u,v)=P[\text{div}\ (u \otimes v)]$; therefore $\|B(u,v)\|_{\mathcal B^s_{pq}}\le C\|u \otimes v\|_{\mathcal B^{s+1}_{pq}}$.
Moreover, as done in \cite{AF}, we can develop the bilinear term in Fourier series.
Given $v=\sum_{l\in {\mathbb Z ^2_0}} v_l h_l$ and $u=\sum_{h \in {\mathbb Z ^2_0}} u_h h_h$,
we have formally
\[
\begin{split}
B(u,v)&=iP\sum_{h,l\in \mathbb Z^2_0}u_h \frac {h^\perp\cdot l}{|h|}\frac {e^{i h \cdot \xi}}{2\pi}
v_l \frac {l^\perp}{|l|}\frac {e^{i l \cdot \xi}} {2\pi}
\\&=
iP\sum_{k\in {\mathbb Z ^2_0}}\left(\sum_{\substack{h \in {\mathbb Z ^2_0}\\ h\neq k}}
\frac{h^\perp\cdot k}{2\pi |h||k-h|}u_hv_{k-h} (k-h)^\perp \right)
\frac {e^{i k \cdot \xi}} {2\pi}
\end{split}
\]
Using that the projector $P$ acts on the $k$-th component as
$P_k a=\frac{a\cdot k^\perp}{|k|^2}k^\perp$, we get
\[
B(u,v)
= i\sum_{k\in {\mathbb Z ^2_0}}\left(\sum_{\substack{h \in {\mathbb Z ^2_0}\\ h\neq k}}
\frac{h^\perp\cdot k}{2\pi |h||k-h|}u_hv_{k-h} \frac{(k-h)^\perp \cdot k^\perp}{|k|} \right)
k^\perp \frac {e^{i k \cdot \xi}} {2\pi|k|}
\]
Summing up, the bilinear term can be written in Fourier series as
\begin{equation}\label{B-inFourier}
B(u,v)=\sum_{k \in {\mathbb Z ^2_0}} B_k(u,v)h_k
\end{equation}
with
\begin{equation}\label{componentiBk}
\begin{split}
B_k(u,v)&=i \sum_{\substack{h \in {\mathbb Z ^2_0}\\ h \neq k}} \gamma_{h,k} u_h v_{k-h}
\\
\gamma_{h,k}\;&=\frac{1}{2\pi}\frac{(h^\perp \cdot k)\,((k-h)\cdot k)}{|h||k-h||k|}
\\
\end{split}
\end{equation}
Notice that $\overline B_k=-B_{-k}$.
The convergence of the series \eqref{B-inFourier} will be analysed in the next section.
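As a quick numerical sanity check of the symmetry $\overline B_k=-B_{-k}$ (an illustration of ours, not part of the argument), one can evaluate \eqref{componentiBk} on a symmetric truncation with randomly chosen coefficients satisfying the reality condition $u_{-h}=-\overline u_h$; under such a truncation the identity holds exactly.

```python
# Check conj(B_k) = -B_{-k} for the Fourier coefficients of the bilinear
# term, on a symmetric truncation |h^(1)|,|h^(2)| <= R of random fields
# satisfying the reality condition u_{-h} = -conj(u_h).
import math, random

random.seed(0)
R = 3
modes = [(a, b) for a in range(-R, R + 1)
         for b in range(-R, R + 1) if (a, b) != (0, 0)]

def random_field():
    # random complex coefficients with u_{-h} = -conj(u_h)
    u = {}
    for h in modes:
        if h not in u:
            c = complex(random.gauss(0, 1), random.gauss(0, 1))
            u[h] = c
            u[-h[0], -h[1]] = -c.conjugate()
    return u

def gamma(h, k):
    # gamma_{h,k} = (h^perp . k)((k-h) . k) / (2 pi |h| |k-h| |k|)
    kh = (k[0] - h[0], k[1] - h[1])
    num = (-h[1] * k[0] + h[0] * k[1]) * (kh[0] * k[0] + kh[1] * k[1])
    den = 2 * math.pi * math.hypot(*h) * math.hypot(*kh) * math.hypot(*k)
    return num / den

def B_k(u, v, k):
    # B_k(u,v) = i sum_{h != k} gamma_{h,k} u_h v_{k-h}, on the truncation
    s = 0j
    for h in modes:
        kh = (k[0] - h[0], k[1] - h[1])
        if kh in v:  # keeps h != k and |k-h| within the truncation
            s += gamma(h, k) * u[h] * v[kh]
    return 1j * s

u, v = random_field(), random_field()
k = (1, 2)
err = abs(B_k(u, v, k).conjugate() + B_k(u, v, (-k[0], -k[1])))
```

The cancellation relies on $\gamma_{-h,-k}=\gamma_{h,k}$ together with the reality condition, exactly as in the identity stated above.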
Our aim is to study equation \eqref{NS} for $H \neq \frac 12$.
Indeed the case $H=\frac 12$ has been studied in \cite{AC,AF,DPD,Deb}:
Da Prato and Debussche proved the existence of a strong mild solution for
$\mu$-a.e. initial condition (where $\mu$ is the Gibbs measure of the enstrophy, introduced in
\cite{AFHK} which is an invariant measure for equation \eqref{NS}),
whereas Albeverio and Ferrario proved pathwise uniqueness of these solutions.
We shall prove a local existence and uniqueness result for $\frac 7{16}<H<\frac 12$
and a global existence and uniqueness result
for $H>\frac 12$. This latter result improves that of
\cite{FSV}; indeed, the case of cylindrical fBm is
included in \cite{FSV} but only for $H>\frac 34$ (see Theorem 5.1 and Corollary 4.3 there).
By the way, there are other differences with respect to \cite{FSV}:
in \cite{FSV} the spatial domain is not the torus but a generic smooth bounded
subset $D$ of $\mathbb R^2$ (and the Dirichlet boundary condition is assumed) and the solution is a
process with values in $L^4$ in time and space, whereas our solution is more regular
since a.a. paths are at least in
$C([0,T];\mathcal H^{\frac 12})$ (see next Theorem \ref{th-global-ex} with $H>\frac 34$)
and one knows that $\mathcal H^{\frac 12}\subset L^4(D)$.
\medskip
In order to analyze equation \eqref{NS} we introduce as in
\cite{DPD} two subproblems: the linear Stokes equation
\[
dz(t)=Az(t)\ dt +dw^H(t)
\]
and the equation for $u=v-z$
\[
\frac{du}{dt}(t)=Au(t)-B(u(t),u(t))-B(u(t),z(t))-B(z(t),u(t))-B(z(t),z(t))
\]
which is a Navier-Stokes type equation with random coefficients.
First we deal with the linear problem for $z$, then we define the bilinear
term $B(z,z)$ a.s. as in \cite{DPD,AF}
and finally face the nonlinear equation for $u$. At the end we recover the
existence result for $v$ from the representation $v=z+u$.
\section{The Stokes equation}\label{sectionStokes}
If we neglect the bilinear term in \eqref{NS}, we obtain the linear Stokes equation
\begin{equation}\label{eq-lin}
dv(t)=Av(t)\ dt+dw^H(t).
\end{equation}
We consider its stationary mild solution; this is the process
\begin{equation}\label{OU}
z(t)=\int_{-\infty}^t e^{(t-s)A}dw^H(s)
\end{equation}
We can write
\begin{equation}\label{z-in-Fourier}
\begin{split}
z(t)(\xi)&=\sum_{k\in {\mathbb Z ^2_0}} h_k(\xi) \int_{-\infty}^t e^{-|k|^2(t-s)}db_k^H(s)
\\&
\equiv
2\sum_{k\in{\mathbb Z ^2_+}}\frac {k^\perp}{2\pi|k|}\cos(k\cdot\xi) \int_{-\infty}^t e^{-|k|^2(t-s)}d\Re b_k^H(s)
\\&\qquad -
2\sum_{k\in{\mathbb Z ^2_+}}\frac {k^\perp}{2\pi|k|}\sin(k\cdot\xi) \int_{-\infty}^t e^{-|k|^2(t-s)}d\Im b_k^H(s)
\end{split}
\end{equation}
First, we provide a result for each stochastic convolution integral
appearing in the Fourier series representation.
\begin{lemma}
Let $\lambda>0$ and $b^H$ be a real fBm of Hurst parameter $H\in (0,1)$.
Then
\[
\int_{-\infty}^t e^{-\lambda(t-s)}db^H(s), \qquad t \in \mathbb R
\]
is a stationary centered Gaussian process whose variance is
\[
C_H\lambda^{-2H}
\]
where $C_H$ is the positive constant given in \eqref{costanteCH}.
\end{lemma}
\proof
Following the proof of Lemma 4.1 in \cite{FSV} we have that the random variables
\[
\int_{-\infty}^t e^{-\lambda (t-s)}db^H(s)
\]
and
\[
\int_0^{+\infty} e^{-\lambda r}db^H(r)
\]
have the same law.
Moreover, by self-similarity of the fBm,
the latter random variable has the same law as
\[
\lambda^{-H} \int_0^{+\infty} e^{-r}db^H(r).
\]
Therefore
\[
\mathbb E\left(\int_{-\infty}^t e^{-\lambda (t-s)}db^H(s)\right)^2
= \lambda^{-2H}
\mathbb E\left( \int_0^{+\infty} e^{-r}db^H(r)\right)^2
\]
We estimate $\mathbb E\left( \int_0^{+\infty} e^{-r}db^H(r)\right)^2$
using the representation
\[
\int_0^{+\infty} e^{-r}db^H(r)=\int_0^{+\infty} e^{-r}b^H(r) dr.
\]
This comes from the formula on a finite time interval
\[
\int_0^{T} e^{-r}db^H(r)=e^{-T}b^H(T)+\int_0^{T} e^{-r}b^H(r) dr
\]
and the fact that by the law of iterated logarithm (see \cite{BHOZ}) we get
\[
\lim_{T\to +\infty}|e^{-T}b^H(T)|=0 \qquad \mathbb P-a.s.
\]
Hence
\[\begin{split}
\mathbb E\left( \int_0^{+\infty} e^{-r}db^H(r)\right)^2
&=
\mathbb E\left( \int_0^{+\infty} e^{-r}b^H(r)dr\right)^2
\\&=
\int_0^{+\infty}\int_0^{+\infty}e^{-r}e^{-s} \frac {r^{2H}+s^{2H}-|r-s|^{2H}}2 \ dr \ ds
\end{split}\]
By elementary calculations one shows that the latter integral is finite.
We set
\begin{equation}\label{costanteCH}
C_H=\int_0^{+\infty}\int_0^{+\infty}e^{-r}e^{-s} \frac {r^{2H}+s^{2H}-|r-s|^{2H}}2 \ dr \ ds.
\end{equation}
\hfill\qe
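As a side remark (a check of ours, not needed for the proof), splitting the integrand in \eqref{costanteCH} and using $\int_0^{+\infty}\!\int_0^{+\infty}e^{-r-s}r^{2H}\,dr\,ds=\Gamma(2H+1)=\int_0^{+\infty}\!\int_0^{+\infty}e^{-r-s}|r-s|^{2H}\,dr\,ds$ gives the closed form $C_H=\frac 12\Gamma(2H+1)$; in particular $C_{1/2}=\frac 12$, matching the stationary variance $\frac 1{2\lambda}$ of the classical Ornstein--Uhlenbeck process. A crude quadrature confirms this:

```python
# Midpoint-rule evaluation of the double integral defining C_H, compared
# with the closed form C_H = Gamma(2H+1)/2 obtained by splitting the
# integrand (our own check; for H = 1/2 it gives C_H = 1/2, the familiar
# Ornstein-Uhlenbeck stationary variance with lambda = 1).
import math

def C_H_numeric(H, L=18.0, n=900):
    # crude midpoint rule on [0,L]^2; the tail beyond L is exponentially small
    h = L / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        er = math.exp(-r)
        for j in range(n):
            s = (j + 0.5) * h
            cov = 0.5 * (r**(2 * H) + s**(2 * H) - abs(r - s)**(2 * H))
            total += er * math.exp(-s) * cov
    return total * h * h

approx_half = C_H_numeric(0.5)           # expected: 1/2
approx_34 = C_H_numeric(0.75)
exact_34 = math.gamma(2 * 0.75 + 1) / 2  # closed form Gamma(2H+1)/2
```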
Now we come back to the stationary process $z$ given in \eqref{OU}. We have
the following result.
\begin{proposition}\label{pro-zLm}
For any $r<2(H-\frac 12)$ we have
\[
z\in C(\mathbb R;\mathcal H^r) \qquad \mathbb P-a.s.
\]
\end{proposition}
\proof
First we show that for any fixed time, the random variable $z(t)\in \mathcal H^r$, $\mathbb P$-a.s.
Indeed, using \eqref{z-in-Fourier} and the previous Lemma we have
\[\begin{split}
\mathbb E\big[\|z(t)\|_{\mathcal H^r}^2\big]
&=\sum_{k\in{\mathbb Z ^2_0}} |k|^{2r}\,\mathbb E\left|\int_{-\infty}^te^{-|k|^2(t-s)}db_k^H(s)\right|^2
\\&
=\sum_{k\in{\mathbb Z ^2_0}} |k|^{2r}2\frac{C_H}{|k|^{4H}}
\end{split}\]
The latter series is convergent for $4H-2r>2$, i.e. $r<2(H-\frac 12)$.
It follows that for any finite $m\ge 1$ we have
$z\in L^m_{\text{loc}}(\mathbb R;\mathcal H^r)$, $\mathbb P$-a.s. Indeed,
$z(t)$ is a Gaussian random variable; so all the moments are finite, i.e.
for any $m\ge 2$ there exists a finite constant $e_m$ such that
\[
\mathbb E[\|z(t)\|_{\mathcal H^r}^m]=e_m
\]
for any $t$.
Moreover, the process $z$ is a stationary process and, by interchanging the integrals,
for any $T_0<T_1$ we get
\[
\mathbb E\left[\int_{T_0}^{T_1} \|z(t)\|_{\mathcal H^r}^m dt \right]=
\int_{T_0}^{T_1}\mathbb E \left[\|z(t)\|_{\mathcal H^r}^m \right] dt=e_m(T_1-T_0)<\infty
\]
Since the expectation is finite, then $\int_{T_0}^{T_1} \|z(t)\|_{\mathcal H^r}^m dt<\infty$, $\mathbb P$-a.s.
The continuity in time of the trajectories
has been proved in \cite{DDM2002} when $H>\frac 12$ and in \cite{DDM2006} when $H<\frac 12$.
\hfill\qe
\begin{remark}
We see that when $H\le\frac 12$, the process $z$ at any fixed time takes values in a distributional space.
This is the source of the difficulty in our problem.
\end{remark}
\begin{remark}
From the proof of Proposition \ref{pro-zLm}, we obtain that the process $z$ is a stationary process
and for any time $t$ the law of $z(t)$ is the centered Gaussian measure
$\mu^H\sim\mathcal N(0,C_H(-A)^{-2H})$.
More precisely, we assign the measure $\mu^H$ on the sequences $\{(\Re v_k,\Im v_k)\}_{k \in {\mathbb Z ^2_+}}$
as
\begin{equation}\label{measure}
\mu^H=\otimes_{k \in {\mathbb Z ^2_+}}\mu^H_k
\end{equation}
with
\[
d\mu^H_k(x,y)=\frac{|k|^{4H}}{2\pi C_H}e^{-\frac{|k|^{4H}}{2C_H}(x^2+y^2)}\ dx \ dy
\]
When we identify the space $\mathcal H^r$ with that of the sequences
$\{(\Re v_k,\Im v_k)\}_{k \in {\mathbb Z ^2_+}}$ such that $\sum_{k\in{\mathbb Z ^2_+}} |k|^{2r}[(\Re v_k)^2+(\Im v_k)^2]<\infty$,
we get
$\mu^H(\mathcal H^r)=1$ for any $r<2(H-\frac 12)$ and
$\mu^H(\mathcal H^r)=0$ for any $r\ge2(H-\frac 12)$
(see \cite{Kuo}). Similarly, $\mu^H(\mathcal B^r_{pq})=1$ for any $r<2(H-\frac 12)$ and
$\mu^H(\mathcal B^r_{pq})=0$ for any $r\ge2(H-\frac 12)$.
\end{remark}
We finish this section with a result on the deterministic Stokes equation, that will be used in the sequel.
Given the deterministic linear problem
\[\begin{cases}
\dfrac{dx}{dt}(t)=Ax(t)+f(t), &\qquad t \in (0,T]\\
x(0)=x_0
\end{cases}
\]
we represent its mild solution as
\[
x(t)=e^{tA}x_0+\int_0^te^{(t-s)A}f(s)\ ds
\]
and we have (see \cite{b}, Proposition 4.1, based on \cite{dv})
\begin{proposition} \label{isom}
Let $1<p,q,r<\infty$ and $s \in \mathbb R$.\\
For any $f \in L^r(0,T;\mathcal B^{s}_{p\,q})$ and
$x_0 \in \mathcal B^{s+2-\frac{2}{r}}_{p\,r}$,
there exists a unique solution $x \in W^{1,r}(0,T)\equiv
\{ x \in L^r(0,T;\mathcal B^{s+2}_{p\,q}): \frac{dx}{dt} \in L^r(0,T;\mathcal B^{s}_{p\,q})\}$.
Moreover, the functions $x,\frac{dx}{dt}$ depend continuously on the data
$f$ and $x_0$,
that is there exists a positive constant $C$ such that
$$
\Big( \textstyle \int_0^T ( \|x(t)\|^r_{\mathcal B^{s+2}_{p\,q}}
+ \|\frac{dx}{dt}(t)\|^r_{\mathcal B^{s}_{p\,q}})\,dt\Big)^{1/r}
\leq
\Big( C \textstyle\int_0^T \|f(t)\|^r_{\mathcal B^{s}_{p\,q}} dt\Big)^{1/r} +
\|x_0\|_{\mathcal B^{s+2-\frac{2}{r}}_{p\,r}}
$$
Finally, the space $W^{1,r}(0,T)$
is continuously embedded into the space
$C([0,T];\mathcal B^{s+2-\frac{2}{r}}_{p\,r})$,
that is there exists
a positive constant $C$ such that
\begin{equation}\label{int-cont}
\|x\|_{C([0,T];\mathcal B^{s+2-\frac{2}{r}}_{p\,r})} \le \;C\; \|x\|_{W^{1,r}(0,T)}
\end{equation}
and therefore the initial condition makes sense.\\
All the constants depend only on $p,q,r,s$.
\end{proposition}
\section{The bilinear term}\label{sectionB}
When we study equation for the auxiliary process $u=v-z$,
there appears $B(z,z)$. We analyse the space regularity of this term.
Following \cite{AFHK,AC,DPD,AF}, we estimate it with respect to the Gaussian measure $\mu^H$.
\begin{proposition}\label{stimaBz}
Let $\frac 14<H<1$ and
\begin{align}
& \rho<4H-3& \text{ if } \frac 14<H<\frac 12 \label{cond-sotto}\\
& \rho< 2(H-1)& \text{ if } \frac 12\le H<1 \label{cond-sopra}
\end{align}
Then, for any $m \in \mathbb N$
\begin{equation}\label{stimaB(z,z)}
\int \|B(z,z)\|^{2m}_{\mathcal H^{\rho}}\ \mu^H(dz)<\infty .
\end{equation}
\end{proposition}
\proof
Let us begin by performing the computations for $m=1$.
First, we explain why we need the lower bound $H>\frac 14$.
By \eqref{componentiBk} we have
\begin{equation}\label{prima-stima}
\begin{split}
\int&\|B(z,z)\|_{\mathcal H^\rho}^2\ \mu^H(dz)
=\sum_{k\in{\mathbb Z ^2_0}} |k|^{2\rho}\int |B_k(z,z)|^2 \ \mu^H(dz)
\\&
=\sum_{k\in{\mathbb Z ^2_0}} |k|^{2\rho} \sum_{h,h^\prime\in{\mathbb Z ^2_0}; h,h^\prime\neq k}
\int \gamma_{h,k} z_h z_{k-h}\gamma_{h^\prime,k} \overline{ z_{h^\prime} z_{k-h^\prime}}\ \mu^H(dz)
\\&
=\sum_{k\in{\mathbb Z ^2_0}} |k|^{2\rho}\sum_{h \in {\mathbb Z ^2_0}, h \neq k}
(\gamma^2_{h,k} +\gamma_{h,k}\gamma_{k-h,k}) \int |z_h|^2 |z_{k-h}|^2 \ \mu^H(dz)
\\&
=2 \sum_{k\in{\mathbb Z ^2_0}} |k|^{2\rho}\sum_{h\in{\mathbb Z ^2_0}, h\neq k} \gamma^2_{h,k} \frac {C_H^2}{|h|^{4H} |k-h|^{4H}} .
\end{split}
\end{equation}
From \eqref{componentiBk} we have that
$\gamma_{h,k}=\gamma_{k-h,k}$ and $\gamma^2_{h,k}\le |k|^2$; then we can bound
$\int\|B(z,z)\|_{\mathcal H^\rho}^2\ d\mu^H(z)$ by
\begin{equation}\label{doubles}
\sum_{k\in{\mathbb Z ^2_0}} |k|^{2\rho+2}\sum_{h\in{\mathbb Z ^2_0}, h\neq k} \frac 1{|h|^{4H} |k-h|^{4H}}
\end{equation}
For any fixed $k$, the latter series (over $h$) is convergent if and only if $8H>2$. Therefore we require
\[
H>\frac 14.
\]
The precise dependence on $k$ of the inner series is given in Lemma \ref{lemma1} in
Appendix \ref{ar}.
Therefore the double series \eqref{doubles} is estimated by
\[
\begin{cases}
\displaystyle\sum_{k\in{\mathbb Z ^2_0}} |k|^{2\rho+2}\frac 1{ |k|^{8H-2}} & \text{ if } \frac 14< H<\frac 12\\
\displaystyle\sum_{k\in{\mathbb Z ^2_0}} |k|^{2\rho+2}\frac {\ln |k|}{ |k|^{2}} & \text{ if } H=\frac 12\\
\displaystyle\sum_{k\in{\mathbb Z ^2_0}} |k|^{2\rho+2}\frac 1{ |k|^{4H}} & \text{ if } H> \frac 12
\end{cases}
\]
The first series converges when $\rho<4H-3$, the second one when $\rho<-1$
and the third one when $\rho<2H-2$. This provides the summability
\eqref{stimaB(z,z)} under conditions \eqref{cond-sotto}-\eqref{cond-sopra}.
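For the reader's convenience, we sketch the exponent count for the first series; the other two cases are analogous. Since a series $\sum_{k\in{\mathbb Z^2_0}}|k|^{-\gamma}$ over the two-dimensional lattice converges if and only if $\gamma>2$, we have
\[
\sum_{k\in{\mathbb Z ^2_0}} |k|^{2\rho+2}\frac 1{|k|^{8H-2}}
=\sum_{k\in{\mathbb Z ^2_0}} |k|^{2\rho+4-8H}<\infty
\iff 2\rho+4-8H<-2
\iff \rho<4H-3 .
\]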
Now, let us consider the higher powers $m>1$.
Estimate \eqref{stimaB(z,z)} holds for these powers as well,
since $\mu^H$ is Gaussian and therefore
the higher moments can be expressed by means of the second moments.
For completeness we provide the computations for $m=2$ in Appendix \ref{m2}.
\hfill\qe
Using the stationarity we can write \eqref{stimaB(z,z)} also as
\[
\mathbb E\big[\|B(z(t),z(t))\|^{2m}_{\mathcal H^\rho}\big]=:\tilde e_m<\infty
\]
for any $t\in \mathbb R$.
As an easy consequence, we obtain
\[
\mathbb E\left[\int_{t_0}^{t_1}\|B(z(t),z(t))\|^{2m}_{\mathcal H^{\rho}} dt \right]
=
(t_1-t_0) \int
\|B(z,z)\|^{2m}_{\mathcal H^\rho}\ \mu^H(dz)<\infty
\]
for any $-\infty<t_0<t_1<\infty$. Hence
\begin{corollary}\label{cor-B-inL0T}
Let $m\ge 1$ and $T>0$.
Choosing $\rho$ as in \eqref{cond-sotto}-\eqref{cond-sopra} we get
\[
B(z,z) \in L^m(0,T;\mathcal H^\rho)
\]
$\mathbb P$-a.s.
\end{corollary}
\begin{remark}\label{reg}
Notice that for $\frac 12<H<1$ the quadratic term $B(z,z)$ is in $L^2(0,T;\mathcal H^{-1})$,
$\mathbb P$-a.s.
\end{remark}
\section{The nonlinear auxiliary equation}\label{sectionNS}
Let $v$ be the unknown for our equation \eqref{NS}
and let $z$ be the stationary Stokes process given by \eqref{OU}.
The process $u=v-z$ solves the equation
\begin{equation}\label{eq-u}
\frac{du}{dt}=Au-B(u,u)-B(u,z)-B(z,u)-B(z,z).
\end{equation}
For $r<2(H-\frac 12)$
we have $z(0)\in \mathcal B^r_{pq}$, $\mathbb P$-a.s. and we take $u(0)=v(0)-z(0)$.
We shall prove that equation \eqref{eq-u} has a local solution when $\frac7{16} <H<\frac 12$
whereas we have a global result when $\frac 12<H<1$.
This implies results for the unknown $v=z+u$.
\subsection{$\frac 14<H<\frac 12$}
We consider a mild solution $u$ to equation \eqref{eq-u}.
We want to show local existence (and uniqueness) by means of a fixed point argument.
Thus we define the mapping $\mathcal I$
\begin{multline}\label{mappaI}
[\mathcal I(u)](t)=e^{tA}u(0)-
\int_0^t e^{(t-s)A}B(u(s),u(s))\ ds
-\int_0^t e^{(t-s)A}B(z(s),u(s))\ ds\\
-\int_0^t e^{(t-s)A}B(u(s),z(s))\ ds
-\int_0^t e^{(t-s)A}B(z(s),z(s))\ ds
\end{multline}
A fixed point of $\mathcal I$ is a mild solution of equation \eqref{eq-u}.
Given $T>0$, let
\[
\mathcal E_T=L^\beta(0,T;\mathcal B^\alpha_{pq})\cap C([0,T];\mathcal B^\sigma_{pq}).
\]
First, we want to show that $\mathcal I: \mathcal E_T\to \mathcal E_T$ for suitable values of the parameters
$\alpha, \beta,\sigma,p,q,H$.
Define
\[
I_0(t)=e^{tA}u_0 .
\]
Given $u_0\in \mathcal B^\sigma_{pq}$, one easily checks that $I_0 \in \mathcal E_T$ when
\[
\alpha<\sigma+\frac 2\beta .
\]
Indeed, $\|e^{tA}u_0\|_{\mathcal B^\sigma_{pq}}\le \|u_0\|_{\mathcal B^\sigma_{pq}}$; by \eqref{stima-expA} we have
\[
\int_0^T \|e^{tA}u_0\|^\beta_{\mathcal B^\alpha_{pq}}dt\le
C\|u_0\|^\beta_{\mathcal B^\sigma_{pq}} \int_0^T \frac{dt}{t^{\frac{(\alpha-\sigma)\beta}{2}}}
\]
and the latter integral is finite when $\alpha<\sigma+\frac 2\beta$.
To study the integrals involving $B(u,u)$, $B(z,u)$ and $B(u,z)$ we define
\[
I_1(u,\tilde u)(t)=\int_0^t e^{(t-s)A}B(u(s),\tilde u(s))\ ds .
\]
\begin{lemma}\label{lemma-L}
Let $\alpha,\sigma \in \mathbb R$ and $\beta,p,q\ge 1$
be such that
\[\begin{cases}
\frac 2p+\frac 2\beta<\sigma +1
\\
\alpha<\frac 2p,\quad \sigma<\frac 2p
\\
\alpha+\sigma>0
\end{cases}
\]
If $u \in L^\beta(0,T;\mathcal B^\alpha_{pq})$ and $\tilde u \in C([0,T];\mathcal B^\sigma_{pq})$,
then
\[
\|I_1(u,\tilde u)\|_{L^\beta(0,T;\mathcal B^\alpha_{pq})}
\le
CT^{\frac 12-\frac 1p+\frac \sigma 2} \|u\|_{L^\beta(0,T;\mathcal B^\alpha_{pq})} \|\tilde u\|_{C([0,T];\mathcal B^\sigma_{pq})}
\]
and
\[
\|I_1(\tilde u, u)\|_{L^\beta(0,T;\mathcal B^\alpha_{pq})}
\le
CT^{\frac 12-\frac 1p+\frac \sigma 2} \|u\|_{L^\beta(0,T;\mathcal B^\alpha_{pq})} \|\tilde u\|_{C([0,T];\mathcal B^\sigma_{pq})}
\]
where the constant $C$ is independent of the time $T$.
\end{lemma}
\proof
We consider the first estimate, since the second one is obtained in the same way by interchanging the roles of $u$ and $\tilde u$.
We use \eqref{chemin} with $s_1=\alpha$ and $ s_2=\sigma$:
\[
\|B(u,\tilde u)\|_{\mathcal B^{\alpha+\sigma-\frac2p-1}_{pq}}
\le
C \|u\|_{\mathcal B^\alpha_{pq}}\|\tilde u\|_{\mathcal B^\sigma_{pq}}
\]
where $\alpha<\frac 2p$, $\sigma<\frac 2p$ and $\alpha+\sigma>0$.
Then we get that $B(u,\tilde u)\in L^\beta(0,T;\mathcal B^{\alpha+\sigma-\frac2p-1}_{pq})$.
Moreover
\[
\|I_1(u,\tilde u)\|^\beta_{L^\beta(0,T;\mathcal B^\alpha_{pq})}
\le
\int_0^T \Big(\int_0^t \|e^{(t-s)A}B(u(s),\tilde u(s))\|_{\mathcal B^\alpha_{pq}}ds\Big)^\beta dt .
\]
Now, we perform estimates using \eqref{stima-expA} and the H\"older inequality:
\[\begin{split}
\int_0^t &\|e^{(t-s)A}B(u(s),\tilde u(s))\|_{\mathcal B^\alpha_{pq}} ds
\\&\le
\int_0^t \frac C{(t-s)^{\frac12+\frac 1p-\frac \sigma 2}} \|B(u(s),\tilde u(s))\|_{\mathcal B^{\alpha+\sigma-1-\frac 2p}_{pq}} ds
\\&\le
C\int_0^t \frac 1{(t-s)^{\frac12+\frac 1p-\frac \sigma 2}}
\|u(s)\|_{\mathcal B^\alpha_{pq}} \|\tilde u(s)\|_{\mathcal B^\sigma_{pq}}ds
\\&\le
C \|\tilde u\|_{C([0,T];\mathcal B^\sigma_{pq})}
\Big(\int_0^t \frac {ds}{(t-s)^{(\frac12+\frac 1p-\frac \sigma 2)\frac\beta{\beta-1}}} \Big)^{1-\frac 1\beta}
\Big(\int_0^t \|u(s)\|^\beta_{\mathcal B^\alpha_{pq}}ds\Big)^{\frac 1\beta}
\\&\leq C
t^{\frac 12-\frac 1p+\frac \sigma 2-\frac 1\beta}\|\tilde u\|_{C([0,T];\mathcal B^\sigma_{pq})} \|u\|_{L^\beta(0,T;\mathcal B^\alpha_{pq})}
\end{split}
\]
Integrating in time over the interval $[0,T]$, we conclude the proof.
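More explicitly, the final integration in time gives
\[
\|I_1(u,\tilde u)\|^\beta_{L^\beta(0,T;\mathcal B^\alpha_{pq})}
\le C K^\beta \int_0^T t^{(\frac 12-\frac 1p+\frac \sigma 2-\frac 1\beta)\beta}\, dt
= C K^\beta \frac{T^{(\frac 12-\frac 1p+\frac \sigma 2)\beta}}{(\frac 12-\frac 1p+\frac \sigma 2)\beta},
\qquad K:=\|\tilde u\|_{C([0,T];\mathcal B^\sigma_{pq})}\|u\|_{L^\beta(0,T;\mathcal B^\alpha_{pq})},
\]
where the exponent $(\frac 12-\frac 1p+\frac \sigma 2)\beta$ is positive thanks to the assumption $\frac 2p+\frac 2\beta<\sigma+1$; taking the $\beta$-th root yields the factor $T^{\frac 12-\frac 1p+\frac \sigma 2}$ in the statement.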
\hfill\qe
Now we consider the other norm for $I_1$.
\begin{lemma}\label{lemma-C}
Let $\alpha,\sigma \in \mathbb R$ and $\beta,p,q\ge 1$
be such that
\[
\begin{cases}
\frac 2p+\frac 2q<\alpha+1\\
\beta\ge q\\
\alpha<\frac 2p,\quad \sigma<\frac 2p\\
\alpha+\sigma>0
\end{cases}
\]
If $u \in L^\beta(0,T;\mathcal B^\alpha_{pq})$ and $\tilde u \in C([0,T];\mathcal B^\sigma_{pq})$,
then
\[
\|I_1(u,\tilde u)\|_{C([0,T];\mathcal B^\sigma_{pq})}
\le
C T^{\frac \alpha2+\frac 12-\frac 1p-\frac 1\beta} \|u\|_{L^\beta(0,T;\mathcal B^\alpha_{pq})} \|\tilde u\|_{C([0,T];\mathcal B^\sigma_{pq})}
\]
and
\[
\|I_1(\tilde u,u)\|_{C([0,T];\mathcal B^\sigma_{pq})}
\le
C T^{\frac \alpha2+\frac 12-\frac 1p-\frac 1\beta} \|u\|_{L^\beta(0,T;\mathcal B^\alpha_{pq})} \|\tilde u\|_{C([0,T];\mathcal B^\sigma_{pq})}
\]
where the constant $C$ is independent of the time $T$.
\end{lemma}
\proof
First, from the previous proof we know
that $B(u,\tilde u)\in L^\beta(0,T; \mathcal B^{\alpha+\sigma-\frac 2p-1}_{pq})$; when $\beta\ge q$ we also have
$B(u,\tilde u)\in L^q(0,T; \mathcal B^{\alpha+\sigma-\frac 2p-1}_{pq})$ and Proposition \ref{isom}
provides $I_1(u,\tilde u)\in C([0,T];\mathcal B^{\alpha+\sigma-\frac 2p+1-\frac 2q}_{pq}) $ and finally we use that
$\mathcal B^{\alpha+\sigma-\frac 2p+1-\frac 2q}_{pq}\subseteq \mathcal B^{\sigma}_{pq}$ when
$\frac 2p+\frac 2q\le \alpha+1 $.
Now, we perform estimates using
\eqref{stima-expA} and the H\"older inequality.
\[\begin{split}
\|I_1(u,\tilde u)(t)\|_{\mathcal B^\sigma_{pq}}&\le
\int_0^t \|e^{(t-s)A}B(u(s),\tilde u(s))\|_{\mathcal B^\sigma_{pq}}ds
\\&\le
\int_0^t \frac C{(t-s)^{\frac {1}2+\frac 1p-\frac \alpha 2}} \|B(u(s),\tilde u(s))\|_{\mathcal B^{\alpha+\sigma-\frac2p-1}_{pq}}ds
\\&
\le\int_0^t \frac C{(t-s)^{\frac {1}2+\frac 1p-\frac \alpha 2}}\|u(s)\|_{\mathcal B^\alpha_{pq}}\|\tilde u(s)\|_{\mathcal B^\sigma_{pq}} ds
\\&
\le \|\tilde u\|_{C([0,T];\mathcal B^\sigma_{pq})} \int_0^t \frac C{(t-s)^{\frac {1}2+\frac 1p-\frac \alpha 2}}\|u(s)\|_{\mathcal B^\alpha_{pq}}ds
\\&\le
C \|\tilde u\|_{C([0,T];\mathcal B^\sigma_{pq})} \|u\|_{L^\beta(0,T;\mathcal B^\alpha_{pq})}
(\int_0^t \frac{ds}{(t-s)^{\frac \beta{\beta-1}(\frac {1}2+\frac 1p-\frac \alpha 2)}})^{1-\frac 1\beta}
\end{split}
\]
The latter integral is finite when $(\frac{\beta}{\beta-1})(\frac {1}2+\frac 1p-\frac \alpha 2)<1$,
i.e.
\begin{equation}\label{cond-Linfty}
\frac 2\beta+\frac 2p<\alpha+1
\end{equation}
This inequality is true when $\beta\ge q $ and $\frac 2q+\frac 2p< \alpha+1$,
that is, our assumptions imply \eqref{cond-Linfty}.
Computing the time integral and taking the supremum over
$t\in [0,T]$ we get the required estimate.
\hfill\qe
For the integral involving $B(z,z)$ we define the process
\[
I_2(t)=\int_0^t e^{(t-s)A}B(z(s),z(s))ds,\qquad t\ge 0
\]
where $z$ is the Stokes process given in \eqref{OU}.
\begin{lemma}\label{lemma-B}
Let $\frac 14<H<\frac 12$, $\beta, p\ge 1$, $q\ge 2$
and $\alpha, \sigma \in \mathbb R$ be such that
\[
\alpha\le \sigma+1 \ , \qquad \sigma<4H-2.
\]
Then
$I_2\in \mathcal E_T$, $\mathbb P$-a.s.
\end{lemma}
\proof We proceed pathwise. First we show that $I_2\in L^\beta(0,T;\mathcal B^\alpha_{pq})$.
From Corollary \ref{cor-B-inL0T} we know that the paths of $B(z,z)$ are in $L^\beta(0,T;\mathcal B^{\sigma-1}_{pq})$ for any $\beta\ge 1$ and for
$\sigma<4H-2$.
Therefore, according to Proposition \ref{isom}
the paths of $I_2$ are in $L^\beta(0,T;\mathcal B^{\sigma+1}_{pq})$. When $\alpha\le \sigma+1 $, the embedding theorem
gives $I_2 \in L^\beta(0,T;\mathcal B^{\alpha}_{pq})$.
Now, we show that
$I_2 \in C([0,T];\mathcal B^\sigma_{pq})$.
Again by Corollary \ref{cor-B-inL0T}, for any $q\ge 1$ and $\sigma<4H-2$
the paths of $B(z,z)$ are in $L^q(0,T;\mathcal B^{\sigma-1}_{pq})$.
We bear in mind Proposition \ref{isom} and we get
that
$I_2\in C([0,T];\mathcal B^{\sigma+1-\frac2q}_{pq})$. When $q\ge 2$ this finishes the proof.
\hfill\qe
Summing up, we have proved estimates for all the terms in the r.h.s. of \eqref{mappaI}.
Let us point out that merging these results
we have to satisfy two conditions:
\[
\sigma<2(H-\frac12)
\]
which comes from Proposition \ref{pro-zLm} and provides
$z \in C([0,T];\mathcal H^\sigma)$, $\mathbb P$-a.s., so as to apply Lemmas \ref{lemma-L} and \ref{lemma-C} for
the integrals involving $B(z,u)$ and $B(u,z)$, and
\[
\sigma <4H-2
\]
which comes from Lemma \ref{lemma-B} to estimate the integral involving $B(z,z)$.
When $H<\frac 12$, the latter condition is stronger and we will write only this one in the following.
\begin{proposition}
Let $\frac 14<H<\frac12$, $\alpha,\sigma \in \mathbb R$, $\beta,p\ge 1$ and $q\ge 2$
be such that
\begin{align}
\frac 2p+\frac 2\beta&<\sigma +1 \label{uno}
\\
\frac 2p+\frac 2q&<\alpha+1 \label{due}
\\
\beta&\ge q \label{tre}
\\
\alpha&<\frac 2p \label{quattro}
\\
\sigma&<\frac 2p \label{cinque}
\\
\alpha+\sigma&>0 \label{sei}
\\
\alpha&<\sigma+\frac 2\beta \label{sette}
\\
\alpha&\le \sigma+1 \label{otto}
\\
\sigma&<4(H-\frac 12) \label{nove}
\end{align}
Then, for any finite $T$ we have that $\mathcal I: \mathcal E_T\to \mathcal E_T$, $\mathbb P$-a.s.
\end{proposition}
\begin{remark}[How to fulfil conditions]
Notice that by condition \eqref{nove} we have $\sigma<0$ when $H<\frac 12$. Therefore,
condition \eqref{sei} requires $\alpha>0$.
Moreover, conditions \eqref{sei} and \eqref{otto} provide
\[
-\sigma<\alpha\le \sigma+1;
\]
thus it is necessary that $\sigma>-\frac12$, i.e. $H>\frac38$.
Actually we are going to show that $H$ must be even larger than $\frac 38$ in order to satisfy all the conditions \eqref{uno}-\eqref{nove}.
Indeed, we write a system equivalent to the previous one. Condition \eqref{cinque} is trivially satisfied when $\sigma<0$ and can be neglected. Taking $\beta=q$, condition \eqref{due} is weaker than condition \eqref{uno}, that is \eqref{uno} implies \eqref{due}.
In addition, since condition \eqref{uno} requires $\frac 2\beta<1$, we have that condition \eqref{otto} is weaker than condition \eqref{sette}, that is \eqref{sette} implies \eqref{otto}. Therefore, in the case $\beta=q$ the previous system of conditions is equivalent to
\begin{align}
\frac 2p+\frac 2\beta&<\sigma +1 \tag{\ref{uno}}
\\
\beta&= q \tag{\ref{tre}'}
\\
\alpha&<\frac 2p \tag{\ref{quattro}}
\\
\alpha+\sigma&>0 \tag{\ref{sei}}
\\
\alpha&<\sigma+\frac 2\beta \tag{\ref{sette}}
\\
\sigma&<4(H-\frac 12) \tag{\ref{nove}}
\end{align}
which is simpler to analyse. Let us notice that
\[
-3\sigma\underset{\text{by }\eqref{sei}}{<}
\alpha+ \alpha-\sigma\underset{\text{by }\eqref{quattro} \text{ and } \eqref{sette}}{<}
\frac 2p+\frac2 \beta\underset{\text{by }\eqref{uno}}{<}
\sigma+1
\]
This sequence of inequalities is meaningful only when $-3\sigma< \sigma+1$, i.e. $\sigma>-\frac 14$.
Taking into account the last condition \eqref{nove}, we see that in order to fulfil all the above conditions it is necessary that $H>\frac7{16}$.
Therefore it is possible to fulfil all the conditions when $\frac 7{16}<H<\frac 12$. Setting $H=\frac 7{16}+c$ with
$0<c<\frac 1{16}$, we can choose for instance
\[
\alpha=\frac 14-2c,\qquad
\sigma=-\frac 14+3c,\qquad
\frac 2\beta=\frac 2q=\frac 12-4c,\qquad
\frac 2p=\frac 14-c
\]
in order to satisfy \eqref{uno}-\eqref{nove}.
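Indeed, with this choice a direct verification gives
\[
\frac 2p+\frac 2\beta=\frac 34-5c<\frac 34+3c=\sigma+1,\qquad
\alpha+\sigma=c>0,\qquad
\alpha=\frac 14-2c<\frac 14-c=\frac 2p,
\]
\[
\alpha=\frac 14-2c<\frac 14-c=\sigma+\frac 2\beta,\qquad
\sigma=-\frac 14+3c<-\frac 14+4c=4\Big(H-\frac 12\Big),
\]
and the remaining conditions hold trivially since $\beta=q$ and $\sigma<0<\frac 2p$.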
\end{remark}
Now we can prove the local existence result for $u$, by showing
that $\mathcal I$ is a contraction for $T$ small enough.
\begin{proposition}
Let $\frac 7{16}< H<\frac 12$ and
the parameters fulfil the conditions \eqref{uno}-\eqref{nove}.
Then, given $u_0 \in \mathcal B^\sigma_{pq}$
there exists a stopping time $\tau\in ]0,T]$ and for $\mathbb P$-a.e. $\omega$ a unique mild solution $u(\omega,\cdot)$
of equation \eqref{eq-u} with values in
$C([0,\tau(\omega)];\mathcal B^\sigma_{pq})\cap L^\beta(0,\tau(\omega);\mathcal B^\alpha_{pq})$.
\end{proposition}
\proof
Using the bilinearity of the operator $B$, we get
\[\begin{split}
\mathcal I(u_1)(t)-\mathcal I(u_2)(t)=&-\int_0^t e^{(t-s)A}B(u_1(s),u_1(s)-u_2(s))ds\\
&-\int_0^t e^{(t-s)A}B(u_1(s)-u_2(s),u_2(s))ds
\\&-\int_0^t e^{(t-s)A}B(z(s),u_1(s)-u_2(s))ds\\
&-\int_0^t e^{(t-s)A}B(u_1(s)-u_2(s),z(s))ds
\end{split}\]
Let us work in the ball of $\mathcal E_T$ defined by $\|u\|_{\mathcal E_T}\le M$.
The initial data $u(0) \in \mathcal B^\sigma_{pq}$ is fixed.
Therefore, according to Lemma \ref{lemma-L} and Lemma \ref{lemma-C} we have
\begin{multline}
\|\mathcal I(u_1)-\mathcal I(u_2)\|_{\mathcal E_T}
\\\le
\overline C \big(T^{\frac 12-\frac 1p+\frac \sigma 2}+T^{\frac \alpha2+\frac 12-\frac 1p-\frac 1\beta}\big)
(M+\|z\|_{C([0,T];\mathcal B^\sigma_{pq})})
\|u_1-u_2\|_{\mathcal E_T}
\end{multline}
for a suitable constant $\overline C$ independent of $T$.
When $T$ is such that
\begin{equation}\label{ineqT}
\overline C \big(T^{\frac 12-\frac 1p+\frac \sigma 2}+T^{\frac \alpha2+\frac 12-\frac 1p-\frac 1\beta}\big)
(M+\|z\|_{C([0,T];\mathcal B^\sigma_{pq})})<1
\end{equation}
the mapping $\mathcal I$ is a contraction and hence has a unique fixed point, which is the unique solution
of equation \eqref{eq-u}.
Notice that $T$ is a random time, since inequality \eqref{ineqT} involves the random process $z$. It can be chosen to be a stopping time.
\hfill\qe
Since $v=z+u$, we also get existence of a local mild solution $v$ to equation
\eqref{NS} where the bilinear term $B(v,v)$ has to be understood as the sum of four terms, that is
\begin{multline}\label{nuovoNS}
dv(t)-Av(t)\ dt= -B(u(t),u(t))\ dt-B(u(t),z(t))\ dt
\\-B(z(t),u(t))\ dt-B(z(t),z(t))\ dt +dw^H(t)
\end{multline}
\begin{theorem}
Let $\frac 7{16}< H<\frac 12$ and
the parameters fulfil the conditions \eqref{uno}-\eqref{nove}.
Then, given $v_0 \in \mathcal B^\sigma_{pq}$
there exists a stopping time $\tau\in ]0,T]$ and
for $\mathbb P$-a.e. $\omega$ a unique mild solution $v(\omega,\cdot)$
of equation \eqref{nuovoNS} with values in
$C([0,\tau(\omega)];\mathcal B^\sigma_{pq})$.
\end{theorem}
\begin{remark}
We cannot get a global existence result for $v$ as in \cite{AC,DPD}; indeed, when $H=\frac 12$ the
Gaussian measure
$\mu^H$ defined by \eqref{measure}
is invariant for the Navier-Stokes equation \eqref{NS}.
This allows one to define $B(v,v)$ and to get global existence.
However, when $H\neq \frac12$ the
measure
$\mu^H$ is invariant for the Stokes equation \eqref{eq-lin} but not
for the Navier-Stokes equation \eqref{NS}; ultimately this depends on the fact that for $0<H<1$
the Gaussian measure $\mu^H$ is (formally) invariant for the deterministic Euler dynamics
\[
\frac{dv}{dt}= -B(v,v)
\]
only when $H=\frac 12$ (and in this case $\mu^{\frac 12}$ is called the enstrophy measure, see \cite{AFHK}).
\end{remark}
\subsection{$\frac12<H<1$}
When $\frac12<H<1$ the fBm $w^H$ and the Stokes process $z$ are more regular and
we expect more regularity of the processes $u$ and $v$ too.
Actually we can obtain an a priori energy estimate; this will lead to global existence.
Let us notice that now we deal with solutions which are weak in the sense of PDE's; for instance the solution $u$ of equation \eqref{eq-u} has paths at least in $L^\infty(0,T;\mathcal H^0)\cap L^2(0,T;\mathcal H^{1})$
and fulfils for any $t>0$ and any $\varphi\in \mathcal H^1$
\begin{multline*}
\langle u(t)-u(0),\varphi\rangle-\int_0^t \langle Au(s), \varphi\rangle ds +\int_0^t
\langle B(u(s),\varphi), u(s)\rangle ds
\\+\int_0^t \langle B(u(s),\varphi) ,z(s)\rangle ds
+\int_0^t \langle B(z(s),\varphi), u(s)\rangle ds
=-\int_0^t \langle B(z(s),z(s)),\varphi\rangle ds
\end{multline*}
$\mathbb P$-a.s. This is obtained from \eqref{eq-u} by using \eqref{tril}.
Since the paths of the process $u$ are in $L^\infty(0,T;\mathcal H^0)\cap L^2(0,T;\mathcal H^{1})$ and those of $z$ are in
$C([0,T];\mathcal H^\sigma)$ for some $\sigma>0$,
all the terms in the latter relationship are well defined. Let us check the trilinear terms, by using the H\"older inequality, the interpolation inequality and Sobolev embeddings:
\[
|\langle B(u(s),\varphi), u(s)\rangle|
\le \|u(s)\|_{L^4}^2\|\varphi\|_{\mathcal H^1}
\le C\|u(s)\|_{\mathcal H^0} \|u(s)\|_{\mathcal H^1} \|\varphi\|_{\mathcal H^1}
\]
\[
\begin{split}
|\langle B(u(s),\varphi), z(s)\rangle|
&\le \|u(s)\|_{L^{\frac 2\sigma}}\|\varphi\|_{\mathcal H^1} \|z(s)\|_{L^{\frac 2{1-\sigma}}}\; \text{ for } 0<\sigma<1
\\&\le C\|u(s)\|_{\mathcal H^1} \|\varphi\|_{\mathcal H^1}\|z(s)\|_{\mathcal H^\sigma}
\end{split}
\]
The third trilinear term can be dealt with as the second one. Finally, the last term is well defined as soon as
$B(z(s),z(s))\in L^1(0,T;\mathcal H^{-1})$ (see Remark \ref{reg}).
Now, let $0<\sigma<2(H-\frac 12)$ for $\frac 12<H<1$.
Taking the $\mathcal H^0$-scalar product of equation \eqref{eq-u} with $u$, we get the usual
energy estimate (see \cite{Temam}). We make use of \eqref{Bnull} and \eqref{stimeGiga}:
\[\begin{split}
\frac 12 \frac{d}{dt} &\|u(t)\|_{\mathcal H^0}^2+\|\nabla u(t)\|^2_{L^2}
\\&
=-\langle B(u(t)+z(t),u(t)),u(t)\rangle
-\langle B(u(t),z(t)),u(t)\rangle-\langle B(z(t),z(t)),u(t)\rangle
\\
&=-\langle B(u(t), z(t)),u(t)\rangle-\langle B(z(t),z(t)),u(t)\rangle
\\
&\le
\|B(u(t), z(t))\|_{\mathcal H^{-1}} \|u(t)\|_{\mathcal H^1}
+\|B(z(t), z(t))\|_{\mathcal H^{-1}} \|u(t)\|_{\mathcal H^1}
\\
&\le C\|u(t)\|_{\mathcal H^{1-\sigma}} \|z(t)\|_{\mathcal H^\sigma}\|u(t)\|_{\mathcal H^1}
+\frac 14 \|u(t)\|_{\mathcal H^1}^2 +C \|B(z(t),z(t))\|_{\mathcal H^{-1}}^2
\end{split}
\]
Moreover, by interpolation and Young inequality
\[\begin{split}
\|u\|_{\mathcal H^{1-\sigma}} \|z\|_{\mathcal H^\sigma}\|u\|_{\mathcal H^1}
&\le
C\|u\|^\sigma_{\mathcal H^0} \|u\|^{1-\sigma}_{\mathcal H^1} \|z\|_{\mathcal H^\sigma}\|u\|_{\mathcal H^1}
\\&=
C\|u\|^\sigma_{\mathcal H^0} \|u\|^{2-\sigma}_{\mathcal H^1} \|z\|_{\mathcal H^\sigma}
\\&\le
\frac 14 \|u\|^{2}_{\mathcal H^1} +C\|u\|^2_{\mathcal H^0}\|z\|^{\frac 2\sigma}_{\mathcal H^\sigma}
\end{split}\]
Since $\|u\|^2_{\mathcal H^1}=\|u\|^2_{\mathcal H^0}+\|\nabla u\|^2_{L^2}$, collecting all the estimates
we have found
\[
\frac{d}{dt} \|u(t)\|_{\mathcal H^0}^2+\|\nabla u(t)\|^2_{L^2}
\le C(1+\|z\|^{\frac 2\sigma}_{\mathcal H^\sigma})\|u\|^2_{\mathcal H^0} +
C \|B(z(t),z(t))\|_{\mathcal H^{-1}}^2
\]
According to Remark \ref{reg}, $B(z,z)\in L^2(0,T;\mathcal H^{-1})$.
Moreover $z \in C([0,T];\mathcal H^\sigma)$ by Proposition \ref{pro-zLm}.
This provides as usual by means of Gronwall Lemma
that $u \in L^\infty(0,T;\mathcal H^0)\cap L^2(0,T;\mathcal H^1)$, $\mathbb P$-a.s.
The reader can see all the details of this standard procedure
in \cite{Temam}: first one works on a finite dimensional approximation and then passes to the limit.
By interpolation
\begin{equation}\label{u-reg}
u \in L^{\frac 2\sigma}(0,T;\mathcal H^\sigma).
\end{equation}
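Indeed, by the interpolation inequality $\|u\|_{\mathcal H^\sigma}\le C\|u\|^{1-\sigma}_{\mathcal H^0}\|u\|^{\sigma}_{\mathcal H^1}$ for $0<\sigma<1$ we get
\[
\int_0^T \|u(t)\|^{\frac 2\sigma}_{\mathcal H^\sigma}\, dt
\le C \|u\|^{\frac{2(1-\sigma)}\sigma}_{L^\infty(0,T;\mathcal H^0)}
\int_0^T \|u(t)\|^{2}_{\mathcal H^1}\, dt<\infty .
\]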
Hence, $v=z+u \in L^\infty(0,T;\mathcal H^0)\cap L^{\frac 2\sigma}(0,T;\mathcal H^\sigma)$.
We can improve the estimates, now getting
$u \in C([0,T];\mathcal H^\sigma)\cap L^2(0,T;\mathcal H^{1+\sigma})$.
This gives global existence for the process $v=z+u$ in the space $C([0,T];\mathcal H^\sigma)$ for
$0<\sigma<2(H-\frac 12)$. Therefore the term $B(v,v)$ is well defined. Actually
the process $v$ is a weak solution (in the sense of PDE's) to equation \eqref{NS}, that is it solves
for any $t>0$ and any $\varphi\in \mathcal H^{2-\sigma}$
\begin{multline*}
\langle v(t)-v_0,\varphi\rangle-\int_0^t \langle Av(s), \varphi\rangle ds +\int_0^t
\langle B(v(s),\varphi), v(s)\rangle ds
=\langle w^H(t),\varphi\rangle
\end{multline*}
$\mathbb P$-a.s. We leave it to the reader to check that all terms are well defined (use that $2-\sigma>1$).
Similarly, the process $z$ can be considered as a weak solution of the
stochastic Stokes equation \eqref{eq-lin}.
\begin{theorem}[Global existence] \label{th-global-ex}
Let $\frac 12<H<1$ and
\begin{equation}
0<\sigma<2(H-\frac 12).
\end{equation}
Given $v_0\in\mathcal H^\sigma$ there exists a
$C([0,T];\mathcal H^\sigma)\cap L^2(0,T;\mathcal H^{1+\sigma})$-valued process $u$
solving equation \eqref{eq-u} with $u(0)=v_0-z(0)$.
Therefore there exists a $C([0,T];\mathcal H^\sigma)$-valued process $v$ solving
equation \eqref{NS} with $v(0)=v_0$.
\end{theorem}
\proof
We have to work on
\[\begin{split}
\frac 12 \frac{d}{dt} &\|u(t)\|_{\mathcal H^\sigma}^2+\|\nabla u(t)\|^2_{H^\sigma}
\\&
=-\langle B(u(t)+z(t),u(t)),(-A)^{\sigma}u(t)\rangle
-\langle B(u(t),z(t)),(-A)^{\sigma}u(t)\rangle
\\&\qquad-\langle B(z(t),z(t)),(-A)^{\sigma}u(t)\rangle
\end{split}
\]
We perform the estimates on the terms in the r.h.s. Notice that $0<\sigma<1$.
From \eqref{stimeGiga}, the interpolation inequality
$\|u\|_{\mathcal H^{1}}\le C\|u\|^{\sigma}_{\mathcal H^{\sigma}}\|u\|^{1-\sigma}_{\mathcal H^{1+\sigma}}$
and Young inequality we get
\[\begin{split}
\langle B(u,u),(-A)^{\sigma}u\rangle
&\le
\|B(u,u)\|_{\mathcal H^{\sigma-1}} \|u\|_{\mathcal H^{\sigma+1}}
\\&
\le C \|u\|_{\mathcal H^{\sigma}}\|u\|_{\mathcal H^{1}}\|u\|_{\mathcal H^{1+\sigma}}
\\&
\le C \|u\|^{1+\sigma}_{\mathcal H^{\sigma}}\|u\|^{2-\sigma}_{\mathcal H^{1+\sigma}}
\\&\le
\frac 18 \|u\|^{2}_{\mathcal H^{1+\sigma}}+C \|u\|^{\frac 2 \sigma}_{\mathcal H^{\sigma}}\|u\|^2_{\mathcal H^{\sigma}}
\end{split}\]
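In the last step above we applied the Young inequality with conjugate exponents $\frac 2{2-\sigma}$ and $\frac 2\sigma$; explicitly,
\[
\|u\|^{1+\sigma}_{\mathcal H^{\sigma}}\|u\|^{2-\sigma}_{\mathcal H^{1+\sigma}}
\le \frac 18 \Big(\|u\|^{2-\sigma}_{\mathcal H^{1+\sigma}}\Big)^{\frac 2{2-\sigma}}
+C\Big(\|u\|^{1+\sigma}_{\mathcal H^{\sigma}}\Big)^{\frac 2\sigma}
=\frac 18 \|u\|^{2}_{\mathcal H^{1+\sigma}}
+C\|u\|^{\frac 2\sigma}_{\mathcal H^{\sigma}}\|u\|^{2}_{\mathcal H^{\sigma}} .
\]
The same splitting is used for the remaining trilinear terms below.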
\[\begin{split}
\langle B(z,u),(-A)^{\sigma}u\rangle
&\le
\|B(z,u)\|_{\mathcal H^{\sigma-1}} \|u\|_{\mathcal H^{\sigma+1}}
\\&
\le C \|z\|_{\mathcal H^{\sigma}}\|u\|_{\mathcal H^{1}}\|u\|_{\mathcal H^{1+\sigma}}
\\&\le
C \|z\|_{\mathcal H^{\sigma}}\|u\|^{\sigma}_{\mathcal H^{\sigma}}\|u\|^{2-\sigma}_{\mathcal H^{1+\sigma}}
\\&\le
\frac 18 \|u\|^2_{\mathcal H^{1+\sigma}}+ C \|z\|_{\mathcal H^{\sigma}}^{\frac 2 \sigma}\|u\|^2_{\mathcal H^{\sigma}}
\end{split}\]
\[\begin{split}
\langle B(u,z),(-A)^{\sigma}u\rangle
&\le
\|B(u,z)\|_{\mathcal H^{\sigma-1}} \|u\|_{\mathcal H^{\sigma+1}}
\\&
\le C \|u\|_{\mathcal H^{1}}\|z\|_{\mathcal H^{\sigma+\epsilon}}\|u\|_{\mathcal H^{1+\sigma}}
\\&\le C \|u\|^{\sigma}_{\mathcal H^{\sigma}}
\|z\|_{\mathcal H^{\sigma+\epsilon}}\|u\|^{2-\sigma}_{\mathcal H^{1+\sigma}}
\\&\le
\frac 18 \|u\|^2_{\mathcal H^{1+\sigma}}+ C \|z\|_{\mathcal H^{\sigma+\epsilon}}^{\frac 2 \sigma}\|u\|^2_{\mathcal H^{\sigma}}
\end{split}\]
where $0<\epsilon\ll 1$, and
\[\begin{split}
\langle B(z,z),(-A)^{\sigma}u\rangle
&\le
\|B(z,z)\|_{\mathcal H^{\sigma-1}} \|u\|_{\mathcal H^{\sigma+1}}
\\&
\le \frac 18 \|u\|^2_{\mathcal H^{1+\sigma}}+ C \|B(z,z)\|^2_{\mathcal H^{\sigma-1}}
\end{split}\]
Now, $ \|u\|^2_{\mathcal H^{1+\sigma}}=\|u\|^2_{\mathcal H^\sigma}+\|\nabla u\|^2_{H^\sigma}$.
Summing up
\[
\frac{d}{dt} \|u(t)\|_{\mathcal H^\sigma}^2+\|\nabla u(t)\|^2_{H^\sigma}
\le C
(1+\|u\|^{\frac 2 \sigma}_{\mathcal H^{\sigma}}+\|z\|_{\mathcal H^{\sigma+\epsilon}}^{\frac 2 \sigma}
+\|B(z,z)\|^2_{\mathcal H^{\sigma-1}}) \|u\|^2_{\mathcal H^{\sigma}} .
\]
Now we bear in mind \eqref{u-reg}, Proposition \ref{pro-zLm} with $\epsilon\ll 1$ and
Proposition \ref{stimaBz} to deal with the sum in the r.h.s..
So by means of Gronwall lemma
we conclude that
$u \in L^\infty(0,T;\mathcal H^{\sigma})\cap L^2(0,T;\mathcal H^{\sigma+1})$.
In addition, by means of the previous estimates we get
\[
\frac{du}{dt}=Au-B(u,u)-B(u,z)-B(z,u)-B(z,z)
\in L^2(0,T;\mathcal H^{\sigma-1}).
\]
Since $u\in L^2(0,T;\mathcal H^{\sigma+1})$, one gets that
$u \in C([0,T];\mathcal H^{\sigma})$ (see \cite{Temam}). Hence
$v=u+z\in C([0,T];\mathcal H^{\sigma})$.
\hfill\qe
\medskip
The solution obtained is also unique. We have a pathwise uniqueness result.
\begin{theorem}[Uniqueness]
Let $\frac 12<H<1$ and $0<\sigma<2(H-\frac 12)$.
Given $v_0\in\mathcal H^\sigma$ there exists a unique
$C([0,T];\mathcal H^\sigma)$-valued process solving \eqref{NS}.
\end{theorem}
\proof
Let $v_1,v_2 \in C([0,T];\mathcal H^\sigma)$ be solutions of \eqref{NS}.
Then the difference $V=v_1-v_2$ fulfils
\begin{equation}\label{eq-V}
\frac {dV}{dt}+AV=-B(v_1,v_1)+B(v_2,v_2)
\end{equation}
with $V(0)=0$.
We are going to prove that $V(t)=0$ for all $t\ge0$ and this is obtained by means of the a priori estimate
of the energy. Actually the paths of $V$ are more regular than those of $v_1$ and $v_2$, since the
noise term has disappeared in \eqref{eq-V}; this was already remarked
in \cite{Fe03}.
More precisely, we state that any solution $V$ of
\eqref{eq-V} with $V(0)=0$ is such that
$ V\in C([0,T];\mathcal H^0)\cap L^2(0,T; \mathcal H^1)$
and $\frac {dV}{dt}\in L^2(0,T;\mathcal H^{-1})$;
therefore the equality $\frac{d}{dt}\|V(t)\|^2_{\mathcal H^0}=2\langle \frac
{dV}{dt}(t),V(t)\rangle$ holds and the energy estimates (coming later) are justified.
Indeed we are given $v_1,v_2 \in C([0,T];\mathcal H^\sigma)$ with
$\sigma \in (0,1)$.
The r.h.s. of \eqref{eq-V}
belongs to $L^\infty(0,T;\mathcal H^{2\sigma-2})$, since
\[
\|B(v_i,v_i)\|_{\mathcal H^{2\sigma-2}} \le C \|v_i\|_{\mathcal H^\sigma}^2
\]
by \eqref{stimeGiga}.
According to Proposition \ref{isom} (used with vanishing initial data and $r=2$) we get that any solution $V$ will be in
$L^2(0,T;\mathcal H^{2\sigma})$.
If $2\sigma\ge 1$
(i.e. when $\frac 12\le \sigma<1$) we have obtained that $V \in L^2(0,T; \mathcal
H^1)$; moreover, $\frac {dV}{dt}=-AV+B(v_1,v_1)-B(v_2,v_2)$ and
therefore
$\frac {dV}{dt}\in L^2(0,T;\mathcal H^{-1})$.
Otherwise, when $0<\sigma<\frac 12$ we proceed as follows;
by the bilinearity of $B$ we get that \eqref{eq-V} can
be written as
\begin{equation}\label{eq-diffV}
\frac {dV}{dt}+AV=-B(v_1,V)-B(V,v_2)
\end{equation}
Let us look at the regularity of the r.h.s., knowing that
$v_1,v_2 \in C([0,T];\mathcal H^\sigma)$ and
$V\in C([0,T];\mathcal H^\sigma)\cap L^2(0,T;\mathcal H^{2\sigma})$.
Thanks to \eqref{stimeGiga} we get, for $0<\sigma<\frac 12$
\[
\|B(v_1,V)\|_{\mathcal H^{3\sigma-2}}\le C \|v_1\|_{\mathcal H^{\sigma}} \|V\|_{\mathcal H^{2\sigma}}
\]
\[
\|B(V,v_2)\|_{\mathcal H^{3\sigma-2}}\le C \|V\|_{\mathcal H^{2\sigma}} \|v_2\|_{\mathcal H^{\sigma}}
\]
Hence the r.h.s. of \eqref{eq-diffV} belongs to $L^2(0,T;\mathcal
H^{3\sigma-2})$ and therefore thanks to Proposition \ref{isom} any
solution $V$ belongs to $L^2(0,T;\mathcal H^{3\sigma})$.
If $3\sigma\ge 1$ (i.e. when $\frac 13\le \sigma<\frac 12$) we have
obtained that $V \in L^2(0,T; \mathcal H^1)$ and moreover the
r.h.s. of \eqref{eq-diffV} belongs to
$L^2(0,T; \mathcal H^{3\sigma-2})\subseteq L^2(0,T; \mathcal H^{-1})$. Hence
we conclude as in the previous case about $\frac {dV}{dt}$.
Otherwise, for
smaller values of $\sigma$ we proceed again with the bootstrap
argument.
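Arguing exactly as above, the general step of the bootstrap can be sketched as follows: if $V\in L^2(0,T;\mathcal H^{n\sigma})$ for some integer $n\ge 2$ with $n\sigma<1$, then by \eqref{stimeGiga}
\[
\|B(v_1,V)\|_{\mathcal H^{(n+1)\sigma-2}}+\|B(V,v_2)\|_{\mathcal H^{(n+1)\sigma-2}}
\le C\big(\|v_1\|_{\mathcal H^{\sigma}}+\|v_2\|_{\mathcal H^{\sigma}}\big)\|V\|_{\mathcal H^{n\sigma}},
\]
so the r.h.s. of \eqref{eq-diffV} belongs to $L^2(0,T;\mathcal H^{(n+1)\sigma-2})$ and Proposition \ref{isom} gives $V\in L^2(0,T;\mathcal H^{(n+1)\sigma})$. Since $\sigma>0$, after finitely many iterations the exponent $(n+1)\sigma$ reaches $1$.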
We conclude that, given
$\sigma \in (0,1)$ and $v_i \in C([0,T]; \mathcal H^\sigma)$, any solution $V$ to
\eqref{eq-diffV} is in $ L^2(0,T; \mathcal H^1)$ and $\frac {dV}{dt}\in L^2(0,T;\mathcal H^{-1})$.
Now, we look for the a priori energy estimate.
Keeping in mind \eqref{Bnull}, \eqref{stimeGiga} and the interpolation inequality
$\|V\|_{\mathcal H^{1-\sigma}}\le C \|V(t)\|^\sigma_{\mathcal H^{0}} \|V(t)\|^{1-\sigma}_{\mathcal H^1}$, we get
\[\begin{split}
\frac 12 \frac{d}{dt}\|V(t)\|^2_{\mathcal H^0}+\|\nabla V(t)\|^2_{L^2}
&= -
\langle B(v_1(t),V(t)), V(t) \rangle-
\langle B(V(t),v_2(t)), V(t) \rangle
\\
&=\langle B(V(t),V(t)),v_2(t) \rangle\\
&\le \|B(V(t),V(t))\|_{\mathcal H^{-\sigma}} \|v_2(t)\|_{\mathcal H^{\sigma}}\\
&\le C \|V(t)\|_{\mathcal H^{1-\sigma}} \|V(t)\|_{\mathcal H^1} \|v_2(t)\|_{\mathcal H^{\sigma}}\\
&\le C \|V(t)\|^\sigma_{\mathcal H^{0}} \|V(t)\|^{2-\sigma}_{\mathcal H^1} \|v_2(t)\|_{\mathcal H^{\sigma}}\\
&\le
\frac 12\|V(t)\|^{2}_{\mathcal H^1}+C \|v_2(t)\|_{\mathcal H^{\sigma}}^{\frac 2\sigma}
\|V(t)\|^2_{\mathcal H^{0}}\\
&=\frac 12\|V(t)\|^{2}_{\mathcal H^0}+\frac 12\|\nabla V(t)\|^{2}_{L^2}+
C \|v_2(t)\|_{\mathcal H^{\sigma}}^{\frac 2\sigma}\|V(t)\|^2_{\mathcal H^{0}}
\end{split}
\]
From
\[
\frac{d}{dt}\|V(t)\|^2_{\mathcal H^0}\le
C(1+ \|v_2(t)\|_{\mathcal H^{\sigma}}^{\frac 2\sigma})\|V(t)\|^2_{\mathcal H^{0}}
\]
we conclude by Gronwall lemma that $\displaystyle{\sup_{0\le t\le T}}
\|V(t)\|_{\mathcal H^0}=0$.
This proves pathwise uniqueness. \hfill\qe
\section{Introduction}
Recent small-scale experiments \cite{barends2014, corcoles2015, kelly2015, nigg2014} have shown an increasing level of control over quantum systems, constituting an important step towards the demonstration of quantum error correction~\cite{Kitaev2002, Nielsen2010}.
In order to scale up quantum devices and maintain their computational power, one needs to protect logical information from unavoidable errors by encoding it into quantum error-correcting codes \cite{Shor1995}.
One of the most successful classes of quantum codes, stabilizer codes \cite{Gottesman1996}, allows one to detect errors by measuring stabilizer operators without altering the encoded information.
Subsequently, errors can be corrected by implementing a recovery operation.
A classical algorithm, which allows one to find an appropriate correction from the available classical data, i.e., the $\pm 1$ measurement outcomes of stabilizers for the given code, is called a decoder.
Optimal decoding of generic stabilizer codes is a computationally hard problem, even for simple noise models \cite{Iyer2015}.
If codes have some structure, then the task of decoding becomes more tractable and efficient decoders with good performance may be available.
For example, in the case of topological stabilizer codes \cite{Kitaev2003, Bravyi1998, Bombin2006, Bombin2013book, Haah2011}, whose stabilizer generators are geometrically local, any unsatisfied stabilizer returning a $-1$ measurement outcome indicates the presence of errors on some qubits in its neighborhood.
By exploiting this pattern, many decoding schemes have been developed, some of which are based on cellular automata \cite{Harrington2004, Hastings2013, Herold2015, Herold2017, Duivenvoorden2017, Dauphinais2017, Kubicathesis}, the Minimum-Weight Perfect Matching algorithm \cite{Dennis2002, Delfosse2014, Nickerson2017}, tensor networks \cite{Bravyi2014, Darmawan2018}, renormalization group \cite{Duclos-Cianci2013, Duclos-Cianci2013a, Breuckmann2017, Bravyi2011a, Brown2015} or other approaches \cite{Delfosse2017a, Delfosse2017}.
Efficient decoders with good performance are often tailor-made for specific codes and are not easily adaptable to other settings.
For instance, despite a local unitary equivalence of two families of topological codes \cite{Kubica2015}, the color and toric codes, one cannot straightforwardly use toric code decoders in the color code setting; rather, some careful modifications are needed \cite{Delfosse2014, Kubicathesis}.
Moreover, decoding strategies are typically designed and analyzed for simplistic noise models, which may not describe well errors present in the experimental setup.
Importantly, the best approach to scalable quantum devices is still under debate and dominant sources of noise are yet to be thoroughly explored.
Thus, it would be very desirable to develop decoding methods without full characterization of quantum hardware, which are adaptable to various quantum codes and realistic noise models.
\begin{table}[h!]
\centering
\begin{tabular*}
{\columnwidth}{@{\extracolsep{\fill} } l c c c}
\hline\hline
\multicolumn{4}{c}{threshold of the triangular color code}\\
\hline\hline
\diagbox{noise}{decoder} & neural & projection & optimal\\
\hline
bit-/phase-flip & $\sim 19.0\%$ & $\sim 16.2\%$ & $20.6(4)\%$ \cite{Katzgraber2009} \\
depolarizing & $\sim 17.5\%$ & $\sim 12.6\%$ & $18.9(3)\%$ \cite{Bombin2012} \\
NN-depolarizing & $\sim 15.0\%$ & $\sim 13.5\%$ & ? \\
\hline\hline
&&&\\
\hline\hline
\multicolumn{4}{c}{threshold of the triangular toric code with a twist}\\
\hline\hline
\diagbox{noise}{decoder} & neural & MWPM & optimal\\
\hline
bit-/phase-flip & $\sim19.6\%$ & $\sim19.2\%$ & $20.68(4)\%$ \cite{Dennis2002} \\
depolarizing & $\sim17.8\%$ & $\sim15.3\%$ & $18.9(3)\%$ \cite{Bombin2012} \\
NN-depolarizing & $\sim16.7\%$ & $\sim14.2\%$ & ? \\
\hline\hline
\end{tabular*}
\caption{
The error-correction threshold for neural decoders compared with standard decoding methods based on the Minimum-Weight Perfect Matching algorithm and the projection decoder.
Neural decoders were applied to 2D toric and color codes with the code distance up to $d=11$.
Numerical simulations were performed for various noise models, including the nearest-neighbor spatially-correlated depolarizing noise model, assuming perfect syndrome measurements.
Threshold error rates are expressed in terms of the effective error rate $p_{\mathrm{eff}}$; see Section~\ref{sec_noisemodel} for details.
}
\label{tab_thresholds}
\end{table}
The main goal of our work is to systematically explore recently proposed decoding strategies based on artificial neural networks \cite{Torlai2017, Baireuther2017, Krastanov2017, Varsamopoulos2017, Breuckmann2017a}.
We consider two-step decoding.
In step 1, for any given configuration of unsatisfied stabilizers we deterministically find a Pauli operator, which returns corrupted encoded information into the code space.
After this step, all stabilizers are satisfied but a non-trivial logical operator may have been implemented by the attempted Pauli correction combined with the initial error.
In step 2, we use a feedforward neural network to determine what (if any) non-trivial logical operator is likely to be introduced in step 1, so that we can account for it in the recovery.
We emphasize that step 2 is a classification problem, particularly well-suited for machine learning.
In our work, we convincingly demonstrate the versatility of neural decoders by applying them to two families of codes, the two-dimensional (2D) triangular color and toric codes, under different noise models with realistic features, such as spatially-correlated errors.
We observe that, irrespective of the noise models, neural-network decoding outperforms standard strategies, including the Minimum-Weight Perfect Matching algorithm \cite{Dennis2002} and the projection decoder \cite{Delfosse2014}; see Table~\ref{tab_thresholds}.
It is worth emphasizing that only the training datasets, but not the explicit knowledge of the noise models or the geometric structure of the codes, were needed to train neural decoders.
We also analyze how computational costs of training and neural network parameters scale with the growing code distance.
Our work indicates that due to its adaptability neural-network decoding is a promising error-correction method, which can be used in a wide range of future small-scale quantum devices, especially if the dominant sources of errors are not well characterized.
The organization of the article is as follows.
We start by discussing quantum error correction from the perspective of topological codes, the triangular color code and the toric code with a twist.
In particular, in Section~\ref{sec_decodingreduction} we explain how to construct the excitation graph, which leads to an efficient algorithm for step 1 of the neural decoder.
In Section~\ref{sec_noisemodel} we introduce a new notion of the effective error rate, which allows us to easily compare threshold error rates for different noise models.
Then, we describe neural decoding and its performance under different noise models, including the spatially-correlated depolarizing noise.
In Section~\ref{sec_training} we explain how training of deep neural networks is accomplished by successively increasing the error rate used to generate the training dataset.
This training method likely has significant impact, since it may lead to faster convergence and better final performance of neural networks for quantum error-correction applications.
We conclude the article with the discussion of our results and their implications for future neural decoders used in practice.
\section{Error correction with topological codes}
\subsection{Topological stabilizer codes}
Stabilizer codes \cite{Gottesman1996} are an important class of quantum error-correcting codes \cite{Shor1995} specified by a stabilizer group~$\mathcal{S}$.
The stabilizer group $\mathcal{S}$ is an Abelian subgroup of the Pauli group generated by $n$-qubit Pauli operators $P_1\otimes\ldots \otimes P_n$, where $P_i \in \{I,X,Y,Z\}$ and $-I \not\in\mathcal{S}$.
The logical information is encoded into the codespace, which is the $(+1)$-eigenspace of all the elements of $\mathcal{S}$.
Logical Pauli operators $\overline L \in \mathcal{L}$ are identified with elements of the normalizer of the stabilizer group $\mathcal{S}$ in the Pauli group.
An operator $L$ implementing a non-trivial logical Pauli operator $\overline L \neq \overline I$ can be chosen to be a product of Pauli operators which commutes with all the elements of the stabilizer group but does not belong to $\mathcal{S}$.
The weight of the minimal-support non-trivial logical Pauli operator determines the distance of the code.
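The commutation structure underlying these definitions can be made concrete in a few lines. The following sketch (ours, not from the paper) uses the standard binary symplectic representation of $n$-qubit Paulis, in which two operators commute exactly when their symplectic product vanishes mod 2:

```python
# Sketch (ours, not from the paper): n-qubit Paulis in the standard
# binary symplectic representation.  Up to phase, P = X^a Z^b is stored
# as bit vectors (a, b); two Paulis commute iff their symplectic product
# a1.b2 + a2.b1 vanishes mod 2.

def pauli(s):
    """Encode a Pauli string such as 'XIZ' as the bit vectors (a, b)."""
    a = [1 if c in 'XY' else 0 for c in s]
    b = [1 if c in 'ZY' else 0 for c in s]
    return a, b

def symplectic_product(p1, p2):
    """Return 0 if the two Paulis commute, 1 if they anticommute."""
    (a1, b1), (a2, b2) = p1, p2
    return (sum(x * z for x, z in zip(a1, b2))
            + sum(x * z for x, z in zip(a2, b1))) % 2
```

For instance, $X$ and $Z$ on the same qubit anticommute, while $X\otimes X$ and $Z\otimes Z$ commute because the two single-qubit anticommutations cancel.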
Physical qubits of the stabilizer code can be affected by noise, which can take encoded logical information outside of the codespace.
By measuring stabilizer generators no information about the original encoded state is revealed.
Rather, one effectively projects errors present in the system onto some Pauli operators and subsequently gains some knowledge about them.
The set of unsatisfied stabilizers returning a $-1$ measurement outcome is called a syndrome.
The syndrome serves as a classical input to a decoding algorithm, which allows one to find a recovery Pauli operator bringing the corrupted encoded state back to the codespace.
For a special class of stabilizer codes, the CSS codes~\cite{Calderbank1997}, whose stabilizer generators are products of either $X$- or $Z$-type Pauli operators, one can independently correct $Z$- and $X$-type errors using the appropriate $X$- and $Z$-type syndrome.
Topological stabilizer codes \cite{Kitaev2003, Bravyi1998, Bombin2006, Bombin2013book, Haah2011} are a family of stabilizer codes exhibiting particularly good resilience to noise.
The distinctive feature of topological stabilizer codes is the geometric locality of their generators.
Namely, physical qubits can be arranged to form a lattice in such a way that every stabilizer generator is supported on a constant number of qubits within some geometrically local region.
At the same time, no logical Pauli operator can be implemented via a unitary acting on physical qubits in any local region.
By enlarging the system size, one increases the distance and error-correction capabilities of the topological code without changing the required complexity of local stabilizer measurements.
This is in stark contrast with other quantum codes, such as concatenated codes \cite{knill1996}, whose stabilizer weight necessarily increases with the distance and thus makes those constructions experimentally more challenging.
\begin{figure}[h!]
(a)\includegraphics[width=.7\columnwidth]{figures/CC_logicalOps_v2}
(b)\includegraphics[width=.7\columnwidth]{figures/TC_logicalOps_v2}
\caption{
(a) 2D triangular color code on a patch of the hexagonal lattice with $3$-valent vertices and $3$-colorable faces.
Every face supports both $X$- and $Z$-stabilizers.
The string of Pauli $Z$ operators (yellow $\oplus_1$) implements a logical~$\overline Z$ operator, while the string of Pauli $X$ operators (orange $\otimes_2$) implements a logical~$\overline X$.
Both operators connect all three boundaries.
(b)
2D triangular toric code with a twist.
Dark and white faces support $X$- and $Z$-stabilizers, respectively.
Depending on the coloring of mixed dark/white faces along a 1D defect line (dashed line), stabilizers are mixed products of Pauli $X$ and $Z$.
Red and blue strings depict two equivalent representatives of a logical~$\overline Z$ operator.
Upon crossing the defect line, the string changes from $X$-type (blue $\otimes_1$) to $Z$-type (blue $\oplus_1$).
}
\label{fig_logicalOps}
\end{figure}
Two well-known examples of topological stabilizer codes are the toric and color codes.
The triangular color code is defined on a two-dimensional lattice with a boundary, whose vertices are 3-valent
\footnote{All the vertices are 3-valent except for three corner vertices on the boundary.}
and faces $f\in F$ are 3-colorable; see Fig.~\ref{fig_logicalOps}(a).
Qubits are identified with vertices.
The color code is a CSS code and its stabilizer group is defined as follows
\begin{equation}
\mathcal{S}_{CC} = \langle X_f,Z_f | f\in F \rangle,
\end{equation}
where $X_f$ and $Z_f$ are Pauli $X$ and $Z$ operators supported on all qubits belonging to a face $f\in F$.
Accordingly, $X$- and $Z$-type errors can be independently corrected using the $Z$- and $X$-type syndrome.
The triangular toric code with a twist \cite{Yoder2017} can be defined for the same arrangement of physical qubits as the triangular color code.
Its lattice can be obtained from the color code lattice by keeping all the vertices, adding extra edges and modifying some faces; see Fig.~\ref{fig_logicalOps}(b).
The resulting lattice is 4-valent
\footnote{All the vertices are 4-valent except for three corner vertices on the boundary and one vertex in the bulk, which corresponds to a twist, i.e., the end of the defect line.}
and the faces are 2-colorable, except for the ``mixed'' faces along a 1D defect line.
The color of the face indicates the type of the stabilizer generator identified with that face.
Namely, dark $f\in F_D$ and white $g\in F_W$ faces support $X$-type $X_f$ and $Z$-type $Z_g$ stabilizers.
Depending on the coloring of mixed faces $h\in F_M$, stabilizers $S_h$ are defined to be mixed products of Pauli $X$ and $Z$ operators.
We emphasize that the choice of mixed stabilizer generators along the defect line is needed for the stabilizers $S_h$ to commute with $X_f$ and $Z_g$ for all $f\in F_D, g\in F_W, h\in F_M$. The full stabilizer group is thus given by
\begin{equation}
\mathcal{S}_{TC} = \langle X_f,Z_g, S_h | f\in F_D, g\in F_W, h\in F_M\rangle.
\end{equation}
We remark that due to mixed stabilizer generators it is not possible to decode $X$ and $Z$ errors independently.
Logical Pauli operators of the 2D topological stabilizer codes can be thought of as deformable non-contractible 1D string-like operators.
In the case of the triangular color and toric codes, logical operators connect certain boundaries as depicted in Fig.~\ref{fig_logicalOps}.
\subsection{Quasiparticle excitations}
It is illustrative to establish a connection between quantum error-correcting codes and quantum many-body systems described by commuting Hamiltonians.
For a topological stabilizer code with the stabilizer group $\mathcal{S}$ we can define a commuting \emph{stabilizer Hamiltonian} $H(\mathcal{S})$ to be a sum of stabilizer generators of $\mathcal{S}$ with a negative sign.
In particular, for the color code and the toric code with a twist we choose their stabilizer Hamiltonians to be
\begin{eqnarray}
H_{CC} &=& -\sum_{f\in F}X_f -\sum_{f\in F} Z_f, \\
H_{TC} &=& -\sum_{f \in F_D} X_f -\sum_{g \in F_W} Z_g -\sum_{h \in F_M} S_h.
\end{eqnarray}
Note that all the terms in the stabilizer Hamiltonian $H(\mathcal{S})$ are mutually commuting, thus any eigenstate of $H(\mathcal{S})$ has to be an eigenstate of every single term.
Since eigenstates of stabilizer generators can only have $\pm 1$ eigenvalues, we conclude that the code space defined as the $(+1)$-eigenspace of all the elements of $\mathcal{S}$ coincides with the ground space of $H(\mathcal{S})$.
We can think of errors affecting information encoded in the topological stabilizer code as operators creating localized quasiparticle excitations in the related quantum many-body system.
Namely, consider any Pauli error which anticommutes with some stabilizer generators.
The error moves the encoded logical state outside the code space or, equivalently, the ground state outside the ground space.
The resulting state is excited in the sense that its energy is larger than the ground space energy by an amount proportional to the number of violated stabilizer Hamiltonian terms.
The unsatisfied stabilizer terms can be identified with quasiparticle excitations
\cite{Wilczek1982, Kitaev2003, Preskill1999, Bombin2007}.
Depending on whether the unsatisfied stabilizer is of $X$- or $Z$-type, we call the excitation electric $e_K$ or magnetic $m_K$.%
\footnote{For the mixed stabilizers along the defect line, there is an ambiguity in assigning a type to the excitation, since electric and magnetic excitations are exchanged upon crossing the defect line.
Thus, we refer to those excitations without specifying their type.
}
The subscript $K$ indicates the color of the face supporting the excitation.
In particular, for the toric code we can only have $e_D$ and $m_W$, whereas the color code excitations can be supported on faces of any color, i.e., $e_K$ and $m_K$ for any $K\in \{ R, G, B\}$.
In order to understand excitation configurations arising from any Pauli errors, it suffices to know what excitations geometrically local Pauli operators can create and how to combine them.
We now discuss these constraints, also known as \emph{fusion rules} for topological stabilizer codes.
In the case of the toric code, a single-qubit Pauli $X$ or $Z$ error on a qubit in the bulk of the system violates two $Z$- or $X$-type stabilizers on neighboring faces and thus necessarily creates two excitations of the same type, either magnetic or electric; see Fig.~\ref{fig_toricandcolor}(b).
If two errors with non-overlapping support independently create the same excitation on a face $f\in F$, then the product of both errors will not create any excitation at that location.
For an illustration, let us consider two single-qubit errors $X_i$ and $X_j$ on qubits $i$ and $j$ belonging to the edge $\{ i,j \}$.
Each error independently creates a magnetic excitation on the face $f$ containing the edge $\{ i,j \}$; however, the combined error $X_i X_j$ results in no excitation on $f$.
The above discussion can be summarized by the toric code fusion rules
\begin{equation}
e_D \times e_D = m_W \times m_W = 1,
\label{eq_TCfusion}
\end{equation}
which express the fact that in the bulk excitations of the same type can only be created (by geometrically local operators) or annihilated in pairs.
Note that $1$ denotes no excitation.
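The pairwise fusion rule can be verified with a toy parity model (our illustration, not the paper's lattice): $Z$-type stabilizer faces are sets of qubits, and a set of $X$-errors violates a face exactly when the overlap has odd parity. A single bulk error then lights two adjacent faces, while the product of two errors sharing an edge lights neither, reflecting $m_W \times m_W = 1$:

```python
# Toy parity model (ours) of the toric code fusion rule m x m = 1:
# a face is violated iff the X-error support overlaps it on an odd
# number of qubits.

def syndrome(faces, error_qubits):
    """Names of the faces with odd overlap with the error support."""
    return {name for name, qubits in faces.items()
            if len(qubits & error_qubits) % 2 == 1}

# two toric-code-like faces sharing the edge {2, 3}
faces = {'f': frozenset({0, 1, 2, 3}), 'g': frozenset({2, 3, 4, 5})}
```

Here `syndrome(faces, {2})` lights both faces, whereas `syndrome(faces, {2, 3})` lights none, since the two excitations created on each shared face fuse to the vacuum.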
\begin{figure}
(a)\includegraphics[width=.7\columnwidth]{figures/CC_lattice_v2}
(b)\includegraphics[width=.7\columnwidth]{figures/TC_lattice_v2}
\caption{
Quasiparticle excitations in the 2D triangular color and toric codes.
(a) A single $X$-error (white $\otimes_1$) in the bulk of the color code leads to three unsatisfied $Z$-stabilizers on neighboring faces, and thus creates a triple of magnetic excitations (red, green and blue $\boxplus_1$).
A string of $X$-errors (white $\otimes_2$) creates a pair of magnetic excitations (red $\boxplus_2$).
A string of $Z$-errors (white $\oplus_3$) terminating at the blue boundary creates a single electric excitation (blue $\boxtimes_3$).
(b) A single $Z$-error (white $\oplus_1$) in the bulk of the toric code with a twist leads to two unsatisfied $X$-stabilizers on neighboring dark faces, and thus creates a pair of electric excitations (gray $\boxtimes_1$).
A single $X$-error (white $\otimes_2$) on the rough boundary creates a single magnetic excitation (white $\boxplus_2$).
A pair of electric (gray $\boxtimes_3$) and magnetic (white $\boxplus_3$) excitations can be created by a string of errors (white $\otimes_3$ and $\oplus_3$) across the defect line (dashed line).
}
\label{fig_toricandcolor}
\end{figure}
The fusion rules for the color code are slightly more complicated than for the toric code. Namely, we have
\begin{eqnarray}
e_K \times e_K = m_K \times m_K &=& 1,
\label{eq_fusion1}\\
e_R \times e_G \times e_B = m_R \times m_G \times m_B &=& 1,
\label{eq_fusion2}
\end{eqnarray}
where $K\in\{R,G,B\}$.
As for the toric code, combining two excitations of the same type and color results in no excitation.
However, in the bulk of the color code it is also possible to create (by a local operator) or annihilate a \emph{triple} of excitations.
We can see that by considering a single-qubit Pauli $X$ or $Z$ error.
It violates three $Z$- or $X$-type stabilizers on neighboring red, green and blue faces and thus creates a triple of magnetic or electric excitations; see Fig.~\ref{fig_toricandcolor}(a).
The topological stabilizer codes we consider are defined on lattices with boundaries.
By acting with a local Pauli operator on the qubits near the boundary of the system it is possible to create or annihilate a \emph{single} magnetic or electric excitation.
We emphasize that the type of the boundary determines the type of the allowed excitation \cite{Levin2013}.
For the triangular toric code, there are two types of boundaries, rough or smooth \cite{Bravyi1998}, and a single electric (respectively magnetic) excitation can only be created on the rough (smooth) boundary; see Fig.~\ref{fig_toricandcolor}(b).
In the case of the triangular color code, there are three types of boundaries, red, green or blue \cite{Bombin2006}, and single electric and magnetic excitations of a given color can be created on the boundary of the matching color; see Fig.~\ref{fig_toricandcolor}(a).
Once a quasiparticle excitation is created, it can always be moved in the bulk of the 2D topological stabilizer code by applying an appropriate 1D string-like Pauli operator \cite{Bombin2014}.
Given the fusion rules, the excitation movement can be understood as a process of creating pairs of excitations along some path and fusing them with the initial one, which results in the excitation changing its position.
When a quasiparticle excitation moves, its type does not change, unless it passes through a \emph{defect line}.
A defect line, also known as a transparent domain wall\footnote{
A transparent domain wall can be thought of as an automorphism of the excitation labels which preserves the braiding and fusion rules of the quasiparticle excitations.}
\cite{Bombin2010, Bombin2011, Kitaev2012}, is a 1D object, along which the stabilizer generators are appropriately modified.
In the case of the triangular toric code with a twist, one chooses stabilizers on faces intersected by the defect line to be mixed products of Pauli $X$ and $Z$ operators; see Fig.~\ref{fig_logicalOps}(b).
When an electric excitation $e_D$ crosses the defect line, it becomes a magnetic excitation $m_W$, and vice versa, $e_D \leftrightarrow m_W$.
We emphasize that logical Pauli operators for the triangular color and toric codes can be implemented by creating a single excitation on one of the boundaries and transporting it to the other boundary, where it can annihilate; see Fig.~\ref{fig_logicalOps} for examples of logical operators.
We remark that there are only two possible types of defect lines in the toric code, one of which is trivial.
However, in the case of the color code, there are 72 different defect lines \cite{Yoshida2015}.
We encourage readers to explore \cite{Kesselring2018} for an illuminating discussion of all the possible boundaries and defect lines in the 2D color code.
\subsection{Decoding of topological codes as a classification problem}
\label{sec_decodingreduction}
\begin{figure}[h!]
(a)\includegraphics[width=.8\columnwidth]{figures/CC_hypergraph_v2}
(b)\includegraphics[width=.8\columnwidth]{figures/TC_hypergraph_v2}
\caption{
Construction of the excitation graph $G = (V,E)$ for (a) the color code and (b) the toric code with a twist.
For every face $f$ of the topological code lattice we add a vertex $v_f$ to the set of vertices $V$ of $G$.
We also include the boundary vertex $w$ (enclosing circle) in $V$.
(a) It is not possible to move a single excitation in the bulk (without creating more excitations) by applying a single-qubit operator.
However, since a two-qubit operator $XX$ or $ZZ$ can move an excitation between two nearby faces $f$ and $g$ of the same color, we add an edge $\{ v_f,v_g \}$ to $E$.
(b) A single-qubit Pauli $X$ or $Z$ error can move an excitation between two neighboring faces $f$ and $g$ of the same color, thus we add an edge $\{ v_f,v_g \}$ between $v_f$ and $v_g$ to the set of edges $E$ of $G$.
We connect a vertex $v_f$ with the boundary vertex $w$ if one can create a single excitation on $f$ by (a) a single- or two-qubit operator and (b) a single-qubit operator.
Note that in (a) we depict only a part of the excitation graph corresponding to electric excitations and $Z$-type errors, since the part for magnetic excitations and $X$-type errors is identical.
}
\label{fig_hypergraph}
\end{figure}
As we already discussed, generic errors affect the encoded information by moving it outside the code space, which results in some stabilizers being unsatisfied.
A classical algorithm which takes the syndrome as an input and finds an appropriate recovery restoring all stabilizers to yield $+1$ measurement outcome is called a \emph{decoder}.
For stabilizer codes the recovery operator is a Pauli operator.
We say that decoding is successful if no non-trivial logical operator has been implemented by the recovery combined with the error.
We can view decoding as a process of removing quasiparticle excitations from the system and returning the state to the ground space of the stabilizer Hamiltonian.
To facilitate the discussion, we introduce an \emph{excitation graph} $G = (V,E)$, which captures how the excitations can be moved (and eventually removed) within the lattice of the topological stabilizer code.
The vertices $V$ of the excitation graph $G$ correspond to the (possible locations of) quasiparticle excitations.
Note that there is one vertex for every possible electric excitation, as well as one for every possible magnetic excitation.
We also include in $V$ one special vertex $w$, called the boundary vertex.
Two different vertices $v_1,v_2\in V \setminus \{ w \}$ are connected by an edge $\{v_1,v_2\}\in E$ if there is a Pauli operator $P_{v_1, v_2}$ with geometrically local support which can move an excitation from $v_1$ to $v_2$ without creating any other excitations.
We say that $v\in V\setminus \{ w \}$ and the boundary vertex $w$ are connected by an edge $\{v,w\}$ if one can locally create a single excitation at $v$.
In the case of the toric and color codes, we restrict our attention to local operators supported on one qubit or at most two neighboring qubits, respectively.
We identify the edges $\{v_1, v_2\}$ in $E$ with the local operators $P_{v_1,v_2}$.
We illustrate how to construct the excitation graph in Fig.~\ref{fig_hypergraph}.
We consider a very simple deterministic procedure, the excitation removal algorithm, which efficiently eliminates quasiparticle excitations from the toric and color codes.
Let $Q$ be some Pauli error operator, which results in the excitation configuration $U \subset V \setminus \{ w \}$ in the system.
The input of the algorithm is $U$, but not $Q$.
For every excitation $u\in U$ we find the shortest path $(v_1, v_2, \ldots, v_n)$ in the excitation graph $G$ between $u = v_1$ and the boundary vertex $w = v_n$, where $v_i\in V$ and $\{ v_i,v_{i+1}\} \in E$.
We define an operator $P_{u}$ to be a product of local Pauli operators $P_{v_i,v_{i+1}}$ identified with the edges $\{ v_i,v_{i+1}\}$ along the path $(v_1, v_2, \ldots, v_n)$, namely $P_u = \prod_{i=1}^{n-1}P_{v_i,v_{i+1}}$.
The operator $P_u$ moves an excitation from $u$ to the boundary where it is annihilated.
As the output of the algorithm we choose an operator $R_U = \prod_{u\in U} P_u$.
We remark that the operator $R_U$ returns the state to the ground space since it removes all the excitations, and thus $R_U Q \in \mathcal{L}$.
At the same time, the output $R_U$ combined with the initial error $Q$ likely implements some non-trivial logical operator.
Thus, the excitation removal algorithm viewed as a decoder would perform rather poorly.
\begin{algorithm}
\caption{excitation removal}
\SetKwInOut{Require}{Require}
\vspace*{2pt}
\Require{the excitation graph $G = (V,E)$}
\vspace*{2pt}
\KwIn{positions $U\subset V \setminus \{ w\} $ of excitations}
\vspace*{2pt}
\KwOut{Pauli operator $R_U$ removing all excitations}
\vspace*{2pt}
initialize $R_U \leftarrow I$\\
\vspace*{2pt}
for every $u\in U$:\\
\begin{enumerate}
\item find the shortest path $(v_1, \ldots, v_n)$ in $G$ between $u = v_1$ and the boundary vertex $w = v_n$\\
\item find an operator $P_u = P_{v_1,v_2} \cdot\ldots\cdot P_{v_{n-1},v_n}$ corresponding to the path $(v_1, \ldots, v_n)$\\
\item $R_U \leftarrow R_U P_u$\\
\end{enumerate}
\vspace*{-2pt}
\KwRet{$R_U$}
\vspace*{2pt}
\end{algorithm}
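The excitation removal algorithm can be sketched as follows (a toy encoding of ours, not the paper's code: the excitation graph is an adjacency dictionary, each edge carries its local operator $P_{v_1,v_2}$ as a set of (qubit, Pauli) factors, and products of the self-inverse Pauli factors are taken by symmetric difference):

```python
# Sketch of the excitation removal algorithm (the graph encoding and the
# Pauli bookkeeping below are our assumptions, not the paper's code).
from collections import deque

def shortest_path(adj, start, goal):
    """Breadth-first search; returns the vertex sequence from start to goal."""
    prev, frontier = {start: None}, deque([start])
    while frontier:
        v = frontier.popleft()
        if v == goal:
            path = []
            while v is not None:
                path.append(v)
                v = prev[v]
            return path[::-1]
        for u in adj[v]:
            if u not in prev:
                prev[u] = v
                frontier.append(u)
    raise ValueError('boundary vertex unreachable')

def remove_excitations(adj, edge_ops, excitations, boundary='w'):
    """Accumulate the edge operators along each excitation's path to w."""
    recovery = set()                       # R_U <- I
    for u in excitations:
        path = shortest_path(adj, u, boundary)
        for v1, v2 in zip(path, path[1:]):
            # multiplying self-inverse Pauli factors = symmetric difference
            recovery ^= edge_ops[frozenset((v1, v2))]
    return recovery

# toy excitation graph: face f1 -- face f2 -- boundary vertex w
adj = {'f1': ['f2'], 'f2': ['f1', 'w'], 'w': ['f2']}
edge_ops = {frozenset(('f1', 'f2')): {(0, 'X')},
            frozenset(('f2', 'w')): {(1, 'X')}}
```

On this toy graph, an excitation on `f1` is routed through `f2` to the boundary, and when both faces are excited the shared edge operator cancels out of the product, exactly as in the algorithm above.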
Now we explain how to reduce the decoding problem to a classification problem by using the excitation removal algorithm.
The task of classification is to assign labels, typically from some small set, to the elements of some high-dimensional dataset.
In the decoding problem, we know positions $U \subset V \setminus \{ w \}$ of the excitations and want to find a recovery operator removing all the excitations and implementing the trivial logical operator.
We do not know, however, the Pauli operator $Q$ resulting in the excitation configuration $U$.
Using the excitation removal algorithm we easily find the operator $R_U$.
Clearly, we would be able to successfully decode if we chose $R_U L$ as a recovery operator, where $L$ is any operator implementing the same logical operator $\overline L\in\mathcal{L}$ as $R_U Q$.
Unfortunately, there are many different error operators creating the same configuration of excitations $U$.
We can split all those error operators $Q$ into equivalence classes identified with different logical operators $\overline L$ implemented by $R_U Q$.
Then, for any given excitation configuration $U$ we can find the most probable equivalence class of errors creating $U$.
What we would like to achieve is to label $U$ by a logical operator $\overline L$, which is implemented by the output $R_U$ of the excitation removal algorithm and any operator $Q$ from the most probable class of errors.
Such a problem is well-suited for machine learning techniques, in particular for artificial neural networks.
We defer further discussion of the classification problem to Section~\ref{sec_neuraldecoder}, where we explain it in the context of neural-network decoding.
\subsection{Noise models and thresholds}
\label{sec_noisemodel}
In order to test the versatility of neural decoders, we numerically simulate their performance for various noise models.
In particular, we consider the following three Pauli error models specified by just one parameter, the error rate $p$.
\begin{itemize}
\item \emph{Bit-/phase-flip noise}: every qubit is independently affected by an $X$ error with probability $p$, and by a $Z$ error with the same probability $p$.
\item \emph{Depolarizing noise}: every qubit is independently affected with probability $p$ by an error, which is uniformly chosen from three errors $X$, $Y$ and $Z$.
\item \emph{NN-depolarizing noise}: the spatially-correlated depolarizing noise on nearest-neighbor qubits, i.e., every pair of qubits $i$ and $j$ sharing an edge in the lattice is independently affected with probability $p$ by a non-trivial error, which is uniformly chosen from 15 errors of the form $P_i P_j$, where $P_i,P_j \in \{ I,X,Y,Z\}$ and $P_i P_j\neq II$.
\end{itemize}
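The three models are straightforward to sample; the following sketch is our illustration (the qubit/edge bookkeeping and the use of Python's `random` module are assumptions, not the paper's code), with simultaneous $X$ and $Z$ on one qubit recorded as $Y$ up to a phase:

```python
# Illustrative samplers for the three noise models (ours, not the
# paper's code).  Simultaneous X and Z on one qubit is recorded as Y,
# up to a phase.
import random

PAULIS = ('X', 'Y', 'Z')

def sample_bit_phase_flip(n_qubits, p, rng):
    """Independent X and Z errors, each with probability p per qubit."""
    errors = {}
    for q in range(n_qubits):
        e = ('X' if rng.random() < p else '') + ('Z' if rng.random() < p else '')
        if e:
            errors[q] = 'Y' if e == 'XZ' else e
    return errors

def sample_depolarizing(n_qubits, p, rng):
    """With probability p per qubit, a uniform choice among X, Y, Z."""
    return {q: rng.choice(PAULIS) for q in range(n_qubits) if rng.random() < p}

def sample_nn_depolarizing(edges, p, rng):
    """With probability p per edge, a uniform non-trivial two-qubit Pauli."""
    two_qubit = [a + b for a in 'IXYZ' for b in 'IXYZ' if a + b != 'II']
    return {e: rng.choice(two_qubit) for e in edges if rng.random() < p}
```

Each sampler returns a dictionary mapping affected qubits (or edges) to the Pauli error drawn for them.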
We emphasize that one should not necessarily think of the aforementioned noise models as accurately describing errors in the experimental setup.
Rather, we choose those models since they are easy to specify and simulate but, at the same time, they also capture realistic noise features, such as spatial correlations of errors, which any good decoder should be able to handle \cite{Nickerson2017}.
In addition, in currently proposed circuit-based models for syndrome measurement~\cite{Fowler2012}, correlated errors across neighboring qubits would naturally arise.
We would like to easily compare the bit-/phase-flip, depolarizing and NN-depolarizing noise models.
However, the error rate $p$ has a different meaning depending on the considered model.
This motivates us to introduce a new figure of merit for Pauli error models, the \emph{effective error rate} $p_{\mathrm{eff}}$.
For any physical qubit we define the effective error rate $p_{\mathrm{eff}}$ to be the probability of any non-trivial error affecting that qubit.
Note that in the scenarios we consider the effective error rate is the same for all the qubits (except for the ones identified with the corner vertices and the twist for the NN-depolarizing noise).
Thus, we can unambiguously talk about the effective error rate without specifying which qubit we are referring to.
For the depolarizing noise we simply have $p_{\mathrm{eff}} = p$, whereas for the bit-/phase-flip noise we find $p_{\mathrm{eff}} = 1- (1-p)(1-p) = 2p-p^2$.
In the case of the NN-depolarizing noise, the effective error rate depends on the local structure of the lattice.
Namely, if $n$ denotes the number of nearest neighbors for some qubit, then the effective error rate $p_{\mathrm{eff}}^{(n)}$ for that qubit can be recursively calculated as
\begin{eqnarray}
p_{\mathrm{eff}}^{(n)} &=& p_{\mathrm{eff}}^{(n-1)}\left(1-\frac{4}{15}p\right) + \left(1- p_{\mathrm{eff}}^{(n-1)} \right)\frac{12}{15}p\\
&=& \frac{4}{5} np + O(p^2),
\end{eqnarray}
where we use $p_{\mathrm{eff}}^{(0)} = 0$ and denote by $O(p^2)$ corrections of second and higher order in $p$.
In particular, for the analyzed color and toric code lattices we respectively have $p_{\mathrm{eff}}^{(3)}$ and $p_{\mathrm{eff}}^{(4)}$.
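The recursion above is straightforward to evaluate numerically. The following sketch (the function names are ours, purely illustrative) computes $p_{\mathrm{eff}}^{(n)}$ exactly and compares it with the first-order approximation $\frac{4}{5}np$:

```python
def effective_error_rate(n, p):
    """Exact effective error rate for a qubit with n nearest neighbors
    under NN-depolarizing noise with edge error rate p.
    Per edge: an existing non-trivial error is cancelled with probability
    (4/15)p, while a trivial error becomes non-trivial with probability
    (12/15)p."""
    p_eff = 0.0  # p_eff^(0) = 0: no neighbors, no error
    for _ in range(n):
        p_eff = p_eff * (1 - 4 / 15 * p) + (1 - p_eff) * (12 / 15 * p)
    return p_eff


def effective_error_rate_approx(n, p):
    """First-order approximation p_eff^(n) ~ (4/5) n p."""
    return 4 / 5 * n * p
```

For the color code lattice one evaluates `effective_error_rate(3, p)` and for the toric code lattice `effective_error_rate(4, p)`.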
In order to assess the performance of a decoder for the given family of codes with growing code distance $d$ and specified noise model, we use the quantity called the \emph{error-correction threshold}.
The error-correction threshold is defined as the largest $p_{\mathrm{th}}$, such that for all effective error rates $p_{\mathrm{eff}} < p_{\mathrm{th}}$ the probability of unsuccessful decoding $p_{\mathrm{fail}}(p_{\mathrm{eff}}, d)$ for the code with distance $d$ goes to zero in the limit of infinite code distance,
$\lim_{d\rightarrow \infty} p_{\mathrm{fail}}(p_{\mathrm{eff}}, d) = 0$.
Note that in the definition of the threshold we assume perfect stabilizer measurements.
We remark that one typically estimates the threshold $p_{\mathrm{th}}$ by plotting the decoder failure probability $p_{\mathrm{fail}}(p_{\mathrm{eff}}, d)$ as a function of the effective error rate $p_{\mathrm{eff}}$ for different code distances $d$ and identifying their crossing point; see Figs.~\ref{fig_thresholds_CC}~and~\ref{fig_thresholds_TC}.
\section{Performance of neural-network decoding}
\label{sec_results}
\subsection{Neural decoders}
\label{sec_neuraldecoder}
We have already seen in Section~\ref{sec_decodingreduction} that the task of successful decoding can be deterministically reduced to the following problem:
for any configuration of excitations $U \subset V \setminus \{ w\}$ created by some unknown Pauli operator $Q$ assign a label $\overline{L}$ from the set of logical operators $\mathcal{L}$, such that $\overline L$ is the logical operator implemented by
$R_U Q$, where $R_U$ is the output of the excitation removal algorithm with $U$ as the input.
We approach this classification problem by using one of the leading machine learning techniques, feedforward neural networks.
For each code of distance $d$, we train a neural network consisting of $H_d+2$ layers; see Fig.~\ref{fig_neuralnetwork}.
The input layer encodes the configuration of excitations $U$.
Then, there are $H_d$ hidden layers, each containing $N_d$ nodes.
Nodes from layer $l+1$ are fully connected with nodes from the preceding layer $l$.
Every node $\nu$ in layer $l+1$ evaluates an activation function $\sigma (w_\nu \cdot o_{l} + b_\nu)$ on the output $o_l$ of nodes from layer $l$, where $w_\nu$ and $b_\nu$ are the weights and biases associated with the node $\nu$.
We choose the rectified linear unit activation function $\sigma (x) = \max (0,x)$.
The output layer uses the softmax classifier, which converts an output vector to a discrete probability distribution describing the likelihood of different logical operators $\overline{L}\in\mathcal{L}$ being implemented by $R_U Q$.
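A minimal sketch of such a forward pass, in plain NumPy, is given below. The layer sizes are toy values, not the trained parameters of the paper; the weights are drawn with He initialization, which is the scheme used later in Section~\ref{sec_training}.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - np.max(x))  # shift for numerical stability
    return e / e.sum()

def forward(u, weights, biases):
    """Evaluate the network on an excitation configuration u.
    Hidden layers use the rectified linear unit; the output layer
    applies a softmax over the logical classes."""
    o = u
    for W, b in zip(weights[:-1], biases[:-1]):
        o = relu(W @ o + b)
    return softmax(weights[-1] @ o + biases[-1])

# Toy example: 6 input nodes, two hidden layers of 8 nodes, 4 outputs
rng = np.random.default_rng(0)
sizes = [6, 8, 8, 4]
weights = [rng.normal(scale=np.sqrt(2 / m), size=(n, m))  # He initialization
           for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]
probs = forward(np.array([1, 0, 0, 1, 0, 0], dtype=float), weights, biases)
```

The output `probs` is a discrete probability distribution over the four logical operators $\overline I, \overline X, \overline Y, \overline Z$.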
\begin{figure}[h!]
\includegraphics[width=.95\columnwidth]{figures/NeuralNetwork_v2}
\caption{
A feedforward neural network with $H_d = 3$ hidden layers.
Each hidden layer has the same number of nodes $N_d$.
Nodes from layer $l+1$ are fully connected with nodes from the preceding layer $l$.
The input layer encodes the initial excitation configuration $U \subset V \setminus \{ w\}$.
The output layer encodes the likelihood of each logical operator $\overline L \in \{ \overline I, \overline X, \overline Y, \overline Z \}$ assigned to the input configuration $U$.
}
\label{fig_neuralnetwork}
\end{figure}
We are now ready to describe neural-network decoding for topological stabilizer codes.
The neural decoder is an algorithm which returns a recovery operator $R$ for any configuration of excitations $U\subset V \setminus \{ w\}$ created by some unknown operator $Q$.
We emphasize that error operators $Q$ are chosen according to some a priori unknown noise model.
The neural decoders we consider consist of the following two steps.
In step 1, we use a simple deterministic procedure, the excitation removal algorithm, to find a Pauli operator $R_U$, which removes quasiparticle
excitations by moving them to the boundaries of the system, where they disappear.
In step 2, we use a neural network to estimate which errors $Q$ most likely resulted in $U$, and hence which logical operator $\overline L$ is implemented by $R_U Q$.
As the output, the operator $R_U L$ is returned, where $L$ is any operator implementing the logical operator $\overline L$.
We emphasize that the neural decoder always returns a valid recovery operator but decoding succeeds if and only if the neural network correctly identifies the logical operator $\overline L$ implemented by $R_U Q$.
Moreover, determining the output of the trained neural network is efficient since it reduces to matrix multiplication.
We see that in step 1 we implicitly make use of the excitation graph, which contains information about the topological code lattice and the fusion rules.
However, no information about the topological code is required to train the neural network, which is used in step 2.
\begin{algorithm}
\caption{neural decoder}
\SetKwInOut{Require}{Require}
\vspace*{2pt}
\Require{excitation removal algorithm, trained neural network}
\vspace*{2pt}
\KwIn{locations of excitations $U\subset V \setminus \{ w\}$ created by some unknown operator $Q$}
\vspace*{2pt}
\KwOut{recovery operator $R$}
\vspace*{2pt}
using the excitation removal algorithm with $U$ as the input, find an operator $R_U$\\
\vspace*{2pt}
using the neural network with $U$ as the input, find the logical operator $\overline{L}$ implemented by $R_U Q$\\
\vspace*{2pt}
$R \leftarrow R_U L$, where $L$ is any operator implementing $\overline L$\\
\vspace*{2pt}
\KwRet{$R$}
\vspace*{2pt}
\end{algorithm}
We emphasize that the details of step 1 in the neural decoder do not matter as long as the returned operator $R_U$ is found in an efficient deterministic way.
We choose the excitation removal algorithm because it is simple and has an intuitive explanation --- it removes all the excitations by moving them to the boundaries of the system.
We point out that we could use a similar version of the neural decoder for other topological codes (or even codes without geometric structure), as long as we knew how to efficiently find the operator $R_U$.
For instance, if we considered the toric or color codes on a torus, with or without boundaries, then we could always find a simple removal procedure which deterministically moves all excitations of the same color to the same location in the bulk or on the boundary, where they are guaranteed to disappear.
Such a procedure can then be used to create the training dataset for the neural network.
We remark that step 1 becomes more challenging for codes without string-like operators, such as the cubic code~\cite{Haah2011}.
\subsection{Training deep neural networks}
\label{sec_training}
Before a neural network can be used for decoding, it needs to be trained.
We do this via supervised learning, where the network is trained on a dataset of preclassified samples.
Sample Pauli errors are generated using Monte Carlo sampling according to the appropriate probability distribution determined by the noise model.
For each generated error configuration $Q$, we determine the corresponding syndrome, i.e., the excitation configuration $U\subset V \setminus \{ w \}$, which is the input to the neural network.
Then, using the excitation removal algorithm, we find the Pauli operator $R_U$, and check what logical operator $\overline{L}$ is implemented by $R_U Q$.
This allows us to label each input excitation configuration $U$ with the corresponding classification label $\overline{L}$ we want the neural network to output.
We remark that the testing samples used to numerically estimate thresholds are created in the same way as the training samples.
Training the neural network can now be framed as a minimization problem.
The network parameters, i.e., the weights and biases, are optimized to minimize classification error on the training dataset.
We use the categorical cross entropy cost function $C$ to quantify the error, namely
\begin{equation}
C = -\sum_i \left[ \vec{y_i}\cdot \log\left(\vec{f}(\vec{x_i})\right) + (\vec{1}-\vec{y_i}) \cdot \log \left(\vec{1} - \vec{f}(\vec{x_i})\right) \right],
\label{eq_cost}
\end{equation}
where $\vec{y_i}$ is the classification bit-string for the input $\vec{x_i}$, $\vec{f}(\vec{x_i})$ is the likelihood vector returned by the neural network, and $\vec{1} = (1,\ldots,1)$.
Importantly, this cost function is differentiable, which allows us to use backpropagation to efficiently compute the gradient of the cost function with respect to network parameters in a single backwards pass of the network.
The minimization is performed using Adam optimization \cite{kingma2014adam}, a highly effective variant of gradient descent, whose learning parameters do not need to be fine-tuned for good performance.
In practice, we find that Adam optimization converges significantly faster than standard gradient descent, with the effects becoming more pronounced for larger networks.
Instead of computing the cost function on the entire training set, which becomes computationally expensive for very large datasets, we use mini-batch optimization.
This is a standard technique, which estimates the cost function on individual batches, i.e., small subsets of the training datasets; see e.g.~\cite{hinton2012}.
We define a training step as one round of backpropagation and a subsequent network parameter update, using the cost function
$C$ in Eq.~(\ref{eq_cost}) estimated on a single batch.
The batch size controls the accuracy of this estimate and needs to be manually adjusted.
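A single training step can be sketched as follows. For clarity we use a simplified single-layer softmax classifier with plain gradient descent (rather than Adam and the deep architecture described above); this is an illustration of the mini-batch cross-entropy update, not the paper's actual training code.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def training_step(W, b, X, Y, lr=0.1):
    """One mini-batch update minimizing the cross-entropy cost.
    X: (batch, n_in) inputs; Y: (batch, n_classes) one-hot labels.
    Returns the cost estimated on this batch; W and b are updated
    in place."""
    F = softmax(X @ W + b)                     # likelihood vectors f(x_i)
    cost = -np.mean(np.sum(Y * np.log(F + 1e-12), axis=1))
    grad = (F - Y) / X.shape[0]                # gradient w.r.t. the logits
    W -= lr * X.T @ grad
    b -= lr * grad.sum(axis=0)
    return cost

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 10))                  # one batch of 32 samples
Y = np.eye(4)[rng.integers(0, 4, size=32)]     # one-hot classification labels
W, b = np.zeros((10, 4)), np.zeros(4)
costs = [training_step(W, b, X, Y) for _ in range(50)]
```

With zero-initialized parameters the initial cost equals $\log 4$ (uniform guessing over four classes), and repeated steps decrease the cost on the batch.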
Until recently, training deep neural networks was next to impossible.
However, innovations by the machine learning community have made it feasible to train very deep networks.
We too were unable to successfully train networks with more than three hidden layers, until we implemented two of these improvements: He initialization and batch normalization.
He initialization \cite{he2015delving} ensures that learning is efficient for the rectified linear unit activation function, whereas batch normalization \cite{ioffe2015batch} stabilizes the input distribution for each layer.
Batch normalization makes it possible to train deeper networks and also improves performance on shallower three-layer networks.
The training set is generated according to the noise model and some chosen error rates.
Once the neural network is trained, it should be able to successfully label syndromes for error configurations generated at various error rates below the threshold.
In particular, any fine-tuning of the network for specific error rates is not desired.
Since the error syndromes for higher error rates are in general more challenging to classify, it would be desirable to train the neural network mainly on configurations corresponding to error rates close to the threshold.
However, during training of the networks for higher-distance codes and correlated noise models the optimization algorithm is very likely to get stuck in local minima if we start training on the high error-rate dataset directly.
This problem is manifested in the network not effectively learning the noise features and the resulting performance showing only small improvements over random guessing.
A solution we propose is to first pre-train the network on a lower error-rate dataset, and only then use the training data corresponding to the near-threshold error rate; the change of error rate does not have to be very gradual.
We believe that this is an important observation for any future implementations of neural networks for decoding quantum error-correcting codes.
We also speculate that a similar strategy might help to speed up training of neural networks for experimental systems.
Namely, we imagine pretraining the neural network for some simple theoretical error models at low error rates, and then using the experimental data for further training.
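Schematically, the proposed schedule is just a loop over increasing error rates. The rates and the `train_on` hook below are hypothetical placeholders, not values used in the paper:

```python
def curriculum(train_on, rates=(0.05, 0.10, 0.15), steps_per_rate=10_000):
    """Pretrain at lower error rates before the near-threshold rate.
    train_on(p, steps) is assumed to generate Monte Carlo samples at
    error rate p and run the given number of training steps on them."""
    for p in rates:
        train_on(p, steps_per_rate)

# Record the schedule with a stub training function
log = []
curriculum(lambda p, s: log.append((p, s)), rates=(0.02, 0.08), steps_per_rate=5)
```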
\begin{table}[h!]
\centering
\begin{tabular*}
{\columnwidth}{@{\extracolsep{\fill} } l c c c c c}
\hline\hline
\multicolumn{6}{c}{training cost for the triangular color code}\\
\hline\hline
\diagbox{noise}{parameters} & $d$ & $H_d$ & $N_d$ & $B_d$ & $T_d$ \\
\hline
bit-/phase-flip & 5 & 3 & 100 & $10^3$ & $3 \times 10^4$ \\
& 7 & 5 & 200 & $ 5 \times10^3$ & $5 \times 10^4$ \\
& 9 & 7 & 400 & $10^4$ & $1.1 \times 10^5$ \\
& 11 & 9 & 800 & $10^4$ & $2.1 \times 10^5$ \\
depolarizing & 5 & 3 & 200 & $10^4$ & $1.1 \times 10^5$ \\
& 7 & 5 & 600 & $10^4$ & $3 \times 10^5$ \\
& 9 & 7 & 1400 & $10^4$ & $4.1 \times 10^5$ \\
NN-depolarizing & 5 & 3 & 200 & $5 \times 10^3$ & $6 \times 10^4$ \\
& 7 & 5 & 400 & $10^4$ & $1.1 \times 10^5$ \\
& 9 & 7 & 800 & $10^4$ & $2.1 \times 10^5$ \\
& 11 & 9 & 1600 & $10^4$ & $4.1 \times 10^5$ \\
\hline\hline
&&&\\
\hline\hline
\multicolumn{6}{c}{training cost for the triangular toric code with a twist}\\
\hline\hline
\diagbox{noise}{parameters} & $d$ & $H_d$ & $N_d$ & $B_d$ & $T_d$ \\
\hline
bit-/phase-flip & 5 & 3 & 100 & $10^3$ & $3 \times 10^4$ \\
& 7 & 5 & 200 & $10^4$ & $6 \times 10^4$ \\
& 9 & 7 & 400 & $10^4$ & $1.6 \times 10^5$ \\
& 11 & 9 & 800 & $10^4$ & $2.6 \times 10^5$ \\
depolarizing & 5 & 3 & 200 & $5 \times 10^3$ & $3 \times 10^4$ \\
& 7 & 5 & 600 & $10^4$ & $1.1 \times 10^5$ \\
& 9 & 7 & 1200 & $10^4$ & $2.1 \times 10^5$\\
NN-depolarizing & 5 & 3 & 200 & $5 \times10^3$ & $6 \times 10^4$ \\
& 7 & 5 & 400 & $10^4$ & $1.1 \times 10^5$ \\
& 9 & 7 & 800 & $10^4$ & $2.1 \times 10^5$ \\
& 11 & 9 & 1600 & $10^4$ & $4.1 \times 10^5$\\
\hline\hline
\end{tabular*}
\caption{
Optimal neural-network hyperparameters of the neural decoder for the triangular color code (top) and the triangular toric code with a twist (bottom) with distance $d$ under different noise models.
Hyperparameters varied are: the number of hidden layers $H_d$, the number of nodes in the hidden layer $N_d$, the batch size $B_d$ and the number of training steps~$T_d$.
The total number of training samples seen during training is $B_d T_d$.
}
\label{tab_hyperparameters}
\end{table}
\subsection{Selecting neural-network hyperparameters}
In addition to network parameters, there are also hyperparameters which cannot be trained via backpropagation.
These include the number of hidden layers $H_d$, the number of nodes per hidden layer $N_d$, the size of each batch $B_d$, and the total number of training steps $T_d$.
We optimize these parameters using a grid search based approach; see Table~\ref{tab_hyperparameters} for the optimal values we find.
A heuristic rule for determining the size of a well-performing neural network for the code with distance $d$ is to use $H_d = d-2$ hidden layers and $N_d \propto 2^{d/2}$ nodes per layer.
Whether or not this exponential trend continues for larger code sizes is an open question.
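This heuristic can be made concrete as follows; the normalization `base_nodes=100` is chosen here to match the bit-/phase-flip rows of Table~\ref{tab_hyperparameters} and is an illustration, not a prescription:

```python
def network_size(d, base_nodes=100):
    """Heuristic network size for a code of distance d:
    H_d = d - 2 hidden layers and N_d proportional to 2^(d/2)
    nodes per layer (normalized so that N_5 = base_nodes)."""
    H = d - 2
    N = int(base_nodes * 2 ** ((d - 5) / 2))
    return H, N
```

For instance, `network_size(7)` gives 5 hidden layers of 200 nodes, matching the table.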
We notice that very large training sets are needed for optimal performance.
In order to save on computational memory, we choose to generate training samples in parallel to training, since it can be done efficiently.
Note that with this strategy the number of different samples seen during training is $B_d T_d$.
We observe that the training time appears to scale exponentially with code distance, approximately doubling as the distance increases by two.
We find evidence that there is some minimal batch size below which the gradient estimates are too noisy for the network to converge to a solution that outperforms random guessing.
However, increasing the batch size beyond that minimal value does not improve the final network performance.
Rather, it reduces the number of training steps needed for convergence, but with diminishing returns.
The batch size we choose is primarily optimized to minimize the training time.
\subsection{Thresholds of neural decoders}
\begin{figure*}[ht!]
\centering
(a)\includegraphics[width= 0.29\textwidth]{figures/CC_BF_Neural_log}\quad
(b)\includegraphics[width= 0.29\textwidth]{figures/CC_DP_Neural_log}\quad
(c)\includegraphics[width= 0.29\textwidth]{figures/CC_NN_Neural_log}
(d)\includegraphics[width= 0.29\textwidth]{figures/CC_BF_Proj_log}\quad
(e)\includegraphics[width= 0.29\textwidth]{figures/CC_DP_Proj_log}\quad
(f)\includegraphics[width= 0.29\textwidth]{figures/CC_NN_Proj_log}
\caption{
The failure probability $p_{\mathrm{fail}}(p_{\mathrm{eff}},d)$ of (a)-(c) the neural decoder and (d)-(f) the projection decoder for the 2D triangular color code of distance $d$ as a function of the effective error rate $p_{\mathrm{eff}}$.
We consider three noise models: (a),(d) bit-/phase-flip, (b),(e) depolarizing and (c),(f) NN-depolarizing.
We report that the neural decoder outperforms the projection decoder for all types of noise, exhibiting a threshold near the optimal one.
}
\label{fig_thresholds_CC}
\end{figure*}
\begin{figure*}[ht!]
\centering
(a)\includegraphics[width= 0.29\textwidth]{figures/TC_BF_Neural_log}\quad
(b)\includegraphics[width= 0.29\textwidth]{figures/TC_DP_Neural_log}\quad
(c)\includegraphics[width= 0.29\textwidth]{figures/TC_NN_Neural_log}
(d)\includegraphics[width= 0.29\textwidth]{figures/TC_BF_MWPM_log}\quad
(e)\includegraphics[width= 0.29\textwidth]{figures/TC_DP_MWPM_log}\quad
(f)\includegraphics[width= 0.29\textwidth]{figures/TC_NN_MWPM_log}
\caption{
The failure probability $p_{\mathrm{fail}}(p_{\mathrm{eff}},d)$ of (a)-(c) the neural decoder and (d)-(f) the Minimum-Weight Perfect Matching decoder for the 2D triangular toric code with a twist of distance $d$ as a function of the effective error rate $p_{\mathrm{eff}}$.
We consider three noise models: (a),(d) bit-/phase-flip, (b),(e) depolarizing and (c),(f) NN-depolarizing.
We report that the neural decoder significantly outperforms the Minimum-Weight Perfect Matching decoder for noise models with correlated errors and exhibits a threshold near the optimal one.
}
\label{fig_thresholds_TC}
\end{figure*}
In order to assess the versatility of neural-network decoding, we qualitatively study its performance for the toric and color codes under three different noise models: bit-/phase-flip, depolarizing and NN-depolarizing.
First, we train a neural network for every code with the code distance up to $d=11$.
The optimized hyperparameters of considered neural networks are presented in Table~\ref{tab_hyperparameters}.
Then, we numerically find the decoder failure probability $p_{\mathrm{fail}}(p_{\mathrm{eff}},d)$ of the neural decoder as a function of the effective error rate $p_{\mathrm{eff}}$.
By plotting the decoder failure probability $p_{\mathrm{fail}}(p_{\mathrm{eff}},d)$ for different code distances $d$ and finding their intersection we numerically establish the existence of non-zero threshold for the neural decoder and estimate its value; see Figs.~\ref{fig_thresholds_CC}~and~\ref{fig_thresholds_TC}.
We benchmark the performance of the neural decoder against the leading efficient decoders of the toric and color code.
In particular, we analyze the standard decoders based on the Minimum-Weight Perfect Matching algorithm and the projection decoder.
In our implementation, we use the Blossom V algorithm provided by Kolmogorov \cite{Kolmogorov2009}.
We report that the neural decoder for the color code significantly outperforms the projection decoder for all considered noise models, even for the simplest bit-/phase-flip noise model.
The neural decoder threshold values we find approach the upper bounds from the maximum-likelihood decoder.
The neural decoder for the toric code shows performance comparable to the Minimum-Weight Perfect Matching decoder for the bit-/phase-flip noise, but offers noticeable improvements for correlated noise models.
We remark that optimal decoding thresholds for topological codes can be found via statistical-mechanical mapping; see \cite{Dennis2002, Katzgraber2009, Bombin2012, Kubica2017}.
The threshold values we find are expressed in terms of the effective error rate $p_{\mathrm{eff}}$ and are listed in Table~\ref{tab_thresholds}.
As with all learning models, it is important to address the possibility of overfitting.
We know that the test samples are different (with high probability) from the training samples, since they are randomly chosen from a set that scales exponentially with the number of physical qubits.
We remark that the required training set seems to scale exponentially with the code distance; however, it constitutes a vanishing fraction of all possible syndrome configurations.
Moreover, the classification accuracy on the test samples is the same as the final training accuracy.
Thus, we can conclude that the neural network learns to correctly label syndromes typical for the studied noise models, resulting in well-performing neural decoders.
\section{Discussions}
We have conclusively demonstrated that neural-network decoding for topological stabilizer codes is very versatile and clearly outperforms leading efficient decoders.
We focused on the triangular color code and the toric code with a twist, whose physical qubits are arranged in the same way but whose stabilizer groups are different.
We studied the performance of neural-network decoding for different noise models, including the spatially-correlated depolarizing noise.
In particular, we numerically established the existence of non-zero threshold and found significant improvements of the color code threshold over the previously reported values; see Table~\ref{tab_thresholds} and Figs.~\ref{fig_thresholds_CC}~and~\ref{fig_thresholds_TC}.
This result indicates that the relatively low threshold of the color code, which was considered to be one of its main drawbacks, can be easily increased, making quantum computation with the color code more appealing than initially perceived \cite{Wang2010, Fowler2011, Landahl2014}.
We emphasize that the neural network does not explicitly use any information about the topological code or the noise model.
The neural network is trained on very simple data usually available from the experiment, which includes the information about the measured syndrome and whether the simple deterministic decoding, i.e., the excitation removal algorithm, succeeds.
Importantly, this raw data can not only be used to train the neural network, but also to characterize the quantum device \cite{Combes2014}.
Without assuming any simplistic noise models the neural network efficiently detects the actual error patterns in the system and subsequently ``learns'' about the correlations between observed errors.
This provides a heuristic explanation why neural decoding is currently the best strategy to decode the color code, since the correlations between errors in the color code are difficult to account for in standard approaches \cite{Delfosse2014a}.
Using neural networks simplifies and speeds up the process of designing good decoders, which is rather challenging due to its heavy dependency on the choice of the quantum error-correcting code as well as the noise model.
Our results show that neural-network decoding can be successfully used for quantum error-correction protocols, especially in the systems affected by a priori unknown noise with correlated errors.
We stress that neural-network decoding already provides an enormous data-compression advantage over methods based on (partial) look-up tables, even for small-distance quantum codes.
However, an important question of scalability has to be addressed if neural decoders are ever going to be used for practical purposes on future fault-tolerant universal quantum devices.
One possible approach to scalable neural networks is to reduce the connectivity between the layers by exploiting the information about the topological code lattice and geometric locality of stabilizer generators.
We imagine incorporating convolutional neural networks as well as some renormalization ideas in the future scalable neural decoders.
Also, a fully-fledged neural decoder should account for the possibility of faulty stabilizer measurements \cite{chamberland2018}.
We do not perceive any fundamental reasons why neural-network decoding, possibly based on recurrent neural networks, would not work for the circuit level noise model.
However, in that setting the training dataset as well as the size of the required neural network grow substantially, making the training process computationally very challenging.
\begin{acknowledgments}
We would like to thank Ben Brown, Jenia Mozgunov and John Preskill for valuable discussions, as well as Evert van Nieuwenburg for his feedback on this manuscript.
During the preparation of the manuscript two related preprints were made available \cite{Davaasuren2018, Jia2018}, however their scope and emphasis are different from our work.
NM acknowledges funding provided by the Caltech SURF program.
AK acknowledges funding provided by the Simons Foundation through the ``It from Qubit'' Collaboration. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation.
TJ acknowledges the support from the Walter Burke Institute for Theoretical Physics in the form of the Sherman Fairchild Fellowship. The authors acknowledge the support from the Institute for Quantum Information and Matter~(IQIM).
\end{acknowledgments}
\bibliographystyle{apsrev4-1}
\section{Introduction \label{sec:Intro}}
The analysis of big data is one of the most important challenges we currently face.
A typical problem concerns partitioning the data based on some notion of similarity.
When the method makes use of a (usually small) subset of the data for which there are labels then this is known as a \emph{classification problem}.
When the method only uses geometric features, i.e., there are no a priori known labels, then this is known as a \emph{clustering problem}.
We refer to both problems as labelling problems.
A popular method to represent the geometry of a given data set is to construct a graph embedded in an ambient space $\mathbb{R}^d$.
Typically the labelling task is fulfilled via a minimization procedure.
In the machine learning community, successfully implemented approaches include minimizing graph cuts and total variation (see, for instance, \cite{arona, boykov, breslau, breslau2, breslau3, bres, ekaber, shi00, szlambress, szlambress2, slepcev17}).
Of central importance for evaluating a labelling method is whether or not it is \emph{consistent}; namely, it is desirable that the minimization procedure approaches some limit minimization method when the number of elements of the data set goes to infinity.
Indeed, knowing whether a specific minimization strategy is an approximation of a limit (minimizing) object can help explain properties of the finite data algorithm.
For a consistent methodology properties of the large data limit will be evident when a large, but finite, number of data points is being considered.
In particular, this can also be used to justify, a posteriori, the use of a certain procedure in order to obtain some desired features of the classification.
Furthermore, understanding the large data limits can open up new algorithms. \vspace{0.5\baselineskip}
This paper is part of an ongoing project aimed at justifying analytically the consistency of several models for soft labelling used by practitioners.
Here we consider a generalization of the approach introduced by Bertozzi and Flenner in \cite{bertozzi12} (see also \cite{calatroni17}, for an introduction on this topic see \cite{vanGenCarola}), where a Ginzburg-Landau (or Modica-Mortola, see \cite{modica87,modica77}) type functional is used as the underlining energy to minimize in the context of the soft classification problem.
The functional we consider is a discretisation of the non-local Ginzburg-Landau functional studied by Alberti and Bellettini~\cite{alberti98,alberti98a} with the generalisation that we consider
non-uniform densities and $\ell^p$ (rather than $\ell^2$) cost on finite differences.
Our goal is to prove the consistency of the model.
There are multiple extensions to the approach we consider here; for instance our Ginzburg-Landau functional is based on the $p$-Laplacian, one can also consider the normalised $p$-Laplacian or the random walk Laplacian~(see~\cite{shi00, ng01}).
Further open problems concern the extension to multi-phase labelling (see \cite{BertozziFlenner}) and the convergence of the associated gradient flows. \vspace{0.5\baselineskip}
The paper is organized as follows: in the following subsection we define the discrete model, and in Subsection~\ref{subsec:Intro:InfinModel} we define the continuum limiting problem.
The main results are given in Section~\ref{sec:MainRes} with the proofs presented in Sections~\ref{sec:ConvGraph} and~\ref{sec:Const}.
In Section~\ref{subsec:Intro:LitRev} we give an overview on the related literature.
In Sections~\ref{subsec:Intro:pEx} and~\ref{subsec:Intro:AnisoEx} we include two examples with the purpose of demonstrating key properties of our functional; in particular, how the choice of $p$ affects minimizers of our Ginzburg-Landau functional, and an application to image segmentation.
Section~\ref{sec:Back} contains some preliminary material we include for the convenience of the reader.
Finally, Section~\ref{sec:ConvNLContinuum} is devoted to the proofs of some technical results that are of interest in their own right, and are later used in the proofs in Section~\ref{sec:ConvGraph}.
\subsection{Finite Data Model \label{subsec:Intro:FinModel}}
In the graph representation of a data set, vertices are points $X_n:=\{x_i\}_{i=1}^n\subset X$, where $X\subset \mathbb{R}^d$ is a connected, bounded and open set, with weighted edges $\{W_{ij}\}_{i,j=1}^n$, where each $W_{ij}\geq0$ is meant to represent the similarity between the vertices $x_i$ and $x_j$, and in some sense encodes the geometry.
The larger $W_{ij}$ is, the more similar the points $x_i$ and $x_j$ are, and ``the closer they are on the graph''.
Let us consider the problem of partitioning a set of data into two classes.
A partition of the set of points $X_n$ is a map $u:X_n\to\{-1,1\}$, where $-1$ and $1$ represent the two classes.
This is referred to as \emph{hard} labelling, since $u$ can only assume a finite number of values.
From the computational point of view it is preferable to work with functions whose values range in the whole interval $[-1,1]$, \emph{i.e.}, labellings $u:X_n\to[-1,1]$, thus allowing for a \emph{soft} labelling.
Labels that are close to $1$, or to $-1$, are supposed to be in the same class.
The model used to obtain the binary classification should then force the labelling to be either $1$ or $-1$ when the number of data points is large.
In order to scale the weights on the edges of the graph we define $W_{ij}$ through a kernel $\eta:\mathbb{R}^d\to\mathbb{R}$.
More precisely, we define the graph weights by $W_{ij} = \eta_\varepsilon(x_i-x_j) = \frac{1}{\varepsilon^d} \eta((x_i-x_j)/\varepsilon)$ where $\varepsilon$ controls the scale of interactions on the graph; in particular choosing $\varepsilon$ large implies the graph is dense, and choosing $\varepsilon$ small implies the graph is disconnected.
Later assumptions, see Remark~\ref{rem:MainRes:Connected}, imply that we scale $\varepsilon=\varepsilon_n$ such that the graph is eventually connected (with probability one).
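As a sketch, the weight matrix \eqref{eq:wij} for a sampled point cloud can be assembled as follows. The Gaussian profile used for $\eta$ is an illustrative choice; the paper only assumes $\eta$ satisfies the kernel assumptions stated later.

```python
import numpy as np

def graph_weights(points, eps, eta=lambda r2: np.exp(-r2)):
    """W_ij = eta_eps(x_i - x_j) = eps^{-d} eta((x_i - x_j)/eps),
    for a radial profile eta evaluated on squared distances |.|^2/eps^2."""
    n, d = points.shape
    diff = points[:, None, :] - points[None, :, :]   # pairwise differences
    r2 = np.sum(diff**2, axis=-1) / eps**2
    return eta(r2) / eps**d

# 50 sample points in the unit square, interaction scale eps = 0.3
pts = np.random.default_rng(2).uniform(size=(50, 2))
W = graph_weights(pts, eps=0.3)
```

As expected of similarity weights, the resulting matrix is symmetric and non-negative, and shrinking `eps` concentrates the weight on nearby pairs.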
We now introduce the discrete functional we are going to study.
\begin{mydef}
For $p\geq 1$ and $n\in\mathbb{N}$ define the functional $\mathcal{G}_n^{(p)}:L^1(X_n) \to [0,\infty)$ by
\[
\mathcal{G}_n^{(p)}(u) := \frac{1}{\varepsilon_n n^2} \sum_{i,j=1}^n W_{ij} |u(x_i) - u(x_j)|^p + \frac{1}{\varepsilon_n n} \sum_{i=1}^n V(u(x_i))\,,
\]
where
\begin{equation}\label{eq:wij}
W_{ij} := \eta_{\varepsilon_n}(x_i-x_j) := \frac{1}{\varepsilon_n^d}\eta\left(\frac{x_i-x_j}{\varepsilon_n}\right)
\end{equation}
and $V$ is a double well potential (see Assumptions~(B1-4)).
\end{mydef}
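To make the definition concrete, the following Python sketch (illustrative only; all names are ours) evaluates $\mathcal{G}_n^{(p)}$ on a random point cloud, taking $\eta$ to be the indicator of the unit ball, which satisfies (C1-4), and the prototypical double well $V(s)=(s^2-1)^2$.

```python
import numpy as np

def G_n(u, x, eps, p=2):
    """Evaluate the discrete functional G_n^{(p)} of the definition above,
    with eta = indicator of the unit ball (so eta_eps = eps^{-d} chi_{|h|<=eps})
    and the prototypical double well V(s) = (s^2 - 1)^2."""
    n, d = x.shape
    dist = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2)
    W = (dist <= eps).astype(float) / eps**d          # graph weights W_ij
    interaction = (W * np.abs(u[:, None] - u[None, :])**p).sum() / (eps * n**2)
    potential = ((u**2 - 1.0)**2).sum() / (eps * n)
    return interaction + potential

rng = np.random.default_rng(0)
x = rng.random((200, 2))                              # point cloud X_n in [0,1]^2
u_hard = np.where(x[:, 0] < 0.5, -1.0, 1.0)           # hard labelling: no potential cost
u_soft = 0.5 * u_hard                                 # soft labelling: pays the potential term
print(G_n(u_hard, x, eps=0.15), G_n(u_soft, x, eps=0.15))
```

As $\varepsilon_n\to 0$ the potential term of the soft labelling grows like $1/\varepsilon_n$, while the hard labelling only pays for edges crossing the interface.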
The first term in $\mathcal{G}_n^{(p)}$ penalises oscillations: intuitively, one wants a labelling such that if $x_i$ and $x_j$ are close on the graph then their labels are also close.
This term, when $p=2$, can also be written as $\frac{1}{\varepsilon_n n} \langle u,L u\rangle_{\mu_n}$ where $L$ is the graph Laplacian.
The second term penalises soft labellings.
In particular, we assume that $V(t)=0$ if and only if $t\in \{\pm 1\}$ and $V(t)>0$ for all $t\neq \pm 1$.
Hence any soft labelling incurs a penalty of $\frac{1}{\varepsilon_n n}\sum_{i=1}^n V(u(x_i))$; as $\varepsilon_n\to 0$ this penalty blows up unless $u$ takes the values $\pm 1$ almost everywhere.
The function $\eta$ plays the role of a mollifier, which explains the definition of $\eta_{\varepsilon_n}$.
Moreover, to justify the scaling $\frac{1}{\varepsilon_n}$ we reason as follows: assuming $\eta$ has support contained in a ball, for neighbouring points (\emph{i.e.}, $|x_i-x_j|\lesssim\varepsilon_n$) and smooth $u$ we get
\[
|u(x_i) - u(x_j)|^p\sim \varepsilon_n^{p}|\nabla u|^p\,.
\]
Dividing by $\varepsilon_n$ thus gives the typical form of the singular perturbation used in the gradient theory of phase transitions (see \cite{modica87}), namely
\[
\int_X \frac{1}{\varepsilon_n}V(u)+\varepsilon_n^{p-1}|\nabla u|^p\,\mathrm{d}x\,.
\]
The consistency of the model is studied using $\Gamma$-convergence (see Section \ref{sec:Gammaconv}), an important tool introduced by De Giorgi in the 1970s to understand the limiting behaviour of sequences of functionals (see \cite{DG}).
This kind of variational convergence gives, almost immediately, convergence of minimizers.
\subsection{Infinite Data Model \label{subsec:Intro:InfinModel}}
In order to define the limiting functional, we first introduce some notation.
\begin{mydef}
Let $\nu\in\mathbb{R}^d$. Define $\nu^\perp:=\{ z\in \mathbb{R}^d \, : \, z\cdot \nu = 0 \}$.
Moreover, for $x\in\mathbb{R}^d$, set
\[
\mathcal{C}(x,\nu) := \left\{ C\subset\nu^\perp \,:\, C \text{ is a } (d-1)\text{-dimensional cube centred at } x \right\} \,.
\]
For $C\in\mathcal{C}(x,\nu)$, we denote by $v_1,\dots,v_{d-1}$ its principal directions (where each $v_i$ is a unit vector normal to the $i^\text{th}$ face of $C$),
and we say that a function $u:\mathbb{R}^d\to\mathbb{R}$ is \emph{$C$-periodic} if $u(y+rv_i) = u(y)$ for all $y\in\mathbb{R}^d$, all $r\in\mathbb{N}$ and all $i=1,\dots,d-1$.
Finally, we consider the following space of functions:
\[
\mathcal{U}(C,\nu) := \left\{ u:\mathbb{R}^d\to [-1,1] \, : \, u \text{ is } C\text{-periodic, } \lim_{y\cdot\nu\to \infty} u(y) = 1, \text{ and } \lim_{y\cdot\nu\to -\infty} u(y) = -1 \right\} \,.
\]
\end{mydef}
We now define the limiting (continuum) model.
\begin{mydef}
Let $p\geq1$ and $X\subset \mathbb{R}^d$ be open and bounded, and $\rho\in L^\infty(X)$ a positive function.
Define the functional $\mathcal{G}_\infty^{(p)}:L^1(X)\to [0,\infty]$ by
\[
\mathcal{G}_\infty^{(p)}(u):=
\left\{
\begin{array}{ll}
\displaystyle\int_{\partial^*\{u=1\}} \sigma^{(p)}(x,\nu_u(x)) \rho(x) \, \mathrm{d} \mathcal{H}^{d-1}(x) & \text{if } u\in BV(X;\{\pm 1\})\,, \\
&\\
+\infty & \text{else}\,,
\end{array}
\right.
\]
where
\[
\sigma^{(p)}(x,\nu) := \inf\left\{ \frac{1}{\mathcal{H}^{d-1}(C)} G^{(p)}(u,\rho(x),T_C) \, : \, C\in \mathcal{C}(x,\nu)\,, u\in \mathcal{U}(C,\nu) \right\}\,,
\]
and, for $C\in \mathcal{C}(x,\nu)$, we set $T_C:= \left\{ z+t\nu \, : \, z\in C, t\in \mathbb{R} \right\}$.
Finally, for $\lambda\in\mathbb{R}$ and $A\subset\mathbb{R}^d$ define
\[
G^{(p)}(u,\lambda,A):= \lambda \int_{A} \int_{\mathbb{R}^d} \eta(h) |u(z+h) - u(z)|^p \, \mathrm{d} h \, \mathrm{d} z + \int_{A} V(u(z)) \, \mathrm{d} z\,.
\]
Here $\partial^*\{u=1\}$ denotes the reduced boundary of $\{u=1\}$ and $\nu_u(x)$ is the measure theoretic exterior normal to the set $\{u=1\}$ at the point $x\in \partial^*\{u=1\}$ (see Definition \ref{eq:defnormal}).
\end{mydef}
Notice that the discrete functional $\mathcal{G}_n^{(p)}$ is nonlocal while the functional $\mathcal{G}_\infty^{(p)}$ is local.
The minimization problem defining $\sigma^{(p)}$ is called the \emph{cell problem} and is common in phase transition problems (see related works in Section~\ref{subsec:Intro:LitRev}).
Although $\sigma^{(p)}$ is not explicit, we at least know the form of the limiting functional: an anisotropic weighted perimeter.
This shows that minimizers of $\mathcal{G}_\infty^{(p)}$ are sets $E\subset X$ whose boundary $\partial E$ (or, to be precise, the reduced boundary $\partial^* E$) will likely be in the region where $\rho$ is small and orthogonal to directions $\nu$ for which $\sigma^{(p)}(x,\nu)$ is small. \vspace{0.5\baselineskip}
Finally, we want to point out that one of the main issues we have to deal with is that, for each $n\in\mathbb{N}$, the data set $X_n$ is a discrete set, while in the limit the data is given by a probability measure $\mu$ on the set $X$; this is why we call $\mathcal{G}_\infty^{(p)}$ the continuum model.
Thus, we will need to compare functions (the labellings) defined on different sets. To do so we will implement the strategy introduced by Garc\'{i}a Trillos and Slep\v{c}ev in \cite{garciatrillos16}, which consists in extending a function $u:X_n\to\mathbb{R}$ to a function $v:X\to\mathbb{R}$ in an optimal piecewise constant way, where optimal is meant in the sense of optimal transportation.
In particular, a sequence of functions $\{u_n\}_{n=1}^\infty$ with $u_n\in L^1(X_n)$, is said to converge in the $TL^1$ topology to a function $u\in L^1(X)$ if there exists a sequence $\{T_n\}_{n=1}^\infty\subset L^1(X;X_n)$ converging to the identity map in $L^1(X)$ and with
\[ \mu(T^{-1}_n(B)) = \frac{1}{n} \# \left\{ x_i\in B \, : \, i=1,2,\dots, n\right\} \]
for every Borel set $B\subset X$, such that $u_n\circ T_n\to u$ in $L^1(X)$.
We review the $TL^1$ topology in more detail in Section~\ref{subsec:Prelim:Trans}.
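As a toy illustration of this notion (with $\mu$ the uniform measure on $(0,1)$, a regular point cloud, and all names our own), the following Python sketch builds such a map $T_n$ and checks both the pushforward condition and the convergence $u\circ T_n\to u$ in $L^1$.

```python
import numpy as np

def transport_map(x_sorted, t):
    """T_n for mu = Unif(0,1): map [(i-1)/n, i/n) to the i-th smallest data
    point, so that mu(T_n^{-1}(B)) = (1/n) #{x_i in B} for every Borel B."""
    n = len(x_sorted)
    return x_sorted[np.minimum((t * n).astype(int), n - 1)]

u = lambda t: t**2                          # a function on X = (0, 1)
grid = (np.arange(20000) + 0.5) / 20000     # quadrature nodes for L^1 integrals

for n in [10, 100, 1000]:
    x = (np.arange(n) + 0.5) / n            # a regular point cloud X_n
    Tn = transport_map(x, grid)
    push = abs(np.mean(Tn < 0.3) - np.mean(x < 0.3))   # pushforward check on B=(0,0.3)
    tl1 = np.mean(np.abs(u(Tn) - u(grid)))             # ||u o T_n - u||_{L^1}
    print(n, push, tl1)
```

The $L^1$ discrepancy shrinks like $1/n$ here, while the pushforward identity holds exactly for every $n$.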
\subsection{Main Results \label{sec:MainRes}}
This section is devoted to the precise statements of the main results of this paper.
Let $X\subset \mathbb{R}^d$ be a bounded, connected and open set with Lipschitz boundary.
Fix $\mu\in \mathcal{P}(X)$ and assume the following.
\begin{itemize}
\item[(A1)] $\mu \ll \mathcal{L}^d$ with a continuous density $\rho:X\rightarrow[c_1,c_2]$, for some $0<c_1\leq c_2<\infty$.
\end{itemize}
We extend $\rho$ to a function defined in the whole space $\mathbb{R}^d$ by setting $\rho(x):=0$ for $x\in\mathbb{R}^d\setminus X$.
For all $n\in\mathbb{N}$, consider a point cloud $X_n=\{x_i\}_{i=1}^n\subset X$ and let $\mu_n$ be the associated empirical measure (see Definition \ref{def:empmeas}).
Let $\{\varepsilon_n\}_{n=1}^\infty$ be a positive sequence converging to zero and such that the following rate of convergence holds:
\begin{itemize}
\item[(A2)] $\displaystyle \frac{\mathrm{dist}_\infty(\mu_n,\mu)}{\varepsilon_n} \to 0\,,$ where $\mathrm{dist}_\infty(\mu_n,\mu)$ is the $\infty$-Wasserstein distance between the measures $\mu_n$ and $\mu$, see Definition~\ref{def:Back:Trans:Wass}.
\end{itemize}
\begin{remark}
\label{rem:MainRes:Connected}
When $x_i\stackrel{\mathrm{iid}}{\sim} \mu$ then (with probability one), Assumption~(A2) is implied by $\varepsilon_n \gg \delta_n$,
where $\delta_n$ is defined in Theorem \ref{thm:optrate}.
Notice that for $d\geq 3$ this lower bound on $\varepsilon_n$ ensures that the graph with vertices $X_n$ and edges weighted by $W_{ij}$ (see \eqref{eq:wij}) is eventually connected (see \cite[Theorem 13.2]{penrose}).
The lower bound can potentially be improved when $x_i$ are not independent.
For example, if $\{x_i\}_{i=1}^n$ form a regular grid then $\mu_n$ converges to the uniform measure and the lower bound is given by
$\varepsilon_n \gg n^{-\frac{1}{d}}$. \vspace{0.8\baselineskip}
\end{remark}
The double well potential $V:\mathbb{R}\to\mathbb{R}$ satisfies the following.
\begin{itemize}
\item[(B1)] $V$ is continuous.
\item[(B2)] $V^{-1}\{0\} = \{\pm 1\}$ and $V\geq0$.
\item[(B3)] There exist $\tau>0$ and $R_V>1$ such that $V(s)\geq \tau|s|$ for all $|s|\geq R_V$.
\item[(B4)] $V$ is Lipschitz continuous on $[-1,1]$.
\end{itemize}
The assumptions on $V$ imply that in the limit there are only two phases $\pm 1$.
Assumption (B3) is used to establish compactness; in particular, it is used to show that minimisers can be bounded in $L^\infty$ by~1. The prototypical example of a function $V:\mathbb{R}\to\mathbb{R}$ satisfying (B1-4) is given by $V(s):=(s^2-1)^2$.
Recall that the graph weights are defined by $W_{ij} = \eta_{\varepsilon_n}(x_i-x_j)$.
We assume that $\eta:\mathbb{R}^d\to[0,\infty)$ is a measurable function satisfying the following.
\begin{itemize}
\item[(C1)] $\eta\geq0$, $\eta(0)>0$ and $\eta$ is continuous at $x=0$.
\item[(C2)] $\eta$ is an even function, i.e. $\eta(-x) = \eta(x)$.
\item[(C3)] $\eta$ has support in $B(0,R_\eta)$, for some $R_\eta>0$.
\item[(C4)] For all $\delta>0$ there exist $c_\delta,\alpha_\delta>0$ such that if $|x-z|\leq \delta$ then $\eta(x) \geq c_\delta \eta(\alpha_\delta z)$; furthermore, $c_\delta\to 1$ and $\alpha_\delta\to 1$ as $\delta\to 0$.
\end{itemize}
\begin{remark}\label{rem:eta}
Note that (C3) and (C4) imply that $\|\eta\|_{L^\infty}<\infty$ and, in particular, $\int_{\mathbb{R}^d} \eta(x) |x| \, \mathrm{d} x < \infty$.
Indeed, given $\delta>0$, it is possible to cover $B(0,R_\eta)$ with a finite family
$\widetilde{B}_\delta(x_1),\dots,\widetilde{B}_\delta(x_r)$ of sets of the form
\[
\widetilde{B}_\delta(x_i):=\{\, \alpha_\delta z \,:\, |z-x_i|<\delta \,\}\,.
\]
By (C4), $\eta(\alpha_\delta z)\leq \eta(x_i)/c_\delta$ whenever $|z-x_i|<\delta$, so $\|\eta\|_{L^\infty}\leq \max_i \eta(x_i)/c_\delta<\infty$, and the integrability of $\eta(x)|x|$ then follows from the compact support given by (C3).
\end{remark}
Assumption~(C2) is justified by the fact that $\eta$ plays the role of an interaction potential.
Finally, Assumption~(C4) is a version of continuity of $\eta$ we need in order to perform our technical computations.
We note that (C4) is general enough to include $\eta = \chi_{A}$, where $A\subset \mathbb{R}^d$ is open, bounded, convex and $0\in A$; see~\cite[Proposition 2.2]{thorpe17AAA}.
The main result of the paper is the following theorem.
\begin{theorem}
\label{thm:MainRes:Compact&Gamma}
Let $p\geq 1$ and assume (A1-2), (B1-4) and (C1-4) are in force.
Then, the following holds:
\begin{itemize}
\item (compactness) let $u_n\in L^1(X_n)$ satisfy $\sup_{n\in \mathbb{N}} \mathcal{G}_n^{(p)}(u_n) < \infty$; then $\{u_n\}_{n=1}^\infty$ is relatively compact in $TL^1$ and each cluster point $u$ satisfies $\mathcal{G}_\infty^{(p)}(u)<\infty$;
\item ($\Gamma$-convergence) $\Glim_{n\to \infty}(TL^1) \mathcal{G}_n^{(p)} = \mathcal{G}_\infty^{(p)}$.
\end{itemize}
\end{theorem}
The proof of compactness is in Section~\ref{subsec:ConvGraph:Compact} and the $\Gamma$-convergence is proved in Sections~\ref{subsec:ConvGraph:Liminf} and~\ref{subsec:ConvGraph:Limsup}.
Since the proof of Theorem \ref{thm:MainRes:Compact&Gamma} is quite long, we briefly sketch here the main idea behind the $\Gamma$-convergence result.
We broadly follow the method of~\cite{garciatrillos16}, where the authors considered the continuum limit of the total variation on point clouds.
We will show the convergence of the discrete nonlocal functional $\mathcal{G}^{(p)}_n$ to the continuum local one $\mathcal{G}_\infty^{(p)}$ via an intermediate nonlocal continuum functional $\mathcal{F}^{(p)}_{\varepsilon_n}$ (defined in~\eqref{eq:ConvNLContinuum:Feps}).
In particular, we will prove that:
\begin{itemize}
\item[(i)] the functionals $\mathcal{F}^{(p)}_{\varepsilon_n}$ $\Gamma$-converge in $L^1(X)$ to $\mathcal{G}_\infty^{(p)}$, see Section~\ref{sec:ConvNLContinuum}; here we implement a strategy similar to that of \cite{alberti98}, in which the authors considered the functional $\mathcal{F}^{(p)}_{\varepsilon_n}$ with $\rho\equiv1$ and $p=2$,
\item[(ii)] it is possible to bound $\mathcal{G}^{(p)}_n$ from below by $\mathcal{F}^{(p)}_{\varepsilon'_n}$ (see \eqref{eq:ineq2}), where $\lim_{n\to \infty}\frac{\varepsilon^\prime_n}{\varepsilon_n}=1$, from which the liminf inequality follows,
\item[(iii)] if $u\in BV(X;\{\pm 1\})$ we use the same recovery sequence $u_{\varepsilon}$ as in~\cite{alberti98} to show $\limsup_{\varepsilon\to 0}\mathcal{F}_{\varepsilon}^{(p)}(u_\varepsilon) \leq \mathcal{G}_\infty^{(p)}(u)$. We then obtain an upper bound for $\mathcal{G}^{(p)}_n(u_n)$ in terms of $\mathcal{F}^{(p)}_{\varepsilon'_n}(u_{\varepsilon'_n})$, where $u_n:X_n\to\mathbb{R}$ is defined at each $x_i\in X_n$ as a suitable average of $u_{\varepsilon'_n}$ around the point $x_i$ and $\lim_{n\to\infty}\frac{\varepsilon'_n}{\varepsilon_n}=1$. This gives the limsup inequality.
\end{itemize}
Similarly, the compactness property follows by comparing $\mathcal{G}_n^{(p)}$ with the intermediate functional $\mathcal{F}_{\varepsilon_n}^{(p)}$.
As an application of Theorem \ref{thm:MainRes:Compact&Gamma}, we consider the functional $\mathcal{G}^{(p)}_n$ with a data fidelity term.
\begin{mydef}\label{def:fidelityterm}
Let $k_n:X_n\times \mathbb{R} \to\mathbb{R}$ and $k_\infty:X\times \mathbb{R}\to\mathbb{R}$.
Define the functionals $\mathcal{K}_n: TL^1(X_n)\to\mathbb{R}$ and $\mathcal{K}_\infty: TL^1(X)\to\mathbb{R}$ by
\[
\mathcal{K}_n(u,\nu) = \left\{ \begin{array}{ll} \frac{1}{n} \sum_{i=1}^n k_n(x_i,u(x_i)) & \text{if } \nu=\mu_n, \\ +\infty & \text{else,} \end{array} \right.
\]
and
\[
\mathcal{K}_\infty(u,\nu) = \left\{ \begin{array}{ll} \int_X k_\infty(x,u(x)) \rho(x) \, \mathrm{d} x & \text{if } \nu=\mu, \\ +\infty & \text{else} \end{array} \right.
\]
respectively. \vspace{0.5\baselineskip}
\end{mydef}
We make the following assumptions on $k_n, k_\infty$:
\begin{itemize}
\item[(D1)] $k_n\geq 0$, $k_\infty\geq0$.
\item[(D2)] There exist $\beta>0$ and $q\geq 1$ such that $k_n(x,u) \leq \beta(1+|u|^q)$ for all $u\in\mathbb{R}$, all $n\in\mathbb{N}$ and all $x\in X_n$.
\item[(D3)] For almost every $x\in X$ the following holds: if $u_n\to u$ is a convergent real-valued sequence and $x_n\to x$, then
\[
\lim_{n\to \infty} k_n \left( x_n,u_n \right) = k_\infty(x,u)\,.
\]
\end{itemize}
\begin{remark}
\label{rem:Intro:MainRes:SLL}
For example, we can use this form of $\mathcal{K}_n,\mathcal{K}_\infty$ to include a data fidelity term in a specific subset of $X$.
Let $B\subset X$ be an open set with $\mathrm{Vol}(B)>0$ and $\mathrm{Vol}(\partial B) = 0$.
Let $\lambda_n\geq0$ with $\lambda_n\to\lambda$ as $n\to\infty$.
Let $y_n\in L^1(X_n)$ and $y_\infty\in L^1(X)$ with $\sup_{n\in \mathbb{N}}\|y_n\|_{L^\infty}<\infty$ and such that $y_n(x_{i_n})\to y_\infty(x)$ for almost every $x\in X$ and any sequence $x_{i_n}\to x$.
Define
\[
k_n(x,u) :=
\left\{
\begin{array}{ll}
\lambda_n |y_n(x) - u|^q & \text{ in } B\cap X_n\,, \\
0 & \text{ on } X_n\setminus B\,,
\end{array}
\right.
\]
\[
k_\infty(x,u):=
\left\{
\begin{array}{ll}
\lambda |y_\infty(x) - u|^q & \text{ in } B\,, \\
0 & \text{ on } X\setminus B\,.
\end{array}
\right.
\]
Then $k_n$ and $k_\infty$ satisfy Assumptions~(D1-3). Indeed, (D1) follows directly from the definition of the fidelity terms, while (D3) holds thanks to continuity and the fact that $\mathrm{Vol}(\partial B) = 0$. Finally, in order to prove (D2) we simply notice that
\[
k_n(x,u) = \lambda_n| y_n(x) - u|^q \leq 2^{q-1}\sup_{n\in \mathbb{N}} \lambda_n \left( \sup_{n\in \mathbb{N}}\|y_n\|^q_{L^\infty} + |u|^q \right)\leq \beta (1+|u|^q)
\]
for some $\beta>0$. \vspace{0.8\baselineskip}
\end{remark}
We now consider the minimisation problem
\[
\text{minimise} \quad \mathcal{G}_n^{(p)}(u) + \mathcal{K}_n(u) \quad \text{over } u \in L^1(X_n)\,.
\]
\begin{corollary}
\label{cor:MainRes:Constrained}
In addition to Assumptions (A1-2), (B1-2), (C1-4) and (D1-3), assume that, for the same $q\geq 1$ as in Assumption (D2), there exist $\tau,R_V>0$ such that $V(s) \geq \tau|s|^q$ for all $|s|\geq R_V$.
Then any sequence of almost minimisers of $\mathcal{G}_n^{(p)}+\mathcal{K}_n$ is relatively compact in $TL^1$ and, furthermore, any cluster point of almost minimisers is a minimiser of $\mathcal{G}_\infty^{(p)}+\mathcal{K}_\infty$ in $L^1(X)$. \vspace{0.5\baselineskip}
\end{corollary}
We prove the corollary in Section~\ref{sec:Const}. \vspace{0.5\baselineskip}
Finally, we comment on the hypothesis $\rho\geq c_1>0$.
If it is omitted, we can still get the following result:
\begin{corollary}\label{cor:DegeneratedDensity}
Let $p\geq 1$ and assume (A2), (B1-3) and (C1-4) are in force and that $\rho:X\to[0, c_2]$, for some $c_2<\infty$.
Set $X_+:=\{ x\in X \,:\, \rho(x)>0 \}$ and define the functional $\widetilde{\mathcal{G}}_\infty^{(p)}: L^1(X)\to[0,+\infty]$ as
\[
\widetilde{\mathcal{G}}_\infty^{(p)}(u):=
\left\{
\begin{array}{ll}
\displaystyle\int_{\partial^*\{u=1\}} \sigma^{(p)}(x,\nu_u(x)) \rho(x) \, \mathrm{d} \mathcal{H}^{d-1}(x) & \text{if }
u\in BV_{loc}(X_+;\{\pm 1\})\,, \\
&\\
+\infty & \text{else}\,,
\end{array}
\right.
\]
where $BV_{loc}(X_+;\{\pm 1\})$ denotes the space of functions $u\in L^1(X;\{\pm1\})$ such that $u\in BV(K;\{\pm1\})$ for any compact set $K\subset X_+$.
Then, the following holds:
\begin{itemize}
\item (compactness) for any compact set $K\subset X_+$ we have that any sequence $\{u_n\}_{n=1}^\infty\subset L^1(K;\mu_n)$ satisfying $\sup_{n\in \mathbb{N}} \mathcal{G}_n^{(p)}(u_n) < \infty$ is relatively compact in $TL^1$ and each cluster point $u$ has $\widetilde{\mathcal{G}}_\infty^{(p)}(u)<\infty$;
\item ($\Gamma$-convergence) $\Glim_{n\to \infty}(TL^1) \mathcal{G}_n^{(p)} = \widetilde{\mathcal{G}}_\infty^{(p)}$.
\end{itemize}
\end{corollary}
We omit a rigorous proof of the corollary and just sketch the argument.
For a fixed compact set $K\subset X_+$, the continuity and positivity of $\rho$ on $X_+$ imply that $\min_{K}\rho>0$.
Thus the result of Corollary \ref{cor:DegeneratedDensity} follows by applying Theorem \ref{thm:MainRes:Compact&Gamma} on $K$.
\subsection{Related Works \label{subsec:Intro:LitRev}}
The functional $\mathcal{G}^{(p)}_n$ in the case $p=1$ has been considered by the second author and Theil in \cite{thorpe17AAA}, where a similar $\Gamma$-convergence result was proved. The difference is that, in the case $p=1$, the limit energy density $\sigma^{(1)}$ can be given explicitly, via an integral.
In \cite{vangennip12a} van Gennip and Bertozzi studied the Ginzburg-Landau functional on 4-regular graphs for $d=2$ and $p=2$, proving limits as $\varepsilon\to 0$ and $n\to \infty$ (both simultaneously and independently).
The $TL^p$ topology, as introduced by Garc\'{i}a Trillos and Slep\v{c}ev~\cite{garciatrillos16}, provides a notion of convergence upon which the $\Gamma$-convergence framework can be applied.
This method has now been applied in many works, see, for instance, \cite{garciatrillos16,thorpe17AAA,garciatrillos15aAAA,dunlop18,davis16AAA,garciatrillos15c,garciatrillos16bAAA,garciatrillos16aAAA,slepcev17}.
Further studies on this topology can be found in
\cite{garciatrillos16,garciatrillos15aAAA,thorpe17bAAA,thorpe17cAAA}.
The literature on phase transition problems is quite extensive.
Here we just recall some of the main results, starting from the pioneering works of Modica and Mortola \cite{modica77} and of Modica~\cite{modica87} (see also Sternberg \cite{sternberg88}), where the scalar isotropic case was studied.
The vectorial case has been considered by Kohn and Sternberg in \cite{kohn89}, Fonseca and Tartar in \cite{fonseca89} and Baldo \cite{baldo}.
A study of the anisotropic case has been carried out by Bouchitt\'{e}~\cite{bouchitte90} and Owen~\cite{owen91} in the scalar case, and by Barroso and Fonseca~\cite{barroso94} and Fonseca and Popovici~\cite{fonpop} in the vectorial case.
Nonlocal approximations of local functionals of the perimeter type go back to the work \cite{alberti98} of Alberti and Bellettini (see also \cite{alberti98a}). Several variants and extensions have been considered since then (see, for instance, Savin and Valdinoci \cite{savvaldi} and Esedo\={g}lu and Otto \cite{eseotto}).
In particular, nonlocal functionals have been used by Bourgain, Brezis and Mironescu in \cite{brebourmir} to characterize Sobolev spaces (see also the work \cite{ponce04} of Ponce).
Approximations of (anisotropic) perimeter functionals via energies defined in the discrete setting have been carried out by Braides and Yip in \cite{brayip} and by Chambolle, Giacomini and Lussardi in \cite{chagialus}.
\subsection{The Choice of \texorpdfstring{$p$}{p}} \label{subsec:Intro:pEx}
By allowing for any $p\geq 1$ in the definition of our functionals $\mathcal{G}_n^{(p)}$ (rather than the usual choice $p=2$) we allow for greater flexibility.
In particular the choice of $p$ has an important consequence regarding the balance between minimising the length of the perimeter and avoiding regions of high density.
As with Euclidean norms, the Dirichlet energy $\mathcal{E}_n^{(p)}$, defined by
\[ \mathcal{E}_n^{(p)}(u) = \left[ \sum_{i,j=1}^n W_{ij}^p |u(x_i)-u(x_j)|^p\right]^{\frac{1}{p}}, \]
converges as $p\to \infty$ to $\mathcal{E}_n^{(\infty)}$, defined by
\[ \mathcal{E}_n^{(\infty)}(u) = \max_{i,j=1,\dots,n} W_{ij} |u(x_i)-u(x_j)|. \]
In fact the convergence also holds in a variational sense~\cite{egger90,kyng15}.
It is therefore no surprise that the same holds in the Ginzburg-Landau setting (where the normalisation has been adjusted so that all terms are of order 1 with respect to $p$).
\begin{proposition}
Fix $n\in \mathbb{N}$, assume $V:\mathbb{R}\to [0,\infty)$ is continuous, and let $W_{ij}\geq 0$ for all $i,j=1,\dots, n$.
Let
\begin{align*}
\tilde{\mathcal{G}}_n^{(p)}(u) & = \left[\frac{1}{\varepsilon_n n^2} \sum_{i,j=1}^n W_{ij}^p |u(x_i)-u(x_j)|^p + \frac{1}{\varepsilon_n^p n} \sum_{i=1}^n V^p(u(x_i)) \right]^{\frac{1}{p}} \\
\mathcal{G}_n^{(\infty)}(u) & = \max\left\{ \max_{i,j=1,\dots,n} W_{ij} |u(x_i)-u(x_j)|, \frac{1}{\varepsilon_n} \max_{i=1,\dots,n} V(u(x_i)) \right\}.
\end{align*}
Then,
\[ \Glim_{p\to \infty} \tilde{\mathcal{G}}_n^{(p)} = \mathcal{G}_n^{(\infty)}. \]
\end{proposition}
\begin{proof}
The proof is easy since the functionals are on discrete domains.
In particular, for the liminf inequality we assume $u_p \to u$ in $L^0(X_n)$; then,
\begin{align*}
\tilde{\mathcal{G}}_n^{(p)}(u_p) & \geq \left[ \frac{1}{\varepsilon_n n^2} W_{rt}^p |u_p(x_r) - u_p(x_t)|^p + \frac{1}{\varepsilon_n^p n} V^p(u_p(x_\ell)) \right]^{\frac{1}{p}} \\
& \geq \max\left\{ \frac{1}{\varepsilon_n^{\frac{1}{p}} n^{\frac{2}{p}}} W_{rt} |u_p(x_r) - u_p(x_t)|, \frac{1}{\varepsilon_n n^{\frac{1}{p}}} V(u_p(x_\ell)) \right\} \\
& \to \max \left\{ W_{rt} |u(x_r) - u(x_t)|, \frac{1}{\varepsilon_n} V(u(x_\ell)) \right\}
\end{align*}
for any $r,t,\ell\in \{1,\dots, n\}$.
Choosing $(r,t) = \argmax_{i,j} W_{ij}|u(x_i)-u(x_j)|$ and $\ell = \argmax_i V(u(x_i))$ implies
\[ \liminf_{p\to \infty}\tilde{\mathcal{G}}_n^{(p)}(u_p) \geq \mathcal{G}_n^{(\infty)}(u). \]
For the recovery sequence we take the constant sequence $u_p=u$; then,
\[ \tilde{\mathcal{G}}_n^{(p)}(u_p) \leq 2^{\frac{1}{p}}\max\left\{ \frac{1}{\varepsilon_n^{\frac{1}{p}}} \max_{i,j=1,\dots, n} W_{ij} |u(x_i) - u(x_j)|, \frac{1}{\varepsilon_n} \max_{i=1,\dots,n} V(u(x_i)) \right\} \to \mathcal{G}_n^{(\infty)}(u) \]
as required.
\end{proof}
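The pointwise convergence underlying the proposition is easy to observe numerically; the following Python sketch (with arbitrary symmetric weights and $V(s)=(s^2-1)^2$; all names are ours) compares $\tilde{\mathcal{G}}_n^{(p)}$ with $\mathcal{G}_n^{(\infty)}$ for increasing $p$.

```python
import numpy as np

def G_tilde(u, W, eps, p):
    """The renormalised functional from the proposition, with V(s)=(s^2-1)^2."""
    n = len(u)
    V = (u**2 - 1.0)**2
    interaction = (W**p * np.abs(u[:, None] - u[None, :])**p).sum() / (eps * n**2)
    return (interaction + (V**p).sum() / (eps**p * n))**(1.0 / p)

def G_inf(u, W, eps):
    """The limiting functional G_n^{(infty)}."""
    V = (u**2 - 1.0)**2
    return max((W * np.abs(u[:, None] - u[None, :])).max(), V.max() / eps)

rng = np.random.default_rng(2)
n, eps = 30, 0.5
W = rng.random((n, n)); W = (W + W.T) / 2        # symmetric nonnegative weights
u = rng.uniform(-1.0, 1.0, n)                    # a fixed soft labelling
for p in [1, 2, 10, 100, 400]:
    print(p, G_tilde(u, W, eps, p) / G_inf(u, W, eps))   # ratio tends to 1
```

The one-sided bounds from the proof give the ratio up to factors of the form $(\varepsilon_n n^2)^{\pm 1/p}\to 1$, which is the decay visible in the printed values.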
There are several ways we could have renormalised $\mathcal{G}_n^{(p)}$ to obtain a well defined limit as $p\to \infty$.
We chose a normalisation $\tilde{\mathcal{G}}_n^{(p)}$ that closely resembles $\mathcal{G}_n^{(p)}$ and such that the limit $\mathcal{G}_n^{(\infty)}$ exhibits a phase transition as $n\to \infty$.
In particular, the limit functional $\mathcal{G}_n^{(\infty)}$ is chosen so that minimisers $u_n$ have the property that $u_n(x)$ approaches $\pm 1$ for almost every $x$, and $\max_{i,j} W_{ij}|u_n(x_i)-u_n(x_j)| = O(1)$.
The important property to note is that the $p=\infty$ limit depends only on the data through its support.
In particular, if one considers the semi-supervised learning problem (similar to Remark~\ref{rem:Intro:MainRes:SLL} with $\lambda_n=\infty$):
\[ \text{minimise } \tilde{\mathcal{G}}_n^{(p)}(u) \text{ subject to } u(x_i) = y_i \text{ for } x_i \in X^\prime \]
then the density of the data in $X\setminus X^\prime$ is irrelevant for $p=\infty$.
To illustrate this we consider a simple example.
We consider a density $\rho$ as in Figure~\ref{fig:Intro:p:ex}; in particular, $\rho(x) \propto \phi(x_1)\lfloor_X$, where $\phi$ is the density of a normal random variable with mean $0$ and standard deviation $0.25$, and $X$ is the bean-shaped domain shown.
The regions of labelled data are chosen to be the balls $B((-0.3,0),0.08)$ and $B((0.3,0),0.08)$, with the first ball labelled $-1$ and the second ball labelled $+1$.
The distribution is chosen so that the density is largest where the bean is narrowest.
The optimal partitioning, given by minimising $\mathcal{G}_n^{(p)}$, is therefore a trade-off between minimising the length of the perimeter of the partitioning and avoiding high density regions.
We plot the optimal partitioning for two choices of $p$, $p=2$ and $p=100$, using the same data set.
We see that for $p=2$ the partitioning avoids the region of high density at the cost of increasing the length of the boundary.
On the other hand, when $p=100$ the partitioning is much closer to the centre of the bean where the length of the partitioning is minimized.
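Minimisers such as those shown in Figure~\ref{fig:Intro:p:ex} can be computed by gradient descent on $\mathcal{G}_n^{(p)}$ (the discrete Allen-Cahn flow); the following Python sketch, with an indicator kernel, $V(s)=(s^2-1)^2$, and step sizes of our own untuned choosing, illustrates the idea.

```python
import numpy as np

def allen_cahn_descent(x, u0, eps, p=2, steps=3000, dt=0.01):
    """Explicit gradient descent on G_n^{(p)} with eta = indicator of the
    unit ball and V(s) = (s^2-1)^2; a minimal sketch of the gradient flow
    used to compute the minimisers in the figures (parameters are ours)."""
    n, d = x.shape
    dist = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2)
    W = (dist <= eps).astype(float) / eps**d
    u = u0.copy()
    for _ in range(steps):
        diff = u[:, None] - u[None, :]
        # d/du_k of the interaction term (using the symmetry of W)
        grad_int = 2 * p * (W * np.abs(diff)**(p - 1) * np.sign(diff)).sum(axis=1) / (eps * n**2)
        grad_pot = 4 * u * (u**2 - 1.0) / (eps * n)   # V'(u) / (eps n)
        u -= dt * (grad_int + grad_pot)
    return u

rng = np.random.default_rng(4)
x = rng.random((200, 2))
u0 = 0.5 * np.where(x[:, 0] < 0.5, -1.0, 1.0)         # soft initial labelling
u_min = allen_cahn_descent(x, u0, eps=0.15)
# the flow hardens the labelling: values are driven towards the wells +-1
```

The potential term drives each label towards $\pm1$ while the interaction term smooths the labelling, so after the descent both phases are sharply present.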
\begin{figure}
\centering
\setlength\figureheight{0.25\textwidth}
\setlength\figurewidth{0.32\textwidth}
\begin{subfigure}[t]{0.32\textwidth}
\scriptsize
\input{BeanDensityMS.tikz}
\caption{The density of the data generating distribution $\rho$.}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\scriptsize
\input{p2Cut_n1000_N10_s1_sd0p25.tikz}
\caption{The minimiser of $\mathcal{G}_n^{(2)}$.}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\scriptsize
\input{p100Cut_n1000_N10_s1_sd0p25.tikz}
\caption{The minimiser of $\mathcal{G}_n^{(100)}$.}
\end{subfigure}
\caption{How the choice of $p$ affects the trade-off between avoiding high density regions and minimising the length of the classification boundary.
The experiment setup is described in Section~\ref{subsec:Intro:pEx} and we computed minimisers using a gradient flow (known as the Allen-Cahn equation).
\label{fig:Intro:p:ex}
}
\end{figure}
\subsection{Application to Image Segmentation} \label{subsec:Intro:AnisoEx}
Let us consider segmentation in images.
Let $x_i\in [0,1]^2$ denote the location of pixels (in particular $X_n = \{x_i\}_{i=1}^n$ form a regular grid with distance $\frac{1}{\sqrt{n}}$ between neighbouring pixels).
Let $y_i\in \mathbb{R}^3$ be the RGB values of a given image at the pixel located at $x_i$.
We assume $\mathcal{I}\subset \{1,\dots,n\}$ indexes the a-priori labelled pixels and $f:\{x_i\}_{i\in\mathcal{I}} \to \mathbb{R}$ are the given labels.
We wish to label the remaining pixels $\{x_i\}_{i\not\in \mathcal{I}}$.
Define
\[ W_{ij} = \frac{1}{\varepsilon^2} \Phi(|y_{i} - y_{j}|) \chi_{\{|x_{i} - x_{j}|\leq\varepsilon\}} \]
where $\Phi:\mathbb{R}\to\mathbb{R}$ is a decreasing function and $\varepsilon$ determines the number of neighbouring pixels we connect, e.g. $\varepsilon = \frac{1}{\sqrt{n}}$ connects the four neighbouring pixels, $\varepsilon = \frac{\sqrt{2}}{\sqrt{n}}$ connects the eight (including diagonals) neighbouring pixels, etc.
Note that we are thinking of this problem in 2D where the kernel $\eta$ depends not just on the spatial difference but also on the values of pixels, i.e. $\eta(x_i-x_j) = \eta(x_i-x_j;y_i,y_j)$.
This falls slightly outside of our theoretical framework as the function $\eta$ now depends on more than just the difference between $x_i$ and $x_j$.
The segmentation is defined by minimising
\[ \mathcal{G}^{(p)}_n(u) + \mathcal{K}_n(u) = \frac{\lambda}{2} \sum_{i\in \mathcal{I}} |f(x_{i})- u(x_{i})|^2 + \frac{1}{\varepsilon n^2} \sum_{i,j\in \{1,\dots,n\}} W_{ij} |u(x_{i}) - u(x_{j})|^p + \frac{1}{\varepsilon n} \sum_{i=1}^n V(u(x_{i})). \]
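A minimal sketch of the resulting graph construction (illustrative only; the function and variable names are ours) is the following, with $\Phi(t)=\exp(-t^2/\tau)$ as used in the experiment of this section.

```python
import numpy as np

def segmentation_weights(pixels, rgb, eps, tau=5e-4):
    """W_ij = Phi(|y_i - y_j|) chi_{|x_i - x_j| <= eps} / eps^2 with
    Phi(t) = exp(-t^2 / tau): spatially local edges, strongly down-weighted
    across colour discontinuities."""
    dx = np.linalg.norm(pixels[:, None, :] - pixels[None, :, :], axis=2)
    dy = np.linalg.norm(rgb[:, None, :] - rgb[None, :, :], axis=2)
    return np.exp(-dy**2 / tau) * (dx <= eps) / eps**2

# a tiny 4x4 "image" on a regular grid in [0,1]^2: left half dark, right bright
m = 4
ii, jj = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
pixels = np.stack([ii.ravel(), jj.ravel()], axis=1) / m
rgb = np.where(pixels[:, [1]] < 0.5, 0.1, 0.9) * np.ones((1, 3))
W = segmentation_weights(pixels, rgb, eps=1.0 / m)
# neighbouring pixels of the same colour keep weight 1/eps^2, while edges
# crossing the colour boundary are almost completely suppressed
```

With $\varepsilon = 1/\sqrt{n}$ (here the pixel spacing), only the four nearest neighbours of each pixel are connected, as described above.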
This setup is similar to that of Bertozzi and Flenner~\cite{bertozzi12} and of
Calatroni, van Gennip, Sch\"{o}nlieb, Rowland and Flenner~\cite{calatroni17}; however, they embed the problem into a much higher dimensional space.
In particular, in~\cite{bertozzi12,calatroni17} the vertex corresponding to pixel $i$ is the vector consisting of the RGB values of the $M$ nearest neighbours, so that the graph is embedded in $\mathbb{R}^{3M}$ compared to the ambient dimension in our setup which is $\mathbb{R}^2$.
We refer to the methods in~\cite{bertozzi12,calatroni17} as segmentation in colour space and to our method as segmentation in pixel space.
It is not within the scope of this paper to comprehensively compare the differences in approach (in fact we note that the theoretical results in this paper apply to~\cite{bertozzi12,calatroni17}).
However, we point out that the method proposed here has the advantage that the parameter $\varepsilon$ directly controls the number of pixels connected, which can make parameter selection easier.
Choosing the parameter $\varepsilon$ in the colour space segmentation method is not as clear as one needs to compute the connectivity radius of the graph.
On the other hand, the pixel space segmentation method proposed here is spatially local which can make it difficult for similar colours that are spatially separated to be placed in the same class.
In the colour space segmentation method similar colours are close on the graph regardless of the pixel locations.
In Figure~\ref{fig:Intro:AnisoEx:ex} we plot an example of an image segmented into background and foreground.
The input is the image in Figure~\ref{fig:Intro:AnisoEx:ex}(a), where regions have been labelled either background (marked green) or foreground (marked red).
We construct a graph as described above with
\[ \Phi(t) = \exp\left( -\frac{t^2}{\tau}\right), \qquad \tau = 5\times 10^{-4} \]
and choose $\varepsilon = \frac{1}{\sqrt{n}}$ and $p=2$.
Using a gradient flow to minimise $\mathcal{G}_n^{(p)}+\mathcal{K}_n$ we produce the output given in Figure~\ref{fig:Intro:AnisoEx:ex}(b).
Since $\varepsilon$ determines the number of connections of each pixel it was easy to select parameters with a desirable output.
\begin{figure}
\centering
\begin{subfigure}[t]{0.48\textwidth}
\includegraphics[width=0.9\textwidth]{GLSegmentationInput.png}
\caption{Input image with a-priori given labels marked.}
\end{subfigure}
~
\begin{subfigure}[t]{0.48\textwidth}
\includegraphics[width=0.9\textwidth]{GLSegmentationOutput.png}
\caption{The minimiser of $\mathcal{G}_n^{(2)}+\mathcal{K}_n$.}
\end{subfigure}
\caption{An image segmentation example with experimental setup as described in Section~\ref{subsec:Intro:AnisoEx}.
The image is from the Microsoft image database and also appeared in~\cite{bertozzi12}.
\label{fig:Intro:AnisoEx:ex}
}
\end{figure}
\section{Background \label{sec:Back}}
\subsection{Notation}
In the following $\chi_E$ will denote the characteristic function of a set $E\subset\mathbb{R}^d$, while
$\mathrm{Vol}(E)=\mathcal{L}^d(E)$ will denote its $d$-dimensional Lebesgue measure and $\mathcal{H}^{d-1}(E)$ its $(d-1)$-Hausdorff measure.
Moreover, with $B(x,r)$ we will denote the ball centered at $x\in\mathbb{R}^d$ with radius $r>0$
and we set $\mathbb{S}^{d-1}:=\partial B(0,1)$.
The identity map will be denoted by $\mathrm{Id}$.
Given an open set $X\subset\mathbb{R}^d$, we define the space
\[
\mathcal{P}(X):=\{\, \text{Radon measures } \mu \text{ on } X \text{ with } \mu(X)=1 \,\}\,.
\]
Given a set of data points $\{x_i\}_{i=1}^n$ we define the empirical measure as follows.
\begin{mydef}\label{def:empmeas}
For all $n\in\mathbb{N}$ let $X_n:=\{x_i\}_{i=1}^n$ be a set of $n$ random variables. We define the \emph{empirical measure} $\mu_n$ as
\[
\mu_n:=\frac{1}{n}\sum_{i=1}^n \delta_{x_i}\,,
\]
where $\delta_x$ denotes the Dirac delta centered at $x$.
\vspace{0.5\baselineskip}
\end{mydef}
We state our results in terms of a general sequence of empirical measures $\mu_n$ that converge weak$^*$ to some $\mu\in\mathcal{P}(X)$.
An important special case is when $x_i$ are independent and identically distributed (which we abbreviate to iid) from $\mu$.
\begin{remark}\label{rem:weakconv}
When $x_i\stackrel{\mathrm{iid}}{\sim} \mu$ then, with probability one, $\mu_n$ converges weakly$^*$ to $\mu$ in the sense of measures, see for example~\cite[Theorem 11.4.1]{dudley02} (and we write $\mu_n\weakstarto\mu$), \emph{i.e.}
\[ \int_X \varphi \,\mathrm{d} \mu_n\to \int_X \varphi\,\mathrm{d}\mu \]
as $n\to\infty$, for all $\varphi\in C_c(X)$.
\end{remark}
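The convergence in the remark above is easy to observe numerically. The following sketch is purely illustrative (the choices $\mu=\mathrm{Unif}(0,1)$ and $\varphi(x)=x^2$ are assumptions of the sketch, not part of the paper's setting); it checks that $\int_X \varphi\,\mathrm{d}\mu_n\to\int_X\varphi\,\mathrm{d}\mu$.

```python
import random

# Illustrative check of weak* convergence of empirical measures:
# mu = Unif(0,1) (an assumption of this sketch), phi(x) = x^2.
random.seed(0)

def empirical_integral(phi, samples):
    # int_X phi d mu_n with mu_n = (1/n) sum_i delta_{x_i}
    return sum(phi(x) for x in samples) / len(samples)

phi = lambda x: x * x
exact = 1.0 / 3.0  # int_0^1 x^2 dx

for n in (10, 1000, 100000):
    xs = [random.random() for _ in range(n)]
    print(n, abs(empirical_integral(phi, xs) - exact))
```

With probability one the discrepancy tends to $0$ as $n\to\infty$, in accordance with the remark; the decay is the usual Monte Carlo rate $O(n^{-1/2})$.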
We write $L^p(X,\mu;Y)$ for the space of functions from $X$ to $Y$ that are $p$-integrable with respect to $\mu$.
We will often suppress the $Y$ dependence and just write $L^p(X,\mu)$.
Moreover, if $\mu=\mathcal{L}^d$ then we will often write $L^p(X) = L^p(X,\mu)$.
If $\mu=\mu_n$ is the empirical measure we also write $L^p(X_n) = L^p(X,\mu_n)$.
\subsection{Transportation theory \label{subsec:Prelim:Trans}}
In this section we collect the material needed in order to explain how to compare functions defined in different spaces, namely a function $w\in L^1(X,\mu)$ and a function $u\in L^1(X_n,\mu_n)$, where $X\subset\mathbb{R}^d$ is an open set and $X_n\subset X$ is a finite set of points.
This is fundamental in stating our $\Gamma$-convergence result (Theorem \ref{thm:MainRes:Compact&Gamma}).
The $TL^p$ space was introduced in~\cite{garciatrillos16}; the idea is to compare $w$ with a piecewise constant extension of $u$ in $L^p$.
In particular, we take a map $T:X\to X_n$ and we consider the function $v:X\to\mathbb{R}$ defined as $v:=u\circ T$.
For this to define a metric one needs to impose conditions on $T$; the natural conditions are that $T$ ``matches the measure $\mu$ with $\mu_n$'' and is \emph{optimal} in the sense that the matching moves as little mass as possible (see Theorem \ref{thm:optrate}).
This will be done by using the optimal transport distance that we recall now (see also~\cite{villani} for background on optimal transport and \cite{garciatrillos16,thorpe17cAAA} for a further description of the $TL^p$ space).
\begin{mydef}
\label{def:Back:Trans:Wass}
Let $X\subset\mathbb{R}^d$ be an open set and let $\mu,\lambda\in\mathcal{P}(X)$.
We define the set of couplings $\Gamma(\mu,\lambda)$ between $\mu$ and $\lambda$ as
\[
\Gamma(\mu,\lambda):=\left\{\, \pi\in\mathcal{P}(X\times X) \,:\, \pi(A\times X)=\mu(A),\, \pi(X\times A)=\lambda(A),\, \text{ for all measurable } A\subset X \,\right\}\,.
\]
For $p\in[1,+\infty]$, we define the $p$-Wasserstein distance between $\mu$ and $\lambda$ as follows:
\begin{itemize}
\item when $1\leq p<\infty$,
\[
\mathrm{dist}_p(\mu,\lambda):=\inf\left\{\, \left(\int_{X\times X} |x-y|^p \, \mathrm{d}\pi(x,y)\,\right)^{\frac{1}{p}} \,:\, \pi\in\Gamma(\mu,\lambda) \,\right\}\,,
\]
\item when $p=\infty$,
\[
\mathrm{dist}_\infty(\mu,\lambda):=\inf\left\{\, \mathrm{esssup}_\pi \left\{\, |x-y| \,:\, (x,y)\in X\times X \,\right\}\, : \, \pi\in\Gamma(\mu,\lambda) \,\right\}\,,
\]
where $\mathrm{esssup}_\pi$ denotes the essential supremum with respect to the measure $\pi$.
\end{itemize} \vspace*{0\baselineskip}
\end{mydef}
\begin{remark}\label{rem:ot}
The infimum problems in the above definition are known as Kantorovich optimal transport problems, and the distance is commonly called the $p^\text{th}$ Wasserstein distance or, sometimes, the earth mover's distance.
It is possible to see (see \cite{villani}) that the infimum is actually achieved.
Moreover, convergence in the metric $\mathrm{dist}_p$ is equivalent to weak$^*$ convergence in $\mathcal{P}(X)$ together with convergence of $p^{\text{th}}$ moments.
\vspace{0.8\baselineskip}
\end{remark}
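In one dimension the optimal coupling between two empirical measures with the same number of atoms is given by monotone rearrangement, so $\mathrm{dist}_p$ can be computed by sorting. A minimal sketch (the uniform samples are an illustrative assumption):

```python
import random

# dist_p between two empirical measures with n atoms each, in d = 1:
# the optimal coupling matches order statistics (monotone rearrangement).
def wasserstein_p(xs, ys, p):
    xs, ys = sorted(xs), sorted(ys)
    return (sum(abs(x - y) ** p for x, y in zip(xs, ys)) / len(xs)) ** (1.0 / p)

random.seed(1)
n = 5000
xs = [random.random() for _ in range(n)]  # samples from Unif(0,1)
ys = [random.random() for _ in range(n)]  # an independent copy
print(wasserstein_p(xs, ys, 2))  # small: both measures approximate Unif(0,1)
```

The same monotone-rearrangement principle underlies the Monge formulation recalled below in $d=1$; in higher dimensions the discrete Kantorovich problem has to be solved, e.g.\ by linear programming.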
We now consider the case we are interested in: take $\mu\in\mathcal{P}(X)$ with $\mu=\rho\mathcal{L}^d$ (where $\mathcal{L}^d$ is the $d$-dimensional Lebesgue measure on $\mathbb{R}^d$) and assume the density $\rho$ is such that $0<c_1\leq \rho\leq c_2<\infty$.
In this case the Kantorovich optimal transport problem is equivalent to the Monge optimal transport problem (see \cite{gangmcc}).
In particular, for $p\in[1,+\infty)$ it holds that
\[
\mathrm{dist}_p(\mu,\lambda)=\min\left\{\, \|\mathrm{Id}-T\|_{L^p(X,\mu)} \,:\, T:X\to X \text{ Borel},\,\, T_\# \mu=\lambda \,\right\}\,,
\]
where
\[
\|\mathrm{Id}-T\|^p_{L^p(X,\mu)}:=\int_X |x-T(x)|^p \rho(x)\,\mathrm{d} x
\]
and we define the \emph{push forward} measure $T_\#\mu\in\mathcal{P}(X)$ as $T_\#\mu(A):=\mu\left( T^{-1}(A) \right)$
for all $A\subset X$.
In the case $p=+\infty$ we get
\[
\mathrm{dist}_\infty(\mu,\lambda)=\inf\left\{\, \|\mathrm{Id}-T\|_{L^\infty(X,\mu)} \,:\, T:X\to X \text{ Borel},\,\, T_\# \mu=\lambda \,\right\}\,,
\]
where
\[ \|\mathrm{Id}-T\|_{L^\infty(X,\mu)}:=\esssup_{x\in X}|x-T(x)|\,, \]
where the essential supremum is taken with respect to $\mu$.
A map $T$ is called a \emph{transport map} between $\mu$ and $\lambda$ if $T_{\#}\mu=\lambda$.
Throughout the paper we will assume the empirical measures $\mu_n$ converge weakly$^*$ to $\mu$ (see Remark \ref{rem:weakconv} for iid samples), so by Remark~\ref{rem:ot} there exists a sequence of Borel maps $\{T_n\}_{n=1}^\infty$ with $T_n: X\to X_n$ and $(T_n)_\#\mu=\mu_n$ such that
\[
\lim_{n\to\infty}\|\mathrm{Id}-T_n\|^p_{L^p(X,\mu)}=0\,.
\]
Such a sequence of functions $\{T_n\}_{n=1}^\infty$ will be called \emph{stagnating}.
We are now in position to define the notion of convergence for sequences $u_n\in L^p(X_n)$ to a continuum limit $u\in L^p(X,\mu)$.
\begin{mydef} \label{def:Back:Trans:TLpConv}
Let $u_n\in L^p(X_n)$, $w\in L^p(X,\mu)$ where $X_n=\{x_i\}_{i=1}^n$ and assume that the empirical measure $\mu_n$ converges weak$^*$ to $\mu$. We say that $u_n\to w$ in $TL^p(X)$, and we write $u_n\TLpto w$, if there exists a sequence of stagnating transport maps $\{T_n\}_{n=1}^\infty$ between $\mu$ and $\mu_n$ such that
\begin{equation}\label{eq:stagn}
\| v_n-w\|_{L^p(X,\mu)}\to0\,,
\end{equation}
as $n\to\infty$, where $v_n:=u_n\circ T_n$.
\vspace{0.5\baselineskip}
\end{mydef}
\begin{remark}
One can show that if \eqref{eq:stagn} holds for one sequence of stagnating maps, then it holds for all sequences of stagnating maps \cite[Proposition 3.12]{garciatrillos16}. Moreover, since $\rho$ is bounded above and below it holds
\[
\| v_n-w\|_{L^p(X,\mu)}\to0\quad\Leftrightarrow\quad \| v_n-w\|_{L^p(X)}\to0\,.
\]
\vspace{0.8\baselineskip}
\end{remark}
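In dimension one the stagnating maps can be written down explicitly. The sketch below (assuming, for illustration only, $X=(0,1)$ and $\mu=\mathrm{Unif}(0,1)$) sends the quantile cell $(\frac{i-1}{n},\frac{i}{n}]$ to the $i$-th order statistic; this map pushes $\mu$ forward to $\mu_n$, and $v_n=u_n\circ T_n$ is the piecewise constant extension appearing in Definition~\ref{def:Back:Trans:TLpConv}.

```python
import random

# Illustrative stagnating map for X = (0,1), mu = Unif(0,1) (assumptions of
# this sketch): T_n sends the quantile cell containing x to the i-th smallest
# sample, so each sample receives mu-mass exactly 1/n, i.e. (T_n)_# mu = mu_n.
random.seed(2)
n = 2000
xs = sorted(random.random() for _ in range(n))  # order statistics x_(1..n)

def T_n(x):
    i = min(int(x * n), n - 1)  # index of the quantile cell containing x
    return xs[i]

# ||Id - T_n||_{L^1(X,mu)} is small; estimate it on a fine grid of midpoints:
m = 20000
err_L1 = sum(abs(x - T_n(x)) for x in ((j + 0.5) / m for j in range(m))) / m
print(err_L1)
```

Any such sequence with $\|\mathrm{Id}-T_n\|_{L^p(X,\mu)}\to0$ is stagnating, and by the preceding remark the particular choice of stagnating sequence does not affect $TL^p$ convergence.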
We have introduced $TL^p$ convergence $u_n\TLpto u$ by defining transport maps $T_n:X\to X_n$ which ``optimally partition'' the space $X$, after which we define a piecewise constant extension of $u_n$ to the whole of $X$.
This constructive approach is how we use $TL^p$ convergence in our proofs.
However, this description hides the metric properties of $TL^p$.
We briefly mention here the metric structure which characterises the convergence given in Definition~\ref{def:Back:Trans:TLpConv}.
We define the $TL^p(X)$ space as the space of pairs $(u,\mu)$ where $\mu\in \mathcal{P}(X)$ has finite $p^\text{th}$ moment and $u\in L^p(X,\mu)$.
We define the distance $d_{TL^p}:TL^p(X)\times TL^p(X)\to [0,+\infty)$ for $p\in [1,+\infty)$ by
\begin{align*}
d_{TL^p}((u,\mu),(v,\lambda)) & := \min_{\pi\in \Gamma(\mu,\lambda)} \left(\int_{X^2} |x-y|^p + |u(x) - v(y)|^p \, \mathrm{d} \pi(x,y)\right)^{\frac{1}{p}} \\
& = \inf_{T_{\#}\mu=\lambda} \left(\int_X |x-T(x)|^p + |u(x)-v(T(x))|^p \, \mathrm{d} \mu(x)\right)^{\frac{1}{p}},
\end{align*}
or for $p=+\infty$ by
\begin{align*}
d_{TL^\infty}((u,\mu),(v,\lambda)) & := \inf_{\pi\in \Gamma(\mu,\lambda)} \left( \esssup_\pi \left\{ |x-y| + |u(x) - v(y)| \, : \, (x,y)\in X\times X \right\} \right) \\
& = \inf_{T_{\#}\mu=\lambda} \left( \esssup_\mu \left\{ |x-T(x)| + |u(x)-v(T(x))| \, : \, x\in X \right\} \right).
\end{align*}
\begin{proposition}
The distance $d_{TL^p}$ is a metric and furthermore, $d_{TL^p}((u_n,\mu_n),(u,\mu))\to 0$ if and only if $\mu_n\weakstarto \mu$ and there exists a sequence of stagnating transport maps $\{T_n\}_{n=1}^\infty$ between $\mu$ and $\mu_n$ such that $\|u_n\circ T_n-u\|_{L^p(X,\mu)}\to 0$.
\end{proposition}
The proof is given in~\cite[Remark 3.4 and Proposition 3.12]{garciatrillos16}.
In particular, convergence in the metric $d_{TL^p}$ coincides with the convergence given in Definition~\ref{def:Back:Trans:TLpConv}.
In order to be able to write the discrete functional we will need the following result.
\begin{lemma}\label{lem:writeint}
Let $\lambda\in\mathcal{P}(X)$ and let $T:X\to X$ be a Borel map. Then, for any $u\in L^1(X,\lambda)$ it holds
\[
\int_X u\,\mathrm{d} T_\#\lambda = \int_X u\circ T\,\mathrm{d}\lambda\,.
\]
\end{lemma}
\begin{proof}
Let $s:X\to\mathbb{R}$ be a simple function. Write
\[
s=\sum_{i=1}^k a_i \chi_{U_i}\,,
\]
where $a_i\in\mathbb{R}$ and the sets $U_i\subset X$ are measurable.
Then
\[
\int_X s\,\mathrm{d} T_\#\lambda =\sum_{i=1}^k a_i T_\#\lambda(U_i)=\sum_{i=1}^k a_i \lambda\left(T^{-1}(U_i)\right)=\int_X s\circ T \,\mathrm{d} \lambda\,.
\]
The result then follows directly from the definition of the integral.
\end{proof}
\begin{remark}\label{rem:writeint}
Applying the above result to the empirical measures $\mu_n=\frac{1}{n}\sum_{i=1}^n \delta_{x_i}$ and $u\in L^1(X_n)$ we get
\[
\frac{1}{n}\sum_{i=1}^n u(x_i)=\int_X v_n(x) \,\mathrm{d}\mu(x)\,,
\]
where $v_n:=u\circ T_n$ for any $T_n$ such that $(T_n)_\#\mu=\mu_n$. \vspace{0.8\baselineskip}
\end{remark}
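Remark~\ref{rem:writeint} is what lets us rewrite discrete sums as integrals against $\mu$. As a quick numerical sanity check (again with the illustrative quantile map for $X=(0,1)$, $\mu=\mathrm{Unif}(0,1)$, which are assumptions of this sketch), the two sides agree exactly:

```python
import random

# Check of the identity (1/n) sum_i u(x_i) = int_X u(T_n(x)) d mu(x) for the
# illustrative quantile map on X = (0,1), mu = Unif(0,1).
random.seed(3)
n = 500
xs = sorted(random.random() for _ in range(n))
u = lambda x: 3.0 * x + 1.0

lhs = sum(u(x) for x in xs) / n  # int_X u d mu_n

# u o T_n is constant, equal to u(x_i), on the quantile cell ((i-1)/n, i/n],
# which has mu-mass exactly 1/n, so the integral is computed cell by cell:
rhs = sum(u(x) * (1.0 / n) for x in xs)

print(abs(lhs - rhs))
```

The agreement is exact (up to floating point), not asymptotic: it is precisely Lemma~\ref{lem:writeint} applied with $\lambda=\mu$ and $T=T_n$.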
In \cite{garciatrillos15} the authors, Garc\'{i}a Trillos and Slep\v{c}ev, obtain the following rate of convergence for a sequence of stagnating maps. This is of crucial importance for applying the results of this paper to the iid setting.
\begin{theorem}\label{thm:optrate}
Let $X\subset\mathbb{R}^d$ be a bounded, connected and open set with Lipschitz boundary.
Let $\mu\in\mathcal{P}(X)$ be of the form $\mu=\rho\mathcal{L}^d$ with $0<c_1\leq\rho\leq c_2<\infty$.
Let $\{x_i\}_{i=1}^\infty$ be a sequence of independent and identically distributed random variables distributed on $X$ according to the measure $\mu$, and let $\mu_n$ be the associated empirical measure.
Then, there exists a constant $C>0$ such that, with probability one, there exists a sequence $\{T_n\}_{n=1}^\infty$ of maps $T_n:X\to X$ with $(T_n)_\#\mu=\mu_n$ and
\[
\limsup_{n\to\infty}\frac{\| T_n-\mathrm{Id}\|_{L^\infty(X)}}{\delta_n}\leq C\,,
\]
where
\[
\delta_n:=
\left\{ \begin{array}{ll}
\sqrt{\frac{\log\log n}{n}} & \text{if } d=1\,, \\
\frac{(\log n)^{\frac{3}{4}}}{\sqrt{n}} & \text{if } d=2\,, \\
\left( \frac{\log n}{n} \right)^{\frac{1}{d}} & \text{if } d\geq 3\,.
\end{array}
\right.
\]
\vspace{0.5\baselineskip}
\end{theorem}
\begin{remark}
The proof for $d=1$ is simpler and follows from the law of the iterated logarithm.
Notice that the connectedness of $X$ is essential in order to get the above result. \vspace{0.8\baselineskip}
\end{remark}
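For $d=1$ the rate in Theorem~\ref{thm:optrate} can be observed directly: with $\mu=\mathrm{Unif}(0,1)$ and the quantile map (both illustrative assumptions of this sketch), $\|T_n-\mathrm{Id}\|_{L^\infty(X)}$ is the maximal deviation of the order statistics from the uniform quantiles.

```python
import math, random

# Illustrative check of the d = 1 rate: for mu = Unif(0,1) and the quantile
# map, ||T_n - Id||_infty is the sup over each cell ((i-1)/n, i/n] of
# |x - x_(i)|, which is attained at a cell endpoint.
random.seed(4)

def sup_transport_error(n):
    xs = sorted(random.random() for _ in range(n))
    return max(max(abs(i / n - xs[i]), abs((i + 1) / n - xs[i]))
               for i in range(n))

for n in (10**3, 10**4, 10**5):
    delta_n = math.sqrt(math.log(math.log(n)) / n)  # the d = 1 rate
    print(n, sup_transport_error(n) / delta_n)
```

The printed ratios stay bounded as $n$ grows, consistent with the theorem; note that $\delta_n$ above is the $d=1$ rate only.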
By the above theorem our main result, Theorem~\ref{thm:MainRes:Compact&Gamma}, holds with probability one when $x_i\stackrel{\mathrm{iid}}{\sim} \mu$ and the graph weights are scaled by $\varepsilon_n$ with $\varepsilon_n\gg \delta_n$.
\subsection{Sets of finite perimeter}
In this section we recall the definition and basic facts about sets of finite perimeter. We refer the reader to \cite{AFP} for more details.
\begin{mydef}
Let $E\subset\mathbb{R}^d$ with $\mathrm{Vol}(E)<\infty$ and let $X\subset\mathbb{R}^d$ be an open set.
We say that $E$ has \emph{finite perimeter} in $X$ if
\[
|D\chi_E|(X):=\sup\left\{\, \int_E \mathrm{div}\varphi \,\mathrm{d} x \,:\, \varphi\in C^1_c(X;\mathbb{R}^d)\,,\, \|\varphi\|_{L^\infty}\leq1 \,\right\}<\infty\,.
\]
\vspace{0\baselineskip}
\end{mydef}
\begin{remark}\label{rem:defvar}
If $E\subset\mathbb{R}^d$ is a set of finite perimeter in $X$ it is possible to define a finite vector valued Radon measure $D\chi_E$ on $X$ such that
\[
\int_X \varphi \cdot \mathrm{d} D\chi_E=\int_E \mathrm{div}\varphi \,\mathrm{d} x
\]
for all $\varphi\in C^1_c(X;\mathbb{R}^d)$. \vspace{0.8\baselineskip}
\end{remark}
\begin{mydef}
Let $X\subset\mathbb{R}^d$ be an open set and let $u\in L^1(X;\{\pm1\})$.
We say that $u$ is of \emph{bounded variation} in $X$, and we write $u\in BV(X;\{\pm1\}),$ if $\{u=1\}:=\{ x\in X \,:\, u(x)=1\}$ has finite perimeter in $X$. \vspace{0.5\baselineskip}
\end{mydef}
\begin{mydef}\label{eq:defnormal}
Let $E\subset\mathbb{R}^d$ be a set of finite perimeter in the open set $X\subset\mathbb{R}^d$. We define $\partial^* E$, the \emph{reduced boundary} of $E$, as the set of points $x\in\mathbb{R}^d$ for which the limit
\[
\nu_E(x):=-\lim_{r\to0}\frac{D\chi_E(x+rQ)}{|D\chi_E|(x+rQ)}
\]
exists and is such that $|\nu_E(x)|=1$. Here $Q$ denotes the unit cube of $\mathbb{R}^d$ centered at the origin with sides parallel to the coordinate axes. The vector $\nu_E(x)$ is called the \emph{measure theoretic exterior normal} to $E$ at~$x$. \vspace{0.5\baselineskip}
\end{mydef}
We now recall the structure theorem for sets of finite perimeter due to De Giorgi, see~\cite[Theorem 3.59]{AFP} for a proof of the following theorem.
\begin{theorem}\label{thm:DeGiorgi}
Let $E\subset\mathbb{R}^d$ be a set with finite perimeter in the open set $X\subset\mathbb{R}^d$.
Then
\begin{itemize}
\item[(i)] for all $x\in\partial^* E$ the set $E_r:=\frac{E-x}{r}$ converges locally in $L^1(\mathbb{R}^d)$ as $r\to0$ to the
halfspace orthogonal to $\nu_E(x)$ and not containing $\nu_E(x)$,
\item[(ii)] $D\chi_E=\nu_E\,\mathcal{H}^{d-1}\res\partial^* E$,
\item[(iii)] the reduced boundary $\partial^* E$ is $\mathcal{H}^{d-1}$-rectifiable, \emph{i.e.},
there exist Lipschitz functions $f_i:\mathbb{R}^{d-1}\to\mathbb{R}^d$ such that
\[
\partial^* E=\bigcup_{i=1}^\infty f_i(K_i)\,,
\]
where each $K_i\subset\mathbb{R}^{d-1}$ is a compact set.
\end{itemize}
\vspace{0\baselineskip}
\end{theorem}
\begin{remark}\label{rem:newdefnormal}
Using the above result it is possible to prove that (see \cite{fonsecamuller93})
\[
\nu_E(x)=-\lim_{r\to0}\frac{D\chi_E(x+rQ)}{r^{d-1}}
\]
for all $x\in\partial^* E$, where $Q$ is the unit cube centred at $0$ with sides parallel to the coordinate axes.
\end{remark}
The construction of the recovery sequences in Section \ref{subsec:ConvNLContinuum:Limsup} and Section \ref{subsec:ConvGraph:Limsup} will be done for a special class of functions, that we introduce now.
\begin{mydef}\label{def:polyfun}
We say that a function $u\in L^1(X;\{\pm1\})$ is \emph{polyhedral} if $u=\chi_E-\chi_{X\setminus E}$, where $E\subset X$ is a set whose boundary is a Lipschitz manifold contained in the union of finitely many affine hyperplanes. In particular, $u\in BV(X,\{\pm1\})$. \vspace{0.5\baselineskip}
\end{mydef}
Using \cite[Theorem 3.42]{AFP} and the fact that every smooth surface can be approximated by polyhedral sets, one obtains the following density result.
\begin{theorem}\label{thm:denspoly}
Let $u\in BV(X;\{\pm1\})$. Then there exists a sequence $\{u_n\}_{n=1}^\infty\subset BV(X;\{\pm1\})$ of polyhedral functions such that $u_n\to u$ in $L^1(X)$ and $|Du_n|(X)\to|Du|(X)$. In particular $D u_n\stackrel{{w}^*}{\rightharpoonup}Du$. \vspace{0.5\baselineskip}
\end{theorem}
Finally, we recall a result due to Reshetnyak in the form we will need in this paper (for a proof of the general case see, for instance, \cite[Theorem 2.38]{AFP}).
\begin{theorem}\label{thm:rese}
Let $\{E_n\}_{n=1}^\infty$ be a sequence of sets of finite perimeter in the open set $X\subset\mathbb{R}^d$ such that $D\chi_{E_n}\stackrel{{w}^*}{\rightharpoonup} D\chi_E$ and $|D\chi_{E_n}|(X)\to|D\chi_{E}|(X)$, where $E$ is a set of finite perimeter in $X$.
Let $f:X\times \mathbb{S}^{d-1}\to[0,\infty)$ be an upper semi-continuous function. Then
\[
\limsup_{n\to\infty}\int_{\partial^* E_n\cap X} f\left(\,x,\nu_{E_n}(x)\right)\,\mathrm{d}\mathcal{H}^{d-1}(x)
\leq \int_{\partial^* E\cap X} f\left(\,x,\nu_E(x)\right)\,\mathrm{d}\mathcal{H}^{d-1}(x)\,.
\]
\end{theorem}
\begin{remark}
A set of finite perimeter can have a very wild reduced boundary. Since the limiting energy density $\sigma^{(p)}$ depends on the measure theoretic exterior normal $\nu_u$, it would be very difficult to build an explicit recovery sequence for a set $\{u=1\}$ that is only assumed to have finite perimeter.
For this reason, we consider the family of polyhedral sets, for which we provide an explicit construction, and then we use the density result Theorem \ref{thm:denspoly} together with Theorem \ref{thm:rese} to address the general case.
\end{remark}
\subsection{\texorpdfstring{$\Gamma$}{Gamma}-convergence}\label{sec:Gammaconv}
We recall the basic notions and properties of $\Gamma$-convergence (in metric spaces) we will use in the paper (for a reference, see \cite{braides02,dalmaso93}).
\begin{mydef}
Let $(A,\mathrm{d})$ be a metric space. We say that $F_n:A\to[-\infty,+\infty]$ $\Gamma$-converges to $F:A\to[-\infty,+\infty]$,
and we write $F_n\Gammato F$ or $F=\Glim_{n\to \infty}(\mathrm{d}) F_n$, if the following hold true:
\begin{itemize}
\item[(i)] for every $x\in A$ and every $x_n\to x$ we have
\[
F(x)\leq\liminf_{n\to\infty} F_n(x_n)\,;
\]
\item[(ii)] for every $x\in A$ there exists $\{x_n\}_{n=1}^\infty\subset A$ (the so-called \emph{recovery sequence}) with $x_n\to x$ such that
\[
\limsup_{n\to\infty} F_n(x_n)\leq F(x)\,.
\]
\end{itemize} \vspace{0\baselineskip}
\end{mydef}
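To fix ideas we recall a standard one-dimensional example (see, \emph{e.g.}, \cite{braides02}), which also shows that the $\Gamma$-limit need not coincide with any pointwise limit: take $A=\mathbb{R}$ and $F_n(x):=\sin(nx)$. Then $F_n\Gammato F$ with $F\equiv-1$. Indeed, $F_n\geq-1$ gives the liminf inequality for any $x_n\to x$, while choosing $x_n$ to be the point of $\frac{3\pi}{2n}+\frac{2\pi}{n}\mathbb{Z}$ closest to $x$ gives $x_n\to x$ and $F_n(x_n)=-1=F(x)$, so the recovery sequence condition holds; by contrast, $F_n(x)$ does not converge pointwise for general $x$.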
With a small abuse of notation we will write $\Glim(L^1)$ and $\Glim(TL^1)$ for $\Gamma$-convergence with respect to the $L^1$ metric $d_{L^1}:L^1(X)\times L^1(X)\to [0,\infty)$ and the $TL^1$ metric $d_{TL^1}:TL^1(X)\times TL^1(X)\to [0,\infty)$.
The notion of $\Gamma$-convergence is designed so that the following result on convergence of minimisers and minima holds (see for example~\cite{braides02,dalmaso93}).
\begin{theorem}\label{thm:convmin}
Let $(A,\mathrm{d})$ be a metric space and let $F_n\stackrel{\Gamma-(\mathrm{d})}{\longrightarrow} F$, where $F_n$ and $F$ are as in the above definition.
Let $\{\varepsilon_n\}_{n=1}^\infty$ with $\varepsilon_n\to0^+$ as $n\to\infty$ and let $x_n\in A$ be \emph{$\varepsilon_n$-minimisers} for $F_n$, that is
\begin{equation}\label{eq:infFn}
F_n(x_n)\leq \max\left\{\, \inf_A F_n + \varepsilon_n\,,\, -\frac{1}{\varepsilon_n} \,\right\}\,.
\end{equation}
Then every cluster point of $\{x_n\}_{n=1}^\infty$ is a minimiser of $F$. \vspace{0.5\baselineskip}
\end{theorem}
\begin{remark}
The condition defining an $\varepsilon_n$-minimiser takes into account the fact that the infimum of the functional can be $-\infty$. In the case $\inf_{n\in\mathbb{N}}\inf_{A}F_n>-\infty$, condition \eqref{eq:infFn} reduces, for $n$ sufficiently large, to
$F_n(x_n)\leq \inf_A F_n + \varepsilon_n$.
\vspace{0.8\baselineskip}
\end{remark}
In the context of this paper we apply Theorem~\ref{thm:convmin} in order to prove Corollary~\ref{cor:MainRes:Constrained}.
In particular, we show that $\mathcal{G}_n^{(p)}+\mathcal{K}_n$ $\Gamma$-converges to $\mathcal{G}_\infty+\mathcal{K}_\infty$ and satisfies a compactness property.
We note that in general
\[
\Glim_{n\to \infty}(d) (F_n+G_n) \neq \Glim_{n\to \infty}(d) F_n + \Glim_{n\to \infty}(d) G_n.
\]
However, with a suitable strong notion of convergence of $G_n\to G$ we can infer the additivity of $\Gamma$-limits.
\begin{proposition}
\label{prop:contconv2}
Let $(A,d)$ be a metric space and let $F_n\Gammato F$.
Assume $G_n(u_n)\to G(u)$ and $G(u)>-\infty$ for any sequence $u_n\to u$ with $\sup_{n\in \mathbb{N}} F_n(u_n)<+\infty$ and $F(u)<+\infty$. Then
\[ \Glim_{n\to \infty}(d) (F_n+G_n) = \Glim_{n\to \infty}(d) F_n + \Glim_{n\to \infty}(d) G_n. \]
\end{proposition}
The assumption in the above proposition is similar to the notion of continuous convergence, see~\cite[Definition 4.7 and Proposition 6.20]{dalmaso93}.
In our context continuous convergence is not quite the right concept; indeed, the fidelity terms $\mathcal{K}_n$ defined in Definition \ref{def:fidelityterm} do not converge continuously to $\mathcal{K}_\infty$.
However we show, in Section~\ref{sec:Const}, that $\mathcal{K}_n$ does satisfy the assumptions in Proposition~\ref{prop:contconv2}.
\section{Convergence of the Non-Local Continuum Model \label{sec:ConvNLContinuum}}
We first introduce the intermediary functional $\mathcal{F}_\varepsilon^{(p)}$ that is a non-local continuum approximation of the discrete functional $\mathcal{G}_n^{(p)}$.
\begin{mydef}
Let $p\geq1$, $\varepsilon>0$, $s_\varepsilon>0$, and let $A\subset X$ be an open and bounded set. Define the functional $\mathcal{F}^{(p)}_\varepsilon(\cdot,A):L^1(X)\to[0,\infty]$ by
\begin{equation} \label{eq:ConvNLContinuum:Feps}
\mathcal{F}^{(p)}_\varepsilon(u,A) = \frac{s_\varepsilon}{\varepsilon} \int_{A\times A} \eta_\varepsilon(x-z) |u(x)-u(z)|^p \rho(x)\rho(z) \, \mathrm{d} x \, \mathrm{d} z + \frac{1}{\varepsilon} \int_A V(u(x)) \rho(x) \, \mathrm{d} x\,.
\end{equation}
When $A=X$, we will simply write $\mathcal{F}^{(p)}_\varepsilon(u)$. \vspace{0.5\baselineskip}
\end{mydef}
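To get a feeling for the behaviour of $\mathcal{F}^{(p)}_\varepsilon$, one can discretise it in $d=1$. The sketch below uses purely illustrative choices, not the paper's general assumptions: $X=(0,1)$, $\rho\equiv1$, $s_\varepsilon=1$, $p=2$, $\eta=\frac12\chi_{[-1,1]}$ and the double well $V(u)=(1-u^2)^2$.

```python
import math

# Hedged 1d discretisation of F_eps^{(p)} with illustrative choices (not the
# paper's general setting): X = (0,1), rho = 1, s_eps = 1, p = 2,
# eta = (1/2) chi_{[-1,1]} and double-well V(u) = (1 - u^2)^2.
def F_eps(u, eps, p=2, m=200):
    h = 1.0 / m
    xs = [(i + 0.5) * h for i in range(m)]
    eta_eps = lambda t: 0.5 / eps if abs(t) <= eps else 0.0  # eta(t/eps)/eps
    nonlocal_term = (sum(eta_eps(x - z) * abs(u(x) - u(z)) ** p
                         for x in xs for z in xs) * h * h) / eps
    potential_term = sum((1.0 - u(x) ** 2) ** 2 for x in xs) * h / eps
    return nonlocal_term + potential_term

# A transition profile whose width is of order eps keeps the energy bounded:
vals = []
for eps in (0.2, 0.1, 0.05):
    u = lambda x, e=eps: math.tanh((x - 0.5) / e)
    vals.append(F_eps(u, eps))
    print(eps, round(vals[-1], 3))
```

The printed values remain of order one as $\varepsilon$ decreases, in line with the $\Gamma$-convergence statement below; a transition profile of fixed width would instead blow up at rate $1/\varepsilon$ through the potential term.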
This section is devoted to proving the following result.
\begin{theorem}
\label{thm:ConvNLContinuum:ConvNLContinuum}
Let $p\geq 1$, and assume $s_{\varepsilon}\to1$ as $\varepsilon\to0$.
Under conditions (A1), (B1-3) and (C1-3) the following holds:
\begin{itemize}
\item (Compactness) Let $\varepsilon_n\to 0^+$ and $u_n\in L^1(X,\mu)$ satisfy $\sup_{n\in \mathbb{N}} \mathcal{F}_{\varepsilon_n}^{(p)}(u_n)<\infty$, then $u_n$ is relatively compact in $L^1(X,\mu)$ and each cluster point $u$ has $\mathcal{G}_\infty^{(p)}(u)<\infty$;
\item ($\Gamma$-convergence) $\Glim_{\varepsilon\to 0}(L^1) \mathcal{F}_\varepsilon^{(p)} = \mathcal{G}_\infty^{(p)}$ and furthermore, if $u\in L^1(X,\mu)$ is a polyhedral function then for any $\zeta_\varepsilon\to 0$ the recovery sequence $u_\varepsilon$ can be chosen to satisfy the following: each $u_\varepsilon$ is Lipschitz continuous with $\mathrm{Lip}(u_\varepsilon) = \frac{1}{\zeta_\varepsilon\varepsilon}$ and $u_\varepsilon(x) = u(x)$ for all $x$ satisfying $\mathrm{dist}(x,\partial^*\{u=1\}) > \frac{\varepsilon}{\zeta_\varepsilon}$.
\end{itemize} \vspace{0\baselineskip}
\end{theorem}
See Section~\ref{subsec:ConvNLContinuum:Compact} for compactness and Sections~\ref{subsec:ConvNLContinuum:Liminf} and~\ref{subsec:ConvNLContinuum:Limsup} for the $\Gamma$-convergence.
The result of Theorem \ref{thm:ConvNLContinuum:ConvNLContinuum} is a generalization of a result by Alberti and Bellettini (see \cite{alberti98}) that we are going to recall for the reader's convenience.
First, we introduce notation for the special case when $p=2$ and $\rho\equiv1$ on $X$.
\begin{mydef}
Let $\varepsilon>0$, and define the functionals $\mathcal{E}_\varepsilon, \mathcal{E}_0: L^1(X)\to[0,\infty]$ by
\begin{align*}
\mathcal{E}_\varepsilon(u) & := \frac{1}{\varepsilon} \int_X\int_X \eta_\varepsilon(x-z) |u(x) - u(z)|^2 \, \mathrm{d} x \, \mathrm{d} z + \frac{1}{\varepsilon} \int_X V(u(x)) \, \mathrm{d} x
\\
& \nonumber\\
\mathcal{E}_0(u) & := \left\{ \begin{array}{ll} \displaystyle\int_{\partial^*\{u=1\}} \hat{\sigma}(\nu_u(x)) \, \mathrm{d} \mathcal{H}^{d-1}(x) & \text{if } u\in BV(X,\{\pm 1\})\,, \\
&\\
+ \infty & \text{else}\,,
\end{array} \right. \notag
\end{align*}
respectively, where
\begin{align*}
\hat{\sigma}(\nu) & := \min\left\{ E(f;\nu) \, | \, f:\mathbb{R}\to \mathbb{R}, \lim_{t\to \infty} f(t) = 1, \lim_{t\to-\infty} f(t) = -1 \right\} \\
E(f;\nu) & := \int_{-\infty}^\infty \int_{\mathbb{R}^d} \eta(h) |f(t+h_\nu) - f(t)|^2 \, \mathrm{d} h \, \mathrm{d} t + \int_{-\infty}^\infty V(f(t)) \, \mathrm{d} t \nonumber
\end{align*}
and $h_\nu := h \cdot \nu$. \vspace{0.5\baselineskip}
\end{mydef}
\begin{remark}
We make the following observations.
\begin{enumerate}
\item The fact that there exists a minimizer of $\hat{\sigma}(\nu)$ follows from~\cite[Theorem 2.4]{alberti98a}.
\item The minimum in $\hat{\sigma}(\nu)$ can be taken over non-decreasing functions with $tf(t)\geq 0$ for all $t\in \mathbb{R}$.
\item If $\eta$ is isotropic, \emph{i.e.}, $\eta(h)=\eta(|h|)$, then $E(f;\nu)$ is independent of $\nu$; therefore $\hat{\sigma}(\nu)$ is a constant and the functional $\mathcal{E}_0$ is a multiple of the perimeter $\mathcal{H}^{d-1}(\partial^*\{u=1\})$.
\end{enumerate} \vspace{0\baselineskip}
\end{remark}
The following theorem summarises two results, namely~\cite[Theorem 1.4]{alberti98} and~\cite[Theorem 3.3]{alberti98a}.
\begin{theorem}
\label{thm:ConvNLContinuum:AB:AB}
Assume that $X\subset\mathbb{R}^d$ is open, $V$ satisfies conditions (B1-3), and $\eta$ satisfies conditions (C1-3).
Then the following holds:
\begin{itemize}
\item (Compactness) Any sequence $\varepsilon_n\to 0^+$ and $\{u_n\}\subset L^1(X)$ with $\sup_{n\in \mathbb{N}}\mathcal{E}_{\varepsilon_n}(u_n)<\infty$ is relatively compact in $L^1(X)$, furthermore any cluster point $u$ satisfies $\mathcal{E}_0(u)<\infty$;
\item ($\Gamma$-convergence) $\Glim_{\varepsilon\to 0}(L^1) \mathcal{E}_\varepsilon = \mathcal{E}_0$.
\end{itemize} \vspace{0\baselineskip}
\end{theorem}
In fact the above theorem is true if (C1) is relaxed to $\eta\geq 0$ (i.e. positivity and continuity at the origin are not needed).
The proof of Theorem \ref{thm:ConvNLContinuum:ConvNLContinuum} is based on the proof of \cite[Theorem 1.4]{alberti98}, adapted to handle a generic exponent $p\geq 1$ and a non-constant density $\rho$.
The compactness proof is in Section~\ref{subsec:ConvNLContinuum:Compact} and the $\Gamma$-convergence in Sections~\ref{subsec:ConvNLContinuum:Liminf}-\ref{subsec:ConvNLContinuum:Limsup}.
\subsection{Compactness \label{subsec:ConvNLContinuum:Compact}}
The aim of this section is to show that any sequence $\{u_n\}_{n=1}^\infty\subset L^1(X,\mu)$ with $\sup_{n\in \mathbb{N}}\mathcal{F}_{\varepsilon_n}^{(p)}(u_n) < \infty$ is relatively compact in $L^1(X,\mu)$ and that $\mathcal{G}_\infty^{(p)}(u)<\infty$ for any cluster point $u\in L^1(X,\mu)$.
This will prove the first part of Theorem~\ref{thm:ConvNLContinuum:ConvNLContinuum}.
The strategy of the proof is to apply the Alberti and Bellettini compactness result in Theorem~\ref{thm:ConvNLContinuum:AB:AB}.
When $p=2$ this follows from the upper and lower bounds on $\rho$ that imply an `equivalence' between $\mathcal{E}_{\varepsilon_n}$ and $\mathcal{F}_{\varepsilon_n}^{(2)}$.
When $p\neq 2$ we approximate $u_n$ with a sequence $v_n$ satisfying $v_n(x)\in\{\pm 1\}$; then, since
$|v_n(x)-v_n(z)|^2=2^{2-p} |v_n(x)-v_n(z)|^p$, we can easily find an equivalence between $\mathcal{E}_{\varepsilon_n}$ and $\mathcal{F}_{\varepsilon_n}^{(p)}$.
We start with the preliminary result that shows $\mathcal{E}_0(u)<\infty \Leftrightarrow \mathcal{G}_\infty^{(p)}(u)<\infty$.
\begin{proposition}
\label{prop:ConvNLContinuum:Compact:PerEquiv}
Let $X\subset \mathbb{R}^d$ be open and bounded, and let $u\in L^1(X;\{\pm 1\})$.
Under assumptions (A1), (B1-2), (C1,3) we have
\begin{align*}
\int_{\partial^*\{u=1\}}\hat{\sigma}(\nu_u(x)) \, \mathrm{d} \mathcal{H}^{d-1}(x) < +\infty \quad & \Leftrightarrow \quad \mathcal{H}^{d-1}(\partial^*\{u=1\})<+\infty \\
& \Rightarrow \quad \int_{\partial^*\{u=1\}} \sigma^{(p)}(x,\nu_u(x)) \rho(x) \, \mathrm{d} \mathcal{H}^{d-1}(x) < +\infty.
\end{align*} \vspace{0\baselineskip}
\end{proposition}
\begin{remark}
The above proposition implies that
\[ \mathcal{E}_0(u) < +\infty \quad \Leftrightarrow \quad u\in BV(X;\{\pm 1\}) \quad \Leftrightarrow \quad \mathcal{G}_\infty^{(p)}(u)< +\infty \]
since by definition of $\mathcal{G}_\infty^{(p)}$ if $\mathcal{G}_\infty^{(p)}(u)<+\infty$ then $u\in BV(X;\{\pm 1\})$. \vspace{0.8\baselineskip}
\end{remark}
\begin{remark}
The missing implication,
\[ \int_{\partial^*\{u=1\}} \sigma^{(p)}(x,\nu_u(x)) \rho(x) \, \mathrm{d} \mathcal{H}^{d-1}(x) < +\infty \implies \mathcal{H}^{d-1}(\partial^*\{u=1\})<+\infty \,, \]
is also true.
One can show that the minimisation problem in $\sigma^{(p)}(x,\nu)$ (for any $x\in X$ and $\nu\in \mathbb{S}^{d-1}$) can be reduced to a minimisation problem over functions $f:\mathbb{R}\to \mathbb{R}$ similar to $\hat{\sigma}(\nu)$ (but with an additional $x$ dependence); see Lemma~\ref{lem:ConvNLContinuum:Preliminary:Cell}.
Since the missing implication is not needed, we do not include its proof.
\vspace{0.8\baselineskip}
\end{remark}
\begin{proof}[Proof of Proposition~\ref{prop:ConvNLContinuum:Compact:PerEquiv}]
\emph{Step 1.} We show
\[ \mathcal{H}^{d-1}(\partial^*\{u=1\})<+\infty \quad \implies \quad \int_{\partial^*\{u=1\}}\hat{\sigma}(\nu_u(x)) \, \mathrm{d} \mathcal{H}^{d-1}(x) < +\infty. \]
Choose $\hat{f}(t) = +1 $ if $t\geq 0$ and $\hat{f}(t) = -1$ if $t<0$.
Then,
\[
E(\hat{f};\nu) = \int_{-\infty}^\infty \int_{\mathbb{R}^d} \eta(h) |\hat{f}(t+h_\nu) - \hat{f}(t) |^2 \, \mathrm{d} h \, \mathrm{d} t = 4\int_{\mathbb{R}^d} \eta(h) |h_\nu| \, \mathrm{d} h \leq 4R_\eta \int_{\mathbb{R}^d} \eta(h) \, \mathrm{d} h =: C
\]
where $C\in (0,\infty)$, $h_\nu = h\cdot \nu$ and $R_\eta$ is given by assumption (C3); here the potential term vanishes since $V(\pm1)=0$, and the second equality holds because $|\hat{f}(t+h_\nu)-\hat{f}(t)|^2=4$ precisely on the set of $t$, of measure $|h_\nu|$, where $t$ and $t+h_\nu$ have opposite signs.
Note that $C$ is independent of $\nu$.
So,
\[
\int_{\partial^*\{u=1\}}\hat{\sigma}(\nu_u(x)) \, \mathrm{d} \mathcal{H}^{d-1}(x) \leq \int_{\partial^*\{u=1\}} E(\hat{f};\nu_u(x)) \, \mathrm{d} \mathcal{H}^{d-1}(x) \leq C \mathcal{H}^{d-1}(\partial^*\{u=1\}).
\]
\emph{Step 2.} We show
\[ \int_{\partial^*\{u=1\}}\hat{\sigma}(\nu_u(x)) \, \mathrm{d} \mathcal{H}^{d-1}(x) < +\infty \quad \implies \quad \mathcal{H}^{d-1}(\partial^*\{u=1\})<+\infty. \]
Using assumption (C1) it is possible to find $a>0$ and $c>0$ satisfying $\eta(h)\geq c$ for all $|h|\leq a$.
Let $\hat{f}:\mathbb{R}\to \mathbb{R}$ such that
\begin{equation} \label{eq:ConvNLContinuum:Compact:fAss}
\hat{f} \text{ is non-decreasing,} \quad \lim_{t\to\infty} \hat{f}(t) = -\lim_{t\to-\infty} \hat{f}(t) = 1, \quad \hat{f}(t)t\geq 0 \text{ for } t\in\mathbb{R}\,.
\end{equation}
If $\hat{f}(\frac{a}{2})\leq \frac{1}{2}$ then $\hat{f}(t) \in [0,\frac12]$ for $t\in [0,\frac{a}{2}]$ and
\begin{equation}\label{eq:estimateE1}
E(\hat{f};\nu) \geq \frac{a}{2} \inf_{t\in [0,\frac12]} V(t) =: \tilde{a}_1 >0.
\end{equation}
Otherwise, if $\hat{f}\left(\frac{a}{2}\right)> \frac12$ then for $h\geq \frac{a}{2}$
\[
\int_{\mathbb{R}} |\hat{f}(t+h)-\hat{f}(t)|^2 \, \mathrm{d} t \geq \int_{\frac{a}{2}-h}^0 |\hat{f}(t+h) - \hat{f}(t)|^2 \, \mathrm{d} t \geq \frac{h-\frac{a}{2}}{4}.
\]
Similarly, if $h\leq -\frac{a}{2}$ we have
\[
\int_{\mathbb{R}} |\hat{f}(t+h)-\hat{f}(t)|^2 \, \mathrm{d} t \geq \frac{\frac{a}{2}-h}{4}\,.
\]
So, for all $|h|\geq\frac{a}{2}$ it holds
\[
\int_{\mathbb{R}} |\hat{f}(t+h)-\hat{f}(t)|^2 \, \mathrm{d} t \geq \frac{|h-\frac{a}{2}|}{4}\,.
\]
Therefore
\begin{align}\label{eq:estimateE2}
E(\hat{f};\nu) & \geq \int_{\mathbb{R}^d} \eta(h) \int_{\mathbb{R}} |\hat{f}(t+h_\nu) - \hat{f}(t)|^2 \, \mathrm{d} t \, \mathrm{d} h \nonumber\\
& \geq \frac{1}{4} \int_{\{h\in\mathbb{R}^d\,:\,|h_\nu|\geq \frac{3a}{4}\}} \eta(h) \left| h_\nu-\frac{a}{2} \right| \, \mathrm{d} h \nonumber\\
& \geq \frac{ac}{16} \int_{\{h\in\mathbb{R}^d\,:\,|h_\nu|\geq \frac{3a}{4}\}} \chi_{B(0,a)} \, \mathrm{d} h \nonumber\\
&=:\tilde{a}_2 >0\,.
\end{align}
Set $\tilde{a} := \min\{\tilde{a}_1,\tilde{a}_2\}>0$. Using \eqref{eq:estimateE1} and \eqref{eq:estimateE2} we have, for any $f:\mathbb{R}\to \mathbb{R}$ satisfying~\eqref{eq:ConvNLContinuum:Compact:fAss}, that $E(f;\nu)\geq \tilde{a}$, and hence $\hat{\sigma}(\nu)\geq\tilde{a}$ for every $\nu\in\mathbb{S}^{d-1}$.
We get
\[
\tilde{a} \mathcal{H}^{d-1}(\partial^*\{u=1\})\leq \int_{\partial^*\{u=1\}} \hat{\sigma}(\nu_u(x)) \, \mathrm{d} \mathcal{H}^{d-1}(x)=\mathcal{E}_0(u)<\infty\,.
\]
\emph{Step 3.} We show
\[ \mathcal{H}^{d-1}(\partial^*\{u=1\})<+\infty \quad \implies \quad \int_{\partial^*\{u=1\}} \sigma^{(p)}(x,\nu_u(x)) \rho(x) \, \mathrm{d} \mathcal{H}^{d-1}(x) < +\infty. \]
Following the same argument as in the first part of the proof we let $\hat{f}(t) = 1$ for $t\geq 0$ and $\hat{f}(t) = -1$ for $t<0$.
Fix $\nu\in\mathbb{S}^{d-1}$ and let $f_\nu(x) = \hat{f}(x\cdot \nu)$.
For any $\bar{x}\in X$ and $C\in \mathcal{C}(\bar{x},\nu)$ we clearly have $f_\nu\in \mathcal{U}(C,\nu)$.
So,
\begin{align*}
\frac{1}{\mathcal{H}^{d-1}(C)} G^{(p)}(f_\nu,\rho(\bar{x}),T_C) & = \rho(\bar{x})\int_{-\infty}^\infty \int_{\mathbb{R}^d} \eta(h) |\hat{f}(t+h_\nu) - \hat{f}(t)|^p \, \mathrm{d} h \, \mathrm{d} t \\
& \leq 2^{p}c_2 \int_{\mathbb{R}^d}\eta(h) |h_\nu| \, \mathrm{d} h \\
& \leq \tilde{c}
\end{align*}
where $\tilde{c}\in (0,\infty)$ can be chosen to be independent of $\nu$ and in the last step we used assumption (C3).
Then,
\begin{align*}
\int_{\partial^*\{u=1\}} \sigma^{(p)}(x,\nu_u(x)) \rho(x) \, \mathrm{d} \mathcal{H}^{d-1}(x) & \leq \int_{\partial^*\{u=1\}} \frac{1}{\mathcal{H}^{d-1}(C)} G^{(p)}(f_\nu,\rho(\bar{x}),T_C) \rho(\bar{x}) \, \mathrm{d} \mathcal{H}^{d-1}(\bar{x}) \\
& \leq c_2 \tilde{c} \mathcal{H}^{d-1}(\partial^*\{u=1\}).
\end{align*}
This concludes the proof.
\end{proof}
By the above proposition it is enough to show that any sequence $\{u_n\}_{n=1}^\infty\subset L^1(X,\mu)$ satisfying $\sup_{n\in \mathbb{N}}\mathcal{F}_{\varepsilon_n}(u_n)<+\infty$ is relatively compact and any cluster point $u$ satisfies $\mathcal{E}_0(u)<\infty$.
We do this by a direct comparison with $\mathcal{E}_{\varepsilon_n}$.
\begin{proof}[Proof of Theorem~\ref{thm:ConvNLContinuum:ConvNLContinuum} (Compactness)]
Assume $\varepsilon_n\to 0^+$, $s_n := s_{\varepsilon_n} \to 1$ and $\sup_{n\in \mathbb{N}}\mathcal{F}^{(p)}_{\varepsilon_n}(u_n)<+\infty$.
Let
\[
v_n(x) :=\mathrm{sign}(u_n(x)):= \left\{ \begin{array}{ll} +1 & \text{if } u_n(x) \geq 0 \\ -1 & \text{if } u_n(x)<0. \end{array} \right.
\]
We claim that
\begin{equation} \label{eq:ConvNLContinuum:Compact:unvn}
\|u_n-v_n\|_{L^1}\to 0,
\end{equation}
\begin{equation} \label{eq:ConvNLContinuum:Compact:Fepsvn}
\sup_{n\in \mathbb{N}} \mathcal{F}_{\varepsilon_n}^{(p)}(v_n)<+\infty.
\end{equation}
\emph{Step 1.} Let us first prove \eqref{eq:ConvNLContinuum:Compact:unvn}.
Fix $\delta>0$ and let
\begin{align*}
K_n^{(\delta)} & = \left\{ x\in X \, : \, |u_n(x)| \geq 1+\delta \right\} \\
L_n^{(\delta)} & = \left\{ x\in X \, : \, |u_n(x)|\leq 1-\delta \right\}.
\end{align*}
Note that for $x\in X\setminus (K_n^{(\delta)}\cup L_n^{(\delta)})$ we have $|v_n(x)-u_n(x)|\leq \delta$.
Now,
\begin{align*}
\int_X |u_n(x) - v_n(x)| \, \mathrm{d} x & \leq \delta \mathrm{Vol}(X) + \int_{K_n^{(\delta)}} |u_n(x)-v_n(x)| \, \mathrm{d} x + \int_{L_n^{(\delta)}} |u_n(x) - v_n(x)| \, \mathrm{d} x \\
& \leq \delta \mathrm{Vol}(X) + \int_{K_n^{(\delta)}} |u_n(x)| \, \mathrm{d} x + \mathrm{Vol}(K_n^{(\delta)}) + 2\mathrm{Vol}(L_n^{(\delta)}).
\end{align*}
Since $V$ is continuous and vanishes only at $\pm 1$, there exists $\gamma_\delta>0$ such that $V(t)\geq \gamma_\delta$ for all $t\in (-\infty,-1-\delta]\cup [-1+\delta,1-\delta] \cup [1+\delta,+\infty)$.
Hence $V(u_n(x))\geq \gamma_\delta$ for all $x\in K_n^{(\delta)}\cup L_n^{(\delta)}$.
This implies,
\[ \mathrm{Vol}(K_n^{(\delta)}) \leq \frac{1}{\gamma_\delta} \int_{K_n^{(\delta)}} V(u_n(x)) \, \mathrm{d} x \leq \frac{\varepsilon_n}{c_1 \gamma_\delta} \mathcal{F}_{\varepsilon_n}^{(p)}(u_n). \]
By the same calculation $\mathrm{Vol}(L_n^{(\delta)}) \leq \frac{\varepsilon_n}{c_1\gamma_\delta} \mathcal{F}_{\varepsilon_n}^{(p)}(u_n)$.
Furthermore,
\begin{align*}
\int_{K_n^{(\delta)}} |u_n(x)| \, \mathrm{d} x & = \int_{K_n^{(\delta)} \cap \{ |u_n(x)|\leq R_V\}} |u_n(x)| \, \mathrm{d} x + \int_{\{|u_n(x)|>R_V\}} |u_n(x)| \, \mathrm{d} x \\
& \leq R_V \mathrm{Vol}(K_n^{(\delta)}) + \frac{1}{\tau} \int_{\{|u_n(x)|>R_V\}} V(u_n(x)) \, \mathrm{d} x \\
& \leq \left(\frac{R_V}{\gamma_\delta} + \frac{1}{\tau} \right) \frac{\varepsilon_n \mathcal{F}_{\varepsilon_n}^{(p)}(u_n)}{c_1}.
\end{align*}
So, $\limsup_{n\to \infty} \| u_n - v_n\|_{L^1} \leq \delta \mathrm{Vol}(X)$.
Since this is true for all $\delta>0$, we have $\lim_{n\to \infty} \| u_n - v_n\|_{L^1}=0$, which proves claim \eqref{eq:ConvNLContinuum:Compact:unvn}. \vspace{0.5\baselineskip}
\emph{Step 2.} In order to prove \eqref{eq:ConvNLContinuum:Compact:Fepsvn} we reason as follows.
If $|u_n(x)|\geq \frac12$ then $\mathrm{sign}(u_n(x)) \neq \mathrm{sign}(u_n(y))$ implies $|u_n(x)-u_n(y)|\geq \frac12$.
Now, since
\[ |v_n(x) - v_n(y)| = \left\{ \begin{array}{ll} 0 & \text{if } \mathrm{sign}(u_n(x)) = \mathrm{sign}(u_n(y)) \\ 2 & \text{if } \mathrm{sign}(u_n(x)) \neq \mathrm{sign}(u_n(y)) \end{array} \right. \]
we have $|v_n(x) - v_n(y)|\leq 4 |u_n(x) - u_n(y)|$ whenever $|u_n(x)|\geq \frac12$.
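For completeness, the pointwise bound just used can be verified directly (a short check, involving no extra assumptions):
\begin{align*}
\mathrm{sign}(u_n(x)) = \mathrm{sign}(u_n(y)) \quad &\implies \quad |v_n(x)-v_n(y)| = 0 \leq 4\,|u_n(x)-u_n(y)|\,,\\
\mathrm{sign}(u_n(x)) \neq \mathrm{sign}(u_n(y)) \quad &\implies \quad |v_n(x)-v_n(y)| = 2 = 4\cdot\tfrac12 \leq 4\,|u_n(x)-u_n(y)|\,,
\end{align*}
where the last inequality holds because $|u_n(x)|\geq\frac12$ and $u_n(x)$, $u_n(y)$ have opposite signs, so that $|u_n(x)-u_n(y)|\geq|u_n(x)|\geq\frac12$.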
Let $M_n = \{x\in X \, : \, |u_n(x)|\leq \frac12 \}$.
We have,
\begin{align*}
\mathcal{F}_{\varepsilon_n}^{(p)}(v_n) & = \frac{s_n}{\varepsilon_n} \int_{M_n\times X} \eta_{\varepsilon_n}(x-z) |v_n(x)-v_n(z)|^p \rho(x) \rho(z) \, \mathrm{d} x \, \mathrm{d} z \\
& \quad \quad \quad \quad + \frac{s_n}{\varepsilon_n} \int_{M_n^c\times X} \eta_{\varepsilon_n}(x-z) |v_n(x) - v_n(z)|^p \rho(x) \rho(z) \, \mathrm{d} x \, \mathrm{d} z \\
& \leq \frac{2^{p}c_2^2 s_n}{\varepsilon_n} \int_{M_n\times X} \eta_{\varepsilon_n}(x-z) \, \mathrm{d} x \, \mathrm{d} z \\
& \quad \quad \quad \quad + \frac{4^p s_n}{\varepsilon_n} \int_{M_n^c\times X} \eta_{\varepsilon_n}(x-z) |u_n(x)-u_n(z)|^p \rho(x) \rho(z) \, \mathrm{d} x \, \mathrm{d} z \\
& \leq \frac{2^{p}c_2^2 s_n}{\varepsilon_n} \int_{\mathbb{R}^d} \eta(w) \, \mathrm{d} w \mathrm{Vol}(M_n) + 4^p \mathcal{F}_{\varepsilon_n}^{(p)}(u_n).
\end{align*}
Now,
\begin{align*}
\frac{1}{\varepsilon_n} \mathrm{Vol}(M_n) \leq \frac{1}{\gamma\varepsilon_n} \int_{\{|u_n(x)|\leq \frac12\}} V(u_n(x)) \, \mathrm{d} x \leq \frac{1}{\gamma c_1} \mathcal{F}_{\varepsilon_n}^{(p)}(u_n)
\end{align*}
where $V(t)\geq \gamma>0$ for all $t\in [-\frac12,\frac12]$.
Hence $\sup_{n\in\mathbb{N}} \mathcal{F}_{\varepsilon_n}^{(p)}(v_n)<+\infty$. \vspace{0.5\baselineskip}
\emph{Step 3.} We conclude the proof by noticing that, since $v_n\in L^1(X;\{\pm 1\})$, we have $\mathcal{F}_{\varepsilon_n}^{(p)}(v_n) = 2^{p-2} s_n \mathcal{E}_{\varepsilon_n}(v_n)$. \vspace{0.5\baselineskip}
So $v_n$ is relatively compact in $L^1$ by Theorem~\ref{thm:ConvNLContinuum:AB:AB} and by Proposition~\ref{prop:ConvNLContinuum:Compact:PerEquiv} $\mathcal{G}_\infty^{(p)}(u)<+\infty$ for any cluster point $u$ of $\{v_n\}_{n=1}^\infty$. Using \eqref{eq:ConvNLContinuum:Compact:unvn} the same holds true for the sequence $\{u_n\}_{n\in\mathbb{N}}$.
\end{proof}
\subsection{Preliminary results \label{subsec:ConvNLContinuum:Preliminary}}
Here we prove some technical results needed in the proof of the $\Gamma$-convergence result stated in Theorem \ref{thm:ConvNLContinuum:ConvNLContinuum}.
We start with the result that the minimisation in the cell problem defining $\sigma^{(p)}(x,\nu)$ can be restricted to functions $u$ that vary only in the direction $\nu$.
\begin{lemma}
\label{lem:ConvNLContinuum:Preliminary:Cell}
Fix $\nu\in \mathbb{S}^{d-1}$ and let $C\subset \nu^\perp$.
Let $p\geq 1$ and assume (B1-3) and (C1-3) are in force.
Then, for any $\lambda>0$, the minimisation problem
\[ \min \left\{ \frac{1}{\mathcal{H}^{d-1}(C)} G^{(p)}(u,\lambda,T_C) \,:\, u\in \mathcal{U}(C,\nu) \right\} \]
has a solution $u^\dagger$ that can be written $u^\dagger(x) = \bar{u}(x\cdot \nu)$ where $\bar{u}:\mathbb{R}\to [-1,1]$ is increasing.
\end{lemma}
The proof of the lemma is a simple adaptation of~\cite[Theorem 3.3]{alberti98a}.
Indeed, one can absorb $\lambda$ into the mollifier $\eta$, and then the only difference between Lemma~\ref{lem:ConvNLContinuum:Preliminary:Cell} and~\cite[Theorem 3.3]{alberti98a} is the exponent $p$.
By modifying~\cite[Proposition~3.7]{alberti98a} to treat this more general case we conclude that Lemma~\ref{lem:ConvNLContinuum:Preliminary:Cell} holds; we omit the details.
\begin{remark}
\label{rem:ConvNLContinuum:Preliminary:Cell}
By Lemma~\ref{lem:ConvNLContinuum:Preliminary:Cell} we can write
\[ \sigma^{(p)}(x,\nu) = \inf \left\{ \rho(x) \int_{\mathbb{R}} \int_{\mathbb{R}^d} \eta(h) |u(z + h\cdot \nu) - u(z) |^p \, \mathrm{d} h \, \mathrm{d} z + \int_{\mathbb{R}} V(u(z)) \, \mathrm{d} z \right\} \]
where the infimum is taken over functions $u:\mathbb{R}\to [-1,1]$ satisfying $\lim_{z\to \infty} u(z)=1$ and $\lim_{z\to -\infty} u(z)=-1$.
\end{remark}
We continue by proving some continuity properties of the function $\sigma^{(p)}$.
\begin{lemma}\label{lem:ConNLContinuum:Liminf:contsigma}
Under assumptions (A1) and (C3) the following hold:
\begin{itemize}
\item[(i)] the function $(x,\nu)\mapsto\sigma^{(p)}(x,\nu)$ is upper semi-continuous on $X\times\mathbb{S}^{d-1}$,
\item[(ii)] for every $\nu\in\mathbb{S}^{d-1}$, the function $x\mapsto\sigma^{(p)}(x,\nu)$ is continuous on $X$.
\end{itemize}
\end{lemma}
\begin{proof}
\emph{Proof of (i).} Fix $\bar{x}\in X$ and $\nu\in\mathbb{S}^{d-1}$ and let $\{x_n\}_{n=1}^\infty\subset X$ and $\{\nu_n\}_{n=1}^\infty\subset\mathbb{S}^{d-1}$ be such that $x_n\to\bar{x}$ and $\nu_n\to\nu$ as $n\to\infty$.
Let $R_n$ be a rotation such that $R_n\nu=\nu_n$; since $\nu_n\to\nu$, the rotations $R_n$ can be chosen so that $R_n\to\mathrm{Id}$ as $n\to\infty$.
Fix $t>0$ and let $D\in\mathcal{C}(\bar{x},\nu)$ and $w\in\mathcal{U}(D,\nu)$ be such that
\[
\frac{1}{\mathcal{H}^{d-1}(D)}G^{(p)}\left( w,\rho(\bar{x}), T_{D} \right)\leq \sigma^{(p)}(\bar{x},\nu)+t\,.
\]
By Remark~\ref{rem:ConvNLContinuum:Preliminary:Cell} we can assume that $w(x) = \bar{w}(x\cdot \nu)$ for some increasing function $\bar{w}:\mathbb{R}\to[-1,1]$.
Then,
\begin{align*}
\int_{T_D} |w(z+h) - w(z)|^p \, \mathrm{d} z & \leq 2^{p-1} \mathcal{H}^{d-1}(D) \int_{\mathbb{R}} | \bar{w}(z_\nu + h_\nu) - \bar{w}(z_\nu) | \, \mathrm{d} z_\nu \quad \text{where } h_\nu = h\cdot \nu \\
& = 2^{p-1} \mathcal{H}^{d-1}(D) \lim_{M\to \infty} \sum_{i=-M}^{M} \int_0^{|h_\nu|} |\bar{w}(y+(i+1)|h_\nu|) - \bar{w}(y+i|h_\nu|)| \, \mathrm{d} y \\
& = 2^{p-1} \mathcal{H}^{d-1}(D) \lim_{M\to \infty} \int_0^{|h_\nu|} \bar{w}(y+(M+1)|h_\nu|) - \bar{w}(y-M|h_\nu|) \, \mathrm{d} y \\
& = 2^p \mathcal{H}^{d-1}(D) |h_\nu|
\end{align*}
where the second equality uses that $\bar{w}$ is increasing to drop the absolute values and telescope the sum, and the last line follows from the monotone convergence theorem.
Hence,
\begin{equation}\label{eq:inequality}
\int_{T_{D}} |w(z+h) - w(z)|^p \, \mathrm{d} z \leq 2^p \mathcal{H}^{d-1}(D) |h\cdot \nu|\,.
\end{equation}
For $n\in\mathbb{N}$ define $C_n\in\mathcal{C}(x_n,\nu_n)$ and $u_n\in\mathcal{U}(C_n,\nu_n)$ by
\[
C_n:=R_n(D-\bar{x})+x_n\,,
\]
\[
u_n(x):=w(R_n^{-1}(x-x_n)+\bar{x})\,,
\]
respectively. Then
\begin{align*}
\sigma^{(p)}(x_n,\nu_n)&\leq \frac{1}{\mathcal{H}^{d-1}(C_n)}G^{(p)}(u_n,\rho(x_n),T_{C_n})\\
&\leq \frac{1}{\mathcal{H}^{d-1}(D)}G^{(p)}(w,\rho(\bar{x}),T_{D}) + \delta_n\\
&\leq \sigma^{(p)}(\bar{x},\nu)+t+\delta_n\,,
\end{align*}
where
\[
\delta_n:=\frac{1}{\mathcal{H}^{d-1}(D)} \left|\, G^{(p)}(u_n,\rho(x_n),T_{C_n})-G^{(p)}(w,\rho(\bar{x}),T_{D}) \,\right|\,.
\]
We claim that $\delta_n\to0$ as $n\to\infty$. Since $t>0$ is arbitrary, this will prove the upper semi-continuity.
Fix $\varepsilon>0$.
Using the continuity of $\rho$, for $n$ sufficiently large we have that $|\rho(x_n)-\rho(\bar{x})|<\varepsilon$.
Thus, we get
\begin{align*}
\delta_n & = \frac{1}{\mathcal{H}^{d-1}(D)} \Bigg| \rho(x_n) \int_{T_{D}}\int_{\mathbb{R}^d} \eta(R_n h) \left| w(z+h)-w(z) \right|^p \, \mathrm{d} h \, \mathrm{d} z \\
&\hspace{2.5cm}- \rho(\bar{x}) \int_{T_{D}}\int_{\mathbb{R}^d} \eta(h) \left| w(z+h)-w(z) \right|^p \, \mathrm{d} h \, \mathrm{d} z \Bigg| \\
& \leq \frac{|\rho(x_n)-\rho(\bar{x})|}{\mathcal{H}^{d-1}(D)} \int_{\mathbb{R}^d} \eta(h) \int_{T_{D}} |w(z+h)-w(z)|^p \, \mathrm{d} z \, \mathrm{d} h \\
&\hspace{2.5cm}+ \frac{\rho(x_n)}{\mathcal{H}^{d-1}(D)} \int_{\mathbb{R}^d} \left| \eta(R_nh) - \eta(h) \right| \int_{T_{D}} |w(z+h)-w(z)|^p \, \mathrm{d} z \, \mathrm{d} h \\
& \leq 2^p\varepsilon R_\eta \|\eta\|_{L^1} + 2^p c_2 R_\eta \int_{\mathbb{R}^d} \left| \eta(R_n h) - \eta(h) \right| \, \mathrm{d} h
\end{align*}
where in the last step we used \eqref{eq:inequality}.
In order to show that the second term vanishes, we use an argument similar to the one used to prove that translations are continuous in $L^p$. For every $s>0$ let $\widetilde{\eta}_s:\mathbb{R}^d\to[0,\infty)$ be a continuous function with support in $B(0,2R_\eta)$ such that $\|\eta-\widetilde{\eta}_s\|_{L^1(\mathbb{R}^d)}<s$. Then, for every $r>0$ there exists $\bar{n}\in\mathbb{N}$ such that $|\widetilde{\eta}_s(R_n h)-\widetilde{\eta}_s(h)|<r$ for all $h\in\mathbb{R}^d$ and all $n\geq\bar{n}$.
Thus, for $n\geq\bar{n}$,
\[
\int_{\mathbb{R}^d} \left| \eta(R_n h) - \eta(h) \right| \, \mathrm{d} h\leq \|\eta-\widetilde{\eta}_s\|_{L^1(\mathbb{R}^d)}
+\int_{\mathbb{R}^d} |\widetilde{\eta}_s(R_n h)-\widetilde{\eta}_s(h)| \, \mathrm{d} h \leq s+r\mathrm{Vol}(B(0,2R_\eta))\,.
\]
Since $r$ and $s$ are arbitrary, we conclude that $\int_{\mathbb{R}^d} \left| \eta(R_n h) - \eta(h) \right| \, \mathrm{d} h\to0$ as $n\to\infty$ and, in turn, that $\delta_n\to0$ as $n\to\infty$.\vspace{0.5\baselineskip}
\emph{Proof of (ii).}
Fix $\nu\in \mathbb{S}^{d-1}$, $\bar{x}\in X$ and let $x_n\rightarrow \bar{x}$.
By part (i) we have that $\sigma^{(p)}(\bar{x},\nu)\geq\limsup_{n\to\infty}\sigma^{(p)}(x_n,\nu)$.
\vspace{0.5\baselineskip}
We claim that $\sigma^{(p)}(\bar{x},\nu)\leq\liminf_{n\to\infty}\sigma^{(p)}(x_n,\nu)$.
Without loss of generality, let us assume that
\[ \liminf_{n\to\infty}\sigma^{(p)}(x_n,\nu)=\lim_{n\to\infty}\sigma^{(p)}(x_n,\nu)<\infty\,. \]
For every $n\in\mathbb{N}$ let $C_n\in\mathcal{C}(x_n,\nu)$ and $u_n\in\mathcal{U}(C_n,\nu)$
be such that
\begin{equation}\label{eq:ConNLContinuum:Liminf:est1}
\frac{1}{\mathcal{H}^{d-1}(C_n)}G^{(p)}(u_n,\rho(x_n),T_{C_n})\leq \sigma^{(p)}(x_n,\nu)+\frac{1}{n}.
\end{equation}
Set $\lambda_n:=\bar{x}-x_n$ and define
\[ \widetilde{C}_n:=C_n+\lambda_n\,,\quad\quad\quad \widetilde{u}_n(x):=u_n(x-\lambda_n). \]
So, $\widetilde{C}_n\in\mathcal{C}(\bar{x},\nu)$ and $\widetilde{u}_n\in\mathcal{U}(\widetilde{C}_n,\nu)$.
Using \eqref{eq:ConNLContinuum:Liminf:est1}, we get
\begin{align}\label{eq:estimateliminfsigma}
\sigma^{(p)}(\bar{x},\nu) &\leq \frac{1}{\mathcal{H}^{d-1}(\widetilde{C}_n)}G^{(p)}(\widetilde{u}_n,\rho(\bar{x}),T_{\widetilde{C}_n}) \nonumber\\
&\leq \frac{G^{(p)}(u_n,\rho(x_n),T_{C_n}) }{\mathcal{H}^{d-1}(C_n)}
+ \frac{1}{\mathcal{H}^{d-1}(C_n)}\left|\,G^{(p)}(\widetilde{u}_n,\rho(\bar{x}),T_{\widetilde{C}_n}) - G^{(p)}(u_n,\rho(x_n),T_{C_n}) \,\right| \nonumber\\
& \leq \sigma^{(p)}(x_n,\nu)+\frac{1}{n} + \frac{1}{\mathcal{H}^{d-1}(C_n)}\left|\,G^{(p)}(\widetilde{u}_n,\rho(\bar{x}),T_{\widetilde{C}_n}) - G^{(p)}(u_n,\rho(x_n),T_{C_n}) \,\right|\,.
\end{align}
To estimate the last term, we reason as follows.
First of all, we notice that
\begin{equation}\label{eq:vvanishes}
\int_{T_{C_n}} V\left( u_n(z) \right)\mathrm{d} z = \int_{T_{\widetilde{C}_n}} V\left( \widetilde{u}_n(z) \right)\mathrm{d} z\,.
\end{equation}
Fix $\varepsilon>0$.
Using the continuity of $\rho$, there exists $\bar{n}\in\mathbb{N}$, such that
$|\rho(x_n)-\rho(\bar{x})|<\varepsilon$ for all $n\geq\bar{n}$.
From \eqref{eq:vvanishes} we get that
\begin{align*}
&\frac{1}{\mathcal{H}^{d-1}(C_n)}\left|\,G^{(p)}(\widetilde{u}_n,\rho(\bar{x}),T_{\widetilde{C}_n}) - G^{(p)}(u_n,\rho(x_n),T_{C_n}) \,\right|\\
&\quad \quad\quad\quad\leq \frac{1}{\mathcal{H}^{d-1}(C_n)}|\rho(x_n)-\rho(\bar{x})|
\int_{T_{C_n}} \int_{\mathbb{R}^d} \eta(h) \left| u_n(z+h) - u_n(z) \right|^p \, \mathrm{d} h \, \mathrm{d} z \\
&\quad \quad\quad\quad \leq \frac{\varepsilon}{c_1} \left( \sigma^{(p)}(x_n,\nu) + \frac{1}{n} \right) \,.
\end{align*}
By the above and~\eqref{eq:estimateliminfsigma} we have
\[ \sigma^{(p)}(\bar{x},\nu) \leq \left( 1+\frac{\varepsilon}{c_1}\right) \left( \sigma^{(p)}(x_n,\nu) + \frac{1}{n} \right). \]
Using the arbitrariness of $\varepsilon$ we conclude that
\[
\sigma^{(p)}(\bar{x},\nu)\leq\liminf_{n\to\infty}\sigma^{(p)}(x_n,\nu)\,
\]
as required.
\end{proof}
\begin{remark}
Notice that the above result did not require the existence of a solution for the infimum problem defining $\sigma^{(p)}$. \vspace{0.8\baselineskip}
\end{remark}
We notice that the main feature of the $\Gamma$-convergence of $\mathcal{F}^{(p)}_{\varepsilon_n}$ to $\mathcal{G}^{(p)}_\infty$ is that we recover, in the limit, a local functional starting from a nonlocal functional.
To be more precise, let $A,B\subset X$ be disjoint sets. Then, it holds that
\begin{equation}\label{eq:nonloc}
\mathcal{F}^{(p)}_{\varepsilon_n}(u, A\cup B)=\mathcal{F}^{(p)}_{\varepsilon_n}(u, A)+\mathcal{F}^{(p)}_{\varepsilon_n}(u, B)+2\widetilde{\Lambda}_{\varepsilon_n}(u, A, B)
\end{equation}
where we define the nonlocal deficit
\begin{equation} \label{eq:ConvNLContinuum:Prelim:Lambdatilde}
\widetilde{\Lambda}_{\varepsilon}(u, A, B):=\frac{s_{\varepsilon}}{\varepsilon}\int_A\int_B \eta_\varepsilon(x-z) |u(x)-u(z)|^p \rho(x)\rho(z) \, \mathrm{d} x \, \mathrm{d} z\,.
\end{equation}
On the other hand, for the limiting functional we have
\begin{equation}\label{eq:local}
\mathcal{G}^{(p)}_\infty(u, A\cup B)=\mathcal{G}^{(p)}_\infty(u, A)+\mathcal{G}^{(p)}_\infty(u, B)\,,
\end{equation}
where, for $u\in BV(X;\{\pm1\})$, we set
\[
\mathcal{G}^{(p)}_\infty(u, A):=\int_{\partial^*\{u=1\}\cap A} \sigma^{(p)}(x,\nu_u(x)) \rho(x) \, \mathrm{d} \mathcal{H}^{d-1}(x)\,.
\]
Identity \eqref{eq:nonloc} states that the functionals $\mathcal{F}^{(p)}_{\varepsilon_n}$ are nonlocal, while \eqref{eq:local} is the locality property of the limiting functional $\mathcal{G}^{(p)}_\infty$. Thus, we expect the nonlocal deficit to disappear in the limit,
\emph{i.e.}, that if $u_{\varepsilon_n}\to u$ in $L^1(X)$, then
\begin{equation}\label{eq:nonlocdefvanish}
\widetilde{\Lambda}_{\varepsilon_n}(u_{\varepsilon_n}, A, B)\to0\,,
\end{equation}
as $n\to\infty$.
For technical reasons we also consider the nonlocal deficits without weighting by $\rho$ or $s_\varepsilon$:
\[
\Lambda_\varepsilon(u, A, B):=\frac{1}{\varepsilon}\int_A\int_B \eta_\varepsilon(x-z) |u(x)-u(z)|^p \, \mathrm{d} x \, \mathrm{d} z\,.
\]
By continuity of $\rho$, if $A$ and $B$ are sets in $X$ close to a point $\bar{x}$, then $\widetilde{\Lambda}_\varepsilon(u,A,B)\approx s_\varepsilon\rho^2(\bar{x})\Lambda_\varepsilon(u,A,B)$.
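Even without localising near a point, assumption (A1) (which, as in the estimates above, provides the two-sided bound $c_1\leq\rho\leq c_2$) yields a crude comparison between the two deficits:
\[
c_1^2\, s_\varepsilon\, \Lambda_\varepsilon(u,A,B)\ \leq\ \widetilde{\Lambda}_\varepsilon(u,A,B)\ \leq\ c_2^2\, s_\varepsilon\, \Lambda_\varepsilon(u,A,B)\,.
\]
In particular, since $s_\varepsilon\to1$, the weighted deficit $\widetilde{\Lambda}_{\varepsilon_n}$ vanishes along a sequence precisely when the unweighted deficit $\Lambda_{\varepsilon_n}$ does.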
In~\cite{alberti98} the authors prove that the limit of the nonlocal deficit is determined by the behavior of $u_{\varepsilon_n}$ close to the boundaries of $A$ and $B$ and, in turn, that \eqref{eq:nonlocdefvanish} holds in certain cases of interest. Here we state only the main technical result of \cite{alberti98}, in the version needed in this paper, referring the interested reader to the paper by Alberti and Bellettini for the details.
\begin{proposition}\label{prop:nonlocdef}
Assume (A1) and (C1-3) hold.
Let $v_n\to v$ in $L^1(X)$ with $|v_n|\leq1$. Then, for all $\bar{x}\in\mathbb{R}^d$ and all $\nu\in\mathbb{S}^{d-1}$ the following holds:
given $C\in\mathcal{C}(\bar{x},\nu)$, the strip $T_C$, and any cube $Q\subset\mathbb{R}^d$ whose intersection with $\nu^\perp$ is $C$, for a.e. $t>0$:
\begin{itemize}
\item[(i)] $\Lambda_{\varepsilon_n}(v_n,tT_C,\mathbb{R}^d\setminus tT_C)\to0$ as $n\to\infty$,
\item[(ii)] $\Lambda_{\varepsilon_n}(v_n,tQ,tT_C\setminus tQ)\to0$ as $n\to\infty$.
\end{itemize}
\end{proposition}
\begin{remark}\label{rem:nonlocldef}
The boundedness assumption on the sequence $\{v_n\}_{n=1}^\infty$ allows one to obtain the proof of the above result directly from~\cite[Proposition 2.5 and Theorem 2.8]{alberti98}.
In particular, in~\cite{alberti98} the authors prove the result for $p=1$ and $\rho\equiv 1$; using the $L^\infty$ bound on $v_n$, one can easily reduce the more general case considered here to the $L^1$ case.
With similar computations it is also possible to obtain the same result without the $L^\infty$ bound.
Finally, notice that when $A,B\subset\mathbb{R}^d$ are disjoint sets with $\mathrm{d}(A,B)>0$, using the fact that the function $\eta$ has support in the ball $B(0,R_\eta)$ (see (C3)), it is easy to prove that there exists $\bar{n}\in\mathbb{N}$ such that for all $n\geq\bar{n}$ it holds
\[
\Lambda_{\varepsilon_n}(v_n, A, B)=0\,.
\]
\end{remark}
For technical reasons we need to introduce a scaled version of the functional $G^{(p)}$.
\begin{mydef}
For $\varepsilon>0$, $p\geq1$, $u:\mathbb{R}^d\to\mathbb{R}$, $\lambda\in\mathbb{R}$, and $A\subset\mathbb{R}^d$, we define
\[
G^{(p)}_\varepsilon(u,\lambda,A):= \frac{\lambda}{\varepsilon} \int_{A} \int_{\mathbb{R}^d} \eta_\varepsilon(h)
|u(z+h) - u(z)|^p \, \mathrm{d} h \, \mathrm{d} z + \frac{1}{\varepsilon}\int_{A} V(u(z)) \, \mathrm{d} z\,.
\]
\end{mydef}
Let $r>0$ and $x\in X$. For a set $A\subset\mathbb{R}^d$, we define $x+rA:=\{ x+ry \,:\, y\in A \}$.
Moreover, for a function $u:\mathbb{R}^d\to\mathbb{R}$, we set
\[ R_{x,r}u(y):=u(x+ry)\,. \]
Using a change of variables, it is easy to see that the following scaling property holds true:
\begin{equation}\label{eq:scalprop1}
G^{(p)}_\varepsilon(u,\lambda, x+rA)=r^{d-1}G^{(p)}_{\varepsilon/r}(R_{x,r}u,\lambda, A)\,.
\end{equation}
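For the reader's convenience, we record the change of variables behind \eqref{eq:scalprop1}; here we use the standard scaling $\eta_\varepsilon(h)=\varepsilon^{-d}\eta(h/\varepsilon)$, so that $\eta_\varepsilon(rw)=r^{-d}\eta_{\varepsilon/r}(w)$. Substituting $z=x+ry$ and $h=rw$,
\begin{align*}
\frac{\lambda}{\varepsilon}\int_{x+rA}\int_{\mathbb{R}^d}\eta_\varepsilon(h)\,|u(z+h)-u(z)|^p \, \mathrm{d} h \, \mathrm{d} z
&= \frac{\lambda\, r^{d}}{\varepsilon}\int_{A}\int_{\mathbb{R}^d}\eta_{\varepsilon/r}(w)\,|R_{x,r}u(y+w)-R_{x,r}u(y)|^p \, \mathrm{d} w \, \mathrm{d} y\,,\\
\frac{1}{\varepsilon}\int_{x+rA} V(u(z)) \, \mathrm{d} z &= \frac{r^{d}}{\varepsilon}\int_{A} V(R_{x,r}u(y)) \, \mathrm{d} y\,,
\end{align*}
and since $\frac{r^d}{\varepsilon}=r^{d-1}\cdot\frac{1}{\varepsilon/r}$, summing the two identities gives \eqref{eq:scalprop1}.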
\subsection{The Liminf Inequality \label{subsec:ConvNLContinuum:Liminf}}
This section is devoted to proving the following: if $u_{\varepsilon_n}\rightarrow u$ in $L^1$, then
\begin{equation}\label{eq:liminftoprove}
\mathcal{G}^{(p)}_{\infty}(u)\leq\liminf_{n\rightarrow\infty}\mathcal{F}^{(p)}_{\varepsilon_n}(u_{\varepsilon_n})\,.
\end{equation}
We will follow the proof of \cite[Theorem 1.4]{alberti98}, with some modifications due to the presence of the density~$\rho$.
\begin{proof}[Proof of Theorem~\ref{thm:ConvNLContinuum:ConvNLContinuum} (Liminf).]
Let $\varepsilon_n\to 0^+$ and $u_{\varepsilon_n}\to u$ in $L^1(X,\mu)$.
Assume without loss of generality that
\begin{equation}\label{eq:bound}
\liminf_{n\rightarrow\infty}\mathcal{F}^{(p)}_{\varepsilon_n}(u_{\varepsilon_n})
=\lim_{n\rightarrow\infty}\mathcal{F}^{(p)}_{\varepsilon_n}(u_{\varepsilon_n})<\infty\,.
\end{equation}
\emph{Step 1.} By compactness (see Section \ref{subsec:ConvNLContinuum:Compact}) it holds $u=\chi_A-\chi_{X\setminus A}$ for some set $A\subset X$ of finite perimeter in $X$. In order to prove \eqref{eq:liminftoprove} we use the strategy introduced by Fonseca and M\"{u}ller in \cite{fonsecamuller93}. Write
\begin{equation}\label{eq:writing}
\mathcal{F}^{(p)}_{\varepsilon_n}(u_{\varepsilon_n})=\int_X g_{\varepsilon_n}(x)\mathrm{d} x
\end{equation}
and set $\mathrm{d} \lambda_{\varepsilon_n}:= g_{\varepsilon_n} \mathrm{d} \mathcal{L}^d\res X
$,
so that
\begin{equation}\label{eq:writing1}
|\lambda_{\varepsilon_n}|(X)=\mathcal{F}^{(p)}_{\varepsilon_n}(u_{\varepsilon_n})\,.
\end{equation}
Using \eqref{eq:bound}, \eqref{eq:writing}, \eqref{eq:writing1}
, up to a subsequence (not relabeled) it holds $\lambda_{\varepsilon_n}\stackrel{*}{\rightharpoonup}\lambda$
for some finite Radon measure $\lambda$ on $X$.
Then
\begin{equation}\label{eq:lsctv}
|\lambda|(X)\leq\liminf_{n\to\infty}|\lambda_{\varepsilon_n}|(X)\,.
\end{equation}
In view of \eqref{eq:writing1} and \eqref{eq:lsctv}, the liminf inequality \eqref{eq:liminftoprove} is implied by the following claim: for $\mathcal{H}^{d-1}$-a.e. $\bar{x}\in \partial^*\{u=1\}$ it holds
\[
\sigma^{(p)}(\bar{x},\nu(\bar{x}))\rho(\bar{x})\leq\frac{\mathrm{d}\lambda}{\mathrm{d}\theta}(\bar{x})\,,
\]
where $\theta:=\mathcal{H}^{d-1}\res \partial^*\{u=1\}$.
In order to prove the claim we reason as follows. For $\mathcal{H}^{d-1}$-a.e. $\bar{x}\in \partial^*\{u=1\}$ it is possible to find the density of $\lambda$ with respect to $\theta$ via (recall Remark \ref{rem:newdefnormal})
\begin{equation}\label{eq:density}
\frac{\mathrm{d}\lambda}{\mathrm{d}\theta}(\bar{x})=\lim_{r\to0}\,\frac{\lambda(\bar{x}+r Q)}{r^{d-1}}\,,
\end{equation}
where $Q$ is a unit cube centered at the origin and having $\nu(\bar{x})$, the measure theoretic exterior normal to $A$ at $\bar{x}$, as one of its axes. Let $\bar{x}\in \partial^*\{u=1\}$. Theorem \ref{thm:DeGiorgi} implies that
\begin{equation}\label{eq:convhalfplane}
R_{\bar{x},r}u\rightarrow v_{\bar{x}}
\end{equation}
in $L^1_{loc}(\mathbb{R}^d)$ as $r\rightarrow0$, where
\[
v_{\bar{x}}(x):=
\left\{
\begin{array}{ll}
-1 & x\cdot \nu(\bar{x})\geq0\,,\\
1 & x\cdot \nu(\bar{x})<0\,.
\end{array}
\right.
\]
Let $\bar{x}\in \partial^*\{u=1\}$ be a point for which \eqref{eq:density} and \eqref{eq:convhalfplane} hold.
Without loss of generality, we can assume that
\begin{equation}\label{eq:densfinite}
\frac{\mathrm{d}\lambda}{\mathrm{d}\theta}(\bar{x})<\infty\,.
\end{equation}
Since $u_{\varepsilon_n}\rightarrow u$ in $L^1$ and $\lambda_{\varepsilon_n}\stackrel{*}{\rightharpoonup}\lambda$, it is possible to find a (not relabeled) subsequence
$\{\varepsilon_n\}_{n=1}^\infty$ and a sequence $\{r_n\}_{n=1}^\infty$
with $r_n\rightarrow0^+$ and $\frac{\varepsilon_n}{r_n}\rightarrow0^+$, such that
\begin{equation}\label{eq:conv1}
\frac{\mathrm{d} \lambda}{\mathrm{d}\theta}(\bar{x})=\lim_{n\rightarrow\infty}\frac{\lambda_{\varepsilon_n}(\bar{x}+r_nQ)}{r_n^{d-1}}
\end{equation}
and
\[
R_{\bar{x},r_n}u_{\varepsilon_n}\rightarrow v_{\bar{x}}\,.
\]
Using the fact that $X$ is open, we can assume that $\bar{x}+r_n Q\subset X$ for all $n\in\mathbb{N}$. Thus
\begin{equation}\label{eq:est2}
\frac{\lambda_{\varepsilon_n}(\bar{x}+r_nQ)}{r_n^{d-1}}\geq\frac{\mathcal{F}^{(p)}_{\varepsilon_n}(u_{\varepsilon_n},\bar{x}+r_n Q)}{r_n^{d-1}}\,.
\end{equation}
\vspace{0.5\baselineskip}
\emph{Step 2.} We claim that
\begin{equation}\label{eq:deltantozero}
\delta_n:=\frac{|\mathcal{F}^{(p)}_{\varepsilon_n}(u_{\varepsilon_n},\bar{x}+r_nQ) -
\rho(\bar{x})\widetilde{\mathcal{F}}^{(p)}_{\varepsilon_n}(u_{\varepsilon_n},\rho(\bar{x}),\bar{x}+r_nQ) |}{r_n^{d-1}}\rightarrow0\,,
\end{equation}
as $n\to\infty$, where
\[
\widetilde{\mathcal{F}}^{(p)}_{\varepsilon}(u,\xi,A):= \frac{\xi s_\varepsilon}{\varepsilon}\int_A \int_A \eta_{\varepsilon}(x-z)|u(x)-u(z)|^p\,\mathrm{d} x \, \mathrm{d} z
+\frac{1}{\varepsilon}\int_A V(u(x)) \, \mathrm{d} x\,,
\]
for $\varepsilon>0$, $A\subset X$, $u:A\to\mathbb{R}$ and $\xi\in\mathbb{R}$.
Indeed, fix $t>0$. Thanks to Assumption (A1) the function $\rho$ is continuous in $X$. Then, it is possible to find $\bar{n}\in\mathbb{N}$ such that for all $n\geq\bar{n}$ and all $y\in \bar{x}+r_nQ$ it holds
\[
|\rho(y)-\rho(\bar{x})|<t\,.
\]
Thus,
\begin{align*}
\delta_n&\leq \frac{t}{r_n^{d-1}\varepsilon_n} \int_{\bar{x}+r_n Q} V(u_{\varepsilon_n}(x)) \, \mathrm{d} x
+\frac{ts_{\varepsilon_n}\rho(\bar{x})}{r_n^{d-1}\varepsilon_n} \int_{\bar{x}+r_n Q} \int_{\bar{x}+r_n Q} \eta_{\varepsilon_n}(x-z)|u_{\varepsilon_n}(x)-u_{\varepsilon_n}(z)|^p\,\mathrm{d} x \, \mathrm{d} z\\
&\hspace{0.6cm} + \frac{t s_{\varepsilon_n}}{r_n^{d-1} \varepsilon_n} \int_{\bar{x}+r_n Q} \int_{\bar{x}+r_n Q} \eta_{\varepsilon_n}(x-z)|u_{\varepsilon_n}(x)-u_{\varepsilon_n}(z)|^p \rho(z)\,\mathrm{d} x \, \mathrm{d} z\\
&\leq \frac{t}{r_n^{d-1}\varepsilon_n c_1} \int_{\bar{x}+r_n Q} V(u_{\varepsilon_n}(x))\rho(x) \, \mathrm{d} x \\
& \hspace{0.6cm} + \frac{t s_{\varepsilon_n}(c_1+c_2)}{r_n^{d-1}\varepsilon_n c_1^2} \int_{\bar{x}+r_n Q} \int_{\bar{x}+r_n Q} \eta_{\varepsilon_n}(x-z)|u_{\varepsilon_n}(x)-u_{\varepsilon_n}(z)|^p \rho(z)\rho(x)\,\mathrm{d} x \, \mathrm{d} z \\
&\leq \frac{t(c_1+c_2)}{c_1^2}\frac{\lambda_{\varepsilon_n}(\bar{x}+r_nQ)}{r_n^{d-1}} \,,
\end{align*}
where in the last step we used \eqref{eq:est2}.
By~\eqref{eq:densfinite} and~\eqref{eq:conv1}, $\limsup_{n\to \infty}\delta_n\leq Ct$ for some constant $C<\infty$.
Since $t>0$ is arbitrary, this proves the claim. \vspace{0.5\baselineskip}
\emph{Step 3.} Observe that for any $\lambda\geq 0$, $\varepsilon>0$, $r>0$ and $v\in L^1$ we have
\[ \min\left\{ 1,\frac{s_{\varepsilon}}{s_{\frac{\varepsilon}{r}}} \right\} \widetilde{\mathcal{F}}_{\frac{\varepsilon}{r}}^{(p)}(R_{\bar{x},r}v,\lambda,Q) \leq \frac{1}{r^{d-1}} \widetilde{\mathcal{F}}_\varepsilon^{(p)}(v,\lambda,\bar{x}+rQ) \leq \max\left\{ 1,\frac{s_{\varepsilon}}{s_{\frac{\varepsilon}{r}}} \right\} \widetilde{\mathcal{F}}_{\frac{\varepsilon}{r}}^{(p)}(R_{\bar{x},r}v,\lambda,Q). \]
Let $C=Q\cap \nu(\bar{x})^\perp \in \mathcal{C}(\bar{x},\nu(\bar{x}))$.
Define the function $w_n:\mathbb{R}^d\to\mathbb{R}$ as the periodic extension of the function that is $R_{\bar{x},r_n}\,u_{\varepsilon_n}$ in $Q$ and $v_{\bar{x}}$ in $T_C\setminus Q$.
Set $\varepsilon'_n:=\frac{\varepsilon_n}{r_n}$ and $s_n^\prime=\min\left\{ 1,\frac{s_{\varepsilon_n}}{s_{\varepsilon'_n}}\right\}$.
Using \eqref{eq:est2} and \eqref{eq:deltantozero} together with the scaling identity \eqref{eq:scalprop1} we get
\begin{align}\label{eq:finalest1}
\frac{\lambda_{\varepsilon_n}(\bar{x}+r_nQ)}{r_n^{d-1}}&\geq
\rho(\bar{x})\frac{\widetilde{\mathcal{F}}_{\varepsilon_n}^{(p)}(u_{\varepsilon_n},\rho(\bar{x}),\bar{x}+r_nQ)}{r_n^{d-1}}-\delta_n
\nonumber\\
&\geq s^\prime_n \rho(\bar{x})\widetilde{\mathcal{F}}^{(p)}_{\varepsilon'_n}(R_{\bar{x},r_n}\,u_{\varepsilon_n},\rho(\bar{x}),Q)-\delta_n
\nonumber\\
&=s^\prime_n \rho(\bar{x})\widetilde{\mathcal{F}}^{(p)}_{\varepsilon'_n}(w_n,\rho(\bar{x}),Q)-\delta_n
\nonumber\\
&\geq s^\prime_n \rho(\bar{x})G^{(p)}_{\varepsilon'_n}(w_n,\rho(\bar{x}),T_C)-\delta_n \nonumber\\
&\hspace{0.6cm}-s^\prime_n\rho(\bar{x})\left|\,\widetilde{\mathcal{F}}^{(p)}_{\varepsilon'_n}(w_n,\rho(\bar{x}),Q) -
G^{(p)}_{\varepsilon'_n}(w_n,\rho(\bar{x}),T_C) \,\right| \nonumber\\
&=s^\prime_n\rho(\bar{x})\left(\frac{\varepsilon_n}{r_n}\right)^{d-1}G^{(p)}\left(R_{0,\varepsilon'_n}w_n,\rho(\bar{x}),\frac{r_n}{\varepsilon_n}T_C\right)-\delta_n \nonumber\\
&\hspace{0.6cm}-s^\prime_n\rho(\bar{x})\left|\,\widetilde{\mathcal{F}}^{(p)}_{\varepsilon'_n}(w_n,\rho(\bar{x}),Q) -
G^{(p)}_{\varepsilon'_n}(w_n,\rho(\bar{x}),T_C) \,\right| \,.
\end{align}
We would like to say that
\[
\left|\,\widetilde{\mathcal{F}}^{(p)}_{\varepsilon'_n}(w_n,\rho(\bar{x}),Q) -G^{(p)}_{\varepsilon'_n}(w_n,\rho(\bar{x}),T_C) \,\right|\to0
\]
as $n\to\infty$. Unfortunately, this might not be true. In order to overcome this difficulty, take $t\in (0,1)$.
Notice that we can bound
\begin{align}\label{eq:finalest2}
& \left|\widetilde{\mathcal{F}}^{(p)}_{\varepsilon'_n}(w_n,\rho(\bar{x}),tQ) - G^{(p)}_{\varepsilon'_n}(w_n,\rho(\bar{x}),tT_C)\right|
\leq 2\rho(\bar{x})\Lambda_{\varepsilon'_n}(w_n,tQ,tT_C\setminus tQ) \nonumber\\
&\quad\quad\quad+ \rho(\bar{x}) \Lambda_{\varepsilon'_n}(w_n,tT_C,\mathbb{R}^d\setminus tT_C)
+ \rho(\bar{x}) \Lambda_{\varepsilon'_n}(w_n,tT_C\setminus tQ,tT_C\setminus tQ) \nonumber\\
&\quad\quad\quad+\frac{r_n\rho(\bar{x})}{\varepsilon_n} |1-s_{\varepsilon'_n}| \int_{tQ} \int_{tQ} \eta_{\frac{\varepsilon_n}{r_n}}(y-z) |w_n(y) - w_n(z)|^p \, \mathrm{d} y \, \mathrm{d} z \,.
\end{align}
Now,
\begin{align}\label{eq:finalest3}
\frac{r_n\rho(\bar{x})}{\varepsilon_n} \int_{tQ} \int_{tQ} \eta_{\varepsilon'_n}(y-z) |w_n(y) - w_n(z)|^p \, \mathrm{d} y \, \mathrm{d} z
& \leq G_{\varepsilon'_n}^{(p)}(w_n,\rho(\bar{x}),T_C) \\
& = \left(\frac{\varepsilon_n}{r_n}\right)^{d-1} G^{(p)}\left(R_{0,\varepsilon'_n}w_n,\rho(\bar{x}),\frac{r_n}{\varepsilon_n}T_C\right)\,. \nonumber
\end{align}
Moreover, using Proposition \ref{prop:nonlocdef} we get that for a.e. $t\in(0,1)$ it holds
\begin{equation}\label{eq:finalest4}
\Lambda_{\varepsilon'_n}(w_n,tQ,tT_C\setminus tQ)\to0\,,\quad\quad\quad \Lambda_{\varepsilon'_n}(w_n,tT_C,\mathbb{R}^d\setminus tT_C)\to0
\end{equation}
as $n\to\infty$. Finally, using Remark \ref{rem:nonlocldef} and the fact that $w_n$ is constant on $T_C\setminus Q$ it is easy to see that
\begin{equation}\label{eq:finalest5}
\lim_{t\to1}\lim_{n\to\infty}\Lambda_{\varepsilon'_n}(w_n,tT_C\setminus tQ,tT_C\setminus tQ)=0\,.
\end{equation}
Hence, from \eqref{eq:deltantozero}, \eqref{eq:finalest1}, \eqref{eq:finalest2}, \eqref{eq:finalest3}, \eqref{eq:finalest4} and \eqref{eq:finalest5} and recalling that $s'_n\to1$ we get
\[
\lim_{n\to\infty}\frac{\lambda_{\varepsilon_n}(\bar{x}+r_nQ)}{r_n^{d-1}} \geq \rho(\bar{x})\sigma^{(p)}(\bar{x},\nu(\bar{x}))
\]
as required.
\end{proof}
\subsection{The Limsup Inequality \label{subsec:ConvNLContinuum:Limsup}}
This section is devoted to proving the following: given $u\in BV(X,\{\pm1\})$, it is possible to find $\{u_{\varepsilon_n}\}_{n=1}^\infty\subset L^1(X)$ with $u_{\varepsilon_n}\to u$ in $L^1(X,\mu)$ such that
\begin{equation}\label{eq:limsuptoprove}
\limsup_{n\rightarrow\infty}\mathcal{F}^{(p)}_{\varepsilon_n}(u_{\varepsilon_n})\leq\mathcal{G}_\infty^{(p)}(u)\,.
\end{equation}
Without loss of generality, we can assume $\mathcal{G}_\infty^{(p)}(u)<\infty$.
The proof will follow the lines of the argument used to prove \cite[Theorem 5.2]{alberti98}.
\begin{proof}[Proof of Theorem~\ref{thm:ConvNLContinuum:ConvNLContinuum} (limsup).]
We first prove the result for polyhedral functions and then, via a diagonalisation argument, generalise to arbitrary functions in $BV(X;\{\pm1\})$.
We now fix the sequence $\varepsilon_n\to 0^+$.
\vspace{0.5\baselineskip}
\emph{Step 1. Polyhedral functions.} Assume $u\in BV(X;\{\pm1\})$ is a polyhedral function (see Definition \ref{def:polyfun}).
Then we claim that there exists a sequence $\{u_{\varepsilon_n}\}_{n=1}^\infty$ with $|u_{\varepsilon_n}|\leq1$, converging uniformly to $u$ on every compact set $K\subset X\setminus \partial^*\{u=1\}$, and in particular $u_{\varepsilon_n}\rightarrow u$ in $L^1(X,\mu)$, such that \eqref{eq:limsuptoprove} holds.
Let us denote by $E$ the polyhedral set $\{u=1\}$ and by $E_1,\dots,E_k$ its faces.
It is possible to cover $\partial E\cap X$ with a finite family of sets $\bar{A}_1,\dots,\bar{A}_k$, where each $A_i$ is an open set satisfying the following properties:
\begin{itemize}
\item[(i)] $\partial A_i$ can be written as the union of two Lipschitz graphs over the face $E_i$,
\item[(ii)] every point in the relative interior of $E_i$ belongs to $A_i$,
\item[(iii)] $\mathcal{H}^{d-1}\left(\bar{A}_i\cap \bigcup_{j\neq i}E_j\right)=0$,
\item[(iv)] $A_i\cap A_j=\emptyset$ if $i\neq j$.
\end{itemize}
Set
\[
A_0:=\{u=1\}\setminus\bigcup_{i=1}^k\bar{A}_i\,,\quad\quad\quad A_{k+1}:=X\setminus\bigcup_{i=0}^k\bar{A}_i\,.
\]
\begin{figure}
\centering
\includegraphics[scale=1]{Polyhedral.pdf}
\caption{The polyhedral set $E$ (shaded) and the sets $A_i$ (dotted lines).}
\end{figure}
We then define $u_{\varepsilon_n}$ in each $A_i$ separately.
Set $u_{\varepsilon_n}(x):=1$ for $x\in A_0$ and $u_{\varepsilon_n}(x):=-1$ for $x\in A_{k+1}$.
Now fix $i\in\{1,\dots,k\}$ and $n\in\mathbb{N}$.
In order to define $u_{\varepsilon_n}$ in $A_i$, we reason as follows.
Denote by $\nu$ the normal of the hyperplane containing $E_i$.
Without loss of generality, we can assume $\nu=e_d$ and $E_i\subset\{x_d=0\}$.
A point $x\in\mathbb{R}^d$ will be denoted as
\[
x=(x',x_d)\,,\quad\quad x'\in\mathbb{R}^{d-1}\,,\quad x_d\in\mathbb{R}\,.
\]
Fix $\xi>0$.
Using the continuity of $\sigma$ (see Lemma \ref{lem:ConNLContinuum:Liminf:contsigma}) and of $\rho$ on $X$ (see (A1)), it is possible to find a finite family $\{Q_j\}_{j=1}^{M_\xi}$, for some $M_\xi\in\mathbb{N}$, of disjoint $(d-1)$-dimensional cubes of side $r_\xi>0$ lying in the hyperplane containing $E_i$, such that $E_i\subseteq \bigcup_{j=1}^{M_\xi} Q_j$ and $E_i\cap Q_j \neq \emptyset$ for every $j$, and satisfying the following property: denoting by $\{x_j\}_{j=1}^{M_\xi}$ their centers (or, in case the center of a cube $Q_j$ is not contained in $E_i$, a point of $Q_j\cap E_i$) we have
\begin{equation}\label{eq:cond1}
\left|\, \int_{E_i} \sigma^{(p)}(x,\nu)\rho(x)\,\mathrm{d}\mathcal{H}^{d-1}(x) -
r^{d-1}_\xi \sum_{j=1}^{M_\xi} \sigma^{(p)}(x_j,\nu)\rho(x_j) \,\right|<\xi\,.
\end{equation}
It is possible to find, for every $j=1,\dots,M_\xi$, $C_j\in\mathcal{C}(x_j,\nu)$ and $w_j\in \mathcal{U}(C_j,\nu)$ such that
\begin{equation}\label{eq:condu0}
\frac{1}{\mathcal{H}^{d-1}(C_j)}G^{(p)}(w_j,\rho(x_j),T_{C_j})<\sigma^{(p)}(x_j,\nu)+\frac{\xi}{\rho(x_j)M_\xi r^{d-1}_\xi}\,.
\end{equation}
We can assume $|w_j|\leq 1$.
For every $j=1,\dots,M_\xi$, let $L_j\in\mathbb{N}$ be such that
\[ \frac{1}{\mathcal{H}^{d-1}(C_j)}\left|\, G^{(p)}(w_j,\rho(x_j),T_{C_j})-G^{(p)}(\widetilde{w}_j,\rho(x_j),T_{C_j}) \,\right|<\frac{\xi}{\rho(x_j)M_\xi r^{d-1}_\xi}\,, \]
where
\[ \widetilde{w}_j(x):=
\left\{
\begin{array}{ll}
w_j(x) & \text{if } |x_d|<L_j \,,\\
+1 & \text{if } x_d>L_j\,,\\
-1 & \text{if } x_d<-L_j\,.
\end{array}
\right. \]
By mollifying $\widetilde{w}_j$, i.e.\ setting $\widetilde{w}_j^\prime := J_\beta\ast \widetilde{w}_j$, we obtain that $\widetilde{w}_j^\prime$ is Lipschitz continuous with $\mathrm{Lip}(\widetilde{w}_j^\prime) \leq \frac{1}{\beta}$ and $\widetilde{w}_j^\prime(x)\in \{\pm 1\}$ for $|x_d|>L_j+\beta$.
We can choose $\beta=\beta_\xi$ such that
\begin{equation}\label{eq:condun}
\frac{1}{\mathcal{H}^{d-1}(C_j)}\left|\, G^{(p)}(w_j,\rho(x_j),T_{C_j})-G^{(p)}(\widetilde{w}_j^\prime,\rho(x_j),T_{C_j}) \,\right|<\frac{2\xi}{\rho(x_j)M_\xi r^{d-1}_\xi}\,.
\end{equation}
For every $j=1,\dots,M_\xi$ cover $Q_j$ with copies of $\varepsilon_n C_j$.
Denote them by $\{Q_{j,s}^{(n)}\}_{s=1}^{k^{(n)}_j}$ and by $\{y^{(n)}_{j,s}\}_{s=1}^{k^{(n)}_j}$ their centers.
Notice that it might be necessary to consider the intersection of some cubes with $Q_j$, and that
\begin{equation}\label{eq:ratek}
k^{(n)}_j=\Bigg\lceil \frac{1}{\varepsilon_n^{d-1}}r^{d-1}_\xi\left(\mathcal{H}^{d-1}(C_j)\right)^{-1} \Bigg\rceil\,,
\end{equation}
where $\lceil x \rceil$ denotes the smallest natural number greater than or equal to $x\in\mathbb{R}$.
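For later use we record the elementary consequence of \eqref{eq:ratek} that, the error coming from the ceiling being negligible as $n\to\infty$,
\[
\lim_{n\to\infty} \varepsilon_n^{d-1}\, k_j^{(n)}\, \mathcal{H}^{d-1}(C_j) = r_\xi^{d-1}\,;
\]
this is what converts the sum over $s$ into the factor $\frac{r_\xi^{d-1}}{\mathcal{H}^{d-1}(C_j)}$ in the computation closing this step.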
Define the function $v_j^{(n)}:\mathbb{R}^d\to[-1,+1]$ as the periodic extension of
\[ v^{(n)}_j(x):=\sum_{s=1}^{k_j^{(n)}}\widetilde{w}^\prime_j\left(\frac{x-(x_j+y^{(n)}_{j,s})}{\varepsilon_n}\right)\chi_{Q^{(n)}_{j,s}}(x')\,. \]
We are now in a position to define the function $u_{\varepsilon_n}$ in $A_i$: for $x\in A_i$ define
\[
u_{\varepsilon_n}(x):=\sum_{j=1}^{M_\xi} \chi_{Q_j}(x')v^{(n)}_j(x)\,.
\]
Note that $u_{\varepsilon_n}$ has Lipschitz constant $\mathrm{Lip}(u_{\varepsilon_n}) \leq \frac{1}{\beta_\xi \varepsilon_n}$ and $u_{\varepsilon_n}(x) = u(x)$ for any $x$ with $\mathrm{dist}(x,\partial^*\{u=1\}) \geq \max_j (L_j+\beta_\xi) \varepsilon_n$.
Hence, $u_{\varepsilon_n}\rightarrow u$ as $n\to\infty$ uniformly on compact sets $K\subset X\setminus \partial^*\{u=1\}$.
We now prove the validity of inequality \eqref{eq:limsuptoprove}.
We claim that:
\begin{itemize}
\item[(i)] $\widetilde{\Lambda}_{\varepsilon_n}(u_{\varepsilon_n},A_i,A_j)\to0$ as $n\to\infty$ for all $i\neq j$, where $\widetilde{\Lambda}$ is defined by~\eqref{eq:ConvNLContinuum:Prelim:Lambdatilde};
\item[(ii)] $\limsup_{n\to\infty}\mathcal{F}^{(p)}_{\varepsilon_n}(u_{\varepsilon_n},A_i)\leq \mathcal{G}^{(p)}_\infty(u,A_i)$ for all $i=0,\dots,k+1$.
\end{itemize}
If the above claims hold true, then we can conclude as follows: we have
\begin{align*}
\limsup_{n\to\infty}\mathcal{F}^{(p)}_{\varepsilon_n}(u_{\varepsilon_n})&\leq \sum_{i=0}^{k+1} \limsup_{n\to\infty}\mathcal{F}^{(p)}_{\varepsilon_n}(u_{\varepsilon_n},A_i)
+ 2\sum_{i<j=0}^{k+1} \limsup_{n\to\infty}\widetilde{\Lambda}_{\varepsilon_n}(u_{\varepsilon_n},A_i,A_j)\\
&\leq \sum_{i=0}^{k+1} \mathcal{G}^{(p)}_\infty(u,A_i)\\
&= \mathcal{G}^{(p)}_\infty(u)\,.
\end{align*}
We start by proving claim (ii). It is easy to see that it holds true for $i=0$ and $i=k+1$.
Fix $i\in\{1,\dots,k\}$.
Noticing that
\[
A_i\subset \bigcup_{j=1}^{M_\xi}\bigcup_{s=1}^{k_j^{(n)}}T_{Q^{(n)}_{j,s}}\,,
\]
we get
\begin{align}\label{eq:firstpartconmput}
\mathcal{F}^{(p)}_{\varepsilon_n}(u_{\varepsilon_n},A_i)&\leq \mathcal{F}^{(p)}_{\varepsilon_n}\left(u_{\varepsilon_n},
\bigcup_{j=1}^{M_\xi}\bigcup_{s=1}^{k_j^{(n)}} T_{Q^{(n)}_{j,s}} \right) \nonumber\\
&\leq \sum_{j=1}^{M_\xi}\sum_{s=1}^{k_j^{(n)}}\Biggl[\,
\frac{s_{\varepsilon_n}}{\varepsilon_n} \int_{T_{Q^{(n)}_{j,s}}}\int_{\mathbb{R}^d} \eta_{\varepsilon_n}(x-z)
|u_{\varepsilon_n}(x)-u_{\varepsilon_n}(z)|^p \rho(x)\rho(z) \, \mathrm{d} x \, \mathrm{d} z \nonumber\\
&\hspace{2.6cm} + \frac{1}{\varepsilon_n} \int_{T_{Q^{(n)}_{j,s}}} V(u_{\varepsilon_n}(x)) \rho(x) \, \mathrm{d} x \,\Biggr]\,.
\end{align}
Here, for every $j$ and $s$ we are using the whole cube $Q^{(n)}_{j,s}$, not only its intersection with $Q_j$.
Note that
\begin{align*}
\mathcal{A}^{(n)}_{j,s} := & \left\{ (x,z) \in \mathbb{R}^d\times T_{Q^{(n)}_{j,s}} \, : \, \eta_{\varepsilon_n}(x-z)|u_{\varepsilon_n}(x)-u_{\varepsilon_n}(z)|^p \neq 0 \right\} \\
& \hspace{2cm} \subseteq B(x_j,\sqrt{2d} r_\xi) \times \left( T_{Q^{(n)}_{j,s}} \cap \left\{ |z_d|\leq \varepsilon_n(L_j+R_\eta) \right\} \right)
\end{align*}
for $\varepsilon_n$ sufficiently small (compared to $r_\xi$).
Hence there exists $\delta_\xi>0$ such that for all $(x,z)\in \mathcal{A}^{(n)}_{j,s}$ we have $|\rho(x)-\rho(x_j)|\leq \delta_\xi$ and $|\rho(z)-\rho(x_j)|\leq \delta_\xi$ where $\delta_\xi\to 0$ as $\xi\to 0^+$.
This implies
\begin{align*}
s_{\varepsilon_n} \rho(x) \rho(z) & \leq |s_{\varepsilon_n} - 1| \rho(x) \rho(z) + \rho(x) \rho(z) \\
& \leq |s_{\varepsilon_n}-1|c_2^2 + \rho(x_j) \rho(z) + \delta_\xi c_2 \\
& \leq |s_{\varepsilon_n}-1|c_2^2 + \rho^2(x_j) + 2\delta_\xi c_2\,.
\end{align*}
Hence,
\begin{align*}
& \frac{s_{\varepsilon_n}}{\varepsilon_n} \int_{T_{Q_{j,s}^{(n)}}} \int_{\mathbb{R}^d} \eta_{\varepsilon_n}(x-z) |u_{\varepsilon_n}(x) - u_{\varepsilon_n}(z)|^p \rho(x) \rho(z) \, \mathrm{d} x \, \mathrm{d} z \\
& \hspace{2cm} \leq \left( |s_{\varepsilon_n}-1|c_2^2 + 2\delta_\xi c_2 + \rho^2(x_j) \right) \frac{1}{\varepsilon_n} \int_{T_{Q_{j,s}^{(n)}}} \int_{\mathbb{R}^d} \eta_{\varepsilon_n}(x-z) |u_{\varepsilon_n}(x) - u_{\varepsilon_n}(z)|^p \, \mathrm{d} x \, \mathrm{d} z\,.
\end{align*}
Similarly,
\[ \frac{1}{\varepsilon_n} \int_{T_{Q^{(n)}_{j,s}}} V(u_{\varepsilon_n}(x)) \rho(x) \, \mathrm{d} x \leq \left( \rho(x_j) + \delta_\xi \right) \frac{1}{\varepsilon_n} \int_{T_{Q^{(n)}_{j,s}}} V(u_{\varepsilon_n}(x)) \, \mathrm{d} x\,. \]
Let $\gamma_{\xi,n} = 1+\frac{2\delta_\xi}{c_2} + |s_{\varepsilon_n}-1|$ and notice that
\[ \lim_{\xi\to 0} \lim_{n\to \infty} \gamma_{\xi,n} = 1 \]
(there is a subtle dependence here: $\varepsilon_n$ must be small compared to $r_\xi$, which is why the limits have to be taken in this order).
Then
\begin{align*}
& \frac{s_{\varepsilon_n}}{\varepsilon_n} \int_{T_{Q_{j,s}^{(n)}}} \int_{\mathbb{R}^d} \eta_{\varepsilon_n}(x-z) |u_{\varepsilon_n}(x) - u_{\varepsilon_n}(z)|^p \rho(x) \rho(z) \, \mathrm{d} x \, \mathrm{d} z + \frac{1}{\varepsilon_n} \int_{T_{Q^{(n)}_{j,s}}} V(u_{\varepsilon_n}(x)) \rho(x) \, \mathrm{d} x \\
& \,\, \leq \gamma_{\xi,n} \rho(x_j) \left( \frac{\rho(x_j)}{\varepsilon_n} \int_{T_{Q_{j,s}^{(n)}}} \int_{\mathbb{R}^d} \eta_{\varepsilon_n}(x-z) |u_{\varepsilon_n}(x) - u_{\varepsilon_n}(z)|^p \, \mathrm{d} x \, \mathrm{d} z + \frac{1}{\varepsilon_n} \int_{T_{Q^{(n)}_{j,s}}} V(u_{\varepsilon_n}(x)) \, \mathrm{d} x \right) \\
& \,\, \leq \gamma_{\xi,n} \rho(x_j) G_{\varepsilon_n}^{(p)}(u_{\varepsilon_n},\rho(x_j),T_{Q_{j,s}^{(n)}})\,.
\end{align*}
Hence, using \eqref{eq:cond1}, \eqref{eq:condu0}, \eqref{eq:condun}, \eqref{eq:ratek} and~\eqref{eq:firstpartconmput}, we get
\begin{align*}
\mathcal{F}_{\varepsilon_n}^{(p)}(u_{\varepsilon_n},A_i) & \leq \gamma_{\xi,n} \sum_{j=1}^{M_\xi} \sum_{s=1}^{k_j^{(n)}} \rho(x_j) G_{\varepsilon_n}^{(p)}(u_{\varepsilon_n},\rho(x_j),T_{Q^{(n)}_{j,s}}) \\
& = \gamma_{\xi,n} \sum_{j=1}^{M_\xi} \sum_{s=1}^{k^{(n)}_j} \rho(x_j) \varepsilon_n^{d-1} G_1^{(p)}(\widetilde{w}_j^\prime,\rho(x_j),T_{C_j}) \\
& = \gamma_{\xi,n} \sum_{j=1}^{M_\xi} \frac{r_\xi^{d-1} \rho(x_j)}{\mathcal{H}^{d-1}(C_j)} G_1^{(p)}(\widetilde{w}_j^\prime,\rho(x_j),T_{C_j}) \\
& \leq \gamma_{\xi,n} \left( r_\xi^{d-1} \sum_{j=1}^{M_\xi} \rho(x_j) \sigma^{(p)}(x_j,\nu) + 3\xi \right) \\
& \leq \gamma_{\xi,n} \left( \int_{E_i} \sigma^{(p)}(x,\nu) \rho(x) \, \mathrm{d} \mathcal{H}^{d-1}(x) + 4\xi \right)\,.
\end{align*}
Taking the limsup as $n\to \infty$ and then letting $\xi\to 0$ implies (ii).
Note that $u_{\varepsilon_n}$ is not yet the recovery sequence, as each $u_{\varepsilon_n}$ depends on $\xi$ (through $L_j$ and $\beta_\xi$).
Hence, making the $\xi$ dependence explicit, we have shown $\limsup_{n\to \infty} \mathcal{F}_{\varepsilon_n}^{(p)}(u_{\varepsilon_n}^{(\xi)},A_i) \leq \limsup_{n\to \infty} \gamma_{\xi,n} \mathcal{G}_\infty^{(p)}(u,A_i)$.
By a diagonalisation argument we can find a sequence $\xi_n\to 0$ such that $\limsup_{n\to \infty} \mathcal{F}_{\varepsilon_n}^{(p)}(u_{\varepsilon_n}^{(\xi_n)},A_i) \leq \mathcal{G}_\infty^{(p)}(u,A_i)$.
Moreover, if the sequence $\xi_n\to 0$ sufficiently slowly, then we can infer $\max_j L_j+\beta_{\xi_n} \leq \frac{1}{\zeta_{\varepsilon_n}}$ and $\beta_{\xi_n}\geq \zeta_{\varepsilon_n}$; hence $\mathrm{Lip}(u_{\varepsilon_n}) \leq \frac{1}{\zeta_{\varepsilon_n} \varepsilon_n}$ and $u_{\varepsilon_n}(x)=u(x)$ whenever $\mathrm{dist}(x,\partial^*\{u=1\}) \geq \frac{\varepsilon_n}{\zeta_{\varepsilon_n}}$, for any sequence $\zeta_{\varepsilon_n}\to 0$.
\vspace{0.5\baselineskip}
We are thus left to prove claim (i).
Analogously to Remark~\ref{rem:nonlocldef}, $\widetilde{\Lambda}_{\varepsilon_n}(u_{\varepsilon_n},A_i,A_j)\to0$ as $n\to\infty$ for all $i,j$ for which $\mathrm{d}(A_i,A_j)>0$.
Let us now consider indices $i\neq j$ for which
$\mathrm{d}(A_i,A_j)=0$. Write
\begin{align*}
\widetilde{\Lambda}_{\varepsilon_n}(u_{\varepsilon_n},A_i,A_j)
&=\frac{1}{\varepsilon_n}\int_{B(0,R_\eta)}\int_{A_{\varepsilon_n h}}
\eta(h)|u_{\varepsilon_n}(z+\varepsilon_n h)-u_{\varepsilon_n}(z)|^p \rho(z+\varepsilon_n h)\rho(z)\,\mathrm{d} z\,\mathrm{d} h\,,
\end{align*}
where $A_{\varepsilon_n h}:=\{ z\in A_i \,:\, z+\varepsilon_n h\in A_j \}$.
Since $A_i$ and $A_j$ are Lipschitz graphs over sets lying on two different hyperplanes, it holds that $\mathrm{Vol}(A_{\varepsilon_n h}) = O(\varepsilon_n^d)$.
Thus
\[
\widetilde{\Lambda}_{\varepsilon_n}(u_{\varepsilon_n},A_i,A_j)\to0\,,
\]
as $n\to\infty$. \vspace{\baselineskip}
\emph{Step 2. The general case.}
Let $u\in BV(X;\{\pm1\})$.
Using Theorem~\ref{thm:denspoly} it is possible to find a sequence $\{v_n\}_{n=1}^\infty$ of polyhedral functions such that $v_n\to u$ in $L^1$ and $|D v_n|(X)\to|Du|(X)$ (which together imply that $Dv_n\stackrel{{w}^*}{\rightharpoonup}Du$).
Using Step 1 and a diagonalisation argument we get that there exists a sequence $\{u_n\}_{n=1}^\infty$ with $u_n\to u$ in $L^1(X)$ such that
\[
\mathcal{F}^{(p)}_{\varepsilon_n}(u_n)\leq \mathcal{G}^{(p)}_\infty(v_n)+\frac{1}{n}\,.
\]
Then, Theorem~\ref{thm:rese} together with Lemma~\ref{lem:ConNLContinuum:Liminf:contsigma} gives us that
\[
\limsup_{n\to\infty}\mathcal{G}^{(p)}_\infty(v_n)=\mathcal{G}^{(p)}_\infty(u)\,.
\]
This concludes the proof.
\end{proof}
\section{Convergence of the Graphical Model \label{sec:ConvGraph}}
In this section we prove Theorem~\ref{thm:MainRes:Compact&Gamma}.
In particular, in Section~\ref{subsec:ConvGraph:Compact} we prove the compactness part of Theorem~\ref{thm:MainRes:Compact&Gamma}, and in Sections~\ref{subsec:ConvGraph:Liminf}-\ref{subsec:ConvGraph:Limsup} we prove the $\Gamma$-convergence result.
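Before turning to the proofs, it may help to see the discrete energy in concrete form. The following Python sketch is our illustration only: the indicator kernel $\eta$, the double-well potential $V(t)=(1-t^2)^2$, the one-dimensional uniform grid, and the specific normalisation are placeholder choices patterned on the displayed formula for $\mathcal{G}_n^{(p)}$ appearing in the compactness proof below, not the general assumptions of the paper.

```python
def graph_gl_energy(points, u, eps, p=2, d=1):
    """Illustrative discrete Ginzburg-Landau energy on a 1-d point cloud.

    Placeholder choices (hypothetical, for illustration only):
    eta(t) = 1 for |t| <= 1 (indicator kernel) and
    V(t) = (1 - t**2)**2 (double-well potential with wells at +-1).
    """
    n = len(points)
    interaction = 0.0
    for i in range(n):
        for j in range(n):
            # eta((x_i - x_j)/eps) for the indicator kernel
            if abs(points[i] - points[j]) <= eps:
                interaction += abs(u[i] - u[j]) ** p
    interaction /= n ** 2 * eps ** (d + 1)
    potential = sum((1.0 - t ** 2) ** 2 for t in u) / (n * eps)
    return interaction + potential


# n uniform grid points in [0, 1]; compare a constant state with a
# sharp interface at x = 1/2.
n, eps = 200, 0.05
xs = [(i + 0.5) / n for i in range(n)]
u_const = [1.0] * n
u_step = [1.0 if x > 0.5 else -1.0 for x in xs]
print(graph_gl_energy(xs, u_const, eps))      # 0.0 (no interface, u in the wells)
print(graph_gl_energy(xs, u_step, eps) > 0)   # True (the interface pays a cost)
```

Both states have zero potential energy since they take values in $\{\pm1\}$; only pairs of points straddling the interface contribute to the interaction term, which is the discrete counterpart of the perimeter-type limit functional $\mathcal{G}_\infty^{(p)}$.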
\subsection{Compactness \label{subsec:ConvGraph:Compact}}
\begin{proof}[Proof of Theorem~\ref{thm:MainRes:Compact&Gamma} (Compactness).]
\emph{Step 1.} We first show that there exist $c_n,\alpha_n>0$ with $\alpha_n,c_n \to 1$ as $n\to\infty$, such that
\begin{equation}\label{eq:esteta1}
\eta\left(\frac{T_n(x)-T_n(z)}{\varepsilon_n}\right) \geq c_n \eta\left(\frac{\alpha_n(x-z)}{\varepsilon_n}\right)\,.
\end{equation}
Let $\delta_n := \frac{2\|T_n-\mathrm{Id}\|_{L^\infty}}{\varepsilon_n}$.
By Assumption~(C4) we can find $\alpha_n,c_n$ such that, for all $a,b\in \mathbb{R}^d$ with $|a-b|\leq \delta_n$, we have
\begin{equation}\label{eq:ineq1}
\eta(a) \geq c_n \eta(\alpha_n b)\,.
\end{equation}
Since by Assumption (A2) we have $\delta_n\to 0$, the constants $\alpha_n,c_n$ can be chosen such that $\alpha_n\to 1$ and $c_n\to 1$.
Now if we let $a:=\frac{T_n(x)-T_n(z)}{\varepsilon_n}$ and $b:=\frac{x-z}{\varepsilon_n}$ we have
\[ |a-b| = \frac{|T_n(x) - T_n(z) + z - x|}{\varepsilon_n} \leq \frac{2\|T_n-\mathrm{Id}\|_{L^\infty}}{\varepsilon_n} = \delta_n \]
and therefore, by \eqref{eq:ineq1}, we get $\eta\left(\frac{T_n(x)-T_n(z)}{\varepsilon_n}\right) \geq c_n \eta\left(\frac{\alpha_n (x-z)}{\varepsilon_n}\right)$ as required. \vspace{0.5\baselineskip}
\emph{Step 2.} Let $v_n := u_n\circ T_n$. Using Lemma \ref{lem:writeint} and \eqref{eq:esteta1} we have that
\begin{align}
\mathcal{G}_n^{(p)}(u_n) & = \frac{1}{\varepsilon_n^{d+1}} \int_X\int_X \eta\left(\frac{T_n(x)-T_n(z)}{\varepsilon_n}\right) |v_n(x) - v_n(z)|^p \rho(x) \rho(z) \, \mathrm{d} x \, \mathrm{d} z \nonumber \\
&\hspace{0.6cm}+ \frac{1}{\varepsilon_n} \int_X V(v_n(x)) \rho(x) \, \mathrm{d} x \nonumber \\
& \geq \frac{c_n}{\varepsilon_n^{d+1}} \int_X\int_X \eta\left( \frac{\alpha_n(x-z)}{\varepsilon_n}\right) |v_n(x) - v_n(z)|^p \rho(x) \rho(z) \, \mathrm{d} x \, \mathrm{d} z \nonumber \\
&\hspace{0.6cm}+ \frac{1}{\varepsilon_n} \int_X V(v_n(x)) \rho(x) \, \mathrm{d} x \nonumber \\
& = \frac{c_n}{\alpha_n^{d+1}(\varepsilon_n^\prime)^{d+1}} \int_X\int_X\eta\left( \frac{x-z}{\varepsilon_n^\prime}\right) |v_n(x) - v_n(z)|^p \rho(x) \rho(z) \, \mathrm{d} x \, \mathrm{d} z \nonumber \\
&\hspace{0.6cm}+ \frac{1}{\varepsilon_n} \int_X V(v_n(x)) \rho(x) \, \mathrm{d} x \nonumber \\
& = \frac{1}{\alpha_n} \mathcal{F}_{\varepsilon_n^\prime}^{(p)}(v_n) \label{eq:ineq2}
\end{align}
where $\varepsilon_n^\prime := \frac{\varepsilon_n}{\alpha_n}$ and $\mathcal{F}_\varepsilon^{(p)}$ is defined by~\eqref{eq:ConvNLContinuum:Feps} with $s_n := \frac{c_n}{\alpha_n^d}$ (in~\eqref{eq:ConvNLContinuum:Feps} the parameter $s$ depends on $\varepsilon$ rather than on $n$; since the sequence $\varepsilon_n$ is fixed, we could write $n$ in terms of $\varepsilon_n^\prime$, but this would make the notation cumbersome). \vspace{0.5\baselineskip}
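For clarity, the passage from the kernel $\eta\left(\frac{\alpha_n(x-z)}{\varepsilon_n}\right)$ to $\eta\left(\frac{x-z}{\varepsilon_n^\prime}\right)$ in \eqref{eq:ineq2} rests on the rescaling identity (with $\varepsilon_n^\prime=\varepsilon_n/\alpha_n$)
\[
\frac{1}{\varepsilon_n^{d+1}}\,\eta\left(\frac{\alpha_n(x-z)}{\varepsilon_n}\right)
=\frac{1}{\alpha_n^{d+1}(\varepsilon_n^\prime)^{d+1}}\,\eta\left(\frac{x-z}{\varepsilon_n^\prime}\right),
\]
while the potential term is simply rewritten using $\frac{1}{\varepsilon_n}=\frac{1}{\alpha_n\varepsilon_n^\prime}$.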
\emph{Step 3.} Since $\lim_{n\to\infty}\alpha_n=1$, we can infer that $\mathcal{F}_{\varepsilon_n^\prime}^{(p)}(v_n)$ is bounded and hence by Theorem~\ref{thm:ConvNLContinuum:ConvNLContinuum} the sequence $\{v_n\}_{n=1}^\infty$ is relatively compact in $L^1$. Therefore the sequence $\{u_n\}_{n=1}^\infty$ is relatively compact in $TL^1$, with any limit $u$ satisfying $\mathcal{G}_\infty^{(p)}(u)<\infty$.
\end{proof}
\subsection{The Liminf Inequality \label{subsec:ConvGraph:Liminf}}
\begin{proof}[Proof of Theorem~\ref{thm:MainRes:Compact&Gamma} (Liminf).]
For any $u\in L^1(\mu)$ and any $u_n\in L^1(\mu_n)$ with $u_n\to u$ in $TL^1$ we claim that
\[ \liminf_{n\to \infty} \mathcal{G}_n^{(p)}(u_n) \geq \mathcal{G}_\infty^{(p)}(u). \]
Indeed, taking the liminf on both sides of \eqref{eq:ineq2}
and using Theorem~\ref{thm:ConvNLContinuum:ConvNLContinuum} we have
\[ \liminf_{n\to \infty} \mathcal{G}_n^{(p)}(u_n) \geq \liminf_{n\to \infty} \frac{1}{\alpha_n} \mathcal{F}_{\varepsilon_n^\prime}^{(p)}(v_n) \geq \mathcal{G}_\infty^{(p)}(u) \]
since $\lim_{n\to \infty} \frac{1}{\alpha_n} = 1$.
\end{proof}
\subsection{The Limsup Inequality \label{subsec:ConvGraph:Limsup} }
\begin{proof}[Proof of Theorem~\ref{thm:MainRes:Compact&Gamma} (Limsup).]
The aim of this section is to prove the following: given $u\in L^1(X,\mu)$ it is possible to find
a sequence $\{u_n\}_{n=1}^\infty\subset L^1(X_n)$ with $u_n\to u$ in $TL^1(X)$ such that
\[
\limsup_{n\to \infty} \mathcal{G}_n^{(p)}(u_n) \leq \mathcal{G}_\infty^{(p)}(u)\,.
\]
Without loss of generality we can assume $\mathcal{G}_\infty^{(p)}(u)<+\infty$. In particular, $u\in BV(X;\{\pm1\})$.
We divide the proof in two cases: we first assume that $u$ is a polyhedral function and then
we extend the argument to any function $u$ with $\mathcal{G}_\infty^{(p)}(u)<+\infty$ via a diagonalisation argument. \vspace{0.5\baselineskip}
\emph{Case 1.} Assume that $u$ is a polyhedral function (see Definition \ref{def:polyfun}).
Let $w_n\in L^1(X,\mu)$ be a recovery sequence for the $\Gamma$-convergence of $\mathcal{F}_{\varepsilon_n^\prime}^{(p)}$ to $\mathcal{G}_\infty^{(p)}$, where $\varepsilon_n^\prime = \frac{\varepsilon_n}{\hat{\alpha}_n}$ and $\hat{\alpha}_n$ is a sequence we fix shortly (see Step 2);
\emph{i.e.}, we assume $w_n\to u$ in $L^1(X,\mu)$ and $\lim_{n\to\infty} \mathcal{F}_{\varepsilon_n^\prime}^{(p)}(w_n) = \mathcal{G}_\infty^{(p)}(u)$.
Let $u_n(x_i) = n\int_{T_n^{-1}(x_i)} w_n(x) \, \mathrm{d} x$ where $T_n$ is any sequence of transport maps satisfying the conclusions of Theorem~\ref{thm:optrate}.
\vspace{0.5\baselineskip}
\emph{Step 1.} We show that $u_n\to u$ in $TL^1$.
Setting $v_n = u_n\circ T_n$, we are required to show $v_n\to u$ in $L^1$.
Let $\zeta_n\to 0$ with $\zeta_n\gg \sqrt{\frac{\|T_n-\mathrm{Id}\|_{L^\infty}}{\varepsilon_n}}$.
By Theorem~\ref{thm:ConvNLContinuum:ConvNLContinuum} we can assume that $w_n$ are Lipschitz continuous with $\mathrm{Lip}(w_n)\leq \frac{1}{\zeta_n\varepsilon_n}$ and $w_n(x) = u(x)$ for all $x$ satisfying $\mathrm{dist}(x,\partial^*\{u=1\})>\frac{\varepsilon_n}{\zeta_n}$.
Hence, for $n$ sufficiently large, $v_n(x) = w_n(x) = u(x)$ for all $x$ such that $\mathrm{dist}(x,\partial^*\{u=1\})>\frac{3\varepsilon_n}{\zeta_n}$.
Therefore,
\begin{align*}
\| v_n - w_n \|_{L^1(X)} & = \sum_{i=1}^n \int_{T_n^{-1}(x_i)} |v_n(x) - w_n(x) | \, \mathrm{d} x \\
& \leq n \sum_{i=1}^n \int_{T_n^{-1}(x_i)} \int_{T_n^{-1}(x_i)} |w_n(y) - w_n(x) | \, \mathrm{d} y \, \mathrm{d} x \\
& \leq \frac{2 \|T_n-\mathrm{Id}\|_{L^\infty}}{\zeta_n \varepsilon_n n} \#\left\{ i\,:\, \mathrm{dist}(x_i,\partial^*\{u=1\})\leq \frac{3\varepsilon_n}{\zeta_n} \right\}\,.
\end{align*}
Now,
\begin{align*}
\#\left\{ i\,:\, \mathrm{dist}(x_i,\partial^*\{u=1\})\leq \frac{3\varepsilon_n}{\zeta_n} \right\} & = n\mu\left( \left\{ x\,:\, \mathrm{dist}(T_n(x),\partial^*\{u=1\})\leq \frac{3\varepsilon_n}{\zeta_n} \right\}\right) \\
& \leq nc_2 \mathrm{Vol}\left(\left\{ x\,:\, \mathrm{dist}(x,\partial^*\{u=1\})\leq \frac{4\varepsilon_n}{\zeta_n} \right\}\right) \\
& = O\left(\frac{n\varepsilon_n}{\zeta_n}\right).
\end{align*}
Hence, since $\zeta_n^2\gg \frac{\|T_n-\mathrm{Id}\|_{L^\infty}}{\varepsilon_n}$,
\begin{equation} \label{eq:ConvGraph:Limsup:vnwn}
\|v_n-w_n\|_{L^1(X)} = O\left(\frac{\|T_n-\mathrm{Id}\|_{L^\infty}}{\zeta_n^2}\right) = o(\varepsilon_n)\,.
\end{equation}
Moreover,
\[ \|v_n-u\|_{L^1(X)} \leq \|v_n-w_n\|_{L^1(X)} + \|w_n-u\|_{L^1(X)} \to 0\,, \]
so $u_n\to u$ in $TL^1$.\vspace{0.5\baselineskip}
\emph{Step 2.} We show that there exists $\hat{\alpha}_n,\hat{c}_n\to 1$ such that
\begin{equation} \label{eq:ConvGraph:Limsup:EtaUB}
\eta\left( \frac{T_n(x) - T_n(z)}{\varepsilon_n} \right) \leq \hat{c}_n \eta\left(\frac{\hat{\alpha}_n (x-z)}{\varepsilon_n}\right).
\end{equation}
To show the validity of \eqref{eq:ConvGraph:Limsup:EtaUB} we use the following subclaim: for all
$\hat{\delta}>0$ sufficiently small there exists $\hat{\alpha}_{\hat{\delta}},\hat{c}_{\hat{\delta}}>0$ such that $\hat{\alpha}_{\hat{\delta}}\to 1, \hat{c}_{\hat{\delta}}\to 1$, as $\hat{\delta}\to 0$, and, for any $\hat{a},\hat{b}\in\mathbb{R}^d$, it holds
\begin{equation}\label{eq:claim1}
|\hat{a}-\hat{b}|<\hat{\delta}\quad\Rightarrow\quad\hat{c}_{\hat{\delta}} \eta(\hat{\alpha}_{\hat{\delta}} \hat{b}) \geq \eta(\hat{a})\,.
\end{equation}
Then~\eqref{eq:ConvGraph:Limsup:EtaUB} can be obtained as follows: for any $n\in\mathbb{N}$ take
\[
\hat{a} := \frac{T_n(x) - T_n(z)}{\varepsilon_n}\,,\quad\quad
\hat{b} := \frac{x-z}{\varepsilon_n}\,,\quad\quad
\hat{\delta} := \frac{2\|T_n-\mathrm{Id}\|_{L^\infty}}{\varepsilon_n}\,,
\]
and let $\hat{\alpha}_n,\hat{c}_n$ be the numbers given by the subclaim for which \eqref{eq:claim1} holds.
Note that $\hat{\alpha}_n$ is chosen independently of $w_n$; since it is $w_n$ that depends on $\hat{\alpha}_n$, and not vice versa, there is no circularity.
Then, since $|\hat{a}-\hat{b}|\leq \hat{\delta}$, we infer~\eqref{eq:ConvGraph:Limsup:EtaUB}.
To prove the subclaim, we let $\alpha_\delta,c_\delta$ be as in Assumption~(C4) and let $\hat{\delta} >0$.
Without loss of generality we assume that $\inf_{\gamma\in (0,1]}\alpha_\gamma\in (0,\infty)$ and that $\frac{\hat{\delta}}{\inf_{\gamma\in (0,1]} \alpha_\gamma} \leq 1$.
We choose $\delta := \min\left\{ 1,\frac{\hat{\delta}}{\inf_{\gamma\in (0,1]} \alpha_\gamma}\right\}$; trivially, $\delta\to 0$ as $\hat{\delta}\to 0$.
Let $\hat{a},\hat{b}\in \mathbb{R}^d$ with $|\hat{a}-\hat{b}|<\hat{\delta}$, and define $a := \frac{\hat{a}}{\alpha_\delta}$ and $b := \frac{\hat{b}}{\alpha_\delta}$.
Since $|a-b|\leq \frac{\hat{\delta}}{\alpha_\delta} \leq \frac{\hat{\delta}}{\inf_{\gamma\in (0,1]}\alpha_\gamma} = \delta$, Assumption~(C4) yields
\[
\eta(b) \geq c_\delta \eta(\alpha_\delta a) \quad \Rightarrow \quad \frac{1}{c_\delta} \eta\left( \frac{\hat{b}}{\alpha_\delta} \right) \geq \eta(\hat{a})\,.
\]
Let $\hat{c}_{\hat{\delta}} := \frac{1}{c_\delta}$ and $\hat{\alpha}_{\hat{\delta}} := \frac{1}{\alpha_\delta}$; then $\hat{\delta}\to 0$ implies $\delta\to 0$, which in turn implies $\alpha_\delta,c_\delta\to 1$ and therefore $\hat{\alpha}_{\hat{\delta}},\hat{c}_{\hat{\delta}}\to 1$.
This proves the subclaim. \vspace{0.5\baselineskip}
\emph{Step 3.}
Using Lemma \ref{lem:writeint} and \eqref{eq:ConvGraph:Limsup:EtaUB} we get
\begin{align*}
\mathcal{G}^{(p)}_n(u_n) & = \frac{1}{\varepsilon_n} \int_X\int_X \eta_{\varepsilon_n}(T_n(x) - T_n(z)) |v_n(x) - v_n(z)|^p \rho(x) \rho(z) \, \mathrm{d} x \, \mathrm{d} z + \frac{1}{\varepsilon_n} \int_X V(v_n(x)) \rho(x) \, \mathrm{d} x \\
& \leq \frac{\hat{c}_n}{\hat{\alpha}_n^{d+1}\varepsilon^\prime_n} \int_X\int_X \eta_{\varepsilon^\prime_n}( x-z) |v_n(x) - v_n(z) |^p \rho(x) \rho(z) \, \mathrm{d} x \, \mathrm{d} z + \frac{1}{\varepsilon_n} \int_X V(v_n(x)) \rho(x) \, \mathrm{d} x \\
& = \frac{\hat{c}_n}{\hat{\alpha}_n^{d+1}\varepsilon^\prime_n} \int_X\int_X \eta_{\varepsilon^\prime_n}( x-z) |w_n(x) - w_n(z) |^p \rho(x) \rho(z) \, \mathrm{d} x \, \mathrm{d} z + \frac{1}{\varepsilon_n} \int_X V(w_n(x)) \rho(x) \, \mathrm{d} x \\
& \hspace{1cm} + a_n + b_n
\end{align*}
where we recall $\varepsilon_n^\prime := \frac{\varepsilon_n}{\hat{\alpha}_n}$ and
\begin{align*}
a_n & := \frac{\hat{c}_n}{\hat{\alpha}_n^{d+1} \varepsilon_n^\prime} \int_X\int_X \eta_{\varepsilon_n^\prime}(x-z)\left( |v_n(x) - v_n(z)|^p - |w_n(x) - w_n(z)|^p \right) \rho(x) \rho(z) \, \mathrm{d} x \, \mathrm{d} z\,, \\
b_n & := \frac{1}{\varepsilon_n} \int_X \left( V(v_n(x)) - V(w_n(x)) \right) \rho(x) \, \mathrm{d} x\,.
\end{align*}
We recall the following inequalities: for every $\delta>0$ there exists $C_\delta>0$ such that for any $a,b\in \mathbb{R}^d$ we have
\begin{equation}\label{eq:inequality1}
|a|^p \leq (1+\delta)|b|^p + C_\delta |a-b|^p\,.
\end{equation}
Inequality \eqref{eq:inequality1} follows by noticing that, for every given $\delta>0$, the function $x\mapsto\frac{|x|^p-1-\delta}{|x-1|^p}$ is bounded from above on $[0,\infty)$, and then setting $x=\frac{|a|}{|b|}$ and using $\big||a|-|b|\big|\leq|a-b|$.
Moreover, for all $p\geq1$ and all $a,b\in\mathbb{R}$, it holds
\begin{equation}\label{eq:inequality2}
|a+b|^p\leq 2^{p-1}\left( |a|^p+|b|^p \right)\,.
\end{equation}
Fix $\delta>0$. Using \eqref{eq:inequality1} and \eqref{eq:inequality2} we infer
\[
a_n \leq \frac{\hat{c}_n \delta}{\hat{\alpha}_n^{d+1} \varepsilon_n^\prime} \int_X\int_X \eta_{\varepsilon_n^\prime}(x-z) |w_n(x) - w_n(z)|^p \rho(x) \rho(z) \, \mathrm{d} x \, \mathrm{d} z + d_n
\]
where
\[
d_n := \frac{2^{p-1}\hat{c}_n C_\delta}{\hat{\alpha}_n^{d+1} \varepsilon_n^\prime} \int_X\int_X \eta_{\varepsilon_n^\prime}(x-z) \Big( |v_n(x) - w_n(x)|^p + |v_n(z) - w_n(z)|^p \Big) \rho(x) \rho(z) \, \mathrm{d} x \, \mathrm{d} z\,.
\]
We show $d_n \to 0$. We have that
\begin{align*}
d_n & \leq \frac{2^{p} \hat{c}_nC_\delta c_2^2}{\hat{\alpha}_n^{d+1} \varepsilon_n^\prime} \int_X |v_n(x) - w_n(x)|^p \, \mathrm{d} x \int_{\mathbb{R}^d} \eta(x) \, \mathrm{d} x \\
& \leq \frac{2^{2p-1} \hat{c}_nC_\delta c_2^2}{\hat{\alpha}_n^{d+1} \varepsilon_n^\prime} \| v_n-w_n\|_{L^1(X)} \int_{\mathbb{R}^d} \eta(x) \, \mathrm{d} x \\
& = O\left(\frac{C_\delta \|v_n-w_n\|_{L^1(X)}}{\varepsilon_n}\right)\,,
\end{align*}
where we used the fact that $\hat{c}_n\to1$, $\hat{\alpha}_n\to 1$ as $n\to\infty$.
From Equation~\eqref{eq:ConvGraph:Limsup:vnwn} we get that
\begin{equation}\label{eq:bn}
d_n\to0\,,
\end{equation}
as $n\to\infty$.
Let $L_V$ be the Lipschitz constant of $V$ (as given by Assumption~(B4)).
Since
\begin{align*}
b_n & \leq \frac{c_2}{\varepsilon_n} \int_X \left| V(v_n(x)) - V(w_n(x)) \right| \, \mathrm{d} x \leq \frac{c_2 L_V}{\varepsilon_n} \|v_n-w_n\|_{L^1(X)}
\end{align*}
then $b_n\to 0$ by~\eqref{eq:ConvGraph:Limsup:vnwn}.
Combining the above estimates we have,
\begin{equation}\label{eq:kn}
\begin{split}
\mathcal{G}^{(p)}_n(u_n) & \leq \frac{\hat{c}_n(1+\delta)}{\hat{\alpha}_n^{d+1} \varepsilon_n^\prime} \int_X \int_X \eta_{\varepsilon_n^\prime}(x-z) |w_n(x) - w_n(z)|^p \rho(x) \rho(z) \, \mathrm{d} x \, \mathrm{d} z \\
& \hspace{1cm} + \frac{1}{\varepsilon_n} \int_X V(w_n(x)) \rho(x) \, \mathrm{d} x + b_n + d_n\,.
\end{split}
\end{equation}
\vspace{0.5\baselineskip}
\emph{Step 4.} We now conclude as follows.
Using \eqref{eq:kn} we get
\[ \mathcal{G}_n^{(p)}(u_n) \leq \frac{(1+\delta)}{\hat{\alpha}_n} \mathcal{F}_{\varepsilon_n^\prime}^{(p)}(w_n) + b_n + d_n \]
for $\mathcal{F}_{\varepsilon_n^\prime}^{(p)}$ defined as in~\eqref{eq:ConvNLContinuum:Feps} with $s_n := \frac{\hat{c}_n}{\hat{\alpha}_n^d}$.
Taking the limsup on both sides, using \eqref{eq:bn}, $b_n\to 0$, and Theorem~\ref{thm:ConvNLContinuum:ConvNLContinuum}, we have
\[ \limsup_{n\to \infty} \mathcal{G}_n^{(p)}(u_n) \leq \limsup_{n\to \infty} (1+\delta) \mathcal{F}_{\varepsilon_n^\prime}^{(p)}(w_n) \leq (1+\delta) \mathcal{G}_\infty^{(p)}(u). \]
Taking $\delta\to 0$ completes the proof for case 1.
\vspace{0.5\baselineskip}
\emph{Case 2.} To extend to arbitrary functions $u\in BV(X;\{\pm1\})$
we apply the following diagonalisation argument.
Using Theorem \ref{thm:denspoly} together with Lemma \ref{lem:ConNLContinuum:Liminf:contsigma} and Theorem \ref{thm:rese} we can find a sequence of polyhedral functions $\{u^{(m)}\}_{m=1}^\infty$ such that
\[
\|u-u^{(m)}\|_{L^1}\leq \frac{1}{m}\,,\quad\quad
\mathcal{G}_\infty^{(p)}(u^{(m)}) \leq \mathcal{G}_\infty^{(p)}(u) + \frac{1}{m} \,.
\]
Using the result of Case 1, for each $m\in \mathbb{N}$ we have
\[
\limsup_{n\to \infty} \mathcal{G}_n^{(p)}(u_n^{(m)}) \leq \mathcal{G}_\infty^{(p)}(u^{(m)})\,,
\]
where $u_n^{(m)}$ is the recovery sequence for $u^{(m)}$ in the $\Gamma$-convergence $\mathcal{G}_\infty^{(p)} = \Glim_{n\to \infty} \mathcal{F}_{\varepsilon_n^\prime}^{(p)}$.
For each $m\in\mathbb{N}$ let $n_m\in\mathbb{N}$ be such that
\[
\mathcal{G}_n^{(p)}(u_n^{(m)}) \leq \mathcal{G}_\infty^{(p)}(u^{(m)}) + \frac{1}{m} \quad\text{ and }\quad
\| u_n^{(m)}\circ T_n - u^{(m)}\|_{L^1} \leq \frac{1}{m}
\]
for all $n\geq n_m$.
At the cost of increasing $n_m$ we assume that $n_{m+1}> n_m$ for all $m$.
Let $u_n:=u_n^{(m)}$ for $n\in [n_m,n_{m+1})$.
Then,
\[ \limsup_{n\to \infty} \mathcal{G}_n^{(p)}(u_n) = \limsup_{m\to \infty} \sup_{n\in [n_m,n_{m+1})} \mathcal{G}_n^{(p)}(u_n^{(m)}) \leq \limsup_{m\to \infty} \left( \mathcal{G}_\infty^{(p)}(u^{(m)}) + \frac{1}{m} \right) \leq \mathcal{G}_\infty^{(p)}(u)\,. \]
Similarly,
\[ \lim_{n\to \infty} \|u_n\circ T_n - u\|_{L^1} \leq \limsup_{m\to \infty} \sup_{n\in [n_m,n_{m+1})} \left( \| u_n^{(m)}\circ T_n - u^{(m)} \|_{L^1} + \frac{1}{m} \right) \leq \lim_{m\to \infty} \frac{2}{m} = 0 \]
therefore $u_n$ converges to $u$ in $TL^1$.
Hence $u_n$ is a recovery sequence for $u$.
This completes the proof.
\end{proof}
\section{Convergence of Minimizers with Data Fidelity \label{sec:Const}}
In this section we prove Corollary \ref{cor:MainRes:Constrained}.
\begin{proof}[Proof of Corollary~\ref{cor:MainRes:Constrained}.]
In view of Theorem \ref{thm:convmin} and Proposition \ref{prop:contconv2} it is enough to prove that, for any $u\in L^1(X,\mu)$ and any $\{u_n\}_{n=1}^\infty$ with $u_n\in L^1(X_n)$ such that $u_n\to u$ in $TL^1(X)$ it holds that
\[
\lim_{n\to \infty} \mathcal{K}_n(u_n) = \mathcal{K}_\infty(u)\,.
\]
We can restrict ourselves to sequences $u_n\to u$ satisfying
\begin{equation}\label{eq:hypGn}
\sup_{n\in \mathbb{N}} \mathcal{G}_n^{(p)}(u_n)< +\infty\,.
\end{equation}
Let
\[
v_n(x) :=
\left\{
\begin{array}{ll}
u_n(x) & \text{if } x\in X_n(u_n)\,,\\
1 &\text{otherwise}\,.
\end{array}
\right.
\]
where
\[
X_n(u_n) := \left\{ x\in X_n \, : \, |u_n(x)|^q\leq R_V \right\}\,,
\]
and $R_V$ is as in Assumption~(B3). \vspace{0.5\baselineskip}
\emph{Step 1.} We claim that
\begin{equation}\label{eq:convDnunvn}
\lim_{n\to \infty} |\mathcal{K}_n(u_n) - \mathcal{K}_n(v_n) | = 0\,.
\end{equation}
Indeed we have that
\begin{align*}
\lim_{n\to \infty} \left| \mathcal{K}_n(u_n) - \mathcal{K}_n(v_n) \right| & = \lim_{n\to \infty} \left| \frac{1}{n} \sum_{x_i\not\in X_n(u_n)} \left( k_n(x_i,u_n(x_i)) - k_n(x_i,1) \right) \right| \\
& \leq \beta \lim_{n\to \infty} \frac{1}{n} \sum_{x_i\not\in X_n(u_n)} \left( 3+|u_n(x_i)|^q \right),
\end{align*}
where in the last inequality we used (D2).
Now,
\[ \frac{1}{n} \sum_{x_i\not\in X_n(u_n)} 1 \leq \frac{1}{nR_V} \sum_{x_i\not\in X_n(u_n)} |u_n(x_i)|^q \leq \frac{1}{nR_V \tau} \sum_{x_i\not\in X_n(u_n)} V(u_n(x_i)) \leq \frac{\varepsilon_n \mathcal{G}_n^{(p)}(u_n)}{\tau R_V}, \]
and, by the middle inequality above, also $\frac{1}{n} \sum_{x_i\not\in X_n(u_n)} |u_n(x_i)|^q \leq \frac{\varepsilon_n \mathcal{G}_n^{(p)}(u_n)}{\tau}$.
Using \eqref{eq:hypGn} and $\varepsilon_n\to 0$ we conclude~\eqref{eq:convDnunvn}. \vspace{0.5\baselineskip}
\emph{Step 2.} We claim that $v_n\to u$ in $TL^1(X)$.
By a direct computation we get
\begin{align*}
\| u_n - v_n\|_{L^1(\mu_n)} & = \frac{1}{n} \sum_{x_i \not\in X_n(u_n)} |u_n(x_i) - 1| \\
& \leq \frac{1}{n} \sum_{x_i\not\in X_n(u_n)} \left( 1 + |u_n(x_i)| \right) \\
& \leq \frac{1}{n} \sum_{x_i\not\in X_n(u_n)} \left( 1 + |u_n(x_i)|^q \right) \to 0,
\end{align*}
where the convergence to zero follows from the estimates in Step 1 together with~\eqref{eq:hypGn}. Since $u_n\to u$ in $TL^1$, this implies $v_n\to u$ in $TL^1$. \vspace{0.5\baselineskip}
\emph{Step 3.} We show that $\mathcal{K}_n(v_n) \to \mathcal{K}_\infty(u)$.
Consider any subsequence of $v_n$ which we do not relabel.
From Step 2 we have that $v_n\circ T_n\to u$ in $L^1$. Thus there exists a further subsequence (not relabeled) such that $v_n(T_n(x)) \to u(x)$ for almost every $x\in X$.
Using Assumption (D3) we get
\begin{equation}\label{eq:domconv1}
\lim_{n\to \infty} k_n\left( T_n(x),v_n(T_n(x)) \right) = k_\infty(x,u(x))\,.
\end{equation}
Moreover, it holds that
\begin{equation}\label{eq:domconv2}
k_n(T_n(x),v_n(T_n(x))) \leq \beta\left( 1+|v_n(T_n(x))|^q \right) \leq \beta\left( 1 + R_V \right)\,,
\end{equation}
where the first inequality follows from (D2) and the second from the definition of $v_n$.
Using \eqref{eq:domconv1} and \eqref{eq:domconv2}, and applying Lebesgue's dominated convergence theorem, we get
\begin{align}\label{eq:convDn}
\lim_{n\to \infty} \mathcal{K}_n(v_n) & = \lim_{n\to \infty} \int_X k_n(T_n(x),v_n(T_n(x))) \, \rho(x) \, \mathrm{d} x \nonumber \\
& = \int_X k_\infty(x,u(x)) \rho(x) \, \mathrm{d} x \nonumber\\
& = \mathcal{K}_\infty(u)\,.
\end{align}
Since any subsequence of $v_n$ has a further subsequence along which $\mathcal{K}_n(v_n) \to \mathcal{K}_\infty(u)$, we conclude that the convergence holds along the full sequence. \vspace{0.5\baselineskip}
\emph{Step 4.} Using \eqref{eq:convDnunvn} and \eqref{eq:convDn} we conclude that
\[
\lim_{n\to\infty}\mathcal{K}_n(u_n) = \mathcal{K}_\infty(u)\,,
\]
as required.
\end{proof}
\section*{Acknowledgements}
The authors thank the Center for Nonlinear Analysis at Carnegie Mellon University and the Cantab Capital Institute for the Mathematics of Information at the University of Cambridge for their support during the preparation of the manuscript.
In addition the authors would like to thank Dejan Slep\v{c}ev and Nicol\'{a}s Garc\'{i}a Trillos for discussions and references that improved the manuscript.
The MATLAB implementation of the bean used in the example in Section~\ref{subsec:Intro:pEx} was written by Dejan Slep\v{c}ev.
The example in Section~\ref{subsec:Intro:AnisoEx} was aided by Luca Calatroni.
The research of R.C. was funded by the National Science Foundation under Grant No. DMS-1411646.
Part of the research of M.T. was funded by the National Science Foundation under Grant No. CCT-1421502.
Both authors would like to thank the referees for valuable feedback that improved the paper.
\bibliographystyle{siam}
\section{Introduction}\label{sec:intro}
Galaxies, due to complexities inherent to their formation and evolution, are biased tracers of the underlying matter distribution. As a consequence, the galaxy power spectrum measured from redshift surveys, $P_{gg}(k,z)$, is related to the underlying matter power spectrum $P(k,z)$ (which cannot be directly measured, but represents the true source of cosmological information) through a factor $b$ known as \textit{bias}~\cite{Desjacques:2016bnm}:
\begin{eqnarray}
P_{gg}(k,z) \approx b_{\rm auto}^2P(k,z) \, .
\label{galaxy}
\end{eqnarray}
The subscript ``auto'' refers to the fact that $P_{gg}(k,z)$ is an auto-correlation quantity, since it corresponds to the Fourier transform of the 2-point auto-correlation function of the galaxy overdensity field, $\xi(r)$.
Galaxy bias also enters in cross-correlation quantities, such as the matter-galaxy cross-power spectrum $P_{mg}(k,z)$. This quantity is given by the Fourier transform of the 2-point cross-correlation function between the matter (dark matter plus baryons) and galaxy overdensity fields, $\xi^{mg}(r)$. However, the bias appearing in $P_{mg}(k,z)$ differs from that of Eq.~(\ref{galaxy}):
\begin{eqnarray}
P_{mg} (k,z) \approx b_{\rm cross}P(k,z) \, .
\label{pmg}
\end{eqnarray}
The difference between $b_{\rm auto}$ and $b_{\rm cross}$, explained in more detail in Sec.~\ref{sec:theo}, is expected based on results of N-body simulations~\cite{Villaescusa-Navarro:2013pva,Okumura:2012xh,Hand:2017ilm,Modi:2017wds,Vlah:2013lia}, as well as on theoretical arguments.
Heretofore, the bias has often been modeled as a scale-independent quantity in cross-correlation analysis~\cite{Giusarma:2013pmn,Cuesta:2015iho, Giusarma:2016phn,Vagnozzi:2017ovm}. However, this approach is truly reliable only on large, linear scales ($k<k_{\max}=0.15\, h{\rm Mpc}^{-1}$ today and $k<k_{\max}=0.2\, h{\rm Mpc}^{-1}$ at a redshift of about $0.5$)~\cite{Desjacques:2016bnm}, therefore preventing one from fully retrieving information on cosmological parameters. The simplest and best-motivated forms of the scale-dependent biases read~\cite{Desjacques:2016bnm,Sheth:1999mn,Seljak:2000jg,Schulz:2005kj,Smith:2006ne,Manera:2009zu,Desjacques:2010gz,Musso:2012ch,
Paranjape:2012ks,Schmidt:2012ys,Verde:2014nwa,Senatore:2014eva,Castorina:2016pqq,Modi:2016dah}:
\begin{eqnarray}
b_{\rm cross}(k) = a+c k^2 \, ,
\label{biascross} \\
b_{\rm auto}(k) = a+d k^2 \, ,
\label{biasauto}
\end{eqnarray}
where $a$, $c$ and $d$ are three free parameters describing the scale-dependent bias. It is worth remarking that, while various phenomenological expressions for $b(k)$ abound in the literature (although see~\cite{Smith:2006ne} for earlier criticisms related to phenomenological parametrizations), the expression we use is extremely well motivated on both theory and simulations grounds. As a token of the robustness of this model, it is remarkable that at least three well-known but distinct theoretical approaches to the study of galaxy bias (peaks theory~\cite{Desjacques:2010gz}, the excursion set approach~\cite{Musso:2012ch}, and the effective field theory of large-scale structure~\cite{Senatore:2014eva}) predict \textit{exactly the same} functional form for $b(k)$ in the mildly non-linear regime that we are interested in, with results from simulations agreeing with these findings (see Appendix for further discussions). In fact, in Fourier space, the lowest-order correction to a constant bias one can expect on general grounds, based on the sole assumption of isotropy, is a $k^2$ correction (a correction linear in $k$ would instead not respect isotropy).
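To make the parametrization concrete, here is a minimal numerical sketch of Eqs.~(\ref{biascross}) and (\ref{biasauto}); the power-law spectrum standing in for $P(k,z)$ and the parameter values $a$, $c$, $d$ are invented for illustration and do not come from the analysis.

```python
import numpy as np

# Hypothetical sketch of the scale-dependent biases b_cross(k) = a + c k^2
# and b_auto(k) = a + d k^2 applied to a toy matter power spectrum.
# The power law P(k) and the values of (a, c, d) are placeholders only.
def b_cross(k, a=2.0, c=5.0):
    return a + c * k**2

def b_auto(k, a=2.0, d=-10.0):
    return a + d * k**2

k = np.linspace(0.03, 0.2, 50)    # h/Mpc: linear to mildly non-linear scales
P = 1.0e4 * (k / 0.1)**-1.5       # toy matter power spectrum

P_mg = b_cross(k) * P             # cross spectrum:  P_mg ~ b_cross P
P_gg = b_auto(k)**2 * P           # auto spectrum:   P_gg ~ b_auto^2 P
```

The signs chosen here ($c>0$, $d<0$) mimic the qualitative expectation discussed below, with $b_{\rm cross}$ rising and $b_{\rm auto}$ dropping toward small scales.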
Our goal is to provide a proof-of-principle for a correct and simple treatment enabling the retrieval of information on $b_{\rm auto}$ and $b_{\rm cross}$, in order to more robustly extract information from galaxy redshift surveys. To this end we require, in addition to galaxy power spectrum data [sensitive to $b_{\rm auto}$, Eq.~(\ref{galaxy})], measurements sensitive to the matter-galaxy cross-spectrum $P_{mg}(k)$ [containing information on $b_{\rm cross}$, Eq.~(\ref{pmg})]. Since the matter distribution is responsible for the gravitational lensing of CMB photons, we expect the cross-correlation between CMB lensing and galaxy overdensity maps, $C_\ell^{\rm \kappa g}\,$, to carry information on $P_{mg}(k)$ and hence on $b_{\rm cross}(k)$. Here $\kappa$ denotes the CMB lensing convergence.~\footnote{A CMB photon coming from a direction $\boldsymbol{\hat{n}}$ on the sky is deflected due to lensing by an angle $d(\boldsymbol{\hat{n}})=\boldsymbol{\nabla}\phi(\boldsymbol{\hat{n}})$, where $\phi(\boldsymbol{\hat{n}})$ is the lensing potential. The lensing convergence is then given by $\kappa(\boldsymbol{\hat{n}}) \equiv -\frac{1}{2}\boldsymbol{\nabla}^2\phi(\boldsymbol{\hat{n}})$.} The information one can extract on $b_{\rm cross}(k)$ (and therefore on $a$) is put to best use when combining $C_\ell^{\rm \kappa g}\,$ measurements with galaxy power spectrum data $P_{gg}(k)$. The reason is that an improved determination of $b_{\rm auto}(k)$ (through the improved constraints on $a$) significantly bolsters the constraining power of the galaxy power spectrum. This improved determination is especially important for the estimation of cosmological parameters affecting the growth of structure, such as massive neutrinos.
Previous works have suggested combining lensing and clustering (power spectrum) measurements~\cite{Giannantonio:2015ahz, Joudaki:2017zdt} or adopting a scale-dependent galaxy bias parametrization~\cite{Pen:2004rm, More:2014uva, Amendola:2015pha, Beutler:2016zat, Beutler:2016arn}. This paper, however, is the \textit{first} in which:
\begin{itemize}
\item $C_\ell^{\rm \kappa g}\,$ and $P_{gg}(k)$ measurements are combined, interpreted and analyzed in light of the simple but well-motivated~\cite{Desjacques:2016bnm,Sheth:1999mn,Seljak:2000jg,Schulz:2005kj,Smith:2006ne,Manera:2009zu,Musso:2012ch,
Paranjape:2012ks} scale-dependent biases models given by Eqs.~(\ref{biascross},\ref{biasauto}).
\item The achieved increase in constraining power of $P_{gg}(k)$ is used to extract tighter and more robust limits on the sum of the neutrino masses $M_{\nu}$. We show that our limits on $M_{\nu}$ are substantially strengthened when compared to previous results obtained through a scale-independent treatment of the bias~\cite{Cuesta:2015iho, Giusarma:2016phn, Vagnozzi:2017ovm}.
\end{itemize}
This work should be seen as a proof-of-principle of our methodology, rather than a fully fledged analysis. There are several aspects of our method and analysis that deserve a more in-depth investigation, as we shall discuss later in our paper: we plan to return to these issues in future work.
\section{Theory}\label{sec:theo}
To obtain information on cosmological parameters from $C_\ell^{\rm \kappa g}\,$, one must be able to model the theoretical prediction for $C_\ell^{\rm \kappa g}\,$ given a set of cosmological parameters. Within a $\Lambda$CDM framework and adopting the Limber approximation~\cite{Limber:1954zz,LoVerde:2008re}, $C_\ell^{\rm \kappa g}\,$ reads:
\begin{eqnarray}
\hskip -1 cm C_\ell^{\kappa g} = \int_{z_0}^{z_1} dz\frac{H(z)}{\chi^{2}(z)}W^{\kappa}(z)f_g(z)P_{mg}\left(k=\frac{\ell}{\chi(z)},z\right) \, .
\label{clkg}
\end{eqnarray}
The theoretical matter-galaxy cross-power spectrum $P_{mg}$ appearing on the right-hand-side of Eq.~(\ref{clkg}) is modeled following Eq.~(\ref{pmg}), with the theoretical $b_{\rm cross}(k)$ given by Eq.~(\ref{biascross}) and determined by the choice of parameters $a$ and $c$ in the MCMC analysis, while the theoretical non-linear matter power spectrum $P(k,z)$ is computed using the Boltzmann solver \texttt{CAMB}~\cite{Lewis:1999bs} and \texttt{Halofit}~\cite{Bird:2011rb,Takahashi:2012em} starting from the given cosmological parameters. Furthermore, $\chi(z)$ is the comoving distance to redshift $z$, $f_g(z)$ is the redshift distribution of the galaxy sample, $H(z)$ is the Hubble parameter, and $W^{\kappa}(z)$ is the CMB lensing convergence kernel~\cite{Peiris:2000kb,Hirata:2008cb,Bleem:2012gm,Sherwin:2012mr,Vallinotto:2013eva,Pearson:2013iha,
Bianchini:2014dla,Giannantonio:2015ahz,Pullen:2015vtb,Bianchini:2015yly,Singh:2016xey,Prat:2016xor,Bianchini:2018mwv}:
\begin{eqnarray}
W^{\kappa}(z)= \frac{3\Omega_{m,0}}{2c}\frac{H_0^2}{H(z)}(1+z)\chi(z)\frac{\chi(z_{_{\rm CMB}})-\chi(z)}{\chi(z_{_{\rm CMB}})} \, ,
\end{eqnarray}
where $H_0$ and $\Omega_{m,0}$ denote the Hubble parameter and matter density at present time. Comparing the theoretical prediction for $C_\ell^{\rm \kappa g}\,$ [right-hand side of Eq.~(\ref{clkg})] to its measured value through the likelihood function allows us to derive constraints on $b_{\rm cross}(k)$. In Eq.~(\ref{clkg}) we have chosen for simplicity not to include the contribution of redshift-space distortions, as well as the contribution of lensing to the observed galaxy clustering. The former is negligible on the scales of interest, whereas~\cite{Dizgah:2016bgm} showed that neglecting the latter at $z=0.57$ induces a relative error of less than $5\%$ in $C_{\ell}^{\kappa g}$, which is well below the current error budget in the measured $C_{\ell}^{\kappa g}$. \\
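As a rough illustration of how the Limber integral of Eq.~(\ref{clkg}) is evaluated, the sketch below replaces every ingredient with a toy stand-in: flat-$\Lambda$CDM distances, a Gaussian galaxy redshift distribution, and a power-law cross spectrum. Factors of the speed of light are restored by hand, and all numbers are illustrative; the actual analysis uses \texttt{CAMB} and \texttt{Halofit}.

```python
import numpy as np

# Toy Limber integral C_ell^{kg} = int dz H/chi^2 W^kappa f_g P_mg(ell/chi, z).
# Every ingredient below is a placeholder, not the pipeline actually used.
C_KM_S = 299792.458                      # speed of light [km/s]
H0, OM = 67.0, 0.31                      # toy cosmological parameters

def trap(y, x):                          # simple trapezoidal rule
    y, x = np.asarray(y, float), np.asarray(x, float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

def hubble(z):                           # H(z) for flat LCDM [km/s/Mpc]
    return H0 * np.sqrt(OM * (1.0 + z)**3 + 1.0 - OM)

def chi(z):                              # comoving distance [Mpc]
    zz = np.linspace(0.0, z, 512)
    return trap(C_KM_S / hubble(zz), zz)

CHI_CMB = chi(1090.0)                    # distance to last scattering

def w_kappa(z):                          # lensing kernel, with c restored
    x = chi(z)
    return (1.5 * OM * H0**2 / (C_KM_S * hubble(z))
            * (1.0 + z) * x * (CHI_CMB - x) / CHI_CMB / C_KM_S)

def f_g(z, zm=0.57, sz=0.1):             # toy normalized galaxy n(z)
    return np.exp(-0.5 * ((z - zm) / sz)**2) / (sz * np.sqrt(2.0 * np.pi))

def p_mg(k, z):                          # toy matter-galaxy cross spectrum
    return 2.0e4 * (k / 0.1)**-1.5

def c_kg(ell, z0=0.43, z1=0.70, nz=32):  # Limber integral over the sample
    zz = np.linspace(z0, z1, nz)
    integrand = [hubble(z) / C_KM_S / chi(z)**2 * w_kappa(z) * f_g(z)
                 * p_mg(ell / chi(z), z) for z in zz]
    return trap(integrand, zz)
```

Because the toy $P_{mg}$ falls with $k$, the resulting $C_\ell^{\kappa g}$ decreases with $\ell$, as it should for a red cross spectrum.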
From peaks theory~\cite{Bardeen:1985tr}, as well as on more general grounds, one expects differences between $b_{\rm cross}(k)$ and $b_{\rm auto}(k)$ [Eqs.~(\ref{biascross},\ref{biasauto})]. These differences are partly attributable to stochasticity~\cite{Desjacques:2016bnm,Sheth:1999mn,Seljak:2000jg,Schulz:2005kj,Smith:2006ne,Manera:2009zu,Musso:2012ch,
Paranjape:2012ks,Schmidt:2012ys,Verde:2014nwa,Castorina:2016pqq,Modi:2016dah} (see also Figs.~1 and 2 of~\cite{Villaescusa-Navarro:2013pva}). The stochastic component, which is expected to be scale-dependent and hence more complex than a simple white shot-noise component~\cite{Baldauf:2013hka}, originates from the discrete nature of galaxies as tracers of the density field, as well as the non-Poissonian behavior of satellite galaxies whose spatial distribution does not follow that of the dark matter in halos~\cite{Dvornik:2018frx}. Auto-power spectra measurements therefore include a stochastic component, whereas cross-power spectra measurements are substantially less sensitive to the stochastic component. We take into account this difference by considering two separate parameterizations for $b_{\rm cross}$ and $b_{\rm auto}$ as per Eqs.~(\ref{biascross},\ref{biasauto}).~\footnote{Note that a relation between the bias parameters $c$ [Eq.~(\ref{biascross})] and $d$ [Eq.~(\ref{biasauto})] is still not present in the literature.} Eq.~(\ref{biascross}) and Eq.~(\ref{biasauto}) are used to model the theoretical values of $C_\ell^{\rm \kappa g}\,$ [Eq.~(\ref{clkg})] and $P_{gg}(k)$ [Eq.~(\ref{galaxy})] respectively when comparing them to their measured values in the likelihood function, allowing us to derive constraints on the bias parameters $a$, $c$, and $d$.
Note that, on simulation grounds, $b_{\rm cross}$ is typically expected to increase with increasing $k$ (\textit{i.e.} $db_{\rm cross}/dk>0$), whereas the opposite behaviour is expected for $b_{\rm auto}$ (\textit{i.e.} $db_{\rm auto}/dk<0$). To see this behaviour in simulations of luminous red galaxies (LRGs, which we will use in our work) at $z=0.5$, see the light blue short-dashed and long-dashed curves in the second panel from the left of the upper row of Fig.~2 in~\cite{Okumura:2012xh}. This behaviour is even more pronounced for more massive and hence more biased galaxies; see the purple and dark blue curves in the same figure.~\footnote{For LRGs, $b_{\rm cross}$ and $b_{\rm auto}$ appear to be nearly equal up to $k \sim 0.2\, h{\rm Mpc}^{-1}$, suggesting that in principle we could have taken $c=d$. However, in order to be conservative we have decided to allow the two scale-dependent factors to be independent. In fact, as we shall see later, data ends up detecting differences between $c$ and $d$.} On theoretical grounds, such a behaviour is not unexpected. Concerning $b_{\rm cross}$, it is known that on small scales the matter-galaxy 2-point correlation function $\xi^{mg}(r)$ traces the halo density profile $\rho(r)$ (see e.g. Fig.~1 in~\cite{Hayashi:2007uk}) and hence rises steeply. One therefore expects $b_{\rm cross}$ to rise on small scales (large $k$), as seen in simulations. Turning to auto-correlation measurements instead, halos are extended objects and therefore the distance between halos cannot be less than the sum of their radii: this effect of \textit{halo exclusion} is translated into the fact that, on small scales, the galaxy 2-point correlation function $\xi(r) \to -1$~\cite{CasasMiranda:2001ym,Smith:2006ne,Baldauf:2013hka}. Therefore, one expects $b_{\rm auto}$ to drop on small scales (large $k$), again in agreement with what is observed in simulations.
This justifies our choice of treating $b_{\rm cross}$ and $b_{\rm auto}$ separately, albeit using the same functional form for both, which is justified on both theoretical and simulation grounds.
\section{Datasets and methodology}\label{sec:data}
The baseline dataset we consider consists of measurements of the CMB temperature, polarization, and cross-correlation spectra from the Planck 2015 data release~\cite{Ade:2015xua, Adam:2015rua, Aghanim:2015xee}. We combine the high-$\ell$ and low-$\ell$ temperature likelihoods, as well as the low-$\ell$ polarization likelihood. This dataset combination is referred to as \textbf{\textit{CMB}}.
In addition, we also include the galaxy power spectrum data from the BOSS DR12 CMASS sample~\cite{Alam:2016hwk,Gil-Marin:2015sqa}. We denote this dataset by $\boldsymbol{P_{gg}(k)}$. The measured galaxy power spectrum is compared to the theoretical value through the likelihood function, where the theoretical galaxy power spectrum $P_{gg}^{\rm th}(k,z)$ is modeled as follows:
\begin{eqnarray}
\hskip -1 cm P_{gg}^{\rm th}(k,z) = b_{\rm auto}^2(k) \left ( 1+\frac{2}{3}\beta + \frac{1}{5}\beta^2 \right )P_{\rm HF\nu}(k,z)+P^{\rm s} \, .
\label{Pgg_rsd}
\end{eqnarray}
In Eq.~(\ref{Pgg_rsd}), $\beta = \Omega_m(z_{\text{eff}})^{0.545}/b_{\rm auto}(k)$ parametrizes the amplitude of redshift-space distortions at the effective redshift $z_{\text{eff}} = 0.57$ determined by the BOSS collaboration~\cite{Alam:2016hwk,Gil-Marin:2015sqa}, and $b_{\rm auto}(k)$ is given in Eq.~(\ref{biasauto}).~\footnote{We also verified that if we consider a linear redshift distortion parameter, $\beta = \Omega_m(z_{\text{eff}})^{0.545}/a$, this choice has no effect on our results.} $P_{\rm HF\nu}(k,z)$ is the theoretical non-linear matter power spectrum computed using \texttt{Halofit}~\cite{Bird:2011rb, Takahashi:2012em}. Notice that we do not model non-linear redshift-space distortions in Eq.~(\ref{Pgg_rsd}) because their contribution on the scales of interest ($k<0.2\,h{\rm Mpc}^{-1}$) is small (see e.g.\ Figure~5 of~\cite{Okumura:2015fga}). Finally, $P^{\rm s}$ is a nuisance parameter accounting for the residual shot-noise contribution due to the discrete nature of galaxies. We consider the same wavenumber range used in~\cite{Vagnozzi:2017ovm}, $0.03\, h{\rm Mpc}^{-1} < k < 0.2\, h{\rm Mpc}^{-1}$, in order to avoid the use of non-linear scales, which would require a more sophisticated bias model beyond the relatively simple one we are using. In future work we will explore how a more sophisticated bias model can allow us to push to more non-linear scales.
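A minimal sketch of how Eq.~(\ref{Pgg_rsd}) combines the Kaiser factor, the scale-dependent bias, and shot noise; the power-law stand-in for the \texttt{Halofit} spectrum, the value used for $\Omega_m(z_{\text{eff}})$, and the bias and shot-noise parameters are all illustrative, not outputs of the analysis.

```python
import numpy as np

# Sketch of Eq. (Pgg_rsd): theoretical galaxy power spectrum with the linear
# Kaiser redshift-space factor, the bias b_auto(k) = a + d k^2 of
# Eq. (biasauto), and a constant shot-noise term P_s. All numbers are toys.
def pgg_theory(k, a=2.0, d=-10.0, omega_m_zeff=0.64, P_s=2000.0):
    b = a + d * k**2                       # b_auto(k)
    beta = omega_m_zeff**0.545 / b         # RSD amplitude at z_eff = 0.57
    kaiser = 1.0 + (2.0 / 3.0) * beta + (1.0 / 5.0) * beta**2
    P_hf = 1.0e4 * (k / 0.1)**-1.5         # placeholder for P_HFnu(k, z_eff)
    return b**2 * kaiser * P_hf + P_s

k = np.linspace(0.03, 0.2, 40)             # h/Mpc, range used in the analysis
P_gg = pgg_theory(k)
```

A more negative $d$ suppresses the predicted small-scale power, which is how the $d$-$P^{\rm s}$ anticorrelation discussed in Sec.~\ref{sec:results} arises.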
In addition to the CMB and galaxy power spectrum data, we consider the cross-correlation, measured by Pullen \textit{et al.}~\cite{Pullen:2015vtb}, between CMB lensing convergence maps from the Planck 2015 data release~\cite{Ade:2015zua} and galaxy overdensity maps from the DR11 CMASS sample~\cite{Anderson:2013zyy}. We refer to this dataset as $\boldsymbol{C_\ell^{\rm \kappa g}\,}$. Following~\cite{Pullen:2015vtb}, we limit our use of the measurements of $C_\ell^{\rm \kappa g}\,$ from $\ell=130$ to $\ell=950$, thus removing the points in the low-$\ell$ range. The choice is dictated by the observed discrepancy between measurements of $P_{gg}(k)$ in the North and South Galactic caps~\cite{Ross:2012qm}, as well as possible contamination from the thermal Sunyaev-Zel'dovich (SZ) effect or other unknown systematics on large angular scales, to be discussed briefly later. This observation suggests that large-scale clustering measurements could be affected by systematics (see also~\cite{Hahn:2016kiy}).
It is worth pointing out that $C_\ell^{\rm \kappa g}\,$ measurements are extremely valuable due to their ability of breaking the degeneracy between $a$ and $\sigma_8$. While $P_{gg}$ is sensitive to the quantity $a^2\sigma_8^2$, $C_\ell^{\rm \kappa g}\,$ is instead sensitive to the combination $a\sigma_8^2$. The combination of $C_\ell^{\rm \kappa g}\,$ and $P_{gg}$ is thus capable of breaking the degeneracy between the parameters $a$ and $\sigma_8$.
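In schematic form, with fabricated amplitudes purely to illustrate the algebra of this degeneracy breaking:

```python
import math

# P_gg constrains the product a^2 * sigma8^2, while C_ell^{kg} constrains
# a * sigma8^2; their ratio isolates a, and sigma8 then follows.
# The two "measured" amplitudes below are made up for illustration only.
A_gg = 2.56                      # stand-in for a^2 sigma8^2 from P_gg
A_kg = 1.28                      # stand-in for a sigma8^2 from C_ell^{kg}

a = A_gg / A_kg                  # = a, since (a^2 s8^2)/(a s8^2) = a
sigma8 = math.sqrt(A_kg / a)     # = s8, since sqrt((a s8^2)/a) = s8
```

In the real analysis the separation is of course statistical, carried out jointly over all parameters in the MCMC, but the algebraic structure is the same.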
We assume the standard six-parameter $\Lambda$CDM cosmological model, complemented by four additional parameters: three ($a$, $c$, and $d$) describing the scale-dependent bias, and the sum of the three active neutrino masses $M_{\nu}$. For $M_{\nu}$ we adopt the currently sufficiently precise assumption of a degenerate mass spectrum~\cite{Lesgourgues:2004ps,DeBernardis:2009di,Wagner:2012sw,Gerbino:2016sgw,Archidiacono:2016lnv,Lattanzi:2017ubx, Lesgourgues:2006nd}. We do not model the modification to the scale-dependent bias induced by massive neutrinos~\cite{Lesgourgues:2009am,Saito:2009ah,Shoji:2010hm,Ichiki:2011ue,Dupuy:2013jaa,
Biagetti:2014pha,LoVerde:2014pxa,LoVerde:2014rxa,Blas:2014hya,Fuhrer:2014zka,
Dupuy:2015ega,Archidiacono:2015ota,Levi:2016tlf,Raccanelli:2017kht,Senatore:2017hyk,Brandbyge:2008rv,Viel:2010bn,
Brandbyge:2010ge,Agarwal:2010mt,Marulli:2011he,AliHaimoud:2012vj,Villaescusa-Navarro:2013pva,
Castorina:2013wga,Costanzi:2013bha,Baldi:2013iza,Massara:2014kba,Castorina:2015bma,Carbone:2016nzj,
Banerjee:2016zaa,Rizzo:2016mdr,Villaescusa-Navarro:2017mfx,Chiang:2017vuk,Munoz:2018ajr,Vagnozzi:2018pwo}, as~\cite{Raccanelli:2017kht,Vagnozzi:2018pwo} found that this effect is negligible given the sensitivity of current data.
We sample the posterior distributions of the cosmological parameters using the publicly available MCMC sampler \texttt{CosmoMC}~\cite{Lewis:2002ah,Lewis:2013hha}. We assume a Gaussian likelihood for $C_\ell^{\rm \kappa g}\,$, with covariance matrix estimated by jackknife resampling~\cite{Pullen:2015vtb}. The theoretical values of $P_{gg}$ and $C_{\ell}^{\kappa g}$ are convolved with the respective window functions, which take into account the finite geometry of the surveys, before being compared to their measured values in the likelihood function.
Unless otherwise specified, a uniform prior is assumed for all cosmological parameters. We allow $M_{\nu}$ to be as small as $0\, {\rm eV}$, ignoring prior information from oscillation experiments, which set a lower limit of $0.06\, {\rm eV}$~\cite{Esteban:2016qun,Capozzi:2017ipn,deSalas:2017kay}.~\footnote{This choice for the lower limit of the $M_{\nu}$ prior can also be viewed as a phenomenological proxy for models where the neutrino energy density can be smaller than the one predicted in $\Lambda$CDM, if not vanishing, see e.g.~\cite{Beacom:2004yd}.} For completeness we also report constraints on $M_{\nu}$ when this lower limit is imposed. For $a$ we impose a uniform prior in the range between 0 and 5, while for $c$ and $d$ we adopt a uniform prior between -50 and 10 (in units of $h^{-2}\,{\rm Mpc}^2$). The choice for the lower ranges of $c$ and $d$ is dictated by N-body simulations~\cite{Villaescusa-Navarro:2013pva,Vlah:2013lia}. These prior ranges are large enough not to cut the respective posterior distributions where these are significantly different from zero: in other words, the preferred ranges of $c$ and $d$ are decided by the data, not by the priors.
\section{Results}\label{sec:results}
Table~\ref{tab:tab1} shows the constraints we obtain on $a$, $c$, $d$, and $M_{\nu}$, for various dataset combinations. We begin by considering the \textbf{\textit{CMB}} dataset alone, and find $M_{\nu}<0.72\, {\rm eV}$ at $95\%$~C.L.~\cite{Ade:2015xua}.
The addition of $\boldsymbol{C_\ell^{\rm \kappa g}\,}$ (second and third rows of Table~\ref{tab:tab1}) allows us to constrain $a$ and $c$. We find $a \simeq 1.5 \pm 0.2$ at $1\sigma$, a value which is low when compared to the expectation from simulations for this galaxy sample ($a \approx 2$~\cite{Alam:2016hwk,Gil-Marin:2015sqa}), although compatible within $\approx 2.5\sigma$. We attribute this low value to a deficit of large-scale power observed in several measurements of $C_\ell^{\rm \kappa g}\,$~\cite{Kuntz:2015wza,Giannantonio:2015ahz}, including ours. Explanations range from systematics introduced in the \textit{Planck} 2015 lensing maps~\cite{Omori:2015qda,Liu:2015xfa,Kuntz:2015wza} to contamination from thermal SZ~\cite{vanEngelen:2013rla}.
The observed deficit in power also affects the bounds on $c$, because $a$, $c$, and $M_{\nu}$ are mutually degenerate when considering $C_\ell^{\rm \kappa g}\,$ measurements only. The reason is that a decrease in $a$ can be compensated on small scales by increasing $c$. An increase in $c$ increases power on small scales: this can be compensated by increasing $M_{\nu}$ in order to damp small-scale power.
The fourth and fifth rows of Table~\ref{tab:tab1} report the bounds obtained from the \textbf{\textit{CMB}}+$\boldsymbol{P_{gg}(k)}$ dataset. In this case $a$ and $d$ do not show a strong degeneracy. The reason is that the shot noise in Eq.~(\ref{Pgg_rsd}) smooths the matter power spectrum on small scales and partially breaks the degeneracy between $a$ and $d$. A negative correlation between $d$ and $P^{\rm s}$ is then induced. Finally, the estimate of $a \approx 2$ is now compatible with expectations~\cite{Alam:2016hwk,Gil-Marin:2015sqa} and the limits on $M_{\nu}$ are considerably improved, reaching $M_{\nu}<0.22\, {\rm eV}$ at 95\%~C.L.
The addition of $C_\ell^{\rm \kappa g}\,$ measurements leads to the bounds reported in the sixth and seventh rows. For both the \textbf{\textit{CMB}}+$\boldsymbol{P_{gg}(k)}$ and \textbf{\textit{CMB}}+$\boldsymbol{P_{gg}(k)}$+$\boldsymbol{C_\ell^{\rm \kappa g}\,}$ combinations we find a negative $d$, in agreement with the expectations from N-body simulations~\cite{Villaescusa-Navarro:2013pva,Vlah:2013lia}. The bound reported on $M_{\nu}$ for the \textbf{\textit{CMB}}+$\boldsymbol{P_{gg}(k)}$+$\boldsymbol{C_\ell^{\rm \kappa g}\,}$ dataset combination ($M_{\nu}<0.19\, {\rm eV}$ at 95\%~C.L.) is the strongest available bound in the literature obtained when considering comparable datasets~\cite{Giusarma:2013pmn,Giusarma:2014zza,Palanque-Delabrouille:2015pga,
Zhen:2015yba,Gerbino:2015ixa,DiValentino:2015wba,DiValentino:2015sam,
Cuesta:2015iho,Huang:2015wrx,Giusarma:2016phn,Vagnozzi:2017ovm,Yeche:2017upn,
Couchot:2017pvz,Doux:2017tsv,Wang:2017htc,Chen:2017ayg,Upadhye:2017hdl,Salvati:2017rsn,Nunes:2017xon,Boyle:2017lzt,Zennaro:2017qnp,Sprenger:2018tdb,
Wang:2018lun,Mishra-Sharma:2018ykh,Choudhury:2018byy} and within the assumption of a $\Lambda$CDM model~\footnote{However, see also~\cite{Emami:2017wqa,Hu:2014sea,Bellomo:2016xhl,Dirian:2017pwp,Renk:2017rzu,Peirone:2017vcq}.}. Previously, the study~\cite{Vagnozzi:2017ovm} obtained $M_{\nu}<0.30 \, {\rm eV}$ at 95\%~C.L. for the \textbf{\textit{CMB}}+$\boldsymbol{P_{gg}(k)}$ dataset with a scale-independent treatment of the bias.
\begin{figure}[h]
\centering
\includegraphics[width=8.5cm]{all_mnu_new4.pdf}
\caption{One-dimensional marginalized posterior for $M_{\nu}$ obtained with the baseline \textbf{\textit{CMB}} dataset (CMB temperature and large-scale polarization anisotropy, black line), in combination with the $\boldsymbol{P_{gg}(k)}$ dataset (galaxy power spectrum from the DR12 CMASS sample, blue line), with the $\boldsymbol{C_\ell^{\rm \kappa g}\,}$ dataset (CMB lensing-galaxy overdensity cross-correlation angular power spectrum, green line), and with both $\boldsymbol{P_{gg}(k)}$ and $\boldsymbol{C_\ell^{\rm \kappa g}\,}$ (magenta line). We also show the posterior obtained in~\cite{Vagnozzi:2017ovm} for the \textbf{\textit{CMB}}+$\boldsymbol{P_{gg}(k)}$ dataset with a scale-independent treatment of the bias (red line).}
\label{fig:mnu}
\end{figure}
The improvement in the constraints on $M_{\nu}$ can be seen in Fig.~\ref{fig:mnu}: the previous result of~\cite{Vagnozzi:2017ovm} is represented by the red curve. The small peak appearing at low values of $M_{\nu}$ has been attributed to possible systematics in the measurement, resulting in a slight suppression of small-scale power and hence a preference for higher neutrino masses. Moreover, the red curve is obtained through a scale-independent treatment of the bias [i.e. $b_{\rm auto}(k) = a$]. Thus, the results obtained using the scale-dependent expressions for $b_{\rm auto}(k)$ [Eq.~(\ref{biasauto})] and $b_{\rm cross}(k)$ [Eq.~(\ref{biascross})] lead to a constraint on $M_{\nu}$ which is tighter and, especially, more robust (see blue and magenta curves in Fig.~\ref{fig:mnu}). We notice that the impact of the $C_{\ell}^{\kappa g}$ dataset on improving our $M_{\nu}$ constraints is rather modest, which is best explained by the currently modest signal-to-noise of this measurement. We expect that future high signal-to-noise measurements of $C_{\ell}^{\kappa g}$, in combination with a reduction of systematics, should significantly increase the impact of this dataset, and therefore of our methodology, on constraining the cosmological parameters. Finally, triangular plots showing the joint posteriors on $a$, $d$, and $M_{\nu}$ are shown in Fig.~\ref{fig:pgg_mnu_tri}.
\begin{figure}[h]
\centering
\includegraphics[width=9cm]{compare_mnu2.pdf}
\caption{68\% and 95\% CL allowed regions in the combined two-dimensional planes for the parameters $M_{\nu}$, $a$ and $d$ [the bias parameter $d$ enters the modeling of $\boldsymbol{P_{gg}(k)}$ as this is an auto-correlation measurement, see Eqs.~(\ref{galaxy}) and (\ref{biasauto})] together with their one-dimensional posterior probability distributions. We considered the combination of the \textbf{\textit{CMB}} data with the $\boldsymbol{P_{gg}(k)}$ galaxy power spectrum data (blue contours), with the further addition of the $\boldsymbol{C_\ell^{\rm \kappa g}\,}$ CMB lensing-galaxy overdensity cross-correlation angular power spectrum (red contours). In order to compare these two combinations of data, we do not show the parameter $c$ in the plot as it is not present in the auto-correlation parameterization [Eq.~(\ref{biasauto})].}
\label{fig:pgg_mnu_tri}
\end{figure}
The bounds obtained are among the most conservative in the literature, given the minimal number of datasets adopted. We expect that the addition of geometrical information from BAO measurements would contribute strongly to further lowering the upper bound on $M_{\nu}$. This might open the door to unraveling the neutrino mass hierarchy from cosmology~\cite{Huang:2015wrx,Vagnozzi:2017ovm,Allison:2015qca,Hannestad:2016fog,
Xu:2016ddc,Gerbino:2016ehw,Simpson:2017qvj,Schwetz:2017fey,Hannestad:2017ypp,
Long:2017dru,Gariazzo:2018pei,Heavens:2018adv,deSalas:2018bym}, due to parameter space volume effects. The neutrino mass bounds, and accordingly the volume effects, are actually stronger in dynamical dark energy models where $w(z)\geq-1$~\cite{Vagnozzi:2018jhn} (see also~\cite{Zhang:2015uhk,Wang:2016tsz,Zhao:2016ecj,Guo:2017hea,Zhang:2017rbg,Li:2017iur,Yang:2017amu} for related work).
\section{Conclusions}\label{sec:concl}
This work is the first in which measurements of the cross-correlation between CMB lensing and galaxy overdensity maps [$C_\ell^{\rm \kappa g}\,$], and of the galaxy power spectrum [$P_{gg}(k)$], have been: a) combined and analysed in light of a well-motivated parametrization of the scale-dependent bias $b(k)$; and b) used to obtain tighter and more robust constraints on the sum of neutrino masses $M_{\nu}$. We detect scale-dependence in the bias at moderate significance, thus showing that already on linear or mildly non-linear scales ($k<0.2\, h{\rm Mpc}^{-1}$), modeling leading-order corrections to the usually assumed constant bias is important. The upper bound on $M_{\nu}$ of $0.19\, {\rm eV}$ we have determined by combining CMB data with $P_{gg}(k)$ and $C_\ell^{\rm \kappa g}\,$ measurements is among the strongest and most conservative in the literature obtained with comparable datasets~\cite{Cuesta:2015iho,Huang:2015wrx,Giusarma:2016phn,Vagnozzi:2017ovm,Palanque-Delabrouille:2015pga,Alam:2016hwk}.
We expect our method to be particularly useful for future surveys, in particular for constraining cosmological parameters or models which affect small-scale clustering or the growth of structure (for example, massive neutrinos and $\sigma_8$). Moreover, our method can be extended to a tomographic analysis, using several redshift bins, allowing one to sample more modes and constrain the time-dependent suppression in the matter power spectrum due to neutrinos~\cite{Banerjee:2016suz}. Alternatively, weak lensing surveys can be used in place of CMB lensing maps~\cite{Simon:2017osp}. In order to increase the available number of modes by modeling increasingly non-linear scales, a more accurate treatment of the scale-dependent bias is necessary~\cite{Hand:2017ilm,Modi:2017wds,Seljak:2017rmr,Upadhye:2017hdl}. It will be particularly interesting to interpret CMB lensing-galaxy cross-correlation measurements within perturbation theory frameworks, for instance within convolution Lagrangian effective field theory~\cite{Modi:2017wds}. The use of such approaches will be particularly useful when cross-correlating with future galaxy surveys which will probe higher redshifts, and hence increasingly linear scales at a given wavenumber. We plan on exploring these and other issues in future work.
Finally, we expect the signal-to-noise ratio ($S/N$) of future CMB lensing-galaxy overdensity cross-correlation measurements to improve significantly. CMB-S4-like experiments in cross-correlation with future galaxy surveys should provide a $S/N$ of $\gtrsim 150$, allowing the scale-dependent bias to be modeled properly. This modeling will allow a substantial recovery of information on the matter power spectrum and improve our constraints on cosmological parameters, such as $M_{\nu}$~\cite{Seljak:2008xr,Schmittfull:2017ffw}.
\section*{Appendix: The bias model}
In this section we discuss our choice of the bias model, Eqs.~(\ref{biascross},\ref{biasauto}), by studying the impact of adopting different functional forms and by quantifying, to some extent, the systematic error introduced by an incorrect model.
As discussed in Sec.~\ref{sec:intro}, our model for the scale-dependent galaxy bias is motivated by both theory and simulations. In particular, the $k^2$ model we adopted can be derived within at least three very different theoretical approaches to understanding galaxy bias by linking the statistics of haloes to fluctuations of the primordial density field. These three extremely well-motivated and well-studied approaches, which give the same expression for the leading terms of the scale-dependent bias, are: peaks theory with Gaussian smoothing [see Eq.~(10) in~\cite{Desjacques:2010gz}], the excursion set approach [see Eq.~(50) in~\cite{Musso:2012ch}], and the effective field theory of large-scale structure\footnote{The $k^2$-correction can be understood by looking at the derivatives of $\phi$ appearing in Eqs.~(52,53) of~\cite{Senatore:2014eva}.}. A hybrid peaks theory-excursion set approach also leads to the same form for the scale-dependent bias (see Fig.~4 of~\cite{Paranjape:2012ks}).\footnote{The $k^2$-correction can also be seen in the well-known review paper~\cite{Desjacques:2016bnm}. In particular, in Eq.~(2.66), the term $b_\delta$ coincides with the standard large-scale constant bias, while the term proportional to $b_{\bigtriangledown^2\delta}$ corresponds to a $k^2$-dependent term.} Moreover, the agreement with predictions from N-body simulations (e.g.~\cite{Villaescusa-Navarro:2013pva,Vlah:2013lia}) further lends support to the robustness of our choice of bias model, as the one most justified by theory and simulations on mildly non-linear scales.
Nevertheless, several phenomenological bias models exist and have been used in the literature. For instance, some reasonable choices of bias models are those considered in Sec.~IIA of~\cite{Smith:2006ne}. These include some well-known bias forms such as the $Q$-model of Cole \textit{et al.}~\cite{Cole:2005sx}, the model of Seo \& Eisenstein~\cite{Seo:2005ys} and variants thereof~\cite{Seljak:2000jg,Schulz:2005kj,Guzik:2006bu}, the model of Huff \textit{et al.}~\cite{Huff:2006gs}, and the power law bias model of Amendola \textit{et al.}~\cite{Amendola:2015pha}. For concreteness, we have examined how the bounds would change if we used the $Q$-model of~\cite{Cole:2005sx}:
\begin{eqnarray}
b(k) = b_Q\frac{1+Qk^2}{1+1.4k} \, ,
\end{eqnarray}
where $b_Q$ and $Q$ mimic the scale dependence of the power spectrum at small scales.
After marginalizing over $b_Q$ and $Q$, we find that for this bias model too, as for the one used in our main analysis, the upper limit on $M_{\nu}$ is tighter than the one obtained using a scale-independent bias model. The reason is that the Monte Carlo shows a preference for values of $Q$ for which the bias decreases as $k$ increases (i.e. $db/dk<0$). This is exactly the behavior we observed using our $k^2$ model, where the data prefer negative values of the $d$ bias parameter (in agreement with theoretical arguments and simulations, although at no point in the analysis have we used this information: the prior on $d$ was wide enough that the data were free to choose positive values of $d$ as well). In other words, galaxy power spectrum data, when interpreted using the bias models we examined, prefer a bias which decreases when moving towards smaller scales: this effect can naturally be compensated by decreasing $M_{\nu}$, in order to reduce the small-scale suppression in the power spectrum caused by neutrino masses. Notice that this behavior is exactly what is expected from N-body simulations~\cite{Villaescusa-Navarro:2013pva,Vlah:2013lia}. Of course, we cannot confirm that this behavior occurs for every conceivable scale-dependent bias model, but the results of N-body simulations, as well as our investigation of two independent bias models (the $Q$-model and the $k^2$ model examined here), suggest that this might well be the case. A complete investigation, however, is well beyond the scope of our work. It would be interesting to return to this point in more detail in the future.
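As a simple numerical illustration of the behavior described above, the sketch below evaluates both bias models on the scales used in the analysis. The $b(k)=a+d\,k^2$ form and all parameter values are illustrative assumptions for this sketch (the actual models are given by Eqs.~(\ref{biascross},\ref{biasauto}) and the $Q$-model above); it is not the analysis pipeline of this work.

```python
import numpy as np

# Illustrative comparison of the two bias models' scale dependence.
# The b(k) = a + d k^2 form and the parameter values below (inspired by the
# CMB+P_gg+C_l^kg row of the table) are assumptions for illustration only.
a, d = 1.95, -14.13          # d in units of h^-2 Mpc^2

def bias_k2(k):
    """Bias with a leading k^2 correction, b(k) = a + d k^2 (assumed form)."""
    return a + d * k**2

def bias_Q(k, b_Q=2.1, Q=-0.5):
    """Q-model of Cole et al.; Q < 0 chosen so that db/dk < 0."""
    return b_Q * (1 + Q * k**2) / (1 + 1.4 * k)

# On mildly non-linear scales both models give a bias that decreases
# towards smaller scales (db/dk < 0), as preferred by the data.
k = np.linspace(0.01, 0.2, 50)   # wavenumbers in h/Mpc
assert np.all(np.diff(bias_k2(k)) < 0)
assert np.all(np.diff(bias_Q(k)) < 0)
```

With these (assumed) parameter values, both models realize the $db/dk<0$ trend that is compensated by a smaller $M_{\nu}$.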
Finally, in order to quantify the systematic error due to the choice of the bias model, we provide a qualitative assessment by comparing the posteriors we obtain for the scale-independent bias parameter $a$, according to whether or not the $k^2$-correction is switched on (i.e., in one case we allow $c$ and $d$ to vary, while in the other we set $c=d=0$). We plot the results in Fig.~\ref{fig:systematic}: the red curve is obtained when the full scale-dependent bias model is used, whereas the black curve is obtained in the extreme case where we switch off the scale-dependent correction. As we can see from Fig.~\ref{fig:systematic}, the shift in the posterior of $a$ induced by the scale-dependent correction is minimal, well below the $1\sigma$ level. Qualitatively, we can expect an incorrect bias model to lead to systematics in the recovered value of the $a$ bias parameter, which we instead find to be in agreement with the theoretical value for the galaxy sample in question ($a \sim 2$).
\begin{figure}[!htb]
\centering
\includegraphics[width=8.5cm]{referee.pdf}
\caption{One-dimensional marginalized posterior for $a$ (scale-independent bias parameter) obtained by combining the baseline CMB dataset with the $P_{gg}(k)$ and $C_{\ell}^{\kappa g}$ datasets used in this work. The red line shows the posterior obtained by introducing the $k^2$-correction, while the black line shows the posterior obtained with a scale-independent treatment of the bias. The $k$ and $\ell$ ranges we choose are the same for both cases.}
\label{fig:systematic}
\end{figure}
\begin{acknowledgments}
We are indebted to Anthony Pullen for providing the $C_\ell^{\rm \kappa g}\,$ measurements and for extremely useful discussions in this respect. We thank Shadab Alam, Federico Bianchini, Emanuele Castorina, Chang Hoon Hahn, Siyu He, Alex Krolewski, Elena Massara, Patrick McDonald, Uro\v{s} Seljak, Ravi Sheth, and Martin White for useful discussions. We also thank Sebastian Baum, Alex Millar, Janina Renk, and Luca Visinelli for comments on an earlier version of the draft. This work is based on observations obtained with Planck (\href{http://www.esa.int/Planck}{www.esa.int/Planck}), an ESA science mission with instruments and contributions directly funded by ESA Member States, NASA, and Canada. We acknowledge use of the Planck Legacy Archive. We also acknowledge the use of computing facilities at NERSC and at the McWilliams Center for Cosmology. E.G. is supported by NSF grant AST1412966. S.V. and K.F. acknowledge support by the Vetenskapsr\aa det (Swedish Research Council) through contract No. 638-2013-8993 and the Oskar Klein Centre for Cosmoparticle Physics. S.H. acknowledges support by NASA-EUCLID11-0004, NSF AST1517593 and NSF AST1412966. S.F. thanks the Miller Institute for Basic Research in Science at the University of California, Berkeley for support. K.F. acknowledges support from DoE grant DE-SC0007859 at the University of Michigan as well as support from the Leinweber Center for Theoretical Physics.
\end{acknowledgments}
\begin{table*}
\begin{ruledtabular}
\begin{tabular}{cccccc}
Dataset & $a$ (68\%~C.L.) & $c$ (68\%~C.L., $h^{-2}\,{\rm Mpc}^2$) & $d$ (68\%~C.L., $h^{-2}\,{\rm Mpc}^2$) & \multicolumn{2}{c}{$M_{\nu}$ [{\rm eV}] (95\%~C.L.)}\\
\hline
\textit{CMB} $\equiv$ \textit{PlanckTT}+\textit{lowP} & & & & $< 0.72$ & [$<0.77$] \\
\textit{CMB}+$C_\ell^{\rm \kappa g}\,$ & $1.45 \pm 0.19$ & $2.59 \pm 1.22$ & & \textbf{0.06} & \\
& $1.50 \pm 0.21$ & $2.97 \pm 1.42$ & & $< 0.72$ & [$<0.77$] \\
\textit{CMB}+$P_{gg}$(k) & $1.97\pm0.05$ & & $-13.76\pm4.61$& \textbf{0.06} & \\
& $1.98\pm0.08$ & & $-14.03\pm4.68$ & $< 0.22$ &[$<0.24$] \\
\textit{CMB}+$P_{ gg}$(k)+$C_\ell^{\rm \kappa g}\,$ & $1.95 \pm 0.05$ & $0.45\pm0.87$& $-13.90 \pm 4.17$ & \textbf{0.06} &\\
& $1.95 \pm 0.07$ & $0.48\pm0.90$ & $-14.13 \pm 4.02$ & $< 0.19$ & [$<0.22$] \\
\end{tabular}
\caption{Constraints on the bias parameters $a$, $c$, and $d$, as well as the sum of the three active neutrino masses $M_{\nu}$. The bounds on $M_{\nu}$ not in square brackets have been obtained imposing a lower bound of $M_{\nu} > 0\, {\rm eV}$, i.e. only making use of cosmological data, whereas the ones in square brackets have been obtained imposing the lower bound set by neutrino oscillations of $M_{\nu}>0.06\, {\rm eV}$. The \textbf{\textit{CMB}} dataset denotes measurements of the CMB temperature and large-scale polarization anisotropy from the Planck satellite 2015 data release. Measurements of the angular cross-power spectrum between CMB lensing convergence maps from the Planck 2015 data release and galaxies from BOSS DR11 CMASS sample [$\boldsymbol{C_\ell^{\rm \kappa g}\,}$], as well as the galaxy power spectrum measured from BOSS DR12 CMASS sample [$\boldsymbol{P_{gg}(k)}$], are then added. Rows featuring the symbol \textbf{0.06} were obtained fixing the sum of the neutrino masses $M_{\nu}$ to the minimum value allowed by oscillation data, $0.06\, {\rm eV}$.}
\label{tab:tab1}
\end{ruledtabular}
\end{table*}
\section{Introduction}
Topological semimetals (TSMs) were predicted
theoretically~\cite{Burkov2016,PhysRevB.84.235126,PhysRevB.92.081201,
PhysRevB.93.085138,PhysRevLett.107.127205,PhysRevLett.115.036806,Weng2015}
and then discovered
experimentally~\cite{neupane2014observation,xie2015new,PhysRevB.93.201104,
PhysRevLett.115.036807}.
The main feature of the TSMs is that the valence and conduction bands
intersect at several (nodal) points or along closed (nodal) lines in momentum
space~\cite{PhysRevB.84.235126,PhysRevLett.107.127205,PhysRevB.83.205101}.
Thus, in contrast to a usual metal with a two-dimensional Fermi surface,
the Fermi surface of a three-dimensional TSM is reduced to a finite set of
points or curves. Specifically, for the nodal-line semimetals, which we
discuss below, the Fermi surface shrinks to a curve.
Within the simplest theoretical
framework~\cite{PhysRevB.93.085138},
the nodal line (or ring) of such a semimetal is a closed plane curve in the
three-dimensional Brillouin
zone~\cite{PhysRevB.84.235126,PhysRevB.92.081201,PhysRevB.93.085138,Fang2016}.
In the plane of the nodal line, which we will also refer to as the basal
plane, the kinetic energies of the electrons and holes are proportional to
the square of the distance from the nodal line. As the momentum deviates
from the nodal line in the direction normal to the basal plane, the energy
variation is proportional to this deviation.
Since the spectrum of the nodal-ring semimetals is rather peculiar, one can
expect some unusual scattering effects in these materials. For example, the
electron and hole bands touch each other near the Fermi energy in the TSMs,
and even weak spatial variation of the potential energy can lead to the
interband transitions. Consequently, a single-particle scattering problem
must take into account both electron and hole states. This is a necessary
condition for observation of the Klein phenomenon, that is, a process in
which a particle passes through high and long potential barrier without
reflection~\cite{klein1929reflexion,klein_tunn2006}.
Klein tunneling was described
theoretically~\cite{klein_tunn2006,rozhkov2016electronic}
and observed
experimentally~\cite{young_fabry_perot}
in graphene. It suppresses the backscattering, thus contributing to the
increase of the graphene electron mean free path.
It is known that the Klein phenomenon requires a linear single-particle
spectrum. Since electron dispersion in the nodal-ring semimetals is linear
in some directions in
$\mathbf{k}$-space,
the Klein tunneling might be expected in these materials under certain
conditions.
Here we study the scattering of an electron by a step-wise potential
barrier in the ballistic regime. Four different cases are considered:
barriers with finite and infinite width, both perpendicular and parallel to
the basal plane. We demonstrate that reflectionless tunneling is possible
for both orientations of the barrier. As in
graphene~\cite{klein_tunn2006},
the perfect transmission in the nodal-ring semimetals is associated with
both Klein tunneling and `magic angles'
resonances~\cite{chiral_tunn_tudorovskiy2012}.
The Klein phenomenon in the nodal-ring semimetals is observed if the
incident angle of the particle differs from
90$^\circ$.
This is dissimilar to the case of graphene and Weyl-point semimetals, where
this effect exists only for normal incidence.
Another interesting feature of the scattering in the studied materials is
the emergence of two transmission and two reflection channels for a single
incident plane wave. To characterize such scattering one needs two
transmission and two reflection coefficients. The transmitted (reflected)
particles in different transmission (reflection) channels have different
momenta. This property is in stark contrast with the scattering of free
non-relativistic electrons, where the momenta of the outgoing particles
are uniquely fixed by the momentum of the incident particle. The existence
of the multiple scattering channels is a consequence of the complicated
dispersion structure of a nodal-ring semimetal.
The paper is organized as follows. In
Sec.~\ref{sec::model}
we briefly discuss the theoretical model of a nodal-ring semimetal. This
model is used in
Sec.~\ref{sec::parall}
to study the scattering on a barrier parallel to the basal plane. The
barrier perpendicular to the basal plane is discussed in
Sec.~\ref{sec::perp}.
Summary and conclusions are in
Sec.~\ref{sec::discussion}.
\section{Model} \label{sec::model}
We write the Hamiltonian of the system in the following
form~\cite{PhysRevB.93.085138}:
\begin{equation}\label{eq::hamilt}
\hat{H}(\mathbf{k})=(m-Bk_{\bot}^2)\sigma_x+k_z \sigma_z + U\sigma_0,
\end{equation}
where
$\mathbf{k}=(k_x,k_y,k_z)$
is the single-particle momentum, and scalar
$k_{\bot}$
is determined by the formula
$k_{\bot}^2=k_x^2+k_y^2$,
coefficient
$m$
is an analog of the rest mass, while the quantity
${1}/(2B)$
is
an inertial mass for the motion in the
$xy$-plane, and
$U$
is the potential
energy. The matrix
$\sigma_0$
is the $2\times 2$ identity matrix, and
$\sigma_{x,z}$
are
the Pauli matrices. We set
$\hbar$
and
$v_{F}$
in the
$z$~direction equal to
one. The spectrum of Hamiltonian~(\ref{eq::hamilt}) is \begin{equation}
\varepsilon_{\mathbf{k}}^{\rm e,h}=U\pm \sqrt{(m-Bk_{\bot}^2)^2+k_z^2},
\label{2}
\end{equation}
where label `e' (`h') corresponds to electrons (holes). If the potential
energy is zero, the solutions of the equation
$\varepsilon_{\mathbf{k}}=0$
form a circle of radius
$k_\bot=\sqrt{m/B}$
in the
$xy$-plane. This circle
is the nodal line of the model, and
$xy$-plane is the basal plane.
Normalized eigenfunctions of the Hamiltonian~(\ref{eq::hamilt}) are spinors
\begin{eqnarray}\label{3}
\psi_\mathbf{k}\! =\! C_{\mathbf{k}}\begin{pmatrix}1\\
\chi_{\mathbf{k}}\\
\end{pmatrix},\,\,
\chi_{\mathbf{k}}\!=\!\frac{\varepsilon\!-\!U\!-\!k_z}{m-Bk_{\bot}^2},
\,\,
C_{\mathbf{k}}\!=\!\frac{1}{\sqrt{1\!+\!\chi_{\mathbf{k}}^2}}.
\end{eqnarray}
Following a standard
procedure~\cite{landau1981quantum},
we can calculate the probability current associated with a plane wave
$\psi (\mathbf{r})=a\psi_{\bf k} e^{i\mathbf{kr}}$:
\begin{equation}\label{eq::current}
\mathbf{j}= |a|^2
\begin{pmatrix}
-4\chi_{\mathbf{k}} B k_x \\
-4\chi_{\mathbf{k}} B k_y \\
1-\chi^2_\mathbf{k} \\
\end{pmatrix}.
\end{equation}
This current is invariant with respect to rotation in the
$xy$-plane. We
will use
Eq.~(\ref{eq::current})
below to choose a correct structure of the outgoing waves and to define
properly transmission and reflection coefficients.
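The spectrum~(\ref{2}) and the current~(\ref{eq::current}) can be checked numerically. The following sketch (with illustrative parameters $m=B=1$, $U=0$; an assumption made only for this check) verifies that the electron band vanishes on the nodal ring $k_\bot=\sqrt{m/B}$, $k_z=0$:

```python
import numpy as np

# Minimal numerical sketch of the model (units with hbar = v_F = 1, as in
# the text); the parameter values m = B = 1, U = 0 are illustrative.
m, B, U = 1.0, 1.0, 0.0

def energy(kx, ky, kz, band=+1):
    """Electron (+1) / hole (-1) branch of the spectrum, Eq. (2)."""
    kperp2 = kx**2 + ky**2
    return U + band * np.hypot(m - B * kperp2, kz)

def current(kx, ky, kz, eps):
    """Probability current of a plane wave with |a|^2 = 1, Eq. (4)."""
    chi = (eps - U - kz) / (m - B * (kx**2 + ky**2))
    return np.array([-4 * chi * B * kx, -4 * chi * B * ky, 1 - chi**2])

# The electron band vanishes on the nodal ring k_perp = sqrt(m/B), k_z = 0:
assert abs(energy(np.sqrt(m / B), 0.0, 0.0)) < 1e-12
```

The in-plane components of the current depend on $k_x$ and $k_y$ only through $\chi_{\mathbf{k}}$ and the momentum itself, making the rotational invariance in the $xy$-plane explicit.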
\section{Barrier parallel to basal plane}\label{sec::parall}
\begin{figure}
\center{\includegraphics[width=0.9\linewidth]{energy}}
\center{\includegraphics[width=0.9\linewidth]{waves}}
\center{\includegraphics[width=0.9\linewidth]{tor_in_z_17_12}}
\caption{Barrier parallel to the basal plane. Panel~(a): Potential energy
$U(z)$.
It is finite and constant for
$0<z<L$.
Otherwise, it is zero.
Panel~(b): Incident, reflected, and transmitted waves near the barrier. The
wave vectors of the incoming and outgoing waves are shown schematically by
the dashed lines with arrows. The barrier is represented by the blue hatched
area.
Panel~(c): Relative orientations of the barrier and the iso-energy surface.
The iso-energy surface is defined by the equation
$\varepsilon^2 = {(m-Bk_{\bot}^2)^2+k_z^2}$.
It is shown by (green) torus in the reciprocal space. The barrier is shown
as blue parallelepiped.
\label{Fig1}}
\end{figure}
First we consider the situation when the barrier is parallel to the basal
plane, see
Fig.~\ref{Fig1}. The potential energy
$U(z) = U [ \theta(z) - \theta(z-L)]$,
where
$\theta(z)$
is the Heaviside step function, depends on the
$z$~coordinate only. The barrier width in the $z$~direction is $L$.
In the $x$ and $y$ directions the barrier extends to infinity. Thus, the
momentum components
$k_x$
and
$k_y$
are conserved. Further, we assume that the incident particle is an electron
(not a hole). The barrier divides the space into three regions (to the left
of the barrier, to the right of the barrier, and under it). The wave
function in these three regions is
\begin{eqnarray}
\label{eq::match1}
\nonumber \psi &=& C_{+} e^{ik_zz}
\begin{pmatrix}
1\\
\chi_{+}\\
\end{pmatrix}
+ rC_{-}e^{-ik_zz}
\begin{pmatrix}
1\\
\chi_{-}\\
\end{pmatrix}\!\!,\, z<0,
\\
\psi &=& De^{iq_zz}\!
\begin{pmatrix}
1\\
\phi_{+}\\
\end{pmatrix}
\!+\! Fe^{-iq_zz}\!
\begin{pmatrix}
1\\
\phi_{-}\\
\end{pmatrix}\!\!,\,\, 0<z<L,
\\
\nonumber \psi &=& t\,C_{+} e^{ik_z(z-L)}
\begin{pmatrix}
1\\
\chi_{+}\\
\end{pmatrix}\!\!,\,\, z>L.
\end{eqnarray}
Here
\begin{eqnarray}
\label{eq::kz_def}
k_z&=&\sqrt{\varepsilon^2-(m-Bk_{\bot}^2)^2},\\
\label{eq::qz_def}
q_z &=& \sqrt{(\varepsilon-U)^2-(m-Bk_{\bot}^2)^2}.
\end{eqnarray}
The quantities
$\pm k_z$
and
$\pm q_z$
are the
$z$-components of the wave vectors outside and inside the barrier,
respectively. Energy of the incident electron is
$\varepsilon$,
and
\begin{eqnarray}
\chi_\pm &=& \chi( \pm k_z)=\frac{\varepsilon \mp k_z}{m-Bk_{\bot}^2},
\\
\nonumber
\phi_\pm &=&\phi(\pm q_z)=\frac{\varepsilon-U \mp q_z}{m-Bk_{\bot}^2},\\ \nonumber
\\
\nonumber
C_\pm &=& (1 + \chi_\pm^2)^{-1/2}.
\end{eqnarray}
The factor
$e^{ik_x x + i k_y y}$,
common to all three wave functions, is
omitted for brevity. To describe a particle propagating freely outside the
barrier, the wave function must have purely real
$k_z$,
or, equivalently,
\begin{equation}
{\frac{m-\varepsilon}{B}}<k^2_{\bot}<{\frac{m+\varepsilon}{B}}.
\label{eq::window_kperp}
\end{equation}
Transmission and reflection coefficients
$T=|t|^2$
and
$R=|r|^2$
obey the usual relation
$T+R=1$.
To derive $r$ and $t$ we should match $\psi$ at
$z=0$
and
$z=L$,
accounting for the continuity of the probability current. In this way we
derive
\begin{align}
C_{+}
\begin{pmatrix}
1\\
\chi_+\\
\end{pmatrix}
+ rC_-
\begin{pmatrix}
1\\
\chi_-\\
\end{pmatrix}
=
D
\begin{pmatrix}
1\\
\phi_+\\
\end{pmatrix}
+
F
\begin{pmatrix}
1\\
\phi_-\\
\end{pmatrix},\nonumber \\
tC_+
\begin{pmatrix}
1\\
\chi_+\\
\end{pmatrix}
=
De^{iq_zL}
\begin{pmatrix}
1\\
\phi_+\\
\end{pmatrix}
+
Fe^{-iq_zL}
\begin{pmatrix}
1\\
\phi_-\\
\end{pmatrix}.
\label{10}
\end{align}
Solving
system~(\ref{10})
one obtains the expression for $r$
\begin{equation}
\label{eq::refl1b}
r\!=\!\frac{(U-k_z)^2-q_z^2}{k_z^2+q_z^2-U^2 + 2 i k_z q_z {\rm cot} (q_zL)}
\sqrt{\frac{\varepsilon+k_z}{\varepsilon-k_z}}.
\end{equation}
The dependence of the transmission coefficient
$T=1-|r|^2$
on the transverse momentum
$k_\bot$
is calculated using
Eq.~\eqref{eq::refl1b}.
The results for several energies $\varepsilon$ are shown in
Fig.~\ref{fig::T_vs_k_perp}.
The same dependence for different barrier widths $L$ is presented in
Fig.~\ref{fig::T_vs_L}.
We see that for certain parameter values the transmission is perfect:
$T=1$,
or, equivalently,
$r=0$.
Note that
$k_\bot$
varies in the interval defined by the
conditions~\eqref{eq::window_kperp}.
As it follows from
Eqs.~\eqref{2},
\eqref{eq::kz_def},
\eqref{eq::qz_def},
and~(\ref{eq::refl1b}),
the disappearance of the reflected wave
($r=0$)
occurs when either of two different conditions is satisfied. The first one
is
\begin{equation}
\label{n_integer}
q_z L = \pi n,
\end{equation}
where $n$ is an integer. This condition involves the barrier width $L$ and
corresponds to a size-quantization resonance similar to the Ramsauer-Townsend
effect~\cite{ramsauer1921wirkungsquerschnitt}.
Unlike the classical Ramsauer-Townsend effect, which occurs for a
non-relativistic quantum particle whose energy exceeds the barrier height,
in nodal-ring semimetals particles with below-the-barrier energies may also
demonstrate the same reflectionless transmission. A similar phenomenon has
been observed in graphene: electrons approaching a rectangular barrier at
the so-called `magic angles' propagate through the barrier without
reflection~\cite{chiral_tunn_tudorovskiy2012}.
The Ramsauer-Townsend-like peaks in
$T(k_\bot)$
become more pronounced with the increase of $L$, see
Fig.~\ref{fig::T_vs_L}.
If
relation~(\ref{n_integer})
is violated, the coefficient $r$ can still vanish, provided that
\begin{eqnarray}
\label{eq::klein_cond}
k_\bot=\sqrt{m/B}.
\end{eqnarray}
Under this condition
$\varepsilon=k_z=U-q_z$,
the Hamiltonian
\eqref{eq::hamilt} effectively describes a one-dimensional relativistic
particle, and the Klein scattering is observed. Thus, while both
Eqs.~(\ref{n_integer})
and~(\ref{eq::klein_cond})
correspond to the reflectionless transmission of the incident particle, the
mechanisms responsible for such a transmission are non-identical.
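Both reflectionless mechanisms can be verified directly from Eq.~(\ref{eq::refl1b}). The sketch below is an independent numerical check with illustrative parameters (it is not the code used for the figures): it confirms $T=1$ at a Ramsauer-Townsend resonance $q_zL=\pi$ and near the Klein condition $k_\bot=\sqrt{m/B}$:

```python
import numpy as np

# Direct evaluation of the reflection amplitude r for the finite barrier
# parallel to the basal plane. Parameters follow the figure captions:
# U/m = 1, Bm = 1 (illustrative check only).
m, B, U = 1.0, 1.0, 1.0

def transmission(eps, kperp, L):
    delta = m - B * kperp**2
    kz = np.sqrt(eps**2 - delta**2 + 0j)
    qz = np.sqrt((eps - U)**2 - delta**2 + 0j)
    num = (U - kz)**2 - qz**2
    den = kz**2 + qz**2 - U**2 + 2j * kz * qz / np.tan(qz * L)
    r = num / den * np.sqrt((eps + kz) / (eps - kz))
    return 1 - abs(r)**2

# Ramsauer-Townsend-like resonance: choose L such that qz * L = pi (n = 1).
eps, delta = 0.3, 0.2
qz = np.sqrt((eps - U)**2 - delta**2)
assert abs(transmission(eps, np.sqrt((m - delta) / B), L=np.pi / qz) - 1) < 1e-6

# Klein tunneling: T -> 1 as kperp approaches sqrt(m/B), for generic L.
assert transmission(eps, np.sqrt((m - 1e-3) / B), L=10.0) > 0.999
```

Note that the resonance holds even though $\varepsilon<U$, i.e. for a below-the-barrier particle, in line with the discussion above.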
\begin{figure}[t]
\center{\includegraphics[width=1\linewidth]{barrier_in_z_E_dep}}
\caption{Transmission coefficient $T$ as a function of the dimensionless
transverse momentum
$k_{\bot}\sqrt{B/m}$
for different values of the ratio
$\varepsilon/m$
(see legend in the figure). The curves are calculated for
$U/m=1$,\,
$mL=10$,\,
and
$Bm=1$.
Momentum
$k_{\bot}$
is limited by the conditions
$\sqrt{(m-\varepsilon)/B}<k_{\bot}<\sqrt{(m+\varepsilon)/B}$,
which guarantees that a propagating solution (${\rm Im}\, k_z=0$)
exists. When
$k_{\bot}\sqrt{B/m}=1$,
a perfect transmission due to the Klein tunneling is observed. In addition,
the reflectionless tunneling at the so-called `magic angles' is also
possible.
\label{fig::T_vs_k_perp}
}
\end{figure}
If
$q_z$
is real, the wave function under the barrier is described by a
linear combination of plane waves. From
Eq.~\eqref{eq::qz_def}
we derive that
${\rm Im}\, q_z = 0$
when
\begin{eqnarray}
\frac{m-|\varepsilon-U|}{B}<k_{\bot}^2<\frac{m+|\varepsilon-U|}{B}.
\label{eq::real_qz}
\end{eqnarray}
If this condition is violated, the parameter
$q_z$
becomes imaginary, and the probability for the electron to pass through the
barrier decays exponentially with the growth of $L$. As a result, the
value of $T$ rapidly decays outside the range of
$k_\bot$
defined by
Eq.~(\ref{eq::real_qz}).
The curve at
$\varepsilon/m=0.9$
in
Fig.~\ref{fig::T_vs_k_perp}
illustrates this feature.
\begin{figure}[t]
\center{\includegraphics[width=1\linewidth]{barrier_in_z_L_dep}}
\caption{Transmission coefficient $T$ as a function of the dimensionless
momentum
$k_{\bot}\sqrt{B/m}$
for different dimensionless barrier widths $mL$ (see legend in the figure).
The curves are calculated for
$U/m=1$,
$\varepsilon/m=0.3$,
and
$Bm=1$.
When
$k_{\bot}\sqrt{B/m}=1$,
perfect
transmission due to the Klein tunneling is observed. In addition, the
reflectionless tunneling at the so-called `magic angles' is also possible.
The latter becomes more pronounced for wider barriers.
\label{fig::T_vs_L}
}
\end{figure}
In the above discussion we assumed the ballistic regime of the electron
scattering. Such an approach is valid if the electron mean-free path
$l_{\rm mf}$
is larger than the barrier width. For a wider barrier,
$l_{\rm mf}\ll L$,
the electron scattering on the barrier edge at
$z=L$
becomes insignificant, which is equivalent to the limit
$L \rightarrow \infty$,
the case of a
$p$-$n$
junction. Thus, we have no reflected wave within the
barrier and have to match the wave function at
$z=0$
only. Solving the system of two linear equations we obtain the expression
for the reflected wave amplitude in the form
\begin{equation}
r
=
\sqrt{\frac{\varepsilon+k_z}{\varepsilon-k_z}}\frac{k_z-U-q_z}{k_z+U+q_z}.
\label{22}
\end{equation}
In the limit
$k_\bot\rightarrow \sqrt{m/B}$,
when
$k_z\rightarrow \varepsilon$,
we obtain
\begin{eqnarray}
r \propto {\sqrt{ \varepsilon - k_z}}.
\end{eqnarray}
Thus, the reflection coefficient vanishes and the Klein tunneling is
observed. Naturally, no `magic angles' exist in the case of the infinite
barrier.
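The vanishing of $r$ in the Klein limit can also be checked numerically from Eq.~(\ref{22}). In the sketch below the sign of $q_z$ is chosen so as to select the transmitted branch (this sign convention, like the parameter values with $\varepsilon>U$, is an assumption made for the check):

```python
import numpy as np

# Infinite-barrier (p-n junction) reflection amplitude: |r| vanishes as
# kperp -> sqrt(m/B), consistent with r ~ sqrt(eps - kz) (Klein tunneling).
# Illustrative parameters; the qz sign convention below is our assumption.
m, B = 1.0, 1.0

def reflection_pn(eps, kperp, U):
    delta = m - B * kperp**2
    kz = np.sqrt(eps**2 - delta**2)
    qz = np.sign(eps - U) * np.sqrt((eps - U)**2 - delta**2)
    return np.sqrt((eps + kz) / (eps - kz)) * (kz - U - qz) / (kz + U + qz)

eps, U = 0.5, 0.2
for delta in (1e-2, 1e-3, 1e-4):
    kperp = np.sqrt((m - delta) / B)
    # |r| shrinks linearly with delta = m - B kperp^2, i.e. ~ sqrt(eps - kz):
    assert abs(reflection_pn(eps, kperp, U)) < 10 * delta
```

The loop confirms the scaling $r\propto\sqrt{\varepsilon-k_z}$ stated above, since $\varepsilon-k_z\propto\delta^2$ for small $\delta$.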
\section{Barrier perpendicular to basal plane}\label{sec::perp}
Let us consider the situation when the rectangular barrier is perpendicular
to the basal plane. Such a configuration is depicted in
Fig.~\ref{Fig3}.
The nodal ring lies in the
$xy$-plane,
as before. We take the $y$-axis to be normal to the barrier. In this geometry
momentum components
$k_x$
and
$k_z$
are preserved by the scattering process. As for
$k_y$,
we derive from
Eq.~\eqref{2}
that
$k_y = \pm
k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (\pm)$}}$,
where
\begin{equation}\label{eq::ky}
k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (\pm)$}} = \sqrt{\frac{1}{B}\left(m-Bk_x^2\pm\sqrt{\varepsilon^2-k_z^2}\right)}.
\end{equation}
We see that four different values of
$k_y$
correspond to a single set of parameters
$(k_x,k_z,\varepsilon)$.
Therefore, an incident electron can be
scattered by the barrier into four possible channels
$k_y = \pm
k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (\pm)$}}$,
see
Fig.~\ref{Fig3}.
In other words, the flux of incident electrons is distributed between two
transmission channels and two reflection channels. To distinguish between
the transmission and reflection channels we can use
Eq.~(\ref{eq::current})
to make sure that the transmitted particle carries positive current
$j_y$
along $y$-axis. One can easily check that
$k_y =
k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (+)$}}$
and
$k_y =
-k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}$
correspond to the transmission channels,
$j_y>0$,
while
$k_y =
-k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (+)$}}$
and
$k_y =
k_{y}^{\scalebox{0.8}{$\scriptscriptstyle(-)$}}$
correspond to the reflection channels,
$j_y < 0$.
If
$m-Bk_x^2 \geq \sqrt{\varepsilon^2-k_z^2}$,
the scattering into four channels is possible. Otherwise, the value of
$k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}$
is imaginary, and the transmission and reflection channels corresponding to
$k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}$
disappear.
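The channel classification above can be verified directly from the current of Eq.~(\ref{eq::current}). The sketch below (illustrative parameters, $k_x=0$, $U=0$ outside the barrier) checks the sign of $j_y$ for the four allowed momenta:

```python
import numpy as np

# Channel structure for a barrier perpendicular to the basal plane (kx = 0).
# Parameter values m = B = 1 are illustrative.
m, B, U = 1.0, 1.0, 0.0

def ky_branches(eps, kz, kx=0.0):
    """The two positive roots k_y^(+/-) for given (kx, kz, eps)."""
    s = np.sqrt(eps**2 - kz**2)
    return (np.sqrt((m - B * kx**2 + s) / B),
            np.sqrt((m - B * kx**2 - s) / B))

def jy(eps, kz, ky):
    """y-component of the probability current for a plane wave."""
    chi = (eps - U - kz) / (m - B * ky**2)
    return -4 * chi * B * ky

eps, kz = 0.3, 0.1
kp, km = ky_branches(eps, kz)
assert jy(eps, kz, +kp) > 0 and jy(eps, kz, -km) > 0   # transmission channels
assert jy(eps, kz, -kp) < 0 and jy(eps, kz, +km) < 0   # reflection channels
```

The signs confirm the assignment in the text: $k_y=k_y^{(+)}$ and $k_y=-k_y^{(-)}$ carry positive $j_y$ (transmission), while $k_y=-k_y^{(+)}$ and $k_y=k_y^{(-)}$ carry negative $j_y$ (reflection).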
Under the barrier, the wave function is also a linear combination of four
exponentials, each characterized by a specific value of
$q_y$.
Possible values of
$q_y$
are
$\pm q_{y}^{\scalebox{0.8}{$\scriptscriptstyle (\pm)$}}$,
where
\begin{equation}
q_{y}^{\scalebox{0.8}{$\scriptscriptstyle (\pm)$}}
=
\sqrt{\frac{1}{B}\left[m-Bk_x^2\pm\sqrt{(\varepsilon-U)^2-k_z^2}\right]}.
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{waves2}
\includegraphics[width=0.8\linewidth]{tor_in_y_17_12}
\caption{Barrier perpendicular to the basal plane. Top panel: Incident,
reflected, and transmitted waves near the barrier. The wave vectors of the
incoming and outgoing waves are shown by the dashed lines with arrows. The
barrier is represented by the blue hatched area. The barrier has finite width
equal to $L$ in the
$y$~direction
and extends to infinity in the $x$ and $z$ directions.
Bottom~panel: Relative orientation of the barrier and the iso-energy
surface. The iso-energy (green) surface, defined by the equation
$\varepsilon^2 = {(m-Bk_{\bot}^2)^2+k_z^2}$,
is a torus in the reciprocal space. The barrier is shown as blue
rectangular parallelepiped. Within the parallelepiped the potential energy
is $U$, outside it is zero.
\label{Fig3}
}
\end{figure}
We impose two constraints on the following discussion. First, we will
assume that the incoming electron is characterized by the momentum
projection
$k_y=k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (+)$}}$.
The incoming electron with
$k_y= - k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}$
will not be studied. Second, only the limit
$k_x=0$
will be explicitly discussed. Non-zero
$k_x$
may easily be accounted for by a renormalization of the parameter $m$. With
this in mind, we can write the wave function to the left of the barrier
($y<0$)
as a sum of the incident plane wave and two reflected plane waves:
\begin{eqnarray}
\!\!\psi_1 \!=\!e^{i k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (+)$}} y}\!
\begin{pmatrix}
1\\
-\chi\\
\end{pmatrix}\!
+\!
r_{\scalebox{0.8}{$\scriptscriptstyle +$}}
e^{-i k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (+)$}} y} \!
\begin{pmatrix}
1\\
-\chi\\
\end{pmatrix}\!+\!
r_{\scalebox{0.8}{$\scriptscriptstyle -$}}
e^{ik_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}y}\!
\begin{pmatrix}
1\\
\chi\\
\end{pmatrix}\!,
\label{psi_1_y}
\end{eqnarray}
where
\begin{eqnarray}
\chi=\sqrt{\frac{\varepsilon-k_z}{\varepsilon+k_z}}.
\end{eqnarray}
In the region under the barrier
($0<y<L$),
the wave function can be expressed as
\begin{eqnarray}
\psi_2
&=&
\tilde{a}_{{\scalebox{0.8}{$\scriptscriptstyle -$}}}
e^{iq_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}y}\!
\begin{pmatrix}
1\\
\phi\\
\end{pmatrix}\!
+
\tilde{b}_{\scalebox{0.8}{$\scriptscriptstyle -$}}
e^{-iq_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}y}\!
\begin{pmatrix}
1\\
\phi\\
\end{pmatrix} \!
\\
\nonumber
&+&\tilde{a}_{\scalebox{0.8}{$\scriptscriptstyle +$}}
e^{iq_{y}^{\scalebox{0.8}{$\scriptscriptstyle (+)$}}y}\!
\begin{pmatrix}
1\\
-\phi\\
\end{pmatrix}\!
+
\tilde{b}_{\scalebox{0.8}{$\scriptscriptstyle +$}}
e^{-iq_{y}^{\scalebox{0.8}{$\scriptscriptstyle (+)$}}y}
\begin{pmatrix}
1\\
-\phi\\
\end{pmatrix},
\label{psi_2_y}
\end{eqnarray}
where
\begin{eqnarray}
\phi=\sqrt{\frac{\varepsilon-U-k_z}{\varepsilon-U+k_z}}.
\end{eqnarray}
Finally, to the right of the barrier
($y>L$),
we have
\begin{eqnarray}
\psi_3\!
=\!
t_{\scalebox{0.9}{$\scriptscriptstyle -$}}
e^{-ik_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}(y-L)}\!
\begin{pmatrix}
1\\
\chi\\
\end{pmatrix}\!
+
t_{\scalebox{0.9}{$\scriptscriptstyle +$}}
e^{ik_{y}^{\scalebox{0.8}{$\scriptscriptstyle (+)$}}(y-L)}\!
\begin{pmatrix}
1\\
-\chi\\
\end{pmatrix}\!.
\label{psi_3_y}
\end{eqnarray}
The continuity of the probability current at the barrier edges requires the
continuity of the wave function and its $y$-derivative at
$y=0$
and
$y=L$.
As a result, we obtain
\begin{eqnarray}\label{system}
\nonumber
&&(1\!+\!r_{\scalebox{0.8}{$\scriptscriptstyle +$}})\!\!
\begin{pmatrix}
1\\
-\chi\\
\end{pmatrix}
\!+\!
r_{\scalebox{0.8}{$\scriptscriptstyle -$}}\!
\begin{pmatrix}
1\\
\chi\\
\end{pmatrix}
\!=
(\tilde{a}_{\scalebox{0.9}{$\scriptscriptstyle -$}}\!+\!\tilde{b}_{\scalebox{0.9}{$\scriptscriptstyle -$}})\!\!
\begin{pmatrix}
1\\
\phi\\
\end{pmatrix}\!
+
(\tilde{a}_{\scalebox{0.9}{$\scriptscriptstyle +$}}\!+\!\tilde{b}_{\scalebox{0.9}{$\scriptscriptstyle +$}}) \!
\begin{pmatrix}
1\\
-\phi\\
\end{pmatrix},
\\
\nonumber
&&k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (+)$}}(1-r_{\scalebox{0.8}{$\scriptscriptstyle +$}}\!)\!
\begin{pmatrix}
1\\
-\chi\\
\end{pmatrix}
+
r_{\scalebox{0.8}{$\scriptscriptstyle -$}}
k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}\!
\begin{pmatrix}
1\\
\chi\\
\end{pmatrix}\!
=\!
q_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}(\tilde{a}_{\scalebox{0.9}{$\scriptscriptstyle -$}}\!-\!\tilde{b}_{\scalebox{0.9}{$\scriptscriptstyle -$}})
\!
\begin{pmatrix}
1\\
\phi\\
\end{pmatrix}
\\
&&+
q_{y}^{\scalebox{0.8}{$\scriptscriptstyle (+)$}}(\tilde{a}_{\scalebox{0.9}{$\scriptscriptstyle +$}}-\tilde{b}_{\scalebox{0.9}{$\scriptscriptstyle +$}})\!\!
\begin{pmatrix}
1\\
-\phi\\
\end{pmatrix},
\\
&&\left(\tilde{a}_{\scalebox{0.9}{$\scriptscriptstyle -$}}e^{iq_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}L}\!+\!\tilde{b}_{\scalebox{0.9}{$\scriptscriptstyle -$}}e^{-iq_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}L}\right)\!\!
\begin{pmatrix}
1\\
\phi\\
\end{pmatrix}
\!+\!
\left(\tilde{a}_{\scalebox{0.9}{$\scriptscriptstyle +$}}e^{iq_{y}^{\scalebox{0.8}{$\scriptscriptstyle (+)$}}L}\!+\!\tilde{b}_{\scalebox{0.9}{$\scriptscriptstyle +$}}
e^{-iq_{y}^{\scalebox{0.8}{$\scriptscriptstyle (+)$}}L}\right)\!\!
\begin{pmatrix}
1\\
-\phi\\
\end{pmatrix}
\nonumber
\\
&&=
t_{\scalebox{0.9}{$\scriptscriptstyle -$}}
\begin{pmatrix}
1\\
\chi\\
\end{pmatrix}
+
t_{\scalebox{0.9}{$\scriptscriptstyle +$}}
\begin{pmatrix}
1\\
-\chi\\
\end{pmatrix},
\nonumber
\\
&&q_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}\left(
\tilde{a}_{\scalebox{0.9}{$\scriptscriptstyle -$}}
e^{iq_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}L}\!-\!\tilde{b}_{\scalebox{0.9}{$\scriptscriptstyle -$}}
e^{-iq_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}L}
\right)\!\!
\begin{pmatrix}
1\\
\phi\\
\end{pmatrix}
\!+\!
\nonumber
\\
&&q_{y}^{\scalebox{0.8}{$\scriptscriptstyle (+)$}}\!\left(
\tilde{a}_{\scalebox{0.9}{$\scriptscriptstyle +$}}
e^{iq_{y}^{\scalebox{0.8}{$\scriptscriptstyle (+)$}}L}\!-\!\tilde{b}_{\scalebox{0.9}{$\scriptscriptstyle +$}}
e^{-iq_{y}^{\scalebox{0.8}{$\scriptscriptstyle (+)$}}L}
\right)\!\!
\begin{pmatrix}
1\\
-\phi\\
\end{pmatrix}
\nonumber
\!=\!
k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}
t_{\scalebox{0.9}{$\scriptscriptstyle -$}}\!
\begin{pmatrix}
1\\
\chi\\
\end{pmatrix}
\!+\!
k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (+)$}}
t_{\scalebox{0.9}{$\scriptscriptstyle +$}}\!
\begin{pmatrix}
1\\
-\chi\\
\end{pmatrix}\!.
\end{eqnarray}
Solving these equations numerically, we calculate the amplitudes
$r_{\scalebox{0.8}{$\scriptscriptstyle \pm$}}$,
$t_{\scalebox{0.8}{$\scriptscriptstyle \pm$}}$,
and obtain two transmission coefficients
$T_{\scalebox{0.9}{$\scriptscriptstyle \pm$}}
=
|t_{\scalebox{0.9}{$\scriptscriptstyle \pm$}}|^2$
and two reflection coefficients
$R_{\scalebox{0.9}{$\scriptscriptstyle \pm$}}
=
|r_{\scalebox{0.9}{$\scriptscriptstyle \pm$}}|^2$.
Particle conservation implies that
$T_{\scalebox{0.9}{$\scriptscriptstyle +$}}
+
T_{\scalebox{0.9}{$\scriptscriptstyle -$}}
+
R_{\scalebox{0.9}{$\scriptscriptstyle +$}}
+
R_{\scalebox{0.9}{$\scriptscriptstyle -$}} = 1$.
The results of the calculations are shown in
Figs.~\ref{Fig4}
and~\ref{Fig5},
where transmission and reflection coefficients are plotted as functions of
$\varepsilon$ for different parameter values.
When the energy $\varepsilon$ is within the interval
$k_z<\varepsilon<\sqrt{k_z^2+m^2}$,
the quantities
$k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (\pm)$}}$
are all real. Consequently, all four scattering channels are open. Under
these conditions, the scattering of the incident wave with
$k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (+)$}}$
to the reflected wave with
$k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}$
may be significant. It becomes particularly strong if $\varepsilon$ is
close to
$k_z$:
in this regime
$R_- \rightarrow 1$
when
$\varepsilon \rightarrow k_z$,
see
Fig.~\ref{Fig4}.
At the opposite end of the considered energy interval,
$\varepsilon \rightarrow \sqrt{k_z^2+m^2}$,
the reflection probabilities vanish, and the incident particle passes
through the barrier without reflection, preserving its momentum.
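For a given energy and $k_z$, the two admissible momentum projections $k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (\pm)$}}$ follow directly from the iso-energy condition $\varepsilon^2=(m-Bk_{\bot}^2)^2+k_z^2$ at $k_x=0$: $k_y^{(\pm)}=\sqrt{(m\pm\sqrt{\varepsilon^2-k_z^2})/B}$. A minimal sketch of this bookkeeping (Python; the parameter values are illustrative and the function name is ours):

```python
import cmath

def ky_roots(eps, kz, m=1.0, B=1.0):
    """Roots of eps^2 = (m - B*ky^2)^2 + kz^2 at k_x = 0:
    ky^(+-) = sqrt((m +- sqrt(eps^2 - kz^2)) / B)."""
    s = cmath.sqrt(eps**2 - kz**2)
    kp = cmath.sqrt((m + s) / B)
    km = cmath.sqrt((m - s) / B)
    return kp, km

# Inside the window kz < eps < sqrt(kz^2 + m^2), both roots are real,
# so all four scattering channels are open.
kp, km = ky_roots(eps=0.5, kz=0.3)
print(abs(kp.imag) < 1e-12 and abs(km.imag) < 1e-12)  # True: both propagating

# Above the window (eps > sqrt(kz^2 + m^2) ~ 1.044 here), ky^(-) becomes
# imaginary, i.e. that channel turns evanescent.
kp2, km2 = ky_roots(eps=1.2, kz=0.3)
print(abs(km2.imag) > 0)  # True
```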
\begin{figure}[t]
\center{\includegraphics[width=1\linewidth]{barrier_in_y_kz=0,3_m=1_B=1_L=5}}
\caption{Transmission and reflection coefficients as functions of energy.
The curves are calculated for
$k_z/m=0.3$,
$U/m=1$,
$mL=5$
and
$Bm=1$.
Coefficients
$T_{\pm}(R_{\pm})$
correspond to the transmitted (reflected) waves with
$k_{y}^{(\pm)}$.
We assume that
$k_x=0$
because nonzero
$k_x$
only renormalizes $m$.
\label{Fig4}
}
\end{figure}
\begin{figure}[t]
\center{\includegraphics[width=1\linewidth]{barrier_in_y_kz=0_m=1_B=1_L=5}}
\caption{Transmission and reflection coefficients versus energy for the
scattering confined to the basal plane
($k_z = 0$).
The curves are calculated for
$U/m=1$,
$mL=5$
and
$Bm=1$.
Coefficients
$T_{\pm}(R_{\pm})$
describe the transmitted (reflected) waves with
$k_{y}^{(\pm)}$.
In this regime both
$T_-$
and
$R_-$
vanish. We assume that
$k_x=0$
because nonzero
$k_x$
only renormalizes $m$.
\label{Fig5}
}
\end{figure}
Analytical expressions for the transmission and reflection coefficients
can be obtained in the limit
$k_z=0$,
when the incident particle momentum lies in the $xy$-plane. (Since
$k_z$
is conserved, the momenta of the transmitted and reflected particles are
confined to the basal plane as well.) In such a situation
Hamiltonian~(\ref{eq::hamilt})
decouples into two copies of a scalar non-relativistic Hamiltonian with the
spectrum
${\varepsilon(k_y,k_z=0)=\pm (m-Bk_y^2)}$,
and
$\chi=\phi=1$.
It is easy to check that a particle with the momentum component
$k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (+)$}}
=
\sqrt{\left(m+\varepsilon\right)/B}$
cannot be scattered to the
$k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}$
channel since
$k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (+)$}}$
and
$k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}$
belong to different sectors. Solving
Eqs.~\eqref{system}
in this limit, we obtain
$R_{\scalebox{0.8}{$\scriptscriptstyle -$}}
=T_{\scalebox{0.8}{$\scriptscriptstyle -$}} = 0$
and
\begin{eqnarray}
\label{eq::non_rel}
R_{\scalebox{0.8}{$\scriptscriptstyle +$}}
=
\frac{
[(q_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}})^2
-
(k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (+)$}})^2]^2
\sin^2(q_y^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}L)
}
{
4(q_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}
k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (+)$}})^2
+
[(q_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}})^2
-
(k_{y}^{\scalebox{0.8}{$\scriptscriptstyle (+)$}})^2]^2
\sin^2(q_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}L)
},
\end{eqnarray}
where
$q_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}
=
\sqrt{\left(m-|\varepsilon-U|\right)/B}$.
The dependencies of
$R_{\scalebox{0.8}{$\scriptscriptstyle +$}}$
and
$T_{\scalebox{0.8}{$\scriptscriptstyle +$}}
=
1-R_{\scalebox{0.8}{$\scriptscriptstyle +$}}$
versus
$\varepsilon$
are shown in
Fig.~\ref{Fig5}.
These functions are non-monotone due to the dimensional oscillating factor
$\sin^2(q_y^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}L)$.
Equation~(\ref{eq::non_rel})
is, in some respects, similar to the expression describing scattering of a
non-relativistic particle on a rectangular
barrier~\cite{Griffiths}.
However, there is an important difference. The transmission coefficient of
a non-relativistic particle
$T_{\rm nrel} (\varepsilon)$
oscillates due to a dimensional effect if
$\varepsilon > U$,
but is monotone if
$\varepsilon < U$.
In the case of the nodal-ring semimetal, the functions
$R_{\scalebox{0.8}{$\scriptscriptstyle +$}}(\varepsilon)$
and, consequently,
$T_{\scalebox{0.8}{$\scriptscriptstyle +$}}(\varepsilon)$
are non-monotone even for
$\varepsilon < U$.
Mathematically, this occurs due to the existence of the plane wave
solutions with
${\rm Im}\, q_{y}^{\scalebox{0.8}{$\scriptscriptstyle (-)$}}
=0$
in the regime
$\varepsilon<U$.
The case of the infinite-length barrier can be considered in the same
manner. After matching the wave function and its derivative at the barrier
edge, reflection coefficients
$R_{\scalebox{0.8}{$\scriptscriptstyle \pm$}}$
can be calculated. In general, both are non-zero, that is, scattering into all
four channels is possible. In the limit
$k_z = 0$
we can find explicit formulas
\begin{eqnarray}
R_{+}&\!=\!&\left(\!\frac{k_y^{(+)}\!-\!q_y^{(+)}}{k_y^{(+)}\!+\!q_y^{(+)}}\!\right)^2\!,\,\,\, T_{+}\!=\!\left(\!\frac{2k_y^{(+)}}{k_y^{(+)}\!+\!q_y^{(+)}}\!\right)^2\!,\\ \nonumber
R_{-}&\!=\!&T_{-}=0.
\label{eq::non_rel_inf}
\end{eqnarray}
This result is similar to that for the scattering of a non-relativistic particle.
\section{Conclusion}\label{sec::discussion}
We have shown that electron scattering in nodal-line semimetals
demonstrates unusual features, such as the Klein tunneling, reflectionless
transmission at `magic angles' (an analogue of the classical
Ramsauer-Townsend effect), and the emergence of additional scattering
channels.
The Klein tunneling occurs for a barrier parallel to the basal plane. The
momentum of the incident particle must satisfy the condition
$m-Bk_{\bot}^2=0$.
If these requirements are met,
Hamiltonian~\eqref{eq::hamilt}
effectively describes one-dimensional massless fermions, for which the
Klein tunneling is a well-established phenomenon.
Besides the Klein tunneling, reflectionless propagation across the barrier
can be observed for a particle colliding with the barrier at certain `magic
angles'. These angles depend on the barrier width. Such behavior is
related to the Ramsauer-Townsend effect for a non-relativistic quantum
particle. Similar phenomena have been discussed for
graphene~\cite{chiral_tunn_tudorovskiy2012}.
However, the classical Ramsauer-Townsend effect exists for a particle
whose energy exceeds the height of the barrier, while in the nodal-line
semimetals a particle with
$\varepsilon < U$
also demonstrates the same reflectionless propagation.
When the barrier is perpendicular to the basal plane, the Klein tunneling
is impossible. In this configuration another interesting phenomenon can be
observed: the second scattering channel becomes available both for
transmitted and reflected particles. Such an unusual scattering occurs
because the system of equations describing conservation of the electron
energy and momentum has two different roots. Therefore, two different
values of
$|k_y|$
are admissible. The first of these values is the same as
$|k_y|$
of the incident particle, while the second differs. Thus, the wave
functions of the transmitted and reflected particles are superpositions of
two states with unequal momenta. Depending on the scattering parameters,
the probability of changing
$|k_y|$
after a scattering event can be substantial. We prove this for scattering
processes confined to the basal plane.
\section*{Acknowledgments} This work was supported by the Presidium of RAS
(Program I.7, Modern problems of photonics, the probing of inhomogeneous
media and materials).
\section{Introduction}
\label{sec:mp_intro}
\IEEEPARstart{O}{wing} to the advent of online social networks, diffusing information to a large population in a short span of time has become a reality.
Product companies (or campaigners) use this fact to their advantage by harnessing social networks for viral marketing, wherein they offer free or discounted product samples (or present some information) to a selected set of individuals, who advertise the product (or forward the information) to their friends. If these friends buy the product and like it, they are likely to advertise it to their friends, and this process goes on.
A fundamental problem with respect to this idea is to select the nodes to whom free or discounted product samples are to be given (often referred to as {\em seed nodes\/}), such that the number of individuals influenced by the product marketing (often referred to as {\em extent of diffusion\/}) is maximized.
This leads to an important optimization problem: given a budget on the number of seed nodes, which nodes should be selected for seeding so that the extent of diffusion is maximized?
This problem is often referred to as the problem of {\em influence maximization in a social network\/}.
Owing to inherent uncertainties in human behavior, transmission of information, interaction frequencies, etc., the process of information diffusion in a social network is a stochastic one: it may unfold as any one of several instances, each occurring with a certain probability.
A widely used metric of performance of an influence maximizing seed-selection algorithm is the expected extent of diffusion taken over all instances (or a large number of instances if computing expectation over all instances is computationally hard).
However, selecting a set of seed nodes may lead to an excellent extent of diffusion in one instance, while a very poor extent in another.
This has motivated researchers to consider the possibility of adaptive seeding, which could reduce the uncertainty involved by adaptively selecting seed nodes based on the diffusion observed thus far.
In order to understand the advantages and disadvantages of multiphase diffusion, we first present some preliminaries.
\subsection{Preliminaries}
Consider a social network $G$, with $N$ as its set of $n$ nodes and $E$ as its set of $m$ weighted directed edges.
\subsubsection{Independent Cascade (IC) model}
Each directed edge $(u,v) \in E$ has an associated weight indicating the influence probability $p_{uv}$ (the probability with which node $u$ would influence node $v$, if node $u$ is influenced).
The diffusion progresses in discrete time steps. At time 0, the selected seed nodes are influenced. At time step 1, each seed node $u$ independently attempts to influence each of its neighbors $v$ and succeeds with probability $p_{uv}$. In time step 2, all the nodes that were influenced in time step 1 independently attempt to influence their respective neighbors and succeed with the corresponding influence probabilities.
This process continues until no further nodes can be influenced.
The expected extent of diffusion can be obtained by taking a weighted average of the number of nodes influenced over all possible diffusions using IC model (where weight is the probability of progressing according to the corresponding diffusion).
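The IC process and the Monte Carlo estimate of the expected extent of diffusion can be sketched as follows (Python; function names and the toy network are ours, and a simple heuristic random-sampling loop stands in for exact averaging over all diffusions):

```python
import random

def ic_spread(edges, seeds, rng):
    """One IC realization: returns the set of influenced nodes.
    edges: dict node -> list of (neighbor, influence probability)."""
    influenced, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v, p in edges.get(u, []):
                # Each newly influenced node attempts each neighbor exactly once.
                if v not in influenced and rng.random() < p:
                    influenced.add(v)
                    nxt.append(v)
        frontier = nxt
    return influenced

def expected_spread(edges, seeds, runs=10000, seed=0):
    """Monte Carlo estimate of the expected extent of diffusion."""
    rng = random.Random(seed)
    return sum(len(ic_spread(edges, seeds, rng)) for _ in range(runs)) / runs

# Toy network: A influences C with probability 0.5; C always influences D.
edges = {"A": [("C", 0.5)], "C": [("D", 1.0)]}
print(expected_spread(edges, {"A"}))  # ~ 2.0 = 1 + 0.5*2
```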
\subsubsection{Live Graph}
A live graph $L$ is an instance of $G$, obtained by sampling edges with corresponding edge influence probabilities.
A live graph, being an instance, is a directed but unweighted graph.
An edge $(u,v)$ is present in it with probability $p_{uv}$ and absent with probability $1-p_{uv}$, independent of the presence of other edges.
The probability of occurrence of a live graph $L$ is thus,
$\prod_{(u,v) \in L} (p_{uv}) \prod_{(u,v) \notin L} (1-p_{uv})$.
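This product formula can be sketched by enumerating all live graphs of a small network (Python; the generator name and the hypothetical two-edge network are ours):

```python
from itertools import product

def live_graphs(prob_edges):
    """Enumerate all live graphs. prob_edges: dict edge -> probability.
    Yields (frozenset of retained edges, probability of that live graph)."""
    items = list(prob_edges.items())
    for keep in product([True, False], repeat=len(items)):
        pr, live = 1.0, set()
        for (edge, p), kept in zip(items, keep):
            # Each edge is retained independently with its influence probability.
            if kept:
                pr *= p
                live.add(edge)
            else:
                pr *= 1.0 - p
        yield frozenset(live), pr

# Two stochastic edges give 2^2 = 4 live graphs whose probabilities sum to 1.
probs = {("u", "v"): 0.5, ("v", "w"): 0.3}
total = sum(pr for _, pr in live_graphs(probs))
print(abs(total - 1.0) < 1e-12)  # True
```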
Kempe, Kleinberg, and Tardos
\cite{kempe2003maximizing} show that,
since influence probabilities do not change with time, sampling an edge $(u,v)$ in the beginning of diffusion is equivalent to sampling it when $u$ is influenced.
The expected extent of diffusion starting from a set of seed nodes, can thus be defined as a weighted average of the number of nodes reachable from that set over all live graphs (where weight is the probability of occurrence of the corresponding live graph).
\subsubsection{Multiphase Information Diffusion using IC Model}
Let $p$ be the number of phases for which the diffusion is planned to run.
After the selection of a certain number of seed nodes in the first phase using an influence maximizing algorithm, the diffusion starts and progresses according to IC model, until no further nodes can be influenced.
Then based on the observed diffusion thus far, the network could be modified by removing nodes which are already influenced, since they would play no further role in changing the diffusion state of the network.
This modified network can be viewed as the {\em diffusion state\/} of the original network after the first phase.
Now a certain number of seed nodes are selected for the second phase using the influence maximizing algorithm on this modified network, following which, the diffusion progresses until no further nodes can be influenced. This process repeats until the termination of the last phase (phase $p$).
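The phase loop just described can be sketched as follows (Python; deterministic edges for reproducibility, and a highest-out-degree rule as a hypothetical stand-in for an actual influence-maximizing algorithm):

```python
def reachable(adj, seeds):
    """Nodes reachable from seeds when every edge transmits (all p_uv = 1)."""
    seen, stack = set(seeds), list(seeds)
    while stack:
        u = stack.pop()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def multiphase(adj, nodes, budget_split):
    """Myopic multiphase seeding: in each phase, seed the k_q highest
    out-degree nodes of the residual network, run the diffusion to
    completion, then remove influenced nodes before the next phase."""
    influenced = set()
    for k_q in budget_split:
        # Residual network: already influenced nodes play no further role.
        radj = {u: [v for v in vs if v not in influenced]
                for u, vs in adj.items() if u not in influenced}
        candidates = sorted(nodes - influenced,
                            key=lambda u: (-len(radj.get(u, [])), u))
        influenced |= reachable(radj, candidates[:k_q])
    return influenced

# Two components 0->1 and 2->3: a (1, 1) split seeds one component per
# phase and influences all four nodes.
print(multiphase({0: [1], 2: [3]}, {0, 1, 2, 3}, (1, 1)))
```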
Note that we could have initiated a phase before the termination of the preceding phase; however, this would partially defeat the purpose of using multiple phases, since the diffusion would not be observed to completion. As our primary goal is to study the effectiveness of multiphase diffusion, we use multiple phases at their full potential by allowing the diffusion in a phase to terminate before initiating the next phase.
We now provide an intuition for why multiphase diffusion would be advantageous as compared to single-phase diffusion.
An influence maximizing algorithm which is intended to maximize the expected number of influenced nodes, would possibly not select an influential node if it is likely to get influenced owing to other already selected seed nodes with high probability. But unless this high probability is equal to 1, there will exist `bad' live graphs in which the influential node would not get influenced. Adaptive seeding would select this node as a seed node if our observed diffusion indicates that the underlying live graph is `bad'. This would thus improve the extent of diffusion in expectation.
On the other hand, the algorithm may have selected a node because it is influential enough, but not likely to get influenced owing to other already selected seed nodes. Again, there would exist live graphs in which this node gets influenced without having to select it as seed node. In such live graphs, adaptive seeding would instead select another node which did not actually get influenced in our observed diffusion, which again would lead to a higher expected extent of diffusion.
A drawback of multiphase diffusion is that the diffusion may progress at a slower rate owing to the delay in selecting seed nodes in subsequent phases.
Like in most of the literature, we consider that this delay does not impact the value of our diffusion; we provide a note on accounting for this delay at the end of the paper.
Given a total budget of $k$ that is to be distributed across $p$ phases,
let $k_q$ be the budget allotted for phase $q$.
\begin{definition}[Budget split]
A budget split is a vector representing how the total budget is allotted for different phases.
\end{definition}
So for an information diffusion process that is executed over $p$ phases, the budget split can be represented as $(k_1,\ldots,k_p) = (k_q)_{q=1}^p$.
We use $\mathbf{K}$ to denote a budget split.
Since the total budget is $k$, we should have that
$
\sum_{q=1}^p k_q \leq k
$.
Note that if there is any surplus budget $(k-\sum_{q=1}^p k_q)$, this surplus can be used up in the terminal phase to influence additional nodes which could not be influenced at the end of phase $p$. So it is optimal to have the above constraint tight, that is,
$
\sum_{q=1}^p k_q = k
$.
\begin{definition}[Optimal budget split]
Given an influence maximizing algorithm and a budget for a network, an optimal budget split is one that maximizes the expected extent of diffusion achieved over all phases combined.
\end{definition}
Given an influence maximizing algorithm and budget $k$ for a network,
let $\beta_q(\mathbf{K})$ be the expected extent of diffusion or the expected number of nodes influenced in phase $q$, if the budget split is $\mathbf{K}$.
So the expected extent of diffusion over $p$ phases is $\sum_{q=1}^p \beta_q(\mathbf{K})$. An optimal budget split is, thus,
\begin{small}
\begin{align*}
\mathbf{K}^* = (k_1^*,\ldots,k_p^*) = \argmax_{\mathbf{K}} \sum_{q=1}^p \beta_q(\mathbf{K})
\end{align*}
\end{small}
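For small $k$ and $p$, the argmax above can be found by brute force over all budget splits (Python; function names are ours, and a simple concave toy objective stands in for the expensive estimate of $\sum_q \beta_q(\mathbf{K})$):

```python
def splits(k, p):
    """All budget splits (k_1, ..., k_p): nonnegative integers summing to k."""
    if p == 1:
        yield (k,)
        return
    for k1 in range(k + 1):
        for rest in splits(k - k1, p - 1):
            yield (k1,) + rest

def optimal_split(k, p, objective):
    """Brute-force K* = argmax_K objective(K), where `objective` stands in
    for the total expected extent of diffusion over all phases."""
    return max(splits(k, p), key=objective)

# Illustrative objective with diminishing returns: favors an even split.
best = optimal_split(4, 2, lambda K: sum(kq**0.5 for kq in K))
print(best)  # (2, 2)
```

The number of splits grows as $\binom{k+p-1}{p-1}$, so this enumeration is only a sketch; the paper's setting calls for an estimation-based search instead.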
\subsection{Relevant Work}
\label{sec:mp_relevant}
The problem of maximizing information diffusion in social networks was first studied from an algorithmic and computational viewpoint by
Kempe, Kleinberg, and Tardos
\cite{kempe2003maximizing}, who showed a $\left( 1-\frac{1}{e} \right)$-approximation guarantee for the greedy algorithm for selecting seed nodes.
However, it is computationally infeasible to run this algorithm on large social networks.
Several alternatives have been proposed to bypass this computational barrier.
Goyal, Lu, and Lakshmanan
\cite{goyal2011celf} present
a lazy forwarding approach to avoid unnecessary computations made in the greedy algorithm.
Chen, Wang, and Yang
\cite{chen2009efficient}
present
a number of efficient versions of the greedy algorithm and also propose a very fast degree discount heuristic.
Wang, Chen, and Wang
\cite{wang2012scalable}
propose a fast heuristic (PMIA) based on the concept of maximum influence arborescence, and show that it performs very close to the greedy algorithm on real-world social network datasets.
Jung, Heo, and Chen
\cite{jung2012irie} propose an even faster high-performance heuristic (IRIE) by integrating influence ranking and influence estimation,
making it feasible to run on networks with tens of millions of edges.
Adaptive seeding or diffusing information through a social network in more than one phase is a relatively nascent area.
Golovin and Krause
\cite{golovin2010adaptive,golovin2011adaptive}
introduce the concept of adaptive submodularity
and prove that any problem satisfying this property admits an adaptive greedy algorithm with an approximation guarantee;
they show that this property is satisfied for adaptive seeding in IC model.
For
adaptive seeding with any monotone submodular
function,
Badanidiyuru et al.
\cite{badanidiyuru2016locally}
propose an approximation algorithm based on locally-adaptive policies.
Specific to influence maximization in social networks,
Singer
\cite{singer2016influence}
presents a survey on adaptive seeding methodologies.
Seeman and Singer
\cite{seeman2013adaptive} were among the first to dedicatedly study the adaptive seeding framework.
Rubinstein, Seeman, and Singer
\cite{rubinstein2015approximability}
study the approximability of adaptive seeding algorithms that incentivize
nodes with heterogeneous activation costs.
Horel and Singer
\cite{horel2015scalable} develop scalable methods for adaptive selection of the seed set with provable guarantees for models in which the influence of a set can be expressed as the sum of the influence of its members. However, these methods do not apply to IC-like models.
Correa et al.
\cite{correa2015adaptive}
show that in the homogeneous case (where every pair of nodes randomly meet at the same rate), the adaptivity benefit is bounded by a constant.
Dhamal, Prabuchandran, and Narahari
\cite{dhamal2phase} show a trade-off between the size of the observed diffusion
and the exploitation based on the observed
diffusion,
while splitting the budget between two phases.
Tong et al.
\cite{tong2016adaptive}
study adaptive seeding in the dynamic IC model, and provide performance guarantee of the greedy adaptive seeding algorithm.
Yuan and Tang
\cite{tang2016no}
present a theoretical study of a framework where seed nodes can be selected before the ongoing diffusion terminates, and develop a policy that achieves a bounded approximation ratio.
Mondal, Dhamal, and Narahari
\cite{mondal2017two}
study the influence maximization problem in two phases, where the first phase is regular diffusion and the second phase is boosted using referral incentives.
\subsection{Contributions}
To the best of our knowledge, this is the first work to study information diffusion in more than two phases, and present insights on the distribution, phasewise progression, and optimal budget split.
Our specific contributions are as follows:
\begin{itemize}
\item
We present a negative result that more phases do not guarantee a higher extent of diffusion.
\item
Using real-world network datasets, we study how diffusing in multiple phases affects the mean and standard deviation of the distribution representing extent of diffusion.
\item
We study the effectiveness of multiphase diffusion with respect to the number of phases, and the phase-by-phase progression of diffusion so as to quantify the delay in diffusion owing to the delayed selection of seed nodes.
\item
We develop a method for determining an optimal budget split for a given number of phases, based on the nature of the underlying network.
\end{itemize}
\section{Problems in Multiphase Diffusion}
Consistent with almost all studies on adaptive seeding, we assume that seed nodes are selected in a given phase without considering their eventual impact on the next phase, that is, seed nodes are selected in phase $q$ by using a single phase optimal policy with the reduced budget of $k_q$, without accounting for the presence of phase $q+1$.
This approach is termed as {\em myopic approach\/} by Dhamal, Prabuchandran, and Narahari \cite{dhamal2phase}.
In their study, the authors show that using farsighted approach (selecting nodes in a phase by considering its impact on the next phase)
with \textbf{any} budget split
would always lead to a better extent of diffusion in expectation, but make no claim as to whether the myopic multiphase approach always outperforms the single-phase approach.
We fill this gap by showing a negative result.
\subsection{A Negative Result}
Firstly, it can be easily seen that more phases may not be advantageous if the budget split is not made judiciously.
For instance, a 2-phase budget split of $(\frac{1}{3}k,\frac{2}{3}k)$ would most certainly be better than a 3-phase budget split of $(k-2,1,1)$ for reasonably large $k$, since the latter would perform close to single phase, while a $(\frac{1}{3}k,\frac{2}{3}k)$ split would have a significant gain over single phase \cite{dhamal2phase}.
However, one could ask: would having more phases, obtained by subdividing the allocation of an existing phase, result in better performance?
We show the answer is negative using a simple counterexample.
According to \cite{dhamal2phase},
using a farsighted approach, a budget split of $(1,1)$ would perform at least as well as single phase with $k=2$ on any network.
We show that this is not guaranteed using a myopic approach, that is, there exists a network for which we could come up with a 2-phase budget split that performs worse than single phase.
\begin{proposition}
Replacing budget split $(\ldots,k_q,\ldots)$ with $(\ldots,x,k_q-x,\ldots),x \in \{1,\ldots,k_q-1\}$ may lead to a worse extent of diffusion, even with respect to an optimal policy.
\label{prop:negative}
\end{proposition}
\begin{proof}
~
\\
\begin{minipage}{.32\textwidth}
We show this with a counterexample for $p=2,q=1$ and $k=k_q=2,x=1$.
\\
The edge influence probabilities are: $p_{AC}=p_{BC}=0.5 \,,\, p_{CD}=p_{CE}=1$.
The example graph is alongside.
\end{minipage}
\begin{minipage}{.14\textwidth}
\centering
\includegraphics[scale=.35]{countereg_cropped.pdf}
\end{minipage}
\vspace{3mm}
When using a single phase with $k=2$, the optimal solution to maximize expected diffusion is to select $A$ and $B$ as the two seed nodes.
Node $C$ would then get influenced with probability $1-(1-0.5)(1-0.5)=0.75$.
Since nodes $D$ and $E$ would get influenced with probability $1$ if node $C$ is influenced, we have that these two nodes would get influenced with probability $0.75$ each.
So with $A$ and $B$ as the two seed nodes, the expected number of nodes influenced at the end of the diffusion process is $1+1+0.75+0.75+0.75 = 4.25$.
It can be easily seen that selecting any other set of two nodes would lead to a lower expected extent of diffusion, for instance, $\{C,(A \text{ or } B)\}$ leads to $4$, $\{C,(D \text{ or } E)\}$ leads to $3$, $\{(A \text{ or } B),(D \text{ or } E)\}$ leads to $3$, and $\{D,E\}$ leads to $2$.
So using the optimal policy, the expected number of influenced nodes at the end of the single-phase diffusion process is $\mathbf{4.25}$.
When using two phases with budget split $(1,1)$, that is, $k_1=k_2=1$, the optimal node to select in the first phase is $C$, which leads to nodes $C,D,E$ getting influenced with probability $1$ in the first phase. So the expected number of nodes influenced at the end of the first phase is $3$. Selecting any other node would lead to a lower number of influenced nodes; for instance, $A$ or $B$ would lead to $2.5$, and $D$ or $E$ would lead to $1$.
Since node $C$ is the optimal choice for the first phase, we know with certainty that at the start of the second phase, nodes $C,D,E$ are already influenced. Since $k_2=1$, it is optimal to select either node $A$ or $B$ as the seed node for the second phase, which would lead to exactly $1$ additional node getting influenced in the second phase.
So at the end of this two-phase diffusion process, the number of influenced nodes is $\mathbf{4}$, which is lower than that achieved using a single phase ($4.25$).
\end{proof}
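The arithmetic of this counterexample can be verified by exact enumeration of the four live graphs (Python; function names are ours):

```python
from itertools import product

# Counterexample network: A -> C and B -> C with probability 0.5 each;
# C -> D and C -> E with probability 1.
stochastic = [("A", "C", 0.5), ("B", "C", 0.5)]
sure = {"C": ["D", "E"]}

def reach(live, seeds):
    seen, stack = set(seeds), list(seeds)
    while stack:
        u = stack.pop()
        for v in live.get(u, []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def expected_spread(seeds):
    """Exact expectation over the 2^2 = 4 live graphs of the two 0.5-edges."""
    total = 0.0
    for keep in product([True, False], repeat=len(stochastic)):
        pr, live = 1.0, {u: list(vs) for u, vs in sure.items()}
        for (u, v, p), kept in zip(stochastic, keep):
            pr *= p if kept else 1 - p
            if kept:
                live.setdefault(u, []).append(v)
        total += pr * len(reach(live, seeds))
    return total

print(expected_spread({"A", "B"}))  # 4.25: optimal single-phase seeding
print(expected_spread({"C"}))       # 3.0: phase 1 of the (1, 1) split;
# phase 2 can only add one more node, so the two-phase total is 4 < 4.25.
```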
The above negative result is of theoretical interest; in practice, it has been shown that adaptive seeding performs better than single-phase seeding on real-world network datasets.
It is also known that an adaptive method of diffusing information under IC model preserves the approximation guarantee of $\left( 1-\frac{1}{e} \right)$ provided by greedy algorithm \cite{golovin2010adaptive,golovin2011adaptive}.
In this paper, we use the state-of-the-art IRIE algorithm, which performs very close to the greedy algorithm while running several orders of magnitude faster.
\subsection{Problems of Interest}
\subsubsection{Distribution of the Extent of Diffusion}
Information diffusion under IC model being a stochastic process, there are uncertainties involved regarding how the diffusion progresses. This is an important selling point of multiphase diffusion, since it reduces the uncertainty while selecting seed nodes in subsequent phases.
It is a well-accepted practice in the literature to derive conclusions regarding the performance of an algorithm by only considering the expected number of nodes influenced at the end of the diffusion process (or the expected extent of diffusion).
However,
it would be interesting to study how using multiple phases affects the entire distribution of the extent of diffusion, instead of its expected value alone. In particular, we could plot the distributions, observe their nature, and also study the implications of their standard deviations (in addition to their means).
Further, it would be interesting to see how the distribution changes from phase to phase, and what it actually means when we say that multiphase diffusion reduces uncertainty.
\subsubsection{Impact of the Number of Phases}
It has been a consistent result in the two-phase diffusion and adaptive seeding literature that using two phases yields a significant gain over single phase. A natural question arises regarding how beneficial going beyond two phases would be.
A primary objective of using a social network for diffusing information is to enable the information to reach as many individuals as possible.
But it may also be important that the information reaches the individuals as early as possible, especially in the presence of competing information or when the value of the information decreases with time.
So if using $p+1$ phases instead of $p$ phases (with the same total budget $k$) improves the extent of diffusion only negligibly, it may be well advised not to increase the number of phases.
Further, each additional phase may incur additional costs.
This motivates us to study how the amount of gain changes as we increase the number of phases.
\subsubsection{Determining Optimal Budget Split}
Multiphase diffusion relies on the fact that we observe the diffusion at the end of a phase, and exploit this observation by adaptively selecting seed nodes in the following phase.
The effectiveness of a $p$-phase diffusion thus fundamentally depends on how the total budget is split among the $p$ phases; an optimal budget split strikes an optimal balance between observation and exploitation.
In order to generalize our findings to general social network datasets, it is also important to identify patterns and insights behind the observed optimal budget splits.
\subsubsection{Progression of Diffusion with Phases}
As mentioned earlier, though our primary objective is to reach or influence as many individuals as possible, it is also important that the information reaches them as early as possible, especially in the presence of competing information or when the value of the information or product decreases with time.
So given that we are diffusing information across a total of $p$ phases, it is important to know how many nodes get influenced at the end of each phase.
Several marketing, pricing, or campaigning decisions may be impacted by the knowledge of how diffusion progresses over its different phases.
For instance, if the product value decreases over time, a company may be willing to compromise on the optimal budget split and hence the final extent of diffusion, so as to have a higher extent of diffusion during the early phases.
\section{Simulation Setup}
\subsection{Simulation Technique}
We first discuss a naive approach of simulating multiphase diffusion, explain its drawbacks, then present our approach.
\subsubsection{A Naive Approach}
Starting with $k_1$ best seed nodes,
the simulations are first run for $\mathcal{M}_1$ Monte Carlo iterations, each according to IC model, to arrive at $\mathcal{M}_1$ possible diffusion states at the end of phase 1.
For each of these $\mathcal{M}_1$ states, we then adaptively select $k_2$ best seed nodes and run the simulations for $\mathcal{M}_2$ iterations to arrive at $\mathcal{M}_2$ diffusion states.
So now we have a total of $\mathcal{M}_1 \mathcal{M}_2$ diffusion states at the end of phase 2.
Continuing thus, we have $\prod_{q=1}^p \mathcal{M}_q$ diffusion states at the end of phase $p$.
If we run the simulations for $10^4$ iterations (as done for single phase in most papers in the literature) in each phase,
we need to run the influence maximizing algorithm on $1$ state (the given graph) in the first phase, $10^4$ states in the second phase, $10^{8}$ states in the third phase, and so on.
In addition to selecting seed nodes, simulating diffusion using the IC model would also add considerably to the running time;
we need to run the diffusion process $10^4$ times in the first phase, $10^8$ times in the second phase, $10^{12}$ times in the third phase, and so on.
So it is clear that the branching process grows exponentially and it is rather infeasible to run the simulations on large datasets over large number of phases.
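These counts can be summarized in a small sketch, assuming $\mathcal{M}_q = \mathcal{M}$ for every phase (the function name is ours, for illustration only):

```python
def naive_costs(p, M):
    """Counts for the naive p-phase simulation with M Monte Carlo
    iterations per phase: returns (number of diffusion states on which
    the seed-selection algorithm must run, number of IC-model diffusion
    simulations that must be run)."""
    seed_selection_states = sum(M ** (q - 1) for q in range(1, p + 1))
    diffusion_runs = sum(M ** q for q in range(1, p + 1))
    return seed_selection_states, diffusion_runs
```

With $\mathcal{M}=10^4$ and $p=3$, this already gives over $10^8$ seed-selection states and $10^{12}$ diffusion runs, matching the counts above.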
\subsubsection{Our Approach}
We presample a set of $\mathcal{M}$ live graphs before the diffusion starts, instead of determining the presence of each edge $(u,v)$ in the live graph after $u$ is influenced. We then use these as a common set of live graphs across various simulations.
This idea is similar to \cite{chen2009efficient}, wherein live graphs are predetermined to enable precomputation of reachability from any given node and hence avoid repeated computations during program execution.
Such an approach can be justified by considering that the underlying live graph already exists (but is known to us only probabilistically) and is uncovered during the diffusion process.
Also, finding reachability from a set of nodes in a live graph is equivalent to diffusing information starting from these nodes \cite{kempe2003maximizing}.
By presampling live graphs, the reachability from every node in every live graph is computed once and stored.
So its highlight is that we do not have to simulate diffusion using the IC model each time; we only retrieve the stored set of reachable nodes
\cite{chen2009efficient}.
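As a minimal illustration of this presampling idea (a sketch, not the paper's actual implementation; function names are ours), a live graph can be sampled once under the IC model and reachability then computed by a plain BFS:

```python
import random
from collections import deque

def sample_live_graph(edge_probs, rng):
    """Keep each directed edge (u, v) independently with probability p_uv."""
    return {(u, v) for (u, v), p in edge_probs.items() if rng.random() < p}

def reachable(live_edges, seeds):
    """Nodes reachable from `seeds` in a live graph: exactly the nodes
    influenced when `seeds` is the seed set (live-graph equivalence of
    the IC model)."""
    adj = {}
    for u, v in live_edges:
        adj.setdefault(u, []).append(v)
    seen, frontier = set(seeds), deque(seeds)
    while frontier:
        u = frontier.popleft()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return seen
```

With $\mathcal{M}$ such live graphs sampled and their reachable sets stored once, every later experiment (any budget split, any number of phases) reuses them.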
Another advantage of presampling a common set of live graphs for all simulations (for different budget splits and also different numbers of phases) is that we can not only compare their performances, but also reliably draw conclusions regarding aspects such as means and standard deviations of extents of diffusion during and after each phase, by comparing their distributions.
In our simulations, we set
$\mathcal{M}=10^4$.
For the datasets considered (listed later), this count of Monte Carlo simulations gave precise results (that is, running independent sets of $10^4$ Monte Carlo simulations led to extents of diffusion with almost equal means and standard deviations).
\subsection{Extending Algorithm to Multiple Phases}
We use IRIE as our influence maximizing algorithm for determining seed nodes.
To the best of our knowledge, it is the best known algorithm with respect to both performance (very close to the greedy algorithm) and running time (a few seconds for a graph with a million edges).
In our simulations, we set the damping factor $\alpha=0.7$, the value for which the authors found IRIE's accuracy to be the highest.
In the first phase, we run IRIE just as for single phase, albeit with a budget of $k_1$. The reachability from the selected $k_1$ seed nodes in each of the $\mathcal{M}$ live graphs leads to the corresponding $\mathcal{M}$ diffusion states.
Subsequently, for $q$ ranging from $2$ to $p$, we select $k_q$ seed nodes in phase $q$ for each of the diffusion states; considering the reachability from these newly selected nodes in the corresponding live graphs then leads to $\mathcal{M}$ new/updated diffusion states (which act as the starting point for phase $q+1$).
Hence the number of diffusion states for which we run IRIE is
\mbox{$(p-1)\mathcal{M}+1$} (including the starting state, the given graph itself).
It is to be noted that we run IRIE independently on these states,
that is, we use IRIE as a black box. We do not eliminate the possibility of adapting IRIE in a better way for diffusion in multiple phases; we defer this to future work.
The running time of IRIE increases approximately linearly with the number of edges in the network. So the overall running time of the entire multiphase seed selection algorithm for a given budget split is proportional to
\mbox{$|E|(p-1)\mathcal{M}$}.
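The phasewise procedure can be sketched as follows, with `select_seeds` a hypothetical stand-in for the black-box IRIE call (the actual IRIE scoring is not reproduced here; names are ours):

```python
def multiphase_extents(live_graphs, budgets, select_seeds):
    """Run the p-phase process on each presampled live graph.

    `live_graphs` is a list of adjacency dicts (u -> list of v) of live
    edges; `budgets` is the budget split (k_1, ..., k_p);
    `select_seeds(influenced, budget)` is a hypothetical stand-in for the
    black-box IRIE call, returning `budget` fresh seed nodes given the
    already-influenced set. Returns, per live graph, the cumulative
    extent of diffusion after each phase."""
    def reach(adj, nodes):
        # Influenced set = nodes reachable from `nodes` in the live graph.
        seen, stack = set(nodes), list(nodes)
        while stack:
            u = stack.pop()
            for v in adj.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    results = []
    for adj in live_graphs:
        influenced, extents = set(), []
        for k_q in budgets:
            seeds = select_seeds(influenced, k_q)  # adaptive selection
            influenced = reach(adj, influenced | set(seeds))
            extents.append(len(influenced))
        results.append(extents)
    return results
```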
\begin{figure*}
\small
\begin{tabular}{ccc}
\hspace{-8mm}
\includegraphics[scale=.44]{dist1_cropped.pdf}
&
\hspace{-7mm}
\includegraphics[scale=.44]{dist2_cropped.pdf}
&
\hspace{-6mm}
\includegraphics[scale=.44]{dist3_cropped.pdf}
\\
(a) Single phase diffusion
&
(b) Two-phase diffusion
&
(c) Three-phase diffusion
\end{tabular}
\caption{Distribution of extent of diffusion for different number of phases for NetHEPT (WC) with $k=200$
}
\label{fig:dist}
\end{figure*}
\subsection{Searching for Optimal Budget Split}
Given a budget $k$, the budget $k_q$ allotted for phase $q$ can take $k+1$ possible values (including $0$).
Since there are $(p-1)$ degrees of freedom owing to the constraint \mbox{$\sum_{q=1}^p k_q = k$},
the number of points (corresponding to possible budget splits) in the
standard discrete simplex is
{\tiny{$\dbinom{k+p-1}{p-1}$}}.
So it is infeasible to exhaustively search over all budget splits even for relatively small values of $p$, for practical values of $k$.
It is a general observation in the literature that
the extent of diffusion usually turns out to be a smooth function of the budget, that is, a slight change in budget usually does not result in a drastic change in the extent of diffusion.
We harness this to avoid exhaustively searching over all budget splits.
We search for an optimal budget split in two steps, by first doing a coarse search (looking at a small number of well-separated budget splits) and then a fine search (looking in the neighborhood of good-valued budget splits found in our coarse search).
We later briefly discuss how one could improve on this search technique for our particular problem.
In our coarse search, we assign each $k_q$ a value from $\{0, 0.1k, 0.2k, \ldots, k\}$ (11 values) such that
\mbox{$\sum_{q=1}^p k_q = k$}.
The number of points (possible budget splits) in the
standard discrete simplex is now
{\tiny{$\dbinom{(11-1)+(p-1)}{p-1}=\dbinom{p+9}{p-1}$}}. For $p=5$, this equals $1001$, which is still a large search space.
However, we could reduce it by noting that, if $k_q=0$ for some $q$, it is equivalent to having less than $p$ phases. Also, several budget splits would be equivalent, for instance, the 3-phase budget splits $(k_x,k_y,0),(k_x,0,k_y),(0,k_x,k_y)$ are all equivalent to the 2-phase budget split $(k_x,k_y)$.
So the results for such budget splits where $k_q=0$ for some $q$, can be directly appended from the results obtained for less than $p$ phases.
So in our coarse search, we only look at budget splits where $k_q>0$ for all $q$ and each $k_q$ is an integral multiple of $0.1k$.
This is equivalent to slicing a bar of length $k$ into $p$ pieces by making $p-1$ cuts at integral multiples of $0.1k$.
Since there are $9$ possible locations where we can make these $p-1$ cuts, the number of ways in which we can make these cuts is
{\tiny{$\dbinom{9}{p-1}$}}.
This is well defined since $p-1 \leq 9$ (our simulations have $p\leq 5$).
For $p=5$, this equals $126$, which is a tractable search space.
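The coarse search space can be enumerated exactly as the bar-cutting argument above describes; a sketch (function name is ours):

```python
from itertools import combinations

def coarse_splits(k, p):
    """All budget splits (k_1, ..., k_p) in which every k_q is a positive
    integral multiple of 0.1k: equivalent to making p-1 cuts at the 9
    interior multiples of 0.1k along a bar of length k."""
    interior = [round(i * k / 10) for i in range(1, 10)]  # 0.1k, ..., 0.9k
    splits = []
    for cuts in combinations(interior, p - 1):
        points = [0, *cuts, k]
        splits.append(tuple(points[i + 1] - points[i] for i in range(p)))
    return splits
```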
Following the coarse search, we do a fine search for budget allocations in multiples of $0.05k$ (rounded below, if required).
On finding the budget split vector giving the maximum extent of diffusion (the probability of multiple maxima is practically zero), say $(h_q)_{q=1}^p$, we look at the budget splits obtained by incrementing and decrementing its individual coordinates by $0.05k$.
Note that since we have $p-1$ degrees of freedom, we are looking at a $(p-1)$-dimensional space, and so the number of increments and decrements for all free coordinates combined is $2(p-1)$. We also check throughout that budget allocation stays non-negative for the constrained coordinate.
Now for each of the $p-1$ dimensions, we have values obtained by incrementing, decrementing, and not changing the coordinate, one of which gives the maximum value among the three; let this coordinate be $z_q$. We now can form a hypercube whose vertices have the $q^{th}$ coordinate as $h_q$ or $z_q$. Note that this could be a less than $(p-1)$ dimensional hypercube if $h_q=z_q$ for some $q$'s. The vertices of this hypercube are now the new budget splits we search on.
If a budget split is already explored, we recall its stored value.
So the maximum number of new budget splits derived using this hypercube (which is when $h_q \neq z_q$ for all free coordinates) would be $2^{p-1}-p$.
This concludes our fine search in the neighborhood of the best budget split that was obtained using coarse search.
The maximum number of new budget splits found is thus, $2^{p-1}-p+2(p-1)$; this equals $19$ for $p=5$. We compute expected extent of diffusion for each of them.
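The $2(p-1)$ single-coordinate moves of the fine search can be generated as below (a sketch with our own names; the subsequent hypercube step is omitted for brevity):

```python
def fine_neighbors(split, step):
    """The 2(p-1) single-coordinate moves of the fine search: increment or
    decrement each of the first p-1 (free) coordinates by `step`, adjusting
    the last (constrained) coordinate to keep the total budget unchanged;
    candidates with a negative budget are dropped."""
    p = len(split)
    neighbors = []
    for q in range(p - 1):
        for delta in (step, -step):
            cand = list(split)
            cand[q] += delta
            cand[-1] -= delta
            if min(cand) >= 0:
                neighbors.append(tuple(cand))
    return neighbors
```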
We follow this method for the second best up to the tenth best budget splits (run in parallel on different machines independently, so computations are repeated for some budget splits).
We employ the above search method for $p \geq 3$; for $p=2$, we search through all multiples of $0.05k$.
In addition to the budget splits searched as above, we explore budget splits of certain manually chosen ratios, which we see later.
\subsection{Datasets Used}
The study of multiphase diffusion is computationally very intensive in nature,
owing to the large number of intermediate diffusion states after each phase on which we need to run the seed selection algorithm, as well as owing to
the large number of possible budget splits we need to take into consideration.
So with the computational power available to us, it was infeasible to run the multiphase simulations on the very large datasets studied in the literature for single phase diffusion.
Hence, we focus our simulation study on moderate sized datasets (which are commonly used in the literature) to draw conclusions and provide insights
based on our observations.
We conduct extensive simulations for up to 5 phases on the NetHEPT dataset [$|V|=15K, |E|=31K$].
This dataset has been extensively used for experimentation in the literature \cite{kempe2003maximizing,chen2009efficient,wang2012scalable}.
We also conduct simulations on the Facebook dataset [$|V|=4K, |E|=88K$] \cite{leskovec2012learning} for up to 4 phases.
For modeling edge influence probabilities in networks,
we use two widely accepted ways, namely, the {\em weighted cascade (WC) model} and the {\em trivalency (TV) model} \cite{wang2012scalable,jung2012irie}.
In the WC model, for every edge $(u,v)$ in the network, $p_{uv}$ is equal to the reciprocal of $v$'s degree.
In the TV model, every edge in the network is assigned a probability value that is uniformly sampled from the set $\{0.001, 0.01, 0.1\}$.
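A minimal sketch of the two probability models (assuming the graph is given as a directed edge list and a degree map; function and parameter names are ours):

```python
import random

def assign_probabilities(edges, degree, model, rng=random):
    """Edge influence probabilities under the two standard models:
    WC: p_uv is the reciprocal of v's degree;
    TV: p_uv is uniformly sampled from {0.001, 0.01, 0.1}."""
    if model == "WC":
        return {(u, v): 1.0 / degree[v] for (u, v) in edges}
    if model == "TV":
        return {(u, v): rng.choice([0.001, 0.01, 0.1]) for (u, v) in edges}
    raise ValueError("unknown model: %s" % model)
```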
In addition to studying NetHEPT with a budget of $k=50$ (like in most of the literature), we also look at $k=200$ (like in \cite{dhamal2phase}) since it would allow each individual phase to have enough budget to show an impact when the number of phases is large.
Also, studying different values of budgets would allow us to identify any patterns and draw more reliable conclusions.
\section{Simulation Results}
\label{sec:mp_sim}
In this section, we present detailed simulation results with precise observations and plots for NetHEPT dataset,
since we could do an extensive search for optimal budget split even for 5 phases, and also run a large number of Monte Carlo iterations for it.
As mentioned earlier, we also conduct simulations on the Facebook dataset for up to 4 phases.
Unless specified, the results for these datasets qualitatively followed a very similar pattern as that for the NetHEPT dataset.
\subsection{Distribution of the Extent of Diffusion}
All distributions corresponding to the extent of diffusion (for any number of phases or for any amount of budget, at the end of any phase or within any intermediate phase) exhibit a bell-shaped nature.
\Cref{fig:dist} presents the
distributions of extents of diffusion over phases, for different number of phases with the corresponding optimal budget split, for NetHEPT (WC) with $k=200$
(see \Cref{tab:budgetsplits} for optimal budget splits).
Notably, the means of the histograms are evenly spaced (e.g., for 3-phase diffusion, the mean extent after the first phase equals the difference between the mean extents after the second and first phases, which also equals the difference between the mean extents after the third and second phases).
This has implications as we will see in \Cref{sec:result_budgetsplit}.
\Cref{tab:Std200} presents the standard deviations of the overall extent of diffusion that happened till the end of each phase, and also standard deviations of extent of diffusion that happened during each phase (extent of diffusion at the end of the phase minus extent of diffusion at the beginning of the phase).
%
It is important to note that we could compare the distributions across different numbers of phases and budget splits in a reliable way, because the set of underlying live graphs is common to all the simulations for a given network dataset.
\setlength\tabcolsep{.7mm}
\begin{table}[t]
\caption{Mean extents of diffusion using various budget splits on NetHEPT (WC) (optimal budget splits are highlighted)
}
\label{tab:budgetsplits}
\small
\begin{tabular}{|c|c||c|c||c|c|}
\hline
\T \B
& & \multicolumn{2} {c||} {$k=200$} & \multicolumn{2} {c|} {$k=50$}
\\
\hline \T \B {Pha} & {Budget split} & {Budget split} & {Mean} & {Budget split} & {Mean} \\
ses & ratio & & extent & & extent
\\
\hline
\hline \T \B $1$ & $1$ & $(200)$ & $2389$ & $(50)$ & $947$ \\
\hline \T \B $2$ & $1\!:\!1$ & $(100,\!100)$ & $2464$ & $(25,\!25)$ & $961$ \\
\hline \rowcolor{Gray} \T \B $2$ & $1\!:\!2$ & $(67,\!133)$ & $2478$ & $(17,\!33)$ & $965$ \\
\hline \T \B $3$ & $1\!:\!1\!:\!1$ & $(66,\!67,\!67)$ & $2491$ & $(16,\!17,\!17)$ & $967$ \\
\hline \T \B $3$ & $1\!:\!1\!:\!2$ & $(50,\!50,\!100)$ & $2499$ & $(12,\!13,\!25)$ & $969$ \\
\hline \T \B $3$ & $1\!:\!2\!:\!1$ & $(50,\!100,\!50)$ & $2494$ & $(12,\!25,\!13)$ & $967$ \\
\hline \T \B $3$ & $2\!:\!1\!:\!1$ & $(100,\!50,\!50)$ & $2481$ & $(25,\!12,\!13)$ & $963$ \\
\hline \T \B $3$ & $1\!:\!2\!:\!6$ & $(22,\!45,\!133)$ & $2502$ & $(6,\!12,\!32)$ & $973$ \\
\hline \T \B $3$ & $1\!:\!2\!:\!4$ & $(28,\!58,\!114)$ & $2506$ & $(7,\!14,\!29)$ & $973$ \\
\hline \T \B $3$ & $3\!:\!2\!:\!4$ & $(66,\!44,\!90)$ & $2493$ & $(16,\!11,\!23)$ & $969$ \\
\hline \rowcolor{Gray} \T \B $3$ & $1\!:\!2\!:\!3$ & $(33,\!67,\!100)$ & $2508$ & $(8,\!17,\!25)$ & $975$ \\
\hline \T \B $4$ & $1\!:\!1\!:\!1\!:\!1$ & $(50,\!50,\!50,\!50)$ & $2509$ & $(12,\!12,\!13,\!13)$ & $974$ \\
\hline \T \B $4$ & $1\!:\!2\!:\!6\!:\!18$ & $(7,\!15,\!45,\!133)$ & $2511$ & $(2,\!4,\!11,\!33)$ & $974$ \\
\hline \T \B $4$ & $1\!:\!2\!:\!4\!:\!8$ & $(14,\!27,\!54,\!105)$ & $2515$ & $(4,\!7,\!13,\!26)$ & $978$ \\
\hline \rowcolor{Gray} \T \B $4$ & $1\!:\!2\!:\!3\!:\!4$ & $(20,\!40,\!60,\!80)$ & $2519$ & $(5,\!10,\!15,\!20)$ & $982$ \\
\hline \T \B $5$ & $1\!:\!1\!:\!1\!:\!1\!:\!1$ & $(40,\!40,\!40,\!40,\!40)$ & $2513$ & $(10,\!10,\!10,\!10,\!10)$ & $978$ \\
\hline \T \B $5$ & $1\!:\!2\!:\!6\!:\!18\!:\!54$ & $(3,\!6,\!16,\!45,\!130)$ & $2514$ & $(1,\!2,\!5,\!12,\!30)$ & $980$ \\
\hline \T \B $5$ & $1\!:\!2\!:\!4\!:\!8\!:\!16$ & $(6,\!12,\!25,\!52,\!105)$ & $2518$ & $(2,\!4,\!7,\!13,\!24)$ & $980$ \\
\hline \rowcolor{Gray} \T \B $5$ & $1\!:\!2\!:\!3\!:\!4\!:\!5$ & $(14,\!27,\!40,\!54,\!65)$ & $2525$ & $(4,\!7,\!10,\!13,\!16)$ & $985$ \\
\hline
\end{tabular}
\end{table}
\setlength\tabcolsep{2mm}
\begin{table}[t]
\caption{Standard deviations of extent of diffusion using optimal budget splits on NetHEPT (WC) with $k=200$}
\label{tab:Std200}
\centering
\small
\begin{tabular}{|c|c|c|}
\hline \T \B
Phases & {till end of each phase} & {during each phase}
\\ \hline \T \B
$1$ & $(89)$ & $(89)$
\\ \hline \T \B
$2$ & $(87,96)$ & $(87,68)$
\\ \hline \T \B
$3$ & $(85,94,96)$ & $(85,72,55)$
\\ \hline \T \B
$4$ & $(75,94,97,97)$ & $(75,71,60,49)$
\\ \hline \T \B
$5$ & $(72,86,96,97,97)$ & $(72,69,56,50,42)$
\\ \hline
\end{tabular}
\end{table}
\begin{figure*}
\small
\begin{tabular}{p{.34\textwidth} p{.33\textwidth} p{.33\textwidth}}
\hspace{-7mm}
\includegraphics[scale=.47]{heatmap_cropped.pdf}
&
\includegraphics[scale=.47]{plot_progression_cropped.pdf}
&
\includegraphics[scale=.44]{plot_concavity_cropped.pdf}
\\
{(a) Effectiveness of various 3-phase budget splits for NetHEPT (WC) with $k=200$}
&
(b) Phasewise progression for different number of phases for NetHEPT (WC) with $k=200$
&
(c) Types of influenceability curves
\end{tabular}
\caption{Aspects of multiphase diffusion}
\label{fig:all}
\end{figure*}
We now try to understand numerically what it means to say that using multiple phases reduces uncertainty.
We first address the question: do multiple phases lead to a lower standard deviation at the end of the diffusion as compared to single phase with the same overall budget?
The answer is `no'.
It is true that, for a bad live graph with a low value in single phase, the value improves for that live graph when we use multiple phases. But the value for a good live graph also improves.
Moreover, multiphase diffusion has a better reach than single phase diffusion, and reaches parts of the live graph which would stay unexplored in single phase for the same live graph. So the uncertainty of these newly explored parts also gets added to multiphase diffusion,
which may in fact result in multiphase diffusion having a higher standard deviation than single phase.
This can be seen from \Cref{tab:Std200} where for instance, standard deviation at the end of the second phase of 2-phase diffusion ($96$) is greater than that at the end of single phase ($89$).
In general, these observations show that $p+1$ phases may actually lead to a higher standard deviation at the end of the diffusion as compared to $p$ phases with the same overall budget.
\Cref{tab:Std200} also shows that in a $p$-phase diffusion, the standard deviation of the extent of diffusion that progresses during phase $q+1$ is consistently less than that of the extent of diffusion that progresses during phase $q$.
For instance, in a $3$-phase diffusion, the standard deviation of the extent of diffusion that progresses during second phase ($72$) is less than that of the extent of diffusion that progresses during first phase ($85$).
One of the reasons for this observation could be the lower uncertainty in the extent of diffusion triggered by the selection of less influential seed nodes.
To make this concrete, we quantify how the second phase reduces uncertainty, since this is the phase which first distinguishes multiphase diffusion from single phase.
For a $p$-phase diffusion with budget split $(k_1,k_2,\ldots,k_p)$, the claim is the following:
the standard deviation of the extent of diffusion in the second phase (say $\sigma_p$) is less than the standard deviation of the additional number of nodes influenced using single phase diffusion when the single phase budget is increased from $k_1$ to $k_1+k_2$ (say $\sigma_s$).
We observe in our simulations that this statement holds true.
In particular, for NetHEPT (WC) with $k=200$,
$\sigma_p$ for a given $p$ is the second component of the corresponding vector in the `during each phase' column of \Cref{tab:Std200}. These do not exceed $72$, while we observed that $\sigma_s$ was higher than $100$.
We can draw an insight from this discussion:
it is beneficial to select highly influential nodes in the initial phases, since they lead not only to a large extent of observed diffusion, but also to high uncertainty, which can then be improved upon by selecting other nodes in subsequent phases.
\subsection{Effectiveness with Number of Phases}
Tables \ref{tab:gain} and \ref{tab:facebook} present the gains achieved by multiphase diffusion with various number of phases and corresponding optimal budget splits for NetHEPT (WC) and Facebook (WC), respectively.
We also ran simulations for 10 phases with a number of manually chosen budget splits for NetHEPT and observed that the final extent of diffusion did not exceed an expected value of 2535, which is a 6.11\% gain over single phase.
So we conclude that there is significant gain when we move from single phase to 2 phases, and an appreciable gain when we further move to 3 phases. But there is a sharp decline in the additional gain beyond 3 phases.
\begin{table}[t]
\caption{Gain achieved by using multiphase diffusion using optimal budget split for NetHEPT (WC) with $k=200$}
\label{tab:gain}
\centering
\small
\begin{tabular}{|c|c|c|c|c|}
\hline \T \B
Phases & $2$ & $3$ & $4$ & $5$
\\ \hline \T \B
\% gain over single phase & $3.73$ & $5.00$ & $5.44$ & $5.70$
\\ \hline \T \B
\% gain over one phase less & $3.73$ & $1.21$ & $0.44$ & $0.24$
\\ \hline
\end{tabular}
\end{table}
\setlength\tabcolsep{1.2mm}
\begin{table}
\caption{Optimal budget splits for Facebook (WC) with $k=50$}
\label{tab:facebook}
\centering
\small
\begin{tabular}{|c|c|c|c|}
\hline \T \B
Phases & $2$ & $3$ & $4$
\\ \hline \T \B
Optimal budget split & $(15,35)$ & $(5,15,30)$ & $(2, 7, 15, 26)$
\\ \hline \T \B
\% gain over single phase & $5.41$ & $6.26$ & $6.68$
\\ \hline \T \B
\% gain over one phase less & $5.41$ & $0.80$ & $0.40$
\\ \hline
\end{tabular}
\end{table}
\setlength\tabcolsep{2mm}
\subsection{Optimal Budget Split}
\label{sec:result_budgetsplit}
\Cref{fig:all}(a) presents a visualization of the expected extents of diffusion for various budget splits obtained using our coarse and fine searches for NetHEPT (WC) dataset (the actual discrete heatmap is smoothened for better visualization).
\Cref{tab:budgetsplits} presents the expected extents of diffusion for various
manually chosen budget split ratios; the optimal budget splits for all phases are highlighted.
The budget splits for NetHEPT (TV) dataset with $k=200$ were also very similar.
For NetHEPT (TV) dataset with $k=50$, given the number of phases, a very large set of budget splits gave very similar and almost optimal results.
\Cref{tab:facebook} presents optimal budget splits for Facebook (WC) dataset.
For Facebook (TV), it was rather optimal to select very few nodes in the initial phases; in particular, it was optimal to choose one node each in the first three phases for the case of 4-phase diffusion.
We consistently observed that the optimal budget splits had a lower budget allotted to the initial phases than to the later phases, that is,
$k_q < k_{q+1}$.
As motivated earlier, the reason for finding an optimal budget split is to find an optimal balance between observation and exploitation.
As per our multiphase adaptation of IRIE, the most influential nodes get selected in the initial phases, while the less influential ones get selected in the later phases.
Owing to the highly influential nature of the initially selected nodes, it suffices to select only a few of them in the initial phases to get a good enough observation of the diffusion, and then use the less influential ones to cover the parts of the network which could not be reached in the observed diffusion.
Thus, the budget allotted to the initial phases plays a critical role to set a right balance between observation and exploitation.
Our general observation is that an optimal budget split is the one which would lead to an almost equal number of nodes getting influenced in expectation, across the given number of phases.
This can be seen for the NetHEPT (WC) dataset in \Cref{fig:all}(b), where the
expected extent of diffusion grows linearly with the number of intermediate phases elapsed.
From our studied datasets with different values of budgets, a good balance between observation of the diffusion in the earlier phases and exploitation by adaptively selecting seed nodes in the later phases was evidently found when the expected extent of diffusion was almost the same across all phases.
This was with the exception of Facebook (TV) for which the extent of diffusion was very high in earlier intermediate phases.
We present a compelling insight behind these observations, for which we first introduce the notion of {\em influenceability curve\/} of a network.
\subsubsection{Influenceability Curve of a Network}
Given a diffusion model and a deterministic influence maximizing algorithm, a network has a plot depicting the number of influenced nodes versus the number of seed nodes.
This plot, in some sense, indicates what fraction of the best seed node budget leads to what fraction of the maximum achievable extent of diffusion using that budget.
We call this plot the
{\em influenceability curve\/} of the network
with respect to the given diffusion model and influence maximizing algorithm.
It can be seen in the literature on maximizing information diffusion that this curve is concave for most real-world social networks.
The results on some popular datasets for single phase diffusion can be found in
\cite{wang2012scalable,jung2012irie}.
The influenceability curves for such real-world network datasets can be classified broadly into the following four types (illustration in \Cref{fig:all}(c)):
\begin{enumerate}
\item
Rise \& flat: Facebook~(TV), Epinions~(TV), Slashdot~(TV)
\item
Very concave: Facebook~(WC), DBLP~(WC), Slashdot~(WC), Epinions~(WC), Arxiv~(TV)
\item
Less concave: NetHEPT~(WC), NetHEPT~(TV), NetPHY~(WC), Arxiv~(WC)
\item
Linear: DBLP~(TV), Amazon~(WC), Amazon~(TV)
\end{enumerate}
The influenceability curve is concave for most networks, which means that selecting the first few nodes in the earlier phases is enough to give a good enough observation of the diffusion, leaving a higher number of nodes to be selected in the later phases to exploit the observation.
For NetHEPT datasets (WC and TV) with a budget such as 50 or 200, it so happens that an equal distribution of extent of diffusion across phases would arise from a budget split which roughly follows an arithmetic progression; that is, the split of 1:2 for 2 phases, 1:2:3 for 3 phases, etc., for a reasonable amount of budget. This explains the specific observation of \cite{dhamal2phase} concerning the 2-phase budget split for NetHEPT-like datasets.
Also, more concavity (\Cref{fig:all}(c)) means an even higher skew in the selection of nodes, since an even smaller number of nodes in the initial phases could provide sufficient observation. In such cases, the optimal budget split is such that the first phase budget allocation is considerably lower than that of the subsequent phases (e.g., Facebook (WC) in \Cref{tab:facebook}).
If for some unconventional network the influenceability curve is convex (possibly because no node individually is highly influential, but larger seed sets collectively become highly influential), selecting a few nodes in the earlier phases does not lead to a significant extent of diffusion and hence gives a poor observation, which would then lead to not-so-good adaptive seeding in the later phases. So it would be better to select a higher number of nodes in the earlier phases, which collectively could provide a good enough observation.
As a middle ground between concave and convex, if influenceability curve is close to linear, the budget could be split equally across the phases.
If the influenceability curve rises and flattens (e.g., Facebook (TV)), which means that very few nodes are extremely influential, it would be well advised not to select all these nodes in a single phase.
Using multiple phases, it is possible to ascertain whether a highly influential node, when selected as seed node, influences another highly influential node without having to select the latter as a seed node.
In the single phase case, the influence maximizing algorithm would have selected the latter, by computing that the latter is highly influential but perhaps not very likely to be influenced by other selected seed nodes.
However, if in our observed diffusion the latter is indeed influenced without having been selected as a seed node, we save budget which can be used for selecting other seed nodes.
If it is not influenced in our observed diffusion, we select it as seed node in a following phase.
Combining these two cases, we would gain in expectation by not selecting the very highly influential nodes in the first phase itself, if we have more than 2 phases at hand.
We can develop a simple method for determining optimal budget split based on these insights.
First, note that the difference between the performances of single phase and multiphase diffusion is not extremely high.
That is, the mean extent of diffusion in the first phase of a multiphase diffusion with first-phase budget $k_1$ would be close to the mean extent of diffusion for a single phase with budget $k_1$, and the mean extent of diffusion in the second phase with second-phase budget $k_2$ would be close to the mean additional diffusion for a single phase when the budget increases from $k_1$ to $k_1+k_2$.
So finding a budget split which makes the mean extents of diffusion across the intermediate phases equal is almost equivalent to partitioning the influenceability curve into $p$ pieces so that the mean extent of diffusion is split equally across these pieces.
E.g., when the number of phases is $p$ with total budget $k$, we look at the curve for the number of seed nodes ranging from $0$ to $k$, and split the plot at $p$ equally spaced $Y$-coordinates. We then read off the corresponding $X$-coordinates (the inverse of the influenceability curve) to obtain the values $(k_1,k_1+k_2,\ldots,k)$ and hence derive our optimal budget split $(k_1,k_2,\ldots,k_p)$.
`Rise \& flat' curves are an exception: there we select one node in each non-terminal phase and the remaining nodes in the terminal phase.
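As a concrete sketch of this procedure (illustrative only; \texttt{influence} is a hypothetical precomputed influenceability curve, with \texttt{influence[j]} being the expected spread achievable with $j$ seeds), the equal-rise split can be computed by inverting the curve:

```python
import bisect
import math

def budget_split(influence, k, p):
    """Split a total budget k across p phases by cutting a nondecreasing
    influenceability curve (influence[j] = expected spread with j seeds,
    j = 0..k) into p pieces with an equal rise in expected diffusion."""
    total = influence[k]
    split, prev = [], 0
    for q in range(1, p + 1):
        # inverse of the curve: smallest j with influence[j] >= (q/p) * total
        j = min(bisect.bisect_left(influence, total * q / p, 0, k + 1), k)
        split.append(j - prev)
        prev = j
    return split

# Illustrative concave curve (synthetic, not real data): spread ~ sqrt(#seeds)
curve = [100 * math.sqrt(j / 6) for j in range(7)]
print(budget_split(curve, 6, 2), budget_split(curve, 6, 3))
```

For this synthetic concave curve the procedure recovers the roughly arithmetic 1:2 and 1:2:3 splits noted above.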
\begin{comment}
arithmetic or geometric with 1/2 ?
1 -> 1,
1/3 -> 1/2,
1/6 -> 1/3,
1/10 -> 1/4
\end{comment}
\subsubsection{Additional Notes}
The expected extents of diffusion over various budget splits follow a somewhat unimodal behavior, as can be seen from \Cref{fig:all}(a). So a multidimensional golden section search technique, if adapted well, has a good chance of finding optimal budget splits quickly.
Alternatively, instead of using our search technique for finding the optimal budget split, it may be advantageous to use an iterative algorithm that converges to an optimal solution. Such algorithms usually require a good starting point in order to find the optimal solution and to converge to it quickly.
Our results suggest that a nondecreasing budget split $(k_q)_{q=1}^p$, with $k_q \leq k_{q+1}$,
could act as a good starting point (attributable to the concave influenceability curves of real-world social networks).
\subsection{A Note on Value Decaying over Phases}
\Cref{fig:all}(b) presents the phasewise progression of multiphase diffusion.
There have been studies considering the decaying value of a product or information over time~\cite{zhang2016influence,dhamal2phase}.
In our study, since a phase starts after the conclusion of the previous phase (which generally takes a considerable number of time steps in the IC model), it would almost certainly be disadvantageous to use multiple phases if the product value decayed in every time step. Hence our study considers the value of the product to decay over phases rather than in every time step.
Consider that the value of a node influenced in phase $q$ has a decay factor of $\delta^q$, where $\delta \in [0,1]$. That is, a node influenced in a later phase provides a lesser value.
%
%
So given a budget split $\mathbf{K}$, the value of diffusion can be defined as
$\sum_{q=1}^p \delta^q \beta_q(\mathbf{K})$, where $\beta_q(\mathbf{K})$ is the number of nodes influenced in phase $q$.
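As a minimal sketch (a hypothetical helper, with \texttt{phase\_spreads[q-1]} playing the role of $\beta_q(\mathbf{K})$), this value is a one-line computation:

```python
def diffusion_value(phase_spreads, delta):
    # value = sum over phases q of delta^q * (nodes newly influenced in phase q)
    return sum(delta ** q * b for q, b in enumerate(phase_spreads, start=1))

print(diffusion_value([10, 10], 0.5))  # 0.5*10 + 0.25*10 = 7.5
```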
It is clear that lower values of $\delta$ are a deterrent to using multiple phases.
Given the number of phases and the corresponding optimal budget split, we can determine the value of $\delta$ below which it would be advantageous to use a single phase instead.
As per our hypothesis, an optimal budget split is one which leads to an equal number of nodes influenced in each of the individual phases (except for networks with a `rise \& flat' type of influenceability curve).
Given that the number of phases is $p$, let $x_p$ be the number of nodes influenced in each of the $p$ phases. Since the spread achieved using a single phase is lower than that achieved with $p>1$ phases, let $\epsilon_p x_p$ be the loss in spread incurred by using a single phase instead of $p$ phases.
For $p$ phases (with the derived optimal budget split) to be advantageous over a single phase, even with the incorporation of $\delta$, we should have
\begin{small}
\begin{align*}
& p \, x_p -\epsilon_p \, x_p \leq x_p(1+\delta+\ldots+\delta^{p-1})
\\
\Longleftrightarrow \;& p-\epsilon_p \leq \frac{1-\delta^p}{1-\delta}
\\
\Longleftrightarrow \;& \delta^p - p \, \delta + (p-1-\epsilon_p) \leq 0
\end{align*}
\end{small}
\vspace{2mm}
With $p$ and $\epsilon_p$ known, the above inequality can be easily solved to determine the range of $\delta$, subject to $\delta \in [0,1]$.
Explicit solution formulae for the above inequality even exist for $p$ up to 4 (and more than 4 phases is perhaps overkill).
Since we know $p$ and $\epsilon_p$ for different values of $p$, solving the above polynomial gives the minimum value of $\delta$ for which using $p$ phases (with the derived optimal budget split) holds an advantage over a single phase.
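Since $g(\delta)=\delta^p - p\,\delta + (p-1-\epsilon_p)$ is decreasing on $[0,1]$ (as $g'(\delta)=p\,\delta^{p-1}-p \leq 0$ there), the minimum admissible $\delta$ is the unique root of $g$ in $[0,1]$ and can be found by bisection. A small sketch (the $\epsilon_p$ values below are illustrative, not taken from our experiments):

```python
def min_delta(p, eps, tol=1e-9):
    """Smallest delta in [0,1] with delta^p - p*delta + (p-1-eps) <= 0.
    g is decreasing on [0,1] (g'(d) = p*d^(p-1) - p <= 0), so the inequality
    holds exactly for delta >= the unique root, located by bisection."""
    g = lambda d: d ** p - p * d + (p - 1 - eps)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return hi

# For p = 2 the inequality factors as (delta - 1)^2 <= eps, so the minimum
# delta is 1 - sqrt(eps); e.g. an assumed eps = 0.04 gives 0.8.
print(min_delta(2, 0.04))
```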
\Cref{tab:mindelta} presents the minimum values of $\delta$ for multiphase diffusion with the optimal budget splits as given in \Cref{tab:budgetsplits} to be advantageous over single phase, for NetHEPT (WC) with $k=200$.
Note that there may exist a better budget split which could be advantageous compared to a single phase for a value of $\delta$ lower than the one found using the above analysis.
However, the above analysis guarantees that there exists a budget split (the previously found optimal budget split) which would lead to a better value than a single phase when $\delta$ is higher than the value found using the above analysis (except for networks with a `rise \& flat' type of influenceability curve).
We also did preliminary analyses for $\delta=0.1,\ldots,0.9$ to identify the optimal budget split (from among the budget splits explored in our previous simulations) with respect to the value of diffusion while accounting for the decay factor.
We searched over the budget splits explored in our earlier simulations, for which we had already stored the phasewise extents of diffusion
(the value of a budget split was computed as a weighted sum of the extents of diffusion in each phase $q$, with weighting factor $\delta^q$).
Our observations suggest that for $\delta = 0.8, 0.9$,
a budget split
such that the number of nodes influenced in phase $q$ is approximately proportional to $\delta^q$ is optimal or near-optimal. Since the value of each node influenced in phase $q$ is $\delta^q$, the total value obtained in phase $q$ is approximately proportional to $\delta^{2q}$.
The results were not very consistent for lower values of $\delta$.
We defer a more elaborate study in this direction for future work.
\begin{table}[t]
\caption{Minimum values of $\delta$ for which multiphase diffusion with the optimal budget splits (\Cref{tab:budgetsplits}) is beneficial over a single phase, for NetHEPT (WC) with $k=200$}
\label{tab:mindelta}
\centering
\small
\begin{tabular}{|c|c|c|c|c|}
\hline \T \B
Phases & $2$ & $3$ & $4$ & $5$
\\ \hline \T \B
Minimum $\delta$ & $0.810$ & $0.871$ & $0.904$ & $0.924$
\\ \hline
\end{tabular}
\vspace{3mm}
\end{table}
\begin{comment}
\subsection{A Note on Linear Threshold Model}
\label{sec:mpid_lt}
Throughout this paper, we discussed multi-phase diffusion in terms of Independent Cascade (IC) model, primarily because it is a natural setting for such a diffusion. However, one can study multi-phase diffusion using the other most popular diffusion, the Linear Threshold (LT) model.
We now discuss this in brief.
In Linear Threshold (LT) model, an influence degree $b_{u,v}$ is associated with every directed edge $(v,u)$, where $b_{u,v} \geq 0$ is the degree of influence that node $v$ has on node $u$, and an influence threshold $\chi_u$ with every node $u$. The weights $b_{u,v}$ are such that $\sum_v b_{u,v} \leq 1$. Also, owing to lack of knowledge about the thresholds, which are held privately by the nodes, it is assumed that the thresholds are chosen uniformly at random from $[0,1]$. The diffusion process starts at time step 0 and proceeds in discrete time steps, one at a time. In each time step, a node is influenced or activated if and only if the sum of influence degrees of the edges incoming from activated neighbors (irrespective of the time of activation of the neighbors) crosses its own influence threshold, that is, when
$
\sum_v b_{u,v} \geq \chi_u
$.
Nodes, once activated, remain activated for the rest of the diffusion process.
In any given time step, the recently activated nodes along with previously activated ones contribute to the diffusion process. The diffusion process stops when it is not possible to activate or influence any further nodes.
It is clear now that at the beginning of the first phase, the thresholds of nodes are assumed to be uniformly distributed in $[0,1]$. Now when the second phase is scheduled to start, we have the information of the status of nodes thus far, that is, whether they are active or inactive.
In addition, we also have the updated information regarding the thresholds of inactive nodes which are out-neighbors of active nodes. That is, such a node was not activated in the first phase even after receiving a total influence (sum of influence degrees of the edges incoming from activated neighbors) of $\sum_v b_{u,v}$. Thus we now have the information that this node has a threshold greater than $\sum_v b_{u,v}$, and so while determining the seed set for the second phase, we can exploit this additional information by assuming the threshold to be chosen uniformly at random from $\left(\sum_v b_{u,v},1\right]$, instead of a wider (and hence more uncertain) range of $[0,1]$.
\end{comment}
\section{Conclusion}
The objective of our study was to quantify and understand the effectiveness of multiphase diffusion in social networks under the IC model.
We started by presenting a negative result: more phases do not guarantee a better extent of diffusion.
We used computationally efficient techniques for reducing the number of diffusion states in our simulations as well as for searching for an optimal budget split.
We studied the effect on the means and standard deviations of the extent of diffusion in different phases, and provided insight into the reduction in uncertainty when we use multiple phases.
We also suggested using highly influential nodes in the initial phases, since they would lead not only to a large extent of observed diffusion but also to high uncertainty, which can then be improved upon by the nodes selected in subsequent phases.
Our experiments suggested a significant improvement in spread when we move from single to two phases, but the marginal gain beyond three phases was usually not very significant.
The primary rationale behind multiphase diffusion is to observe in the earlier phases and exploit in the later ones; we observed that for most types of networks, a good balance between the two is achieved when the expected extent of diffusion is almost the same across all phases.
We then presented a method for arriving at an optimal budget split using the influenceability curve of the network.
We concluded by considering a setting in which the value of diffusion decays over phases, and provided a bound on the decay factor as a function of the number of phases; multiphase diffusion is advantageous over a single phase if the decay factor is higher than this bound.
\subsection{Future Work}
It would be useful to design efficient algorithms specifically for multiple phases, in order to study larger datasets.
An elaborate study on the value of diffusion decaying over phases is warranted.
A game-theoretic study, in which competing companies implement seeding in multiple phases, would be interesting.
The work can be extended to other diffusion models.
A multiphase study over evolving social networks is an important practical aspect to look at.
A more theoretical study would help lay the foundation for studying multiphase information diffusion.
\vspace{2mm}
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{intro}
\textit{Mesic nuclei} are currently one of the hottest topics in nuclear and hadronic physics, from both experimental~\cite{Skurzok_NPA,Adlarson_2013,Tanaka,Machner_2015,Metag2017} and theoretical points of view~\cite{Ikeno_EPJ2017,Xie2017,Fix2017,Barnea2017,Gal2017,Gal2015,Friedman,Kelkar_2016_new,Kelkar,Kelkar_new,Wilkin_2016,Wilkin2,BassTom2006,BassTom,Hirenzaki1,Nagahiro_2008rj,Nagahiro_2013,Hirenzaki_2010,WycechKrzemien,Niskanen_2015}. This exotic form of nuclear matter is expected to consist of a nucleus bound via the strong interaction to a neutral meson such as the $\eta$, $\eta'$, $K$ or $\omega$. Although its existence was predicted over thirty years ago, it remains one of the undiscovered nuclear objects. Some of the most promising candidates for such bound states are $\eta$-mesic nuclei, postulated by Haider and Liu in 1986~\cite{HaiderLiu1} following the coupled-channel calculations by Bhalerao and Liu~\cite{BhaleraoLiu}, which reported an attractive $\eta$-nucleon interaction. Current studies of hadron- and photon-induced production of the $\eta$ meson, resulting in a wide range of values of the $\eta N$ scattering length $a_{\eta N}$, indicate that the interaction between the $\eta$ meson and a nucleon is attractive and strong enough to create an $\eta$-nucleus bound system even in light nuclei~\cite{Xie2017,Fix2017,Barnea2017,Gal2017,Green,Wilkin1,WycechGreen}. However, the experiments performed so far have not brought clear evidence of their existence~\cite{Berger,Mayer,Sokol_2001,Smyrski1,Mersmann,Budzanowski,Papenbrock,PMActa};
they provide only signals which might be interpreted as indications of
$\eta$-mesic nuclei.
The interested reader can find recent reviews on the $\eta$-mesic bound state searches in Refs.~\cite{Machner_2015,Metag2017,Kelkar,Wilkin_2016,HaiderLiu,Krusche_Wilkin,Moskal_FewBody,Acta_2016,Moskal_Acta2016,Moskal_AIP2017}.
Some of the promising experiments related to $\eta$-mesic nuclei have been performed
with the COSY facility~\cite{Wilkin_epj2017}.
The most recent of these involves the measurement of the
$dd\rightarrow$ $^{3}\hspace{-0.03cm}\mbox{He} n \pi^{0}$
and $dd\rightarrow$ $^{3}\hspace{-0.03cm}\mbox{He} p \pi^{-}$
reactions, which was performed by the WASA-at-COSY Collaboration. Due to the lack of theoretical predictions for cross sections below the $\eta$ production threshold, the data were analyzed assuming that the signal from the bound state has a Breit-Wigner shape~\cite{Skurzok_NPA,Adlarson_2013}. However, better guidance for the shape of the cross sections for the $dd\rightarrow$ ($^{4}\hspace{-0.03cm}\mbox{He}$-$\eta)_{bound} \rightarrow$ $^{3}\hspace{-0.03cm}\mbox{He} N \pi$ processes is provided by the theoretical model described in
Ref.~\cite{Ikeno_EPJ2017} in the excess energy range relevant
to the $\eta$-mesic nuclei search. As this model is the very first attempt to provide a consistent description of the data below and above the $\eta$ meson production threshold, the authors used a phenomenological approach with an optical potential for the $\eta$-$^4$He interaction. The available data on the $d d \rightarrow \, ^4$He $\eta$ reaction are reproduced quite well for a broad range of optical potential parameters, for which the authors predict the cross section spectra corresponding to $\eta$-$^4$He bound state formation in the subthreshold region.
In this article we present a comparison between this new theoretical model and experimental data collected by WASA-at-COSY in order to further constrain the range of the
allowed $\eta$-$^4$He optical potential parameters. The latter, as we shall see, narrows
down the search for $\eta$-mesic helium to a region of small binding energies
and widths.
\section{Theoretical model}
The formalism presented in Ref.~\cite{Ikeno_EPJ2017} predicted, for the first time,
the formation rate of the $\eta$-mesic $^{4}\hspace{-0.03cm}\mbox{He}$ in the
deuteron-deuteron fusion reaction within a model which reproduced the data on
the $ d d \rightarrow \, ^4$He $\eta$ reaction quite well. The authors determined the total cross sections for the $dd\rightarrow$ ($^{4}\hspace{-0.03cm}\mbox{He}$-$\eta)_{bound} \rightarrow$ $^{3}\hspace{-0.03cm}\mbox{He} N \pi$ reaction based on phenomenological calculations. The calculated total cross section $\sigma$ consists of two parts: a conversion part $\sigma_{conv}$ and an escape part $\sigma_{esc}$. The conversion part, determined for different parameters $V_{0}$ and $W_{0}$ of a spherical $\eta$-$^{4}\hspace{-0.03cm}\mbox{He}$ optical potential $V(r) = (V_{0} + iW_{0}) \frac{\rho_{\alpha}(r)}{\rho_{\alpha}(0)}$, is equal to the total cross section in the subthreshold excess energy region, where the $\eta$ meson is absorbed by the nucleus (its energy is not sufficient to escape from the nucleus), while the escape part contributes to the excess energy region above the threshold for $\eta$ production and can be calculated as $\sigma_{esc}=\sigma-\sigma_{conv}$. Fig.~\ref{theory_total} shows an example of the calculated total cross section for the $\eta$-$^{4}\hspace{-0.03cm}\mbox{He}$ optical potential parameters ($V_{0},W_{0}$)=$-$(70,20)~MeV.
We should mention here that the above theoretical calculations
(which are used in the present work) were
done assuming one-nucleon absorption of the $\eta$ meson, since
the strength of the multi-nucleon absorption processes is not well known.
Based on the experimental data on the $ p n \to d \eta$ and $ p N \to p N \eta$
reactions, the strength of the $\eta$ meson absorption by a two-nucleon pair at the
nuclear center was estimated in \cite{WycechAPPB29}
to be 4.2 MeV and 0.2 MeV for the spin triplet and singlet nucleon pairs,
respectively.
This strength can be larger for $^4$He because of the higher central density
as mentioned in \cite{WycechAPPB45}.
The values of the $W_{0}$ parameters in the present work
could be compared with these numbers
to get a rough estimate of the ratio of the one- and two-body absorption
probability at the nuclear center.
The two body absorption potential is expected to provide an additional contribution
to the conversion cross section.
However, it is only the one-nucleon absorption cross section which should be
compared with the present data since multi-nucleon absorption processes
would contribute to different final states not considered in the present work.
Thus, the present analysis of experimental data from
Ref. \cite{Skurzok_NPA} based on the theoretical calculation assuming
the one-body absorption seems reasonable.
\begin{figure}[h!]
\centering
\includegraphics[width=8.0cm,height=5.3cm]{V70_W20_total_shifted.pdf}
\caption{Calculated total cross section of the $dd\rightarrow$ ($^{4}\hspace{-0.03cm}\mbox{He}$-$\eta)_{bound} \rightarrow$ $^{3}\hspace{-0.03cm}\mbox{He} N \pi$ reaction for the formation of the $^{4}\hspace{-0.03cm}\mbox{He}$-$\eta$ bound system, plotted as a function of the excess energy Q for $\eta$-$^{4}\hspace{-0.03cm}\mbox{He}$ optical potential parameters ($V_{0},W_{0}$)=$-$(70,20)~MeV. The black solid line denotes the
total cross section $\sigma$, while the red dashed line denotes the conversion part $\sigma_{conv}$.~\label{theory_total}}
\end{figure}
The spectrum has been normalized in the sense that the escape part reproduces the measured cross sections for the $dd\rightarrow$ $^{4}\hspace{-0.03cm}\mbox{He}\eta$ process~\cite{Frascaria,Willis,Wronska}. Moreover, the flat contribution in the conversion spectrum, considered to be part of the background, has been subtracted (taking the minimum value of $\sigma_{conv}$ in the excess energy range from $-20$ to 15~MeV). \\
\indent Since the signal from the $\eta$-mesic bound system is expected below the threshold for $\eta$ meson production, we focus here only on the conversion part of the cross sections. An example of the calculated $\sigma_{conv}$ is shown in Fig.~\ref{theory_total_norm} for the potential parameters ($V_{0},W_{0}$)=$-$(70,20)~MeV.
The authors of Ref.~\cite{Ikeno_EPJ2017} concluded that, as a next step, it would be important to compare these theoretical results with the experimental data, convoluting the theoretical cross sections with the experimental resolution functions. In this article we present the results of such a comparison. The details are given in Section~\ref{Sec_3}, preceded by a brief description of the experimental conditions.
\begin{figure}[h!]
\centering
\includegraphics[width=8.0cm,height=5.3cm]{V70_W20_conv_binning_shifted_comparison.pdf}
\caption{Calculated conversion part of the cross section of the $dd\rightarrow$ ($^{4}\hspace{-0.03cm}\mbox{He}$-$\eta)_{bound} \rightarrow$ $^{3}\hspace{-0.03cm}\mbox{He} N \pi$ reaction for the formation of the $^{4}\hspace{-0.03cm}\mbox{He}$-$\eta$ bound system plotted as a function of the excess energy Q for $\eta$-$^{4}\hspace{-0.03cm}\mbox{He}$ optical potential parameters ($V_{0},W_{0}$)=$-$(70,20)~MeV. The cross section is scaled by fitting the escape part to the existing $dd\rightarrow$ $^{4}\hspace{-0.03cm}\mbox{He}\eta$ data and the flat contribution is subtracted as well. The red dashed line shows the theoretical spectrum while the black solid line shows the spectrum after binning (details in Sec.~\ref{Sec_2}).~\label{theory_total_norm}}
\end{figure}
\section{Experimental data}
\label{Sec_2}
Recent measurements at WASA-at-COSY, dedicated to the search for $\eta$-mesic $^{4}\hspace{-0.03cm}\mbox{He}$ nuclei, were carried out using the unique ramped beam technique, which allows the beam momentum to be varied slowly and continuously around the $\eta$ production threshold in each of the acceleration cycles~\cite{Skurzok_NPA,Adlarson_2013,Acta_2016,Moskal_AIP2017}. This technique reduces systematic uncertainties with respect to separate runs at fixed beam energies~\cite{Adlarson_2013,Smyrski1,Moskal_1998}.
The $^{4}\hspace{-0.03cm}\mbox{He}$-$\eta$ bound states were searched for by studying the excitation functions for \mbox{$dd\rightarrow$ $^{3}\hspace{-0.03cm}\mbox{He} n \pi^{0}$} and \mbox{$dd\rightarrow$ $^{3}\hspace{-0.03cm}\mbox{He} p \pi^{-}$} processes in the excess energy range $Q$ from -70~MeV to 30~MeV.
The obtained excitation functions do not reveal any direct narrow structure below the $\eta$ production threshold, which could be considered as a signature of the bound state. Therefore, only the upper limit of the total cross section for the $\eta$-mesic $^{4}\hspace{-0.03cm}\mbox{He}$ formation was determined.
In the first approach, the upper limits of the total cross sections for both processes were estimated at a 90\% confidence level (CL) by simultaneously fitting the excitation functions with a sum of a Breit-Wigner function and a second-order polynomial, corresponding to the bound state signal and the background, respectively. Moreover, the isospin relation between $n \pi{}^{0}$ and $p \pi{}^{-}$ pairs was taken into account. The corresponding data analysis is presented in detail in Ref.~\cite{Skurzok_NPA}.
The analysis resulted in upper limits in the range from 2.5 to 3.5~nb for the $dd\rightarrow$ ($^{4}\hspace{-0.03cm}\mbox{He}$-$\eta)_{bound} \rightarrow$ $^{3}\hspace{-0.03cm}\mbox{He} n \pi^{0}$ process and from 5 to 7~nb for the $dd\rightarrow$ ($^{4}\hspace{-0.03cm}\mbox{He}$-$\eta)_{bound} \rightarrow$ $^{3}\hspace{-0.03cm}\mbox{He} p \pi^{-}$ reaction. The systematic uncertainty, arising mainly from the assumption that the Fermi momentum
of the $N^{*}$ resonance inside $^{4}\hspace{-0.03cm}\mbox{He}$~\cite{Kelkar_2016_new} is equal to that of a nucleon in $^4$He~\cite{Nogga},
varies from 42\% to 46\% for both reactions.
These experimental results are revisited in the next section in the light of a new
theoretical model~\cite{Ikeno_EPJ2017} which reproduces the
$d d \rightarrow \, ^4$He$\eta$ cross section data and with the same $\eta$-$^4$He optical potential predicts the cross sections
for $ d d$ fusion with the formation of an $\eta$-mesic $^4$He below the $\eta$ production threshold. The objective of the present analysis is twofold: (i) to provide stronger
constraints on the optical potential parameters which are already capable of
reproducing the $\eta$ production data, and (ii) to improve the upper limits on the
cross sections found in \cite{Skurzok_NPA} by using a theoretical model (constrained by
the above-threshold data) for the possible bound state, rather than the simple
Breit-Wigner form used in \cite{Skurzok_NPA}.
\section{Comparison between theory and data: results and discussion}~\label{Sec_3}
As mentioned in the previous section,
we performed an analysis which allows one to compare the excitation functions measured for \mbox{$dd\rightarrow$ $^{3}\hspace{-0.03cm}\mbox{He} n \pi^{0}$} and \mbox{$dd\rightarrow$ $^{3}\hspace{-0.03cm}\mbox{He} p \pi^{-}$} processes~\cite{Skurzok_NPA} with the theoretical predictions presented in Ref.~\cite{Ikeno_EPJ2017}. For this purpose, the theoretical conversion spectra were convoluted with the experimental resolution of the excess energy $Q$. The COSY beam is characterized by a high momentum resolution of up to $\frac{\Delta p}{p}\approx 1\cdot10^{-4}$, resulting in a resolution $\Delta Q$ of about 70~keV in the energy range of interest. This is about 70 times smaller than the binning of the spectra used by the WASA-at-COSY Collaboration~\cite{Skurzok_NPA}. Hence, we bin the theoretical predictions in the same way as the data, dividing the spectra into 20 intervals, each of 5~MeV width. We also assume that the reconstruction efficiency in the WASA-at-COSY experiment is, to a good approximation, independent of the excess energy $Q$, as was shown in Ref.~\cite{Skurzok_PhD}. An example of the theoretical spectrum after the binning procedure is presented in Fig.~\ref{theory_total_norm} as a black histogram.
In the next step, the experimental excitation functions for \mbox{$dd\rightarrow$ $^{3}\hspace{-0.03cm}\mbox{He} n \pi^{0}$} and \mbox{$dd\rightarrow$ $^{3}\hspace{-0.03cm}\mbox{He} p \pi^{-}$} reactions were fitted simultaneously with a sum of the binned theoretical function (signal) and a second-order polynomial (background). The $n \pi^{0}$, $p \pi^{-}$ isospin relation was taken into account. The fitting functions are as follows:
\begin{equation}
\sigma_{n\pi^{0}}(Q)=\frac{1}{3}A\cdot Theory(Q) + B_{1}Q^{2}+C_{1}Q + D_{1}
\end{equation}
\begin{equation}
\sigma_{p\pi^{-}}(Q)=\frac{2}{3}A\cdot Theory(Q) + B_{2}Q^{2}+C_{2}Q + D_{2}
\end{equation}
\noindent for \mbox{$dd\rightarrow$ $^{3}\hspace{-0.03cm}\mbox{He} n \pi^{0}$} and \mbox{$dd\rightarrow$ $^{3}\hspace{-0.03cm}\mbox{He} p \pi^{-}$}, respectively. $Theory(Q)$ denotes the theoretical function after binning, with the amplitude normalized to unity, while $B_{1,2}Q^{2}+C_{1,2}Q + D_{1,2}$ is a second-order polynomial. The fit was performed for theoretical spectra obtained for different optical potential parameters ($V_{0},W_{0}$)~\cite{Ikeno_EPJ2017}. During the fit, the amplitude $A$ of the theoretical spectrum and the polynomial coefficients were treated as free parameters. As an example, the excitation functions with the fit results for the optical potential parameters ($V_{0},W_{0}$)$=-$(70,20)~MeV are presented in Fig.~\ref{exc_fit}.
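Because the model is linear in the free parameters $(A, B_{1,2}, C_{1,2}, D_{1,2})$, a simultaneous fit with the shared amplitude and the 1/3 : 2/3 isospin factors can be illustrated by a linear least-squares sketch (synthetic data and a hypothetical signal shape only; this is not the analysis code used for the actual data):

```python
import numpy as np

# Synthetic stand-ins (illustrative only): 20 excess-energy bins of 5 MeV and
# a binned theoretical signal shape Theory(Q), normalized to unit amplitude.
Q = np.linspace(-70, 25, 20)
theory = np.exp(-0.5 * ((Q + 10.0) / 8.0) ** 2)
theory[Q > 0] = 0.0  # conversion part contributes below threshold only
rng = np.random.default_rng(0)
sig_npi0 = 100 + 0.01 * Q ** 2 + rng.normal(0, 1, Q.size)  # fake data, no signal
sig_ppim = 200 + 0.02 * Q ** 2 + rng.normal(0, 2, Q.size)

# Columns: [A, B1, C1, D1, B2, C2, D2]; the isospin factors 1/3 and 2/3 tie
# the shared amplitude A to the two channels.
zeros, ones = np.zeros_like(Q), np.ones_like(Q)
top = np.column_stack([theory / 3, Q ** 2, Q, ones, zeros, zeros, zeros])
bot = np.column_stack([2 * theory / 3, zeros, zeros, zeros, Q ** 2, Q, ones])
design = np.vstack([top, bot])
y = np.concatenate([sig_npi0, sig_ppim])
params, *_ = np.linalg.lstsq(design, y, rcond=None)
A = params[0]
print(f"fitted shared amplitude A = {A:.2f}")
```

Since the model is linear, ordinary least squares suffices in this sketch; in the actual analysis the statistical uncertainty $\sigma_A$ of the fitted amplitude is also needed to set an upper limit.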
\begin{figure}[h!]
\centering
\includegraphics[width=9.0cm,height=6.5cm]{hXS_V70_W20_FitSigpol2_both_channels_dd_3Henpi0_new.png}
\vspace{0.5cm}
\includegraphics[width=9.0cm,height=6.5cm]{hXS_V70_W20_FitSigpol2_both_channels_dd_3Heppimin_new.png}
\caption{Excitation function for \mbox{$dd\rightarrow$ $^{3}\hspace{-0.03cm}\mbox{He} n \pi^{0}$} (upper panel) and \mbox{$dd\rightarrow$ $^{3}\hspace{-0.03cm}\mbox{He} p \pi^{-}$} reaction (lower panel) determined as described in Ref.~\cite{Skurzok_NPA}. The red solid line represents a fit with
theoretical prediction for potential parameters ($V_{0},W_{0}$)=-(70,20)~MeV combined with a second order polynomial. The blue dotted line shows the second order polynomial describing the background while blue solid line shows the signal contribution. The experimental data~\cite{Skurzok_NPA} are indicated with black squares.~\label{exc_fit}}
\end{figure}
The fit delivers amplitudes $A$ for \mbox{$dd\rightarrow$ $^{3}\hspace{-0.03cm}\mbox{He} N \pi$} consistent with zero within 2$\sigma$ for all sets of ($V_{0},W_{0}$) parameters, as given in Table~\ref{table1}.
Therefore, the upper limit of the total cross section was determined, as in Ref.~\cite{Skurzok_NPA}, at the 90\% confidence level based on the standard deviation of the amplitude, $\sigma_{A}$ ($\sigma^{CL=90\%}_{upp}$=1.64$\cdot\sigma_{A}$).
The $\sigma^{CL=90\%}_{upp}$ values are presented for the different parameters ($V_{0},W_{0}$) in Table~\ref{table1}.\\
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|}\hline
$V_{0}$ &$W_{0}$ & A (fit) [nb] &$\sigma^{CL=90\%}_{upp}$ [nb]\\
\hline
-30 & -5 &-5.0$\pm$3.9 &6.5\\
-30 & -20 &-2.2$\pm$3.5 &5.8\\
-30 & -40 &0.2$\pm$3.8 &6.3\\
-50 & -5 &0.1$\pm$3.8 &6.3\\
-50 & -20 &3.3$\pm$4.1 &6.8\\
-50 & -40 &6.0$\pm$4.2 &6.9\\
-70 & -5 &6.4$\pm$4.5 &7.4\\
-70 & -20 &7.9$\pm$4.5 &7.4\\
-70 & -40 &7.5$\pm$3.7 &6.1\\
-100 & -5 &6.3$\pm$4.5 &7.4\\
-100 & -20 &6.9$\pm$3.9 &6.4\\
-100 & -40 &5.3$\pm$3.1 &5.2\\
\hline
\end{tabular}
\end{center}
\caption{Results obtained from the fit of the theoretical spectra to the experimental data. The table includes the optical potential parameters (first and second columns), the amplitude obtained from the fit with its statistical uncertainty (third column), and the upper limit of the total cross section for the $dd\rightarrow$ ($^{4}\hspace{-0.03cm}\mbox{He}$-$\eta)_{bound} \rightarrow$ $^{3}\hspace{-0.03cm}\mbox{He} N \pi$ process at CL=90\% (fourth column).~\label{table1}}
\end{table}
The obtained $\sigma^{CL=90\%}_{upp}$ is only weakly sensitive to the ($V_{0},W_{0}$) parameters, varying from 5.2 to 7.4~nb. Taking into account the systematic uncertainties of about 44\% estimated in Ref.~\cite{Skurzok_NPA}, these values increase to the range from about 7.5 to 10.7~nb. Therefore, in the contour plot shown in Fig.~\ref{contour}, we exclude the region where the cross section is above 10.7~nb (light shaded area). The dark shaded area shows the systematic error. The latter estimate is based on a calculation \cite{Kelkar_2016_new} of the $N^*$
momentum distribution for a given set of $\pi N N^*$ and $\eta N N^*$ coupling constants.
If we take into account the calculations in \cite{Kelkar_2016_new} using
all available values of the coupling constants, the
allowed region in the $V_0$-$W_0$ plane can extend as far as the red line shown in
Fig.~\ref{contour}. The coloured dots shown in the figure are the results of some optical
model calculations which will be discussed in the next section.
\begin{figure}[h!]
\centering
\includegraphics[width=8.5cm,height=5.5cm]{comparison_new_colors.pdf}
\caption{Contour plot of the theoretically determined conversion cross section in the $V_{0} - W_{0}$ plane~\cite{Ikeno_EPJ2017}. The light shaded area shows the region excluded by our analysis, while the dark shaded area denotes the systematic uncertainty of
$\sigma^{CL=90\%}_{upp}$. The red line extends the allowed region
based on a new estimate of errors (see text for details). The dots mark the optical potential parameters of the predicted $\eta$-mesic $^4$He states.~\label{contour}}
\end{figure}
\section{Optical model predictions of $\eta$-mesic $^4$He}
After constraining the region of the optical potential ($V_0$, $W_0$) parameter space
allowed by the cross section data below the $\eta$ production threshold, let us now
examine the possibility for the existence of $\eta$-mesic helium nuclei
predicted within the optical model.
To start with, we notice that all the states predicted in Table 1 of
\cite{Ikeno_EPJ2017}, obtained by solving the Klein-Gordon equation with the
optical potential of the present work, are excluded.
On the other hand, since a wide range of
$V_0$, $W_0$ values in \cite{Ikeno_EPJ2017}
do reproduce the $d d \rightarrow \eta \,^4$He data, it seems worthwhile
to investigate other optical model predictions in the literature.
The authors of \cite{Gal2017}, for example,
compare their few-body results with existing optical model
calculations by using the following form of the $\eta$-$^4$He potential, with
the complex $\eta$-nucleon scattering amplitude $F_{\eta N}$ chosen from two
different models in the literature \cite{GW,CS}:
\begin{equation}
V(r)=-{6 \pi \over
\mu_{\eta N}}\,F_{\eta N}\,(r_0 \sqrt{\pi})^{-3}
\exp(-{r^2\over r_0^2}).
\label{eq:KMT}
\end{equation}
Inserting the parameter $r_0= 1.267$~fm given in \cite{Gal2017} and
rewriting the above equation for the potential as
$V(r) =
[V_0 + i W_0] \, \exp(-{r^2}/{r_0^2})$, we identify the strengths $V_0$ and
$W_0$ and list them in Table~\ref{table2} for the different cases given in Table 4 of Ref.~\cite{Gal2017}.
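Explicitly, this identification reads
\begin{equation*}
V_0 = -{6 \pi \over \mu_{\eta N}}\,\Re e \,F_{\eta N}\,(r_0 \sqrt{\pi})^{-3}, \qquad
W_0 = -{6 \pi \over \mu_{\eta N}}\,\Im m \,F_{\eta N}\,(r_0 \sqrt{\pi})^{-3},
\end{equation*}
and these relations are used repeatedly in the following sections.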
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}\hline
$\eta N$ model & $\delta\sqrt{s}$ [MeV] & $B_{\eta4He}$ [MeV] &
$\Gamma$ [MeV] & $-V_0$ [MeV] & $-W_0$ [MeV] \\
\hline
GW~\cite{GW} &0 &25.1 &40.8 &175.7&54.2 \\
&-32.4 &1.03 &2.35 &89.7&8.6 \\
CS~\cite{CS} &0 &6.39 &21 &125.87&29.35 \\
&-19.2 &- &- &69.15&5.046 \\
\hline
\end{tabular}
\end{center}
\caption{Strength of the optical potentials corresponding to the
$\eta$-$^4$He states given in \cite{Gal2017}.
$\delta \sqrt{s} = \sqrt{s} - \sqrt{s_{th}}$ with $\sqrt{s}$ being the
energy available in the center of mass of the $\eta N$ system.
$B_{\eta4He}$ and $\Gamma$ are the binding energies and widths of the
$\eta$-$^4$He states.
\label{table2}}
\end{table}
The $\eta N$ amplitude of \cite{GW} (GW) was
obtained within a K-matrix description of the $\pi N$, $\pi \pi N$, $\eta N$ and
$\gamma N$ coupled channels. The authors fitted the $\pi N \to \pi N$,
$\pi N \to \eta N$, $\gamma N \to \pi N$ and $\gamma N \to \eta N$ data in an energy
range of about 100 MeV on either side of the $\eta$ threshold. Ref.~\cite{CS}
presented the $\eta N$ amplitudes calculated within a chirally motivated separable
potential model, with the parameters fitted to $\pi N \to \pi N$ and
$\pi N \to \eta N$ data.
A comparison of the $V_0$ and $W_0$ values in Table~\ref{table2} with the allowed
region of the
$V_0 - W_0$ plane leads us to the conclusion that all the bound states
listed in Table~\ref{table2} are excluded by our analysis.
Having excluded the optical potential predictions of unstable bound states in the literature,
we turn to examine the special
case of an unstable state centered at zero energy.
The case of a zero energy bound state (or zero energy
resonance), sometimes referred to as a transition state
\cite{barnea2}, has been widely studied
in the literature \cite{deloff} in the context of different physical situations and
has also been observed in ultracold atoms \cite{ultracold}.
Let us recall some basic facts: a bound state corresponds to a pole in the
S-matrix for $E < 0$, while a resonance
corresponds to a pole at positive energies. A state at $E = 0$
(usually referred to as a zero energy bound state if the
angular momentum $l >0$, and a zero energy resonance otherwise) leads to a
divergent scattering length,
$a \to \infty$, i.e., the scattering length has a pole at $E =0$.
Ref.~\cite{barnea2} has examined the occurrence of such states for a class
of potentials of the form
$V(r,r_0) = - {g \over r^s} \, f\biggl ( {r\over r_0} \biggr )$
$(g > 0, \,\,r_0 >0)$,
which includes the Gaussian, exponential and Hulth\'en potentials, among others.
For the Gaussian optical potential of the present work, we identify
$g$ with $V_0$, $s=0$ and $f = \exp({-r^2/r_0^2})$.
Analytical as well as numerical solutions of the Schr\"odinger equation
for these potentials are provided in Ref.~\cite{barnea2}. It is shown
there that the existence of the transition state solution depends on a
critical parameter,
$\beta = 2 \mu \, V_0\, r_0^2/ \hbar^2$, numerical values of which are
tabulated for several values of $l$. Taking their value
$\beta = 2.684$ for the Gaussian potential with $l = 0$, with $\mu$
the reduced mass of the $\eta$-$^4$He system and
$r_0 = 1.267$~fm, we find $V_0 = -68.04$~MeV.
Inserting this value into the expression
$V_0 = -[6 \pi/ \mu_{\eta N}]\,\Re e F_{\eta N}\,(r_0 \sqrt{\pi})^{-3}$, arising from
(\ref{eq:KMT}), we determine $\Re e F_{\eta N} = 0.364$~fm.
This value of $\Re e F_{\eta N}$ corresponds to the subthreshold energies
of $\sqrt{s}$ = 1418.2 and 1467~MeV of the GW and CS $\eta$N amplitudes
respectively (see Fig. 1 in Ref. \cite{Gal2017}). The imaginary parts
of the amplitudes can be seen from the same figure in \cite{Gal2017}
(at the corresponding energies) to
be $\Im m F_{\eta N} = 0.0167$~fm and $\Im m F_{\eta N} = 0.0245$~fm for
the GW and CS models respectively. The imaginary part of the optical
potential can now be determined using,
$W_0 = -[6 \pi/
\mu_{\eta N}]\,\Im m F_{\eta N}\,(r_0 \sqrt{\pi})^{-3}$.
Thus, in case of the zero energy resonance, we find the optical
potential parameters, ($V_0$, $W_0$) to be (-68.04, -3.12)~MeV and
(-68.04, -4.55)~MeV for the GW and CS $\eta$-nucleon interactions respectively.
Repeating the exercise for a different value of the Gaussian parameter,
$r_0 = 1.373$~fm as in \cite{Ikeno_EPJ2017}, the potential parameters are
found to be (-58.01, -3.2)~MeV and
(-58.01, -4.9)~MeV for the GW and CS $\eta$-nucleon interactions respectively.
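These numbers can be reproduced with a short script. The sketch below is ours, not part of the original analysis; the constants are standard mass values in MeV, lengths are in fm, and the function names are illustrative:

```python
import math

HBARC = 197.327                                   # MeV fm
M_ETA, M_N, M_HE4 = 547.862, 938.918, 3727.379    # masses in MeV

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

def v0_transition_state(r0, beta=2.684):
    """Depth |V0| (MeV) of a real Gaussian potential supporting an l = 0
    zero-energy state, from beta = 2*mu*V0*r0^2/hbar^2 of Ref. [barnea2]."""
    mu = reduced_mass(M_ETA, M_HE4)               # eta-4He reduced mass, MeV
    return beta * HBARC**2 / (2.0 * mu * r0**2)

def w0_from_amplitude(im_f, r0):
    """|W0| (MeV) from Im F_etaN (fm) via the Gaussian-potential relation
    W0 = -(6 pi / mu_etaN) Im F_etaN (r0 sqrt(pi))^(-3)."""
    mu = reduced_mass(M_ETA, M_N) / HBARC         # eta-N reduced mass, fm^-1
    return 6.0 * math.pi * im_f * HBARC / (mu * (r0 * math.sqrt(math.pi))**3)

print(v0_transition_state(1.267))        # ~68 MeV, cf. |V0| = 68.04 MeV above
print(v0_transition_state(1.373))        # ~58 MeV, cf. |V0| = 58.01 MeV
print(w0_from_amplitude(0.0167, 1.267))  # ~3.1 MeV (GW)
print(w0_from_amplitude(0.0245, 1.267))  # ~4.6 MeV (CS)
```

Small residual differences with respect to the quoted values reflect the rounding of the masses and of $\beta$.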
The above method of first considering the $E=0$ state of a real Gaussian
potential to determine $V_0$, and then finding $W_0$, seems justified
a posteriori given the small values of $W_0$ (compared to $V_0$)
obtained. Indeed, a similar procedure of first finding the binding energy
from the real part of the potential alone, and later obtaining
$\Gamma = - 2 \langle \Psi|\Im m V_{\eta A}|\Psi\rangle$ using perturbation theory, where
$\Psi$ is the solution for the real Hamiltonian, was also used in
\cite{Gal2017}.
Finally, motivated by the above discussion, a renewed search for the $\eta$-$^4$He
states within the model of \cite{Ikeno_EPJ2017} is performed.
At the edge of the allowed region in Fig.~\ref{contour}, very narrow and
weakly bound states of $\eta$-$^4$He, with binding energies and widths in the
ranges of $\sim$ 2 - 230~keV and $\sim$ 8 - 64~keV respectively, are found by
solving the Klein-Gordon equation as in \cite{Ikeno_EPJ2017}. These states
correspond to optical potential parameters $|V_0|$ in the range from
58 to 65~MeV and $W_0$ = 0.5~MeV (red dots in Fig.~\ref{contour}).
For values of $|V_0| <$ 58~MeV, no bound states are found.
We should mention here, however,
that some of the potential parameters are not consistent with the experimental data
on the $\eta$ production cross section above threshold as reported in Ref.
\cite{Ikeno_EPJ2017},
especially for the cases with weak absorption.
Hence we think that a systematic analysis including both the escape and
the conversion cross sections covering the above- and below-threshold region
is necessary in order to investigate the weak absorptive potential region.
\section{Subthreshold considerations and uncertainties}
The $\eta$-nucleus optical potentials are in principle energy dependent;
they depend strongly, for example,
on the energy at which the elementary $\eta$N amplitude, $F_{\eta N}$,
of Eq.~(\ref{eq:KMT}) is evaluated.
In the case of $\eta$-mesic nuclei, the $\eta$N interaction happens at
subthreshold energies and $F_{\eta N}$ should be evaluated at an energy shifted
by an amount $\delta$ below threshold. The importance of taking such a downward shift
into account has been discussed from different points of view in the literature
\cite{CieplyNPA2014,Galarxiv2017,HaiderIJMP,Hoshino}. The authors in
\cite{CieplyNPA2014,Galarxiv2017} (and references therein) provide a detailed analysis
of this topic and derive an expression for $\delta$ which depends on the nuclear
binding energy per nucleon as well as the real part of the optical potential itself.
Refs. \cite{HaiderIJMP,Hoshino}, however,
provide a simpler method with $\delta$ given by the average binding of the
target nucleons.
Since the experimental analysis of the present work relies on the input from the
theoretical calculations in \cite{Ikeno_EPJ2017} where the above effects were not
taken into account explicitly, we shall now try to estimate the uncertainties
on $\sigma_{upp}$ (shown by the red mesh in Fig.~\ref{potsandsigma})
introduced by this omission. To obtain this estimate, we
evaluate the optical potential parameters $V_0$, $W_0$ by comparing
Eq.~(\ref{eq:KMT}) with the form $V(r) = [V_0 + i W_0] \exp(-{r^2\over r_0^2})$.
Thus, as in the previous section,
$V_0 = -[6 \pi/ \mu_{\eta N}]\,\Re e F_{\eta N}\,(r_0 \sqrt{\pi})^{-3}$ and
$W_0 = -[6 \pi/ \mu_{\eta N}]\,\Im m F_{\eta N}\,(r_0 \sqrt{\pi})^{-3}$.
Evaluating $F_{\eta N}$ at threshold, at 7 MeV below threshold
(the binding energy per nucleon for $^4$He) and at 30 MeV below threshold,
we obtain the optical potential parameters given in Table~\ref{table22} for different
models of $F_{\eta N}$ \cite{GW,CS,Mai2,KSW,IOV} in the literature.
The values of $\sigma_{upp}$ corresponding to the parameters $V_0$, $W_0$ in Table~\ref{table22} are read off from Fig.~\ref{potsandsigma} and listed in the same table. Even though the optical
potential parameters change considerably depending on the energy at
which $F_{\eta N}$ is evaluated, the upper limits on the cross sections are not
very sensitive to this change: depending on the model, they
vary by only 0 - 6 \%.
Given that the upper limits $\sigma_{upp}$
determined in the present analysis are not very sensitive to the
parameters $V_0$, $W_0$ (see Table~\ref{table1} and Fig.~\ref{potsandsigma}),
such a small uncertainty was expected.
\begin{figure}[h!]
\centering
\includegraphics[width=12.0cm,height=10cm]{potentials.pdf}
\caption{Upper limits on the cross sections, $\sigma_{upp}$ in nb,
as a function of the optical potential parameters $V_0$ and $W_0$.
The red mesh represents the values determined in the present analysis
(as in Table~\ref{table1}). The symbols are the values of $\sigma_{upp}$ corresponding
to $V_0$, $W_0$ given in Table~\ref{table22} for the different $\eta$N models. The black
symbols correspond to $\sigma_{upp}$ for $V_0$, $W_0$ evaluated using
$F_{\eta N}$ at a subthreshold $\eta$N centre-of-mass energy 7~MeV below threshold, and
the blue symbols with $F_{\eta N}$ at threshold.
~\label{potsandsigma}}
\end{figure}
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}\hline
$F_{\eta N}$ & \multicolumn{3}{c|}{$\delta=0$} & \multicolumn{3}{c|}{$\delta=-7$} & \multicolumn{3}{c|}{$\delta=-30$} \\
\hline
&-$V_{0}$ &-$W_{0}$ & $\sigma_{upp}$& -$V_{0}$ &-$W_{0}$ & $\sigma_{upp}$
&-$V_{0}$ &-$W_{0}$ & $\sigma_{upp}$ \\
\hline
CS~\cite{CS}&97.7 &21.9&6.5 & 72.5&10 &6.88 &44.3 &2.7 &- \\
M2~\cite{Mai2}&54.9 &28.8 &6.6 &45.2 &22 &6.59 &26.6&13 & - \\
KSW~\cite{KSW}&68.4 &32.6 &6.57 &56.8 &22 &6.67 &38.7 &13 &6.5 \\
IOV~\cite{IOV}&42.8 &37.8 &6.55 &36.35 &27.8 &6.46 &20.16 &16.5 &- \\
GW~\cite{GW}&139 &43.6 &- &104 &23.7 &- &71.7 &8.1 &6.95 \\
\hline
\end{tabular}
\end{center}
\caption{Optical potential parameters $V_0$ and $W_0$ (in MeV)
evaluated using (\ref{eq:KMT}) with
the $\eta$-N amplitude $F_{\eta N}$
evaluated at $\delta =0$ (threshold), $\delta$ = -7 MeV
and $\delta$ = -30 MeV, with $\delta = \sqrt{s} -\sqrt{s_{th}}$.
The upper limits on the cross sections listed in this table are read from
the mesh (representing the $\sigma_{upp}$ (in nb)
determined in the present analysis) in Fig.~{\ref{potsandsigma}} at the values of
$V_0$ and $W_0$ in this table. \label{table22}}
\end{table}
\section{Summary and Conclusions}
We performed an analysis in order to constrain the $\eta$-$^4$He
optical potential parameters by comparing a recently developed theoretical
model for $\eta$-$^{4}\hspace{-0.03cm}\mbox{He}$ bound state production
in \mbox{$dd\rightarrow$ $^{3}\hspace{-0.03cm}\mbox{He} N \pi$} reactions~\cite{Ikeno_EPJ2017} with the experimental data collected by WASA-at-COSY~\cite{Skurzok_NPA}.
Convoluting the theoretical cross section with experimental resolutions,
we estimated the upper limits of the total cross sections for the formation of the $\eta$-mesic Helium nuclei in \mbox{$dd\rightarrow$ $^{3}\hspace{-0.03cm}\mbox{He} N \pi$} processes at a 90\% confidence level.
Comparison of the determined upper limits for the creation of $\eta$-mesic nuclei via the \mbox{$dd\rightarrow$ $^{3}\hspace{-0.03cm}\mbox{He} N \pi$} process with the cross sections obtained in Ref.~\cite{Ikeno_EPJ2017} excludes a wide range of $\eta$-$^{4}\hspace{-0.03cm}\mbox{He}$ optical potential parameters. With $|V_0|$ and $|W_0|$
restricted to less than 60 MeV and 7 MeV respectively, most predictions of
$\eta$-mesic helium states seem to be excluded within the present analysis.
Extremely narrow and loosely bound states within the model of \cite{Ikeno_EPJ2017},
however, do appear in the
allowed region of the optical potential parameters.
In spite of some shortcomings, such as the absence of an explicit treatment of the
strong energy dependence of the $\eta$N interaction \cite{Galarxiv2017}
and the fact that, in principle, the $\eta$-helium nuclei should be treated within a few-body
formalism \cite{Fix2017,Gal2017,RakitPRC,KelkarPRL}, it is worth noting that,
in the decades-long search
for $\eta$-mesic nuclei, the present work is a first
attempt to combine the experimental data below the $\eta$ production threshold with
predictions from a theoretical model that also reproduces the existing data above
threshold. There exist approaches such as the coupled-channels
generalization of the optical potential~\cite{WycechKrzemien} which can bring
out interesting aspects related to the existence of the $\eta$-mesic helium.
Hence, it is hoped that the optical
model analysis of the present work will provide guidance in narrowing down
the search for $\eta$-mesic $^{4}\hspace{-0.03cm}\mbox{He}$.
\section{Acknowledgements}
We acknowledge the support from the Polish National Science Center through grant No. 2016/23/B/ST2/00784 and the Faculty of Science, Universidad de los Andes,
Colombia, through project number P17.160322.007/01-FISI02. This work was partly supported by JSPS KAKENHI Grant Numbers JP16K05355 (S.H.) and JP17K05443 (H.N.) in Japan.
\section*{References}
\section{Introduction}
The field of cavity quantum electrodynamics studies phenomena
arising from the interaction between matter and light---the latter
in the form of electromagnetic modes confined in a cavity---in the
regime in which the quantum nature of the light is
unveiled~\cite{MicrocavitiesBook}. The matter component commonly
involves two-level systems, which describe a wide variety of
emitters (from real spins to quantum dots, atoms, molecules,
NV-centers, or qubits, among others) as long as the state of the
system can be properly represented by a two-dimensional Hilbert
space. The most simple case of a single two-level system coupled
to a quantized cavity mode is described by the Jaynes-Cummings
model~\cite{JaynesCummings1963}, later extended to an arbitrary
number of emitters in the so-called Tavis-Cummings
model~\cite{TavisCummings1968}. These Hamiltonians, together with
specific procedures to introduce the effect of
losses~\cite{OpenBook}, provide a theoretical description for a
broad variety of configurations.
The interaction of light with a collection of quantum emitters has
been intensively studied considering an extensive range of
systems, since it is of broad interest in research areas ranging
from lasing~\cite{Lasing1, Lasing2} and
superradiance~\cite{Haroche1982, Hommel2007} to non-classical
light generation~\cite{VuckovicNat2008},
entanglement~\cite{Haroche2001}, or quantum information
processing~\cite{Zoller1995, Imamoglu1999}. In fact, there are
several optical cavity configurations capable of supporting an
electromagnetic mode through semiconductor, metal, or dielectric
structures, thereby suiting the specific requirements. Starting from high-quality optical microcavities~\cite{MicrocavitiesBook,
Vahala2003} (consisting of different planar
configurations~\cite{pillarMicrocavity,braggMicrocavity},
whispering galleries~\cite{whisper1, whisper2}, or
photonic-crystal cavities~\cite{microPhotCrystal1,
microPhotCrystal2,nanoPhotCrystal}, to name a few), the aim of
reaching higher light-matter couplings---especially on the route
towards room-temperature devices---has led to the reduction of
the effective volume even below the diffraction limit of classical
plasmonic nanostructures. The price to pay for the large confinement
is that these structures suffer from large dissipative losses,
reducing in turn the quality factor of the cavity.
The balance between coupling strength and losses leads to the
well-known distinction between the weak and strong coupling
regimes. Within the weak coupling regime, the principal feature
consists in the enhancement of the spontaneous emission owing to
the coupling of the emitters to the resonant cavity, known as
the Purcell effect~\cite{Purcell1946}. On the contrary, if the
matter-field interaction becomes larger than the emitter and
cavity relaxation rates, the system may enter the strong coupling
regime, in which the genuine eigenstates turn out to be a quantum
mixture of matter and light. These are referred to as dressed
states or polaritons, arising due to the rapid exchange of energy
between cavity and emitters. The regime of strong coupling has
been demonstrated experimentally for systems involving a large
number of emitters for
microcavities~\cite{Arakawa1992,Weisbuch1994} as well as for
plasmonic nanocavities~\cite{Bellesa2004,Ebbesen2011}. Reaching
this regime by involving just a single emitter becomes more
challenging, since it requires a cavity with a higher quality
factor or stronger field confinement. While experimental realizations of single-emitter strong
coupling have been reported in microcavities~\cite{Kimble2003,
Forchel2004} and photonic crystals~\cite{Yoshie2004} in the past,
only recently has the strong coupling regime been reached in
plasmonic nanocavities with a single molecule~\cite{Baumberg2016,
Santhosh2016}. Note that by varying the number of emitters, the
system can be naturally tuned from one regime to the other---in
these hybrid systems, the participation of a collection of $N$
quantum emitters makes the interaction stronger through the
appearance of the characteristic $\sqrt{N}$ factor in the
effective collective coupling strength~\cite{Dicke1954}.
Whereas coupling a large number of emitters to a nanocavity mode
is not difficult from the experimental perspective, addressing the
complete problem theoretically presents a significant challenge. There have
been some theoretical advances in the treatment of plasmonic
strong coupling for large ensembles of emitters in the vanishing
population regime~\cite{Delga2014}. There, the fermionic character
of the quantum emitters is neglected by modelling them as bosonic
harmonic oscillators, thus disregarding excitonic
nonlinearities and any saturation effects in the photon
population dynamics of the system. One step beyond this
completely bosonic description was also carried out through the
so-called Holstein-Primakoff approach~\cite{GonzalezTudela2013, DeLiberato2017},
which is equivalent to introducing a macroscopic third-order
susceptibility for the medium embedding the emitter
ensemble~\cite{Alpeggiani2016}. Very recently, we proposed a
theoretical framework~\cite{Rocio2017} able to describe
plasmon-exciton strong coupling for a mesoscopic number of quantum
emitters which fully accounted for their fermionic character. It
is precisely the inherent quantum nonlinear character of emitters
that forms the basis of many interesting phenomena when coupling
light and matter, such as the well-known photon
antibunching~\cite{KimbleMandel1977}. In general, intensity
correlations of the emitted light are related to the probability
of detecting coincident photons, so that they are used as a
quantifier of multiple-photon events (the suppression of which is
an important requirement for single-photon emission). Importantly,
although this magnitude has been extensively used to discriminate
single-photon sources, some recent papers point out that it is
not by itself a reliable indicator~\cite{Laussy}.
Non-classical light generation is extremely important in the
fields of quantum cryptography~\cite{quantumCryptography}, quantum
sensing~\cite{quantumSensing}, quantum
metrology~\cite{quantumMetrology}, or quantum
communication~\cite{Kimble2008}, among other emerging photonic
quantum technologies. The production of single photons on demand
has attracted great effort, and different methods have been used
for their extraction~\cite{SinglePhotons2005}, such as faint laser
pulses~\cite{faintPulses}, single-emitter
systems~\cite{singleAtom,singleMolecule,diamond}, non-linear
crystals~\cite{nanocrystal} or parametric
downconversion~\cite{parametricDownconversion}. All these exploit
the inherent non-linearity of the photonic system. When using a
large number of emitters, their collective response becomes
approximately bosonic, and non-classical light is not expected to
be generated. Nevertheless, it has been shown that when
considering a mesoscopic number of emitters, these non-linearities
can be preserved and antibunched light may still be
produced~\cite{Kimble1991, Carmichael1991, Foster2000}. Much
effort has been devoted to the analysis of the evolution of the
quantum statistical properties of the light emitted by hybrid
systems for increasing number of emitters, assessing the
possibility of generating non-classical light with mesoscopic
ensembles. Some studies involve just a few quantum
emitters~\cite{Auffeves2011, Radulaski2017}, while others consider very
large ensembles under particular
simplifications~\cite{MeiserHolland2010a,Richter2015,Grangier2014}.
From a technical perspective, there exist brute-force approaches
based on Monte-Carlo techniques~\cite{TemnovWoggon2005,
TemnovWoggon2009}, and developments towards the efficient
treatment of large ensembles~\cite{Richter2016} thanks to the use
of symmetries.
In this article, we study the coherence properties of the light
radiated by a collection of $N$ identical quantum emitters placed
inside a generic optical cavity when the system is coherently
pumped by a laser. Two paradigmatic configurations are explored,
namely, a collection of quantum dots coupled to a dielectric
microcavity, and an ensemble of organic molecules within a
plasmonic nanocavity (note that the pumping and the emission
differ according to the open or closed character, respectively, of
these systems). Whereas the former have been extensively
investigated within the field of semiconductor quantum
electrodynamics over the last two decades, the exploration of the
latter for quantum optical purposes is still in a very early
stage. To our knowledge, this article provides the first
comparative study between both physical platforms for
non-classical light generation. In order to perform a meaningful
comparison between both physical systems, we treat them on the
same footing. In turn, this means that aspects that may be
relevant in specific experimental implementations of both
configurations, such as spatial and spectral inhomogeneities or
emitter-emitter interactions, are neglected. The statistics of the
emitted light are analysed for these two cases, determining the
parameter ranges in which photon bunching and antibunching appear.
In particular, the focus is on the two effects that can lead to
single-photon emission: the well-known {\it photon blockade
effect} and the so-called {\it unconventional antibunching}, which
originates from interference effects and is thus here referred to as
{\it interference-induced correlations}. On the basis of the
quantum master equation for the extended Tavis-Cummings model,
analytical and numerical computations are performed by using an
effective Hamiltonian approach.
Our theoretical framework is described in
\autoref{sec:TheoreticalFramework}, beginning with the
introduction of the two systems under consideration and their
modelling (\autoref{subsec:TF_System}). The procedure to determine
the steady state of the system (\autoref{subsec:TF_SteadyState})
and compute the correlation functions
(\autoref{subsec:TF_CorrelationFunctions}) is described next. In
\autoref{sec:Results}, we present a comprehensive analysis of
photon statistics for realistic plasmonic nanocavities and
dielectric microcavities. First, the intensity and second-order
correlation function of the emitted light are explored for various
ensemble sizes (\autoref{subsec:R_IntensityCorrelation}), and
analytical expressions for these magnitudes for both cavity
configurations are provided. Then, the two different mechanisms
leading to sub-Poissonian light are studied in detail
(\autoref{subsec:R_TwoMechanisms}), followed by the analysis of
the effect of spectral detuning between cavity and emitters on the
photon correlations (\autoref{subsec:R_Detuning}). The evolution
of the second-order correlation function at time delays different
from zero is also investigated (\autoref{subsec:R_Tau}). Finally,
the conclusions of the work are presented in
\autoref{sec:Conclusions}.
\section{Theoretical framework} \label{sec:TheoreticalFramework}
\subsection{System} \label{subsec:TF_System}
The system under study consists of $N$ quantum emitters---modelled
as simple two-level systems with a ground $| \mbox{g}_n \rangle$ and an
excited state $| \mbox{e}_n \rangle$---located in an optical cavity.
Every emitter is coupled to a quantized single cavity mode through
the electric-dipole interaction, and the emitters are assumed not to
interact with one another except through the cavity mode (note that
direct emitter-emitter interactions become significant only in dense
ensembles~\cite{Rocio2017}). A laser field coherently pumps the
system, and the emitted light is collected in a detector located
in the far-field. An illustration of the system is depicted in
\autoref{fig:modelScheme}, where the two cases of a nano- (left)
and a micro- (right) cavity are distinguished. Both the pumping
and the emission varies according to the corresponding open or
closed character: whereas for nanocavities the entire system is
pumped and the radiation from both the quantum emitters and the
cavity is observed (open configuration), in microcavities the
coupling to the outside mode is mediated by the mirrors, such that
only the cavity mode is pumped and only its emission is received
at the detector (closed configuration). Apart from this
fundamental distinction, the size and characteristic losses are
also differentiating features between these two types of cavities.
By nanocavities, we are referring to plasmonic
cavities~\cite{SavastaACS2010,Sandoghdar2013,
Shegai2015,Peyskens2017}, where the spatial dimensions are reduced
to the nanometre scale and cavity losses are substantial ($\sim
0.1$~eV)~\cite{Tame2013}. In contrast, by microcavities we refer
here to photonic crystals~\cite{Noda2003,Englund2007} and other
semiconductor structures~\cite{Jewell1989,Forchel2004} with sizes
of the order of micrometres and whose absorption is much lower
($\sim 0.1$~meV)~\cite{Vahala2003}.
The Hamiltonian of the total system involves the excitation of both the
cavity mode (with transition frequency $\omega_{\mbox{\tiny{C}}}$ and bosonic
creation and annihilation operators $a^\dagger$ and $a$) and the
collection of $N$ quantum emitters (with transition frequency
$\omega_n$ and creation and annihilation operators
$\sigma_n^\dagger = |\mbox{e}_n \rangle \langle \mbox{g}_n |$ and $\sigma_n =
|\mbox{g}_n \rangle \langle \mbox{e}_n |$ for the $n$-th emitter which, in
turn, define the operator $\sigma_n^z = [\sigma_n^\dagger,
\sigma_n]/ 2 $), as well as coherent pumping by a laser of frequency $\omega_{\mbox{\tiny{L}}}$. In the Schr\"odinger picture, it reads
(from now on, $\hbar = 1$):
\begin{equation}
\begin{split}
H = \ & \omega_{\mbox{\tiny{C}}} \ a ^\dagger a + \sum_{n=1}^N \omega_{n} \sigma_n^z
+ \sum_{n=1}^N \lambda_n (a^\dagger \sigma_n + a \sigma_n^\dagger) \\
& + \Omega_{\mbox{\tiny{C}}} ( a ^\dagger \ e^{- \mbox{\scriptsize{i}} \omega_{\mbox{\tiny{L}}} t} + a \ e^{ \mbox{\scriptsize{i}} \omega_{\mbox{\tiny{L}}} t}) \\
& + \sum_{n=1}^N \Omega_{n} ( \sigma_n^\dagger \ e^{- \mbox{\scriptsize{i}} \omega_{\mbox{\tiny{L}}} t} + \sigma_n \ e^{ \mbox{\scriptsize{i}} \omega_{\mbox{\tiny{L}}} t}) \ ,
\end{split}
\label{eq:HamiltonianSP}
\end{equation}
where $\lambda_n$ is the coupling between the $n$-th quantum
emitter and the cavity mode, and $\Omega_{\mbox{\tiny{C}}}$ and $\Omega_n$ are
the laser pumping to the cavity mode and the $n$-th quantum
emitter respectively. If we define the effective dipole moments
associated with the cavity, $\boldsymbol{\mu}_{\mbox{\tiny{C}}}$, and with the
$n$-th emitter, $\boldsymbol{\mu}_{n}$, these pumping frequencies
can be expressed in terms of the laser field amplitude
$\boldsymbol{ E}_{\mbox{\tiny{L}}}$ as $\Omega_{\mbox{\tiny{C}}} = \boldsymbol{ E}_{\mbox{\tiny{L}}}
\cdot \boldsymbol{\mu}_{\mbox{\tiny{C}}}$ and $\Omega_{n} =
\boldsymbol{E}_{\mbox{\tiny{L}}} \cdot \boldsymbol{\mu}_{n}$. In the
description of a microcavity, only the cavity is pumped---hence
we set $\Omega_n = 0$ for all $n$. Note also that the
{\it rotating wave approximation}~\cite{WallsMilburnBook} has been
introduced in \autoref{eq:HamiltonianSP}, which implies that the
emitter-cavity coupling is low enough to disregard the fast-rotating terms.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{A_modelSchemeH.pdf}
\caption{Scheme of the systems, composed of a collection of quantum emitters coupled to an electromagnetic mode supported by either a nano- (left) or a micro- (right) cavity. The set of parameters characterising the system are sketched.}
\label{fig:modelScheme}
\end{figure}
Through the coherent excitation, the Hamiltonian
(\ref{eq:HamiltonianSP}) acquires an explicit dependence on time,
which can be easily removed by transforming it into a rotating
frame. By means of the unitary operator
$
\mathcal{U}_0(t) = \mbox{exp}[{- \mbox{i} A t}] \ ,
$
with
$
A = \omega_{\mbox{\tiny{L}}} a^\dagger a + \sum_{n=1}^N \omega_{\mbox{\tiny{L}}} \sigma_n^z \ ,
$
the Hamiltonian in the transformed frame,
$\tilde H = \mathcal{U}_0^\dagger H \mathcal{U}_0 - A$,
becomes:
\begin{equation}
\begin{split}
\tilde H = \ & \Delta_{\mbox{\tiny{C}}} \ a ^\dagger a + \sum_{n=1}^N \Delta_{n} \sigma_n^z
+ \sum_{n=1}^N \lambda_n (a^\dagger \sigma_n + a \sigma_n^\dagger) \\
& + \Omega_{\mbox{\tiny{C}}} ( a ^\dagger + a )
+ \sum_{n=1}^N \Omega_{n} ( \sigma_n^\dagger + \sigma_n) \ ,
\end{split}
\label{eq:HamiltonianIP}
\end{equation}
where the detunings from the laser corresponding to the cavity
mode, $ \Delta_{\mbox{\tiny{C}}} = \omega_{\mbox{\tiny{C}}} - \omega_{\mbox{\tiny{L}}}$, and the $n$-th
quantum emitter, $ \Delta_{n} = \omega_n - \omega_{\mbox{\tiny{L}}}$, have
been defined.
When all quantum emitters have the same transition frequency
$\omega_{\mbox{\tiny{QE}}}$ (consequently, the same detuning $\Delta_{\mbox{\tiny{QE}}} =
\omega_{\mbox{\tiny{QE}}} - \omega_{\mbox{\tiny{L}}}$) and equal couplings to the cavity,
$\lambda$, and to the laser field, $\Omega_{\mbox{\tiny{QE}}} =
\boldsymbol{E}_{\mbox{\tiny{L}}} \cdot \boldsymbol{\mu}_{\mbox{\tiny{QE}}}$, the above
expression for the Hamiltonian (\ref{eq:HamiltonianIP}) reduces
to:
\begin{equation}
\begin{split}
\tilde H = \ & \Delta_{\mbox{\tiny{C}}} \ a ^\dagger a + \Delta_{\mbox{\tiny{QE}}} S^z
+ \lambda (a^\dagger S^- + a S^+) \\
&
+ \Omega_{\mbox{\tiny{C}}} ( a ^\dagger + a )
+ \Omega_{\mbox{\tiny{QE}}} ( S^+ + S^-) \ ,
\end{split}
\label{eq:HamiltonianIPbright}
\end{equation}
where the bright mode creation and annihilation operators, $S^+ =
\sum_{n=1}^N \sigma_n^\dagger $ and $S^- = (S^+)^\dagger$,
together with the operator $S^z = \sum_{n=1}^N \sigma_n^z$, have
been introduced for this set of $N$ emitters. These collective operators
behave like standard spin-$N/2$ operators.
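As a quick consistency check of this spin-$N/2$ correspondence (our own illustration; the value of $N$ is arbitrary), the amplitudes with which $S^+$ climbs out of the two lowest symmetric states follow from the standard spin-$j$ ladder formula $\sqrt{(j-m)(j+m+1)}$ with $j = N/2$:

```python
import numpy as np

# Spin size for an ensemble of N emitters (N is an arbitrary example value).
N = 7
j = N / 2

def ladder_amp(m):
    """Standard spin-j raising amplitude <j, m+1| J^+ |j, m>."""
    return np.sqrt((j - m) * (j + m + 1))

# Raising amplitudes out of the ground (m = -j) and singly excited
# (m = -j + 1) symmetric states: expected sqrt(N) and sqrt(2(N - 1)).
amp_0_to_1 = ladder_amp(-j)
amp_1_to_2 = ladder_amp(-j + 1)
```

These amplitudes, $\sqrt{N}$ and $\sqrt{2(N-1)}$, are consistent with the singly and doubly excited bright states introduced below.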
Note that the coupling $\lambda$ is just the interaction between the emitter
dipole moment and the near field of the cavity, that is, $\lambda
= \boldsymbol{E}_{\mbox{\tiny{C}}} \cdot \boldsymbol{\mu}_{\mbox{\tiny{QE}}}$. Some tests
beyond these assumptions, treating the emitters and their
respective couplings individually, can be found in
Ref.~\cite{Rocio2017}.
\subsection{Steady state of the system} \label{subsec:TF_SteadyState}
To study the properties of the emitted light, we first need to
determine the steady state of the system. The time evolution of
the density matrix $\rho$ describing the system is governed by the
following master equation:
\begin{equation}
\frac{\mbox{d}}{\mbox{d} t} \rho = - \mbox{i} \ [\tilde H, \rho]
+ \frac{\gamma_{\mbox{\tiny{C}}}}{2}\mathcal{L}_a[\rho] +\frac{\gamma_{\mbox{\tiny{QE}}}^{\rr}}{2} \mathcal{L}_{S^-}[\rho] + \sum_{n=1}^N \frac{\gamma_{\mbox{\tiny{QE}}}^{\nr}}{2} \mathcal{L}_{\sigma_n}[\rho]\ ,
\label{eq:MasterEquation}
\end{equation}
where the Lindblad terms, given by $\mathcal{L}_\mathcal{O} = 2
\mathcal{O} \rho \mathcal{O}^\dagger - \mathcal{O}^\dagger
\mathcal{O} \rho - \rho \mathcal{O}^\dagger \mathcal{O}$ for an
arbitrary operator $\mathcal{O}$, account for the losses arising
from both cavity and quantum emitters (with decay rates
$\gamma_{\mbox{\tiny{C}}}$ and $\gamma_{\mbox{\tiny{QE}}}$ respectively). The superscripts
stand for radiative (r) and non-radiative (nr) damping. Note that
while all cavity losses are included in $\mathcal{L}_a$, two
different terms have to be added for the quantum emitters:
radiative losses are described through the bright mode operator
$S^-$ (corresponding to the assumption that the emitters are at
sub-wavelength distances and thus radiate like a collective dipole),
but non-radiative losses are assigned to single emitters.
Therefore, it is the single-atom operator $\sigma_n$ that is
involved in the latter case.
In the regime of sufficiently low driving intensity, the contribution
of the so-called {\it refilling} or {\it feeding} terms
$\mathcal{O} \rho \mathcal{O}^\dagger$ appearing in the Lindblad
superoperators remains negligible~\cite{Rocio2017}. When they are
removed, the Lindblad master equation (\ref{eq:MasterEquation}) becomes
equivalent to the Schr\"odinger equation $\mbox{d} |\psi \rangle
/ \mbox{d} t = - \mbox{i} H_{\rm eff} | \psi \rangle$ with an effective
Hamiltonian:
\begin{equation}
H_{\rm eff} = \tilde{H} - \mbox{i} \frac{\gamma_{\mbox{\tiny{C}}}}{2} a^\dagger a
- \mbox{i} \frac{\gamma_{\mbox{\tiny{QE}}}^{\rr}}{2} S^+ S^-
- \mbox{i} \frac{\gamma_{\mbox{\tiny{QE}}}^{\nr}}{2} S^z \ ,
\label{eq:EffectiveHamiltonian}
\end{equation}
where $\tilde H$ is given by \autoref{eq:HamiltonianIPbright}.
In this effective Hamiltonian, only bright mode operators appear.
Within this approach, the dark states
of the ensemble, superpositions of the quantum emitter excitations
that do not couple to the cavity or the external light, can thus be
disregarded without further approximation. This corresponds to a
crucial reduction in numerical effort when
considering a large number of emitters $N$: instead of having to
consider $N$ and $N(N-1)$ states in the one- and two-excitation
manifold respectively, only one singly and one doubly excited bright
state, $\sum_{n=1}^{N} \sigma_n^\dagger | 0 \rangle/ \sqrt{N}$
and $\sum_{n, m =1}^{N} \sigma_n^\dagger \sigma_m^\dagger
| 0 \rangle / \sqrt{N (N-1)}$ (with $n \neq m$), play a role.
In the low pumping regime, we can perturbatively solve the Schr\"odinger
equation $H_{\mbox{\scriptsize{eff}}} |\psi \rangle = 0$ (equivalent to finding the
steady-state solution of the master equation, $\mbox{d} \rho / \mbox{d}
t = 0$). Considering the incident laser amplitude $E_{\mbox{\tiny{L}}}$ as the small
parameter, the effective Hamiltonian
(\ref{eq:EffectiveHamiltonian}) can be split as $H_{\mbox{\scriptsize{eff}}} = H_0 +
E_{\mbox{\tiny{L}}} V$, where the second term is the driving:
\begin{equation}
E_{\mbox{\tiny{L}}} V = \Omega_{\mbox{\tiny{C}}}(a^\dagger + a) + \Omega_{\mbox{\tiny{QE}}} (S^+ + S^-) \ .
\end{equation}
The steady state $| \psi \rangle$ can also be expanded in a power
series of $E_{\mbox{\tiny{L}}}$, $ | \psi \rangle = \sum_{k = 0} E_{\mbox{\tiny{L}}}^k |
\psi_k \rangle $. Substituting these expansions into the
equation $H_{\mbox{\scriptsize{eff}}} |\psi \rangle= 0$ and grouping terms for each
power of $E_{\mbox{\tiny{L}}}$ results in a set of linear equations. The
zeroth-order equation leads simply to $|\psi_0 \rangle = |0
\rangle$ (that is, the ground state, which represents no
excitations in the system) whereas the $k$th-order equation turns
out to be $H_0 | \psi_k \rangle + V| \psi_{k-1} \rangle = 0$.
These equations can be successively solved so that the steady
state is finally obtained from this perturbative approach.
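This order-by-order scheme is straightforward to implement numerically. The following minimal Python sketch (our own illustration, not the authors' code; all parameter values are placeholders, and the laser amplitude is set to one so that $\Omega_{\mbox{\tiny{C}}} = \mu_{\mbox{\tiny{C}}}$ and $\Omega_{\mbox{\tiny{QE}}} = \mu_{\mbox{\tiny{QE}}}$) builds $H_{\rm eff}$ in the truncated basis $|n_c, n_b\rangle$ of cavity photons and bright-mode excitations and solves the hierarchy manifold by manifold:

```python
import numpy as np

# Placeholder parameters (arbitrary units); laser amplitude E_L = 1.
N = 5                                    # number of emitters
lam = 0.3                                # emitter-cavity coupling
gC, gr, gnr = 0.5, 0.02, 0.1             # cavity / radiative / non-radiative rates
DC, DQE = 0.1, 0.2                       # laser detunings
muC, muQE = 3.0, 1.0                     # dipole moments (drive/detection weights)

# Basis |n_c, n_b>: cavity photons and bright-mode excitations, n_c + n_b <= 2.
basis = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
idx = {s: i for i, s in enumerate(basis)}
dim = len(basis)

def op(matels):
    """Assemble an operator matrix from {(bra, ket): amplitude}."""
    M = np.zeros((dim, dim), complex)
    for (bra, ket), v in matels.items():
        M[idx[bra], idx[ket]] = v
    return M

a = op({((0, 0), (1, 0)): 1, ((1, 0), (2, 0)): np.sqrt(2),
        ((0, 1), (1, 1)): 1})                          # cavity annihilation
Sm = op({((0, 0), (0, 1)): np.sqrt(N), ((1, 0), (1, 1)): np.sqrt(N),
         ((0, 1), (0, 2)): np.sqrt(2 * (N - 1))})      # bright-mode lowering
ad, Sp = a.conj().T, Sm.conj().T
Sz = np.diag([s[1] for s in basis]).astype(complex)    # emitter excitation number

# Non-Hermitian H0 (losses included) and driving V.
H0 = ((DC - 0.5j * gC) * (ad @ a) + (DQE - 0.5j * gnr) * Sz
      - 0.5j * gr * (Sp @ Sm) + lam * (ad @ Sm + a @ Sp))
V = muC * (ad + a) + muQE * (Sp + Sm)

# H0 conserves the excitation number, so each order is solved in its manifold:
# |psi_0> = |0, 0>, then H0 |psi_k> = -V |psi_{k-1}> restricted to manifold k.
b1, b2 = slice(1, 3), slice(3, 6)
psi0 = np.zeros(dim, complex); psi0[0] = 1.0
psi1 = np.zeros(dim, complex)
psi1[b1] = -np.linalg.solve(H0[b1, b1], (V @ psi0)[b1])
psi2 = np.zeros(dim, complex)
psi2[b2] = -np.linalg.solve(H0[b2, b2], (V @ psi1)[b2])

# Nanocavity observables: detected field E+ ~ muC a + muQE S-.
Ep = muC * a + muQE * Sm
I = np.linalg.norm(Ep @ psi1) ** 2
g20 = np.linalg.norm(Ep @ Ep @ psi2) ** 2 / I ** 2
```

Replacing `Ep` by `muC * a` reproduces the closed-microcavity detection scheme discussed later in the text.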
\subsection{First- and second-order correlation functions} \label{subsec:TF_CorrelationFunctions}
Once the steady state is known, the correlation properties of the
emitted light can be calculated. The negative-frequency part of
the scattered far-field operator at the detector,
$\boldsymbol{E}_{\rm D}^-$, depends on the type of cavity we
consider: while for nanocavities the radiation from both cavity
and quantum emitters is taken into account, $\boldsymbol{E}_{\rm
D}^- \propto \boldsymbol{\mu}_{\mbox{\tiny{C}}} a^\dagger +
\boldsymbol{\mu}_{\mbox{\tiny{QE}}} S^+$, for microcavities just the emission
coming from the cavity is detected, $\boldsymbol{E}_{\rm D}^-
\propto \boldsymbol{\mu}_{\mbox{\tiny{C}}} a^\dagger$. This reflects the
open/closed character of each type of cavity. Note that the
differences among the electromagnetic Green's functions
describing the emission from the cavity and from the various emitters
can be neglected in nanocavities due to their deeply subwavelength
dimensions. The light intensity at a given point $I ({\boldsymbol
r}, t)$ is defined in terms of the electric field operator as:
\begin{equation}
I ({\boldsymbol r}, t) = \langle \boldsymbol{E}_{\rm D}^- ({\boldsymbol r}, t) \boldsymbol{E}_{\rm D}^+ ({\boldsymbol r}, t) \rangle
\end{equation}
and the two-time second-order correlator $G^{(2)}$ and its
normalized version $g^{(2)}$ are given by~\cite{LoudonBook}:
\begin{equation}
\begin{split}
G^{(2)} & ({\boldsymbol r_1}, t_1; {\boldsymbol r_2}, t_2) = \\
& \langle \boldsymbol{E}_{\rm D}^- ({\boldsymbol r_1}, t_1) \boldsymbol{E}_{\rm D}^- ({\boldsymbol r_2}, t_2) \boldsymbol{E}_{\rm D}^+ ({\boldsymbol r_2}, t_2) \boldsymbol{E}_{\rm D}^+ ({\boldsymbol r_1}, t_1) \rangle
\\
g^{(2)} & ({\boldsymbol r_1}, t_1; {\boldsymbol r_2}, t_2) = \frac{G^{(2)} ({\boldsymbol r_1}, t_1; {\boldsymbol r_2}, t_2)}{I ({\boldsymbol r_1}, t_1) I ({\boldsymbol r_2}, t_2) } \ ,
\end{split}
\end{equation}
where $\langle \cdot \rangle$ denotes time average. The latter is
related to the (conditional/joint) probability of detecting a
photon in the detector placed at $r_2$ at time $t_2$ once a photon
has reached the detector placed at $r_1$ at time $t_1$.
Considering a fixed position, it can be rewritten in terms of the
time delay $\tau = t_2 - t_1$ as:
\begin{equation}
g^{(2)} (\tau) = \frac{\langle \boldsymbol{E}_{\rm D}^- (t) \boldsymbol{E}_{\rm D}^- (t + \tau) \boldsymbol{E}_{\rm D}^+ (t + \tau) \boldsymbol{E}_{\rm D}^+ (t) \rangle}
{I (t) I (t + \tau) } \ ,
\end{equation}
which does not depend on time $t$ in the steady state. Note that
for $\tau = 0$, it yields the probability of detecting two
coincident photons. When $\gtau < \gz$, registering two photon
counts with a delay $\tau$ is less likely than the observation of
two simultaneous photons. This is known as {\it photon bunching},
since photons tend to be distributed close together, in ``bunches'',
instead of being located further apart. The opposite situation,
$\gtau > \gz$, is known as {\it photon antibunching}: photons tend
to arrive at different times. For a coherent source
of light, $\gtau = 1$ for all $\tau$, which means that photons
arrive independently from one another at the detector (note that
$\gtau \rightarrow 1$ when $\tau \rightarrow \infty$ for any light
source), leading to a Poissonian distribution of arrival times.
The statistics of the
light is then said to be {\it super-Poissonian} ($\gz
> 1$) or {\it sub-Poissonian} ($\gz<1$) if the coincidence of two
photons at the detector is, respectively, more or less likely than
that for a coherent light source (random case). The concepts of
antibunching and sub-Poissonian statistics are often not distinguished in
the literature since they usually occur together. Nevertheless, they
are not equivalent concepts but reflect distinct effects; indeed,
sub-Poissonian statistics can take place together with
bunching~\cite{ZouMandel1990}. Since the evolution of $\gtau$ with
time delay is explored in this article, we distinguish them
rigorously throughout the text to avoid misunderstandings. All the
same, both antibunching and sub-Poissonian statistics are
phenomena related to non-classical light, as the conditions
defining them cannot be fulfilled for classical
fields~\cite{WallsMilburnBookSTATISTICS}. Therefore, they offer a
means to measure the classicality/quantumness of light.
The first- and second-order correlation functions can be evaluated
by considering the perturbative solution for the steady state of
the system described above. The scattering intensity $I$ and the
normalized zero-delay second-order correlation function $\gz$ in
the steady state are thus computed as:
\begin{equation}
\begin{split}
I & = \langle \psi_1 | \boldsymbol{E}_{\rm D}^- \boldsymbol{E}_{\rm D}^+ | \psi_1 \rangle \\
g^{(2)} (0) & = \langle \psi_2 | \boldsymbol{E}_{\rm D} ^- \boldsymbol{E}_{\rm D}^- \boldsymbol{E}_{\rm D}^+ \boldsymbol{E}_{\rm D}^+ | \psi_2 \rangle
/ I ^2 \ .
\label{eq:ComputingIandG2}
\end{split}
\end{equation}
From these equations, it follows that our perturbative
calculations can be restricted to second order, and we can
truncate the Hilbert space at the two-excitation manifold. To
compute the second-order correlation function $\gtau$, the
evolution operator
$
\mathcal{U}(t) = \mbox{exp}[{- \mbox{i} H_{\mbox{\scriptsize{eff}}} t}]
$
has to be introduced:
\begin{equation}
\begin{split}
g^{(2)} (\tau) = & \langle \boldsymbol{E}_{\rm D} ^- (0) \boldsymbol{E}_{\rm D}^- (\tau) \boldsymbol{E}_{\rm D}^+ (\tau) \boldsymbol{E}_{\rm D}^+ (0) \rangle
/ I ^2
\\
= & \langle \psi | \boldsymbol{E}_{\rm D} ^- \mathcal{U}^\dagger (\tau) \boldsymbol{E}_{\rm D}^- \boldsymbol{E}_{\rm D}^+ \mathcal{U}(\tau) \boldsymbol{E}_{\rm D}^+ | \psi \rangle
/ I ^2 \ ,
\end{split}
\end{equation}
where $ \boldsymbol{E}_{\rm D} ^- \equiv \boldsymbol{E}_{\rm D} ^- (0)$, and the perturbative solution
of $| \psi \rangle$ up to second order is also used in the
calculation.
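This evolution-operator route can be sketched for the simplest case (our own self-contained illustration for a single emitter, $N = 1$, with placeholder parameter values; the truncation at two excitations is justified only at low pumping). The non-unitary propagator is built by diagonalizing $H_{\rm eff}$, which here includes the drive:

```python
import numpy as np

# Single emitter (N = 1) coupled to a lossy cavity; placeholder parameters,
# laser amplitude set to one.
lam, gC, gr, gnr = 0.3, 0.5, 0.02, 0.1
DC, DQE = 0.1, 0.2
muC, muQE = 3.0, 1.0

# Basis |n_c, n_qe>, truncated at two excitations in total.
basis = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1)]
idx = {s: i for i, s in enumerate(basis)}
dim = len(basis)

def op(matels):
    M = np.zeros((dim, dim), complex)
    for (bra, ket), v in matels.items():
        M[idx[bra], idx[ket]] = v
    return M

a = op({((0, 0), (1, 0)): 1, ((1, 0), (2, 0)): np.sqrt(2), ((0, 1), (1, 1)): 1})
sm = op({((0, 0), (0, 1)): 1, ((1, 0), (1, 1)): 1})
ad, sp = a.conj().T, sm.conj().T

H0 = ((DC - 0.5j * gC) * (ad @ a)
      + (DQE - 0.5j * (gr + gnr)) * (sp @ sm)
      + lam * (ad @ sm + a @ sp))
V = muC * (ad + a) + muQE * (sp + sm)
Heff = H0 + V                                   # drive included, as in U(t)

# Perturbative steady state up to second order, manifold by manifold.
b1, b2 = slice(1, 3), slice(3, 5)
psi1 = np.zeros(dim, complex)
psi1[b1] = -np.linalg.solve(H0[b1, b1], V[b1, 0])
psi2 = np.zeros(dim, complex)
psi2[b2] = -np.linalg.solve(H0[b2, b2], (V @ psi1)[b2])
psi = np.zeros(dim, complex); psi[0] = 1.0
psi = psi + psi1 + psi2

Ep = muC * a + muQE * sm                        # detected field operator
I = np.linalg.norm(Ep @ psi1) ** 2

def g2(tau):
    """g2(tau) from the non-unitary propagator U = exp(-i Heff tau)."""
    w, P = np.linalg.eig(-1j * Heff)
    U = P @ np.diag(np.exp(w * tau)) @ np.linalg.inv(P)
    phi = U @ (Ep @ psi)                        # state right after a detection
    return np.linalg.norm(Ep @ phi) ** 2 / I ** 2
```

At $\tau = 0$ this construction reduces to the direct zero-delay expression $\|\boldsymbol{E}_{\rm D}^+\boldsymbol{E}_{\rm D}^+|\psi_2\rangle\|^2/I^2$, which provides a useful sanity check of an implementation.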
\section{Results and discussion} \label{sec:Results}
In the following, the theoretical framework presented in the
previous section is applied to study the coherence properties of
the light emitted by a collection of $N$ quantum emitters coupled
to either a nano- or a microcavity, with the distinction
introduced before. Our attention is focused on resonant coupling,
setting $\omega_{\mbox{\tiny{C}}}, \omega_{\mbox{\tiny{QE}}} \equiv \omega_0 = 3$ eV in all
cases (except in the section where the effects of spectral
detuning are explored). Beyond that, we have to
consider a specific set of parameters for each system. The
dissipation rate associated with the microcavity is taken to be
$\gamma_{\mbox{\tiny{C}}} = 66 \ \mu$eV (16 GHz)~\cite{VuckovicNat2008}, which
corresponds to the spontaneous decay rate of a dipole moment
$\mu_{\mbox{\tiny{C}}} = 3.1$ e$\cdot$nm \cite{NovotnyBook}. In contrast,
substantial non-radiative losses are a distinctive feature of
plasmonic cavities \cite{MaierBook, SavastaACS2010}, hence we set
$\gamma_{\mbox{\tiny{C}}} = 0.1$ eV (24 THz) for the nanocavity. This value
also incorporates the radiative losses corresponding to a dipole
with $\mu_{\mbox{\tiny{C}}} = 19$ e$\cdot$nm, which mimics the cavity emission
\cite{GlezBallestero2015}. Regarding the emitters, we consider
those typically used in the experimental setups involving each
cavity. First, the quantum emitters usually located inside
dielectric cavities are characterized by negligible
non-radiative losses, thus $\gamma_{\mbox{\tiny{QE}}}^{\nr} = 0$. In
particular, we choose semiconductor quantum dots with dipole
moment $\mu_{\mbox{\tiny{QE}}} = 0.25$ e$\cdot$nm, which corresponds to
a radiative decay rate $\gamma_{\mbox{\tiny{QE}}}^{\rr} = 0.41 \ \mu$eV (0.10 GHz)
\cite{VuckovicNat2008}. Conversely, the quantum emitters
interacting with plasmonic nanocavities are considered to be
organic molecules with $\mu_{\mbox{\tiny{QE}}} = 1$ e$\cdot$nm, which present
a very low quantum yield. The specific rates chosen for these
emitters are $\gamma_{\mbox{\tiny{QE}}}^{\nr} = 15$ meV
(3.6 THz) and $\gamma_{\mbox{\tiny{QE}}}^{\rr} = 6 \ \mu$eV (1.5 GHz).
In our study, we assume that both open and closed cavities operate
at low temperature, which greatly diminishes the impact of
pure-dephasing processes in the dynamics of both quantum
dots~\cite{Thoma2016} and organic molecules~\cite{Rector1998}.
Accordingly, we have not included this decoherence mechanism in
our theoretical model. Moreover, from here on and for simplicity,
we consider that the external laser field $\boldsymbol{E}_{\rm L}$
is parallel to the dipole moments of both the cavity and the
quantum emitters. Note that this turns out to be the optimal configuration
to enter the strong coupling regime.
\subsection{Intensity and coherence} \label{subsec:R_IntensityCorrelation}
The features of the light emerging from these two configurations
are studied first as a function of the number of emitters. The
intensity and the zero-delay second-order correlation function for
the steady state are computed from \autoref{eq:ComputingIandG2}.
Although the numerical results displayed below depend on the
particular values of the parameters, the qualitative picture we
present is not bound to the specific configuration---on the
contrary, it remains the same when considering a wide range of
parameters describing realistic systems.
\subsubsection{Plasmonic nanocavities}
We consider first the nanocavity, where the light reaching the
detector comes from both the quantum emitters and the cavity
itself. \autoref{fig:ms1_intg2} shows the scattering intensity $I$
(top row) and the zero-delay second-order correlation function
$g^{(2)}(0)$ (bottom row) for three different collections of
emitters: $N = 1$ (a), 5 (b), and 25 (c). Both magnitudes are
plotted as a function of the laser detuning $\omega_{\mbox{\tiny{L}}} -
\omega_0$ and the coupling strength $\lambda$, which is expressed
in units of the cavity decay rate $\gamma_{\mbox{\tiny{C}}}$.
\begin{figure*}[htpb]
\centering
\includegraphics[width=\linewidth]{B_ms1_SPQE.pdf}
\caption{Scattering intensity $I$ (top row) and correlation function $g^{(2)}(0)$ (bottom row) versus laser detuning $\omega_{\mbox{\tiny{L}}} - \omega_0$ and coupling strength $\lambda$ (in units of the cavity decay rate $\gamma_{\mbox{\tiny{C}}}$) for a system of $N = 1$ (a), 5 (b) and 25 (c) quantum emitters coupled to a plasmonic nanocavity. In these panels, dotted (dashed) lines plot the polariton frequencies (half-frequencies) in the one-excitation (two-excitation) manifold. Insets zoom into the low coupling region. Magenta marks indicate points whose $\gtau$ is plotted in \autoref{fig:tau} (a$_1$).}
\label{fig:ms1_intg2}
\end{figure*}
In all intensity maps two scattering maxima are observed, which
correspond to the polariton energies within the strong coupling
regime---that is, the eigenenergies of the dressed states in the
one-excitation manifold, the first rung of the so-called {\it
Tavis-Cummings ladder}. In this way, for each value of the
coupling strength we find two intensity peaks at laser frequencies
that match the lower (LP) and the upper (UP) polariton energies.
These dispersion curves are plotted in dotted lines overlapping
the maps, so that the correspondence is easily observed. Note that
these two branches of maxima are also apparent within the weak
coupling regime, where their presence cannot be attributed to an
energy splitting. Their origin actually lies in a Fano-like
interference, which appears when two signals with very different
linewidths interact~\cite{Bryant2006, Bryant2008, SavastaPRL2010}.
The pronounced minimum in the scattering intensity is in this case
produced by the destructive interference between the cavity and
emitter emission.
Finally, observe the asymmetry between the two intensity maxima
branches: the one corresponding to the UP is distinctly brighter.
Note that the emission coming from each polariton can be described
from either the parallel (UP) or antiparallel (LP) superposition
of the dipole moments associated with the plasmon and the bright
mode of the emitter ensemble. Since the dipole moment
corresponding to the UP is larger, its emission is more intense.
This difference in the effective dipole moment between LP and UP
grows as $N$ increases, as a consequence of the greater collective
dipole moment of the ensemble. Hence the contrast between branches
becomes more pronounced for larger ensemble sizes.
We can gain a better understanding from the analytical results
obtained through the perturbative approach described above. The
expression for the scattered intensity reads:
\begin{equation}
I \propto \left|
\frac{ \tilde{\Delta}_{\mbox{\tiny{C}}} \mu_{\mbox{\tiny{QE}}}^2 + \dtQE \mu_{\mbox{\tiny{C}}}^2/N - 2 \lambda \mu_{\mbox{\tiny{C}}} \mu_{\mbox{\tiny{QE}}}}{\dtC \dtQE /N - \lambda^2}
\right|^2 \ ,
\label{eq:anaOC_intensity}
\end{equation}
where the detunings of the laser frequency from both the cavity,
$\Delta_{\mbox{\tiny{C}}}$, and the emitters, $\Delta_{\mbox{\tiny{QE}}}$, are redefined to
introduce the associated losses as $\dtC = \Delta_{\mbox{\tiny{C}}} - \mbox{i}
\gamma_{\mbox{\tiny{C}}}/2$ and $\dtQE = \Delta_{\mbox{\tiny{QE}}}-\mbox{i}(\gamma_{\mbox{\tiny{QE}}}^{\nr} +
N \gamma_{\mbox{\tiny{QE}}}^{\rr})/2$. This expression confirms the origin of
the intensity maxima: the condition for which the denominator
vanishes, $\lambda^2 = \tilde \Delta_{\mbox{\tiny{C}}} \tilde \Delta_{\mbox{\tiny{QE}}}/N $,
gives us the dispersion of the LP and UP. Notice
the $\sqrt{N}$ dependence, the characteristic scaling of collective
coupling. In addition, this expression sheds light on the
asymmetry in the branches: the intensity behaves as $I \propto (1
\mp \sqrt{N} \mu_{\mbox{\tiny{QE}}}/\mu_{\mbox{\tiny{C}}})^2$ for the LP (upper sign) and UP
(lower sign) when the losses from both cavity and emitters are
neglected.
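To illustrate the $\sqrt{N}$ scaling concretely (a minimal sketch of our own; lossless and fully resonant, with arbitrary values for $N$ and $\lambda$), the one-excitation block of the Hamiltonian can be diagonalized directly; its eigenvalues reproduce the LP/UP detunings $\mp\sqrt{N}\lambda$ obtained from the vanishing-denominator condition:

```python
import numpy as np

# Lossless, resonant case (Delta_C = Delta_QE = 0); arbitrary example values.
N, lam = 25, 0.2

# One-excitation block: |1 photon> and |1 bright excitation> couple with
# collective strength sqrt(N) * lam.
H1 = np.array([[0.0, np.sqrt(N) * lam],
               [np.sqrt(N) * lam, 0.0]])
lp, up = np.linalg.eigvalsh(H1)    # LP and UP detunings from omega_0
```

For these example values, the splitting is $2\sqrt{N}\lambda$, i.e., five times larger than the single-emitter value $2\lambda$.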
The bottom row of \autoref{fig:ms1_intg2} clearly shows areas of
super-Poissonian (yellow colored) as well as sub-Poissonian (blue
colored) statistics for all ensemble sizes. Focusing our
attention first on the single-emitter case ($N=1$), we distinguish
a main super-Poissonian area located between the LP and UP
energies (again depicted as dotted lines) for all coupling values.
Close to these polariton frequencies, but still far from the
two-excitation eigenenergies (depicted as dashed lines), regions
of sub-Poissonian light are found, which become more pronounced as the
coupling strengthens. These correspond to the well-known {\it
photon blockade} effect, where the presence of an excitation in
the system prevents the absorption of a second photon at certain
frequencies due to the anharmonicity of the energy ladder. Apart
from these three stripes, we find another area of sub-Poissonian
emission that is enlarged in the corresponding inset. It lies
around the resonant frequency $\omega_{\mbox{\tiny{L}}} = \omega_{0}$. The
mechanism behind it was addressed theoretically in the context of
dielectric microcavities~\cite{Savona2010}, the so-called {\it
unconventional antibunching}. Here, we employ the term {\it
interference-induced correlations}, since it is the destructive
interference among possible decay paths that produces the
suppression of two-photon processes and hence the drop of $\gz$
below one~\cite{Ciuti2011, Ciuti2013}. In the following section,
these two different types of sub-Poissonian light are discussed in
further detail.
The statistical features observed for single emitters are also
present for larger $N$. As we already pointed out in
Ref.~\cite{Rocio2017}, photon correlations arising at the
single-emitter level remain, and can even be enhanced as the
ensemble size increases. This is observed in the panels
corresponding to $N=5$ and $N=25$ in \autoref{fig:ms1_intg2}. The
area of bunched light remains between the one-excitation
eigenenergies, although it tends to approach the LP branch when
the number of emitters increases. This tilt is also observed for
the interference-induced correlation area: the region of negative
correlations shifts towards the LP energy, while values for the
function $\gz$ below one are still achieved within the same
coupling range as for the single-emitter case. Note that by
further increasing the number of emitters, the system eventually bosonizes
(that is, it yields $\gz=1$). The quantum character of
the emitted light is then lost. Focusing now on the region
associated with the photon blockade effect, we observe that the
minimum following the UP branch is deeper than the LP one.
Nevertheless, it is apparent how both fade for larger sizes of the
emitter ensemble. Indeed, for $N=25$ there is a wide range of
coupling strengths where this effect is not observable. In
contrast, for moderate values of the coupling the dip
corresponding to interference effects is not only present, but
becomes quite pronounced.
From our perturbative approach we can also obtain the analytical
expression for the correlation function $g^{(2)}(0)$ corresponding
to an open nanocavity:
\begin{widetext}
\begin{equation}
\begin{split}
g^{(2)}(0) = & \left| 1 - \frac{1}{N}
\left( \frac{\dtC \mu_{\mbox{\tiny{QE}}} - \lambda \mu_{\mbox{\tiny{C}}}}
{ \dtC \mu_{\mbox{\tiny{QE}}}^2 + \dtQE \mu_{\mbox{\tiny{C}}}^2/N - 2 \lambda \mu_{\mbox{\tiny{C}}} \mu_{\mbox{\tiny{QE}}}}\right)^2
\right. \\
& \left.
\frac{( \dtQE + \mbox{i} N \gamma_{\mbox{\tiny{QE}}}^{\rr}/2)[\dtC \dtQE \mu_{\mbox{\tiny{QE}}}^2 +(\dtC \mu_{\mbox{\tiny{QE}}} - \lambda \mu_{\mbox{\tiny{C}}} )^2 - N \lambda^2 \mu_{\mbox{\tiny{QE}}}^2]}
{( \dtQE+\mbox{i} \gamma_{\mbox{\tiny{QE}}}^{\rr}/2)(\dtC^2 + \dtC \dtQE -N \lambda^2) - \dtC (N-1)\lambda^2}
\right|^2 \ ,
\end{split}
\label{eq:anaOC_g2}
\end{equation}
\end{widetext}
where, again, we have made use of the redefined detunings $\dtC$
and $\dtQE$. We observe that, as expected, $\gz \rightarrow 1$
when $N$ tends to infinity for a fixed value of the coupling
strength---so the expression does recover the bosonization limit.
Notice also that the denominator of the first term coincides with
the numerator of the analytical expression for the intensity
(\autoref{eq:anaOC_intensity}), and the vanishing condition for
the denominator of the second term yields the polaritons of the
second-rung of the Tavis-Cummings ladder.
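This bosonization limit can be verified numerically by transcribing the expression above into code (our own sketch; the parameter values are arbitrary, with the laser at resonance):

```python
import numpy as np

def g2_open(N, lam=0.3, gC=0.5, gr=0.02, gnr=0.1, DC=0.0, DQE=0.0,
            muC=3.0, muQE=1.0):
    """Zero-delay g2 for the open (nano)cavity, transcribed from the
    analytical expression in the text; all defaults are placeholders."""
    dtC = DC - 0.5j * gC
    dtQE = DQE - 0.5j * (gnr + N * gr)
    A = (dtC * muQE - lam * muC) / (dtC * muQE**2 + dtQE * muC**2 / N
                                    - 2 * lam * muC * muQE)
    num = (dtQE + 0.5j * N * gr) * (dtC * dtQE * muQE**2
                                    + (dtC * muQE - lam * muC)**2
                                    - N * lam**2 * muQE**2)
    den = ((dtQE + 0.5j * gr) * (dtC**2 + dtC * dtQE - N * lam**2)
           - dtC * (N - 1) * lam**2)
    return abs(1 - A**2 * num / (N * den)) ** 2

g2_small = g2_open(10)
g2_large = g2_open(100000)
```

The deviation of $\gz$ from unity shrinks rapidly with the ensemble size, in line with the $1/N$ prefactor of the correction term.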
Considering the same cavity and emitters as in the plasmonic case,
we can explore the changes introduced when quantum emitters are
not directly pumped by the laser and only the radiation coming
from the cavity is registered at the detector. This mimics the
setup of a closed microcavity for the parameter values distinctive
of a plasmonic nanocavity. Results for the intensity $I$ and the
correlation function $g^{(2)} (0)$ for different values of the
coupling strength $\lambda$ are shown in \autoref{fig:ms1_comp}
for these two situations: with (a) and without (b) considering
both the pumping and the emission associated with the quantum
emitters. To make the comparison, we present the particular case
of a collection of $N=5$ quantum emitters. Therefore, the lines
appearing in the left-hand column of \autoref{fig:ms1_comp} are
just cuts of the maps (b$_1$) and (b$_2$) of
\autoref{fig:ms1_intg2} at four particular values of the coupling
strength.
\begin{figure*}[ht]
\centering
\includegraphics[width=15cm]{B_ms1_compSPQESP.pdf}
\caption{Scattering intensity $I$ (top row) and correlation function $g^{(2)}(0)$ (bottom row) versus laser detuning $\omega_{\mbox{\tiny{L}}} - \omega_0$ for a system of $N = 5$ quantum emitters coupled to a plasmonic nanocavity for various coupling strengths $\lambda$ with (a) and without (b) considering the pumping to and the emission from the quantum emitters.}
\label{fig:ms1_comp}
\end{figure*}
First, we notice that the asymmetry in the two intensity peaks is
removed when considering only the emission from the cavity. As
commented before, for an open nanocavity the effective dipole
moment of the LP and the UP are respectively the parallel and
antiparallel superpositions of those of the cavity and the quantum
emitters. This makes the emission of the UP brighter, and it also
introduces a noticeable dependence on the number of emitters. In
contrast, in this new configuration both polaritons radiate
with the same associated dipole moment (corresponding to the
cavity, which is the same regardless of the ensemble size), and the
associated emission is thus identical. Notice also that the
positions of the intensity peaks are the same for open and closed
configurations, because the polariton energies do not change.
The symmetry observed in the intensity patterns is also present in
the correlation function $g^{(2)}(0)$. Apart from this difference,
the main features in the statistics are retained from the open case,
namely: first, around zero laser detuning
($\omega_{\mbox{\tiny{L}}} = \omega_0$) there exists a dip for reduced
coupling strength, whereas a maximum develops as the
interaction increases; and second, for frequencies near the
one-excitation polaritons, the photon blockade effect is
observable. From these cuts in \autoref{fig:ms1_comp}(a$_2$), we
confirm that the minimum following the UP branch is the deepest.
\subsubsection{Dielectric microcavities}
Now we consider the typical configuration of dielectric
microcavities, where only the cavity mode is pumped and direct
emission from the quantum emitters does not take place (closed
configuration). A study similar to that of the previous section is carried
out, determining the intensity $I$ and the zero-delay second-order
correlation function $g^{(2)} (0)$. The results are shown in
\autoref{fig:ms2_intg2} for the same three cases: $N = 1$ (a), 5
(b), and 25 (c) quantum emitters placed inside the cavity.
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{B_ms2_SP.pdf}
\caption{Scattering intensity $I$ (top row) and correlation function $g^{(2)}(0)$ (bottom row) versus laser detuning $\omega_{\mbox{\tiny{L}}} - \omega_0$ and coupling strength $\lambda$ (in units of the cavity decay rate $\gamma_{\mbox{\tiny{C}}}$) for a system of $N = 1$ (a), 5 (b) and 25 (c) quantum emitters coupled to a dielectric microcavity. In these panels, dotted (dashed) lines plot the polariton frequencies (half-frequencies) in the one-excitation (two-excitation) manifold. Insets zoom into the low coupling region. Magenta marks indicate points whose $\gtau$ is plotted in \autoref{fig:tau} (b$_1$).}
\label{fig:ms2_intg2}
\end{figure*}
The intensity panels reveal again the presence of the two
polaritons when entering the strong coupling regime. The energies
corresponding to the dressed states in the one-excitation manifold
are plotted in dotted lines, and they overlap the intensity peaks.
Nevertheless, in contrast to the nanocavity configuration, these
two intensity maxima are symmetric and barely vary their height as
the number of emitters increases. As we have commented in the
previous section, this is due to the fact that only the emission
from the cavity is registered, so the effective radiating dipole
moment is always the same.
This underlying symmetry is also revealed in the analytical
expression of the intensity, computed from the perturbative
approach:
\begin{equation}
I \propto \left|
\frac{\dtQE \mu_{\mbox{\tiny{C}}}^2 /N}{\dtC \dtQE /N - \lambda^2}
\right|^2 \ .
\end{equation}
This expression can be reproduced from that corresponding to nanocavities, \autoref{eq:anaOC_intensity}, just by taking the limit $\mu_{\mbox{\tiny{QE}}} \rightarrow 0$. We observe that the denominator remains unchanged, so its vanishing condition gives us again the energy dispersion of the polaritons and hence the position of the intensity maxima.
The results for the correlation function $\gz$ depicted in the
bottom row of \autoref{fig:ms2_intg2} show a similar pattern to
the ones found for nanocavities (\autoref{fig:ms1_intg2}),
although incorporating the symmetry already expected. Around the
zero laser detuning ($\omega_{\mbox{\tiny{L}}} = \omega_0$) and for
intermediate (or large) coupling strengths we find
super-Poissonian statistics (yellow colored). These panels also
show the two types of sub-Poissonian emission (blue colored)
previously observed: the one associated with the phenomenon of photon
blockade, as well as that related to destructive interference. The
positions of the eigenenergies corresponding to both the one- and
two-excitation manifolds are plotted in dotted and dashed lines,
respectively, overlapping the correlation maps. The frequencies
where the photon blockade effect occurs are easily identified next
to the dotted lines. Nevertheless, it is in the other area of
sub-Poissonian emission---the one associated with interference
effects---where the main difference between open and closed
cavities appears. Apart from the spectral symmetry already
discussed, the development of two dips instead of a single one is
the most apparent feature. For lower values of the coupling
strength, we find $\gz=1$ at zero detuning, while on both sides of
this frequency, a window with sub-Poissonian emission is visible.
The presence of this double dip pattern disappears when
introducing non-radiative losses associated with the quantum
emitters.
The evolution of correlations as the number of emitters increases
differs from the open nanocavity case. Apart from the fact that
symmetry modifies the laser frequencies at which the different
regions are achieved (for a specific value of the coupling
strength), the main variation concerns the sub-Poissonian emission
caused by destructive interference. These areas are enlarged in
the insets of \autoref{fig:ms2_intg2}. As the ensemble size
increases, the parameter ranges in which we find $\gz <1$ clearly
widen. Therefore, it is possible to obtain antibunched emission
for a specific coupling strength just by increasing the number of
emitters. Beyond a particular $N$, the system tends to reach the
bosonization limit, where $\gz = 1$. The onset of this regime
depends on the coupling strength between cavity and emitters and,
as seen in \autoref{fig:ms2_intg2}(c$_2$), for $N=25$ emitters we
still find significant negative correlations for a wide interval
of coupling values. Note that the photon blockade region does not
persist as long, practically disappearing already for a few
emitters within this coupling range.
There exists a major aspect, not mentioned before, that should be
highlighted: the range of laser detunings at which this
non-classical behaviour is found is of the order of meV. Notice
that the energy scale in \autoref{fig:ms2_intg2} differs by three
orders of magnitude from the one corresponding to nanocavities,
\autoref{fig:ms1_intg2}. Therefore, the spectral robustness and
accessibility of the antibunched regions are significantly
different in open and closed cavities, especially in the case of
interference-induced negative correlations. We anticipate that the
spectrally broad (narrow) nature of photon correlations in
plasmonic nanocavities (dielectric microcavities) implies a
faster (slower) temporal evolution of $\gtau$.
Finally, to gain insight into the coherence properties discussed
above, we present the analytical expression for the correlation
function $g^{(2)}(0)$:
\begin{widetext}
\begin{equation}
g^{(2)}(0) = \left| 1 - \frac{1}{N}
\left( \frac{\lambda}
{\dtQE / N}\right) ^2
\frac{ ( \dtQE + \mbox{i} N \gamma_{\mbox{\tiny{QE}}}^{\rr}/2) \lambda^2}
{( \dtQE+\mbox{i} \gamma_{\mbox{\tiny{QE}}}^{\rr}/2)(\dtC^2 + \dtC \dtQE -N \lambda^2) -\lambda^2 \dtC (N-1)}
\right|^2 \ ,
\end{equation}
\end{widetext}
which can also be obtained by taking $\mu_{\mbox{\tiny{QE}}} \rightarrow 0$ in
\autoref{eq:anaOC_g2}. Again, this expression yields $\gz=1$ when
$N \rightarrow \infty$, so the classical behaviour is recovered in
this limit. Note as well that we find $\gz = 1$ at the resonant
frequency $\omega_{\mbox{\tiny{L}}} = \omega_0$ when all losses are neglected.
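As a quick numerical sanity check, the expression above can be evaluated directly. The sketch below is illustrative only: we assume, as a reading of the notation, that $\dtQE$ and $\dtC$ denote the (possibly complex) laser detunings from the emitter and cavity resonances and that $\gamma_{\mbox{\tiny{QE}}}^{\rr}$ is the radiative emitter decay rate; parameter values are arbitrary.

```python
import numpy as np

def g2_closed(N, lam, delta_qe, delta_c, gamma_qe_r):
    """Evaluate the closed-cavity g2(0) expression term by term, as
    written in the text.  Symbol meanings are our assumption:
    delta_qe, delta_c = laser detunings from emitter and cavity
    resonances; gamma_qe_r = radiative emitter decay rate."""
    pref = (1.0 / N) * (lam / (delta_qe / N)) ** 2
    num = (delta_qe + 1j * N * gamma_qe_r / 2) * lam ** 2
    den = ((delta_qe + 1j * gamma_qe_r / 2)
           * (delta_c ** 2 + delta_c * delta_qe - N * lam ** 2)
           - lam ** 2 * delta_c * (N - 1))
    return abs(1.0 - pref * num / den) ** 2

# Example: N = 5 emitters, generic parameters in arbitrary units
val = g2_closed(N=5, lam=0.1, delta_qe=0.3, delta_c=0.2, gamma_qe_r=0.05)
```

Being the squared modulus of a complex amplitude, the result is guaranteed non-negative, and scanning `delta_qe` over a frequency window reproduces the dip structure discussed above for whatever parameter set is chosen.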
\subsection{Two different mechanisms leading to sub-Poissonian light} \label{subsec:R_TwoMechanisms}
Studying photon correlations in coupled systems, we have
identified two types of sub-Poissonian emission appearing in both
nano- and microcavities. In order to shed light on their
different nature, we proceed in this section to examine their
emergence in more detail. In \autoref{fig:pop_blockade}, the
focus is on the photon blockade effect---which takes place close
to the polariton energies of the one-excitation manifold---,
whereas in \autoref{fig:pop_interference} we study the
sub-Poissonian emission associated with destructive
interference---which appears for moderate coupling strength in
the region of zero-detuning. The population, the intensity $I$, and the
correlation functions $G^{(2)}(0)$ and $g^{(2)}(0)$ are plotted as
a function of the laser detuning $\omega_{\mbox{\tiny{L}}} - \omega_0$ for
nano- (left-hand side panels) and micro- (right-hand side panels)
cavities at specific coupling strengths to explore these
processes.
\subsubsection{Photon blockade}
The sub-Poissonian emission due to the {\it photon blockade
effect} originates from the anharmonicity of the Tavis-Cummings
ladder, as we have commented before. When the laser has an energy
close to that of one of the polaritons at the one-excitation
manifold, the population of this particular hybrid state
increases. In panels (a$_1$) and (b$_1$) of
\autoref{fig:pop_blockade}, populations are plotted in the basis
of the dressed states. There, the continuous coloured lines,
corresponding to the population of the UP (dark pink) and the LP
(light pink), experience an increase when the laser frequency is
tuned to be in the vicinity of the corresponding polariton
frequency (continuous vertical grey lines). Nevertheless, at
these specific energies, the laser is out of resonance with the
transition promoting the system from the one- to the two-excitation
manifold (dashed vertical grey lines depict these energy differences). This
diminishes the probability of emission of two simultaneous photons,
leading to sub-Poissonian statistics.
These panels also show, in dashed coloured lines, the populations
of the states belonging to the two-excitation manifold: two LPs
(light pink), one LP and one UP (very light grey) and two UPs
(dark pink). Note that the maxima of these curves are not located
exactly at the polariton frequencies. They are slightly shifted as
a consequence of the energy differences between the one- and the
two-excitation manifolds.
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{C_pop_blockade.pdf}
\caption{Population---in the polariton basis (first row) and in the cavity-emitters basis (second row)---, intensity $I$, and correlation functions $G^{(2)}(0)$ and $g^{(2)}(0)$ versus laser detuning $\omega_{\mbox{\tiny{L}}} - \omega_0$ for a system of $N=5$ quantum emitters coupled to a nano- (a) and a micro- (b) cavity at coupling strength $\lambda / \gamma_{\mbox{\tiny{C}}} = 2$ (for which the photon blockade effect appears). In these panels, continuous vertical grey lines indicate the polariton frequencies in the one-excitation manifold, while the dashed ones represent the energy differences between the state of one LP and one UP (belonging to the two-excitation manifold) and the state with either one LP (left dashed line) or one UP (right dashed line).}
\label{fig:pop_blockade}
\end{figure*}
When the populations are expressed in terms of the cavity and
emitter states, panels (a$_2$) and (b$_2$) of
\autoref{fig:pop_blockade}, all curves belonging to the same
subspace (continuous or dashed lines for the one- and
two-excitation manifolds respectively) seem to converge to the
same value at the frequencies where the photon blockade phenomenon
takes place (that is, near the polariton frequencies). Apart from
that, we observe that there exist two clear minima in the
population curves corresponding to the state with one (continuous
dark blue line) and two (dashed dark blue line) excitations in the
cavity. Each of them has a replica in one of the curves depicted
in panels (a$_3$) and (b$_3$) of \autoref{fig:pop_blockade}. This
is especially visible for the nanocavities where these two minima
do not coincide. Indeed, the intensity (yellow line) and the $G^{(2)}(0)$
(ochre line) functions reproduce the form of the populations of
the states with one and two excitations in the cavity mode
respectively. The origin of this correspondence is clear for the
closed configuration (as only the emission from the cavity is
detected). For the open one, it results from the fact that the
dipole moment of the cavity is greater than the collective dipole
of the emitter ensemble. Thus, the former contributes the most to
the emitted light (for a reduced number of emitters). Note that
the intensity accounts for one-photon processes, while $G^{(2)}(0)$ stems
from two-photon processes instead (\autoref{eq:ComputingIandG2}).
The intensity plots reflect the presence of the polariton energies
as well---each scattering peak coincides with a maximum in the
polariton population and, naturally, with the position of the
polariton energy. The intensity minima are located
between the two polariton energies, far from resonance. The fact
that the maxima in $G^{(2)}(0)$ are shifted from those in the
scattered intensity gives rise to the characteristic shape in the
normalized second-order correlation function, shown in panels
(a$_4$) and (b$_4$) of \autoref{fig:pop_blockade}. Values of the
$\gz$ function below one are located close to the polariton
energies (vertical continuous grey lines), whereas there appear
two relative maxima at laser frequencies that match energy
differences between the one- and the two-excitation manifold
(vertical dashed grey lines). For the nanocavity, the different
positions of the $I$ and $G^{(2)}(0)$ minima in (a$_3$) lead to
maxima and a minimum in $\gz$ near resonance, although the emission
is always super-Poissonian in this frequency window.
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{C_pop_interference.pdf}
\caption{Population (in the cavity-emitters basis), intensity $I$, and correlation functions $G^{(2)}(0)$ and $g^{(2)}(0)$ versus laser detuning $\omega_{\mbox{\tiny{L}}} - \omega_0$ for a system of $N=5$ quantum emitters coupled to a nano- (a) and a micro- (b) cavity at coupling strengths $\lambda / \gamma_{\mbox{\tiny{C}}} = 0.2$ and $0.1$ respectively (for which the interference-induced correlations appear).}
\label{fig:pop_interference}
\end{figure*}
\subsubsection{Interference-induced correlations}
The decrease of $\gz$ below one is referred to as
interference-induced correlations when its origin cannot be
explained in terms of the energy levels, as done for the photon
blockade effect. Instead, it is produced by
destructive interference between different available decay
paths~\cite{Savona2010, Ciuti2011, Ciuti2013}.
The population curves, panels (a$_1$) and (b$_1$) in
\autoref{fig:pop_interference}, reveal that it is a decrease in
the population of the state corresponding to two cavity-mode
excitations (dashed dark blue lines) that produces the minimum in
the $G^{(2)} (0)$ function. It is then transferred to the
normalized $\gz$ and, consequently, there appears sub-Poissonian
statistics in the vicinity of this laser frequency. This
correspondence between cavity population and correlations is
observed in both types of cavities, although there exists a
difference between them: whereas only one dip takes place in
nanocavities, two of them emerge in the case of microcavities.
Notice that this behaviour differs from the photon blockade
mechanism, where a related fall in the population of the state
with two cavity-mode excitations is not observed (on the contrary,
as previously pointed out, all populations seem to converge to the
same value).
The intensity and correlation function $G^{(2)}(0)$ are depicted in
panels (a$_2$) and (b$_2$) in \autoref{fig:pop_interference}. As
in the previous case, these curves clearly follow the shape of the
populations associated with the states corresponding to one
(continuous dark blue lines) and two (dashed dark blue lines)
excitations in the cavity mode, respectively. The different
position of the minima for these two magnitudes is again
responsible for the shape of the $\gz$ function. Nevertheless, now
values below one are reached (the interference-induced photon
correlations). In \autoref{fig:pop_blockade}, the minimum in the
$G^{(2)}$ did not lead to sub-Poissonian statistics, although it
did correspond to a minimum in $\gz$. Note that this fall was not
so abrupt when compared with the intensity dip.
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{D_det.pdf}
\caption{Correlation function $g^{(2)}(0)$ versus laser detuning $\omega_{\mbox{\tiny{L}}} - \omega_{\mbox{\tiny{C}}}$ and coupling strength $\lambda$ (in units of the cavity decay rate $\gamma_{\mbox{\tiny{C}}}$) for a system of $N=5$ quantum emitters coupled to a nano- (top row) and a micro- (bottom row) cavity for various values of the detuning between cavity and emitters, $\Theta$ = 1 $\gamma_{\mbox{\tiny{C}}}$ (a), 2 $\gamma_{\mbox{\tiny{C}}}$ (b), and 3 $\gamma_{\mbox{\tiny{C}}}$ (c). In these panels, dotted (dashed) lines plot the polariton frequencies (half-frequencies) in the one-excitation (two-excitation) manifold. Horizontal pink lines and magenta marks in (c) panels indicate points whose $\gz$ is plotted in \autoref{fig:tauDet}.}
\label{fig:det}
\end{figure*}
\subsection{Effect of detuning between cavity and emitters frequencies on the correlation function $\boldsymbol{\gz}$} \label{subsec:R_Detuning}
By introducing a detuning between the cavity and emitter
frequencies, the parameter range in which sub-Poissonian statistics
emerges can be enlarged---the spectral window becomes wider, and
stronger couplings are required~\cite{Radulaski2017}. This is the
tendency we observe in \autoref{fig:det}, where the correlation
function $\gz$ is plotted versus the laser detuning $\omega_{\mbox{\tiny{L}}}
- \omega_{\mbox{\tiny{C}}}$ and the coupling strength $\lambda$ for various
values of the detuning $\Theta \equiv \omega_{\mbox{\tiny{QE}}} - \omega_{\mbox{\tiny{C}}}$.
There, the emitter frequencies $\omega_{\mbox{\tiny{QE}}}$ vary while the
cavity mode resonance is always fixed to be $\omega_{\mbox{\tiny{C}}} =
\omega_{0} \equiv 3$ eV. The system considered comprises $N=5$
quantum emitters coupled to either a nano- (top row) or a micro-
(bottom row) cavity, hence these would correspond to panels
(b$_2$) from \autoref{fig:ms1_intg2} and \autoref{fig:ms2_intg2},
respectively, if no detuning were present (that is, $\Theta = 0$).
\autoref{fig:det} shows that, indeed, for both types of
cavities the region with interference-induced correlations spreads
as the difference in energy between cavity and emitters increases,
although this effect is more pronounced in microcavities. Via
detuning, the range of laser frequencies for which sub-Poissonian
emission is attainable broadens---it extends over a frequency
window with a width of almost half the detuning. Focusing now on
the vertical axis, we observe that for a particular value of the
coupling strength, it is possible to have $\gz < 1$ near the
resonant frequency just by increasing the detuning between cavity
and emitters. Furthermore, note that the introduction of detuning
makes it possible to achieve lower values of $\gz$, thereby
improving the quantum character of the emitted light. This also
holds for the photon blockade effect following the UP---the
dressed state with the larger emitter contribution in this
case---, which deepens. For better visualization, the energies
corresponding to the eigenvalues of the dressed states are plotted
in dotted (one-excitation manifold) and dashed (two-excitation
manifold) lines in all panels. For both nano- and microcavities,
the photon blockade effect strengthens near the UP, whereas it
fades at the LP. For instance, when $\Theta = 3 \gamma_{\mbox{\tiny{C}}}$ the
photon blockade effect following the lower branch disappears---no
sub-Poissonian emission takes place in its surroundings.
Note that reversing the sign of the detuning exchanges the roles
of the UP and LP.
\begin{figure*}[ht]
\centering
\includegraphics[width=15cm]{E_tau.pdf}
\caption{Correlation function $\gtau$ for $N$ quantum emitters interacting with either a nano- (a) or a micro- (b) cavity at resonance ($\Theta=0$). In top panels, $\gtau$ is plotted versus the time delay $\tau$ (in units of the cavity time $1/\gamma_{\mbox{\tiny{C}}}$) for three different ensemble sizes ($N =1$, 5 and 25). Two different configurations (which correspond to the points indicated in \autoref{fig:ms1_intg2} and \autoref{fig:ms2_intg2}) are selected. Continuous lines are used for the antibunching related to the photon blockade effect while dotted lines are used for interference-induced correlations. In bottom panels, $\gtau$ is plotted versus laser detuning $\omega_{\mbox{\tiny{L}}} - \omega_0$ and time delay $\tau$ (also in units of $1/\gamma_{\mbox{\tiny{C}}}$) for an ensemble of $N=5$ emitters and selecting two different coupling strengths in each case: $\lambda = 2 \gamma_{\mbox{\tiny{C}}}$ (a$_2$) and $0.2 \gamma_{\mbox{\tiny{C}}}$ (a$_3$) for plasmonic nanocavities and $\lambda = 2 \gamma_{\mbox{\tiny{C}}}$ (b$_2$) and $0.1 \gamma_{\mbox{\tiny{C}}}$ (b$_3$) for dielectric microcavities. In these bottom panels, vertical green lines correspond to the curves for $N=5$ depicted at the top. }
\label{fig:tau}
\end{figure*}
Regarding the region with interference-induced correlations, there
is a particularity of the microcavity worth mentioning:
now we observe only one prevailing dip, instead of two (as was
the case at zero detuning, $\Theta = 0$). As a consequence of the
loss of symmetry, the dip closer to the emitter frequency becomes
narrower, and the other one widens, when the detuning increases.
This makes the patterns observed for $\gz$ at a specific detuning
$\Theta$ quite similar for nano- and microcavities. Nevertheless,
the energy range is very different---note that the coupling
strength $\lambda$ is given in units of
the decay rate $\gamma_{\mbox{\tiny{C}}}$, and the values corresponding to plasmonic
($\gamma_{\mbox{\tiny{C}}} \sim 0.1$ eV) and dielectric ($\gamma_{\mbox{\tiny{C}}} \sim
0.1$ meV) cavities differ by around three orders of magnitude (as do
the laser detunings).
\subsection{Dependence of the correlation function $\boldsymbol{\gtau}$ on the time delay $\tau$} \label{subsec:R_Tau}
In this section, we study the behaviour of the second-order
correlation function $\gtau$ at non-zero time delays $\tau$ for
various configurations displaying sub-Poissonian statistics ($\gz < 1$).
This allows us to resolve whether the emitted light is actually
antibunched ($\gz < \gtau$). In top panels of \autoref{fig:tau},
we plot $\gtau$ as a function of $\tau$ (in units of the cavity
lifetime $1/\gamma_{\mbox{\tiny{C}}}$) for an ensemble of $N$ quantum emitters
interacting with either a nano- (a) or a microcavity (b) when
there is no detuning between them ($\Theta = 0$). We consider
three ensemble sizes $N=1$ (yellow lines), 5 (green lines) and 25
(blue lines), and select two different configurations for each
case: one belonging to the photon blockade area (continuous lines)
and another displaying sub-Poissonian statistics due to
quantum interference effects (dotted lines). All configurations
are indicated in \autoref{fig:ms1_intg2} and
\autoref{fig:ms2_intg2} through magenta marks.
Focusing first on the continuous lines (photon blockade), we
observe that the correlation function approaches one almost
monotonically as the time interval $\tau$ increases, so we can
properly speak of photon antibunching. Nevertheless, there exist
some oscillations whose amplitude diminishes as the ensemble size
increases. Remarkably, there is practically no difference between
the behaviour for nano- and microcavities once the time scale is
normalized by the cavity decay time $1/\gamma_{\mbox{\tiny{C}}}$---in both
cases, the time evolution follows the same tendency, and the
degree of correlation reached from both effects is similar. Note
that all configurations have been chosen for a coupling strength
$\lambda/\gamma_{\mbox{\tiny{C}}} = 2$ for both types of cavities, although the
laser detuning varies in order to consider the minimum of the
$\gz$ attainable at this coupling. These two sets of curves also
highlight that the degree of coherence is quickly lost when
increasing $N$.
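The damped oscillations of $\gtau$ toward the coherent limit have a simple single-emitter analogue. As an illustration only (this is the textbook resonance-fluorescence result for one resonantly and strongly driven two-level emitter, not the $N$-emitter cavity model of this work), $g^{(2)}(\tau)$ starts at zero and performs damped Rabi oscillations around one:

```python
import numpy as np

def g2_tau_driven_tls(tau, rabi, gamma):
    # Textbook resonance-fluorescence correlation function for a
    # strongly driven two-level emitter (valid for rabi > gamma/4):
    # g2(0) = 0 (perfect antibunching), damped oscillations toward 1.
    mu = np.sqrt(rabi ** 2 - (gamma / 4.0) ** 2)
    return 1.0 - np.exp(-0.75 * gamma * tau) * (
        np.cos(mu * tau) + (3.0 * gamma / (4.0 * mu)) * np.sin(mu * tau))

tau = np.linspace(0.0, 20.0, 2001)   # delay in units of 1/gamma
g2 = g2_tau_driven_tls(tau, rabi=2.0, gamma=1.0)
```

The curve takes values above and below one before settling at the coherent limit, qualitatively mirroring the oscillatory patterns discussed in this section; the decay and oscillation scales are set by the emitter linewidth and the Rabi frequency, just as the cavity decay rate sets the time scale here.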
\begin{figure*}[ht]
\centering
\includegraphics[width=13cm]{E_tauDet.pdf}
\caption{Correlation function $g^{(2)}(\tau)$ for an ensemble of $N=5$ quantum emitters interacting with either a nano- (a) or a micro- (b) cavity when $\Theta = 3\gamma_{\mbox{\tiny{C}}}$. In top panels, $\gtau$ is plotted versus laser detuning $\omega_{\mbox{\tiny{L}}} - \omega_{\mbox{\tiny{C}}}$ and time interval $\tau$ (in units of the cavity time $1/\gamma_{\mbox{\tiny{C}}}$) for coupling strength $\lambda = 0.5 \gamma_{\mbox{\tiny{C}}}$ in (a) and $\lambda = 0.8 \gamma_{\mbox{\tiny{C}}}$ in (b). The cuts at $\tau =0$ correspond to the continuous pink lines depicted in \autoref{fig:det} (c). Bottom panels show cuts of the contour plots on top at laser detunings yielding two minima in the $\gz$ function. These specific configurations are indicated by red lines in the top panels and by red markers in \autoref{fig:det} (c).}
\label{fig:tauDet}
\end{figure*}
Dotted lines show instead the evolution in $\tau$ for
configurations displaying interference-induced correlations. We
first observe that the degree of coherence at $\tau = 0$ reached
from this effect is stronger than that associated with the photon
blockade mechanism, although the couplings are smaller: for the
nanocavity, $\lambda / \gamma_{\mbox{\tiny{C}}} = 0.2$, while for the
microcavity it takes the values $\lambda / \gamma_{\mbox{\tiny{C}}} = 0.04$,
0.1 and 0.3 for $N=1$, 5 and 25 respectively. Oscillations in the
$\tau$-evolution of the function $\gtau$ are observed as $N$
increases, although we still have antibunched light since $\gz <
\gtau$. When comparing nano- and microcavities, we observe that
oscillations in the latter are more pronounced. Moreover, note
again the difference in $\gamma_{\mbox{\tiny{C}}}$, which translates into the
fact that the temporal evolution is significantly faster in the
plasmonic nanocavity (a direct consequence of the spectrally broad
character of photon correlations in the system).
A more general picture is shown in the bottom row of
\autoref{fig:tau}, where $\gtau$ is plotted as a function of the
laser detuning $\omega_{\mbox{\tiny{L}}} - \omega_0$ and the time delay $\tau$
(again in units of the cavity time $1/\gamma_{\mbox{\tiny{C}}}$) for a
collection of $N=5$ emitters also interacting with either a nano-
(a) or a microcavity (b) for specific coupling strengths (see
figure caption). The particular values of the laser detuning
marked with vertical green lines (continuous for photon blockade
and dotted for interference-induced correlations) correspond to
the ones depicted in (a$_1$) and (b$_1$) for $N=5$. These contour
plots show that oscillatory patterns are also present for
configurations displaying super-Poissonian statistics at zero time
delay.
We have thus found that sub-Poissonian statistics is accompanied by antibunched light in the zero-detuning configurations explored in \autoref{fig:tau}.
Although $\gtau$ approaches one as the time delay increases, its
evolution is far from monotonic for interference-induced
correlations---they present an oscillatory pattern taking values
above and below one before reaching the coherent limit. This also
happens when detuning between the cavity frequency and the
emitters is introduced. An example is shown in the top row of
\autoref{fig:tauDet}, where the function $\gtau$ is plotted for a
particular coupling strength as a function of laser detuning
$\omega_{\mbox{\tiny{L}}} - \omega_{\mbox{\tiny{C}}}$, and the time delay $\tau$ (in units
of $1/ \gamma_{\mbox{\tiny{C}}}$) for $N=5$ quantum emitters interacting with
either a nano- (a) or a micro- (b) cavity. Here, we have
considered a detuning $\Theta = 3 \gamma_{\mbox{\tiny{C}}}$, so these plots
correspond to horizontal cuts in the panels of the third column of
\autoref{fig:det} (indicated by horizontal pink lines),
at $\lambda/\gamma_{\mbox{\tiny{C}}} = 0.5$ for
plasmonic nanocavities and $\lambda/\gamma_{\mbox{\tiny{C}}} = 0.8$ for
dielectric microcavities. In these
panels, for most laser detunings, the correlation function
develops an oscillatory pattern as $\tau$ increases for both sub-
and super-Poissonian statistics at $\tau=0$. Again, the close
similarity between the patterns for both cavities is remarkable
(once the delay time is expressed in units of $1/\gamma_{\mbox{\tiny{C}}}$).
We observe that there exists a significant difference in the
temporal dependence of negative correlations also once detuning
between cavity and emitters is introduced. This is evident in the
bottom panels of \autoref{fig:tauDet}, where two specific values
of $\omega_{\mbox{\tiny{L}}} - \omega_{\mbox{\tiny{C}}}$ are considered (indicated in
\autoref{fig:det} with magenta marks) in order to select
configurations that display sub-Poissonian statistics due to
interference effects (dotted line) and photon blockade (continuous
line). These plots of $\gtau$ versus the time delay correspond to
vertical cuts in panels (a$_1$) and (b$_1$), see vertical red
lines. In the case of photon blockade, $\gtau$ approaches one
monotonically as the delay increases. In contrast, quantum
interference leads to an oscillatory pattern in correlations. Both
retain a temporal evolution similar to the one obtained at $\Theta
=0$ in \autoref{fig:tau}. Note that even the temporal slope and
period of the oscillations remain the same. Thus, by detuning cavity
and emitters, the opportunity to obtain sub-Poissonian light
improves (as we have mentioned before, the parameter regions
widen) without altering qualitatively its evolution with time
delay between photon detections. Again, the phenomenology for
nano- and microcavities coincides, given that the values of laser
detuning and time delay, both normalized to the cavity losses, are
the same.
\section{Conclusions} \label{sec:Conclusions}
This work investigates the statistical properties of the light
generated by a collection of quantum emitters coupled to a single
electromagnetic mode. Theoretical computations based on an
effective Hamiltonian approach have been carried out to describe
the response of two different systems under low-intensity
coherent driving: plasmonic nanocavities and dielectric
microcavities. Special attention has been paid to exploring the
impact that the distinct open/closed character of these two types
of cavities has on the scattered light.
For both cavity configurations, sub-Poissonian emission has been
observed not only at the single-emitter level, but also for
mesoscopic ensembles involving several tens of emitters. Our
results show that there are two different mechanisms that yield
significant negative correlations in the interaction between a
purely bosonic subsystem (cavity) and a quasi-bosonic one (emitter
ensemble): photon blockade and destructive interference. The
former takes place at high coupling strengths (comparable to or
larger than the cavity decay rate), while the latter becomes
relevant for weaker cavity-emitter interactions. Despite their
distinct open/closed character and the largely different physical
parameters describing nano- and microcavities, the photon
statistics phenomenology for both systems is remarkably similar
(once normalized to the cavity losses). This fact becomes clearer
through the exploration of cavity-emitter spectral detuning, which
enlarges the parameter range yielding antibunched light, and the
temporal evolution of correlations, which reveals the slow (fast)
fading of photon blockade (interference-induced) antibunching. Our findings may serve as
guidance for the optimization of quantum optical phenomena for
specific applications through the appropriate choice of material
parameters for their implementation.
\section*{Acknowledgments}
This work has been funded by the European Research Council under
Grant Agreements ERC-2011-AdG 290981 and ERC-2016-STG-714870, the EU Seventh Framework
Programme (FP7-PEOPLE-2013-CIG-630996 and
FP7-PEOPLE-2013-CIG-618229), and the Spanish MINECO under
contracts MAT2014-53432-C5-5-R and FIS2015-64951-R, as well as
through the ``Mar\'ia de Maeztu'' programme for Units of
Excellence in R\&D (MDM-2014-0377).
Detailed knowledge of the gap function in iron-based superconductors can help to identify the mechanism of superconductivity in these materials. The recent clarification of the details of the electronic structure of FeSe to a precision of 1 meV \cite{fedorov16, Borisenko16NPh, watson2016evidence, pustovit16LTP, watson15PRB, zhang15PRB, suzuki2015momentum, ye15arXiv, nakayama14PRL, shimojima14PRB, maletz14PRB} is a necessary prerequisite for studying the superconducting gap by angle-resolved photoemission spectroscopy (ARPES) and makes this material a perfect candidate for such a detailed investigation. There are several experimental studies of the superconducting gap in FeSe and closely related compounds using different techniques: tunneling spectroscopy \cite{jiao2017superconducting, sprau2017discovery, moore2015evolution, kasahara2014field, singh2013spatial, song2011direct}, ARPES \cite{xu2016highly, hashimoto2018superconducting, okazaki2012evidence,miao2012isotropic}, specific heat \cite{sun2017symmetry,zeng2010anisotropic,abdel2015superconducting} and London penetration depth \cite{kasahara2014field,abdel2015superconducting}; but a consensus as to the size and symmetry of the gap in the full Brillouin zone (BZ) has not been reached. Although ARPES still remains the only, though non-phase-sensitive, method of direct determination of the gap as a function of momentum, i.e. the gap function, the agreement between existing reports is far from perfect. For instance, the authors of Ref. \onlinecite{miao2012isotropic} reported an isotropic 2.5 meV gap on a central pocket in FeSe$_{0.45}$Te$_{0.55}$, while smaller and considerably more anisotropic gaps were found in a compound of very similar composition, FeSe$_{0.4}$Te$_{0.6}$, in Ref. \onlinecite{okazaki2012evidence}. As for pristine FeSe, according to Ref.
\onlinecite{hashimoto2018superconducting} the gap is anisotropic on a central hole-like pocket, as it is in slightly S-doped FeSe \cite{xu2016highly}, while no gap has been observed on the electron pockets in the corner of the BZ. This is in contrast to the majority of the tunneling results \cite{jiao2017superconducting,sprau2017discovery,moore2015evolution,kasahara2014field}, which imply the presence of multiple superconducting gaps. Moreover, specific heat and London penetration depth measurements also indicate the presence of two gaps \cite{abdel2015superconducting}. Finally, no study has managed to shed light on a possible k$_z$-dependence of the gap function, although it was mentioned in Ref.\onlinecite{xu2016highly} that no gap could be detected either on the electron-like Fermi surfaces or on part of the hole-like pocket near the \(\Gamma\)-point. It is therefore important to have precise information about the behavior of the gap function throughout the whole BZ, preferably obtained with the same technique.
In this Letter, we report the results of a high-resolution ARPES study of the superconducting gap function in single crystals of FeSe, exactly the same material in which we recently clarified the fine details of the electronic structure \cite{kushnirenko2017anomalous, fedorov16, Borisenko16NPh}. We clearly observed two anisotropic SC gaps on the hole- and electron-like Fermi surfaces. Their momentum variation as a function of k$_x$, k$_y$ and k$_z$ is compared to that predicted by orbital-selective pairing \cite{sprau2017discovery} and by nematicity-induced anisotropy of the pairing gap \cite{Kang_arxiv}.
ARPES data have been collected at the I05 beamline of Diamond Light Source \cite{hoesch17RSI}. Single-crystal samples were cleaved \textit{in situ} in a vacuum better than $2\times10^{-10}$ mbar and measured at temperatures ranging from 5.7 K. Measurements were performed using linearly polarized synchrotron light, utilizing a Scienta R4000 hemispherical electron energy analyzer with an angular resolution of 0.2$^\circ$ – 0.5$^\circ$ and an energy resolution of 3 meV. Samples were grown by the KCl/AlCl$_3$ chemical vapor transport method.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{Fig1.jpg}
\caption{ (a)-(d) Fermi surface maps of the electron-like pockets measured using different photon energies. (e), (f) Fermi surface maps of the hole-like pockets measured using different photon energies. (g) Schematic sketch of the experimentally determined 3D Fermi surface of FeSe. (h) Momentum-energy intensity distribution along the line indicated in (f), (i) k$_F$ energy distribution curves above and below T$_c$ corresponding to the line from (h) and a star from (f).}
\label{fig:one}
\end{figure*}
In Fig.\ref{fig:one}(a-f) we show the experimental Fermi surface maps of electron- and hole-like pockets measured with different photon energies which correspond to different k$_z$-values in 3D BZ. A schematic picture of 3D Fermi surface of FeSe summarizing these and previous ARPES results is presented in Fig. \ref{fig:one}(g) and the evidence for the sensitivity of our experiments to the superconductivity itself is given in panels Fig.\ref{fig:one}(h,i).
In the center of the BZ near the Z-point, we see two hole-like elliptical pockets crossing each other (Fig. \ref{fig:one}(f)). These two ellipses originate from two different domain orientations in the nematic state \cite{fedorov16,watson2016evidence,xu2016highly,pustovit16LTP,suzuki2015momentum,watson15PRB} because the surface area probed by the photons is bigger than the typical domain size (above the nematic transition the FS near the Z-point is a single rounded hole-like pocket). As one approaches the \(\Gamma\)-point, the size of these hole-like pockets, squeezed by nematicity, rapidly decreases, resulting in a very small FS at k$_z$=0, as shown in Fig. \ref{fig:one}(e). The intensity distribution along the cut through the FS centered at the Z-point clearly shows two sets of spin-orbit-split $d_{xz,yz}$ dispersing features in Fig. \ref{fig:one}(h), while the $d_{xy}$-band, whose top in FeSe lies 50 meV below the Fermi level, is hardly visible, making itself noticeable only via hybridization with other states. The two features dispersing towards the Fermi level do not actually cross it and demonstrate all the typical signs of the opening of a small superconducting gap. The direct evidence is given in Fig. \ref{fig:one}(i) by two energy distribution curves (EDC) taken above and below the critical temperature of FeSe. The emergence of a coherence peak as well as a typical shift of the leading edge midpoint are clearly observed. Because of the closely spaced multiple features with drastically different Fermi velocities close to the Fermi level, it is the shift of the leading edge, or leading edge gap (LEG), which we will use throughout this paper to characterize the superconducting gap in FeSe.
In the corner of the BZ, there are two peanut-like \footnotemark[100] pockets crossing each other [Figs.~\ref{fig:one}(a-d, g)]. The size of these electron-like Fermi surfaces also changes with $k_z$: going from the A-point to the M-point, one notices a significant shrinking. A popular interpretation \cite{watson15PRB,zhang15PRB,shimojima14PRB,suzuki2015momentum} of the presence of these two pockets is that they result from the superposition of single electron pockets from different domains of the twinned sample, as is the case for the hole pockets in the center (see above). In this approach, the other electron pocket, though expected in conventional band structure calculations, should disappear below the transition. Our interpretation is that both pockets remain present in the nematic phase and in a single-domain sample, i.e. in agreement with the band structure calculations. The overlap of the contributions from two different domains does take place as well, but the resulting FS is more complicated, being a superposition of two sets of crossed ``peanuts''. Since each set becomes C$_2$-symmetric in the orthorhombic (nematic) phase, the overlap of two such structures rotated with respect to each other by $\sim$ 90$^\circ$ leads to an apparent doubling of each pocket. Such a small difference between the Fermi surfaces is predicted by DFT calculations for the orthorhombic state (see Figs. 2(b-d) in Ref.~\onlinecite{fedorov16}) and is in agreement with our previous ARPES results (see Figs. 2(e-g) in Ref.~\onlinecite{fedorov16}), where the elusive doubling can still be resolved. In the Supplemental Material \footnotemark[100] we present more ARPES data which unambiguously prove the presence of two sets of crossed peanut Fermi surfaces in FeSe. In order to avoid additional complications with the extraction of gap values from the spectra, we adjust the photon energy and the geometry of the experiment such that only one set of pockets is visible at a time.
Now let us turn to the momentum variation of the superconducting gap on the electron-like pocket near the A-point. The dataset shown in Fig.~\ref{fig:two}(a) was measured using 28 eV photons \footnotemark[101] with linear horizontal polarization from a sample cooled down to 5.5 K, i.e. in its superconducting state. We deliberately took the scans in a direction perpendicular to the BZ borders. Such an experimental geometry allowed us to detect more spectral weight, and thus more details, in comparison with the previous ARPES studies of the SC gap in FeSe. Specifically, the photoemission intensity from the pockets' ends is not suppressed as in earlier studies, and even some parts of the other pocket are visible. First, we establish the presence of the superconducting gap also in this region of momentum space. Figs.~\ref{fig:two}(c,d) show k$_F$-EDCs from the places on the pocket marked with stars in Fig.~\ref{fig:two}(a) in the normal and the superconducting state. In both cases, the pairs of EDCs demonstrate a shift of the leading-edge position to higher binding energies upon entering the superconducting state. The insets show the derivatives, which peak at different binding energies, by definition representing the positions of the leading edges. Moreover, this shift is not the same: it is equal to 0.3 meV and 0.6 meV, respectively, which indicates that the gap is not only present on the electron pockets but is also anisotropic.
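The leading-edge extraction described above can be illustrated with a toy model (a hypothetical sketch; the edge positions and width below are illustrative numbers, not the measured FeSe values): an EDC is modelled as a smooth Fermi-like edge, and the leading-edge position is read off as the peak of the first derivative.

```python
import numpy as np

# Toy model of k_F-EDCs: a smooth Fermi-like edge whose "leading edge"
# is the inflection point, i.e. the peak of the first derivative.
# The edge positions and width are illustrative, not measured values.
E = np.linspace(-5.0, 5.0, 2001)   # binding energy axis, meV
width = 0.8                        # edge width, meV

def edc(leading_edge):
    # Intensity rises from 0 (above E_F) to 1 (below) across the edge.
    return 1.0 / (1.0 + np.exp((leading_edge - E) / width))

def leading_edge_position(intensity):
    # The derivative of a sigmoid peaks at its inflection point.
    return E[np.argmax(np.gradient(intensity, E))]

normal = edc(0.0)       # edge at the Fermi level in the normal state
supercond = edc(0.6)    # edge shifted by the gap opening
shift = leading_edge_position(supercond) - leading_edge_position(normal)
print(shift)
```

The recovered shift equals the imposed edge displacement, which is why the derivative-peak criterion is a robust definition of the leading edge.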
\begin{figure}[b]
\centering
\includegraphics[width=1\linewidth]{Fig2.jpg}
\caption{ (a) Fermi surface map of the electron-like pocket near the A-point. (b) Symmetrized version of the map from (a). (c),(d) k$_F$ energy distribution curves from the parts of the pocket marked by stars in (a), measured in the normal and superconducting states. Insets show the first derivatives of the same curves. (e) Binding energy of the leading edge of the k$_F$-EDCs on the electron-like pocket from the raw data. (f) The same as (e) but from the symmetrized dataset. In order to match the results from the two different ellipses, the black dots are shifted by 90 degrees.}
\label{fig:two}
\end{figure}
For further analysis of the gap anisotropy, we have extracted the binding energy of the leading edge along the most intense peanut pocket. The result is shown in Fig.~\ref{fig:two}(e). Already these results, obtained from the raw data, clearly demonstrate the presence of a noticeable gap anisotropy. To compensate for the matrix-element effects, which result in an uneven intensity distribution along the peanut, we symmetrized the dataset from panel (a) with respect to the long axis of the better visible pocket. The symmetrized FS map is shown in Fig.~\ref{fig:two}(b), and the corresponding binding energy of the leading edge as a function of $\theta$ is shown in Fig.~\ref{fig:two}(f). Here the red dots correspond to the ellipse marked by the red contour on top of the map, while the black dots correspond to the visible part of the other peanut (black lines in the map). This plot presents direct evidence of an anisotropic superconducting gap on the electron-like pocket near the A-point of FeSe. Not surprisingly, the symmetry of this gap function is C$_2$. The largest gap, which corresponds to the lowest leading-edge position, is on the shorter axis of the peanuts, while the gap minima are on the longer axis. Fitting the data with a periodic function $\epsilon=A_0+A_1 \cos\phi+A_2 \cos 2\phi$, where $A_0, A_1, A_2$ are free parameters, gives an amplitude of the gap variation of 0.6 meV (see the brown curve in Fig.~\ref{fig:two}(f)). The fit to the raw data from the non-symmetrized map has almost the same shape and amplitude (blue curve in Fig.~\ref{fig:two}(e)).
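The fitting step can be sketched as follows (with synthetic data; the coefficients below are illustrative, not the measured FeSe values). Since $\epsilon=A_0+A_1\cos\phi+A_2\cos 2\phi$ is linear in the free parameters, an ordinary least-squares fit suffices:

```python
import numpy as np

# Synthetic leading-edge positions (meV) vs angle with C2 symmetry;
# the coefficients below are illustrative, not the measured FeSe values.
phi = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
A0_true, A1_true, A2_true = -1.0, 0.1, 0.3
eps = A0_true + A1_true * np.cos(phi) + A2_true * np.cos(2.0 * phi)
eps = eps + 0.01 * np.sin(7.0 * phi)   # small deterministic "noise"

# The model is linear in (A0, A1, A2), so ordinary least squares suffices.
design = np.column_stack([np.ones_like(phi), np.cos(phi), np.cos(2.0 * phi)])
(A0, A1, A2), *_ = np.linalg.lstsq(design, eps, rcond=None)

# Peak-to-peak variation of the fitted gap function.
fit = design @ np.array([A0, A1, A2])
variation = fit.max() - fit.min()
print(A0, A1, A2, variation)
```

The $\cos 2\phi$ term carries the C$_2$ anisotropy; its coefficient, together with the $\cos\phi$ term, sets the peak-to-peak gap variation quoted in the text.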
As mentioned above, each peanut of the electron-like FS is a superposition of two components which originate from the two orthorhombic domains (for details see the Supplemental Material \footnotemark[100]). It is thus instructive to know exactly which component is analyzed in Fig.~\ref{fig:two}. From a comparison of the pocket shape obtained from Fig.~\ref{fig:two}(a) with the one in Ref.~\footnotemark[100], one can conclude that the intensity on this map mostly originates from the shorter peanut, and we have thus analyzed the gap anisotropy related to this pocket. The finite intensity from the longer peanut, which is still present on the map, could effectively lower the leading-edge energy position at the ends (longer axis) of the shorter peanut \cite{BorisenkoSymm}. This is because the dispersions along this direction in momentum space run nearly parallel to each other, and the one which supports the longer peanut (secondary signal) can contribute spectral weight to the k$_F$-EDC of the primary feature. Consequently, the amplitude of the LEG anisotropy can be underestimated. Here we would like to point out that, although the LEG is a good qualitative measure of the superconducting gap and its anisotropy, the correspondence of the absolute values is more complicated and depends on many factors \cite{kordyuk2003measuring}. Since modelling of the spectral function, definitely necessary to provide (model-dependent) absolute values of the gap in the case of FeSe, is beyond the scope of this paper, we will continue to discuss the LEG as a robust quantity which can be extracted directly from the raw data without sophisticated data analysis. As a rule, the real gap is slightly larger than the LEG.
\begin{table}[h]
\centering
\begin{tabular}{c|c}
Photon energy & Leading-edge gap anisotropy\\ \hline
25 eV & 0.35 meV \\
28 eV (A-point) & 0.6 meV \\
30 eV & 0.8 meV \\
42 eV (M-point) & 0.7 meV \\
\end{tabular}
\caption{Leading-edge gap anisotropy on the electron-like pocket for different k$_z$ values}
\label{tab:one}
\end{table}
To explore the gap function in the whole 3D momentum space, we have also analyzed datasets taken using different photon energies: 42 eV, which corresponds to the M-point \footnotemark[101], as well as 25 eV and 30 eV. The amplitude of the LEG variation, i.e. the difference in the leading-edge position between the long- and short-axis k$_F$-EDCs of the peanut, is given in Table \ref{tab:one}. While the functional form of the anisotropy is approximately the same, there are clear oscillations of the amplitude as a function of k$_z$, with the most rapid variations taking place in the vicinity of the A-point.
Now let us consider the hole-like Fermi surface in the center of the Brillouin zone. Fig.~\ref{fig:three}(a) shows the Fermi surface map of the hole-like pocket near the Z-point. Here it is convenient to analyze the EDCs from the lower part of the map, as panels (b) and (c) demonstrate. In Fig.~\ref{fig:three}(b) the cut from the upper part of the map is shown, where it is seen that the other split component of the $d_{xz,yz}$-dispersion is strong and thus complicates the LEG analysis. In Fig.~\ref{fig:three}(c), in contrast, the intensity from these states is weak, and the dispersing features responsible for the gapped FS are more pronounced. The presence of the gap is evident from Fig.~\ref{fig:three}(e), which shows k$_F$-EDCs from the intensity distribution along the line going through the $\Gamma$-point (Fig.~\ref{fig:three}(d)), measured above and below T$_c$. The leading-edge shift between these EDCs is 0.8 meV. In order to estimate the SC gap anisotropy on this FS, we have extracted the binding energy of the leading edge from the exemplary EDCs corresponding to the red markers in Fig.~\ref{fig:three}(a). The result is shown in Fig.~\ref{fig:three}(f). Also from this figure one clearly notices that the gap on the hole-like pocket is anisotropic, with a maximum located on the shorter ellipse axis and a minimum on the longer one. Fitting the data with a periodic function yields a difference between the extrema of 0.75 meV.
\begin{figure}[]
\centering
\includegraphics[width=1\linewidth]{Fig3.jpg}
\caption{ (a) Fermi surface map of the hole-like pocket near the Z-point. (b)-(d) Spectra measured along the directions shown on the map with two orange lines and one grey line, respectively. (e) k$_F$ energy distribution curves obtained along the direction shown with a line in (c), from spectra measured in the normal and superconducting states. (f) Binding energy of the leading edge of the k$_F$-EDCs from the hole-like pocket. (h) Cut through the $\Gamma$-point, with the white line representing the leading-edge position of the EDCs near zero momentum.}
\label{fig:three}
\end{figure}
Near the \(\Gamma\)-point the hole-like pocket becomes too small (about 0.06 ${\text{Å}}^{-1}$ in diameter) to disentangle the two components of the Fermi surface originating from the two domain orientations. The analysis of the anisotropy of the superconducting gap becomes very complicated and model-dependent. The presence of the gap itself is, however, apparent. This follows also from the typical leading-edge position behavior extracted from the cut measured through the pocket center, Fig.~\ref{fig:three}(h). The presence of a deep minimum in this curve points to the back-folding of the dispersion due to superconductivity. If the top of the dispersion is located close to the Fermi level, as is the case here, the opening of the gap results exactly in this behavior of the leading-edge position \cite{evtushinsky2011fusion}. In the absence of a gap, i.e. for nodes, one would expect a flat shape of this curve, dictated mostly by the Fermi function, since the spectral function is nearly equally strong also in between the Fermi-level crossings due to the proximity of the band top.
Fig.~\ref{fig:four} summarizes our findings regarding the 3D gap function in FeSe. In this figure we show only the FS corresponding to a single domain. The superconducting gap on both parts of the Fermi surface is anisotropic in the k$_x$-k$_y$ plane as well as in the k$_z$ direction. The largest leading-edge gap is on the hole-like pocket near the Z-point. The gap oscillations on this pocket have C$_2$ symmetry, with a maximum on the short axis of the ellipse. Near the \(\Gamma\)-point the gap seems to be considerably smaller, but not zero. The symmetry of the leading-edge gap on the electron-like pocket is also C$_2$ on each of the two ellipses. Its highest value corresponds to the short axis of the ellipse and is smaller than the one on the hole-like pocket. The behavior of the gap on the electron-like pocket is qualitatively the same for all k$_z$, but is characterized by a non-monotonic amplitude of the oscillations when going from the A- to the M-point.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{Fig4.jpg}
\caption{ Summary of the obtained results. The fits to the experimental LEG distribution in 3D BZ are shown for different k$_z$ values for hole and electron pockets. In both cases zero $\theta$ corresponds to the diagonal of the BZ, i.e. the direction in the momentum space which connects the center of hole-like pocket with the center of electron-like pocket.}
\label{fig:four}
\end{figure}
We have previously detected a correlation between the size of the gap and the degree of spin-orbit splitting in all the main representatives of the iron-based superconductors \cite{Borisenko16NPh}. The present study confirms this with a new level of precision for FeSe. Indeed, the absolute value of the gap in the center of the BZ, where the spin-orbit splitting is maximal, is larger than the one on the electron-like pockets. Another correlation with the absolute value of the gap was noticed by us in hole-doped 122 materials \cite{RN110}. There we demonstrated that the gap is always the largest for the $d_{xz,yz}$-states and decreases as soon as another orbital character is admixed.
We also compare the earlier determined anisotropic gap function in LiFeAs \cite{BorisenkoSymm} with the one determined in the present study for FeSe. Apparently, very different electronic structures result in qualitatively different gap structures. In the case of LiFeAs, the electron-like pockets are significantly larger and the gap oscillates on them in phase, having C$_4$ symmetry, i.e. it is maximal on both of them when crossing the diagonal of the BZ and minimal in between, regardless of the shorter or longer axes of the ellipses. The gap of the $d_{xz,yz}$ states is also the largest in FeSe, but it oscillates in the same way as the gap on the large $d_{xy}$-pocket in LiFeAs (which is absent in FeSe), having minima on the diagonals of the BZ. Oscillations of the gap on the small and 3D $d_{xz,yz}$-pocket in LiFeAs have not been resolved. This comparison calls for detailed and quantitative theoretical estimates of the gaps in FeSe by the same methods applied to LiFeAs earlier \cite{RN112,RN105,RN109}.
The presented results imply a significant anisotropy of the superconducting gap in FeSe, not only in-plane, but also as a function of k$_z$. This is in contrast to the expectations of the conventional spin-fluctuation-mediated pairing theories, where mostly isotropic s-wave gaps, or a different anisotropy, are expected. The concept of orbital-selective Cooper pairing suggested recently \cite{sprau2017discovery} seems to be in agreement with our observations. There, the concentration of the pairing in a particular orbital channel may arise from differences in the correlation strength for electrons with different orbital character. In particular, it explains the persistently smaller $d_{xy}$-gaps observed experimentally by increased incoherence, which would suppress the pairing within an itinerant picture. Indeed, the anisotropy of the superconducting gap in FeSe studied here can roughly be explained by the orbital composition of the states forming the Fermi surface in the normal state: wherever the contribution of the $d_{xz,yz}$ character is stronger, the gap reaches its maximum. On the other hand, this concept also requires adding phenomenologically different quasiparticle spectral weights for the $d_{xz}$ and $d_{yz}$ orbitals. We do not observe significantly different Z-weights of these orbitals, because both electron-like pockets are present within a single domain and the corresponding peaks of the spectral function are equally sharp \footnotemark[100]. At the same time, our results are in qualitative agreement with the gap anisotropy extracted from the tunneling data \cite{sprau2017discovery}, with the difference in the absolute values probably being due to the mentioned peculiarities of the LEG.
Our data are also consistent with the variations of the pairing gap caused by nematicity itself \cite{Kang_arxiv}. In this approach, the anisotropy of the gap arises from the mixing of the $s$-wave and $d$-wave pairing channels, without the necessity to postulate different Z-factors for each orbital.
In order to make a more rigorous statement as to the applicability of one or another theoretical approach, more detailed calculations are obviously needed to reproduce the whole 3D momentum dependence of the superconducting gap in FeSe determined in the present study.
This work was supported by Deutsche Forschungsgemeinschaft Grants No. BO1912/6-1 and No. BO1912/7-1. We acknowledge Diamond Light Source for time on Beamline I05 under proposals SI11643-1 and SI18586-1. We are grateful to Dirk Morr, Matthew Watson, Andrey Chubukov and Alexander Kordyuk for the fruitful discussions. In addition, S. Aswartham and I. Morozov express their gratitude to the Volkswagen Foundation for financial support and are grateful to Dmitriy Chareev for valuable consultations on growing FeSe single crystals.
\footnotetext[100]{Supplemental Material}
\footnotetext[101]{It is known that the Z- and \(\Gamma\)-points in FeSe correspond to photon energies of 23 eV and 37 eV, respectively \cite{maletz14PRB,watson2015suppression,watson15PRB}. However, when one probes electrons emitted away from the normal to the surface, the out-of-plane component (k$_z$) is not constant and decreases. In order to compensate for this when probing the part of the BZ near the A- and M-points, one should use photons of different energies: 28 eV for the A-point and 42 eV for the M-point \cite{fedorov16}.}
\section{Radius of Injection}\label{app:RadiusofInjection}
Let $G={\rm SL}_n(\R)$, $\Gamma={\rm SL}_n(\Z)$, and $X=\Gamma\backslash G$ be the space of lattices. We want to prove the following lemma for the radius of injection.
\begin{customlem}{\ref{lem:radiusofinjection}}
There exist constants $c_1, c_2>0$ (depending only on $n$) such that for any $0<\epsilon<c_1$, the projection map
\begin{align*}
\pi_x: B_r^G(e) &\to B_r^X(x)\\
g &\mapsto xg
\end{align*}
is injective for all $x\in L_\epsilon$, where $r=c_2\epsilon^n$.
\end{customlem}
To do this, we will first need some background on Siegel sets for the action of ${\rm SL}_n(\Z)$ on ${\rm SL}_n(\R)$.
\subsection{Siegel Sets}\label{sect:SiegelSet}
Let $K = {\rm SO}(n)$, let $A$ be the positive diagonal subgroup, and let $N$ be the subgroup of upper triangular unipotents. The Iwasawa decomposition of $G$ is given by $G = NAK$. One can use reduction theory for arithmetic groups to find a convenient way of writing $x\in X$ in terms of particular subsets of these subgroups.
Given $\epsilon>0$, define
\begin{align*}
A_\epsilon &= \left\{{\rm diag}(a_1, \cdots, a_n)\in A \hspace{4pt} \vline \hspace{4pt} \frac{a_{i+1}}{a_{i}} \leq \epsilon \right\}\\
N_\epsilon &=\left\{ u \in N \hspace{2pt} \vline \hspace{4pt} |u_{i,j}| \leq \epsilon \hspace{6pt} \forall i<j \right\}.
\end{align*}
A Siegel set for $G$ is a set of the form $\Sigma_{s,t} := N_{s}A_{t} K$ for some $s,t>0$.
Siegel sets can be thought of as a nice way of approximating a fundamental domain for the action of $\Gamma={\rm SL}_n(\Z)$ on $G$
(see Figure \ref{fig:SiegelSet}).
This approximation can be optimized in the following sense:
For any
$s\geq 1/2$ and $t\geq 2/\sqrt{3}$, $G={\rm SL}_n(\R)$ can be written as
\begin{align*}
G = \Gamma \Sigma_{s,t}
\end{align*}
(for details and a proof, see \cite{Rag} Theorem 10.4 or \cite{BekkaMayer} Theorem 5.1.7).
\begin{figure}
\begin{center}
\begin{tikzpicture}
\fill [gray!20] (1, 4) -- (1, 1.732) -- (-1, 1.732) -- (-1,4);
\node[right,gray] at (.1,3) {\large{$S$}};
\draw [-] (-2,0) -- (2,0);
\draw [-] (0,0) -- (0,4);
\draw [-] (1, 4) -- (1, 1.732);
\draw [-] (-1, 4) -- (-1, 1.732);
\draw [-] (-1, 1.732) -- (1, 1.732);
\draw [domain=60:120] plot ({2*cos(\x)}, {2*sin(\x)});
\node [below] at (-1.1,-.1) {\small{$-1/2$}};
\node [below] at (1,-.1) {\small{$1/2$}};
\draw [-] (-1,-.1)--(-1, .1);
\draw [-] (1, -.1)--(1, .1);
\end{tikzpicture}
\caption{The Siegel set $S = \Sigma_{1/2,2/\sqrt{3}}$ and the fundamental domain when $n=2$, represented in the Poincar\'e upper half plane.}
\label{fig:SiegelSet}
\end{center}
\end{figure}
\subsection{Proof of Radius of Injection}
We start with the following well-known computation.
\begin{lem}\label{lem:Adg}
Let $g\in \Sigma_{\frac{1}{2},\frac{2}{\sqrt{3}}}$ satisfy $\Gamma g\in L_\epsilon$. Then the operator norm of ${\rm Ad}_g:{\rm Mat}_{n\times n}(\mathbb{R}) \to {\rm Mat}_{n\times n}(\mathbb{R})$ satisfies
\begin{align*}
\nn{{\rm Ad}_g}\ll \epsilon^{-n}
\end{align*}
where the implicit constant depends only on dimension $n$.
\end{lem}
\begin{proof}
Let $g=uak$ where $u\in N_{1/2}$, $a={\rm diag}(a_1,\cdots, a_n)\in A_{2/\sqrt{3}}$, and $k\in {\rm SO}(n)$. Let $\{e_i\}_{1\leq i \leq n}$ be the standard basis of $\mathbb{R}^n$ and fix $\nn{\cdot}$ to be the max matrix norm on ${\rm Mat}_{n\times n}(\mathbb{R})$ (any other norm will work equally well).
Notice that $e_n u = e_n$ for any $u\in N$ and that $e_n a = a_ne_n$ for $a\in A$. Furthermore, since $k$ is an orthogonal matrix, we have that $\nn{v k} \leq \sqrt{n}\nn{v}$ for any $v\in \mathbb{R}^n$. Then, since $\Gamma uak\in L_\epsilon$, we know that $\nn{vuak}\geq \epsilon$ for all $v\in \mathbb{Z}^n\setminus\{0\}$. In particular,
\begin{align*}
\epsilon \leq \nn{e_n uak} \leq \sqrt{n} \nn{e_n ua} = \sqrt{n}\nn{e_n a} =\sqrt{n}a_n\nn{e_n} = \sqrt{n}a_n.
\end{align*}
But since $a\in A_{2/\sqrt{3}}$, we can also say
\begin{align*}
\epsilon/\sqrt{n} \leq a_n \leq (2/\sqrt{3})a_{n-1}\leq (2/\sqrt{3})^2a_{n-2}\leq\cdots\leq(2/\sqrt{3})^{n-1}a_{1}
\end{align*}
which means that $a_i \geq C\epsilon$ for all $1\leq i \leq n$, where $ C = (\sqrt{3}/2)^{n-1}/\sqrt{n}$. Moreover, since $\det a = a_1 a_2 \cdots a_n = 1$, we have that
\begin{align*}
a_i = \frac{1}{a_1\cdots a_{i-1}a_{i+1}\cdots a_n} \leq \frac{1}{C^{n-1}\epsilon^{n-1}}
\end{align*}
for any $1\leq i\leq n$.
This implies that for any $1\leq i,j\leq n$, the ratio $a_i/a_j$ can be bounded by
\begin{align*}
\frac{a_i}{a_j} \leq \frac{1}{C^n\epsilon^n}.
\end{align*}
But notice that for an arbitrary matrix $m\in {\rm Mat}_{n\times n}(\mathbb{R})$,
\begin{align*}
|(ama^{-1})_{ij}| = \frac{a_i}{a_j} |m_{ij}| \leq C^{-n}\epsilon^{-n} |m_{ij}|.
\end{align*}
Thus for $a\in A_{2/\sqrt{3}}$, under the max norm on matrices, we have
\begin{align*}
\nn{ama^{-1}}\leq C^{-n} \epsilon^{-n}\nn{m}.
\end{align*}
Furthermore, since $u\in N_{1/2}$, the magnitudes of all entries of $u$ are bounded by $1$, and hence (because $u-e$ is nilpotent) the entries of $u^{-1}$ are bounded by a constant depending only on $n$. It is therefore relatively straightforward to see (via matrix multiplication, each entry of the conjugate being a sum of at most $n^2$ products) that $\nn{umu^{-1}}\ll_n \nn{m}$, and the same follows for $k\in K$.
Thus for arbitrary $m\in {\rm Mat}_{n\times n}(\mathbb{R})$,
\begin{align*}
\nn{gmg^{-1}} &= \nn{uakmk^{-1}a^{-1}u^{-1}}\\
&\ll \nn{akmk^{-1}a^{-1}}\\
&\ll \epsilon^{-n}\nn{kmk^{-1}}\\
&\ll \epsilon^{-n}\nn{m}
\end{align*}
where all of the above constants depend solely on $n$. This implies that
\begin{align*}
\nn{{\rm Ad}(g)}\ll \epsilon^{-n}
\end{align*}
as claimed.
\end{proof}
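As a numerical sanity check of the two conjugation bounds used in the proof (not part of the argument), one can test them on random matrices; for the unipotent factor we use the guaranteed form $n^2\nn{u}\,\nn{u^{-1}}\,\nn{m}$ of the bound:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

def max_norm(m):
    return np.abs(m).max()

# Diagonal part: conjugation by a = diag(a_1,...,a_n) scales entry (i, j)
# by exactly a_i / a_j, so the max norm grows by at most the largest ratio.
a_diag = rng.uniform(0.2, 5.0, size=n)
a = np.diag(a_diag)
a_inv = np.diag(1.0 / a_diag)
ratio = (a_diag[:, None] / a_diag[None, :]).max()
worst_diag = max(
    max_norm(a @ m @ a_inv) / max_norm(m)
    for m in (rng.standard_normal((n, n)) for _ in range(200))
)

# Unipotent part: each entry of u m u^{-1} is a sum of n^2 products, so
# ||u m u^{-1}|| <= n^2 ||u|| ||u^{-1}|| ||m|| in the max norm.
u = np.eye(n) + np.triu(rng.uniform(-0.5, 0.5, size=(n, n)), k=1)
u_inv = np.linalg.inv(u)
C = n**2 * max_norm(u) * max_norm(u_inv)
worst_unip = max(
    max_norm(u @ m @ u_inv) / max_norm(m)
    for m in (rng.standard_normal((n, n)) for _ in range(200))
)
print(worst_diag, ratio, worst_unip, C)
```

The diagonal ratio bound is the step that produces the $\epsilon^{-n}$ in the lemma; the unipotent and orthogonal factors only contribute constants depending on $n$.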
We may now prove our original lemma for the radius of injection.
\begin{proof}[Proof of Lemma \ref{lem:radiusofinjection}]
Let $x\in L_\epsilon$. By Section \ref{sect:SiegelSet}, we can write $x=\Gamma g$, for some $g\in\Sigma_{1/2,2/\sqrt{3}}$. Suppose $g_1, g_2\in B^G_r(e)$ and $\pi_x(g_1)=\pi_x(g_2)$, i.e. $\Gamma g g_1 = \Gamma g g_2$.
Then there exists $\gamma\in\Gamma$ such that $gg_1 = \gamma g g_2$, i.e. $g_1 = g^{-1}\gamma g g_2$. From this and the left-invariance of the metric, we have that
\begin{align*}
d_G(e,g^{-1}\gamma g)
&\leq d_G(e, g_1) + d_G(g_1,g^{-1}\gamma g)\\
&\leq r + d_G(g^{-1}\gamma^{-1} g g_1,e)\\
&\leq r+ d_G(g^{-1}\gamma^{-1} g g_1,g_2) + d_G(g_2,e)\\
&\leq r+d_G(g_1,g^{-1}\gamma gg_2) + r\\
&= 2r,
\end{align*}
where the last equality holds because $g_1 = g^{-1}\gamma g g_2$, so the middle distance vanishes.
But recall that around every point in $G$ there is a neighborhood on which the metric $d_G$ and the metric derived from any matrix norm are Lipschitz equivalent. Hence, around the identity, for $r$ less than some fixed value depending only on $n$, we have
\begin{align*}
\nn{e-g^{-1}\gamma g}\ll d_G(e,g^{-1}\gamma g) \ll r
\end{align*}
where $\nn{\cdot}$ is the max norm. Finally, by Lemma \ref{lem:Adg},
\begin{align*}
\nn{e-\gamma} &=\nn{gg^{-1}(e-\gamma)gg^{-1}}\ll \epsilon^{-n}\nn{g^{-1}(e-\gamma)g}=\epsilon^{-n}\nn{e-g^{-1}\gamma g} \ll r/\epsilon^n.
\end{align*}
Thus for a correctly chosen constant $c_2$, $r=c_2\epsilon^n$ implies that
\begin{align*}
\nn{e-\gamma}<1.
\end{align*}
But since $\gamma\in\Gamma = {\rm SL}_n(\Z)$ has integer entries, this can only happen if $\gamma = e$, which implies $g_1=g_2$, so $\pi_x$ is injective on $B^G_r(e)$.
\end{proof}
\section{Properties of the Function $G_d$}\label{app:Gd}
Recall that we defined the generalized Pillai's function $G_d:\mathbb{N}\to\mathbb{N}$ by
\[
G_d (K) := \#\{ {\bf k} \in \tilde B_K \hspace{2pt}|\hspace{2pt} k_1 \cdots k_d \equiv 0 \mod K \}.
\]
We want to prove the following properties of this function.
\begin{customlem}{\ref{lem:Gd}}
For any integers $K,d\geq 1$, the following hold:
\begin{enumerate}[(i)]
\item (Iterated sum formula)
\[
G_d(K) = \sum^K_{k_{d-1} = 1}\cdots \sum^K_{k_1 = 1} \gcd(K, k_1\cdots k_{d-1}).
\]
\item (Recursive formula) Let ${\rm Id}^d(K) = K^d$. Then
\[
G_{d+1} = {\rm Id}^d\ast(\phi\cdot G_d).
\]
\item (Multiplicativity) $G_d$ is multiplicative.
\item (Behavior at primes) Let $p$ be a prime. Then
\[
G_d(p) = p^d-(p-1)^d.
\]
\item (Dirichlet series bound) For real $x>e$ and $s<d$,
\[
\sum_{K\leq x} \frac{G_d(K)}{K^s} \ll_{s,d} x^{d-s}(\log x)^{d-1}.
\]
\end{enumerate}
\end{customlem}
To do this, let us first recall a few basic facts from number theory. For any function $f: \mathbb{N} \to \mathbb{R}$, we have
\begin{align}
\sum^K_{i=1} f(\gcd(K,i)) = \sum_{j|K} f(j)\phi(K/j)\label{eq:Cformula}
\end{align}
where $\phi$ is Euler's totient function, i.e. $\phi(n)$ is the number of positive integers not exceeding $n$ that are relatively prime to $n$. This formula dates back to the work of Ces\`{a}ro and is sometimes referred to as Ces\`{a}ro's formula (cf. \cite{Cesaro} or \cite{DicksonHist}).
Recall also the definition of Dirichlet convolution: If $f$ and $g$ are functions on the natural numbers, then their convolution is defined by
\begin{align*}
(f\ast g)(K) := \sum_{j|K} f(j)g(K/j).
\end{align*}
So, for example, (\ref{eq:Cformula}) says that $\sum^K_{i=1} f(\gcd(K,i)) = (f\ast \phi)(K)$. Recall that the convolution of two multiplicative arithmetic functions is again multiplicative. We now have everything we need to complete the proof.
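As a quick numerical sanity check (not needed for the proof), Ces\`{a}ro's formula can be verified by brute force for an arbitrary test function:

```python
from math import gcd

def phi(n):
    # Euler's totient by direct count; adequate for small n.
    return sum(1 for i in range(1, n + 1) if gcd(n, i) == 1)

def cesaro_holds(f, K):
    # Left side: sum of f over gcds; right side: Dirichlet-style sum
    # over divisors, f(j) * phi(K/j).
    lhs = sum(f(gcd(K, i)) for i in range(1, K + 1))
    rhs = sum(f(j) * phi(K // j) for j in range(1, K + 1) if K % j == 0)
    return lhs == rhs

# Check the identity for an arbitrary test function f on K = 1..60.
all_ok = all(cesaro_holds(lambda j: j * j + 1, K) for K in range(1, 61))
print(all_ok)
```

The identity holds because the integers $i\leq K$ with $\gcd(K,i)=j$ are exactly the $\phi(K/j)$ multiples of $j$ that are "primitive" modulo $K/j$.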
\begin{proof}
(\ref{Gdprops1})
To determine an expression for
\[
G_d (K) := \#\{ {\bf k} \in \tilde B_K | k_1 \cdots k_d \equiv 0 \mod K \}
\]
notice that to specify a point ${\bf k}\in \tilde B_K$ such that $k_1\cdots k_d \equiv 0 \mod K$, we can choose $k_1$ through $k_{d-1}$ independently to be any integers between $1$ and $K$, but then the remaining coordinate $k_d$ must be a multiple of $K/\gcd(K, k_1\cdots k_{d-1})$, that is, the last coordinate must contain all primes in $K$ not contained in any of the previous coordinates. Since there are $\gcd(K, k_1\cdots k_{d-1})$ multiples of $K/\gcd(K, k_1\cdots k_{d-1})$ less than or equal to $K$, the total number of points counted in this way is given by
\begin{align*}
G_d(K) = \sum^K_{k_{d-1} = 1}\cdots \sum^K_{k_1 = 1} \gcd(K, k_1\cdots k_{d-1}).
\end{align*}
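This counting argument is easy to confirm by brute force (a sanity check, not part of the proof): the direct count of tuples agrees with the iterated gcd sum for small $d$ and $K$:

```python
from itertools import product
from math import gcd, prod

def G_direct(d, K):
    # Direct count of d-tuples in {1,...,K}^d whose product is 0 mod K.
    return sum(1 for k in product(range(1, K + 1), repeat=d)
               if prod(k) % K == 0)

def G_formula(d, K):
    # Choose k_1,...,k_{d-1} freely; exactly gcd(K, k_1*...*k_{d-1})
    # choices remain for the last coordinate k_d.
    return sum(gcd(K, prod(k))
               for k in product(range(1, K + 1), repeat=d - 1))

all_ok = all(G_direct(d, K) == G_formula(d, K)
             for d in (2, 3) for K in range(1, 13))
print(all_ok, G_formula(2, 6))
```

For $d=2$ this reduces to Pillai's arithmetical function, e.g. $G_2(6)=\sum_{k=1}^{6}\gcd(6,k)=15$.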
(\ref{Gdprops2})
We will proceed by induction on $d$. For the base case, we have $G_2(K) = {\rm Id}\ast \phi = {\rm Id}\ast (\phi \cdot 1) = {\rm Id}\ast (\phi \cdot G_1)$, which is a well-known formula for Pillai's arithmetical function, as mentioned above. Then suppose $G_d = {\rm Id}^{d-1}\ast (\phi \cdot G_{d-1})$ for $d\geq2$ and consider $G_{d+1}$.
Notice that for any integers $k$, $n$, and $m$, we can write $\gcd(k, nm) = \gcd(k, n\gcd(k,m))$, that is, we can throw out all the primes in $m$ that are not in $k$. Furthermore, since $\gcd(k, m)|k$, we can write
\begin{align*}
\gcd(k, n\gcd(k,m)) = \gcd(k, m)\gcd(k/\gcd(k,m), n).
\end{align*}
Hence, $G_{d+1}$ may be written
\begin{align*}
G_{d+1}(K) &= \sum^K_{k_d = 1}\cdots \sum^K_{k_1 = 1} \gcd(K, k_1\cdots k_{d})\\
&= \sum^K_{k_d = 1}\cdots \sum^K_{k_1 = 1} \gcd(K, k_d)\gcd(K/\gcd(K, k_d), k_1\cdots k_{d-1})\\
&= \sum^K_{k_d = 1} \gcd(K, k_d) \left(\sum^K_{k_{d-1} = 1}\cdots \sum^K_{k_1 = 1} \gcd(K/\gcd(K, k_d), k_1\cdots k_{d-1})\right)
\end{align*}
But now notice that the function $\gcd(K/\gcd(K, k_d), k_1\cdots k_{d-1})$ is periodic with period $K/\gcd(K,k_d)$ in each coordinate $k_i$ for $i = 1, \cdots, d-1$. Thus
\begin{align*}
\sum^K_{k_i = 1} \gcd(K/\gcd(K, k_d), k_1\cdots k_{d-1}) = \gcd(K,k_d)\sum^{K/\gcd(K,k_d)}_{k_i = 1} \gcd(K/\gcd(K, k_d), k_1\cdots k_{d-1})
\end{align*}
for $i = 1, \cdots, d-1$. Therefore,
\begin{align*}
G_{d+1}(K) &= \sum^K_{k_d=1} \gcd(K, k_d)^d \left(\sum^{K/\gcd(K, k_d)}_{k_{d-1} = 1}\cdots \sum^{K/\gcd(K, k_d)}_{k_1 = 1} \gcd(K/\gcd(K, k_d), k_1\cdots k_{d-1})\right)\\
&= \sum^K_{k_d=1} \gcd(K, k_d)^d G_d(K/\gcd(K, k_d)).
\end{align*}
But by Ces\`{a}ro's formula, this is simply
\begin{align*}
G_{d+1}(K) = \sum_{j|K} j^d \phi(K/j) G_d(K/j).
\end{align*}
Finally, we can express this in terms of Dirichlet convolution as
\begin{align*}
G_{d+1}(K) = ( {\rm Id}^d \ast (\phi \cdot G_d))(K)
\end{align*}
which completes our proof by induction.
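The recursive formula can likewise be checked numerically against the brute-force count (again only a sanity check):

```python
from itertools import product
from math import gcd, prod

def phi(n):
    # Euler's totient by direct count; adequate for small n.
    return sum(1 for i in range(1, n + 1) if gcd(n, i) == 1)

def G(d, K):
    # Brute-force count of d-tuples in {1,...,K}^d with product 0 mod K.
    return sum(1 for k in product(range(1, K + 1), repeat=d)
               if prod(k) % K == 0)

def G_recursive(d, K):
    # G_d = Id^{d-1} * (phi . G_{d-1}) as a Dirichlet product,
    # with G_1 identically 1.
    return sum(j ** (d - 1) * phi(K // j) * G(d - 1, K // j)
               for j in range(1, K + 1) if K % j == 0)

all_ok = all(G(d, K) == G_recursive(d, K)
             for d in (2, 3) for K in range(1, 13))
print(all_ok, G_recursive(2, 6))
```

For instance, $G_2(6)=\sum_{j|6} j\,\phi(6/j)=1\cdot 2+2\cdot 2+3\cdot 1+6\cdot 1=15$, matching the direct count.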
(\ref{Gdprops3})
The multiplicativity of $G_d$ for $d\geq 1$ follows immediately from the recursive formula along with the facts that ${\rm Id}^d$, $\phi$, and $G_1 = 1$ are all multiplicative,
and products and convolutions of multiplicative functions are multiplicative.
(\ref{Gdprops4})
We will again proceed by induction on $d$. Notice that $G_1(p) = 1 = p-(p-1)$ for all $p$. Now suppose that for some $d\geq1$, $G_d(p) = p^d - (p-1)^d$ for all primes $p$.
By the recursive formula proved above, we may write
\begin{align*}
G_{d+1}(p) &= \sum_{j|p} j^d \phi(p/j) G_d(p/j)\\
&= 1^d\phi(p)G_d(p) +p^d\phi(1) G_d(1)
\end{align*}
since for prime $p$ the sum is only over $j=1,p$. Then by the induction hypothesis and the facts that $G_d(1) = \phi(1) =1$ and $\phi(p) = p-1$ for any prime $p$, we have that
\begin{align*}
G_{d+1}(p) &= (p-1)(p^d - (p-1)^d) + p^d\\
&= p^{d+1} - (p-1)^{d+1}
\end{align*}
which completes the proof.
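Both the prime formula and the multiplicativity can be confirmed by direct computation for small values:

```python
from itertools import product
from math import prod

def G(d, K):
    # Brute-force count of d-tuples in {1,...,K}^d with product 0 mod K.
    return sum(1 for k in product(range(1, K + 1), repeat=d)
               if prod(k) % K == 0)

# At a prime p, the product avoids p iff every factor does, leaving
# (p-1)^d of the p^d tuples, hence G_d(p) = p^d - (p-1)^d.
primes_ok = all(G(d, p) == p**d - (p - 1)**d
                for d in (1, 2, 3) for p in (2, 3, 5, 7))

# Multiplicativity, checked on a coprime pair.
mult_ok = G(2, 4 * 9) == G(2, 4) * G(2, 9)
print(primes_ok, mult_ok)
```

The complement count at a prime also gives an independent one-line proof of property (iv), matching the induction above.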
(\ref{Gdprops5})
Once again, we proceed by induction on $d$. Observe that for $d=1$, we have
\begin{align*}
\sum_{K\leq x} \frac{G_1(K)}{K^s} &= \sum_{K\leq x} \frac{1}{K^s}.
\end{align*}
When $0\leq s <1$, we have that $1/K^s$ is decreasing, and $\sum_{K\leq x} 1/K^s \leq 1+\int_1^x 1/t^s dt$. On the other hand, when $s<0$, we have that $1/K^s$ is increasing, and $\sum_{K\leq x} 1/K^s \leq \int_1^{x+1} 1/t^s dt$. In either case, we have
\[
\sum_{K\leq x} \frac{G_1(K)}{K^s} \ll_s x^{1-s}
\]
which is the desired bound for $d=1$.
Now suppose that for $d\geq 1$ we have $\sum_{K\leq x} G_d(K)/K^s \ll_{s,d} x^{d-s}(\log x)^{d-1}$ for all $x>e$ and $s<d$. By the complete multiplicativity of ${\rm Id}^{-s}$ and the recursive formula for $G_d$, we can write
\[
G_{d+1}(K)/K^s = ({\rm Id}^{d-s}\ast ({\rm Id}^{-s}\cdot \phi \cdot G_d))(K).
\]
Also note that a Dirichlet product $(f\ast g)(K) = \sum_{j|K} f(j)g(K/j)$ can be seen as a sum over pairs of positive integers $(n,m)$ whose product is $K$, i.e.
\[
(f\ast g)(K) = \sum_{\substack{n,m\\nm=K}}f(n)g(m).
\]
Hence, the sum
\[
\sum_{K\leq x} (f\ast g)(K) = \sum_{\substack{n,m\\nm\leq x}} f(n)g(m) = \sum_{n\leq x} f(n)\sum_{m\leq x/n} g(m)
\]
is a sum over pairs of integers whose product is no greater than $x$. Also notice that $\phi(n)\leq n$ for any positive integer $n$. Thus for any $s<d+1$ and $x>e$, we may write
\begin{align*}
\sum_{K\leq x} \frac{G_{d+1}(K)}{K^s} &= \sum_{K\leq x}({\rm Id}^{d-s}\ast ({\rm Id}^{-s}\cdot \phi \cdot G_d))(K)\\
&= \sum_{n\leq x}\frac{1}{n^{s-d}} \sum_{m\leq x/n}\frac{\phi(m)G_d(m)}{m^s}\\
&\leq \sum_{n\leq x}\frac{1}{n^{s-d}} \sum_{m\leq x/n}\frac{G_d(m)}{m^{s-1}}.
\end{align*}
Then since $s<d+1$, we have $s-1<d$. Also, notice that for $n<x/e$, we have $x/n>e$, so the induction hypothesis applies to sums over $m\leq x/n$ for $n$ in this region. On the other hand, for $n\geq x/e$, we have $x/n \leq e$, so a sum over $m\leq x/n$ is only a sum over the first two terms, $m=1$ and $m=2$, and can thus be bounded by a constant (depending on $s$ and $d$). Hence, we may write
\begin{align*}
\sum_{K\leq x} \frac{G_{d+1}(K)}{K^s} &\leq \sum_{n\leq x}\frac{1}{n^{s-d}} \sum_{m\leq x/n}\frac{G_d(m)}{m^{s-1}}\\
&= \sum_{n< x/e}\frac{1}{n^{s-d}} \sum_{m\leq x/n}\frac{G_d(m)}{m^{s-1}} +\sum_{x/e\leq n\leq x}\frac{1}{n^{s-d}} \sum_{m\leq x/n}\frac{G_d(m)}{m^{s-1}}\\
&\ll_{s,d} \sum_{n< x/e}\frac{1}{n^{s-d}} (x/n)^{d+1-s}\log(x/n)^{d-1} +\sum_{x/e\leq n\leq x}\frac{1}{n^{s-d}}\\
&\ll_{s,d} x^{d+1-s}\sum_{n< x/e}\frac{\log(x/n)^{d-1}}{n} + \sum_{x/e\leq n\leq x}\frac{1}{n^{s-d}}.
\end{align*}
Observe that $\sum_{x/e\leq n\leq x}\frac{1}{n^{s-d}} \ll_{s,d} x^{d+1-s}$ (this can be seen with a calculation similar to that of the base case). On the other hand, the function $\log(x/t)^{d-1}/t$ is positive and decreasing in the region $(1, x/e)$, so we may bound the sum by the first term plus the corresponding integral:
\begin{align*}
\sum_{n< x/e}\frac{\log(x/n)^{d-1}}{n} \leq (\log x)^{d-1} + \int_1^{x/e} \frac{\log(x/t)^{d-1}}{t} dt.
\end{align*}
With the substitution $u=\log(x/t)$, we find that
\[
\int_1^{x/e} \frac{\log(x/t)^{d-1}}{t} dt = \int_1^{\log x}u^{d-1} du = \frac{(\log x)^d -1}{d}.
\]
In total, we have that
\begin{align*}
\sum_{K\leq x} \frac{G_{d+1}(K)}{K^s} &\ll_{s,d} x^{d+1-s} \left(1+(\log x)^{d-1} + (\log x)^d\right)\\
&\ll x^{d+1-s}(\log x)^d
\end{align*}
since $x>e$, and this completes the proof.
\end{proof}
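As an illustration of the bound just proved (in the case $d=2$, $s=0$), the ratio $\sum_{K\leq x}G_2(K)/(x^2\log x)$ can be computed directly; the cutoff $X$ and the constant $1$ below are ad hoc choices for the test, not claims about the optimal implicit constant:

```python
import math

def totients(n):
    # Sieve of Euler's totient values phi(1), ..., phi(n)
    ph = list(range(n + 1))
    for p in range(2, n + 1):
        if ph[p] == p:                 # p is prime
            for k in range(p, n + 1, p):
                ph[k] -= ph[k] // p
    return ph

X = 400
ph = totients(X)
G2 = [0] * (X + 1)
for j in range(1, X + 1):              # G_2(K) = sum over j | K of j * phi(K/j)
    for K in range(j, X + 1, j):
        G2[K] += j * ph[K // j]

S = 0
for x in range(1, X + 1):
    S += G2[x]
    if x % 50 == 0:
        # d = 2, s = 0: the bound predicts S << x^2 log x
        assert S / (x ** 2 * math.log(x)) < 1.0
```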
\end{appendix}
\section{Equidistribution for Arithmetic Sequences Along Abelian Horospherical Flows}\label{sect:EquiArith}
Let $G = {\rm SL}_n(\R)$, $\Gamma = {\rm SL}_n(\Z)$, and $X = \Gamma\backslash G$. Let $U$ be an upper triangular unipotent subgroup of the form
\begin{align}
U=
\left\{\left(\begin{matrix}
I_{m} & \vline & \mbox{\normalfont\Large $\ast$} \\
\hline
0 & \vline & I_{n-m}
\end{matrix}
\right)\right\}\label{eq:U}
\end{align}
for $m<n$. Note that $U\cong \mathbb{R}^d$ as groups for $d=m(n-m)$ under any identification $u({\bf t})$ of ${\bf t}\in\mathbb{R}^d$ with the upper-right block. Recall that the Haar measure on $U$ is the Lebesgue measure on $\mathbb{R}^d$ under this identification, which we normalize so that $u([0,1]^d)$ has unit measure. Observe that $U$ is horospherical with respect to the element
\begin{align*}
a_t = {\rm diag}(\underbrace{e^{t(n-m)/n}, \cdots, e^{t(n-m)/n}}_{m}, \underbrace{e^{-tm/n}, \cdots, e^{-tm/n}}_{n-m})
\end{align*}
for any $t>0$ and that conjugation by $a_t$ scales all entries in the upper-right block of $U$ by $e^{t(n-m)/n}e^{tm/n}=e^t$. Hence, for this choice of $a_t$, we have $B_T = a_{\log T} u([0,1]^d) a_{-\log T} = u([0,T]^d)$. For this reason we will conflate the notation and write $B_T$ for both $[0,T]^d\subseteq\mathbb{R}^d$ and $u([0,T]^d)\subseteq U$.
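The block scaling just described can be checked concretely in the smallest case $n=3$, $m=1$; a minimal numerical sketch (the variable names are ad hoc, and this is an illustration rather than part of the argument):

```python
import numpy as np

n, m, t = 3, 1, 0.7                    # so d = m(n-m) = 2
# a_t = diag(e^{t(n-m)/n} (m times), e^{-tm/n} (n-m times)), det = 1
a_t = np.diag([np.exp(t * (n - m) / n)] * m
              + [np.exp(-t * m / n)] * (n - m))

def u(s):
    # identity matrix with s placed in the upper-right m x (n-m) block
    M = np.eye(n)
    M[:m, m:] = np.reshape(s, (m, n - m))
    return M

s = np.array([0.3, -1.2])
assert abs(np.linalg.det(a_t) - 1.0) < 1e-9
# conjugation by a_t scales the block by e^{t(n-m)/n} * e^{tm/n} = e^t
assert np.allclose(a_t @ u(s) @ np.linalg.inv(a_t), u(np.exp(t) * s))
```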
Let $\psi$ be an additive character of $U$ (so $\psi({\bf t}) = e^{i{\bf a}\cdot {\bf t}}$ for some ${\bf a}\in\mathbb{R}^d$). Define a measure $\nu_T$ and a (complex) measure $\mu_{T,\psi}$ on $X$ via duality: for $f\in C_c^\infty(X)$ let
\begin{align*}
\int_X f d\nu_T = \nu_T(f) := \frac{1}{|B_T|}\int_{B_T} f(x_0 u({\bf t})) d{\bf t}
\end{align*}
and
\begin{align*}
\int_X f d\mu_{T,\psi}=\mu_{T,\psi}(f) := \frac{1}{|B_T|}\int_{B_T} \psi({\bf t})\left(f(x_0 u({\bf t}))-\int_X f dm_X\right) d{\bf t}.
\end{align*}
Our main goal in this section is to obtain an effective rate of equidistribution along (multivariate) arithmetic sequences of inputs for the right action of $U$ on $X$. To do this, we first present the following lemma, the proof of which closely follows the proof of Lemma 3.1 in \cite{VenkSparse} for the case of $G={\rm SL}_2(\R)$ and $\Gamma$ cocompact.
\begin{lem}\label{lem:VenkLem}
Let $x_0 =\Gamma g_0\in X $ satisfy (\ref{eq:dio1}) for $T>R>C$. Then there exists $b>0$
such that for any $f\in C^\infty_c(X)$ and additive character $\psi$,
\begin{align*}
\left|\mu_{T,\psi}(f)\right|\ll R^{- b} \sob{\infty}{\ell}(f)
\end{align*}
where $\ell$ is as in Theorem \ref{thm:equidist}.
\end{lem}
\begin{rmk}
As noted in \cite{VenkSparse}, the significance of this lemma is that the implicit constant is independent of the choice of $\psi$. The bound can be shown for highly oscillatory $\psi$ using integration by parts and for nearly constant $\psi$ using equidistribution of the horospherical flow directly; thus, the lemma is most significant for $\psi$ of moderate oscillation.
The proof will use our effective equidistribution result as well as a variety of technical integral manipulations that nonetheless do not require any heavy machinery.
\end{rmk}
\begin{proof}
Let $1\leq H\leq T$ and define a complex measure $\sigma_H$ on $U$ by
\begin{align*}
\int_U g d\sigma_H = \sigma_H(g) := \frac{1}{|B_H|}\int_{B_H} \overline\psi({\bf t})g(u({\bf t}))d{\bf t}
\end{align*}
for $g \in C_c^\infty(U)$.
Let $f\ast\sigma_H$ be the right convolution of $f$ by $\sigma_H$, i.e., for $x\in X$
\begin{align*}
f\ast \sigma_H (x) &= \int f(x u({\bf t})^{-1})d\sigma_H({\bf t})\\
&=\frac{1}{|B_H|} \int_{B_H} \overline\psi ({\bf t})f(x u({\bf t})^{-1}) d{\bf t}.
\end{align*}
Notice that by switching the order of integration (one may verify that the conditions of Fubini's theorem are satisfied) and using invariance of the Haar measure, we have
\begin{align*}
\int_X f\ast\sigma_H dm_X &= \int_X \frac{1}{|B_H|} \int_{B_H} \overline\psi ({\bf t})f(x u({\bf t})^{-1}) d{\bf t} \hspace{.03in}dm_X(x)\\
&= \frac{1}{|B_H|} \int_{B_H} \overline\psi ({\bf t})\left(\int_Xf(x u({\bf t})^{-1}) dm_X(x)\right)d{\bf t}\\
&= \frac{1}{|B_H|} \int_{B_H} \overline\psi ({\bf t})\left(\int_X f dm_X \right)d{\bf t}.
\end{align*}
Hence,
\begin{align*}
\mu_{T,\psi}(f\ast \sigma_H) &= \frac{1}{|B_T|}\int_{B_T}\psi({\bf t})\left( f\ast\sigma_H(x_0u({\bf t})) - \int_X f\ast\sigma_H dm_X \right)d{\bf t}\\
&= \frac{1}{|B_T|}\int_{B_T}\psi({\bf t})\frac{1}{|B_H|}\int_{B_H} \overline\psi({\bf s})\left(f(x_0 u({\bf t})u({\bf s})^{-1})-\int_X f dm_X\right)d{\bf s}d{\bf t}\\
&= \frac{1}{|B_T||B_H|}\int_{B_T}\int_{B_H}\psi({\bf t-s})\left(f(x_0 u({\bf t-s}))-\int_X f dm_X\right)d{\bf s}d{\bf t}
\end{align*}
since $\overline\psi ({\bf s}) =\psi (-{\bf s})$ and $U\cong \mathbb{R}^d$. Now by switching the order of integration and applying a change of variables, we get
\begin{align*}
\mu_{T,\psi}(f\ast \sigma_H) &= \frac{1}{|B_T||B_H|}\int_{B_H}\int_{B_T- {\bf s}}\psi({\bf t})\left(f(x_0 u({\bf t}))-\int_X f dm_X\right)d{\bf t}d{\bf s}.
\end{align*}
But we may also write
\begin{align*}
\mu_{T,\psi}(f) &= \frac{1}{|B_T|}\int_{B_T} \psi({\bf t})\left(f(x_0 u({\bf t}))-\int_X f dm_X\right) d{\bf t}\\
&=\frac{1}{|B_T||B_H|}\int_{B_H}\int_{B_T} \psi({\bf t})\left(f(x_0 u({\bf t}))-\int_X f dm_X\right) d{\bf t}d{\bf s}.
\end{align*}
Thus
\begin{align*}
&|\mu_{T,\psi}(f)-\mu_{T,\psi}(f\ast \sigma_H)|\\
&\leq \frac{1}{|B_T||B_H|}\int_{B_H}\int_{B_T\triangle (B_T-{\bf s})} \left| f(x_0 u({\bf t}))-\int_X f dm_X\right| d{\bf t}d{\bf s}\\
&\ll \frac{1}{|B_T||B_H|}\int_{B_H} |B_T\triangle (B_T-{\bf s})| \sob{\infty}{0}(f) d{\bf s}.
\end{align*}
But notice that $B_T\triangle (B_T-{\bf s})$ is simply the symmetric difference of two shifted cubes, the measure of which will be maximized when ${\bf s} = (H, \cdots, H)$ (see Figure \ref{fig:symmdiff}). Hence,
\begin{align*}
|B_T\triangle (B_T-{\bf s})| &\leq 2 (T^d - (T-H)^d)\\
&= 2\left( \binom{d}{1} T^{d-1}H - \binom{d}{2} T^{d-2}H^2 + \cdots + (-1)^{d+1} H^d\right)\\
&\ll T^{d-1}H
\end{align*}
since $H\leq T$ implies that the leading term dominates.
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=.8]
\draw [-] (0,0) -- (5,0);
\draw [-] (0,0) -- (0,5);
\draw [-] (0,5) -- (5,5);
\draw [-] (5,0) -- (5,5);
\draw [-] (1,1) -- (6,1);
\draw [-] (1,1) -- (1,6);
\draw [-] (1,6) -- (6,6);
\draw [-] (6,1) -- (6,6);
\draw[fill=gray!20] (0,0) -- (0,5) -- (1,5) -- (1,1) -- (5,1) -- (5,0) -- cycle;
\draw[fill=gray!20] (6,6) -- (6,1) -- (5,1) -- (5,5) -- (1,5) -- (1,6) -- cycle;
\draw [red,decorate,decoration={brace,amplitude=10pt,mirror}]
(0,0) -- (5,0) node [red,midway,below,yshift=-10pt]
{$T$};
\draw [red,decorate,decoration={brace,amplitude=10pt}]
(0,0) -- (0,5) node [red,midway,left,xshift=-10pt]
{$T$};
\draw [red,decorate,decoration={brace,amplitude=5pt,mirror}]
(5,1) -- (6,1) node [red,midway,below,yshift=-5pt]
{$H$};
\draw [red,decorate,decoration={brace,amplitude=5pt}]
(1,5) -- (1,6) node [red,midway,left,xshift=-5pt]
{$H$};
\draw [red,decorate,decoration={brace,amplitude=7pt,mirror}]
(1,1) -- (1,5) node [red,midway,right,xshift=7pt]
{$T-H$};
\draw [red,decorate,decoration={brace,amplitude=7pt}]
(1,1) -- (5,1) node [red,midway,above,yshift=5pt]
{$T-H$};
\end{tikzpicture}
\caption{The symmetric difference between $B_T$ and $B_T - (H,\cdots,H)$.}
\label{fig:symmdiff}
\end{center}
\end{figure}
It follows that $\int_{B_H} |B_T\triangle (B_T-{\bf s})| d{\bf s} \ll T^{d-1} H^{d+1}$. Thus,
\begin{align}
|\mu_{T,\psi}(f)-\mu_{T,\psi}(f\ast \sigma_H)| \ll \frac{T^{d-1}H^{d+1}}{|B_T||B_H|} \sob{\infty}{0}(f) = \frac{H}{T}\sob{\infty}{0}(f).\label{eq:H/T}
\end{align}
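The cube geometry behind this bound is elementary enough to check numerically: for a shift ${\bf s}\in[0,H]^d$ with $H\leq T$, the overlap of the two cubes is $\prod_i(T-s_i)$, so the symmetric difference has measure $2(T^d-\prod_i(T-s_i))\leq 2(T^d-(T-H)^d)$. A Monte Carlo sketch, with all numerical values ad hoc:

```python
import math, random

random.seed(0)
T, H, d = 4.0, 1.0, 2
s = [random.uniform(0, H) for _ in range(d)]

# |B_T ∩ (B_T - s)| = prod_i (T - s_i), so the symmetric difference is exactly:
exact = 2 * (T ** d - math.prod(T - si for si in s))
# ... and it is maximized at s = (H, ..., H):
assert exact <= 2 * (T ** d - (T - H) ** d) + 1e-9

# Monte Carlo estimate over a box containing both cubes
N = 200_000
hits = 0
for _ in range(N):
    p = [random.uniform(-H, T) for _ in range(d)]
    in_cube = all(0 <= x <= T for x in p)
    in_shifted = all(-si <= x <= T - si for x, si in zip(p, s))
    hits += in_cube != in_shifted
est = hits / N * (T + H) ** d
assert abs(est - exact) < 0.2          # statistical tolerance
```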
Now consider
\begin{align*}
|\mu_{T,\psi}(f\ast \sigma_H)|^2 &= \left|\frac{1}{|B_T|}\int_{B_T} \psi({\bf t})\left(f\ast\sigma_H(x_0u({\bf t}))-\int_X f\ast \sigma_H dm_X\right)d{\bf t}\right|^2\\
&\leq \frac{1}{|B_T|^2}\left(\int_{B_T} \left|f\ast\sigma_H(x_0u({\bf t}))-\int_X f\ast \sigma_H dm_X\right|d{\bf t}\right)^2\\
&= \frac{1}{|B_T|^2}\left<1, \left|f\ast\sigma_H(x_0u(\cdot))-\int_X f\ast \sigma_H dm_X\right| \right>_{L^2(B_T)}^2.
\end{align*}
By Cauchy-Schwarz, we know that
\begin{align*}
|\mu_{T,\psi}(f\ast \sigma_H)|^2 &\leq \frac{1}{|B_T|^2} \nn{1}_{L^2(B_T)}^2 \nn{f\ast\sigma_H(x_0u(\cdot))-\int_X f\ast \sigma_H dm_X}_{L^2(B_T)}^2.
\end{align*}
Now, $\nn{1}_{L^2(B_T)}^2 = \int_{B_T} 1^2 d{\bf t} = |B_T|$, and
\begin{align*}
\nn{f\ast\sigma_H(x_0u(\cdot))-\int_X f\ast \sigma_H dm_X}_{L^2(B_T)}^2 &= \int_{B_T} \left|f\ast\sigma_H(x_0u({\bf t}))-\int_X f\ast \sigma_H dm_X\right|^2 d{\bf t}\\
&=|B_T|\nu_T\left(\left|f\ast\sigma_H-\int_X f\ast \sigma_H dm_X\right|^2\right)
\end{align*}
which shows that
\begin{align}
|\mu_{T,\psi}(f\ast \sigma_H)|^2 \leq \nu_T\left(\left|f\ast\sigma_H-\int_X f\ast \sigma_H dm_X\right|^2\right).\label{eq:munu}
\end{align}
Hence, by (\ref{eq:H/T}) and (\ref{eq:munu}), we have
\begin{align}
|\mu_{T,\psi}(f)| &\leq |\mu_{T,\psi}(f)-\mu_{T,\psi}(f\ast \sigma_H)| + |\mu_{T,\psi}(f\ast \sigma_H)|\nonumber\\
&\ll \frac{H}{T}\sob{\infty}{0}(f) + \nu_T\left(\left|f\ast\sigma_H-\int_X f\ast \sigma_H dm_X\right|^2\right)^{1/2}. \label{eq:muTpsi}
\end{align}
To estimate $\nu_T\left(\left|f\ast\sigma_H-\int_X f\ast \sigma_H dm_X\right|^2\right)$, observe that
\begin{align*}
&\left|f\ast\sigma_H(x)-\int_X f\ast \sigma_H dm_X\right|^2\\
&= \left|\frac{1}{|B_H|}\int_{B_H} \overline\psi({\bf s})\left( [u({\bf s})f](x)-\int_X f dm_X \right) d{\bf s}\right|^2\\
&=\frac{1}{|B_H|^2}\left(\int_{B_H} \overline\psi({\bf s_1})\left([u({\bf s_1})f](x)-\int_X f dm_X \right)d{\bf s_1}\right)\\
&\hspace{.55in}\cdot\left(\int_{B_H} \psi({\bf s_2})\left([u({\bf s_2})\overline f](x)-\int_X \overline f dm_X\right) d{\bf s_2}\right)\\
&= \frac{1}{|B_H|^2}\int_{B_H}\int_{B_H}\psi({\bf s_2-s_1}) \left[\left([u({\bf s_1})f](x)-\int_X f dm_X \right)\hspace{-4pt}\left([u({\bf s_2})\overline f](x)-\int_X \overline f dm_X\right)\right]d{\bf s_1}d{\bf s_2}.
\end{align*}
When we apply $\nu_T$ to this, we can change the order of integration so that the innermost integral is over $B_T$, with the character $\psi({\bf s_2-s_1})$ outside this integral. We may then integrate separately over the four terms we get by expanding the bracketed product above. That is,
\begin{align}
&\nu_T\left(\left|f\ast\sigma_H-\int_X f\ast \sigma_H dm_X\right|^2\right)\nonumber\\
&=\frac{1}{|B_H|^2}\int_{B_H}\int_{B_H}\psi({\bf s_2-s_1}) \nu_T\left(\left(u({\bf s_1})f-\int_X f dm_X \right)\left(u({\bf s_2})\overline f-\int_X \overline f dm_X\right)\right)d{\bf s_1}d{\bf s_2}\nonumber\\
&=\frac{1}{|B_H|^2}\int_{B_H}\int_{B_H}\psi({\bf s_2-s_1}) F({\bf s_1, s_2}) d{\bf s_1}d{\bf s_2}\label{eq:introF}
\end{align}
where
\begin{align}
F({\bf s_1, s_2}) &= \nu_T(u({\bf s_1})f\cdot u({\bf s_2})\overline f)\nonumber\\
&- \nu_T(u({\bf s_1})f) \int_X \overline f dm_X \nonumber\\
&- \nu_T(u({\bf s_2})\overline f) \int_X f dm_X\label{eq:F}\\
&+ \left|\int_X f dm_X\right|^2.\nonumber
\end{align}
Now from Theorem \ref{thm:equidist} we know that for arbitrary $\tilde f\in C_c^\infty(X)$ and $x_0$ satisfying the Diophantine basepoint condition (\ref{eq:dio1}) with $T>R>C$, we have
\begin{align*}
\left|\nu_T(\tilde f)-\int_X \tilde f dm_X\right| = \left|\frac{1}{|B_T|}\int_{B_T} \tilde f(x_0 u({\bf t})) d{\bf t}-\int_X \tilde f dm_X\right| \ll R^{-\gamma}\sob{\infty}{\ell}(\tilde f),
\end{align*}
that is,
\begin{align}
\nu_T(\tilde f) = \int_X \tilde f dm_X +\mathcal{O}(R^{-\gamma}\sob{\infty}{\ell}(\tilde f)).\label{eq:equidist}
\end{align}
Applying this to the function $\tilde f = u({\bf s_1})f$, we find that
\begin{align*}
\nu_T(u({\bf s_1})f) = \int_X u({\bf s_1})f dm_X + \mathcal{O}(R^{-\gamma}\sob{\infty}{\ell}(u({\bf s_1})f)).
\end{align*}
But since $m_X$ is the Haar measure,
\begin{align*}
\int_X u({\bf s_1})f dm_X = \int_X f(xu({\bf s_1})^{-1}) dm_X(x) = \int_X f dm_X.
\end{align*}
Thus
\begin{align*}
\nu_T(u({\bf s_1})f) \int_X \overline f dm_X = \left|\int_X f dm_X\right|^2 + \mathcal{O}\left(R^{-\gamma}\sob{\infty}{\ell}(u({\bf s_1})f)\left|\int_X \overline f dm_X\right|\right).
\end{align*}
Furthermore, from Sobolev norm property (\ref{Sob3}), we know that for $f\in C^\infty_c(X)$ and $h\in G$, we have $\sob{\infty}{\ell}(hf) \ll_\ell \nn{h}^\ell \sob{\infty}{\ell}(f)$, where $\nn{h}$ is the operator norm of ${\rm Ad}_{h^{-1}}$. Since the entries of $u({\bf s})^{-1}$ are bounded by $\max(1,|{\bf s}|)$, we have $\nn{u({\bf s})} \ll \max(1,|{\bf s}|)^2$. Thus for ${\bf s_1} \in [0,H]$ with $H\geq 1$, $\sob{\infty}{\ell}(u({\bf s_1})f) \ll H^{2\ell}\sob{\infty}{\ell}(f)$. Combining this with the bound $\left|\int \overline f dm_X\right| \ll \sob{\infty}{0}(f) \ll \sob{\infty}{\ell}(f)$, we find that
\begin{align*}
\nu_T(u({\bf s_1})f) \int_X \overline f dm_X = \left|\int_X f dm_X\right|^2 + \mathcal{O}(R^{-\gamma}H^{2\ell}\sob{\infty}{\ell}(f)^2).
\end{align*}
Likewise,
\begin{align*}
\nu_T(u({\bf s_2})\overline f) \int_X f dm_X = \left|\int_X f dm_X\right|^2 + \mathcal{O}(R^{-\gamma}H^{2\ell}\sob{\infty}{\ell}(f)^2).
\end{align*}
Therefore, (\ref{eq:F}) becomes simply
\begin{align*}
F({\bf s_1, s_2}) =\nu_T(u({\bf s_1})f\cdot u({\bf s_2})\overline f)-\left|\int_X f dm_X\right|^2 + \mathcal{O}(R^{-\gamma}H^{2\ell}\sob{\infty}{\ell}(f)^2).
\end{align*}
Substituting this back into (\ref{eq:introF}), we conclude that
\begin{align}
&\nu_T\left(\left|f\ast\sigma_H(x)-\int_X f\ast \sigma_H dm_X\right|^2\right)\nonumber\\
&\ll \frac{1}{|B_H|^2}\int_{B_H}\int_{B_H}\left|\nu_T(u({\bf s_1})f\cdot u({\bf s_2})\overline f)-\left|\int_X f dm_X\right|^2 \right|d{\bf s_1}d{\bf s_2}+R^{-\gamma}H^{2\ell}\sob{\infty}{\ell}(f)^2.\label{eq:backtointroF}
\end{align}
But now notice that
\begin{align*}
\int_X u({\bf s_1})f\cdot u({\bf s_2})\overline f dm_X = \left<u({\bf s_1})f,u({\bf s_2})f\right>_{L^2(X)} = \left<u({\bf s_1-s_2})f,f\right>_{L^2(X)}
\end{align*}
so by the triangle inequality, we can estimate
\begin{align}
\left|\nu_T(u({\bf s_1})f\cdot u({\bf s_2})\overline f)-\left|\int_X f dm_X\right|^2 \right| \leq& \left|\nu_T(u({\bf s_1})f\cdot u({\bf s_2})\overline f)-\int_X u({\bf s_1})f\cdot u({\bf s_2})\overline f dm_X \right|\nonumber\\
&+ \left|\left<u({\bf s_1-s_2})f,f\right>_{L^2(X)}-\left|\int_X f dm_X\right|^2\right|.\label{eq:longtriangleineq}
\end{align}
Again, by our equidistribution result in (\ref{eq:equidist}), we know that
\begin{align}
\left|\nu_T(u({\bf s_1})f\cdot u({\bf s_2})\overline f)-\int_X u({\bf s_1})f\cdot u({\bf s_2})\overline f dm_X \right|\ll R^{-\gamma}\sob{\infty}{\ell}(u({\bf s_1})f\cdot u({\bf s_2})\overline f)\label{eq:oddequidist}
\end{align}
and by properties (\ref{Sob2}) and (\ref{Sob3}) of Sobolev norms, we have
\begin{align}
\sob{\infty}{\ell}(u({\bf s_1})f\cdot u({\bf s_2})\overline f) \ll \sob{\infty}{\ell}(u({\bf s_1})f) \sob{\infty}{\ell}(u({\bf s_2})f) \ll H^{4\ell} \sob{\infty}{\ell}(f)^2\label{eq:Sobbd}
\end{align}
for ${\bf s_1, s_2}\in [0,H]$. Thus, from (\ref{eq:Sobbd}), (\ref{eq:oddequidist}), and (\ref{eq:longtriangleineq}), equation (\ref{eq:backtointroF}) becomes
\begin{align}
&\nu_T\left(\left|f\ast\sigma_H(x)-\int_X f\ast \sigma_H dm_X\right|^2\right) \nonumber\\
&\ll \frac{1}{|B_H|^2}\int_{B_H}\int_{B_H}\left|\left<u({\bf s_1-s_2})f,f\right>_{L^2(X)}-\left|\int_X f dm_X\right|^2\right|d{\bf s_1}d{\bf s_2}+ R^{-\gamma}H^{4\ell}\sob{\infty}{\ell}(f)^2.\label{eq:lastest}
\end{align}
Now from Corollary \ref{cor:expmixing} (\ref{umtxcoeff}), we know there exists $\beta>0$ such that for any ${\bf s}\in \mathbb{R}^d$,
\begin{align}
\left|\left<u({\bf s})f,f\right>_{L^2(X)}-\left|\int_X f dm_X\right|^2\right| \ll \max(1,|{\bf s}|)^{-\beta}\sob{\infty}{\ell}(f)^2.\label{eq:spectralGap}
\end{align}
Then for ${\bf s} = {\bf s_1}-{\bf s_2}$, we have the following problem: We want to bound the integral in (\ref{eq:lastest}) by a power of $H$, but for $({\bf s_1},{\bf s_2})$ close to the diagonal in $B_H\times B_H$ we cannot do better than a constant times the Sobolev norm of $f$ in (\ref{eq:spectralGap}). We will address this by integrating separately over a neighborhood of the diagonal that has small measure (depending on $H$) and away from the diagonal where $\max(1,|{\bf s_1} - {\bf s_2}|)$ is dominated by $H$.
To make this precise, let $D:=\{ ({\bf s_1,s_2})\in B_H \times B_H \hspace{2pt}\vert\hspace{3pt} {\bf s_1 = s_2} \}$ be the diagonal of $ B_H \times B_H $ and define $D_\epsilon:=\{ ({\bf s_1,s_2})\in B_H\times B_H \hspace{2pt}\vert\hspace{3pt} |{\bf s_1-s_2}|<\epsilon \}$.
Notice that $D$ is a $d$-dimensional subset of $\mathbb{R}^{2d}$ with diameter $\sqrt{2d}H$. Furthermore, any point satisfying $|{\bf s_1 - s_2}| = \epsilon$ is distance $\epsilon/\sqrt{2}$ from the diagonal, so $D_\epsilon$ is an ($\epsilon/\sqrt{2}$)-neighborhood of $D$ sitting inside $[0,H]^{2d}$. Thus $D_\epsilon$ is contained within a box in $\mathbb{R}^{2d}$ with $d$ side-lengths of $\sqrt{2d}H$ and $d$ side-lengths of $2\epsilon/\sqrt{2}$, so
\begin{align*}
|D_\epsilon| \ll H^d\epsilon^d
\end{align*}
(see Figure \ref{fig:nbhdofdiag}). In particular, if $\epsilon = H^\zeta$ (for $0<\zeta<1$ to be determined), then
\begin{align*}
\left|\{ ({\bf s_1}, {\bf s_2})\in B_H\times B_H \hspace{2pt}\vert\hspace{3pt} |{\bf s_1-s_2}|<H^\zeta \}\right| \ll H^{d(1+\zeta)}.
\end{align*}
In this region, the factor $\max(1,|{\bf s_1-s_2}|)^{-\beta}$ in (\ref{eq:spectralGap}) is bounded by 1, so when we integrate over this region and divide by $|B_H|^2 = H^{2d}$ (as we are doing in (\ref{eq:lastest})), we get a term of order $H^{d(\zeta-1)}\sob{\infty}{\ell}(f)^2$.
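For $d=1$ this measure computation is exact: the near-diagonal set $\{|s_1-s_2|<\epsilon\}$ in $[0,H]^2$ has measure $H^2-(H-\epsilon)^2\leq 2H\epsilon$. A quick Monte Carlo sketch confirms it (values ad hoc):

```python
import random

random.seed(1)
H, eps = 5.0, 0.75
exact = H ** 2 - (H - eps) ** 2        # area of {|s1 - s2| < eps} in [0,H]^2
assert exact <= 2 * H * eps            # consistent with |D_eps| << H^d eps^d

N = 200_000
hits = sum(abs(random.uniform(0, H) - random.uniform(0, H)) < eps
           for _ in range(N))
assert abs(hits / N * H ** 2 - exact) < 0.3   # statistical tolerance
```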
\begin{figure}
\begin{center}
\begin{tikzpicture}
\draw[fill=gray!20] (0,0) -- (1,0) -- (5,4) -- (5,5) -- (4,5) -- (0,1) -- cycle;
\draw [-] (0,0) -- (5,0);
\draw [-] (0,0) -- (0,5);
\draw [-] (0,5) -- (5,5);
\draw [-] (5,0) -- (5,5);
\draw [-,red] (0,0) -- (5,5);
\draw [-] (-.5,.5) -- (.5,-.5);
\draw [-] (4.5,5.5) -- (5.5,4.5);
\draw [-] (-.5,.5) -- (4.5,5.5);
\draw [-] (.5,-.5) -- (5.5,4.5);
\draw [-, red] (2.5,2.5) -- (3,2);
\draw [-, dashed,gray] (2,2) -- (2,0);
\draw [-, dashed,gray] (3,2) -- (3,0);
\draw [-, dashed,gray] (3,2) -- (0,2);
\draw [red,decorate,decoration={brace,amplitude=10pt}]
(-.5,.5) -- (4.5,5.5) node [red,midway,xshift=-17pt,yshift=15pt]
{$\sqrt{2d}H$};
\draw [decorate,decoration={brace,amplitude=5pt,mirror}]
(2,0) -- (3,0);
\node[below] at (4.9,0) {$H$};
\node[left] at (0,4.9) {$H$};
\node[right,red] at (2.5,2.55) {\small $\epsilon/\sqrt{2}$};
\node[below] at (2,0) {\small ${\bf s_2}$};
\node[left] at (0,2) {\small ${\bf s_2}$};
\node[below] at (3.1,0) {\small ${\bf s_1}$};
\node[below] at (2.5,-.1) {\small $\epsilon$};
\node[gray] at (4,4.5) {$D_\epsilon$};
\node[red] at (4,3.6) {$D$};
\end{tikzpicture}
\caption{The set where $|{\bf s_1-s_2}|<\epsilon$ has measure bounded by $H^d\epsilon^d$ in $B_H\times B_H$ (shown here for one-dimensional $U$).}
\label{fig:nbhdofdiag}
\end{center}
\end{figure}
On the other hand, for $|{\bf s_1- s_2}|\geq H^\zeta$,
we can say that
\begin{align}
\left|\left<u({\bf s_1-s_2})f,f\right>_{L^2(X)}-\left|\int_X f dm_X\right|^2\right| &\ll \max(1,|{\bf s_1- s_2}|)^{-\beta}\sob{\infty}{\ell}(f)^2\nonumber\\
&\leq H^{-\zeta\beta}\sob{\infty}{\ell}(f)^2.\nonumber
\end{align}
Hence,
\begin{align}
\frac{1}{|B_H|^2}\int_{B_H}\int_{B_H}\left|\left<u({\bf s_1-s_2})f,f\right>_{L^2(X)}-\left|\int_X f dm_X\right|^2\right| d{\bf s_1}d{\bf s_2} &\ll (H^{-\zeta\beta} + H^{d(\zeta-1)})\sob{\infty}{\ell}(f)^2\nonumber\\
&\ll H^{-d\beta/(d+\beta)}\sob{\infty}{\ell}(f)^2\label{eq:betaBd}
\end{align}
where we have chosen $\zeta=d/(d+\beta)$ to optimize the error.
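This choice of $\zeta$ is exactly the value balancing the two exponents, which can be checked symbolically (the sample values of $d$ and $\beta$ are arbitrary):

```python
from fractions import Fraction

# At zeta = d/(d+beta) the exponents -zeta*beta and d(zeta-1) coincide,
# both equal to -d*beta/(d+beta).
for d in (1, 2, 3, 6):
    for beta in (Fraction(1, 2), Fraction(1), Fraction(5, 3)):
        zeta = Fraction(d) / (d + beta)
        assert -zeta * beta == d * (zeta - 1) == -d * beta / (d + beta)
```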
Together, the bounds in (\ref{eq:lastest}) and (\ref{eq:betaBd}) imply that
\begin{align}
\nu_T\left(\left|f\ast\sigma_H(x)-\int_X f\ast \sigma_H dm_X\right|^2\right) \ll& \left(R^{-\gamma}H^{4\ell} + H^{-d\beta/(d+\beta)}\right)\sob{\infty}{\ell}(f)^2.\label{eq:nuTbd}
\end{align}
Finally, from (\ref{eq:muTpsi}) and (\ref{eq:nuTbd}), we have
\begin{align*}
|\mu_{T,\psi}(f)| \ll \left( T^{-1}H + R^{-\gamma/2}H^{2\ell} + H^{-d\beta/(2d+2\beta)}\right)\sob{\infty}{\ell}(f).
\end{align*}
Since $\gamma<1$ and $R<T$,
the first term is dominated by the second and can be ignored. Thus the decay is optimized when
\begin{align*}
H^{-d\beta/(2d+2\beta)} &= R^{-\gamma/2} H^{2\ell}\\
H &= R^{\gamma(d+\beta)/(4\ell d + 4\ell\beta + d\beta)}.
\end{align*}
This demonstrates the claim that
\begin{align*}
|\mu_{T,\psi}(f)| &\ll R^{-b} \sob{\infty}{\ell}(f)
\end{align*}
where $b = d\beta\gamma/(8d\ell +8\ell\beta+2d\beta)$.
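The exponent arithmetic in this last step can be verified symbolically (the sample parameter values are arbitrary):

```python
from fractions import Fraction as F

# With H = R^e for e = gamma(d+beta)/(4ld + 4l*beta + d*beta), the two
# competing exponents -gamma/2 + 2l*e and -e*d*beta/(2d+2*beta) coincide,
# and both equal -b for b = d*beta*gamma/(8dl + 8l*beta + 2d*beta).
for d, beta, gamma, l in ((1, F(1, 2), F(1, 3), 2), (3, F(2), F(1, 2), 5)):
    e = gamma * (d + beta) / (4 * l * d + 4 * l * beta + d * beta)
    exp1 = -gamma / 2 + 2 * l * e
    exp2 = -e * d * beta / (2 * d + 2 * beta)
    b = d * beta * gamma / (8 * d * l + 8 * l * beta + 2 * d * beta)
    assert exp1 == exp2 == -b
```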
\end{proof}
We will now use this lemma to establish an effective equidistribution bound along multivariate arithmetic sequences.
Let $K_1, \dots, K_d\geq1$ and define $K$ to be the diagonal matrix
\begin{align*}
K := {\rm diag}(K_1,\cdots, K_d) = \begin{pmatrix}K_1&&\\&\ddots&\\&&K_d\end{pmatrix}
\end{align*}
and $|K| = \det(K) = K_1K_2\cdots K_d$.
We want to understand the behavior of
\begin{align}
S:=\sum_{\substack{{\bf k}\in\mathbb{Z}^d\\ K{\bf k}\in B_T}} f(x_0 u(K{\bf k})).\label{eq:Sdef}
\end{align}
For equidistribution, we want this to be close to $\#\{{\bf k}\in\mathbb{Z}^d | K{\bf k}\in B_T\} \int_X f dm_X \approx \frac{T^d}{|K|}\int_X f dm_X$. For $x_0$ satisfying a suitable Diophantine basepoint condition, we have the following result.
\begin{thm}
Let $K = {\rm diag}(K_1,\cdots, K_d)$ with $T\geq K_1, \dots, K_d\geq1$ and determinant $|K|$. Then for all $x_0\in X$ satisfying (\ref{eq:dio1}) with $T>R>C_K$, we have
\begin{align*}
\left|\sum_{\substack{{\bf k}\in\mathbb{Z}^d\\ K{\bf k}\in B_T}} f(x_0 u(K{\bf k}))-\frac{T^d}{|K|}\int_X f dm_X\right| \ll \left(T^d R^{-b/(d+1)} |K|^{-d/(d+1)} +\frac{T^{d-1}\max_i K_i}{|K|}\right)\sob{\infty}{\ell}(f)
\end{align*}
where $C_K:=\max(C, (2/\min K_i)^{(d+1)/b}|K|^{1/b})$ with $C$ and $\ell$ as in Theorem \ref{thm:equidist}.\label{thm:arithequidist}
\end{thm}
\begin{proof}
Let $\delta>0$ be small (to be determined) and define the single-variable hat function
\begin{align*}
g_\delta(t) := \max(\delta^{-2}(\delta-|t|), 0)
\end{align*}
for $t\in\mathbb{R}$ and (through slight abuse of notation) the multivariable function
\begin{align*}
g_\delta({\bf t}) := g_\delta(t_1)\cdots g_\delta(t_d)
\end{align*}
for ${\bf t} = (t_1, \cdots, t_d) \in \mathbb{R}^d$. Notice that $\int_{\mathbb{R}^d} g_\delta ({\bf t}) d{\bf t} = 1$ and $\text{supp } (g_\delta) \subseteq [-\delta, \delta]^d$.
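In one variable, $g_\delta$ is the triangle of height $\delta^{-1}$ over $[-\delta,\delta]$, so the normalization is just the triangle area $\frac12\cdot 2\delta\cdot\delta^{-1}=1$. A quick numerical confirmation (the value of $\delta$ is arbitrary):

```python
# Trapezoid-rule check that the 1-D hat function integrates to 1 and is
# supported in [-delta, delta]; the d-variable version is a product of these.
delta = 0.3
g = lambda t: max(delta ** -2 * (delta - abs(t)), 0.0)

N = 100_000
h = 2.0 / N
xs = [-1.0 + i * h for i in range(N + 1)]
integral = sum(0.5 * (g(a) + g(b)) * h for a, b in zip(xs, xs[1:]))
assert abs(integral - 1.0) < 1e-6
assert g(delta + 1e-9) == 0.0 and g(-2 * delta) == 0.0
```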
\begin{comment}
\begin{figure}
\begin{center}
\begin{tikzpicture}
\draw [<->] (-2,0) -- (2,0);
\draw [->] (0,0) -- (0,3);
\draw [red] [-] (-1,0) -- (0,2.5)--(1,0);
\node [right] at (2,0) {$t$};
\node [red] [right] at (.5,1.5) {$g_\delta(t)$};
\node [right] at (0,2.5) {$1/\delta$};
\node [below] at (-1.1,0) {$-\delta$};
\node [below] at (1,0) {$\delta$};
\draw [-] (-.1,2.5)--(.1, 2.5);
\draw [-] (-1,-.1)--(-1, .1);
\draw [-] (1, -.1)--(1, .1);
\end{tikzpicture}
\caption{The single-variable hat function $g_\delta(t)$.}
\end{center}
\end{figure}
\end{comment}
Define an approximation to the sum $S$ by
\begin{align}
S_{{\rm approx}} := \int_{B_T} \left(\sum_{{\bf k}\in\mathbb{Z}^d} g_\delta({\bf t}-K{\bf k})\right) f(x_0 u({\bf t}))d{\bf t}.\label{eq:Sapproxdef}
\end{align}
That is, instead of averaging $f$ over the lattice points of $K\mathbb{Z}^d$, we average over small neighborhoods around the lattice points using the bump function $g_\delta$, since $\sum_{{\bf k}} g_\delta({\bf t}-K{\bf k})$ is supported on a disjoint union of $\delta$-cubes centered around the points of $K\mathbb{Z}^d$ (that is, so long as $\delta < \min_i K_i/2$).
We want to show that $S_{\rm approx}$ can be written
\begin{align}
S_{\rm approx} = \left( \sum_{\substack{{\bf k}\in\mathbb{Z}^d\\ K{\bf k}\in B_T}} \int_{[-\delta, \delta]^d + K{\bf k}} g_\delta({\bf t}-K{\bf k}) f(x_0u({\bf t}))d{\bf t}\right) + r(T,K,f,d)\label{eq:Sapproxapprox}
\end{align}
where $r(T,K,f,d)$ is an error term depending on $T$, $K_1, \cdots, K_d$, $f$, and the dimension $d$. To see this, observe that in both (\ref{eq:Sapproxdef}) and (\ref{eq:Sapproxapprox}) we are integrating $f$ against a sum of bump functions supported on a disjoint union of $\delta$-cubes centered at the lattice points of $K\mathbb{Z}^d$. However, in (\ref{eq:Sapproxdef}) we are integrating over the region shaded in red in Figure \ref{fig:suminterror}, whereas in (\ref{eq:Sapproxapprox}) we are integrating over the region shaded in gray (that is, we are only integrating against the bump functions whose centers lie in $B_T$).
\begin{figure}
\begin{center}
\begin{tikzpicture}
\foreach \Point in {(0,0),(2.9,0),(5.8,0),
(0,2.5),(2.9,2.5), (5.8,2.5),
(0,5), (2.9,5),(5.8,5),
(0,7.5),(2.9,7.5),(5.8,7.5)}
{
\begin{scope}
\clip (0,0) rectangle (8,8);
\draw[pattern=north east lines, pattern color=red!30] \Point +(-1,-1) rectangle +(1,1);
\end{scope}
\draw[pattern=north west lines, pattern color=gray!60] \Point +(-1,-1) rectangle +(1,1);
\draw \Point +(-1,-1) rectangle +(1,1) ;
\fill \Point circle[radius=2pt];
}
\foreach \Point in {(8.7,0),(8.7,2.5),(8.7,5),(8.7,7.5)}
{
\begin{scope}
\clip (0,0) rectangle (8,8);
\draw[pattern=north east lines, pattern color=red!30] \Point +(-1,-1) rectangle +(1,1);
\end{scope}
\fill \Point circle[radius=2pt];
\draw \Point +(-1,-1) rectangle +(1,1) ;
}
\draw [-] (0,0) -- (8,0);
\draw [-] (0,0) -- (0,8);
\draw [-] (0,8) -- (8,8);
\draw [-] (8,0) -- (8,8);
\draw [red,decorate,decoration={brace,amplitude=10pt,mirror}]
(0,0) -- (2.9,0) node [midway,yshift=-17pt]
{$K_1$};
\draw [red,decorate,decoration={brace,amplitude=10pt}]
(0,0) -- (0,2.5) node [midway,xshift=-18pt]
{$K_2$};
\draw [red,decorate,decoration={brace,amplitude=7pt,mirror}]
(2.9,0) -- (3.9,0) node [midway,yshift=-13pt]
{$\delta$};
\draw [red,decorate,decoration={brace,amplitude=7pt}]
(0,2.5) -- (0,3.5) node [midway,xshift=-13pt]
{$\delta$};
\draw [red,decorate,decoration={brace,amplitude=10pt,aspect=.55}]
(0,8) -- (8,8) node [midway,yshift=16pt,xshift=12pt]
{$T$};
\draw [red,decorate,decoration={brace,mirror,amplitude=10pt,aspect=.47}]
(8,0) -- (8,8) node [midway,yshift=-7pt,xshift=17pt]
{$T$};
\end{tikzpicture}
\caption{The area shaded in red indicates the region over which we are integrating in the definition of $S_{\rm approx}$, whereas the area shaded in gray represents the region over which we are integrating in our estimate of $S_{\rm approx}$ given in (\ref{eq:Sapproxapprox}). The difference between the two integrals can be bounded by the number of $\delta$-cubes intersecting the boundary of $B_T$ multiplied by the supremum of $f$.}
\label{fig:suminterror}
\end{center}
\end{figure}
Thus all of the possible error comes from integrating over those $\delta$-cubes that intersect the boundary of $B_T$. Consider a face of $B_T$ that is orthogonal to the $i^{\rm th}$ standard basis vector. It will intersect at most $T/K_j + \mathcal{O}(1)$ of these cubes along an edge in the $j^{\rm th}$ direction for $j\neq i$. Hence, the total number of cubes that face intersects can be bounded by
\begin{align*}
\frac{T}{K_1}\cdots\frac{T}{K_{i-1}}\cdot\frac{T}{K_{i+1}}\cdots\frac{T}{K_d} = T^{d-1}\frac{K_i}{|K|}.
\end{align*}
Since $g_\delta$ integrates to one, the error that results from integrating over one of these $\delta$-cubes is bounded by $\sob{\infty}{0}(f)$. Then considering all the faces of $B_T$, we see that the error satisfies
\begin{align*}
|r(T,K,f,d)| \ll \sob{\infty}{0}(f)\sum_{i=1}^d T^{d-1}\frac{K_i}{|K|} \ll T^{d-1}\frac{\max_i K_i}{|K|}\sob{\infty}{0}(f).
\end{align*}
Then by a change of variables in (\ref{eq:Sapproxapprox}), we have
\begin{align}
S_{{\rm approx}} &= \left( \sum_{\substack{{\bf k}\in\mathbb{Z}^d\\ K{\bf k}\in B_T}} \int_{[-\delta, \delta]^d} g_\delta({\bf s}) f(x_0u(K{\bf k}+{\bf s}))d{\bf s}\right) + r(T,K,f,d). \label{eq:Sapprox}
\end{align}
Also, since $\int_{[-\delta,\delta]^d} g_\delta({\bf s})d{\bf s} = 1$, we may rewrite the definition of $S$ in (\ref{eq:Sdef}) as
\begin{align*}
S &=\sum_{\substack{{\bf k}\in\mathbb{Z}^d\\ K{\bf k}\in B_T}} \int_{[-\delta, \delta]^d} g_\delta({\bf s}) f(x_0 u(K{\bf k}))d{\bf s}
\end{align*}
and combining this with (\ref{eq:Sapprox}), we obtain
\begin{align*}
|S_{{\rm approx}}-S| &\leq \left(\sum_{\substack{{\bf k}\in\mathbb{Z}^d\\ K{\bf k}\in B_T}} \int_{[-\delta, \delta]^d} g_\delta({\bf s}) |f(x_0 u(K{\bf k}+{\bf s}))-f(x_0 u(K{\bf k}))|d{\bf s}\right) + |r(T,K,f,d)|.
\end{align*}
But note that from property (\ref{Sob4}) of Sobolev norms, we have
\[
|f(x_0 u(K{\bf k}+{\bf s}))-f(x_0 u(K{\bf k}))|\ll \sob{\infty}{1}(f) |{\bf s}| \ll \sob{\infty}{1}(f) \delta
\]
for ${\bf s} \in [-\delta, \delta]^d.$
Together with our error bound, this implies that
\begin{align*}
|S_{{\rm approx}} - S| &\ll \left( \sum_{\substack{{\bf k}\in\mathbb{Z}^d\\ K{\bf k}\in B_T}}\int_{[-\delta,\delta]^d} g_\delta({\bf s})d{\bf s}\right)\delta\sob{\infty}{1}(f) + T^{d-1}\frac{\max_i K_i}{|K|}\sob{\infty}{0}(f)\\
&\ll \left(\#\{{\bf k}\in\mathbb{Z}^d | K{\bf k}\in B_T\} \delta +T^{d-1}\frac{\max_i K_i}{|K|}\right)\sob{\infty}{1}(f)
\end{align*}
once again because $\int_{[-\delta,\delta]^d} g_\delta({\bf s})d{\bf s} = 1$ (and $\sob{\infty}{0}(f)\leq \sob{\infty}{1}(f)$).
But $\#\{{\bf k}\in\mathbb{Z}^d | K{\bf k}\in B_T\} \approx T^d/|K|$, also with an error of magnitude $\ll T^{d-1}\max_i K_i/|K|$ (for reasons analogous to those illustrated in Figure \ref{fig:suminterror}). Therefore
\begin{align}
|S_{{\rm approx}} - S| \ll \left(\frac{T^d}{|K|}\delta+\frac{T^{d-1}\max_i K_i}{|K|}\right)\sob{\infty}{1}(f).\label{eq:SSapprox}
\end{align}
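The lattice-point count used here is elementary: $\#\{{\bf k}\in\mathbb{Z}^d \,|\, K{\bf k}\in B_T\} = \prod_i(\lfloor T/K_i\rfloor+1)$, which differs from $T^d/|K|$ by $\ll T^{d-1}\max_i K_i/|K|$. A small sketch (the sample values and the constant $4$ are ad hoc choices for the test):

```python
import math

for T, Ks in ((10.0, (1, 2)), (25.0, (3, 4)), (17.5, (1, 5))):
    d = len(Ks)
    # k_i ranges over 0, 1, ..., floor(T/K_i) in each coordinate
    count = math.prod(math.floor(T / Ki) + 1 for Ki in Ks)
    main = T ** d / math.prod(Ks)
    err_scale = T ** (d - 1) * max(Ks) / math.prod(Ks)
    assert abs(count - main) <= 4 * err_scale
```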
To show that $S_{{\rm approx}}$ and $\dst{\frac{T^d}{|K|}\int_X f dm_X}$ are close, we observe that by Poisson summation,
\begin{align}
\sum_{{\bf k}\in\mathbb{Z}^d} g_\delta({\bf t} - K{\bf k}) &= \sum_{{\bf k}\in\mathbb{Z}^d} g_\delta({\bf t} + K{\bf k})\nonumber\\
&= \sum_{{\bf k}\in\mathbb{Z}^d} \widetilde{g_\delta}( K^{-1}{\bf t} + {\bf k})\nonumber\\
&= \sum_{{\bf k}\in\mathbb{Z}^d} \psi_{K^{-1}\bf k}({\bf t}) \widehat{\widetilde{g_\delta}}({\bf k})\label{eq:expakSum}
\end{align}
where $\psi_{K^{-1}\bf k}({\bf t}) = e^{2\pi i {\bf k} \cdot (K^{-1}{\bf t})} = e^{2\pi i (K^{-1}{\bf k}) \cdot {\bf t}}$ and $\widehat{\widetilde{g_\delta}}$ is the multivariate Fourier transform of $\widetilde g_\delta({\bf x}) = g_\delta(K{\bf x})$. When we substitute (\ref{eq:expakSum}) into the definition of $S_{\rm approx}$ given in (\ref{eq:Sapproxdef}), we get
\begin{align*}
S_{{\rm approx}} &= \int_{B_T} \left(\sum_{{\bf k}\in\mathbb{Z}^d} \psi_{K^{-1}\bf k}({\bf t}) \widehat{\widetilde{g_\delta}}({\bf k})\right) f(x_0 u({\bf t}))d{\bf t}\\
&=\sum_{{\bf k}\in\mathbb{Z}^d} \widehat{\widetilde{g_\delta}}({\bf k})\left(\int_{B_T} \psi_{K^{-1}\bf k}({\bf t}) f(x_0 u({\bf t}))d{\bf t}\right)
\end{align*}
where Fubini's Theorem allows us to switch the order of the sum and the integral. Similarly,
\begin{align*}
\frac{T^d}{|K|}\int_X f dm_X &= \left(\int_{B_T}\sum_{{\bf k}\in\mathbb{Z}^d} g_\delta({\bf t} - K{\bf k}) d{\bf t}+\mathcal{O}\left(\frac{T^{d-1}\max_i K_i}{|K|}\right)\right)\int_X f dm_X\\
&=\sum_{{\bf k}\in\mathbb{Z}^d} \widehat{\widetilde{g_\delta}}({\bf k})\left(\int_{B_T} \psi_{K^{-1}\bf k}({\bf t}) \int_X f dm_X d{\bf t}\right) + \mathcal{O}\left(\frac{T^{d-1}\max_i K_i}{|K|}\sob{\infty}{0}(f)\right)
\end{align*}
where we have used that $|\int_X f dm_X|\leq \sob{\infty}{0}(f)$. Thus
\begin{align}
\left|S_{{\rm approx}}-\frac{T^d}{|K|}\int_X f dm_X\right| &= \left|\sum_{{\bf k}\in\mathbb{Z}^d} \widehat{\widetilde{g_\delta}}({\bf k})\int_{B_T} e^{2\pi i {\bf k} \cdot (K^{-1}{\bf t})}\left(f(x_0 u({\bf t})) - \int_X f dm_X\right) d{\bf t}\right|\nonumber\\
&\hspace{15pt}+\mathcal{O}\left(\frac{T^{d-1}\max_i K_i}{|K|}\sob{\infty}{0}(f)\right)\nonumber\\
&= \left|\sum_{{\bf k}\in\mathbb{Z}^d} \widehat{\widetilde{g_\delta}}({\bf k})|B_T|\mu_{T,\psi_{K^{-1}{\bf k}}}(f)\right|+\mathcal{O}\left(\frac{T^{d-1}\max_i K_i}{|K|}\sob{\infty}{0}(f)\right).\label{eq:SapproxIntegral}
\end{align}
Then since $R>C$, we can apply Lemma \ref{lem:VenkLem} to obtain
\begin{align*}
\left|S_{{\rm approx}}-\frac{T^d}{|K|}\int_X f dm_X\right| &\ll_f T^d R^{-b}\sob{\infty}{\ell}(f) \sum_{{\bf k}\in\mathbb{Z}^d} \widehat{\widetilde{g_\delta}}({\bf k}) + \frac{T^{d-1}\max_i K_i}{|K|}\sob{\infty}{0}(f)
\end{align*}
(a direct computation shows that $\widehat{\widetilde{g_\delta}}$ is positive). Note that it was crucial here that the bound in Lemma \ref{lem:VenkLem} is uniform over characters.
Finally, again by Poisson summation, we have
\begin{align}
\sum_{{\bf k}\in\mathbb{Z}^d} \widehat{\widetilde{g_\delta}}({\bf k})
&= \sum_{{\bf k}\in\mathbb{Z}^d} \widetilde{g_\delta}({\bf k})\nonumber\\
&= \sum_{{\bf k}\in\mathbb{Z}^d} g_\delta(K{\bf k})\nonumber\\
&= g_\delta (0,\dots, 0)= \delta^{-d}\nonumber
\end{align}
since $\text{supp }(g_\delta) \subseteq [-\delta, \delta]^d$ and $\delta<\min_i K_i/2$ implies that $g_\delta(K{\bf k})= 0$ for ${\bf k} \neq (0, \dots, 0)$. Substituting this into equation (\ref{eq:SapproxIntegral}), combining it with (\ref{eq:SSapprox}), and using property (\ref{Sob1}) of Sobolev norms, we get
\begin{align*}
\left|S-\frac{T^d}{|K|}\int_X f dm_X\right| \ll \left(T^d R^{-b}\delta^{-d}+\frac{T^d}{|K|}\delta +\frac{T^{d-1}\max_i K_i}{|K|}\right)\sob{\infty}{\ell}(f).
\end{align*}
We can optimize the first two terms by choosing $\delta = (|K|/R^b)^{1/(d+1)}$. Observe that our only restriction on $\delta$ was that $\delta < \min_i K_i/2$, which our choice of $\delta$ satisfies so long as $R>(2/\min_i K_i)^{(d+1)/b}|K|^{1/b}$.
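The choice of $\delta$ is the standard balancing of the two $\delta$-dependent terms; for the record:

```latex
\[
T^d R^{-b}\delta^{-d} = \frac{T^d}{|K|}\delta
\quad\Longleftrightarrow\quad
\delta^{d+1} = \frac{|K|}{R^b}
\quad\Longleftrightarrow\quad
\delta = \left(\frac{|K|}{R^b}\right)^{1/(d+1)},
\]
```

and with this choice both terms equal $T^d R^{-b/(d+1)}|K|^{-d/(d+1)}$.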
Thus, under these conditions,
\begin{align*}
\left|S-\frac{T^d}{|K|}\int_X f dm_X\right| \ll \left(T^d R^{-b/(d+1)} |K|^{-d/(d+1)} +\frac{T^{d-1}\max_i K_i}{|K|}\right)\sob{\infty}{\ell}(f).
\end{align*}
\end{proof}
If all of the diagonal entries of $K$ are equal (in an abuse of notation, say all equal to $K$), then we get the following corollary, which will be of use to us in the next section.
\begin{cor}
Let $T\geq K\geq 1$. There exists a constant $\tilde C>0$ (depending only on $n$ and $d$) such that for all $x_0\in X$ satisfying (\ref{eq:dio1}) with $T>R>\tilde C$, we have
\begin{align*}
\left|\sum_{\substack{{\bf k}\in\mathbb{Z}^d\\ K{\bf k}\in B_T}} f(x_0 u(K{\bf k}))-\frac{T^d}{K^d}\int_X f dm_X\right| \ll T^d R^{-b/(d+1)}K^{-d^2/(d+1)} \sob{\infty}{\ell}(f).
\end{align*}\label{cor:arithequidistcor}
\end{cor}
\begin{proof}
This is a straightforward application of the previous theorem, observing that in this case $(2/\min_i K_i)^{(d+1)/b}|K|^{1/b}=(2^{d+1}/K)^{1/b}\leq 2^{(d+1)/b}$ since $K\geq 1$. Thus the theorem holds with $\tilde C =\max(C,2^{(d+1)/b})$. Moreover, the second error term in Theorem \ref{thm:arithequidist} in this case is simply $T^{d-1}K^{1-d}$, and since $K, R<T$ and $b<1$, this term decays more quickly than the first and can be ignored.
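The exponent of $K$ in the corollary comes from substituting $|K| = K^d$ into the first error term of Theorem \ref{thm:arithequidist}:

```latex
\[
T^d R^{-b/(d+1)}\,|K|^{-d/(d+1)} = T^d R^{-b/(d+1)}\,(K^d)^{-d/(d+1)} = T^d R^{-b/(d+1)}\,K^{-d^2/(d+1)}.
\]
```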
\end{proof}
\begin{rem}
For $X=\Gamma\backslash G$ where $\Gamma$ is a cocompact lattice, we have the following basepoint-independent versions of Lemma \ref{lem:VenkLem}, Theorem \ref{thm:arithequidist}, and Corollary \ref{cor:arithequidistcor}.
\begin{lem}
There exists $b>0$ (depending on $n$, $d$, and $\Gamma$) such that for all $T$ large enough, we have
\begin{align*}
\left|\mu_{T,\psi}(f)\right|\ll_\Gamma T^{- b} \sob{\infty}{\ell}(f)
\end{align*}
for any $f\in C^\infty(X)$, $x_0\in X$, and additive character $\psi$.\label{lem:VenkLemcmpt}
\end{lem}
\begin{thm}
Let $K = {\rm diag}(K_1,\cdots, K_d)$ with $T\geq K_1, \dots, K_d\geq1$ and determinant $|K|$. Then for all $T$ large enough, we have
\begin{align*}
\left|\sum_{\substack{{\bf k}\in\mathbb{Z}^d\\ K{\bf k}\in B_T}} f(x_0 u(K{\bf k}))-\frac{T^d}{|K|}\int_X f dm_X\right| \ll_\Gamma \left(T^{d-b/(d+1)} |K|^{-d/(d+1)} +\frac{T^{d-1}\max_i K_i}{|K|}\right)\sob{\infty}{\ell}(f)
\end{align*}
for all $f\in C^\infty(X)$ and $x_0\in X$.\label{thm:arithequidistcmpt}
\end{thm}
\begin{cor}
Let $T\geq K\geq 1$. Then for all $T$ large enough, we have
\begin{align*}
\left|\sum_{\substack{{\bf k}\in\mathbb{Z}^d\\ K{\bf k}\in B_T}} f(x_0 u(K{\bf k}))-\frac{T^d}{K^d}\int_X f dm_X\right| \ll_\Gamma T^{d-b/(d+1)}K^{-d^2/(d+1)} \sob{\infty}{\ell}(f)
\end{align*}
for all $f\in C^\infty(X)$ and $x_0\in X$.\label{cor:arithequidistcorcmpt}
\end{cor}
The proofs of these results are completely analogous to the corresponding proofs for ${\rm SL}_n(\Z)\backslash{\rm SL}_n(\R)$, but use the basepoint-independent equidistribution result stated in (\ref{eq:equidistcmpt}) instead of Theorem \ref{thm:equidist}. As before, we may remove dependence on the lattice $\Gamma$ for $n\geq 3$ and for $n=2$ if $\Gamma$ is a congruence lattice.
\end{rem}
\section{Conclusion}\label{sect:concl}
In this paper we gave an effective equidistribution result for horospherical flows on the space of lattices and an effective rate of equidistribution for arithmetic sequences of entries in abelian horospherical flows on both the space of lattices and compact quotients of ${\rm SL}_n(\R)$. We then used sieve methods to derive upper and lower bounds for averages over almost-prime entries of abelian horospherical flows. In the compact setting, we obtained as a result the density of integer entries having fewer than a fixed number of prime factors, where that number depends only on the dynamical system and not on the basepoint. In the space of lattices, we considered the orbits of points satisfying a Diophantine condition with parameter $\delta$, and we proved the density of integer entries having fewer than a fixed number of prime factors, where that number depends on the system and on $\delta$.
There are several improvements and generalizations of this work that can be readily imagined. It seems likely that the methods used here can be generalized to quotients of
connected, semisimple Lie groups by lattices. It also seems possible that methods similar to those used in \cite{SarnackUbis} could be adapted to remove dependence on the basepoint in the noncompact case, yielding a uniform result for the density of almost-primes in the orbits of any generic point.
One could also generalize from abelian horospherical flows to arbitrary horospherical flows. This could make the character analysis on arithmetic sequences in Section \ref{sect:EquiArith} more delicate, but it nonetheless seems doable. Finally, the sieve methods used in Section \ref{sect:Sieving} can be modified to learn about averages over points $(k_1, \cdots, k_d)\in \mathbb{Z}^d$ satisfying $\gcd(\mathcal{P}(k_1, \cdots, k_d), P) =1$, where $\mathcal{P}$ is a suitably nice irreducible polynomial (note that we considered the case where $\mathcal{P}(k_1, \cdots, k_d) = k_1k_2\cdots k_d$).
Of course, the more natural question is not what happens at almost-prime times, but what happens at prime times. Unfortunately, it does not seem possible at present to use these methods to establish results about true primes, and additional ingredients or a wholly different approach may be required. However, this result is significant in that it continues to lend support to the conjecture, already suggested by \cite{SarnackUbis}, that prime times in horospherical orbits are dense and possibly equidistributed.
\section{Effective Equidistribution of Horospherical Flows}\label{sect:Equidistribution}
Our main objective in this section is to prove the following effective equidistribution theorem for horospherical flows on $X={\rm SL}_n(\Z)\backslash{\rm SL}_n(\R)$.
\begin{thm}\label{thm:equidist}
Let $u$ be a horospherical flow on $X$, and let $x_0=\Gamma g_0\in X$. Then there exist constants $\gamma, C>0$ (depending only on $n$ and $d$)
such that for $f\in C^\infty_c(X)$ and $T>R>C$, either
\begin{align}
\left| \frac{1}{m_U(B_T)} \int_{B_T} f(x_0 u) dm_U(u) - \int_X f dm_X \right| \ll R^{-\gamma} \sob{\infty}{\ell}(f) \tag{\ref{thm:equidist}.a}\label{eq:endresult}
\end{align}
or
\begin{align}
\exists j\in\{1,\cdots, n-1\} \text{ and primitive } w\in\Lambda^j(\mathbb{Z}^n)\setminus\{0\}\text{ s.t. } \nn{w g_0 u}< R^q\hspace{5pt}\forall {u}\in B_T.\tag{\ref{thm:equidist}.b}\label{eq:dionegation}
\end{align}
where $q = \sum_{\lambda_i <0} -m_i\lambda_i$ and $\ell=n(n-1)/2$ is the dimension of the maximal compact subgroup of $G$.
\end{thm}
Intuitively, this theorem says that either the $U$-orbit of $x_0$ equidistributes in $X$ with a fast rate, or $x_0$ is close to a proper subset of $X$ that is fixed by the action of $U$, where our notions of ``fast" and ``close" are quantitatively related.
\begin{rmk}
The condition that $w$ in (\ref{eq:dionegation}) be primitive is conceptually useful but technically unnecessary, in that if there exists any $w\in\Lambda^j(\mathbb{Z}^n)\setminus\{0\}$ satisfying (\ref{eq:dionegation}), then there will also exist a primitive vector that does so.
\end{rmk}
\begin{rmk}
The ``either/or" in the theorem statement is not meant to imply an exclusive or. In fact, the theorem can be restated in the following form: For $x_0$, $\gamma$, $C$, $\ell$, $f$, $T$, and $R$ as above, not (\ref{eq:dionegation}) implies (\ref{eq:endresult}).
This leads us to define the following Diophantine basepoint condition for $x_0 = \Gamma g_0 \in X$:
\begin{align}
\forall j\in\{1,\cdots,n-1\} \text{ and } w\in\Lambda^j(\mathbb{Z}^n)\setminus\{0\}, \hspace{4pt}\exists\hspace{2pt} u\in B_T \text{ s.t. } \nn{w g_0u}\geq R^q.\tag{\ref{thm:equidist}.c}\label{eq:dio1}
\end{align}
Then Theorem \ref{thm:equidist} says that (\ref{eq:dio1}) implies (\ref{eq:endresult}), and this is in fact how we will structure the proof.
\end{rmk}
\begin{rmk}
Although the theorem is stated for balls of the form $B_T = a_{\log T} u([0,1]^d) a_{-\log T}$, it holds equally well for symmetric balls of the form $B_T = a_{\log T} u([-1,1]^d) a_{-\log T}$.
\end{rmk}
\begin{proof}
Let $x_0 = \Gamma g_0\in X$ satisfy the basepoint condition in (\ref{eq:dio1}) for some $T>R$. Then consider $f\in C_c^\infty(X)$ and write, via a change of variables,
\begin{align}
I_0 &:= \frac{1}{m_U(B_T)}\int_{B_T} f(x_0u) dm_U(u)\nonumber\\
&= \frac{1}{m_U(B_{T/R})} \int_{B_{T/R}} f(x_0a_{\log R} u a_{-\log R})dm_U(u)\nonumber\\
&= \frac{1}{m_U(B_{T/R})} \int_{U} {\bf1}_{B_{T/R}}(u) f(x_0a_{\log R} u a_{-\log R})dm_U(u).\label{eq:I0}
\end{align}
We want to show that this quantity is close to $\int f dm_X$, and from (\ref{eq:I0}) it almost looks as if we could apply the exponential mixing result of Corollary \ref{cor:expmixing} (\ref{expmixing}) to achieve this; however, there are several significant barriers to doing so. Most obviously, the integral in (\ref{eq:I0}) is over $U$ instead of $X$. Furthermore, the ``basepoint" $x_0a_{\log R} u$ varies with $u$ and will eventually spend time outside of any fixed compact subset of $X$ for $u$ coming from a large enough ball. Finally, the function ${\bf1}_{B_{T/R}}$ is not smooth.
We will first address the issue of smoothness by convolving the indicator function with a smooth approximation to the identity ({\bf Step 1}). We will then apply the ``thickening" argument of Margulis to obtain an integral over $X$ from our integral over $U$ ({\bf Step 2}). Finally, we will deal with the moving basepoint by demonstrating that for most $u\in B_{T/R}$ we have a uniformly good rate of equidistribution and that the size of the set on which this does not occur can be quantitatively controlled ({\bf Step 3}). This last step is where we will use the nondivergence result of Section \ref{sect:nondiv}.
\subsection*{Step 1}
Let $r$ be a small, positive number (to be determined) and let $\theta\in C_c^\infty (U)$ be a nonnegative bump function supported on $B^U_r(e)$ satisfying the approximate identity properties of Lemma \ref{lem:approxId}. Then the convolution $\int_U\theta(u'){\bf1}_{B_{T/R}}(u(u')^{-1})dm_U(u')$ is a smooth function approximating our original indicator function. If we substitute this function for ${\bf1}_{B_{T/R}}$ in (\ref{eq:I0}) and use the invariance property of the Haar measure, we get the integral
\begin{align}
I_\text{smth} &:= \frac{1}{m_U(B_{T/R})} \int_{U} \int_U\theta(u'){\bf1}_{B_{T/R}}(u(u')^{-1})dm_U(u') f(x_0a_{\log R} u a_{-\log R})dm_U(u)\nonumber\\
&=\frac{1}{m_U(B_{T/R})} \int_{U}\int_U \theta(u') {\bf 1}_{B_{T/R}}(u) f(x_0a_{\log R} uu' a_{-\log R})dm_U(u)dm_U(u').\label{eq:Ismth}
\end{align}
Now observe that since $\int\theta =1$, we may again use the invariance of the Haar measure to rewrite (\ref{eq:I0}) as
\begin{align}
I_0 &= \frac{1}{m_U(B_{T/R})} \int_{U} {\bf 1}_{B_{T/R}}(uu')f(x_0a_{\log R} uu' a_{-\log R})dm_U(u) \int_U \theta(u') dm_U(u')\nonumber\\
&= \frac{1}{m_U(B_{T/R})} \int_{U}\int_U \theta(u') {\bf 1}_{B_{T/R}}(uu') f(x_0a_{\log R} uu' a_{-\log R})dm_U(u)dm_U(u').\label{eq:shiftedBTR}
\end{align}
From (\ref{eq:Ismth}) and (\ref{eq:shiftedBTR}), we can see that
\begin{align}
\left| I_0 - I_\text{smth}\right| &\leq \frac{1}{m_U(B_{T/R})}\int_U\theta(u') \sob{\infty}{0}(f)\left(\int_U \left|{\bf 1}_{B_{T/R}}(uu')-{\bf1}_{B_{T/R}}(u)\right|dm_U(u)\right)dm_U(u')\nonumber\\
&= \frac{\sob{\infty}{0}(f)}{m_U(B_{T/R})}\int\theta(u') m_U(B_{T/R}\triangle B_{T/R}(u')^{-1}) dm_U(u').\label{eq:symmdiffint}
\end{align}
But notice that since $\text{supp }\theta \subseteq B^U_r(e)$, we know that $u'$ is close to the identity, so right translation by $(u')^{-1}$ can only shift $B_{T/R}$ by a small amount. In fact, by pulling the measure back to $\mathbb{R}^d$, one may compute directly that the size of the symmetric difference is bounded by
\begin{align}
m_U(B_{T/R}\triangle B_{T/R}(u')^{-1}) \ll (T/R)^{p-p_0}r\label{eq:symmdiffU}
\end{align}
for any $u'\in B^U_r(e)$, where $p_0 = \min_{i>j} (\lambda_i - \lambda_j)$.
Combining this with (\ref{eq:symmdiffint}) above and again using the fact that $\theta$ integrates to 1, we see that
\begin{align}
|I_0-I_\text{smth}| &\ll \frac{(T/R)^{p-p_0}}{m_U(B_{T/R})}r \sob{\infty}{0}(f)=(R/T)^{p_0} r \sob{\infty}{0}(f) \leq r \sob{\infty}{0}(f)\label{eq:I0-Ismth}
\end{align}
since $m_U(B_{T/R}) = (T/R)^p$ and $T\geq R$.
Now that we know $I_0$ and $I_\text{smth}$ can be made close, we want to know that $I_\text{smth}$ is not too far from $\int f dm_X$. Using Fubini's Theorem, we can say
\begin{align*}
I_\text{smth} = \frac{1}{m_U(B_{T/R})} \int_{B_{T/R}}\int_U \theta(u') f(x_0a_{\log R} uu' a_{-\log R})dm_U(u')dm_U(u)
\end{align*}
and we may also write
\begin{align*}
\int_X f dm_X = \frac{1}{m_U(B_{T/R})} \int_{B_{T/R}} \left( \int_X f dm_X \right) dm_U(u).
\end{align*}
Hence,
\begin{align}
&\left| I_\text{smth} - \int_X f dm_X \right|\label{eq:mixinginint}\\
&\leq \frac{1}{m_U(B_{T/R})} \int_{B_{T/R}} \left| \int_U \theta(u') f(x_0a_{\log R} uu' a_{-\log R})dm_U(u') -\int_X f dm_X\right| dm_U(u).\nonumber
\end{align}
\subsection*{Step 2}
Now the expression inside the absolute value looks more similar to that of Corollary \ref{cor:expmixing} (\ref{expmixing}), but we are still integrating over the wrong space. We want an integral over $X$, and although functions on $X$ integrate locally like their pullback by projection over $G$, the integral with which we are concerned is over the lower-dimensional (``thin") subspace $U$.
Define
\begin{align}
I_U(u) := \int_U \theta(u') f(x_0a_{\log R} uu' a_{-\log R})dm_U(u')\label{eq:Ithin}
\end{align}
to be the integral from inside (\ref{eq:mixinginint}) above. In order to apply exponential mixing, we will need to ``thicken" this integral over $U$ to an integral over a neighborhood of the orbit in $G$ and then project to $X$.
Recall from Section \ref{sect:decomp} that $m_G = m_U \times m^r_H$, where $m_H^r$ is the right Haar measure on $H = U^0 U^-$. Then let $\psi\in C^\infty_c(H)$ be an approximate identity supported on $B^H_r(e)$ as described in Lemma \ref{lem:approxId}.
Since $\int \psi = 1$, we may rewrite (\ref{eq:Ithin}) as
\begin{align}
I_U(u) = \int_H \int_U \theta(u') \psi(h) f(x_0a_{\log R} uu' a_{-\log R}) dm_U(u') dm^r_H(h).
\end{align}
Now define
\begin{align}
I_X(u) := \int_H \int_U \theta(u') \psi(h) f(x_0a_{\log R} uu'h a_{-\log R}) dm_U(u')dm^r_H(h)\label{eq:Ithick}
\end{align}
which differs from $I_U(u)$ only by the presence of the variable $h$ inside $f$. To see that $I_U(u)$ and $I_X(u)$ are close, observe that
\begin{align}
\left|I_U(u) - I_X(u)\right| \leq \int_H \int_U \theta(u') \psi(h) \left|f(\tilde x) - f(\tilde x a_{\log R} h a_{-\log R})\right| dm_U(u')dm^r_H(h)\label{eq:thin-thick1}
\end{align}
where $\tilde x = x_0a_{\log R} uu'a_{-\log R}$. But since $f$ has bounded derivative,
\begin{align}
\left|f(\tilde x) - f(\tilde x a_{\log R} h a_{-\log R})\right| \ll \sob{\infty}{1}(f) d_G(e, a_{\log R} h a_{-\log R})\label{eq:bddderiv}
\end{align}
by Sobolev property (\ref{Sob4}). Furthermore, since conjugation by $a_t$ is non-expanding on the subgroup $H$ (recall that it fixes $U^0$ and contracts $U^-$), we may see
that\footnote{There is a slight subtlety here because we used the right Haar measure on $H$, so the corresponding metric $d_H$ is right-invariant, while $d_G$ is left-invariant. In general, $d_G$ restricted to $H$ will be less than or equal to the corresponding left-invariant metric on $H$. However, any left-invariant metric is Lipschitz equivalent to any right-invariant metric in a suitable neighborhood of the identity, so the above series of inequalities goes through for $r$ small enough.}
\begin{align}
d_G(e, a_{\log R}h a_{-\log R})\leq d_G(e, h)\ll d_H(e,h)\leq r\label{eq:Hnonexpanding}
\end{align}
for $h\in \text{supp } \psi \subseteq B^H_r(e)$.
Then from (\ref{eq:thin-thick1}), (\ref{eq:bddderiv}), and (\ref{eq:Hnonexpanding}) and the fact that both $\theta$ and $\psi$ integrate to 1, we have
\begin{align}
\left|I_U(u) - I_X(u)\right| &\ll
\sob{\infty}{1}(f) r.\label{eq:thin-thick}
\end{align}
Now we want to verify that $I_X(u)$ is not far from $\int f dm_X$. By our measure decomposition, we can see (\ref{eq:Ithick}) as an integral over $G$:
\begin{align}
I_X(u) = \int_G \phi(g) f(x_0a_{\log R} ug a_{-\log R}) dm_G(g)\label{eq:intoverG}
\end{align}
where the function $\phi(uh) = \theta(u)\psi(h)$ is defined for all $g\in UH$, hence it is defined almost everywhere. In order to apply mixing, we want to further interpret $I_X(u)$ as an integral over $X$. To do this, let $y=x_0 a_{\log R} u$, keeping in mind that $y$ depends on $u$. Then define $\phi_y\in C^\infty_c(X)$ by $\phi_y = \phi \circ \pi_y^{-1}$ where $\pi_y: G \to X$ is the natural projection at $y$. Note, however, that $\phi_y$ is only well-defined if $\pi_y$ is injective on $\text{supp }\phi = \text{supp }\theta\text{ supp }\psi \subseteq B^U_r(e)B^H_r(e)$. In a neighborhood of the identity, $B^U_r(e)B^H_r(e) \subseteq B^G_{cr}(e)$ for a positive constant $c$, since
\begin{align*}
d_G(uh,e)\leq d_G(uh,u)+d_G(u,e) = d_G(h,e)+d_G(u,e) \ll d_H(h,e) + d_U(u,e) \leq 2r.
\end{align*}
Therefore, if $\pi_y$ is injective on $B^G_{cr}(e)$ for $y=x_0a_{\log R}u$ (an assumption we will return to later), we can say from (\ref{eq:intoverG}) that
\begin{align*}
I_X(u) &= \int_G \phi(g) f(yg a_{-\log R}) dm_G(g)\\
&=\int_G \phi_y(yg) f(yg a_{-\log R}) dm_G(g)\\
&= \int_X \phi_y(x)f(xa_{-\log R}) dm_X(x).
\end{align*}
Since $\int \phi_y dm_X = \int \phi dm_G = \int \theta dm_U \int \psi dm^r_H = 1$, we can now apply the effective mixing result from Section \ref{sect:expmixing} to obtain
\begin{align*}
\left|I_X(u) - \int_X f dm_X \right| &= \left|\int_X \phi_y(x)f(xa_{-\log R}) dm_X(x) - \int_X \phi_y dm_X \int_X f dm_X \right|\\
&\ll R^{-\beta} \sobt{\ell}(\phi_y)\sobt{\ell}(f).
\end{align*}
Then from property (\ref{Sob5}) in Section \ref{sect:Sob} and our bound on the Sobolev norm of an approximate identity (property (\ref{approxidSob}) in Section \ref{sect:approxid}), we can say
\begin{align*}
\sobt{\ell}^X(\phi_y) = \sobt{\ell}^G(\phi) \ll \sobt{\ell}^U(\theta)\sobt{\ell}^H(\psi) \ll r^{-(\ell+d/2)}r^{-(\ell+\tilde d/2)} = r^{-2\ell - (n^2-1)/2}
\end{align*}
where $\tilde d = \dim H$.
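Here the exponents combine because, by the decomposition $m_G = m_U\times m_H^r$ of Section \ref{sect:decomp}, we have $d+\tilde d = \dim U + \dim H = \dim G = n^2-1$, so that

```latex
\[
r^{-(\ell+d/2)}\,r^{-(\ell+\tilde d/2)} = r^{-2\ell-(d+\tilde d)/2} = r^{-2\ell-(n^2-1)/2}.
\]
```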
Thus if $\pi_y$ is injective on $B^G_{cr}(e)$, then
\begin{align}
\left|I_X(u) - \int_X f dm_X \right| \ll R^{-\beta} r^{-p_1} \sobt{\ell}(f)\label{eq:thick-int}
\end{align}
where $p_1 = 2\ell+(n^2-1)/2$.
\subsection*{Step 3}
However, as we have noted, $y=x_0a_{\log R}u$ depends on $u$, which varies over $B_{T/R}$ in (\ref{eq:mixinginint}). While we cannot ensure that $\pi_y$ is injective on $B^G_{cr}(e)$ for all $u\in B_{T/R}$, we can say that the set on which this does not occur has small measure.
Recall from Lemma \ref{lem:radiusofinjection} that $\pi_y: B_r^G(e) \to B_r^X(y)$ is injective for $y\in L_\epsilon$ for $r$ proportional to $\epsilon^n$ and for $\epsilon$ small enough. Furthermore, observe that condition (\ref{eq:dio1}) is equivalent to the statement that for all $j\in\{1, \cdots, n-1\}$ and primitive $w\in\Lambda^j(\mathbb{Z}^n)\setminus\{0\}$, there exists ${\bf t}\in[0,1]^d$ such that $\nn{wg_0a_{\log T} u({\bf t}) a_{-\log T}}\geq R^q$. Then by Corollary \ref{cor:nondiv} in Section \ref{sect:nondiv}, we have that
\begin{align*}
\left| \{ {\bf t}\in [0,1]^d \hspace{2pt}\vert\hspace{2pt} x_0 a_{\log T} u({\bf t}) a_{-\log T}a_{\log R} \notin L_\epsilon \} \right|\ll \epsilon^{1/d(n-1)}
\end{align*}
since $R_0 = R^{-q}$ implies $\rho = 1/n$.
From this we find that
\begin{align*}
\left| \{ {\bf t}\in [0,1]^d \hspace{2pt}\vert\hspace{2pt} x_0 a_{\log T} u({\bf t}) a_{-\log T}a_{\log R} \notin L_\epsilon \} \right|
&= \left| \{ {\bf t}\in [0,1]^d \hspace{2pt}\vert\hspace{2pt} x_0 a_{\log R} a_{\log T/R} u({\bf t}) a_{-\log T/R} \notin L_\epsilon \} \right|\\
&= m_U\left(\{ u\in B_1 \hspace{2pt}\vert\hspace{2pt} x_0 a_{\log R} a_{\log T/R} u a_{-\log T/R} \notin L_\epsilon \}\right)\\
&= m_U\left(\{ u\in B_{T/R} \hspace{2pt}\vert\hspace{2pt} x_0 a_{\log R} u \notin L_\epsilon \}\right)/m_U(B_{T/R})
\end{align*}
where the last equality can be verified using a change of variables. That is, for $x_0$ satisfying condition (\ref{eq:dio1}), we have
\begin{align*}
m_U\left(\{ u\in B_{T/R} \hspace{2pt}\vert\hspace{2pt} x_0 a_{\log R} u \notin L_\epsilon \}\right) \ll \epsilon^{1/d(n-1)} m_U(B_{T/R}).
\end{align*}
In other words, if we let $E:= \{ u\in B_{T/R} \hspace{2pt}\vert\hspace{2pt} x_0 a_{\log R} u \in L_\epsilon \}$, then (\ref{eq:thick-int}) holds for all $u\in E$ and $m_U(B_{T/R}\setminus E) \ll \epsilon^{1/d(n-1)} m_U(B_{T/R})$. Thus, from (\ref{eq:mixinginint}), (\ref{eq:thin-thick}), and (\ref{eq:thick-int}), we find
\begin{align*}
\left| I_\text{smth} - \int_X f dm_X \right| \leq& \frac{1}{m_U(B_{T/R})} \int_{B_{T/R}} \left| I_U(u) -\int_X f dm_X\right| dm_U(u)\\
\leq& \frac{1}{m_U(B_{T/R})} \int_{B_{T/R}} \left| I_U(u) -I_X(u) \right| dm_U(u)\\
&\hspace{5pt}+ \frac{1}{m_U(B_{T/R})} \int_{B_{T/R}}\left|I_X(u) - \int_X f dm_X\right| dm_U(u)\\
\ll& \sob{\infty}{1}(f)r +\frac{1}{m_U(B_{T/R})} \int_E \left|I_X(u) - \int_X f dm_X\right| dm_U(u)\\
&\hspace{5pt}+ \frac{1}{m_U(B_{T/R})} \int_{B_{T/R}\setminus E} \left|I_X(u) - \int_X f dm_X\right| dm_U(u)\\
\ll& \sob{\infty}{1}(f)r + \frac{m_U(E)}{m_U(B_{T/R})} R^{-\beta} r^{-p_1} \sobt{\ell}(f) + \frac{m_U(B_{T/R}\setminus E)}{m_U(B_{T/R})} \sob{\infty}{0}(f)\\
\ll& \sob{\infty}{1}(f)r +R^{-\beta} r^{-p_1} \sobt{\ell}(f) + \epsilon^{1/d(n-1)} \sob{\infty}{0}(f).
\end{align*}
Finally, from this and (\ref{eq:I0-Ismth}), we have
\begin{align*}
\left| I_0 - \int_X f dm_X \right| &\leq \left| I_0 - I_\text{smth}\right| + \left| I_\text{smth} - \int_X f dm_X \right|\\
&\ll \left(\epsilon^n + R^{-\beta}\epsilon^{-p_1 n} + \epsilon^{1/d(n-1)}\right)\sob{\infty}{\ell}(f)
\end{align*}
where we have used that $r$ is proportional to $\epsilon^n$, as well as Sobolev property (\ref{Sob1}).
Let $p_2:= 1/d(n-1)$. Since $n>p_2$, the $\epsilon^n$ term above decays more quickly than the other terms and can be ignored. To optimize the rate of decay, we set
\begin{align*}
R^{-\beta}\epsilon^{-p_1 n} = \epsilon^{p_2}
\end{align*}
which implies
\begin{align*}
\epsilon = R^{-\beta/(p_1 n + p_2)}.
\end{align*}
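With this choice the two competing terms decay at the same rate:

```latex
\[
R^{-\beta}\epsilon^{-p_1 n} = \epsilon^{p_2} = R^{-\beta p_2/(p_1 n + p_2)},
\]
```

which is the source of the exponent $\gamma = \beta p_2/(p_1 n + p_2)$ in the rate stated at the end of the proof.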
Then, so long as $R$ is chosen sufficiently large that $\epsilon$ (and subsequently $r$) is small enough for Corollary \ref{cor:nondiv} and Lemma \ref{lem:radiusofinjection} to hold (along with several other statements we made regarding neighborhoods of the identity),
we have demonstrated (\ref{eq:endresult}) in Theorem \ref{thm:equidist} with the rate
\begin{align*}
\left| I_0 - \int_X f dm_X \right| \ll R^{-\gamma}\sob{\infty}{\ell}(f)
\end{align*}
where $\gamma = \beta p_2/(p_1n+p_2)$.
\end{proof}
\begin{rem}
In the case of $\Gamma$ cocompact, it follows from the above proof that we may remove dependence on the basepoint from our effective equidistribution statement. That is, for $X=\Gamma\backslash G$, $\Gamma\leq G$ a cocompact lattice, and $U\leq G$ a horospherical subgroup, we have that there exists $\gamma>0$
(depending\footnote{ Since dependence on $\Gamma$ only arises from the spectral gap, we can remove dependence for $n\geq 3$ or for $n=2$ when $\Gamma$ is a congruence lattice.}
only on $n$, $d$, and $\Gamma$) such that for $T$ large enough,
\begin{align}
\left|\frac{1}{m_U(B_T)}\int_{B_T} f(x_0 u)dm_U(u) - \int_X f dm_X\right|\ll_\Gamma T^{-\gamma}\sob{\infty}{\ell}(f)\label{eq:equidistcmpt}
\end{align}
for any $f\in C^\infty(X)$ and $x_0\in X$. This is because we only make use of the basepoint condition in {\bf Step 3}, where we need it to deal with the moving basepoint and the fact that the radius of injection depends on where we are in $X$. However, in the compact setting, we have a uniform injectivity radius, so we may avoid this step altogether.
Morally, uniformity in the basepoint is due to the fact that in the compact setting, there are no proper invariant subspaces near which an orbit can become trapped for long periods of time.
\end{rem}
\section*{Acknowledgements}
I would like to thank my advisor, Amir Mohammadi, for giving me the idea to work on this problem and for providing important guidance throughout the process. I also want to thank Manfred Einsiedler and Hee Oh for helpful discussions about this work.
\bibliographystyle{amsplain}
\section{Introduction}
There is an intimate connection between number theory and dynamics on homogeneous spaces. The case of $\Gamma\backslash{\rm SL}_n(\R)$ where $\Gamma$ is a lattice is one particularly interesting and well-studied example, and when $\Gamma = {\rm SL}_n(\Z)$ this space can be identified with the space of unimodular lattices in $\mathbb{R}^n$. It is well known, for instance, that the geodesic flow on ${\rm SL}_2(\Z)\backslash{\rm SL}_2(\R)$ is related to Diophantine approximation of real numbers by rationals, which can be generalized to metric Diophantine approximation on manifolds (see \cite{KMNondiv}). Another famous example is Margulis's proof of the Oppenheim conjecture (see \cite{Opp}, \cite{MOpp}), which uses Raghunathan's insight that the conjecture can be reduced to a statement about unipotent orbits in ${\rm SL}_3(\mathbb{Z})\backslash{\rm SL}_3(\mathbb{R})$. Quantitative proofs of the Oppenheim conjecture are given in \cite{QuantOpp} and \cite{EMMQuantOpp}, and these make use of Ratner's work on measure rigidity for unipotent orbits (see \cite{Rat3}).
In addition to the contributions of dynamics to the field of number theory, there are also many number-theoretic questions that are of independent interest in dynamical systems, such as understanding the distribution of certain discrete subsets of times in a dynamical system (e.g. polynomial sequences or primes). Many of these questions remain open---see, for example, the conjecture of Shah in the introduction of \cite{ShahConj} or the collection of conjectures by Margulis listed under Question 16 of \cite{Conjectures}.
Equidistribution results play an important role in dynamical systems and their applications to number theory. Roughly speaking, a subset of some orbit is said to equidistribute with respect to a given probability measure if it spends the expected amount of time in subsets, i.e., if the proportion of the orbit landing within any set is given by the measure of that set. The dual of this notion is that averages of any suitably nice function over larger and larger pieces of the orbit converge weakly to the average of that function over the whole space with respect to the given measure. Often in applications to number theory it is important that an equidistribution result be effective---that is, that there is a known rate of convergence. Another question that can be asked is whether we can leverage the known equidistribution of a full orbit to obtain information about the distribution of certain ``sparse" subsets of that orbit. Examples of research on sparse equidistribution problems can be found in \cite{VenkSparse}, \cite{SarnackUbis}, \cite{ShahConj}, and \cite{Heegner1} and \cite{Heegner2}.
Horospherical flows are a type of dynamical system arising naturally in the study of homogeneous spaces. A subgroup of a Lie group $G$ is said to be horospherical if it is contracted (equivalently, expanded under the inverse) by iteration of the adjoint action of some element of $G$ (see Section \ref{sect:horo} for a more precise definition). It can be shown that any horospherical subgroup is unipotent, although not every unipotent subgroup can be realized as the horospherical subgroup corresponding to an element of $G$. In general, horospherical flows are easier to study than more general unipotent flows, as the expansion property can be used along with dynamical information about the corresponding one-parameter subgroup to great effect.
Actions by horospherical and unipotent subgroups have been studied extensively.
It was proved in \cite{Hedl} that the horocyclic flow on $\Gamma\backslash{\rm SL}_2(\R)$ for $\Gamma$ cocompact is minimal and later shown in \cite{Furst} to be uniquely ergodic. These results were extended in \cite{Veech} and \cite{REWP} to more general horospherical flows on compact quotients of suitable Lie groups. For $\Gamma$ non-uniform, we do not have unique ergodicity or minimality; however, it was proved in \cite{Marg71} that orbits of unipotent flows cannot diverge to infinity in noncompact settings, a result refined by Dani's nondivergence theorem in \cite{Dani84a}, \cite{DaniNondiv}. Moreover, it was shown in \cite{Dani78a} (for the case of $\Gamma\backslash{\rm SL}_2(\R)$ noncompact) and \cite{Dani81}, \cite{Dani86} (for more general noncompact homogeneous spaces) that horocyclic/horospherical flows have nice (finite volume, homogeneous submanifold) orbit closures and that every ergodic probability measure invariant under such a flow is the natural Lebesgue measure on some such orbit closure.
This work paved the way for a series of breakthrough papers, culminating in \cite{Rat3} and summarized in \cite{RatSummary}, in which Ratner resolved conjectures of Raghunathan and Dani by giving an essentially complete description of unipotent orbit closures and unipotent-invariant measures on homogeneous spaces.
More recently, many important results for horospheres and related actions have been effectivized. Quantitative versions of Dani's nondivergence theorem were given in \cite{DaniMarg} and \cite{KMNondiv}, as well as a discrete version for ${\rm SL}_2(\Z)\backslash{\rm SL}_2(\R)$ in \cite{SarnackUbis}. In \cite{Burger}, Burger gave an effective rate for the equidistribution of the horocycle flow on compact quotients of ${\rm SL}_2(\R)$, which was improved upon and extended to the noncompact setting in \cite{FF} and \cite{StromDev}. Many authors have also considered the effective equidistribution of closed horocyclic and horospherical orbits in a variety of settings, such as \cite{SarnackAsymp}, \cite{StromClosed}, \cite{KMExpanding}, \cite{LeeOh}, and \cite{KDabb}, although this list is by no means complete. In studying both closed horospherical orbits and long pieces of generic horospherical orbits, one can make use of the ``thickening'' argument developed by Margulis in his thesis \cite{Mthesis}. This uses a known rate of mixing for the semisimple flow with respect to which the given subgroup is horospherical along with the expansion property to get a rate for the horospherical flow. The key exponential rate for semisimple flows (and much more general actions) is given in \cite{KMBddOrbits}. Other effective results of interest include \cite{EMV}, \cite{GreenTao1}, \cite{StromRat}, and \cite{EMMV}. We note that most of these results use in some way a spectral gap for the action by translations of the ambient group $G$ or certain subgroups of $G$ on $L^2(\Gamma\backslash G)$.
One reason for wanting effective results is that many applications involving number theory require that the error in relevant approximations be controlled in a quantitative way. For example, \cite{VenkSparse} makes use of effective equidistribution for the horocycle flow to derive an effective rate for the equidistribution of arithmetic sequences, which is then used to show that sequences of integer times raised to small powers also equidistribute in $\Gamma\backslash{\rm SL}_2(\R)$ for $\Gamma$ cocompact. As another example, \cite{SarnackUbis} uses effective equidistribution results along with sieving to demonstrate that prime times in the horocycle flow on ${\rm SL}_2(\Z)\backslash{\rm SL}_2(\R)$ are dense in a set of positive measure. More generally, if we hope to apply sieve methods to any equidistribution problem, we will need to have some way of quantitatively controlling the error.
In this paper, we are interested in the asymptotic distribution of almost-primes (i.e. integers having fewer than a fixed number of prime factors) in horospherical flows on the space of lattices and on compact quotients of ${\rm SL}_n(\R)$.
Our main results are summarized in the following two theorems:
\begin{thm}\label{thm:CmptThm}
Let $\Gamma<{\rm SL}_n(\R)$ be a cocompact lattice and $u({\bf t})$ be an abelian horospherical flow on $\Gamma\backslash{\rm SL}_n(\R)$ of dimension $d$.
Then there exists a constant $M$ (depending only on $n$, $d$, and $\Gamma$) such that for any $x\in\Gamma\backslash{\rm SL}_n(\R)$, the set
\[
\{xu(k_1, k_2, \cdots, k_d) \hspace{2pt}\vert\hspace{2pt} \text{each } k_i\in\mathbb{Z} \text{ has fewer than } M \text{ prime factors}\}
\]
is dense in $\Gamma\backslash{\rm SL}_n(\R)$.
\end{thm}
We remark that the dependence of the constant $M$ on $\Gamma$ arises from the spectral gap, so this dependence can be removed if $n\geq 3$ or for $n=2$ if $\Gamma$ is a congruence lattice.
For a horospherical flow $u({\bf t})$ on ${\rm SL}_n(\Z)\backslash{\rm SL}_n(\R)$, we say that $x={\rm SL}_n(\Z) g$ is \textit{strongly polynomially} $\delta$\textit{-Diophantine} if there exists some sequence $T_i\to\infty$ as $i\to\infty$ such that
\[
\inf_{\substack{w\in \Lambda^j(\mathbb{Z}^n)\setminus\{0\}\\j=1, \cdots, n-1}} \sup_{{\bf t}\in [0,T_i]^d}\nn{wgu({\bf t})}>T_i^\delta
\]
for all $i\in\mathbb{N}$.
\begin{thm}\label{thm:NoncmptThm}
Let $u({\bf t})$ be an abelian horospherical flow on ${\rm SL}_n(\Z)\backslash{\rm SL}_n(\R)$ of dimension $d$ and let $x\in{\rm SL}_n(\Z)\backslash{\rm SL}_n(\R)$ be strongly polynomially $\delta$-Diophantine for some $\delta>0$. Then there exists a constant $M_\delta$ (depending on $\delta$, $n$, and $d$) such that
\[
\{xu(k_1, k_2, \cdots, k_d) \hspace{2pt}\vert\hspace{2pt} \text{each } k_i\in\mathbb{Z} \text{ has fewer than } M_\delta \text{ prime factors}\}
\]
is dense in ${\rm SL}_n(\Z)\backslash{\rm SL}_n(\R)$.
\end{thm}
A brief outline of the paper is as follows:
In Section \ref{sect:prelims}, we establish the basic notation that will be used throughout the paper and introduce the key facts and theorems that we use in our analysis. We also prove a small corollary of the nondivergence theorem in \cite{KMNondiv} that applies to the particular setting of this paper.
In Section \ref{sect:Equidistribution}, we prove an effective equidistribution result for long orbits of arbitrary horospherical flows on the space of lattices. The proof makes use of the ``thickening'' argument of Margulis, leveraging the exponential mixing properties of the subgroup with respect to which the flow of interest is horospherical, which is itself a consequence of a spectral gap. The main result in this section is probably not surprising to experts, but the author was unable to locate a result in the literature that is stated in the way presented here.
In Section \ref{sect:EquiArith}, we use the theorem from the previous section to derive an effective bound for equidistribution along multivariate arithmetic sequences of entries in abelian horospherical flows on the space of lattices. In this result, we allow the arithmetic sequences in different coordinates to have different spacing, although we will not need it for Section \ref{sect:Sieving}. The techniques used in this section are heavily inspired by Section 3 of \cite{VenkSparse} and also make use of the spectral gap, as well as some Fourier analysis and other analytic techniques.
In Section \ref{sect:Sieving}, we use the bound along arithmetic sequences as well as a combinatorial sieve theorem to obtain an upper and lower bound on averages over almost-prime entries in abelian horospherical flows. We start with the case of $\Gamma$ cocompact, for which we obtain a result that implies Theorem \ref{thm:CmptThm} above.
We then move to the case $\Gamma={\rm SL}_n(\Z)$, where we prove a similar result for almost-primes in the orbits of points satisfying a strongly polynomially Diophantine condition, giving us Theorem \ref{thm:NoncmptThm}. In order to apply sieving in both cases, we introduce a particular gcd-sum function (counting the number of integer points in a cube in $\mathbb{R}^d$ of side length $K\in\mathbb{N}$ such that $K$ divides the product of the entries) and verify that the relevant errors can be controlled.
Finally, in Section \ref{sect:concl}, we make some closing remarks and indicate possible extensions and areas for future research.
\section{Notation and Preliminaries}\label{sect:prelims}
\subsection{Some Basic Notation}
Let $G={\rm SL}_n(\R)$ for $n\geq2$. Throughout most of this document, $\Gamma$ will denote ${\rm SL}_n(\Z)$, but we will also discuss the case where $\Gamma\leq G$ is a cocompact lattice. We are interested in the right actions of certain subgroups of $G$ on the right coset space $X=\Gamma\backslash G$.
Although it is not a group, $X$ inherits a finite ``Haar'' measure $m_X$ from the (bi-invariant)
Haar measure $m_G$ on $G$. In this document, we will always take $m_X$ and $m_G$ to be normalized so that $m_X$ is a probability measure and so that the measure of a small set in $G$ equals the measure of its projection in $X$.
We will use $|\cdot|$ to denote the standard Lebesgue measure on $\mathbb{R}^d$ and $d{\bf t}$ to denote the differential with respect to Lebesgue measure for ${\bf t}\in\mathbb{R}^d$.
We will use gothic letters to represent the Lie algebra of a Lie group (e.g. $\mathfrak{g}$ is the Lie algebra of $G$). Fix an inner product on $\mathfrak{g}$.
This extends to a Riemannian metric on $G$ via left translation, which defines a left-invariant metric $d_G$ and a left-invariant volume form, which (by uniqueness) coincides with the Haar measure on $G$ up to scaling.
This then induces a metric $d_X$ on $X$ of the form
\begin{align*}
d_X(\Gamma g_1, \Gamma g_2) = \inf_{\gamma_1,\gamma_2\in\Gamma} d_G(\gamma_1g_1,\gamma_2g_2) = \inf_{\gamma\in \Gamma} d_G(g_1,\gamma g_2).
\end{align*}
The same construction can be used to define a left-invariant metric $d_H$ for any subgroup $H\leq G$ by restricting the inner product to $\mathfrak{h}\subseteq\mathfrak{g}$. Note, however, that in general $d_H \neq d_G\vert_H$. Instead, we have that $d_G(h_1,h_2)\leq d_H(h_1,h_2)$ for $h_1,h_2\in H$, since the infimum used to define the distance $d_G$ is taken over a larger set than in $d_H$. We will use the notation $B^H_r(h)$ to denote a ball of radius $r$ with respect to the metric $d_H$ around a point $h\in H$ (this is to distinguish these balls from the sets $B_T$ that we will define in Section \ref{sect:horo}). Also observe that every point has a neighborhood in which the left-invariant metric is Lipschitz equivalent to the metric derived from any matrix norm on ${\rm Mat}_{n\times n}(\mathbb{R})$ (see Lemma 9.12 in \cite{EW} for details).
Define the adjoint representation of $g\in G$ as the map ${\rm Ad}_g: \mathfrak{g}\to\mathfrak{g}$ given by $Y\mapsto gYg^{-1}$ for $Y\in\mathfrak{g}$.
In considering equidistribution questions, our space of test functions will be $C^\infty_c(X)$, the set of smooth, compactly supported (real- or complex-valued) functions on $X$. Define the action of $G$ on this space by $[g\cdot f](x) = f(xg^{-1})$ for $g\in G$ and $f\in C^\infty_c(X)$.
Finally, we will use the notation $a\ll b$ to indicate that $a$ is less than a fixed constant times $b$ and $a\asymp b$ to indicate that $a\ll b$ and $b\ll a$. In general, the implied constants may depend on $n$ and on the data of the dynamical system (more specifically, on $d$, the dimension of the horospherical subgroup).
Any additional dependence of the constants will be indicated by a subscript (e.g. $\ll_f$ indicates that the implicit constant may depend on $n$, $d$, and $f$). In principle, the constants may also depend on the lattice $\Gamma$, although since we are primarily considering $\Gamma={\rm SL}_n(\Z)$, we will not indicate this dependence with a subscript when $\Gamma$ is understood to be fixed in this way. We will also use the standard notation $\mathcal{O}(f(x))$ to indicate a function whose absolute value is bounded by a constant times $|f(x)|$ as $x\to\infty$, where as before the constant may depend on $n$ and $d$, and any additional dependence will be indicated with a subscript.
\subsection{Horospherical Subgroups}\label{sect:horo}
A subgroup $U$ of $G$ is (expanding) horospherical with respect to an element $g\in G$ if $U=\{u\in G \hspace{3pt}\vert\hspace{3pt} g^{-j}ug^j \to e \text{ as } j\to \infty\}$, where $e$ is the identity. In other words, elements of $U$ are contracted under conjugation by $g^{-1}$ and expanded under conjugation by $g$.
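As a concrete illustration (a minimal numerical sketch; the choices $g={\rm diag}(2,1/2)$ and $u(1)$ are purely for this example), one can check the contraction property in ${\rm SL}_2(\R)$: conjugating an upper-triangular unipotent element by $g^{-1}$ shrinks the off-diagonal entry geometrically, so the upper-triangular unipotent subgroup is expanding horospherical with respect to $g$.

```python
from fractions import Fraction

def matmul(A, B):
    """Exact 2x2 matrix product over the rationals."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

g     = [[Fraction(2), Fraction(0)], [Fraction(0), Fraction(1, 2)]]
g_inv = [[Fraction(1, 2), Fraction(0)], [Fraction(0), Fraction(2)]]
v     = [[Fraction(1), Fraction(1)], [Fraction(0), Fraction(1)]]  # u(1), unipotent

# iterate v -> g^{-1} v g and record the off-diagonal entry
offdiag = []
for _ in range(5):
    v = matmul(matmul(g_inv, v), g)
    offdiag.append(v[0][1])

# the entry contracts by a factor of 4 at each step, so u(1) lies in the
# expanding horospherical subgroup corresponding to g
assert offdiag == [Fraction(1, 4 ** j) for j in range(1, 6)]
```

Exact rational arithmetic is used so the geometric contraction is visible without floating-point drift.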
Define the one-parameter subgroup $\{a_t\}_{t\in\mathbb{R}}\subset G$ by
\begin{align}
a_t &= \exp(t\hspace{2pt}{\rm diag}(\underbrace{\lambda_1,\cdots, \lambda_1}_{m_1}, \underbrace{\lambda_2,\cdots, \lambda_2}_{m_2}, \cdots, \underbrace{\lambda_N,\cdots, \lambda_N}_{m_N}))\label{eq:a_t}
\end{align}
where $\lambda_1\geq\lambda_2\geq \cdots \geq \lambda_N$. The requirement that $a_t\in {\rm SL}_n(\R)$ for all $t\in\mathbb{R}$ means that $m_1+\cdots+m_N = n$ and $m_1\lambda_1+\cdots+m_N\lambda_N = 0$.
Let $U$ denote the block-upper-triangular unipotent subgroup given by
\begin{align}
U=
\left\{\left(\begin{matrix}
\begin{matrix}
I_{m_1}&\vline&&\\
\hline
&\vline&I_{m_2}\\
\end{matrix}&&\mbox{\normalfont\Large $\ast$}\\
&\ddots&\\
\mbox{\normalfont\Large 0}&&\begin{matrix}
I_{m_{N-1}}&\vline&\\
\hline
&\vline&I_{m_N}
\end{matrix}
\end{matrix}
\right)\right\}\label{eq:Uhoro}
\end{align}
where $I_m$ is the $m\times m$ identity matrix. Notice that $U$ is the horospherical subgroup corresponding to $a_t$ for $t>0$. Similarly, define the contracting subgroup $U^{-}$ by
\begin{align*}
U^{-}=
\left\{\left(\begin{matrix}
\begin{matrix}
I_{m_1}&\vline&&\\
\hline
&\vline&I_{m_2}\\
\end{matrix}&&\mbox{\normalfont\Large 0}\\
&\ddots&\\
\mbox{\normalfont\Large $\ast$}&&\begin{matrix}
I_{m_{N-1}}&\vline&\\
\hline
&\vline&I_{m_N}
\end{matrix}
\end{matrix}
\right)\right\}
\end{align*}
which is horospherical with respect to $a_{t}$ for $t<0$, and define $U^0$ to be the centralizer of $a_t$ ($t\neq 0$), given by
\begin{align*}
U^{0}=
\left\{\left(\begin{matrix}
\begin{matrix}
B_{1}&\vline&&\\
\hline
&\vline&B_{2}\\
\end{matrix}&&\mbox{\normalfont\Large 0}\\
&\ddots&\\
\mbox{\normalfont\Large 0}&&\begin{matrix}
B_{N-1}&\vline&\\
\hline
&\vline&B_{N}
\end{matrix}
\end{matrix}
\right)\hspace{5pt}\vline \hspace{5pt}
\begin{aligned}
&B_i \in {\rm GL}_{m_i}(\mathbb{R})\\
&\det B_{1}\cdots \det B_{N} = 1
\end{aligned}
\right\}.
\end{align*}
Let $d_0 = \sum_{i=1}^N m_i^2$ and observe that $d := \dim U = \dim U^{-} = \frac{1}{2}\left(n^2-d_0\right)$ and $\dim U^0 = d_0 -1$. All horospherical subgroups of $G={\rm SL}_n(\R)$ are conjugate to a subgroup of the form given in (\ref{eq:Uhoro}), so we restrict our attention to $U$ of this form.
Observe that $U$ is diffeomorphic to $\mathbb{R}^d$ through any identification ${\bf t}\mapsto u({\bf t})$ of the coordinates of $\mathbb{R}^d$ with the matrix entries in the upper-right corner of (\ref{eq:Uhoro}).\footnote{One could also use the more standard map $u({\bf t}) = \exp(\iota({\bf t}))$, where $\iota:\mathbb{R}^d \to \mathfrak{u}$ is any identification of $\mathbb{R}^d$ with the Lie algebra $\mathfrak{u}$ of $U$. We have chosen to use the former embedding for ease of notation and because we will later restrict our attention to abelian horosphericals, for which the two maps coincide (up to scaling and permutations of the coordinates). However, whichever map is used does not substantively change the results presented here.}
Note, however, that $U$ and $\mathbb{R}^d$ are only isomorphic as groups in the case that $U$ is abelian, which occurs when $a_t$ has precisely two eigenvalues.
The bi-invariant Haar measure $m_U$ on $U$ is the pushforward of Lebesgue measure on $\mathbb{R}^d$ under this identification, and we may normalize it so that $u([0,1]^d)$ has unit measure. Define an expanding family of balls in $U$ by $B_T = a_{\log T} u([0,1]^d) a_{-\log T}$ for $T>0$. One may verify that the preimage of $B_T$ in $\mathbb{R}^d$ is given by a box where $x_k\in[0,T^{\lambda_i-\lambda_j}]$ for $i<j$ if the coordinate $x_k$ is mapped to the $(i,j)$-block of (\ref{eq:Uhoro}) under our identification. Hence, $m_U(B_T) = T^p$, where $p = \sum_{i<j} m_i m_j (\lambda_i -\lambda_j)$.
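For instance (an illustrative sketch with the concrete choice $n=3$, $m=(1,2)$, $(\lambda_1,\lambda_2)=(2,-1)$, so $d=2$ and $p=m_1m_2(\lambda_1-\lambda_2)=6$), one can verify numerically that conjugation by $a_{\log T}$ stretches each coordinate of the unit box by $T^{3}$, giving $m_U(B_T)=T^6=T^p$:

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

lam = [2.0, -1.0, -1.0]  # eigenvalues of a_t with multiplicities m = (1, 2)

def a(s):
    return [[math.exp(s * lam[i]) if i == j else 0.0 for j in range(3)]
            for i in range(3)]

def u(t1, t2):
    # abelian horospherical coordinates in the upper-right corner
    return [[1.0, t1, t2], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

T = 5.0
conj = matmul(matmul(a(math.log(T)), u(1.0, 1.0)), a(-math.log(T)))

# both upper-right entries are stretched by T^(lam_1 - lam_2) = T^3, so the
# preimage of B_T is the box [0, T^3]^2 and m_U(B_T) = T^6 = T^p
assert abs(conj[0][1] - T ** 3) < 1e-6
assert abs(conj[0][2] - T ** 3) < 1e-6
```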
\subsection{Measure Decomposition}\label{sect:decomp}
The product map $U\times U^0 \times U^- \to G$ given by $(u, u^0, u^-)\mapsto uu^0u^-$ is a biregular map onto a Zariski open dense subset of $G$
(see Proposition 2.7 in \cite{MT}). In particular, if we let $H = U^0 U^-$, this means that $m_G(G\setminus UH) = 0$ and that the product map $(u, h) \mapsto uh$ is open and continuous. Additionally, it is not difficult to see that $U\cap H = \{e\}$.
Then by virtue of the fact that $G$ is unimodular,
we have that $m_G$ restricted to $UH$ is proportional to the pushforward of $m_U \times m^r_H$ by the product map, where $m_H^r$ is the right Haar measure on $H$ (see, e.g., Lemma 11.31 in \cite{EW} or Theorem 8.32 in \cite{Knapp}). Note that we could equivalently use the left Haar measure on $H$ and multiply by the modular function $\Delta_H$, but for convenience of notation we will use the right Haar measure.
\subsection{Sobolev Norms}\label{sect:Sob}
Fix a basis $\mathcal{B}$ for the Lie algebra $\mathfrak{g}$ of $G$.
Define the (right) differentiation action of $\mathfrak{g}$ on $C^\infty_c(X)$ by $Yf(x) = \frac{d}{dt} f(x \exp(tY))\vert_{t=0}$ for $Y\in \mathcal{B}$ and $f\in C^\infty_c(X)$. Higher order derivatives of $f$ can then be expressed as monomials in the basis $\mathcal{B}$.
For $p \in [1, \infty]$ and $\ell\in\mathbb{N}$, the $(p,\ell)$-Sobolev norm of $f\in C^\infty_c(X)$ simultaneously controls the $L^p$-norm of all derivatives of $f$ up to order $\ell$. More precisely, let
\begin{align*}
\sob{p}{\ell}(f) = \sum_{\deg(\mathcal{D})\leq\ell} \nn{\mathcal{D}f}_{L^p(X)}
\end{align*}
where $\mathcal{D}$ ranges over all monomials in $\mathcal{B}$ of degree $\leq \ell$. Observe that the Sobolev norm can be defined similarly for $C^\infty_c(G)$ and $C^\infty_c(H)$ where $H\leq G$, given a choice of basis for $\mathfrak{h}\subseteq\mathfrak{g}$.\footnote{The choice of the basis $\mathcal{B}$ is unimportant in the sense that choosing a different basis will lead to an equivalent norm. Likewise, we could use any norm on the components $\nn{\mathcal{D}f}_{L^p(X)}$ (here we have used the $l^1$-norm), but as all such norms are equivalent, the choice is unimportant.}
We will only require the $(2, \ell)$- and $(\infty, \ell)$-Sobolev norms. When $p=2$, we will drop the notation, letting $\mathcal{S}_\ell(f)=\sob{2}{\ell}(f)$. When needed, we will use a superscript $\mathcal{S}^X$ to indicate a Sobolev norm for functions defined on $X$.
Some useful properties of these norms are as follows (see \cite{VenkSparse} or \cite{KMBddOrbits}):
\begin{enumerate}[(i)]
\item For $X$ a probability space, $f\in C^\infty_c(X)$,
$p\in[1,\infty]$, and $k\leq \ell$, $\sob{p}{k}(f)\leq \sob{\infty}{\ell}(f)$.\label{Sob1}
\item For $f_1, f_2\in C^\infty_c(X)$, $\sob{\infty}{\ell}(f_1f_2)\ll_\ell \sob{\infty}{\ell}(f_1)\sob{\infty}{\ell}(f_2)$.\label{Sob2}
\item For $f\in C^\infty_c(X)$ and $g\in G$, $\sob{\infty}{\ell}(g\cdot f) \ll_\ell \nn{{\rm Ad}_{g^{-1}}}^\ell \sob{\infty}{\ell}(f)$, where $\nn{\cdot}$ is the operator norm on linear functions $\mathfrak{g}\to\mathfrak{g}$.\label{Sob3}
\item Let $L\subset G$ be compact.
For $f\in C^\infty_c(X)$, $x\in X$,
\[
|f(xg)-f(x)|\ll_L \sob{\infty}{1}(f)d_G(g,e)
\]
for all $g\in L$.\label{Sob4}
\item Let $X$ and $Y$ be Riemannian manifolds. For $f_1\in C^\infty_c(X)$ and $f_2\in C^\infty_c(Y)$,
\[\sobt{\ell}^{X\times Y}(f_1 \cdot f_2) \ll_{X,Y} \sobt{\ell}^X(f_1)\sobt{\ell}^{Y}(f_2).
\]\label{Sob5}
\end{enumerate}
\subsection{Approximation to the Identity}\label{sect:approxid}
At times we will want to use smooth bump functions with small support as approximations to the identity, but we will need to know that the Sobolev norm of such functions can be controlled. For this we have the following lemma, which can be found in \cite{KMBddOrbits}.
\begin{lem}[\hspace{1sp}\cite{KMBddOrbits}, Lemma 2.4.7(b)]\label{lem:approxId}
Let $Y$ be a Riemannian manifold of dimension $k$. Then for any $0<r<1$ and $y\in Y$, there exists a function $\theta\in C^\infty_c(Y)$ such that:
\begin{enumerate}[(i)]
\item $\theta\geq0$
\item $\text{supp }(\theta)\subseteq B^Y_r(y)$
\item $\int_Y \theta = 1$
\item $\sobt{\ell}^Y(\theta) \ll_{Y,y} r^{-(\ell+k/2)}$.\label{approxidSob}
\end{enumerate}
\end{lem}
\subsection{The Space of Unimodular Lattices}
For $\Gamma = {\rm SL}_n(\Z)$, $X$ is noncompact and can be understood as the space of unimodular lattices (that is, lattices of covolume 1) in $\mathbb{R}^n$ under the identification $\Gamma g \leftrightarrow \mathbb{Z}^n g$.
For $0<\epsilon\leq 1$, define $L_\epsilon$ to be the set of lattices in $X = {\rm SL}_n(\Z)\backslash{\rm SL}_n(\R)$ with no nonzero vectors shorter than $\epsilon$. That is, let
\begin{align*}
L_\epsilon = \left\{ \Gamma g \in X \hspace{2pt}\vert\hspace{2pt} \nn{vg} \geq \epsilon\hspace{2pt} \text{ for all } v\in \mathbb{Z}^n \setminus\{0\} \right\}
\end{align*}
where the norm above can be taken to be any norm on $\mathbb{R}^n$, but for convenience we will use the max norm. By Mahler's Compactness Criterion, $L_\epsilon$ is a compact set (for details and a proof, see \cite{Rag} Corollary 10.9, \cite{BekkaMayer} Theorem 5.3.2, or \cite{EW} Theorem 11.33).
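To illustrate (a brute-force sketch in the $n=2$ case, with the search range chosen small purely for this example), pushing the standard lattice $\mathbb{Z}^2$ by the diagonal flow ${\rm diag}(e^t,e^{-t})$ produces a shortest nonzero vector of max norm $e^{-t}$, so the resulting lattice lies in $L_\epsilon$ only for $\epsilon\leq e^{-t}$:

```python
import math

def shortest_vector_maxnorm(g, search=15):
    """Brute-force shortest nonzero vector of Z^2 g in the max norm
    (the search range suffices for the diagonal examples below)."""
    best = float('inf')
    for m in range(-search, search + 1):
        for k in range(-search, search + 1):
            if m == 0 and k == 0:
                continue
            v0 = m * g[0][0] + k * g[1][0]
            v1 = m * g[0][1] + k * g[1][1]
            best = min(best, max(abs(v0), abs(v1)))
    return best

# push Z^2 by diag(e^t, e^-t): the vector (0,1)g = (0, e^-t) shrinks,
# so the lattice eventually leaves every fixed L_eps
for t in [0.0, 1.0, 2.0]:
    g = [[math.exp(t), 0.0], [0.0, math.exp(-t)]]
    assert abs(shortest_vector_maxnorm(g) - math.exp(-t)) < 1e-9
```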
\subsection{Radius of Injection}
Given small $\epsilon>0$, we want to find a radius $r>0$ (depending on $\epsilon$) such that projection at $x$, given by
\begin{align*}
\pi_x: B_r^G(e) &\to B_r^X(x)\\
g &\mapsto xg
\end{align*}
is injective for all $x\in L_\epsilon$ (in fact, it is not difficult to see from the definition of the metric on $X$ that this will be an isometry). For this, we have the following lemma, which is proved in a much more general setting in \cite{HeeBenoist} (see the proof of Lemma 11.2). A proof of the lemma as it is stated here can be found in Appendix \ref{app:RadiusofInjection}.
\begin{lem}\label{lem:radiusofinjection}
There exist constants $c_1, c_2>0$ (depending only on $n$) such that for any $0<\epsilon<c_1$, the projection map $\pi_x: B_r^G(e) \to B_r^X(x)$
is injective for all $x\in L_\epsilon$, where $r=c_2\epsilon^n$.
\end{lem}
\subsection{Quantitative Nondivergence}\label{sect:nondiv}
Let $\{e_1, \cdots, e_n\}$ be the standard basis on $\mathbb{R}^n$. Let $e_I = e_{i_1}\wedge\cdots\wedge e_{i_j}$ for a multi-index $I=(i_1, \cdots, i_j)$, where $1\leq i_1<\cdots<i_j\leq n$. Then $\{e_I \}$ is a basis for $\Lambda^j(\mathbb{R}^n)$, the $j$th exterior power of $\mathbb{R}^n$. Define the norm of $w=\sum_{I} w_I e_I \in\Lambda^j(\mathbb{R}^n)$ to be $\nn{w}=\max_I |w_I|$. Denote by $\Lambda^j(\mathbb{Z}^n)$ the discrete subset of $\Lambda^j(\mathbb{R}^n)$ composed of linear combinations of basis vectors with integer coefficients. Notice that $g\in{\rm GL}_n(\mathbb{R})$ acts on $\Lambda^j(\mathbb{R}^n)$ on the right by
\begin{align*}
(e_{i_1}\wedge\cdots\wedge e_{i_j})g = (e_{i_1}g)\wedge\cdots\wedge (e_{i_j}g)
\end{align*}
where the action extends to all of $\Lambda^j(\mathbb{R}^n)$ via linearity.
The following theorem quantitatively describes how often certain polynomial maps from $\mathbb{R}^d$ to $X$ land inside a compact set $L_\epsilon$. This is a special case of Theorem 5.2 in \cite{KMNondiv}, which itself extends results of \cite{DaniNondiv} and \cite{MargulisNondiv}. The original theorem is stated for much more general $(C,\alpha)$-good functions, but we will only need the version below, which uses the observation in Lemma 3.2 of \cite{BKMNondiv}
that polynomials in $\mathbb{R}[x_1,\cdots,x_d]$ of degree $\leq k$ are $(C_{d,k}, 1/dk)$-good on $\mathbb{R}^d$.
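As a sanity check of the $(C,\alpha)$-good property (a numerical sketch for the single-variable polynomial $p(t)=t^2$ on $B=[0,1]$, where the sublevel set $\{|p|<\epsilon\}$ has measure exactly $\epsilon^{1/2}$, matching the exponent $1/dk=1/2$ for $d=1$, $k=2$):

```python
# (C, alpha)-good: |{t in B : |p(t)| < eps}| <= C (eps / sup_B |p|)^alpha |B|.
# For p(t) = t^2 on B = [0, 1] we have sup |p| = 1 and the sublevel set has
# measure exactly eps^(1/2), so the exponent alpha = 1/2 is sharp here.
N = 10 ** 5
for eps in [0.1, 0.01, 0.001]:
    frac = sum(1 for i in range(N) if (i / N) ** 2 < eps) / N
    assert abs(frac - eps ** 0.5) < 1e-3
    assert frac <= 2 * eps ** 0.5   # the (C, alpha)-good bound with C = 2
```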
\begin{thm}[\hspace{1sp}\cite{KMNondiv}, Theorem 5.2]\label{thm:nondiv1}
Let $d, n, k\in\mathbb{N}$ and $0<\rho\leq1/n$. Let $B\subset\mathbb{R}^d$ be a ball and suppose $\dst{\xi:B\to {\rm GL}_n(\mathbb{R})}$ satisfies:
\begin{enumerate}[(i)]
\item $\nn{w\xi({\bf t})}$ is a polynomial in the coordinates of ${\bf t}$ of degree $\leq k$, and\label{nondiv1}
\item $\dst{\sup_{{\bf t}\in B} \nn{w\xi({\bf t})} \geq \rho}$\label{nondiv2}
\end{enumerate}
for all primitive $w\in \Lambda^j(\mathbb{Z}^n)\setminus\{0\}$ and $j\in\{1,\cdots, n\}$. Then for any $0<\epsilon\leq\rho$,
\begin{align*}
|\{{\bf t}\in B \hspace{2pt} \vert \hspace{2pt} \Gamma\xi({\bf t}) \notin L_\epsilon \} | \ll_{d,k} \left(\epsilon/\rho\right)^{1/dk} |B|.
\end{align*}
\end{thm}
From this theorem we may derive the following corollary, which we will use in the proof of Theorem \ref{thm:equidist} to say that the orbit of a point satisfying a certain Diophantine condition spends a relatively large proportion of time in $L_\epsilon$ when pushed by the flow $a_t$.
\begin{cor}\label{cor:nondiv}
Let $T,R>1$ and $x_0=\Gamma g_0 \in X$, and suppose $R_0>0$ is such that
\begin{align*}
\sup_{{\bf t}\in [0,1]^d} \nn{wg_0a_{\log T} u({\bf t}) a_{-\log T}}\geq R_0
\end{align*}
for all primitive $w\in \Lambda^j(\mathbb{Z}^n)\setminus\{0\}$ and $j\in\{1,\cdots,n-1\}$, and define $\rho = \min(1/n, R_0/R^q)$. Then for any $0<\epsilon<\rho$,
\begin{align*}
\left| \{ {\bf t}\in [0,1]^d \hspace{2pt}\vert\hspace{2pt} x_0 a_{\log T} u({\bf t}) a_{-\log T}a_{\log R} \notin L_\epsilon \} \right|\ll (\epsilon/\rho)^{1/d(n-1)}.
\end{align*}
\end{cor}
\begin{proof}
Let $\xi({\bf t}) = g_0 a_{\log T} u({\bf t}) a_{-\log T}a_{\log R}$. We want to demonstrate that conditions (\ref{nondiv1}) and (\ref{nondiv2}) hold in Theorem \ref{thm:nondiv1} for $k=n-1$, $\rho = \min(1/n, R_0/R^q)$, and $B=[0,1]^d$.
Recall that our identification $u({\bf t})$ places one coordinate of ${\bf t}$ in each matrix entry in the upper-right corner of (\ref{eq:Uhoro}). Then since multiplication by $a_t$ on either the left or the right only changes matrix entries by scaling, each entry in the upper-right corner of $a_{\log T} u({\bf t}) a_{-\log T}a_{\log R}$ only depends linearly on a single coordinate of ${\bf t}$. This means that for any matrix $g_0$, all entries of $\xi({\bf t}) = g_0 a_{\log T} u({\bf t}) a_{-\log T}a_{\log R}$ will be affine functions of the coordinates of ${\bf t}$.
Hence, when we take wedge products of the form
\begin{align*}
(e_{i_1}\wedge\cdots\wedge e_{i_j})\xi({\bf t}) = (e_{i_1}\xi({\bf t}))\wedge\cdots\wedge (e_{i_j}\xi({\bf t}))
\end{align*}
the coefficients will be polynomials of degree $\leq j$. Furthermore, since $\xi({\bf t})\in{\rm SL}_n(\R)$ for all ${\bf t}$, $(e_1\wedge\cdots\wedge e_n)\xi({\bf t})=e_1\wedge\cdots\wedge e_n$ is independent of ${\bf t}$, and the top exterior power can be ignored. Then from the definition of the norm on $w\in\Lambda^j(\mathbb{R}^n)$, we have that $\nn{w\xi({\bf t})}$ is a polynomial of degree $\leq n-1$ for all $j\in\{1,\cdots, n\}$, so (\ref{nondiv1}) is satisfied with $k=n-1$.
Moreover, notice that $e_k a_{\log R}=R^{\lambda_i}e_k$ if $\lambda_i$ is the $k$\textsuperscript{th} eigenvalue (counted with multiplicity) in the definition of $a_t$ in (\ref{eq:a_t}). Then the right action of $a_{\log R}$ scales $e_{i_1}\wedge\cdots\wedge e_{i_j}\in\Lambda^j(\mathbb{R}^n)$ by the product of all such corresponding factors.
Since $R>1$, the most $a_{\log R}$ can therefore contract any basis element is by the product of all scaling factors corresponding to negative eigenvalues of (\ref{eq:a_t}), that is, by $R^{-q}$, where $q = \sum_{\lambda_i <0} -m_i\lambda_i$. It then follows from the definition of the norm that
\begin{align}
\nn{wa_{\log R}}\geq R^{-q}\nn{w}\label{eq:R-q}
\end{align}
for any $w\in\Lambda^j(\mathbb{R}^n)\setminus\{0\}$ and $j\in\{1,\cdots, n\}$.
Now observe that for $\rho = \min(1/n, R_0/R^q)$, we have $0<\rho\leq 1/n$ and also
\begin{align*}
\sup_{{\bf t}\in[0,1]^d}\nn{w\xi({\bf t})} &= \sup_{{\bf t}\in[0,1]^d}\nn{wg_0 a_{\log T} u({\bf t}) a_{-\log T}a_{\log R}}\\
&\geq R^{-q}\sup_{{\bf t}\in[0,1]^d}\nn{wg_0 a_{\log T} u({\bf t}) a_{-\log T}}\\
&\geq R_0/R^q\\
&\geq \rho
\end{align*}
for $j\in\{1,\cdots,n-1\}$ and primitive $w\in\Lambda^j(\mathbb{Z}^n)\setminus\{0\}$. Thus condition (\ref{nondiv2}) is satisfied, since as before, $\xi({\bf t}) \in {\rm SL}_n(\R)$ implies that the condition holds trivially for the top exterior power.
Hence, by Theorem \ref{thm:nondiv1}, we have
\begin{align*}
\left| \{ {\bf t}\in [0,1]^d \hspace{2pt}\vert\hspace{2pt} \Gamma \xi({\bf t}) \notin L_\epsilon \} \right| \ll (\epsilon/\rho)^{1/d(n-1)}.
\end{align*}
\end{proof}
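The key scaling estimate (\ref{eq:R-q}) can be checked directly in a small example (the choice $\lambda=(2,-1,-1)$, so $q=2$, is for illustration only): since $a_{\log R}$ is diagonal, each basis vector $e_I$ of $\Lambda^j(\mathbb{R}^n)$ is scaled by $R^{\sum_{i\in I}\lambda_i}$, and the smallest such factor is exactly $R^{-q}$.

```python
from itertools import combinations

# a_{log R} = diag(R^lam_1, ..., R^lam_n) scales the basis vector e_I of the
# j-th exterior power by R^(sum of lam_i over I); the worst contraction
# collects every negative eigenvalue, which is exactly the bound R^-q
lam = [2, -1, -1]                    # n = 3, blocks m = (1, 2)
q = sum(-l for l in lam if l < 0)    # q = 2
R = 3.0
scales = [R ** sum(lam[i] for i in I)
          for j in range(1, len(lam) + 1)
          for I in combinations(range(len(lam)), j)]
assert min(scales) >= R ** (-q) - 1e-12
assert abs(min(scales) - R ** (-q)) < 1e-9  # attained at e_2 ^ e_3
```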
\subsection{Decay of Matrix Coefficients}\label{sect:expmixing}
In order to obtain effective rates of equidistribution in Sections \ref{sect:Equidistribution} and \ref{sect:EquiArith}, we will need to use results on the effective decay of matrix coefficients.
Estimates of this type have a long and rich history, including Selberg's celebrated $3/16$ theorem for congruence subgroups of ${\rm SL}_2(\Z)$, Kazhdan's property (T), and works of Cowling, Moore, Howe, and Oh. Far-reaching extensions of Selberg's work are also in place thanks to works of Jacquet-Langlands, Burger-Sarnak, and Clozel. Our formulation here is taken from \cite{KMBddOrbits} (see \cite{KMBddOrbits}, \cite{GMO}, and \cite{EMMV} for a more comprehensive history and discussion).
\begin{thm}[\hspace{1sp}\cite{KMBddOrbits}, Corollary 2.4.4]
Let $G={\rm SL}_n(\R)$ and $X=\Gamma\backslash G$ for a lattice $\Gamma$. There exists a constant $0<\beta<1$ such that for $f_1, f_2 \in C^\infty_c(X)$ and $g\in G$,
\begin{align*}
\left|\left< g\cdot f_1, f_2\right>_{L^2(X)} - \int_X f_1 dm_X \int_X \overline f_2 dm_X \right|\ll e^{-\beta d_G(e,g)} \sobt{\ell}(f_1)\sobt{\ell}(f_2)
\end{align*}
where $\ell$ is the dimension of a maximal compact subgroup of $G$. When $n\geq 3$, the constant $\beta$ is independent of the lattice $\Gamma$, and when $n=2$ it is independent of the lattice if $\Gamma$ is a congruence lattice.
\end{thm}
For our specific applications, we have the following immediate corollaries.
\begin{cor}\label{cor:expmixing}
Let the setting be as above.
\begin{enumerate}[(i)]
\item For $f_1, f_2\in C^\infty_c(X)$ and $t\geq 0$, we have
\[
\left|\int_X f_1(xa_t)f_2(x)dm_X(x)-\int_X f_1 dm_X\int_X \overline f_2 dm_X\right|\ll e^{-\beta t}\sobt{\ell}(f_1)\sobt{\ell}(f_2)
\]\label{expmixing}
\item For $f\in C^\infty_c(X)$ and ${\bf t}\in \mathbb{R}^d$,
\[
\left|\left<u({\bf t})f , f\right>_{L^2(X)}-\left|\int_X f dm_X\right|^2\right|\ll\max(1,|{\bf t}|)^{-\beta}\sobt{\ell}(f)^2
\]\label{umtxcoeff}
\end{enumerate}
\end{cor}
\subsection{Combinatorial Sieve}\label{sect:Sieve}
In order to understand the distribution of almost-prime times in horospherical orbits we will make use of the following combinatorial sieve theorem (see \cite{CombSieve}, or \cite{SarnackNevo} for a form more similar to that stated here).
\begin{thm}[\hspace{1sp}\cite{CombSieve}, Theorem 7.4]\label{thm:CombSieve}
Let $A = \{a_n\}$ be a sequence of nonnegative numbers and let $\dst{P = P(z) = \prod_{p<z} p}$ be the product of primes less than $z$. Let $\dst{S(A, P) = \sum_{(n,P)=1} a_n}$ and $\dst{S_K(A,P) = \sum_{ n \equiv 0 \mod K} a_n}$.
Then suppose
\begin{enumerate}[(i)]
\item \label{axiom1} There exists a multiplicative function $g(K)$ on $K$ squarefree such that
\begin{align*}
S_K(A,P) = g(K)\mathcal{X} + r_K(A)
\end{align*}
and for some $c_1 > 0$, we have $0\leq g(p) < 1 - \frac{1}{c_1}$ for all primes $p$.
\item \label{axiom2} $A$ has level distribution $D(\mathcal{X})$, i.e. there is $\epsilon > 0$ such that
\begin{align*}
\sum_{K<D} |r_K(A)| \ll_\epsilon \mathcal{X}^{1-\epsilon}.
\end{align*}
\item \label{axiom3} $A$ has sieve dimension $r$, i.e. there exists a constant $c_2>0$ such that for all $2 \leq w \leq z$, we have
\begin{align*}
-c_2 \leq \sum_{w \leq p \leq z} g(p)\log p - r \log \frac{z}{w} \leq c_2.
\end{align*}
\end{enumerate}
Then for $s>9r$, $z=D^{1/s}$, and $\mathcal{X}$ large enough, we have
\begin{align*}
S(A,P) \asymp \frac{\mathcal{X}}{(\log\mathcal{X})^r}
\end{align*}
where the implicit constants depend on the constants in (\ref{axiom1}), (\ref{axiom2}), and (\ref{axiom3}).
\end{thm}
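As a toy illustration of axiom (\ref{axiom1}) and of the sieved sum $S(A,P)$ (this is not the sequence used in this paper: here $a_n=1$ for $1\leq n\leq N$, so $\mathcal{X}=N$, $g(K)=1/K$, and $|r_K|<1$):

```python
import math

# a_n = 1 for 1 <= n <= N: here X = N, g(K) = 1/K, and the remainder
# r_K = floor(N/K) - N/K satisfies |r_K| < 1, so axiom (i) holds trivially
N = 10 ** 4
for K in [2, 3, 5, 6, 10, 30]:       # squarefree moduli
    S_K = sum(1 for n in range(1, N + 1) if n % K == 0)
    r_K = S_K - N / K
    assert abs(r_K) < 1

# S(A, P(z)) counts the integers up to N free of prime factors below z = 10
P = 2 * 3 * 5 * 7
S = sum(1 for n in range(1, N + 1) if math.gcd(n, P) == 1)
assert S == sum(1 for n in range(1, N + 1)
                if all(n % p for p in (2, 3, 5, 7)))
```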
\section{Sieving and Orbits Along Almost-Primes}\label{sect:Sieving}
\subsection{$\Gamma$ Cocompact}
Let $\Gamma$ be a cocompact lattice in $G={\rm SL}_n(\R)$ and let $u({\bf t})$ be an abelian horospherical flow on $X=\Gamma\backslash G$, as in Section \ref{sect:EquiArith}. We know that the orbit of $u({\bf t})$ equidistributes with a uniform rate for all $x_0\in X$,
and that as a consequence we have a uniform rate of equidistribution along multivariable arithmetic sequences of the form given in Corollary \ref{cor:arithequidistcorcmpt}.
Here and throughout this section, assume ${\bf k}=(k_1,\cdots, k_d)\in\mathbb{Z}^d$.
We want to understand the behavior of orbits at almost-prime entries of $u({\bf t})$. More precisely, we want to understand averages of positive $f\in C_c^\infty(X)$ over points in $B_T$ that have entries with fewer than a certain fixed number of primes in their prime factorization.
To investigate this question, we will use the combinatorial sieve from Theorem \ref{thm:CombSieve}.
In the context of our problem, we want to define
\begin{align*}
S(A,P) := \sum_{\substack{{\bf k}\in B_T\\ \gcd(k_1\cdots k_d, P)=1}} f(x_0u({\bf k}))
\end{align*}
where $f\in C^\infty_c(X)$, $f\geq 0$, and $P$ is the product of primes less than $z$ (to be determined). That is, we are summing over integer points in $B_T$ with entries containing no primes smaller than $z$. Then let
\begin{align*}
A = \{a_n\} := \left\{ \sum_{\substack{{\bf k}\in B_T\\ k_1\cdots k_d = n}} f(x_0u({\bf k})) \right\}
\end{align*}
and observe that
\begin{align*}
S_K(A,P) := \sum_{n\equiv 0 \mod K} \sum_{\substack{{\bf k}\in B_T\\ k_1\cdots k_d = n}} f(x_0u({\bf k}))
= \sum_{\substack{{\bf k}\in \tilde B_T\\ K|k_1k_2\cdots k_d}} f(x_0 u({\bf k}))
\end{align*}
where $\tilde B_T = (0,T]^d$ (since the index $n$ starts at 1 we want to avoid counting terms of the form $K$ divides $0$).
Notice that $K | k_1\cdots k_d$ if and only if $K | k_1\cdots (k_i + K) \cdots k_d$, that is, the collection of points that we are summing over is periodic with period $K$ in each coordinate. Thus we can rewrite $S_K(A,P)$ as a sum over cubic grids of side length $K$ based at each point in the first box $\tilde B_K$:
\begin{align}
\sum_{\substack{{\bf k}\in \tilde B_T\\ K|k_1\cdots k_d}} f(x_0 u({\bf k})) = \sum_{\substack{{\bf \tilde k}\in \tilde B_K\\ K|\tilde k_1\cdots\tilde k_d}} \left( \sum_{K{\bf k} \in B_T} f(x_0 u({\bf \tilde k}) u(K{\bf k})) + \mathcal{O}(T^{d-1}K^{1-d}\sob{\infty}{0}(f))\right)\label{eq:doubleSum}
\end{align}
where the error arises from the fact that a point ${\bf \tilde k}+K{\bf k}$ for ${\bf k} \in B_T$ may, in fact, fall outside of $B_T$ (see Figure \ref{fig:firstboxsumerror}).
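This periodicity can be verified directly in small cases: when $K$ divides $T$, the box $\tilde B_T$ splits exactly into $(T/K)^d$ translates of $\tilde B_K$, so the count of points with $K\mid k_1\cdots k_d$ is exactly $(T/K)^d$ times the count in the first box (the quantity called $G_d(K)$ below), with no boundary term. A brute-force Python sketch (illustrative only, not part of the proof):

```python
from itertools import product
from math import prod

def box_count(T, K, d):
    """Number of lattice points k in (0, T]^d with K | k_1 * ... * k_d."""
    return sum(1 for k in product(range(1, T + 1), repeat=d) if prod(k) % K == 0)
```

This exactness for $K \mid T$ is what the shifted-grid decomposition exploits; the $\mathcal{O}$-term in (\ref{eq:doubleSum}) absorbs the boundary mismatch in general.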
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=.8]
\draw[fill=gray!20] (0,0) -- (0,3) -- (3,3) -- (3,0) -- cycle;
\node [gray] at (.75,.5) {$B_K$};
\node [below] at (3,0) {$K$};
\node [below] at (6,0) {$2K$};
\node [below] at (9,0) {$3K$};
\node [below] at (10,0) {$T$};
\node [left] at (0,3) {$K$};
\node [left] at (0,6) {$2K$};
\node [left] at (0,9) {$3K$};
\node [left] at (0,10) {$T$};
\node [below] at (2,1.5) {${\bf \tilde k}$};
\draw [dashed, red] [-] (2,1.5) -- (12,1.5);
\draw [dashed, red] [-] (2,1.5) -- (2,11.5);
\draw [dashed, red] [-] (12,1.5) -- (12,11.5);
\draw [dashed, red] [-] (2,11.5) -- (12,11.5);
\draw [dashed, red] [-] (5,1.5) -- (5,11.5);
\draw [dashed, red] [-] (8,1.5) -- (8,11.5);
\draw [dashed, red] [-] (11,1.5) -- (11,11.5);
\draw [dashed, red] [-] (2,4.5) -- (12,4.5);
\draw [dashed, red] [-] (2,7.5) -- (12,7.5);
\draw [dashed, red] [-] (2,10.5) -- (12,10.5);
\draw [-] (0,0) -- (10,0);
\draw [-] (0,0) -- (0,10);
\draw [-] (10,0) -- (10,10);
\draw [-] (0,10) -- (10,10);
\draw [-] (3,0) -- (3,10);
\draw [-] (6,0) -- (6,10);
\draw [-] (9,0) -- (9,10);
\draw [-] (0,3) -- (10,3);
\draw [-] (0,6) -- (10,6);
\draw [-] (0,9) -- (10,9);
\foreach \Point in {(.5,3),(.5,6),(.5,9),
(1,1.5),(1,3),(1,4.5),(1,6),(1,7.5),(1,9),
(1.5,1),(1.5,2),(1.5,3),(1.5,4),(1.5,5),(1.5,6),(1.5,7),(1.5,8),(1.5,9),(1.5,10),
(2,1.5),(2,3),(2,4.5),(2,6),(2,7.5),(2,9),
(2.5,3),(2.5,6),(2.5,9),
(3,.5),(3,1),(3,1.5),(3,2),(3,2.5),(3,3),(3,3.5),(3,4),(3,4.5),(3,5),(3,5.5),(3,6),(3,6.5),(3,7),(3,7.5),(3,8),(3,8.5),(3,9),(3,9.5),(3,10),
(3.5,3),(3.5,6),(3.5,9),
(4,1.5),(4,3),(4,4.5),(4,6),(4,7.5),(4,9),
(4.5,1),(4.5,2),(4.5,3),(4.5,4),(4.5,5),(4.5,6),(4.5,7),(4.5,8),(4.5,9),(4.5,10),
(5,1.5),(5,3),(5,4.5),(5,6),(5,7.5),(5,9),
(5.5,3),(5.5,6),(5.5,9),
(6,.5),(6,1),(6,1.5),(6,2),(6,2.5),(6,3),(6,3.5),(6,4),(6,4.5),(6,5),(6,5.5),(6,6),(6,6.5),(6,7),(6,7.5),
(6,8),(6,8.5),(6,9),(6,9.5),(6,10),
(6.5,3),(6.5,6),(6.5,9),
(7,1.5),(7,3),(7,4.5),(7,6),(7,7.5),(7,9),
(7.5,1),(7.5,2),(7.5,3),(7.5,4),(7.5,5),(7.5,6),(7.5,7),(7.5,8),(7.5,9),(7.5,10),
(8,1.5),(8,3),(8,4.5),(8,6),(8,7.5),(8,9),
(8.5,3),(8.5,6),(8.5,9),
(9,.5),(9,1),(9,1.5),(9,2),(9,2.5),(9,3),(9,3.5),(9,4),(9,4.5),(9,5),(9,5.5),(9,6),(9,6.5),(9,7),(9,7.5),(9,8),(9,8.5),(9,9),(9,9.5),(9,10),
(9.5,3),(9.5,6),(9.5,9),
(10,1.5),(10,3),(10,4.5),(10,6),(10,7.5),(10,9)}
{
\fill \Point circle[radius=2pt];
}
\foreach \Point in {(2,10.5),(5,10.5),(8,10.5),(11,10.5),(11,7.5),(11,4.5),(11,1.5)}
{
\fill[red] \Point circle[radius=2pt];
}
\end{tikzpicture}
\caption{In $S_K(A,P)$ we are summing over the integer points in $\tilde B_T$ such that $K|k_1\cdots k_d$ (shown in black). We may do this by summing over shifted grids based at each of the points in the first box $\tilde B_K$ (shaded in gray). However, this introduces an error determined by $\sob{\infty}{0}(f)$ and the number of points in each of these shifted grids falling outside $B_T$ (shown in red). The number of such points can be bounded by $T^{d-1}K^{1-d}$, as we have seen before.}\label{fig:firstboxsumerror}
\end{center}
\end{figure}
From Corollary \ref{cor:arithequidistcorcmpt}, we know that at each basepoint $x_{\bf \tilde k} = x_0 u({\bf \tilde k})$, we have
\begin{align}
\sum_{K{\bf k}\in B_T} f(x_{\bf \tilde k} u(K{\bf k})) = \frac{T^d}{K^d} \int f dm_X +\mathcal{O}\left(T^{d-b/(d+1)}K^{-d^2/(d+1)}\sob{\infty}{\ell}(f)\right).\label{eq:whateva}
\end{align}
If we let
\begin{align}
G_d (K) := \#\{ {\bf k} \in \tilde B_K | k_1 \cdots k_d \equiv 0 \mod K \}\label{eq:Gddef}
\end{align}
then (\ref{eq:doubleSum}) together with (\ref{eq:whateva}) says that
\begin{align*}
S_K(A,P) = \sum_{\substack{{\bf k}\in \tilde B_T\\ K|k_1k_2\cdots k_d}} f(x_0 u({\bf k})) = \frac{G_d(K)}{K^d} \mathcal{X} + r(f,K,T)
\end{align*}
where $\mathcal{X} = T^d \int f dm_X$ and
\begin{align*}
| r(f,K,T)|&\ll G_d(K)T^{d-b/(d+1)}K^{-d^2/(d+1)}\sob{\infty}{\ell}(f).
\end{align*}
This suggests that our function $g(K)$ in Theorem \ref{thm:CombSieve} should be $G_d(K)/K^d$, but it remains to show that this function satisfies the sieve axioms (\ref{axiom1}) and (\ref{axiom3}) and that the corresponding remainders satisfy the condition in axiom (\ref{axiom2}) for appropriately chosen $D(\mathcal{X})=D(T)$. We will start with a lemma outlining some of the properties of the function $G_d$, the proof of which is given in Appendix \ref{app:Gd}.
\begin{lem}\label{lem:Gd}
For any integers $K,d\geq 1$, the following hold:
\begin{enumerate}[(i)]
\item (Iterated sum formula)
\[
G_d(K) = \sum^K_{k_{d-1} = 1}\cdots \sum^K_{k_1 = 1} \gcd(K, k_1\cdots k_{d-1}).
\]\label{Gdprops1}
\item (Recursive formula) Let ${\rm Id}^d(K) = K^d$. Then
\[
G_{d+1} = {\rm Id}^d\ast(\phi\cdot G_d).
\]\label{Gdprops2}
\item $G_d$ is multiplicative.\label{Gdprops3}
\item (Behavior at primes) Let $p$ be a prime. Then
\[
G_d(p) = p^d-(p-1)^d.
\]\label{Gdprops4}
\item (Dirichlet series bound) For real $x>e$ and $s<d$,
\[
\sum_{K\leq x} \frac{G_d(K)}{K^s} \ll_{s,d} x^{d-s}(\log x)^{d-1}.
\]\label{Gdprops5}
\end{enumerate}
\end{lem}
\begin{rem}
By convention, we let $G_1(K) = \gcd(K,1) = 1 = \#\{0<k\leq K | k \equiv 0 \mod K\}$, which trivially satisfies the above properties, so long as the empty product in (\ref{Gdprops1}) is properly interpreted to be 1.
\end{rem}
\begin{rem}
Notice that $G_2(K) = \sum^K_{j=1} \gcd(K, j)$ is Pillai's arithmetical function,\footnote{ The values of $G_2(K)$ for $K=1, 2, 3, \dots$ are given as sequence A018804 in the OEIS (see \cite{OEIS}).}
a multiplicative function first considered by Ces\`{a}ro and rediscovered by Pillai in \cite{Pillai}. For this function, property (\ref{Gdprops2}) is the well-known identity $G_2 = {\rm Id}\ast\phi$.\footnote{ For $G_2$, much else is known. In terms of Dirichlet convolution, we also have the useful identity $G_2 = \mu \ast ({\rm Id}\cdot \tau)$, where $\mu$ is the M\"{o}bius function and $\tau$ is the divisor function. In \cite{Broug1}, Broughan used this to derive a closed form for the Dirichlet series in terms of the Riemann zeta function, as well as an asymptotic formula for the partial sums of the Dirichlet series. The asymptotics for partial sums of the Dirichlet series were later refined by \cite{BordPillai}, \cite{Broug2}, and \cite{Tanigawa}.}
As noted in \cite{OEIS}, $G_2(K)$ counts the number of non-congruent solutions to the equation $k_1 k_2 \equiv 0 \mod K$. From the definition of $G_d$ in (\ref{eq:Gddef}), we can similarly see that $G_d(K)$ counts the number of non-congruent solutions to $k_1k_2\cdots k_d \equiv 0 \mod K$, so in this way $G_d$ can be considered a generalization of Pillai's arithmetical function.\footnote{ Other generalizations of Pillai's arithmetical function have been studied. Examples include \cite{LongGcdGeneral}, \cite{TothGcdGeneral}, \cite{BordGcdGeneral}, \cite{HaukGcdGeneral}, and \cite{Toth}; however, none of these include the generalization given here. In \cite{TothSimilar}, T\'{o}th considers a generalization that is very similar to ours, and in the notation of that paper, $G_d(K) = A_{d-1}(K)K^{d-1}$. Lemma \ref{lem:Gd} (\ref{Gdprops2}) and (\ref{Gdprops3}) can thus be considered corollaries of results proved in \cite{TothSimilar}, but we prove them in Appendix \ref{app:Gd} in order to keep the paper self-contained. T\'{o}th also gives a formula for the Dirichlet series of this generalization in terms of the Dirichlet series of a related arithmetic function; however, we will need an explicit estimate for the partial sums of the Dirichlet series where it does not converge, which we develop as property (\ref{Gdprops5}) of Lemma \ref{lem:Gd}.}
In addition to those presented here, there are undoubtedly many other useful properties and interpretations of the generalized Pillai's functions $G_d$, which could be an interesting area of future study.\footnote{ For example, for $K$ squarefree, we have the bound $G_d(K) \leq K^{d-1} d^{\omega(K)}$, where $\omega(K)$ counts the number of (distinct) primes dividing $K$ (since $K$ is squarefree, we have $\omega(K)=\Omega(K)$). This bound can be derived from the formula for primes along with multiplicativity and can be used along with known estimates for $\omega(K)$ as an alternative to Lemma \ref{lem:Gd} (\ref{Gdprops5}) to verify sieve axiom (\ref{axiom2}).}
\end{rem}
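The elementary properties in Lemma \ref{lem:Gd} are easy to confirm numerically for small arguments. The following Python sketch (for illustration only; the actual proofs are in Appendix \ref{app:Gd}) brute-forces the definition (\ref{eq:Gddef}) and checks the prime formula, multiplicativity, and the classical identity $G_2 = {\rm Id}\ast\phi$:

```python
from itertools import product
from math import gcd, prod

def G(d, K):
    """G_d(K) = #{k in (0, K]^d : k_1 * ... * k_d = 0 mod K}, by direct enumeration."""
    return sum(1 for k in product(range(1, K + 1), repeat=d) if prod(k) % K == 0)

def totient(n):
    """Euler's phi function, by direct count."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def G2_convolution(K):
    """Right-hand side of the identity G_2 = Id * phi (Dirichlet convolution)."""
    return sum((K // e) * totient(e) for e in range(1, K + 1) if K % e == 0)
```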
We can now verify that the sieve axioms in Theorem \ref{thm:CombSieve} are satisfied, which gives us the following theorem.
\begin{thm}
Let $u$ be a $d$-dimensional abelian horospherical flow on $X=\Gamma\backslash{\rm SL}_n(\R)$ for $\Gamma$ cocompact, and let $P$ be the product of primes less than $T^\alpha$ for $\alpha<b/(9d^2)$, where $b$ is the constant from Lemma \ref{lem:VenkLemcmpt}. Then for any $x_0 \in X$, positive $f\in C^\infty(X)$, and $T$ large enough (depending on $n$, $d$, $\Gamma$, and $f$), we have
\[
\sum_{\substack{{\bf k}\in B_T\\ \gcd(k_1\cdots k_d, P) = 1}} f(x_0u({\bf k})) \asymp_{\Gamma} \left(\frac{T}{\log T}\right)^d\int f dm_X.
\]
\label{thm:SAPcmpt}
\end{thm}
\begin{rem}
It is not clear from the statement of Theorem \ref{thm:CombSieve}, but from \cite{CombSieve} it can be seen that the dependence on $f$ arising from the implicit constant in sieve axiom (\ref{axiom2}) can be entirely absorbed by the implicit constant determining how large we require $T$ to be to get the result. As usual, the implicit constant in the conclusion of this theorem depends also on $n$ and $d$, and the dependence on $\Gamma$ may be removed if $n\geq 3$ or if $\Gamma$ is a congruence lattice.
\end{rem}
\begin{rem}
Let $\phi(x,y)$ be the number of positive integers $\leq x$ not divisible by any prime $\leq y$ for $x\geq y\geq 2$. It is known that
\[
\phi(x,y) = \frac{x\omega(\log x/\log y)-y}{\log y} + \mathcal{O}\left(\frac{x}{(\log y)^2}\right)
\]
where $\omega:[1,\infty)\to[1/2,1]$ is the Buchstab function.
Thus, the number of integers in $[0,T]$ not divisible by any prime less than $T^\alpha$ for $\alpha<1$ is given by
\[
\phi(T,T^\alpha) = \frac{\omega(1/\alpha)T}{\alpha\log T} - \frac{T^\alpha}{\alpha\log T} + \mathcal{O}\left(\frac{T}{(\alpha \log T)^2}\right).
\]
Hence the number of points ${\bf k}\in B_T$ such that $\gcd(k_1\cdots k_d, P) = 1$, where $P$ is the product of primes less than $T^\alpha$, is $\phi(T,T^\alpha)^d$, which grows asymptotically like $(T/\log T)^d$ as $T\to \infty$. Although our result above only provides upper and lower bounds of this order, it hints at underlying equidistribution behavior.
\end{rem}
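The counting function $\phi(x,y)$ in this remark can also be computed exactly by Legendre-style inclusion-exclusion over squarefree products of the primes up to $y$. A small Python sketch (illustrative only) comparing the two computations:

```python
from math import prod

def primes_upto(y):
    """Primes p <= y, by trial division up to sqrt(p)."""
    return [p for p in range(2, y + 1)
            if all(p % q for q in range(2, int(p ** 0.5) + 1))]

def phi_direct(x, y):
    """phi(x, y): number of positive n <= x with no prime factor <= y."""
    ps = primes_upto(y)
    return sum(1 for n in range(1, x + 1) if all(n % p for p in ps))

def phi_legendre(x, y):
    """Same count via inclusion-exclusion: sum over subsets S of primes <= y
    of (-1)^|S| * floor(x / prod(S))."""
    ps = primes_upto(y)
    total = 0
    for mask in range(1 << len(ps)):
        q = prod(p for i, p in enumerate(ps) if mask >> i & 1)
        total += (-1) ** bin(mask).count("1") * (x // q)
    return total
```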
\begin{proof}
We need to show that sieve axioms (\ref{axiom1}), (\ref{axiom2}), and (\ref{axiom3}) are satisfied for
\[
S_K(A,P) := \sum_{\substack{{\bf k}\in \tilde B_T\\K|k_1\cdots k_d}} f(x_0 u({\bf k})) = g(K) \mathcal{X} + r(f,K,T)
\]
where $g(K) = G_d(K)/K^d$, $\mathcal{X}=T^d\int f dm_X$, and
\[| r(f,K,T)|\ll G_d(K)T^{d-b/(d+1)}K^{-d^2/(d+1)} \sob{\infty}{\ell}(f)
\]
(see discussion at the beginning of the section).
To verify sieve axiom (\ref{axiom1}), note that since $G_d(K)$ is multiplicative by Lemma \ref{lem:Gd} (\ref{Gdprops3}), it follows that $g(K) = G_d(K)/K^d$ is multiplicative. Furthermore, by Lemma \ref{lem:Gd} (\ref{Gdprops4}), we know that at primes
\begin{align*}
0< g(p) &= \frac{p^d-(p-1)^d}{p^d}\\
&= 1 - \left(\frac{p-1}{p}\right)^d\\
&\leq 1 - \left(\frac{2-1}{2}\right)^d = 1-\frac{1}{2^d}
\end{align*}
since $p\geq2$. So sieve axiom (\ref{axiom1}) is satisfied with, for example, $c_1 = 2^{d+1}$.
To verify sieve axiom (\ref{axiom2}), observe that by Lemma \ref{lem:Gd} (\ref{Gdprops5}), we know that for $D$ large enough,
\begin{align*}
\sum_{K<D} |r(f,K,T)| &\ll T^{d-b/(d+1)}\sob{\infty}{\ell}(f)\sum_{K<D}\frac{G_d(K)}{K^{d^2/(d+1)}}\\
&\ll_f T^{d-b/(d+1)} D^{d/(d+1)}( \log D)^{d-1}.
\end{align*}
Now if we let $D=T^\eta$ for any $\eta<b/d$, say $\eta=b/d - 2(d+1)\epsilon$ for some $\epsilon>0$, then we have
\begin{align*}
\sum_{K<D} |r(f,K,T)| &\ll_{f,\epsilon} T^{d-2d\epsilon} ( \log T)^{d-1}.
\end{align*}
But since $\log T$ asymptotically grows more slowly than any positive power of $T$, we can say that for $T$ large enough, $\log T \ll_{\epsilon} T^{d\epsilon/(d-1)}$. Hence
\[
\sum_{K<D} |r(f,K,T)| \ll_{f,\epsilon} T^{d-2d\epsilon}(T^{d\epsilon/(d-1)})^{d-1} = (T^d)^{1-\epsilon} \ll_f \mathcal{X}^{1-\epsilon}.
\]
To verify sieve axiom (\ref{axiom3}), notice that (by the binomial theorem)
\begin{align}
g(p) = \frac{p^d-(p-1)^d}{p^d} = \frac{d}{p} - \sum_{i=2}^d \frac{a_i}{p^i} \label{eq: A31}
\end{align}
where $a_i = (-1)^i {d \choose i}$. Since $\sum_{j=1}^\infty\log(j)/j^i$ converges for any $i>1$, we have that
\begin{align}
\left|\sum_{w\leq p \leq z} \sum_{i=2}^d \frac{a_i\log p}{p^i}\right|
\leq \sum_{i=2}^d|a_i|\sum_{j=1}^\infty \frac{\log j}{j^i} = C_2 \label{eq: A32}
\end{align}
and by a corollary of the Prime Number Theorem, we know that
\begin{align*}
\sum_{p\leq x} \frac{\log p}{p} = \log(x) + \mathcal{O}(1).
\end{align*}
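This Mertens-type estimate is easy to witness numerically; the following Python sketch (an illustration only, not part of the argument) checks that $\sum_{p\leq x}\log p/p - \log x$ stays bounded for moderate $x$:

```python
from math import log

def primes_upto(x):
    """Primes p <= x, by trial division up to sqrt(p)."""
    return [p for p in range(2, x + 1)
            if all(p % q for q in range(2, int(p ** 0.5) + 1))]

def mertens_error(x):
    """sum_{p <= x} log(p)/p - log(x); Mertens' theorem says this is O(1)."""
    return sum(log(p) / p for p in primes_upto(x)) - log(x)
```

(In fact the difference converges to a constant $\approx -1.33$ as $x\to\infty$.)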
Hence,
\begin{align*}
\sum_{p\leq z} \frac{\log p}{p} - \sum_{p< w} \frac{\log p}{p} &= \log(z)-\log(w) + \mathcal{O}(1)\\
\sum_{w\leq p \leq z} \frac{\log p}{p}&= \log\frac{z}{w} +\mathcal{O}(1)
\end{align*}
i.e., there exists $C_2'$ such that
\begin{align}
\left| \sum_{w\leq p \leq z} \frac{\log p}{p} - \log\frac{z}{w}\right| \leq C_2' \label{eq: A33}
\end{align}
for all $2\leq w\leq z$. Putting (\ref{eq: A31}), (\ref{eq: A32}), and (\ref{eq: A33}) together, we see that
\begin{align*}
\left| \sum_{w\leq p \leq z} g(p) \log p - d\log\frac{z}{w}\right| &\leq d \left| \sum_{w\leq p \leq z} \frac{\log p}{p} - \log\frac{z}{w}\right| + \left|\sum_{w\leq p \leq z} \sum_{i=2}^d \frac{a_i\log p}{p^i}\right|\\
&\leq dC_2'+C_2
\end{align*}
which shows that axiom (\ref{axiom3}) is satisfied with sieve dimension $r=d$ and $c_2 = dC_2'+C_2$.
Since we have demonstrated that sieve axioms (\ref{axiom1}), (\ref{axiom2}), and (\ref{axiom3}) hold, we have the conclusion of Theorem \ref{thm:CombSieve}, which implies our result.
\end{proof}
Notice that if an integer $k<T$ has no prime factors less than $T^\alpha$, then it must have fewer than $1/\alpha$ prime factors total. Hence, if we take $f$ to be a positive function supported on any small neighborhood, Theorem \ref{thm:SAPcmpt} tells us that we can take $T$ large enough so that averaging $f$ over integer points in $B_T$ with no prime factors less than $T^\alpha$ has a positive lower bound. This means that the set of $(1/\alpha)$-almost-prime times hitting any neighborhood is nonempty, which gives us the theorem from the introduction with $M=1/\alpha$.
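The elementary observation above ($k < T$ with no prime factor below $T^\alpha$ forces $\Omega(k) < 1/\alpha$) can be checked numerically; with $T = 10^4$ and $\alpha = 1/4$, every such integer has at most three prime factors counted with multiplicity. A brute-force Python sketch (illustrative only):

```python
def Omega(n):
    """Number of prime factors of n, counted with multiplicity."""
    count, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            n //= p
            count += 1
        p += 1
    return count + (1 if n > 1 else 0)

def smallest_prime_factor(n):
    """Smallest prime dividing n (returns n itself when n is prime)."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p
        p += 1
    return n

def max_omega_rough(T, z):
    """Maximum of Omega(k) over 1 < k < T whose prime factors are all >= z."""
    return max(Omega(k) for k in range(2, T) if smallest_prime_factor(k) >= z)
```

The maximum is attained at $11^3 = 1331$, the smallest cube of a prime exceeding $10$.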
\begin{cor*}[Theorem \ref{thm:CmptThm}]
Let $u({\bf t})$ be an abelian horospherical flow of dimension $d$ on $X=\Gamma\backslash {\rm SL}_n(\R)$ for $\Gamma$ cocompact.
Then there exists a constant $M$ (depending only on $n$, $d$, and $\Gamma$) such that for any
$x_0\in X$, the set
\[
\{x_0u(k_1, k_2, \cdots, k_d) \hspace{2pt}\vert\hspace{2pt} k_i\in\mathbb{Z} \text{ has fewer than } M \text{ prime factors}\}
\]
is dense in $X$.
\end{cor*}
\subsection{The Space of Lattices}
Now consider $X = \Gamma\backslash G$ for the non-cocompact lattice $\Gamma={\rm SL}_n(\Z)$. Since we no longer have a uniform rate of equidistribution for our abelian horospherical flow $u({\bf s})$, we will consider a basepoint $x_0= \Gamma g_0 \in X$ satisfying a Diophantine condition of the following form.
\begin{defn}
We say that $x=\Gamma g$ is \textit{strongly polynomially} $\delta$\textit{-Diophantine} if there exists a sequence $T_i\to\infty$ as $i\to\infty$ such that
\[
\inf_{\substack{w\in \Lambda^j(\mathbb{Z}^n)\setminus\{0\}\\j=1, \cdots, n-1}} \sup_{{\bf t}\in [0,T_i]^d}\nn{wgu({\bf t})}>T_i^\delta
\]
for all $i\in\mathbb{N}$.\label{def:Dio}
\end{defn}
The motivation for this definition is that, as in the compact setting, we will want to apply sieving to learn about integer points having few prime factors. However, unlike in the compact case, we do not have a uniform rate of equidistribution, so we must consider the effect of the basepoint. For a given time-scale $T$, to obtain information about almost-primes of a certain order, we would want $R$ in the basepoint condition (\ref{eq:dio1}) to look like a small power of $T$ (say $T^\delta$). However, a theorem like Theorem \ref{thm:SAPcmpt} will require $T$ to be ``large enough,'' which depends on the function $f$, and so any fixed time-scale $T$ is insufficient. Moreover, the constant $\delta$ we are able to take at one time-scale may not work for a different time-scale, which affects the number of prime factors we allow for our almost-prime points. The condition given in Definition \ref{def:Dio} ensures that for any function (hence any neighborhood in $X$) we will be able to find a time-scale large enough so that our sieving provides positive information about almost-primes of the same, fixed order.
Before moving on to the main theorem of this section, we briefly remark that this definition is a meaningful one. In view of results in \cite{KMLogLaws}, we see that not only do such points exist, but any generic point for the flow $u$ will satisfy this definition for some positive $\delta$.
\begin{thm}\label{thm:SAPnoncmpt}
Let $u$ be an abelian horospherical flow on $X={\rm SL}_n(\Z)\backslash{\rm SL}_n(\R)$ and let $P$
be the product of primes less than $T^\alpha$ for $\alpha<\delta bn/(9d(d^2+bn\kappa))$, where $b$ is the constant from Lemma \ref{lem:VenkLem} and $\kappa = \min(m,n-m)$ for $m$ as in (\ref{eq:U}). Furthermore, let $x_0\in X$ be strongly polynomially $\delta$-Diophantine. Then for any positive $f\in C^\infty_c(X)$ there exists a sequence $T_i\to\infty$ as $i\to\infty$ such that
\[
\sum_{\substack{{\bf k}\in B_{T_i}\\ \gcd(k_1\cdots k_d, P) = 1}} f(x_0u({\bf k})) \asymp
\left(\frac{T_i}{\log T_i}\right)^d\int f dm_X.
\]
\end{thm}
\begin{proof}
Let $f\in C^\infty_c(X)$, $f\geq 0$, and let $u$ be an abelian horospherical flow as given in (\ref{eq:U}) of Section \ref{sect:EquiArith}.
As in the compact setting, we want to use our equidistribution theorem for arithmetic sequences to say that
\begin{align}
S_K(A,P):&=\sum_{\substack{{\bf k}\in B_{T_i}\\ K|k_1\cdots k_d}} f(x_0u({\bf k}))\nonumber\\
&= \sum_{\substack{{\bf \tilde k}\in \tilde B_K\\ K|\tilde k_1\cdots\tilde k_d}} \left( \sum_{K{\bf k} \in B_{T_i}} f(x_0u({\bf \tilde k})u(K{\bf k})) + \mathcal{O}(T_i^{d-1}K^{1-d})\right)\label{eq:doublesumnoncmpt}\\
&= g(K) \mathcal{X} + r(f,K,T_i)\nonumber
\end{align}
where $g(K)=G_d(K)/K^d$, $\mathcal{X}=T_i^d \int f dm_X$, and the error terms can be suitably controlled.
Unfortunately, we cannot apply the same equidistribution result to the shifted basepoints $x_0u({\bf \tilde k})$ since they will not necessarily satisfy the same Diophantine condition. However, since $K$ is understood to be small in comparison to the $T_i$, all of the points in $B_K$ lie comparatively close to $x_0$. Then since the Diophantine property varies continuously, we expect the points in this region to satisfy a Diophantine condition not much worse than that of $x_0$, and in fact we can make this quantitative.
Observe that if $x_0$ is strongly polynomially $\delta$-Diophantine, it means that condition (\ref{eq:dio1}) holds for the sequence of parameters $T=T_i$ and $R=T_i^{\delta/q}$, where $q=\sum_{\lambda_i<0}-m_i\lambda_i = d/n$ for abelian $u$ of this form.
That is, for all $j\in\{1,\cdots,n-1\}$ and $w\in \Lambda^j(\mathbb{Z}^n)\setminus\{0\}$, we have
\begin{align}
\exists\hspace{2pt} {\bf t}\in [0,T_i]^d \text{ s.t. } \nn{wg_0u({\bf t})}=\nn{wg_0u({\bf \tilde k})u({\bf t})u({\bf -\tilde k})}\geq T_i^\delta. \label{eq:conjcond}
\end{align}
Recall that any $w \in \Lambda^j(\mathbb{R}^n)$ can be written as a sum $w = \sum_I w_I e_I$ over multi-indices $I = (i_1, \cdots, i_j)$ with $0<i_j<\cdots<i_1<n$, coefficients $w_I\in\mathbb{R}$, and basis elements $e_I = e_{i_1}\wedge\cdots\wedge e_{i_j}$ where $\{e_i\}_{1\leq i \leq n}$ is the standard basis on $\mathbb{R}^n$. Recall also that the norm above is defined by
\begin{align*}
\nn{w} = \max_I |w_I|
\end{align*}
and that $G$ acts linearly on $\Lambda^j(\mathbb{R}^n)$ by sending a basis vector $e_{i_1}\wedge \cdots\wedge e_{i_j}$ to
\begin{align*}
(e_{i_1}\wedge \cdots\wedge e_{i_j}) g = (e_{i_1}g)\wedge \cdots\wedge(e_{i_j}g).
\end{align*}
Since our abelian horospherical subgroup has the form given in (\ref{eq:U}), we can write an arbitrary $u\in B_K^{-1} = u([-K,0]^d)$ as
\begin{align}
u =
\left(\begin{matrix}
&&\vline& a_{1(m+1)}&\cdots&a_{1n}\\
& I_{m} & \vline& \vdots& & \vdots\\
&&\vline& a_{m(m+1)}&\cdots&a_{mn}\\
\hline
&0&\vline&&I_{n-m}&\\
\end{matrix}
\right)
\end{align}
where $a_{ij}\in[-K,0]$ for all $1\leq i \leq m$ and $m+1\leq j \leq n$. One may verify that
\begin{align*}
e_i u = e_i + a_{i(m+1)} e_{m+1} + \cdots + a_{in} e_n
\end{align*}
for $1\leq i \leq m$, and
\begin{align*}
e_i u = e_i
\end{align*}
for $m+1\leq i \leq n$. Hence, when we take wedge products $(e_{i_1}u)\wedge \cdots\wedge(e_{i_j}u)$, we cannot get a coefficient of order greater than $K^m$, since only the first $m$ transformed basis vectors have nontrivial coefficients and none of these coefficients have magnitude greater than $K$. On the other hand, we cannot get a coefficient of order larger than $K^{n-m}$, since only the basis vectors $e_{m+1}$ through $e_{n}$ carry nontrivial coefficients. Thus if we let $\kappa:=\min\{m, n-m\}$, we find that
\begin{align*}
\nn{(e_{i_1}u)\wedge \cdots\wedge(e_{i_j}u)}\ll K^\kappa.
\end{align*}
Then for general $w\in \Lambda^j(\mathbb{R}^n)$ and $u\in B_K$, we have
\begin{align*}
\nn{wu}\ll K^\kappa\nn{w}.
\end{align*}
Thus from (\ref{eq:conjcond}), we can say that for any $w \in \Lambda^j(\mathbb{Z}^n)\setminus\{0\}$, $j\in\{1,\cdots, n-1\}$, there exists $ {\bf t}\in [0,T_i]^d$ such that
\begin{align*}
K^\kappa \nn{wg_0u({\bf \tilde k})u({\bf t})} &\gg\nn{wg_0u({\bf \tilde k})u({\bf t})u({\bf -\tilde k})}\geq T_i^\delta
\end{align*}
so
\begin{align*}
\nn{wg_0u({\bf \tilde k})u({\bf t})} &\gg T_i^\delta/K^\kappa.
\end{align*}
That is, for any $u({\bf \tilde k})\in B_K$, the shifted basepoint $x_0u({\bf \tilde k})$ satisfies a Diophantine condition of the form (\ref{eq:dio1}) with new parameter proportional to $(T_i^\delta/K^\kappa)^{1/q}=(T_i^\delta/K^\kappa)^{n/d}$. From Corollary \ref{cor:arithequidistcor}, this implies that for $T_i$ large enough (i.e. for $i$ large enough), we have equidistribution with \begin{align}
\sum_{K{\bf k}\in B_{T_i}} f(x_0 u({\bf \tilde k}) u(K{\bf k})) = \frac{T_i^d}{K^d}\int_X f dm_X+\mathcal{O}_f(T_i^d(T_i^\delta/K^\kappa)^{-nb/d(d+1)}K^{-d^2/(d+1)})\label{eq:arithEquiNoncmpt}
\end{align}
for any ${\bf \tilde k}\in \tilde B_K$.
Using this in (\ref{eq:doublesumnoncmpt}), we find that
\[
S_K(A,P) = g(K)\mathcal{X}+r(f,K,T_i)
\]
where
\[
|r(f,K,T_i)|\ll G_d(K)T_i^{d-\delta nb/d(d+1)}K^{(\kappa nb-d^3)/d(d+1)}\sob{\infty}{\ell}(f).
\]
Since we have already shown that the function $g(K) = G_d(K)/K^d$ satisfies sieve axioms (\ref{axiom1}) and (\ref{axiom3}) with sieve dimension $d$, it remains to verify sieve axiom (\ref{axiom2}).
From Lemma \ref{lem:Gd} (\ref{Gdprops5}), we know that
\begin{align*}
\sum_{K<D} |r(f,K,T_i)| &\ll_f T_i^{d-\delta nb/d(d+1)}\sum_{K<D} G_d(K)/K^{(d^3-\kappa nb)/d(d+1)}\\
&\ll T_i^{d-\delta nb/d(d+1)} D^{(d^2+\kappa nb)/d(d+1)}(\log D)^{d-1}.
\end{align*}
Now let $D = T_i^\eta$ for $\eta < \delta bn/(d^2+\kappa bn)$, say $\eta = \delta bn/(d^2+\kappa bn) - 2\epsilon d^2(d+1)/(d^2+\kappa nb)$. As before, we know that for large enough $T_i$, $\log T_i \ll_\epsilon T_i^{d\epsilon/(d-1)}$. This is enough to ensure that for $T_i$ large enough, the errors satisfy
\[
\sum_{K<D} |r(f,K,T_i)| \ll_{f,\epsilon} T_i^{d(1-\epsilon)}\ll_f \mathcal{X}^{1-\epsilon}.
\]
Thus for given $f$, the conclusion of Theorem \ref{thm:CombSieve} holds for all $i$ large enough, which gives us Theorem \ref{thm:SAPnoncmpt}.
\end{proof}
As before, if we consider positive $f$ supported on a neighborhood of $X$, the above theorem tells us that we may take $i$ large enough so that we have a positive lower bound on averages over almost-prime points with fewer than $1/\alpha = 9d(d^2+\kappa nb)/(\delta bn)$ prime factors, hence such points are dense in $X$. This gives us the theorem for the space of lattices from the introduction.\footnote{ Strictly speaking, if we let $M_\delta = 1/\alpha$, this also has dependence on $\kappa$, which cannot be explicitly reduced to dependence on $n$ and $d$. However, if we want to eliminate this dependence, we may replace $\kappa$ with $n/2$, since $\kappa\leq n/2$ in all cases.}
\begin{cor*}[Theorem \ref{thm:NoncmptThm}]
Let $u({\bf t})$ be an abelian horospherical flow of dimension $d$ on $X={\rm SL}_n(\Z)\backslash{\rm SL}_n(\R)$ and let $x_0\in X$ be strongly polynomially $\delta$-Diophantine for some $\delta>0$. Then there exists a constant $M_\delta$ (depending on $\delta$, $n$, and $d$) such that
\[
\{x_0u(k_1, k_2, \cdots, k_d) \hspace{2pt}\vert\hspace{2pt} k_i\in\mathbb{Z} \text{ has fewer than } M_\delta \text{ prime factors}\}
\]
is dense in $X$.
\end{cor*}
\section{Introduction}
Chirality is associated with mirror-symmetry breaking. It is ubiquitous in nature and fundamental to the understanding of natural processes.
For chiral molecules, this mirror-symmetry breaking leads to two versions of a molecule, the left and right enantiomers.
Today, characterising molecular chirality is a dynamic and multidisciplinary research field with an expanding arsenal of techniques. In the gas phase, these include techniques such as Coulomb explosion imaging \cite{pitzer2013}, microwave detection \cite{patterson2013}, the combination of mass spectrometry with multiphoton and vibrational excitation techniques \cite{lux2012,rhee2009}, high harmonic spectroscopy \cite{cireasa2015,smirnova_opportunities_2015,ayuso2018}, and photoelectron circular dichroism (PECD) \cite{ritchie1976,powis2000a,garcia2013}.
The growing interest in the response of chiral molecules in the time domain has motivated recent and ongoing efforts to develop non-linear chiroptical techniques in the optical \cite{fischer2005,abramavicius2006,choi_two-dimensional_2008,fidler2014} and XUV \cite{rouxel2017photoinduced} domains to that end.
PECD relies on the difference in angle-resolved photoelectron emission for left and right circularly polarized light.
Because the photoelectron emission direction provides an extra vector observable, the dichroism survives orientational averaging even in the electric dipole approximation; consequently the strength of the dichroism, on the order of $10$\% of the total signal \cite{powis2000}, is significantly greater than that of techniques reliant on magnetic dipole effects.
In both single and multi-photon PECD, the highest PECD signal is seen in the low-energy region of the spectrum.
PECD shows a strong dependence on the initial, intermediate, and final states and is a structurally sensitive probe. This sensitivity is seen in the striking difference observed between camphor and fenchone due to methyl group substitution, even though the bound states involved and the photoelectron spectra hardly change \cite{powis2008}, and in the pronounced dependence of the PECD signal on molecular geometry and its sensitivity to non-Franck-Condon effects \cite{garcia2013}.
In contrast to conventional PECD, photoexcitation circular dichroism (PXECD) requires the coherent population of multiple states, and hence the dichroic signal displays quantum beating with respect to the delay between the excitation and ionization pulses \cite{beaulieu2016,beaulieu2018}.
PXECD is thus a form of time-resolved photoelectron spectroscopy (TRPES) and can be used to investigate the time evolution of various intramolecular processes (for a review see \cite{stolow_time-resolved_2008-1}).
TRPES has its origins in studies in which atomic hyperfine levels were coherently excited and probed by ionization at nanosecond delay using linear pulses \cite{strand_influence_1978,leuchs_quantum_1979,chien_angular_1983}.
This led to the observation of quantum beats in the photoelectron angular distributions and allowed information on the ionization continuum and the hyperfine interaction to be extracted.
Later work extended this concept to the hyperfine levels of the NO molecule \cite{reid_observation_1994}.
As shorter pulses became available, experimental and numerical studies involving the coherent excitation of rotational states in the first step examined the influence of rotation-vibration coupling \cite{reid_photoelectron_1999,althorpe_predictions_2000} and non-adiabatic dynamics \cite{underwood_time-resolved_2000} in small molecules at picosecond to femtosecond time resolution.
Recent TRPES studies include joint experimental and theoretical work to time-resolve valence electron dynamics during a chemical reaction \cite{hockett_time-resolved_2011}, and a theoretical study of non-adiabatic dynamics in the vicinity of a conical intersection \cite{bennett_nonadiabatic_2016}.
We anticipate PXECD to be a similarly useful tool with the added bonus of sensitivity to the chirality of the studied system.
In this paper we extend and generalise our previous theoretical descriptions of PXECD, combining the best aspects of our initial angular algebra based approach \cite{PXECDp1} and our later approach in \cite{beaulieu2016,beaulieu2018}, and offer a complementary perspective on this phenomenon.
\section{Theory}
As in our previous works \cite{PXECDp1,beaulieu2016,beaulieu2018}, we model the interaction
between the electric field and the molecule using first
order perturbation theory and the dipole approximation.
We define the pump field in the laboratory reference frame as:
\begin{equation}
\mathbf{E}^{\mathrm{L}}\left(t\right)=\frac{1}{\sqrt{2}}F\left(t\right)\hat{\bm{\varepsilon}}^{\mathrm{L}}\mathrm{e}^{-\mathrm{i}\left(\omega t+\delta\right)}+\mathrm{c.c.}
\end{equation}
where $\omega$ is the carrier frequency, $F(t)$ includes the field amplitude and the envelope,
and the carrier-envelope phase $\delta$ determines the orientation of the electric field
vector at the moment $t=0$. Finally, the helicity $\sigma=\pm 1$ determines whether the field is left or right circularly polarized, and the polarization of the field is expressed in the spherical basis
\begin{equation}
\hat{\varepsilon}_{-1}^{\mathrm{L}}=\frac{1}{\sqrt{2}}(\hat{x}^{\mathrm{L}}-\mathrm{i}\hat{y}^{\mathrm{L}}),
\qquad \hat{\varepsilon}_{0}^{\mathrm{L}}= \hat{z}^{\mathrm{L}}, \qquad \hat{\varepsilon}_{+1}^{\mathrm{L}}=\frac{-1}{\sqrt{2}}(\hat{x}^{\mathrm{L}}+\mathrm{i}\hat{y}^{\mathrm{L}}).
\end{equation}
The superscripts $\mathrm{L}$ and $\mathrm{M}$ indicate vectors are in the laboratory and molecular frame respectively.
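As a quick numerical check (a sketch we add here, not part of the original derivation), the spherical basis vectors above are orthonormal under the Hermitian inner product and satisfy the standard conjugation property $\hat{\varepsilon}_{p}^{*}=(-1)^{p}\hat{\varepsilon}_{-p}$:

```python
import numpy as np

# Spherical basis vectors expressed in the Cartesian (x, y, z) basis,
# following the conventions in the text.
eps = {
    -1: np.array([1, -1j, 0]) / np.sqrt(2),  # eps_{-1} = (x - i y)/sqrt(2)
     0: np.array([0, 0, 1], dtype=complex),  # eps_{0}  = z
    +1: -np.array([1, 1j, 0]) / np.sqrt(2),  # eps_{+1} = -(x + i y)/sqrt(2)
}

for p in (-1, 0, 1):
    for q in (-1, 0, 1):
        # Orthonormality under the Hermitian inner product.
        inner = np.dot(eps[p], eps[q].conj())
        assert np.isclose(inner, 1.0 if p == q else 0.0)
    # Conjugation relates the +/- helicity components.
    assert np.allclose(eps[p].conj(), (-1) ** p * eps[-p])
```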
The transformation
of vectors from the lab frame to the molecular frame is performed
according to $\mathbf{v}^{\mathrm{M}}=\mathbf{D}^{\dagger}\left(\varrho\right)\mathbf{v}^{\mathrm{L}}$, where $\varrho\equiv\left(\alpha,\beta,\gamma\right)$ denotes the Euler
angles in the active $z$-$y$-$z$ convention.
In the angular momentum basis, this rotation operator corresponds to the Wigner rotation matrix; we use its conjugate transpose here to account for the usual convention that the Wigner rotation matrix transforms basis vectors covariantly.
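The frame transformation can be sketched numerically with explicit rotation matrices (the function names below are ours; for ordinary real 3-vectors the conjugate-transpose Wigner matrix reduces to the transpose of the Cartesian rotation matrix):

```python
import numpy as np

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def Ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def lab_to_mol(v_lab, alpha, beta, gamma):
    """Active z-y-z rotation R = Rz(alpha) Ry(beta) Rz(gamma); the
    inverse (transpose) maps lab-frame vectors to the molecular frame."""
    R = Rz(alpha) @ Ry(beta) @ Rz(gamma)
    return R.T @ v_lab

v_lab = np.array([0.0, 0.0, 1.0])
v_mol = lab_to_mol(v_lab, 0.3, 1.1, -0.7)

# The transformation is orthogonal, so norms are preserved, and the
# identity rotation leaves the vector unchanged.
assert np.isclose(np.linalg.norm(v_mol), 1.0)
assert np.allclose(lab_to_mol(v_lab, 0.0, 0.0, 0.0), v_lab)
```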
Using perturbation theory, after the end of the pump pulse of duration $T_{1}$,
we find the wave function at a time $\tau$:
\begin{equation}
\psi_{\varrho}\left(\tau\right)=c_{0}\psi_{0}\mathrm{e}^{-\mathrm{i}\omega_{0}\tau}+\sum_{i\geq1}c_{i}\left(\varrho\right)\psi_{i}\mathrm{e}^{-\mathrm{i}\omega_{i}\tau},
\end{equation}
where $c_{0}\approx1$ and the expressions for the excitation amplitudes are standard:
\begin{equation}
c_{i}\left(\varrho\right)=\mathrm{i}\left[\mathbf{d}_{i0}^\mathrm{M}\cdot \mathbf{D}^\dagger\left(\varrho\right)\hat{\bm{\varepsilon}}^{\mathrm{L}}\right]\mathcal{E}\left(\omega_{i0}\right)
\end{equation}
Here $i$ labels the intermediate excited states, and $\mathbf{d}_{i0}^{\mathrm{M}}$ are the transition dipole matrix elements from the ground state to these states, expressed in the molecular-frame spherical basis.
The excitation amplitude is proportional to the spectral component of the pump at the corresponding transition frequency $\omega_{i0}$, $\mathcal{E}\left(\omega_{i0}\right)$.
To calculate the photoelectron angular distribution (PAD) resulting from the photoionization of excited states we need to consider the bound-free transitions due to the probe field with polarization $\hat{\bm{\xi}}^\mathrm{L}$.
The population amplitude of a continuum state $\mathbf{k}^\mathrm{M}$ after the end of the
probe pulse, assuming that the pump and the probe do not overlap, is
\begin{eqnarray}
c(\mathbf{k}^\mathrm{M};\varrho)&=&\mathrm{i} \sum_ic_{i}(\varrho)\mathrm{e}^{-\mathrm{i}\omega_{i}\tau}\left[\mathbf{d}_{i}^\mathrm{M}(\mathbf{k}^\mathrm{M})\cdot \mathbf{D}^\dagger\left(\varrho\right)\hat{\bm{\xi}}^{\mathrm{L}}\right]\mathcal{E}^{\prime}\left(\omega^{\prime}_{\mathbf{k}i}\right)
\label{ProbeAmpl}
\end{eqnarray}
where $\mathcal{E}^{\prime}\left(\omega^{\prime}_{\mathbf{k}i}\right)$ is the spectral amplitude of the probe at the
required transition frequency
and $\mathbf{d}_{i}^\mathrm{M}(\mathbf{k}^\mathrm{M})$ are the bound-free transition dipoles in the molecular frame.
In this work we will consider electronic states only.
The molecular frame PAD is proportional to
\begin{eqnarray}
\frac{d\sigma}{d\mathbf{k}^\mathrm{M}}(\varrho,\tau)&\propto& \left| \sum_i\mathrm{e}^{-\mathrm{i}\omega_{i}\tau}\left[\mathbf{d}_{i}^\mathrm{M}(\mathbf{k}^\mathrm{M})\cdot\mathbf{D}^\dagger\left(\varrho\right)\hat{\bm{\xi}}^{\mathrm{L}}\right]\left[\mathbf{d}_{i0}^\mathrm{M}\cdot\mathbf{D}^\dagger\left(\varrho\right)\hat{\bm{\varepsilon}}^{\mathrm{L}}\right] \right|^2
\label{ProbeAmpl2}
\end{eqnarray}
Performing a partial-wave expansion for the photoelectron and writing the dot products component-wise gives
\begin{eqnarray}
\frac{d\sigma}{d\mathbf{\hat k}^\mathrm{M}}(E,\tau;\varrho) &\propto& \left| \sum_i\mathrm{e}^{-\mathrm{i}\omega_{i}\tau}\sum_{lmp_2q_2}\WignerD{1*}{p_2}{q_2}\hat{\xi}^\mathrm{L}_{p_2}d^{\mathrm{M}}_{i,q_2,lm}(E)Y_{lm}(\mathbf{\hat k}^M)\sum_{p_1q_1}\WignerD{1*}{p_1}{q_1}\hat{\varepsilon}^\mathrm{L*}_{p_1} d^{M}_{i0,q_1} \right|^2,
\end{eqnarray}
We note that we have absorbed a factor of $\mathrm{i}^{-l}\mathrm{e}^{\mathrm{i}\sigma_l}$, where $\sigma_l$ is the Coulomb phase, into the dipole matrix elements, in contrast to how they are usually written. Expanding the modulus squared we get
\begin{align}
\frac{d\sigma}{d\mathbf{\hat k}^\mathrm{M}}(E,\tau;\varrho) \propto \sum_{ii'}\mathrm{e}^{-\mathrm{i}\omega_{ii'}\tau}&\sum_{K_eM_e}\sum_{\substack{ll'mm'\\p_2q_2p'_2q'_2}}\WignerD{1*}{p_2}{q_2}d^{\mathrm{M}}_{i,q_2,lm}(E)d^{\mathrm{M}*}_{i',q'_2,l'm'}(E)\WignerD{1}{p'_2}{q'_2}\rho^{\xi\mathrm{L}}_{p_2p'_2} Y_{K_eM_e}(\mathbf{\hat k}^M) \nonumber \\
&\times (-1)^{m'+M_e}\left[\frac{\tilde l\tilde l'\tilde K_e}{4\pi}\right]^{1/2}\ThreeJ{l}{l'}{K_e}{-m}{m'}{M_e}\ThreeJ{l}{l'}{K_e}{0}{0}{0} \nonumber \\
&\times\sum_{\substack{p_1q_1\\p'_1q'_1}}\WignerD{1*}{p_1}{q_1} d^{M}_{i0,q_1}d^{M*}_{i'0,q'_1}\WignerD{1}{p'_1}{q'_1}\rho^{\varepsilon\mathrm{L}}_{p_1p'_1},
\end{align}
where the product of polarization vectors $\hat{\varepsilon}^{\mathrm{L}}_{p_1}\hat{\varepsilon}^{\mathrm{L}*}_{p'_1}=\rho^{\varepsilon\mathrm{L}}_{p_1p'_1}$ and $\hat{\xi}^{\mathrm{L}}_{p_2}\hat{\xi}^{\mathrm{L}*}_{p'_2}=\rho^{\xi\mathrm{L}}_{p_2p'_2}$ give elements of the polarization density matrix for the first and second photon, and the product of spherical harmonics has been contracted using the identity
\begin{eqnarray}\label{SpHContraction}
&&Y_{lm}(\mathbf{\hat k})Y^{*}_{l'm'}(\mathbf{\hat k})= \sum_{K_eM_e}(-1)^{m}\left[\frac{\tilde l\tilde l'\tilde K_e}{4\pi}\right]^{1/2}\ThreeJ{l}{l'}{K_e}{-m}{m'}{M_e}\ThreeJ{l}{l'}{K_e}{0}{0}{0} Y_{K_e M_e}(\mathbf{\hat k}).\nonumber\\
\end{eqnarray}
Here $\tilde l=2l+1$. At this point it is useful to introduce some properties of the $3j$ symbols, as they will be crucial later. They have a simple relation to the Clebsch-Gordan coefficients used to couple angular momenta (see, for example, \cite{brinksatchler}), but treat each angular momentum on an equal footing: instead of coupling two angular momenta to give a third, they couple three angular momenta to give a scalar invariant, $\sum_{abc}\ket{Aa}\ket{Bb}\ket{Cc}\ThreeJ{A}{B}{C}{a}{b}{c}=\ket{00}$. An important symmetry property is that a $3j$ symbol is unchanged under even permutations of its columns, and acquires a phase $(-1)^{A+B+C}$ under odd permutations; the same phase is acquired if the bottom row is multiplied by $-1$ (equivalent to inversion in 3D). From this it can be seen that if the sum of the top row is odd (and the three vectors are polar), then the scalar invariant is a pseudoscalar. This is the hallmark of a chiral quantity, and we will now proceed to transform the equation for the PAD into a form in which this can be seen explicitly.
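These symmetry properties, and the contraction identity above, can be checked directly with a computer-algebra package. The sketch below (ours, not from the paper) uses SymPy's `wigner_3j` and `Ynm`; the particular quantum numbers and angles are chosen only for illustration:

```python
import numpy as np
from sympy import Ynm
from sympy.physics.wigner import wigner_3j

# --- Symmetries of the 3j symbol ---
A, B, C = 1, 2, 2            # A + B + C = 5, odd
a, b, c = 1, -1, 0           # projections sum to zero
base = float(wigner_3j(A, B, C, a, b, c))

# Even (cyclic) permutation of the columns: unchanged.
assert np.isclose(float(wigner_3j(B, C, A, b, c, a)), base)
# Odd permutation (swap of two columns): phase (-1)^(A+B+C).
assert np.isclose(float(wigner_3j(B, A, C, b, a, c)), (-1) ** (A + B + C) * base)
# Sign flip of the bottom row (inversion): the same phase.
assert np.isclose(float(wigner_3j(A, B, C, -a, -b, -c)), (-1) ** (A + B + C) * base)

# --- Numerical check of the spherical-harmonic contraction identity ---
theta, phi = 0.7, 0.3

def Y(L, M):
    # Standard (Condon-Shortley) spherical harmonic, polar angle first.
    return complex(Ynm(L, M, theta, phi).evalf())

for l, m, lp, mp in [(1, 0, 1, 0), (1, 1, 1, -1), (2, 1, 1, 0)]:
    lhs = Y(l, m) * np.conj(Y(lp, mp))
    M = m - mp               # enforced by the first 3j symbol
    rhs = 0.0
    for K in range(abs(l - lp), l + lp + 1):
        if abs(M) > K:
            continue
        rhs += ((-1) ** m
                * np.sqrt((2 * l + 1) * (2 * lp + 1) * (2 * K + 1) / (4 * np.pi))
                * float(wigner_3j(l, lp, K, -m, mp, M))
                * float(wigner_3j(l, lp, K, 0, 0, 0))
                * Y(K, M))
    assert np.isclose(lhs, rhs)
```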
With this in mind, we observe that the product of dipoles and the product of the polarization density matrix and spherical harmonic are themselves elements of tensors that can be put in irreducible spherical tensor form. The general form of this transformation is $d_{Cc}=\sum_{ab}(-1)^a\tilde C^{\tfrac{1}{2}}\ThreeJ{A}{B}{C}{-a}{b}{c}d_{Aa,Bb}$. We also rotate the outgoing electron direction into the lab frame, where it is detected.
\begin{align}\label{eqn_dxs_before_avg}
\frac{d\sigma}{d\mathbf{\hat k}^\mathrm{L}}(E,\tau;\varrho) \propto \sum_{ii'}\mathrm{e}^{-\mathrm{i}\omega_{ii'}\tau}&\sum_{\substack{K_2K_eK_J\\M_JN_J}}D^{\mathrm{M}}_{ii',(K_2K_e)K_JM_J}(E)\WignerD{K_J*}{N_J}{M_J} Z^{(K_2K_e)}_{K_JN_J}(\mathbf{\hat k}^L) \nonumber \\
&\sum_{\substack{p_1q_1\\p'_1q'_1}}\WignerD{1*}{p_1}{q_1} d^{M}_{i0,q_1}d^{M*}_{i'0,q'_1}\WignerD{1}{p'_1}{q'_1}\rho^{\varepsilon\mathrm{L}}_{p_1p'_1},
\end{align}
where,
\begin{align}\label{eqn_def_D}
D^{\mathrm{M}}_{ii',(K_2K_e)K_JM_J}(E)=\sum_{M_eM_2}(-1)^{M_2}\tilde K_J^{\tfrac{1}{2}}\ThreeJ{K_2}{K_e}{K_J}{-M_2}{M_e}{M_J} \sum_{q_2q_2'}(-1)^{q_2}\tilde K_2^{\tfrac{1}{2}}\ThreeJ{1}{1}{K_2}{-q_2}{q_2'}{M_2}\nonumber\\
\sum_{\substack{ll'\\mm'}}(-1)^{m}\tilde K_e^{\tfrac{1}{2}} \ThreeJ{l}{l'}{K_e}{-m}{m'}{M_e}d^{\mathrm{M}}_{i,q_2,lm}(E)d^{\mathrm{M}*}_{i',q'_2,l'm'}(E) \ThreeJ{l}{l'}{K_e}{0}{0}{0}\left(\frac{ \tilde l \tilde l' }{4\pi}\right)^{\tfrac{1}{2}}
\end{align}
and
\begin{align}
Z^{(K_2K_e)}_{K_JN_J}(\mathbf{\hat k}^L)=\sum_{\substack{N_eN_2\\p_2p'_2}}(-1)^{N_2+p_2}(\tilde K_2 \tilde K_J)^{\tfrac{1}{2}} \ThreeJ{K_2}{K_e}{K_J}{-N_2}{N_e}{N_J}\ThreeJ{1}{1}{K_2}{-p_2}{p'_2}{N_2}\rho^{\xi\mathrm{L}}_{p_2p'_2} Y_{K_eN_e}(\mathbf{\hat k}^{\mathrm{L}})
\end{align}
To get the PAD from a randomly oriented gas sample, we must orientationally average eqn. \ref{eqn_dxs_before_avg}:
\begin{eqnarray}
\frac{\overline{d\sigma}}{d\mathbf{\hat k}^\mathrm{L}}(E,\tau) = \frac{1}{8\pi^2}\int \frac{d\sigma}{d\mathbf{\hat k}^\mathrm{L}}(E,\tau;\varrho) d\varrho
\end{eqnarray}
giving
\begin{align}
\frac{\overline{d\sigma}}{d\mathbf{\hat k}^\mathrm{L}}(E,\tau) \propto \sum_{ii'}\mathrm{e}^{-\mathrm{i}\omega_{ii'}\tau}&\sum_{\substack{K_2K_eK_J\\N_Jp_1p'_1}} \left\{\sum_{M_Jq_1q'_1}D^{\mathrm{M}}_{ii',(K_2K_e)K_JM_J}(E) d^{M}_{i0,q_1}d^{M}_{i'0,-q'_1}\ThreeJ{K_J}{1}{1}{M_J}{q_1}{-q_1'}\right\} \nonumber \\
& (-1)^{p'_1}\ThreeJ{K_J}{1}{1}{N_J}{p_1}{-p_1'}\rho^{\varepsilon\mathrm{L}}_{p_1p'_1}Z^{(K_2K_e)}_{K_JN_J}(\mathbf{\hat k}^L),
\end{align}
The PAD has been separated into two parts: outside the braces are the lab-frame quantities (the photon polarizations and the outgoing electron direction), while inside the braces is a scalar invariant involving only molecular-frame quantities, namely the transition and ionization dipoles. We denote this invariant $\alpha^{(K_2K_e)K_J}$, where $K_J$ can take the values $\{0,1,2\}$. We also transform the photon density matrices into their irreducible spherical tensor form.
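The effect of orientational averaging can be illustrated with a small Monte-Carlo sketch (ours, not from the paper): the Haar average of any single rotation-matrix element vanishes, which is why only rotationally invariant combinations of molecular-frame vectors, such as the invariants $\alpha^{(K_2K_e)K_J}$, survive the average.

```python
import numpy as np

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def Ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

rng = np.random.default_rng(1)
n = 20_000
# Haar-uniform z-y-z Euler angles: alpha, gamma uniform in [0, 2*pi),
# cos(beta) uniform in [-1, 1] (this weights by sin(beta)).
angles = zip(rng.uniform(0, 2 * np.pi, n),
             np.arccos(rng.uniform(-1, 1, n)),
             rng.uniform(0, 2 * np.pi, n))

mean_R = sum(Rz(al) @ Ry(be) @ Rz(ga) for al, be, ga in angles) / n

# Each rotation-matrix element (a rank-1 quantity) averages to zero;
# only scalar invariants built from molecular-frame vectors survive.
assert np.allclose(mean_R, 0.0, atol=0.05)
```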
The non-vanishing components of the first photon density matrix are:
$\rho^{\varepsilon\mathrm{L}}_{00}=-\sqrt{1/3}$, $\rho^{\varepsilon\mathrm{L}}_{10}=-\sqrt{1/2}C_1$, $\rho^{\varepsilon\mathrm{L}}_{20}=-\sqrt{1/6}$ and $\rho^{\varepsilon\mathrm{L}}_{22}=\rho^{\varepsilon\mathrm{L}*}_{2-2}=(1/2)L_1$. Here $-1\le C_1 \le 1$ defines the amount of circular polarization and $0\le L_1 \le 1$ the amount of linear polarization. $L^2_1+C^2_1$ is unity for pure polarization, less than 1 for partial polarization, and 0 for light unpolarized in the $x$-$y$ plane. The major axis of polarization defines the $x$-direction and the propagation direction is $z$. With these definitions, the averaged PAD takes the compact form
\begin{align}\label{eqn:simplest_general}
\frac{\overline{d\sigma}}{d\mathbf{\hat k}^\mathrm{L}}(E,\tau) \propto \sum_{ii'}\mathrm{e}^{-\mathrm{i}\omega_{ii'}\tau}&\sum_{\substack{K_2K_eK_J\\N_J}} \alpha^{(K_2K_e)K_J} \rho^{\varepsilon\mathrm{L}}_{K_JN_J}Z^{(K_2K_e)}_{K_JN_J}(\mathbf{\hat k}^L),
\end{align}
We can write these invariants in vector form as
\begin{align}
\alpha^{(K_2K_e)0}&=\mathbf{D}^{\mathrm{M}\dagger}_{ii',(K_2K_e)0}(E) \cdot \mathbf{A}_{ii',0}&&=-\tfrac{1}{\sqrt{3}}D^{\mathrm{M}}_{ii',(K_2K_e)00}(E) \mathbf{d}^{M}_{i0} \cdot \mathbf{d}^{M}_{i'0} \nonumber \\
\alpha^{(K_2K_e)1}&=\tfrac{1}{\sqrt{3}}\mathbf{D}^{\mathrm{M}\dagger}_{ii',(K_2K_e)1}(E) \cdot \mathbf{A}_{ii',1}&&=\tfrac{1}{\sqrt{6}} \mathbf{D}^{\mathrm{M}\dagger}_{ii',(K_2K_e)1}(E) \cdot (\mathbf{d}^{M}_{i0} \times \mathbf{d}^{M}_{i'0}) \nonumber \\
\alpha^{(K_2K_e)2}&=\tfrac{1}{\sqrt{5}}\mathbf{D}^{\mathrm{M}\dagger}_{ii',(K_2K_e)2}(E) \cdot \mathbf{A}_{ii',2},&
\end{align}
where
\begin{align}
A_{ii',K_JM_J}=\sum_{q_1q'_1} (-1)^{q_1'}\tilde K_J^{\tfrac{1}{2}}d^{M}_{i0,q_1}d^{M*}_{i'0,q'_1}\ThreeJ{K_J}{1}{1}{M_J}{q_1}{-q_1'}.
\end{align}
We can now give a physical interpretation to the various irreducible spherical tensors above.
In general, expressing the quantities above in terms of irreducible spherical tensors is a multipole expansion \cite{fano}.
The zeroth-order tensor is a scalar and thus isotropic; the first-order tensor, known as the orientation vector, corresponds to a net orientation of the angular momentum of the system and has the form of a dipole; the second-order tensor, known as the alignment tensor, is of quadrupole form (see, for example, \cite{blum}).
We see that $ \mathbf{A}_{ii',K_J}$ describes isotropy/orientation/alignment of the system induced by the first pulse, while $\mathbf{D}^{\mathrm{M}\dagger}_{ii',(K_2K_e)K_J}(E)$ describes the further orientation/alignment induced by the second pulse and detection of the photoelectron.
PXECD arises from the $\alpha^{(K_2K_e)1}$ coefficient, therefore we see that orientation of the ensemble by the pump pulse is integral to PXECD.
This orientation creates an induced net dipole in the ensemble that oscillates with angular frequency $\omega_{ii'}$ as discussed in \cite{beaulieu2016,beaulieu2018}.
It is easy to see that $\alpha^{(K_2K_e)1}$ exists only when the bound transition dipoles are non-parallel, and hence only for the interference terms (those involving different excited states), not the direct terms.
This implies that coherent population of multiple states is required for it to be observed.
One might be tempted to say that $\alpha^{(K_2K_e)K_J}$ is a scalar for even values of $K_J$ and a pseudoscalar for odd values, but some care must be taken here. The $\mathbf{d}^{M}_{i0}$ are polar vectors; however, $\mathbf{D}^{\mathrm{M}}_{ii',(K_2K_e)K_J}(E)$ can be either a polar vector or a pseudovector. Examination of the $3$j symbols in eqn. \ref{eqn_def_D} shows that under inversion $\mathbf{i}\mathbf{D}^{\mathrm{M}}_{ii',(K_2K_e)K_J}(E) = (-1)^{K_e+K_J}\mathbf{D}^{\mathrm{M}}_{ii',(K_2K_e)K_J}(E)$, i.e.\ it is a pseudovector when $K_e+K_J$ is odd.
Hence $\alpha^{(K_2K_e)K_J}$ is a pseudoscalar only when $K_e$ is odd (note: the dot product of a vector and a pseudovector is a pseudoscalar, while the dot product of two pseudovectors is a scalar).
It is straightforward to demonstrate that pseudoscalar invariants $\alpha^{(K_2K_e)K_J}$ exist only in chiral molecules. Consider first the reflection of a randomly oriented ensemble of chiral molecules: this operation changes the sign of a pseudoscalar $\alpha^{(K_2K_e)K_J}$ but also changes the enantiomer in the sample.
A pseudoscalar $\alpha^{(K_2K_e)K_J}$ is therefore the source of the asymmetry in photoelectron emission that changes sign with enantiomer.
For a non-chiral ensemble, reflection does not change the ensemble, and therefore $\alpha^{(K_2K_e)K_J}=0$.
Interestingly, it can be seen that $\alpha^{(K_2K_e)1}$ can exist in non-chiral molecules for even values of $K_e$.
We now expand the summation over $K_J,N_J$ and insert the explicit density matrix elements for the first photon, giving
\begin{align}\label{eqn:simplest_general_Z}
\frac{\overline{d\sigma}}{d\mathbf{\hat k}^\mathrm{L}}(E,\tau) \propto \sum_{ii'}\mathrm{e}^{-\mathrm{i}\omega_{ii'}\tau}\sum_{\substack{K_2K_e}} &-\left[\tfrac{1}{\sqrt{3}}\alpha^{(K_2K_e)0} Z^{(K_2K_e)}_{00}(\mathbf{\hat k}^L) +\tfrac{1}{\sqrt{6}}\alpha^{(K_2K_e)2} Z^{(K_2K_e)}_{20}(\mathbf{\hat k}^L)\right] \nonumber \\
&-\tfrac{C_1}{\sqrt{2}}\alpha^{(K_2K_e)1} Z^{(K_2K_e)}_{10}(\mathbf{\hat k}^L) \nonumber \\
&+\tfrac{L_1}{2} \alpha^{(K_2K_e)2}\left[ Z^{(K_2K_e)}_{2-2}(\mathbf{\hat k}^L)+
Z^{(K_2K_e)}_{22}(\mathbf{\hat k}^L) \right],
\end{align}
We remind the reader that this is still the general form for any polarization and propagation direction of the two photons. In the above equation, the terms in the first square bracket do not depend on polarization; they exist even for an unpolarized pump pulse (note: for a completely unpolarized pump pulse with no preferred propagation direction, e.g.\ produced by three orthogonal beams, only the $Z^{(K_2K_e)}_{00}$ term survives). The term multiplied by $C_1$ depends on the sign and degree of circular polarization, while the last term depends on the degree of linear polarization.
We now extend the idea of PXECD as described in \cite{beaulieu2016, beaulieu2018}. We identify $C_1\alpha^{(K_2K_e)1}$ as the fundamental quantity describing PXECD: all terms involving it change sign with a change in helicity of the pump pulse, and it exists only when multiple states are coherently populated.
It does not necessarily change sign with enantiomer, and therefore can also exist in non-chiral molecules.
We will see later, when we consider the polarization of the second photon, that the PECD terms arise from $C_2\alpha^{(1K_e)K_J}$ and do not require coherently populated states with non-collinear dipoles.
To determine the PAD we need to examine
\begin{align}
Z^{(K_2K_e)}_{K_JN_J}(\mathbf{\hat k}^L)=\sum_{\substack{N_eN_2N_2'}}(-1)^{N_2}(\tilde K_J)^{\tfrac{1}{2}}\ThreeJ{K_2}{K_e}{K_J}{-N_2}{N_e}{N_J}\bar \rho^{\xi\mathrm{L}}_{K_2N_2'}\WignerD{K_2}{N_2'}{N_2}(\mu,\nu,\eta) Y_{K_eN_e}(\mathbf{\hat k}^{\mathrm{L}}),
\end{align}
where we have written $ \rho^{\xi\mathrm{L}}_{K_2N_2}=\sum_{N_2'}\bar\rho^{\xi\mathrm{L}}_{K_2N_2'}\WignerD{K_2}{N_2'}{N_2}(\mu,\nu,\eta)$ and the Euler angles $(\mu,\nu,\eta)$ define the rotation between the coordinate frame of the first and second photon.
We now look at the specific case of co-propagating pump and probe pulses where the coordinate frame of the two photons coincide, i.e. $\mu=\nu=\eta=0$.
\subsection{Co-propagating pulses}
We obtain the following,
\begin{align}
\frac{\overline{d\sigma}}{d\mathbf{\hat k}^\mathrm{L}}(E,\tau) \propto \sum_{ii'}\mathrm{e}^{-\mathrm{i}\omega_{ii'}\tau} \left[\frac{1}{3} \alpha_{ii'}^{(00)0}+\frac{1}{6} \alpha_{ii'}^{(20)2}+\frac{1}{2} C_1 C_2 \alpha_{ii'}^{(10)1}+\frac{1}{2} L_1 L_2 \alpha_{ii',\mathrm{S}}^{(20)2}\right]&S_{00}(\mathbf{\hat k}^{\mathrm{L}}) \nonumber \\
%
-\left[C_1\left(\frac{\alpha_{ii'}^{(01)1}}{\sqrt{6}}-\frac{ \alpha_{ii'}^{(21)1}}{\sqrt{30}}\right)-C_2\left(\frac{ \alpha_{ii'}^{(11)0}}{3 \sqrt{2}}-\frac{ \alpha_{ii'}^{(11)2}}{3 \sqrt{2}}\right)\right]&S_{10}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
+\left[\frac{\alpha_{ii'}^{(02)2}}{3 \sqrt{2}}+\frac{\alpha_{ii'}^{(22)0}}{3 \sqrt{10}}-\frac{\alpha_{ii'}^{(22)2}}{3 \sqrt{14}}-\frac{C_1 C_2 \alpha_{ii'}^{(12)1}}{\sqrt{10}}+\frac{L_1 L_2 \alpha_{ii'}^{(22)2}}{\sqrt{14}}\right]&S_{20}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
-\left[\frac{C_1 \alpha_{ii'}^{(23)1}}{2} \sqrt{\frac{3}{35}}-\frac{C_2 \alpha_{ii'}^{(13)2}}{2 \sqrt{7}}\right] &S_{30}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
+\left[\frac{\alpha_{ii'}^{(24)2}}{3 \sqrt{14}}+\frac{L_1 L_2 \alpha_{ii'}^{(24)2}}{6 \sqrt{14}}\right]&S_{40}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
-i\sqrt{2}\left[\frac{C_1 L_2 \alpha_{ii'}^{(22)1}}{ \sqrt{5}} + \frac{C_2 L_1 \alpha_{ii'}^{(12)2}}{\sqrt{3}} \right]&S_{2-2}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
-\sqrt{2}\left[L_1\left( \frac{\alpha_{ii'}^{(02)2}}{\sqrt{3}}+\frac{\alpha_{ii'}^{(22)2}}{\sqrt{21}}\right)+L_2\left(\frac{\alpha_{ii'}^{(22)0}}{\sqrt{15}}+\frac{ \alpha_{ii'}^{(22)2}}{\sqrt{21}}\right) \right]&S_{22}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
-i\sqrt{2}\left[\frac{ L_1 \alpha_{ii'}^{(23)2}}{2} \sqrt{\frac{5}{21}}-\frac{L_2 \alpha_{ii'}^{(23)2}}{2} \sqrt{\frac{5}{21}} \right]&S_{3-2}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
-\sqrt{2}\left[C_2 L_1 \alpha_{ii'}^{(13)2}\sqrt{\frac{5}{42}} -\frac{C_1 L_2 \alpha_{ii'}^{(23)1}}{ \sqrt{14}} \right]&S_{32}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
-\sqrt{2}\left[\frac{L_1 \alpha_{ii'}^{(24)2}}{6} \sqrt{\frac{5}{7}}+\frac{ L_2 \alpha_{ii'}^{(24)2}}{6} \sqrt{\frac{5}{7}} \right]&S_{42}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
\end{align}
We can group the terms into five classes: independent of both light polarization and molecular chirality; dependent on circular polarization and chirality; dependent on linear polarization but not chirality; dependent on circular polarization but not chirality; and dependent on linear polarization and chirality. The last two categories are particularly interesting. Examples are found, respectively, in the coefficients of $S_{2-2}(\mathbf{\hat k}^{\mathrm{L}})$, which require orientation from the pump (or probe), alignment from the probe (or pump), and quadrupole emission, and of $S_{3-2}(\mathbf{\hat k}^{\mathrm{L}})$, which require alignment by both pump and probe, and octupole emission.
We also see that there are two terms that require both pulses to have circular components: one involving $\alpha_{ii'}^{(10)1}$ in the isotropic part of the emission (hence seen in the total cross section) and one involving $\alpha_{ii'}^{(12)1}$ seen in $S_{20}(\mathbf{\hat k}^{\mathrm{L}})$. These change sign with a change of relative sign between the circularly polarized components of the pump and probe pulses, exist for non-chiral molecules, and correspond to both pulses inducing net dipoles in the system, which then couple to give either isotropic or quadrupole emission.
It is also interesting to examine the various coefficients to see their dependence on the orientation/alignment state of the component spherical vectors.
We notice that terms that change sign with the circular polarization of the pump (or probe) pulse always correspond to the orientation vector component induced by the pump (or probe) pulse, i.e.\ $K_J=1$ (or $K_2=1$).
All terms not dependent on circular polarization have the spherical vectors related to pump and probe pulses as either isotropic or aligned.
We see that asymmetry in the photoemission (corresponding to odd order real spherical harmonics) comes from orientation by the first pulse for PXECD and from the second ionizing pulse for PECD.
We observe that $\alpha_{ii}^{(11)0}$, which involves only the isotropic part of the first pulse, corresponds to standard one-photon PECD from the excited state $i$, up to a constant given by the bound transition strength; we therefore see that chiral emission when the second pulse is also circular is not exclusively contingent on coherent population of multiple states.
We can also observe that two-photon PECD (i.e. two circular pulses) mixes PXECD terms with PECD terms in the coefficient of $S_{10}(\mathbf{\hat k}^{\mathrm{L}})$.
We now look at the PXECD experimental setup as described in \cite{beaulieu2016}.
\subsection{Circular pump - linear probe}
Setting $L_1=0$ and $C_2=0$ gives the full angular distribution for PXECD as described in \cite{beaulieu2016}; the following result is obtained.
\begin{align}
\frac{\overline{d\sigma}}{d\mathbf{\hat k}^\mathrm{L}}(E,\tau) \propto \sum_{ii'}\mathrm{e}^{-\mathrm{i}\omega_{ii'}\tau} \left[\frac{1}{3} \alpha_{ii'}^{(00)0}+\frac{1}{6} \alpha_{ii'}^{(20)2}\right]&S_{00}(\mathbf{\hat k}^{\mathrm{L}}) \nonumber \\
%
-\left[C_1\left(\frac{\alpha_{ii'}^{(01)1}}{\sqrt{6}}-\frac{ \alpha_{ii'}^{(21)1}}{\sqrt{30}}\right)\right]&S_{10}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
+\left[\frac{\alpha_{ii'}^{(02)2}}{3 \sqrt{2}}+\frac{\alpha_{ii'}^{(22)0}}{3 \sqrt{10}}-\frac{\alpha_{ii'}^{(22)2}}{3 \sqrt{14}}\right]&S_{20}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
-\left[\frac{C_1 \alpha_{ii'}^{(23)1}}{2} \sqrt{\frac{3}{35}}\right] &S_{30}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
+\left[\frac{\alpha_{ii'}^{(24)2}}{3 \sqrt{14}}\right]&S_{40}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
-i\sqrt{2}\left[\frac{C_1 L_2 \alpha_{ii'}^{(22)1}}{ \sqrt{5}} \right]&S_{2-2}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
-\sqrt{2}\left[L_2\left(\frac{\alpha_{ii'}^{(22)0}}{\sqrt{15}}+\frac{ \alpha_{ii'}^{(22)2}}{\sqrt{21}}\right) \right]&S_{22}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
+i\sqrt{2}\left[\frac{1}{2} \sqrt{\frac{5}{21}} L_2 \alpha_{ii'}^{(23)2} \right]&S_{3-2}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
+\sqrt{2}\left[\frac{C_1 L_2 \alpha_{ii'}^{(23)1}}{ \sqrt{14}} \right]&S_{32}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
-\sqrt{2}\left[\frac{1}{6} \sqrt{\frac{5}{7}} L_2 \alpha_{ii'}^{(24)2} \right]&S_{42}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
\end{align}
Or explicitly in vector form
\begin{align}
\frac{\overline{d\sigma}}{d\mathbf{\hat k}^\mathrm{L}}(E,\tau) \propto \sum_{ii'}\mathrm{e}^{-\mathrm{i}\omega_{ii'}\tau} \left[\tfrac{-1}{3\sqrt{3}} \alphazero{00}+\tfrac{1}{6\sqrt{5}} \alphatwo{20}\right]&S_{00}(\mathbf{\hat k}^{\mathrm{L}}) \nonumber \\
%
-C_1\left[\tfrac{1}{6}\left(\mathbf{D}^{\mathrm{M}\dagger}_{ii',(01)1}(E) -\tfrac{1}{\sqrt{5}}\mathbf{D}^{\mathrm{M}\dagger}_{ii',(21)1}(E) \right)\cdot (\mathbf{d}^{M}_{i0} \times \mathbf{d}^{M}_{i'0})\right]&S_{10}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
+\tfrac{1}{3}\left[\left(\tfrac{1}{\sqrt{10}}\mathbf{D}^{\mathrm{M}\dagger}_{ii',(02)2}(E) -\tfrac{1}{\sqrt{70}}\mathbf{D}^{\mathrm{M}\dagger}_{ii',(22)2}(E)\right)\cdot \mathbf{A}_{ii',2} - \tfrac{1}{\sqrt{30}}\alphazero{22} \right]&S_{20}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
-C_1\left[\tfrac{1}{2\sqrt{70}} \alphaone{23}\right] &S_{30}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
+\left[\tfrac{1}{3\sqrt{70}}\alphatwo{24}\right]&S_{40}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
-iC_1 L_2 \left[\tfrac{1}{ \sqrt{15}}\alphaone{22} \right]&S_{2-2}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
+L_2\left[\left(\tfrac{\sqrt{2}}{\sqrt{45}}\alphazero{22}-\tfrac{\sqrt{2}}{\sqrt{105}} \alphatwo{22}\right) \right]&S_{22}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
+iL_2\left[\tfrac{1}{\sqrt{42}}\alphatwo{23} \right]&S_{3-2}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
+C_1 L_2\left[\tfrac{1 }{\sqrt{42}}\alphaone{23} \right]&S_{32}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
- L_2\left[\tfrac{\sqrt{2}}{6\sqrt{7}} \alphatwo{24} \right]&S_{42}(\mathbf{\hat k}^{\mathrm{L}})\nonumber \\
%
\end{align}
Here we can connect back to the results of \cite{beaulieu2016}, where the photoelectron current in the $z$-direction was shown to be a triple product in the Cartesian basis, by recognising that $S_{10}(\mathbf{\hat k}^{\mathrm{L}}) \propto k_z$ is responsible for the chiral current in the $z$-direction.
$\left(\mathbf{D}^{\mathrm{M}\dagger}_{ii',(01)1}(E) -\tfrac{1}{\sqrt{5}}\mathbf{D}^{\mathrm{M}\dagger}_{ii',(21)1}(E)\right)$ is equivalent, up to a constant, to the Raman type photoionization vector defined in \cite{beaulieu2016}.
The triple product can be transformed from the spherical basis to the Cartesian basis using the usual unitary transformation between the two; this preserves the triple product up to a phase $\mathrm{e}^{-\mathrm{i}\tfrac{\pi}{2}}$ coming from the determinant of the transformation matrix.
The same transformation can, of course, be applied to all other terms, remembering to multiply by the appropriate phase for transformation of pseudovectors.
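As a numerical aside (our illustration, not from the paper), the matrix relating the Cartesian and spherical bases defined at the start of the theory section is unitary, and its determinant is a purely imaginary unit-modulus phase ($\pm\mathrm{i}$ depending on row ordering), which is the origin of the phase mentioned above:

```python
import numpy as np

# Rows are the spherical components p = -1, 0, +1 expressed in (x, y, z),
# matching the basis defined earlier in the text.
U = np.array([
    [1 / np.sqrt(2), -1j / np.sqrt(2), 0],   # eps_{-1}
    [0, 0, 1],                               # eps_{0}
    [-1 / np.sqrt(2), -1j / np.sqrt(2), 0],  # eps_{+1}
])

# Change between orthonormal bases: U is unitary.
assert np.allclose(U @ U.conj().T, np.eye(3))

# The determinant is purely imaginary with unit modulus, so triple
# products transform with a pure phase.
det = np.linalg.det(U)
assert np.isclose(abs(det), 1.0)
assert np.isclose(det.real, 0.0)
```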
$\mathbf{D}^{\mathrm{M}\dagger}_{ii',(01)1}(E)$ depends only on the isotropic part of the probe pulse and hence survives even for a completely unpolarized probe pulse, while $\mathbf{D}^{\mathrm{M}\dagger}_{ii',(21)1}(E)$ does not.
\section{Conclusions}
We presented a general theory of PXECD for arbitrary polarization of both the pulse that prepares the molecule in a superposition of excited states, and the ionizing pulse.
A conventional way of analysing angular and energy resolved photoelectron distributions is to perform expansion into the basis of spherical harmonics and analyse the coefficients of this expansion, which are generally referred to as asymmetry parameters.
The theory was developed in a way that clearly and simply separates chiral and non-chiral contributions to the time dependent photoelectron angular distribution for all relevant asymmetry parameters.
PXECD was shown to originate from the orientation imposed by the first pulse, which induces a net dipole in the ensemble that oscillates with angular frequency $\omega_{ii'}$, as discussed in \cite{beaulieu2016,beaulieu2018}. The induced chiral dipole underlies the PXCD (photoexcitation circular dichroism) phenomenon introduced in \cite{beaulieu2016,beaulieu2018}.
This is in contrast to one-photon PECD where chiral asymmetric emission emerges as a result of the orientation imposed by the ionizing pulse.
In PXECD all asymmetry in the forwards/backwards direction (the coefficients of $S_{10}(\mathbf{\hat k}^{\mathrm{L}})$ and $S_{30}(\mathbf{\hat k}^{\mathrm{L}})$) is contingent on both chirality and coherent population of multiple states. In contrast, in two-photon PECD there is a mixing of PXCD terms, which require coherent population of excited states, with PECD-like terms that do not rely on such coherences.
We have identified the terms uniquely related to PXECD in chiral molecules. We have also shown that PXECD recorded in the polarization frame of the pump pulse contains asymmetry parameters which are exclusively sensitive to coherence but not associated with a chiral response. These terms always arise for field configurations that break cylindrical symmetry and induce extrinsic chirality in the polarization plane.
Thus, PXECD is a background-free probe of coherent bound dynamics providing individual access to its chiral and non-chiral contributions.
\section*{Acknowledgements}
We thank Andres Ordonez for useful communications.
AH acknowledges support from DFG project number HA 8552/2-1.
\section*{References}
\section{Introduction}
In principle, deep networks can learn arbitrarily sophisticated mappings from inputs to outputs. However, in practice we must encode specific inductive biases in order to learn accurate models from limited data. In a variety of recent research efforts, practitioners have provided models with the ability to explicitly manipulate latent combinatorial objects such as stacks~\citep{dyer2015transition,NIPS2015_5857}, memory slots~\citep{graves2014neural, sukhbaatar2015end}, mathematical expressions~\citep{neelakantan2015neural}, program traces~\citep{gaunt2016terpret,pmlr-v70-bosnjak17a}, and first order logic~\citep{rocktaschel2017end}. Operations on these discrete objects can be approximated using differentiable operations on continuous relaxations of the objects. As such, these operations can be included as modules in neural network models that can be trained end-to-end by gradient descent.
Matchings and permutations are a fundamental building block in a variety of applications, as they can be used to align, canonicalize, and sort data. Prior work has developed learning algorithms for supervised learning where the training data includes annotated matchings~\citep{caetano2009learning,petterson2009exponential,tang2015bethe}. However, we would like to learn models with latent matchings, where the matching is not provided to us as supervision. This is a common and relevant setting. For example, \cite{Linderman2017} showed that a problem from neuroscience involving the identification of neurons of the worm \textit{C.\ elegans} can be cast as the inference of a latent permutation on a larger hierarchical structure.
Unfortunately, maximizing the marginal likelihood for problems with latent matchings is very challenging. Unlike for problems with categorical latent variables, we cannot obtain unbiased stochastic gradients of the marginal likelihood using the score function estimator~\citep{williams1992simple}, as computing the probability of a given matching requires computing an intractable partition function for a structured distribution. Instead, we draw on recent work that obtains biased stochastic gradients by relaxing the discrete latent variables into continuous random variables that support the reparametrization trick~\citep{Jang2016,Maddison2016}.
Our contributions are the following: first, in Section \ref{sec:sinkhorntheo} we present a theoretical result showing that the non-differentiable parameterization of a permutation can be approximated in terms of a differentiable relaxation, the so-called \textit{Sinkhorn operator}. Based on this result, in Section \ref{sec:sinkhornnetworks} we introduce \emph{Sinkhorn networks}, which generalize the method of~\cite{Adams2011} for predicting rankings, and complement the concurrent work of \cite{Cruz2017} by focusing on more fundamental aspects. Further, in Section \ref{sec:gumbelsinkhorn} we introduce the \textit{Gumbel-Sinkhorn} distribution, an analog of the Gumbel-Softmax distribution~\citep{Jang2016,Maddison2016} for permutations. This enables optimization of the marginal likelihood by the reparametrization trick. Finally, in Section \ref{sec:experiments} we demonstrate that our methods outperform strong neural network baselines on the tasks of sorting numbers, solving jigsaw puzzles, and identifying neural signals from C. elegans worms.
\section{The Sinkhorn operator: an analog of the softmax for permutations}
\label{sec:sinkhorntheo}
One sensible way to approximate a discrete category by continuous values is by using a temperature-dependent softmax function, defined component-wise as $\text{softmax}_\tau(x)_i=\exp(x_i/\tau)/\sum_{j=1}^{N}\exp(x_j/\tau)$. For positive values of $\tau$, $\text{softmax}_\tau(x)$ is a point in the probability simplex. Also, in the limit $\tau\rightarrow 0$, $\text{softmax}_\tau(x)$ converges to a vertex of the simplex, the one-hot vector corresponding to the largest $x_i$~\footnote{With the exception of the degenerate case of ties.}.
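As a quick numerical illustration (a NumPy sketch of our own, not part of the original formulation), the limiting behavior of the temperature-dependent softmax can be checked directly:

```python
import numpy as np

def softmax_tau(x, tau):
    """Temperature-dependent softmax: softmax_tau(x)_i = exp(x_i/tau) / sum_j exp(x_j/tau)."""
    z = np.exp((x - x.max()) / tau)  # subtract the max for numerical stability
    return z / z.sum()

x = np.array([1.0, 3.0, 2.0])
print(softmax_tau(x, tau=1.0))    # an interior point of the probability simplex
print(softmax_tau(x, tau=0.01))   # approaches the one-hot vector for the largest entry
```

At high temperature the output spreads mass over all entries; as $\tau\rightarrow 0$ it concentrates on the argmax.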
This approximation is a key ingredient in the successful implementations by~\cite{Jang2016,Maddison2016}, and here we extend it to permutations.
To do so, we first state an analog of the normalization implemented by the softmax. This is achieved through the Sinkhorn operator (or Sinkhorn normalization, or Sinkhorn balancing), which iteratively normalizes the rows and columns of a matrix. Specifically, following~\cite{Adams2011}, we define the Sinkhorn operator $S(X)$ over an $N$-dimensional square matrix $X$ as:
\begin{eqnarray}
\label{eq:sinkop} S^0(X) &=& \exp(X), \\
\nonumber S^l(X) & = & \mathcal{T}_c\left(\mathcal{T}_r(S^{l-1}(X))\right),\\
\nonumber S(X) &= &\lim_{l\rightarrow \infty} S^l(X).
\end{eqnarray}
where $\mathcal{T}_r(X)= X \oslash (X \mathbf{1}_N\mathbf{1}_N^\top)$ and $\mathcal{T}_c(X)= X \oslash (\mathbf{1}_N\mathbf{1}_N^\top X)$ are the row-wise and column-wise normalization operators of a matrix, with $\oslash$ denoting element-wise division and $\mathbf{1}_N$ a column vector of ones. \cite{Sinkhorn1964} proved that $S(X)$ must belong to the Birkhoff polytope, the set of doubly stochastic matrices, which we denote $\mathcal{B}_N$~\footnote{This theorem requires certain technical conditions which are trivially satisfied if $X$ has positive entries, motivating the use of the component-wise exponential $\exp(\cdot)$ in the first line of equation \ref{eq:sinkop}.}.
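The iteration above admits a very short implementation; the following NumPy sketch (with `n_iters` playing the role of the truncation level $l$, a choice of ours) makes the definition concrete:

```python
import numpy as np

def sinkhorn(X, n_iters=20):
    """Truncated Sinkhorn operator S^l(X): exponentiate entrywise, then
    alternately normalize rows (T_r) and columns (T_c), as in Eq. (1)."""
    S = np.exp(X)
    for _ in range(n_iters):
        S = S / S.sum(axis=1, keepdims=True)  # T_r: each row sums to 1
        S = S / S.sum(axis=0, keepdims=True)  # T_c: each column sums to 1
    return S

rng = np.random.default_rng(0)
P = sinkhorn(rng.normal(size=(4, 4)), n_iters=100)
# P is (approximately) doubly stochastic: a point in the Birkhoff polytope
```

After enough iterations both row and column sums are close to one, consistent with Sinkhorn's theorem.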
Building on our analogy with categories, notice that choosing a category can always be cast as a maximization problem: the choice $\argmax_i x_i$ is the one that maximizes the function $\langle x,v\rangle$ (with $v$ being a one-hot vector), i.e.\ the maximizing $v^*$ indexes the largest $x_i$. Similarly, one may parameterize the choice of a permutation $P$ through a square matrix $X$, as the solution to the linear assignment problem \citep{Kuhn1955}, with $\mathcal{P}_N$ denoting the set of permutation matrices and $\left< A,B\right>_F=\mathrm{trace} (A^\top B)$ the (Frobenius) inner product of matrices:
\begin{equation} \label{eq:assign} M(X)=\argmax_{P\in \mathcal{P}_N}\left< P,X\right>_F.\end{equation}
We call $M(\cdot)$ the matching operator, through which we parameterize the hard choice of a permutation (see Figure \ref{fig:1}a for an example). Our theoretical contribution is to show that $M(X)$ can be obtained as the limit of $S(X/\tau)$, meaning that one can approximate $M(X)\approx S(X/\tau)$ with a small $\tau$.
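For small matrices, $M(X)$ can be computed exactly; a minimal sketch (our own illustration) using SciPy's Hungarian-algorithm solver:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def matching(X):
    """Matching operator M(X): the permutation matrix maximizing <P, X>_F.
    SciPy's solver minimizes a cost, so we negate X."""
    rows, cols = linear_sum_assignment(-X)
    P = np.zeros_like(X)
    P[rows, cols] = 1.0
    return P

X = np.array([[0.1, 2.0],
              [1.5, 0.3]])
print(matching(X))  # picks the off-diagonal entries: 2.0 + 1.5 > 0.1 + 0.3
```

The returned matrix is the hard choice of permutation that the Sinkhorn relaxation will approximate.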
Theorem 1 summarizes our finding. We provide a rigorous proof in appendix \ref{sec:proofs}; briefly, it is based on showing that $S(X/\tau)$ solves a certain entropy-regularized problem in $\mathcal{B}_N$, which in the limit converges to the matching problem in equation \ref{eq:assign}.
\begin{theorem1}
For a doubly-stochastic matrix $P$, define its entropy as $h(P)=-\sum_{i,j}P_{i,j}\log\left(P_{i,j}\right)$. Then, one has, \begin{equation}\label{eq:entropyreg}S(X/\tau)=\argmax_{P\in\mathcal{B}_N}\left< P,X\right>_F+\tau h(P).\end{equation}
Now, assume also the entries of $X$ are drawn independently from a distribution that is absolutely continuous with respect to the Lebesgue measure in $\mathbb{R}$.
Then, almost surely, the following convergence holds: \begin{equation}\label{eq:convergence} M(X) = \lim_{\tau\rightarrow 0^+} S(X/\tau) .\end{equation}
\end{theorem1}
Finally, we note that Theorem 1 cannot be realized in practice, as it involves a limit on the number of Sinkhorn iterations $l$. Instead, we will always consider the incomplete, truncated version of the Sinkhorn operator \citep{Adams2011}, where $l$ in \eqref{eq:sinkop} is truncated at $L$. Figure \ref{fig:1}b in appendix \ref{sub:illustration} illustrates the dependence of the approximation on $\tau$ and $L$.
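The interplay between the temperature and the truncation can also be probed numerically. The sketch below (an illustration of ours, with arbitrary truncation levels) compares the truncated $S(X/\tau)$ to $M(X)$, and the entrywise gap shrinks as $\tau\rightarrow 0$:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def sinkhorn(X, n_iters=200):
    # truncated Sinkhorn operator (row then column normalization)
    S = np.exp(X)
    for _ in range(n_iters):
        S = S / S.sum(axis=1, keepdims=True)
        S = S / S.sum(axis=0, keepdims=True)
    return S

def matching(X):
    # hard matching via the Hungarian algorithm (negate X to maximize)
    rows, cols = linear_sum_assignment(-X)
    P = np.zeros_like(X)
    P[rows, cols] = 1.0
    return P

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 5))   # continuous entries, so ties occur with probability 0
gaps = {tau: np.abs(sinkhorn(X / tau) - matching(X)).max() for tau in (1.0, 0.1, 0.01)}
# the largest entrywise deviation from the hard permutation shrinks with tau
```

This mirrors the convergence in \eqref{eq:convergence}, up to the error introduced by truncating at $L$ iterations.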
\section{Sinkhorn Networks}
\label{sec:sinkhornnetworks}
Now we show how to apply the approximation in Theorem 1 in the context of artificial neural networks. We construct a layer that encodes the representation of a permutation, and show how to train networks containing such layers as intermediate representations.
We define the components of this network through a minimal example: consider the supervised task of learning a mapping from scrambled objects $\tilde{X}$ to actual, non-scrambled objects $X$. The data, then, are $M$ pairs $(X_i,\tilde{X}_i)$, where $\tilde{X}_i$ can be constructed by randomly permuting pieces of $X_i$. We state this problem as a permutation-valued regression $X_i = P_{\theta, \tilde{X}_i}^{-1} \tilde{X}_i+\varepsilon_i$, where $\varepsilon_i$ is a noise term and $P_{\theta, \tilde{X}_i}$ is the permutation matrix mapping $X_i$ to $\tilde{X}_i$, which depends on $\tilde{X}_i$ and parameters $\theta$. We are concerned with minimizing the reconstruction error\footnote{This error arises from Gaussian $\varepsilon_i$. Other choices may be possible, but here we stick to the most straightforward formulation.}:
\begin{equation}
\label{eq:objective}
f(\theta, X, \tilde{X})= \sum_{i=1}^M || X_i- P_{\theta, \tilde{X}_i}^{-1} \tilde{X}_i ||^2.
\end{equation}
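To make the objective concrete, the following toy sketch (scalar ``pieces'', purely illustrative and of our own construction) evaluates the reconstruction error for a candidate permutation, using the fact that $P^{-1}=P^\top$ for permutation matrices:

```python
import numpy as np

def reconstruction_error(X, X_tilde, P):
    """Sum over the batch of || X_i - P_i^{-1} X_tilde_i ||^2.
    For a permutation matrix, the inverse is simply the transpose."""
    return sum(np.sum((Xi - Pi.T @ Xti) ** 2) for Xi, Xti, Pi in zip(X, X_tilde, P))

# toy batch: one object with three scalar pieces
X = [np.array([[1.0], [2.0], [3.0]])]
P = [np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [1.0, 0.0, 0.0]])]   # the permutation used to scramble X
X_tilde = [P[0] @ X[0]]             # scrambled object
err = reconstruction_error(X, X_tilde, P)   # 0: the true P reconstructs X exactly
```

Any other permutation leaves a strictly positive residual, which is what the network is trained to minimize.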
One way to express a complex parameterization of this kind is through a neural network: this network receives $\tilde{X}_i$ as input, which is then passed through some intermediate, feed-forward computations of the type $g_h(W_hx_h+b_h)$, where $g_h$ are nonlinear activation functions, $x_h$ is the output of a previous layer, and $\theta=\{(W_h,b_h)\}_h$ are the network parameters. To make the final network output a permutation, we appeal to the constructions developed in Section \ref{sec:sinkhorntheo}: we assume that the final network output $P_{\theta, \tilde{X}}$ can be parameterized as the solution of the assignment problem, i.e., $P_{\theta, \tilde{X}}= M(g(\tilde{X},\theta))$, where $g(\cdot,\theta)$ represents the outcome of all operations involving $g_h$.
Unfortunately, the above construction makes $f$ non-differentiable in $\theta$. We use Theorem 1 as a justification for replacing $M(g(\tilde{X},\theta))$ by the differentiable $S(g(\tilde{X},\theta)/\tau)$ in the computational graph. The value of $\tau$ must be chosen with caution: if $\tau$ is too small, gradients vanish almost everywhere, as $S(g(\tilde{X},\theta)/\tau)$ approaches the non-differentiable $M(g(\tilde{X},\theta))$. Conversely, if $\tau$ is too large, $S(X/\tau)$ may be far from the vertices of the Birkhoff polytope, and reconstructions $P_{\theta, \tilde{X}}^{-1}\tilde{X}$ may be nonsensical (see Figure \ref{fig:figure3}a). Importantly, we will always add noise to the output layer $g(\tilde{X},\theta)$ as a regularization device: by doing so we ensure uniqueness of $M(g(\tilde{X},\theta))$, which is required for the convergence in Theorem 1.
\subsection{Permutation equivariance}
Among all possible architectures that respect the aforementioned parameterization, we will only consider networks that are \textit{permutation equivariant}, the natural kind of symmetry arising in this context. Specifically, we require networks to satisfy:
$$P_{\theta, P'\tilde{X} }\left(P'\tilde{X}\right) = P'\left(P_{\theta,\tilde{X}} \tilde{X}\right)$$
where $P'$ is an arbitrary permutation. The underlying intuition is simple: reconstructions of objects should not depend on how pieces were scrambled, but only on the pieces themselves.
We achieve permutation equivariance by using the same network to process each piece of $\tilde{X}$, producing an $N$-dimensional output per piece. These $N$ outputs (each with $N$ components) are then used as the rows of the matrix $g(\tilde{X},\theta)$, to which we finally apply the (differentiable) Sinkhorn operator (i.e., $g$ stacks the composition of the $g_h$ acting locally on each piece). One can interpret each row as a vector of local likelihoods of assignment, but these may be mutually inconsistent. The Sinkhorn operator, then, mixes those separate representations and ensures that consistent (approximate) assignments are produced.
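This symmetry can be verified directly on a toy model. In the NumPy sketch below (weights and sizes are arbitrary choices of ours), the same two-layer map processes every piece, so permuting the input pieces only permutes the rows of $g(\tilde{X},\theta)$; since the Sinkhorn operator commutes with row permutations, the (soft) reconstruction is invariant to how the object was scrambled:

```python
import numpy as np

def sinkhorn(X, n_iters=50):
    # truncated Sinkhorn operator
    S = np.exp(X)
    for _ in range(n_iters):
        S = S / S.sum(axis=1, keepdims=True)
        S = S / S.sum(axis=0, keepdims=True)
    return S

rng = np.random.default_rng(0)
N, D, H = 4, 8, 16                          # pieces, piece dimension, hidden units
W1, W2 = rng.normal(size=(D, H)), rng.normal(size=(H, N))

def g(X_tilde):
    """Shared per-piece network: row i scores piece i against all N positions."""
    return np.tanh(X_tilde @ W1) @ W2       # (N, D) -> (N, N)

X_tilde = rng.normal(size=(N, D))
P_prime = np.eye(N)[rng.permutation(N)]     # re-scramble the pieces arbitrarily
rec1 = sinkhorn(g(X_tilde)).T @ X_tilde
rec2 = sinkhorn(g(P_prime @ X_tilde)).T @ (P_prime @ X_tilde)
# rec1 == rec2: the reconstruction does not depend on the scrambling
```

The key step is that $S(P'A)=P'S(A)$ for any permutation $P'$, so the transposes cancel the scrambling in the reconstruction.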
With permutation equivariance, the only consideration left to the practitioner is the choice of the particular architecture, which will depend on the particular kind of data. In Section~\ref{sec:experiments} we illustrate the use of Sinkhorn networks with three examples, each using a different architecture; in Figure \ref{fig:network} we illustrate the network architecture used in one of them.
\subsection{Summary}
A Sinkhorn network is a supervised method for learning to reconstruct a scrambled object $\tilde{X}$ (the input), given several training examples $(X_i,\tilde{X_i})$.
Through a sequence of non-linear transformations, a Sinkhorn network richly parameterizes the mapping between $\tilde{X}$ and the permutation $P$ which, once applied to $\tilde{X}$, reconstructs the original object as $X_{rec}=P^{\top}\tilde{X}$ (the output). We note that Sinkhorn networks may be used not only to learn permutations, but also to learn matchings between objects of two sets of the same size.
\begin{figure}[h]
\includegraphics[width=1.0\linewidth]{network.pdf}
\caption{Schematic of a Sinkhorn network for jigsaw puzzles. Each piece of the scrambled digit $\tilde{X}$ is processed with the same (convolutional) network $g_1$ (arrows with solid circles). The outputs, lying in a latent space (rectangles surrounding $\tilde{X}$), are then connected through $g_2$ (arrows with empty circles) to form the rows of the matrix $g(\tilde{X},\theta)$; $g(\tilde{X},\theta)_i=g_2\circ g_1(\tilde{X}_i)$. Rows may be interpreted as unnormalized assignment probabilities, indicating the individual unnormalized likelihood of each piece of $\tilde{X}$ being at every position in the actual image. Applying $S(\cdot)$ leads to a `soft permutation' $P_{\theta,\tilde{X}}$ that resolves inconsistencies in $g(\tilde{X},\theta)$. $P_{\theta,\tilde{X}}$ is then used to recover the actual $X$ during training, although at test time one may use the exact $M(g(\tilde{X},\theta))$.}
\label{fig:network}
\end{figure}
\section{Probabilistic aspects: the Gumbel-Sinkhorn and Gumbel-Matching distributions}
\label{sec:gumbelsinkhorn}
Recently, in \cite{Jang2016} and \cite{Maddison2016}, the Gumbel-Softmax or Concrete distributions were defined for computational graphs with stochastic nodes, i.e., latent probabilistic representations. Their choice is guided by the following: (i) they seek re-parameterizable distributions to enable the re-parameterization trick \citep{Kingma2013}, noting that via the \textit{Gumbel trick} (see below) any categorical distribution is re-parameterizable; (ii) since the re-parameterization in (i) is not differentiable, they instead consider sampling under the softmax approximation. This gives rise to the Gumbel-Softmax distribution.
Here we parallel these choices to enable learning of a probabilistic latent representation of permutations. To this aim, we start by considering a generic distribution on the discrete set $\mathcal{Y}$, with potential function $X:\mathcal{Y}\rightarrow \mathbb{R}$:
\begin{equation}\label{eq:expfamily}p(y|X)\propto\exp\left(X(y)\right)\mathbf{1}_{y\in \mathcal{Y}}.\quad\end{equation}
Regarding (i), the Gumbel trick arises in the context of Perturb-and-MAP methods \citep{Papandreou2011} for sampling in discrete graphical models. This has recently received renewed interest~\citep{Balog2017}, as it recasts a difficult sampling problem as an easier optimization problem. In detail, sampling from \eqref{eq:expfamily} can be achieved by maximizing random perturbations of each potential $X(y)$ with Gumbel i.i.d. noise $\gamma(y)$; i.e., $ \argmax_{y\in \mathcal{Y}}\{X(y)+\gamma(y)\}\sim p(\cdot|X)$. Therefore, one can re-parameterize any categorical distribution (corresponding to \eqref{eq:expfamily} with $X(y)=\langle X, y\rangle$) by the choice of a category after injecting noise.
However, the above scheme is infeasible in our context, as $|\mathcal{Y}|=N!$. Nonetheless, we appeal to an interesting result: in cases where $\mathcal{Y}$ factorizes, $\mathcal{Y}=\prod_{i=1}^N\mathcal{Y}_i$ \footnote{It suffices that $\mathcal{Y}$ is a subset of the product space, which here is true as $\mathcal{Y}=\mathcal{P}_N\subseteq\{1,\ldots,N\}^N$.}, the use of rank-one perturbations $\gamma(y)=\sum_{i=1}^N \gamma_i(y_i)$ has been proposed as a more tractable alternative. Although ultimately heuristic, such perturbations lead to bounds on the partition function \citep{Hazan2012,Balog2017}, and can also be understood as providing approximate or unbiased samples from the true density~\citep{Hazan2013,Tomczak2016}.
Guided by this, we say the random permutation $P$ follows the \textit{Gumbel-Matching} distribution with parameter $X$, denoted $P\sim \mathcal{G.M.}(X)$, if it has the distribution arising from the rank-one perturbation of \eqref{eq:expfamily} on permutations, with the linear potential $X(P)=\left<X,P\right>_F$ (replacing $y$ with $P$). One can verify, along similar lines as in \cite{Li2013}, that $M(X+\varepsilon)\sim \mathcal{G.M.}(X)$ if $\varepsilon$ is a matrix of standard i.i.d. Gumbel noise.
Unfortunately, as in (ii) for the categorical case, Gumbel-Matching distribution samples are not differentiable in $X$. By appealing to Theorem 1, we define a relaxation on doubly stochastic matrices as follows: we say $P$ follows the \textit{Gumbel-Sinkhorn} distribution with parameter $X$ and temperature $\tau$, denoted $P\sim \mathcal{G.S.}(X,\tau)$, if it has the distribution of $S((X+\varepsilon)/\tau)$. Samples of $\mathcal{G.S.}(X,\tau)$ converge almost surely to samples of the Gumbel-Matching distribution as $\tau\rightarrow 0$ (see Fig \ref{fig:1}c in appendix \ref{sub:illustration}).
Unlike for the categorical case, neither the Gumbel-Matching nor Gumbel-Sinkhorn distributions have tractable densities. However, this does not preclude inference: likelihood-free methods have recently been developed to enable learning in such implicitly defined distributions~\citep{Ranganath2016, Tran2017}. These methods avoid evaluating the likelihood based on the observation that in many cases inference can be cast as the estimation of a likelihood ratio, which can be obtained from samples~\citep{Huszar2017}. Regardless of these useful advances, in the following we develop a solution based on using the likelihoods of random variables whose densities \emph{are available}.
\subsection{Approximate Posterior Inference}
\label{sub:vi}
Consider a probabilistic latent variable model with observed data $Y$ and latent variables $Z=\{P,W\}$, where $P$ is a permutation and $W$ denotes other variables. Here we illustrate how to approximate the posterior distribution $p(\{P,W\}|Y)$ using variational inference \citep{Blei2017}. Specifically, we aim to maximize the ELBO, the r.h.s. of~\eqref{eq:elbo}:
\begin{equation}\label{eq:elbo}\log p(Y)\geq E_{q(Z|Y)}\left(\log p(Y|Z)\right) - KL\infdivx{q(Z|Y)}{p(Z)}.\end{equation}
We assume that both the prior and variational posteriors decompose as products (mean-field). That is, $q(\{P,W\}|Y)=q(P)q(W), p(P,W)=p(P)p(W)$. With this assumption, we may focus only on the discrete part of the problem, i.e. without loss of generality we can assume $Z=P$.
We parameterize the prior and variational posterior over $P$ using Gumbel-Matching distributions with some parameter $X$, $\mathcal{G.M.}(X)$. To enable differentiability, we replace them by $\mathcal{G.S.}(X,\tau)$ distributions, leading to a surrogate ELBO that uses relaxed (continuous) variables. In more detail, for our uniform prior over permutations we use the isotropic $\mathcal{G.S.}(X=0,\tau_{prior})$ distribution, while for the variational posterior we consider the more generic $\mathcal{G.S.}(X,\tau)$.
Unfortunately, the term $KL\infdivx{q(P|Y)}{p(P)}=KL\infdivx{\mathcal{G.S.}(X,\tau)}{\mathcal{G.S.}(X=0,\tau_{prior})}$ in equation~\eqref{eq:elbo} is intractable, as there is no closed-form expression for the density of $\mathcal{G.S.}$ random variables. As a solution, we use the fact that our prior and posterior are re-parameterizable in terms of matrices $\varepsilon$ of i.i.d. Gumbel variables: we have $S((X+\varepsilon)/\tau)\sim\mathcal{G.S.}(X,\tau)$ and $S(\varepsilon/\tau_{prior})\sim\mathcal{G.S.}(X=0,\tau_{prior})$ for the posterior and prior, respectively. To obtain a tractable expression, we propose to use as the `code' or stochastic node $Z$ the variable $(X+\varepsilon)/\tau$ instead. Then, the KL term simplifies substantially to $KL\infdivx{(X+\varepsilon)/\tau}{\varepsilon/\tau_{prior}}$. This term can be computed explicitly, as shown in appendix \ref{sub:implicit}.
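To see why this code-space KL is tractable, note that since $\varepsilon$ is entrywise standard Gumbel, $(X+\varepsilon)/\tau$ is entrywise $\mathrm{Gumbel}(X_{ij}/\tau, 1/\tau)$ and $\varepsilon/\tau_{prior}$ is entrywise $\mathrm{Gumbel}(0, 1/\tau_{prior})$, both with explicit log-densities. The following Monte-Carlo sketch (our own illustration; `kl_code` is a hypothetical helper, not the appendix's closed-form expression) estimates the KL directly from those densities:

```python
import numpy as np

def gumbel_logpdf(z, loc, scale):
    """Log-density of a Gumbel(loc, scale) variable."""
    t = (z - loc) / scale
    return -np.log(scale) - t - np.exp(-t)

def kl_code(X, tau, tau_prior, n_samples=100000, seed=0):
    """Monte-Carlo estimate of KL( (X + eps)/tau || eps/tau_prior ),
    eps entrywise standard Gumbel; the KL of the matrix-valued code is
    the sum of the entrywise KLs."""
    rng = np.random.default_rng(seed)
    eps = -np.log(-np.log(rng.uniform(size=(n_samples,) + X.shape)))
    z = (X + eps) / tau                       # samples from the posterior code
    log_q = gumbel_logpdf(z, X / tau, 1.0 / tau)
    log_p = gumbel_logpdf(z, 0.0, 1.0 / tau_prior)
    return (log_q - log_p).sum(axis=(1, 2)).mean()

kl = kl_code(np.ones((3, 3)), tau=1.0, tau_prior=1.0)
# for equal temperatures, each entry contributes mu + exp(-mu) - 1 with mu = X_ij/tau
# (derivable in closed form for Gumbel distributions with equal scales)
```

In practice one would use the explicit expression rather than Monte Carlo, but the sketch makes clear that all required densities are available in closed form.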
This `trick', however, comes at a cost: the divergence $KL\infdivx{Z_1}{Z_2}$ would remain unchanged by applying the same invertible transformation $g$ to both variables $Z_1$ and $Z_2$, but in general, for non-invertible transformations such as $S(\cdot)$, one has $KL\infdivx{Z_1}{Z_2}\geq KL\infdivx{g(Z_1)}{g(Z_2)}$. This implies that working in the `Gumbel space' might entail the optimization of a less tight lower bound. Nonetheless, through categorical experiments on MNIST (see appendix \ref{sub:vaecategorical}) we observe this loss of tightness is minimal, suggesting the suitability of our approach for permutations. Finally, we note that key to our treatment of the problem is the fact that both the prior and posterior are the same function ($S(\cdot)$) of a simpler distribution. This may not be the case in more general models.
To conclude this section, we refer the reader to table \ref{table:summary} in appendix \ref{sec:summary} for a summary of all the constructions on permutations developed in this work.
\section{Experiments}
\label{sec:experiments}
In this section we perform several experiments comparing our approach to existing methods. In the first three experiments we explore different Sinkhorn network architectures of increasing complexity, which mostly implement Section \ref{sec:sinkhornnetworks}. The fourth experiment relates to the probabilistic constructions described in Section \ref{sec:gumbelsinkhorn}, and addresses a problem involving marginal inferences over a latent, unobserved permutation. All experimental details not stated here are in appendix \ref{sec:expdetails}.
\subsection{Sorting numbers}
\label{sub:sorting}
\begin{table}[t]
\centering
\begin{tabular}{llllllll}
\toprule
\multicolumn{1}{c}{Test distribution} & \multicolumn{1}{c}{$N=5$} & \multicolumn{1}{c}{$N=10$} & \multicolumn{1}{c}{$N=15$} & \multicolumn{1}{c}{$N=80$} & \multicolumn{1}{c}{$N=100$} & \multicolumn{1}{c}{$N=120$}\\
\cmidrule(lr){1-1}\cmidrule(lr){2-7}
$U(0,1)$ &\textbf{.0} &\textbf{.0} & \textbf{.0} & \textbf{.0} & \textbf{.0} & \textbf{.01} \\
$U(0,1)$~\citep{Vinyals2015} &.06 & .43 & .9 & - & - & - \\
\cmidrule(lr){1-1}\cmidrule(lr){2-7}
$U(0,10)$ & .0 & .0 & .0 & .0 &.02 & .03\\
$U(0,1000)$ &.0 &.0 & .0 & .01 &.02 & .04\\
$U(1,2)$ &.0 &.0 & .0& .01 &.04 & .08\\
$U(10,11)$ &.0 &.0 & .0& .08 &.08 & .6\\
$U(100,101)$ &.0 &.0 & .01& .02 & .99& 1. \\
$U(1000,1001)$ &.0 &.0 & .07 & 1. & 1.& 1.\\
\bottomrule
\end{tabular}
\caption{Results on the number sorting task measured using Prop. any wrong. In the top two rows we compare to~\cite{Vinyals2015}, showing that our approach can sort far more inputs at significantly higher accuracy. In the bottom rows we evaluate generalization to different intervals on the real line.}
\label{table:tablesorting}
\end{table}
To illustrate the capabilities of Sinkhorn Networks in a simple scenario, we consider the task of sorting numbers using artificial neural networks as in~\cite{Vinyals2015}. Specifically, we sample uniform random numbers $\tilde{X}$ in the $[0,1]$ interval and we train our network with pairs $(\tilde{X},X)$ where $X$ are the same $\tilde{X}$ but in sorted order. The network has a first fully connected layer that links a number with an intermediate representation (with 32 units), and a second (also fully connected) layer that turns that representation into a row of the matrix $g(\tilde{X},\theta)$.
Table~\ref{table:tablesorting} shows our network learns to sort up to $N=120$ numbers. As an evaluation measure, we report the proportion of sequences in which there was at least one error (Prop. any wrong). Surprisingly, the network learns to sort numbers even when test examples are sampled not from $U(0,1)$ but from a considerably different interval, indicating the network is not overfitting. These results can be compared with those of~\cite{Vinyals2015}, where a much more complex (recurrent) network was used, but performance guarantees were obtained only for at most $N=15$ numbers. In that case, the reported error rate is 0.9, whereas ours starts to degrade only beyond $N\approx100$ for most test intervals.
\subsection{Jigsaw Puzzles}
\label{sub:puzzles}
\begin{table}[t]
\centering
\begin{tabular}{llllllllllll}
\toprule
&
\multicolumn{5}{c}{MNIST} & \multicolumn{4}{c}{Celeba} & \multicolumn{2}{c}{Imagenet}\\
\cmidrule(lr){2-6} \cmidrule(lr){7-10} \cmidrule(lr){11-12}
& 2x2 & 3x3 & 4x4 & 5x5 & 6x6 & 2x2 & 3x3 & 4x4 & 5x5 & 2x2 & 3x3\\
\midrule
Kendall tau & 1. & .83 &.43 &.39& .27 & 1.0 & .96 & .88 & .78 & .85&\textbf{.73}\\
Kendall tau \citep{Cruz2017} & - & - & - &-& - & - & - & - & - & - & .72 \\
\midrule
Prop. wrong & .0 & .09 &.45 &.45& .59 & .0 & .03 & .1 & .21 & .12 &.26\\
Prop. any wrong & .0 & .28 & .97 &1. &1.& .0 & .09 & .36& .73 & .19 & .53\\
$l1$ & .0 & .0 &.04 & .02& .03& .0 & .01 & .04 & .08 & .05&.12\\
$l2$ & .0 & .0 & .26 &.18 & .19& .0& .11 & .18 & .24 & .22 & .31\\
\bottomrule
\end{tabular}
\caption{Jigsaw puzzle results. We compare to the available result on the Kendall tau metric from~\cite{Cruz2017} and provide additional results from our experiments. Randomly guessed permutations of $n$ items have an expected proportion of errors of $(n-1)/n$. Note that our model has at least 20x fewer parameters.\label{table:puzzles}}
\end{table}
A more complex scenario for learning permutations arises in the reconstruction of an image $X$ from a collection of scrambled ``jigsaw'' pieces $\tilde{X}$~\citep{Noroozi2016,Cruz2017}. In this example, our network differs from the one in Section~\ref{sub:sorting} in that the first layer is a simple CNN (convolution + max pooling), which maps the puzzle pieces to an intermediate representation (see Figure \ref{fig:network} for details).
For evaluation on test data, we report several measures: first, in addition to Prop. any wrong we also consider Prop. wrong, the overall proportion of scrambled pieces that were wrongly assigned to their actual position. Also, we use $l1$ and $l2$ (train) losses and the Kendall tau, a ``correlation coefficient'' for ranked data.
In Table \ref{table:puzzles}, we benchmark results for the MNIST, Celeba and Imagenet datasets, with puzzles between 2x2 and 6x6 pieces. In MNIST we achieve very low $l1$ and $l2$ on up to 6x6 puzzles but a high proportion of errors. This is a consequence of our loss being agnostic to particular permutations, but only caring about reconstruction errors: as the number of black pieces increases with the number of puzzle pieces, many become unidentifiable under this loss.
In Celeba, we are able to solve puzzles of up to 5x5 pieces with only 21\% of the pieces of faces being incorrectly ordered (see Figure \ref{fig:figure3}a for examples of reconstructions). For this dataset, we provide additional baselines in Table \ref{table:suppceleba} of appendix \ref{sub:puzzlessupp}: there, we show that performance substantially decreases if the temperature is too small or too large, but only slightly decreases if only one Sinkhorn iteration is made. We observe that temperature does play a relevant role, consistent with the findings of~\cite{Maddison2016,Jang2016}. This might not be obvious a priori, as one could reason that temperature merely over-parameterizes the network; however, our results confirm this is not the case. We hypothesize that different temperatures lead parameters to converge in different phases or regions. Also, the minor difference observed with a single iteration suggests that only a few iterations might be necessary, implying potential savings in the memory needed to unroll computations in the graph during training.
Learning on the Imagenet dataset is much more challenging, as there is no sequential structure that generalizes among images, unlike in Celeba and MNIST. On this dataset, our network ties with the .72 Kendall tau score reported in \cite{Cruz2017}. Their network, named DeepPermNet, is based on stacking up to the sixth fully connected layer \textit{fc6} of AlexNet \citep{Krizhevsky2012}, which finally (fully) connects to a Sinkhorn layer through intermediate layers \textit{fc7} and \textit{fc8}. We note, however, that our network is much simpler, with only two layers and far fewer parameters. Specifically, the network that produced our best results had around 1,050,000 parameters (see appendix \ref{sec:expdetails} for a derivation), while in DeepPermNet, the layer connecting \textit{fc6} with \textit{fc7} alone has $512\times 4096\times 9\approx19,000,000$ parameters, not counting the AlexNet parameters (also to be learned). Indeed, we believe there is no reason to use a complex stack of convolutions: as the number of pieces increases, each piece becomes smaller and the convolutional layer eventually becomes fully connected. In the following experiment we explore this phenomenon in more detail.
\begin{figure}[t]
\includegraphics[width=1\linewidth]{figure3bcomp.pdf}
\caption{(a) Sinkhorn networks can be trained to solve Jigsaw Puzzles. Given a trained model, `soft' reconstructions are shown at different $\tau$ using $S(X/\tau)$. We also show hard reconstructions, made by computing $M(X)$ with the Hungarian algorithm \citep{Munkres1957}. (b) Sinkhorn networks can also be used to learn to transform any MNIST digit into another. We show hard and soft reconstructions, with $\tau=1$.}
\label{fig:figure3}
\end{figure}
\subsection{Assembly of arbitrary MNIST digits from pieces}
We also consider an original application, motivated by the observation that the Jigsaw Puzzle task becomes ill-posed if a puzzle contains too many pieces. Indeed, consider the binarized MNIST dataset: there, reconstructions are not unique if pieces are sufficiently atomic, and in the limit case of pieces of 1x1 pixels, for a given scrambled MNIST digit there are as many valid reconstructions as there are MNIST digits with the same number of white pixels. In other words, reconstructions stop being unique, and instead correspond to a multimodal distribution over permutations.
We exploit this intuition to ask whether a neural network can be trained to achieve arbitrary digit reconstructions, given loose atomic pieces. To address this question, we slightly changed the network in Section \ref{sub:puzzles}, this time stacking several second layers linking the intermediate representation to the output. We trained the network to reconstruct a particular digit with each layer, using digit identity to indicate which layer should activate for a particular training example.
Our results demonstrate a positive answer: Figure \ref{fig:figure3}b shows reconstructions of arbitrary digits given 10x10 scrambled pieces. In general, they can be unambiguously identified by the naked eye. Moreover, this judgement is supported by the assessment of a neural network. Specifically, we trained a two-layer CNN\footnote{Specifically, we used the one described in the \href{https://www.tensorflow.org/get_started/mnist/pros}{Deep MNIST for experts tutorial.}} on MNIST (achieving 99.2\% accuracy on the test set) and evaluated its performance on the test set generated by arbitrary transformations of each digit of the original test set into any other digit. We found the CNN made an appropriate judgement 85.1\% of the time. More specific results regarding particular transformations are presented in Table \ref{table:assembly} of appendix \ref{sub:asssup}.
Finally, we note that meaningful assemblies are possible regardless of the original digit: in Figure \ref{fig:afig1} of appendix \ref{sub:asssup} we show arbitrary reconstructions, by this same network, of ``digits'' from a `strongly mixed' MNIST dataset. In detail, these ``digits'' were crafted by sampling, without replacement, from a bag containing all the small pieces from all original digits. These reconstructions suggest the possibility of an alternative to generative modeling, based on the (random) assembly of small pieces of noise, instead of the processing of noise through a neural network. However, this would require training the network without supervision, which is beyond the scope of this work.
\subsection{Posterior inference over permutations with the Gumbel-Sinkhorn estimator}
\label{sub:celegans}
We illustrate how the $\mathcal{G.S.}$ distribution can be used as a continuous relaxation for stochastic nodes in a computational graph. To this end, we revisit the ``C. elegans neural identification problem'' originally introduced in~\cite{Linderman2017}. We refer the reader to~\citep{Linderman2017} for an in-depth introduction, but briefly, \textit{C. elegans} is a nematode (worm) whose biological neural configuration -- the \textit{connectome} -- is stereotypical; i.e., specimens always possess the same number of somatic neurons (282) \citep{Varshney2011}, and the ways those neurons connect and interact change little from worm to worm. Therefore, its brain can be thought of as a canonical object, and its neurons can be unequivocally identified with names.
The task, then, consists of matching traces from the observed neural dynamics $Y$ to identities (neuron names) in the canonical brain. This problem is stated in terms of a Bayesian hierarchical model, in order to profit from prior information that may constrain the possibilities. Specifically, one states a linear dynamical system $Y_t = PW P^\mathsf{T} Y_{t-1}+\nu_t$, where $\nu_t$ is a noise term and $W$ and $P$ are latent variables with respective prior distributions. $W$ encodes the dynamics, with a prior $p(W)$ representing the sparseness of the connectome, etc., and $P$ is a permutation matrix representing the matching between indexes of observed neurons and their canonical counterparts, over which we place a flat prior $p(P)$. Notably, the framework allows jointly modeling many worms sharing the same dynamical system, but here we avoid explicit references to individuals for notational ease.
Given this model, we seek the posterior distribution $p(\{P,W\}|Y)$, a problem that we address with variational inference \citep{Blei2017}, using the constructions developed in Section \ref{sub:vi}. In Table \ref{table:celegans} (and also in Table \ref{table:celeganssup} of appendix \ref{sub:celeganssup}) we show results for this task, using matching accuracy as the performance measure. These are broken down by relevant experimental covariates~\citep{Linderman2017}: the proportion of neurons known beforehand, and task difficulty. As baselines, we include (i) a simple MCMC sampler that proposes local swaps on permutations, (ii) the rounding method presented in \cite{Linderman2017}, and (iii) our method without regularization. Results show our method outperforms the alternatives in most cases. MCMC fails because mixing is poor, but differences with the other baselines are much subtler. Among them, the clear gap with respect to the no-regularization case confirms the stochastic nature of this problem, i.e., that it is truly necessary to represent a latent probabilistic permutation. We believe our method outperforms the one in \cite{Linderman2017} because theirs, although it provides an explicit density, is a less tight relaxation, in the sense that points can lie anywhere in the ambient space and not only in the Birkhoff polytope. Therefore, their prior also needs to be defined on the entire space, and may not properly act as an efficient regularizer.
\begin{table}[t]
\centering
\begin{tabular}{lllllllll}
\toprule
Prop. known neurons
& \multicolumn{2}{c}{40\%} & \multicolumn{2}{c}{30\%} & \multicolumn{2}{c}{20\%} & \multicolumn{2}{c}{10\%}\\
\cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9}
Difficulty & Easy& Hard & Easy & Hard & Easy & Hard & Easy & Hard \\
\midrule
MCMC & .85 & .82 &.51 &.44& .29 & .27 & .16 & .12 \\
\citep{Linderman2017} & \textbf{.97} & .95 & .90 & \textbf{.85} & \textbf{.77} & \textbf{.59} & .39 & .21 \\
Gumbel-Sinkhorn & \textbf{.97} & \textbf{.96} & \textbf{.92} & .84 & .76 & \textbf{.59} & \textbf{.44} & \textbf{.26}\\
\shortstack{Gumbel-Sinkhorn, no regularization}
& .96 & .93 & .89 & .78 & .71 & .52 & .4 & .23 \\
\bottomrule
\end{tabular}
\caption{Results for the C.\ elegans neural inference problem.}
\label{table:celegans}
\end{table}
\section{Related work}
\label{sec:related}
Learning with matchings has been extensively studied in the machine learning community, but current applications mostly relate to structured prediction \citep{petterson2009exponential,tang2015bethe}. In contrast, our probabilistic treatment focuses on marginal inference in a model with a latent matching. This is a more challenging scenario, as standard learning techniques, i.e.\ the score function estimator or REINFORCE \citep{williams1992simple}, are not applicable because of the intractable partition function of non-trivial distributions over matchings.
In the case of latent categories, a recent technique that combines a relaxation with the re-parameterization trick \citep{Kingma2013} was proposed as a competitive alternative to REINFORCE for the marginal inference scenario. Specifically, \cite{Maddison2016,Jang2016} use the Gumbel trick to re-parameterize a discrete density, and then replace it with a relaxed surrogate, the Gumbel-Softmax distribution, to enable gradient descent. Our work, like the simultaneous work of \cite{Linderman2017}, aims to extend the scope of this technique to latent permutations. We regard our Gumbel-Sinkhorn distribution as the most natural tractable extension of the Gumbel-Softmax to permutations, as we clearly parallel each of the steps leading to its construction. A parallel is also presented in~\cite{Linderman2017}; notably, unlike ours, their framework produces tractable densities. However, it is less clear how their construction extends each of the features of the Gumbel-Softmax: for example, their rounding-based relaxation also uses the Sinkhorn operator, but the limit they consider does not make use of the non-trivial statement of Theorem 1, which naturally extends the categorical case (see appendix \ref{sub:relation} for details). In practice, our results favor the Gumbel-Sinkhorn distribution, as it is a tighter relaxation.
Connections between permutations and the Sinkhorn operator have been known for at least twenty years. Indeed, the limit in Theorem 1 was first presented in \cite{Kosowsky1994}, but their interpretation and motivation were more linked to statistical physics and economics.
However, our approach is different and links to recent developments in optimal transport (OT) \citep{Villani2003}: Theorem 1 draws on the entropy-regularization technique for OT developed in \cite{Cuturi2013}, where the entropy-regularized transportation problem is referred to as a `Sinkhorn distance'. The extension is sensible since, in the case of transportation between two discrete measures (as here), the Birkhoff polytope appears naturally as the optimization set \citep{Villani2003}. Entropy regularization as a means to achieve a differentiable version of a loss was first proposed in \cite{Genevay2017} in the context of generative modeling. Although this field may appear separate, recent work \citep{Salimans2018} makes the connection to permutations explicit: to compute a (Wasserstein) distance between a batch of dataset samples and a batch of generative samples of the same size, one needs to solve the matching problem so that the distance between matched samples is minimized. Finally, we note that our work shares with \cite{Salimans2018,Genevay2017} the feature that the OT cost function (here, the matrix $X$) is learned using an artificial neural network.
We understand our work as extending~\cite{Adams2011}, which developed neural networks to learn a permutation-like structure: a ranking. There, however, as in \cite{Helmbold2009}, the objective function was linear and the Sinkhorn operator was instead used as an approximation of the matrix of marginals, i.e., $S(P)\approx E(P)$. Consequently, there was no need to introduce a temperature parameter and consider a limit argument, which is critical in our case. Interestingly, equation \eqref{eq:entropyreg} can be understood in terms of approximate marginal inference, justifying the approximation $S(P)\approx E(P)$; we comment on this in appendix \ref{sec:marginal}. Note that Sinkhorn iteration can be interpreted as mean-field inference in an associated Gibbs distribution over matchings. With this in mind, backpropagating through Sinkhorn amounts to end-to-end learning in an unrolled inference algorithm~\citep{stoyanov2011empirical, domke2013learning}. In future work, it may be fruitful to unroll alternative algorithms for marginal inference over matchings, such as belief propagation~\citep{Huang2009}.
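For concreteness, the truncated Sinkhorn operator with a temperature is a few lines of NumPy. The sketch below is our own illustration (function names and the toy $4\times 4$ input are not from the paper's released code): exponentiate $X/\tau$ entrywise and alternate row and column normalizations, so that lower temperatures push the output toward a permutation matrix.

```python
import numpy as np

def sinkhorn(X, tau=1.0, n_iters=100):
    """Truncated Sinkhorn operator S(X / tau): exponentiate entrywise,
    then alternately normalize rows and columns."""
    P = np.exp(X / tau)
    for _ in range(n_iters):
        P = P / P.sum(axis=1, keepdims=True)   # row normalization
        P = P / P.sum(axis=0, keepdims=True)   # column normalization
    return P

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 4))
P_soft = sinkhorn(X, tau=1.0)                  # doubly stochastic matrix
P_hard = sinkhorn(X, tau=0.05, n_iters=500)    # close to a permutation matrix
```

At $\tau=1$ the output is a generic doubly stochastic matrix; at small $\tau$ the rows concentrate on the assignment maximizing $\langle P, X\rangle_F$.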
Sinkhorn networks were also very recently introduced in \cite{Cruz2017}, although their work differs substantially from ours. While their interest lies in the representational aspects of CNNs, we are concerned with more fundamental properties. In their work, they do not consider a temperature parameter $\tau$, but their network still learns successfully, as $\tau=1$ happens to fall within the range of reasonable values. On the Jigsaw puzzle task, we showed that we achieve equivalent performance with a much simpler network having several times fewer parameters and layers. Nonetheless, we recognize the need for more complex architectures for the tasks considered in~\cite{Cruz2017}, and we hope our more general theory, particularly Theorem 1 and the notion of equivariance, may aid further developments in that direction.
\section{Discussion}
We have demonstrated that Sinkhorn networks are able to learn to find the right permutation in the most elementary cases, where all training samples obey the same sequential structure, e.g., sorted numbers and pieces of faces, where we expect parts of faces to occupy similar positions from sample to sample. This is already non-trivial, as it indicates one can train a neural network to solve the linear assignment problem.
However, the fact that Imagenet represented a much more challenging scenario indicates there are clear limits to our formulation. As the most obvious extension, we propose to introduce a sequential stage, in which current solutions are kept in a memory buffer and improved. One way to achieve this would be to explore more complex parameterizations of permutations, e.g., replacing $M(X)$ by a quadratic operator that may parameterize a notion of local distance between pieces. Alternatively, one may resort to reinforcement learning techniques, as suggested in \cite{Bello2016}.
Either sequential improvement would help solve the ``Order Matters'' problem~\citep{Vinyals2015}, and we regard our elementary work as a significant step in that direction.
We have made available Tensorflow code for Gumbel-Sinkhorn networks, featuring an implementation of the number sorting experiment, at \href{http://github.com/google/gumbel_sinkhorn}{http://github.com/google/gumbel\_sinkhorn}.
\bibliographystyle{iclr2018_conference}
\section{Introduction}
In this part we present some basic definitions used throughout the paper.
\bigskip
Let $\mathcal{A}$ be a set, called the alphabet, which may be finite or infinite. We will be working with a fixed infinite word $x$, so that $\mathcal{A}$ may be taken to be countable, equal to the set of letters appearing in $x$.
An infinite word $x$ is an element of $\mathcal{A}^\mathbb{N}$. A finite word is an element of $\cup_{n\geq 1} \mathcal{A}^n$, and we denote by $\mathcal{A}^+$ the set of finite words. If $u=a_1a_2\ldots a_n$, then $|u|=n$ is its length and $\{a_1a_2\ldots a_i \ | \ 1\leq i \leq n\}$ is its set of prefixes. If $x$ is a finite or infinite word and $n\geq 1$, we denote by $\mathbb{P}_n(x)$ its prefix of length $n$. A factor $u$ of $v$ is a finite word appearing in $v$.
Let $T: x_0x_1x_2\ldots \in \mathcal{A}^\mathbb{N} \longrightarrow x_1x_2x_3\ldots\in \mathcal{A}^\mathbb{N}$ denote the shift on infinite words, depriving an infinite word of its first letter. If $x$ is an infinite word, a suffix of $x$ is an element of the form $T^k(x)$ for some $k\geq 0$.
A factorisation of an infinite word $x$ is an expression of the form $x=u_1u_2u_3u_4\ldots$ where the $u_i$ are finite words.
\bigskip
Let $X$ be any set. A coloring on $X$ is any map $C: X \longrightarrow C$ where the codomain $C$ is a finite set; we make the abuse of denoting the map $C$ and its image by the same symbol (which will always be $C$). We call the set $C$ the set of colors, for example $C=\{\textbf{red}, \textbf{blue}\}$.
Suppose that a set $X$ and a coloring $C$ on $X$ are given. A subset $Y$ of $X$ is called monochromatic if the restriction of $C$ to $Y$ is constant.
\begin{defi}[Super-monochromatic factorisation]
Let $y$ be an infinite word over $\mathcal{A}$, and $C$ a coloring on $\mathcal{A}^+$. A factorisation
\begin{center}
$y=u_1u_2u_3u_4\ldots$
\end{center}
is said to be super-monochromatic if the set of finite words :
\begin{center}
$\displaystyle \left( u_{n_1}u_{n_2}\ldots u_{n_k} \right)_{k\geq 1, n_1<n_2<\ldots < n_k}$
\end{center}
is monochromatic.
\end{defi}
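The condition can be explored by brute force on a finite prefix of a factorisation. The following Python sketch is our own illustration (the coloring by length parity is a toy example, not from the text); it enumerates all concatenations $u_{n_1}u_{n_2}\ldots u_{n_k}$ with $n_1<n_2<\ldots<n_k$ over the given blocks and checks that they receive a single color.

```python
from itertools import combinations

def is_super_monochromatic_prefix(blocks, color):
    """Check the super-monochromatic condition on a finite prefix
    u_1, ..., u_m of a factorisation: every concatenation
    u_{n_1} u_{n_2} ... u_{n_k}, n_1 < ... < n_k, must share one color."""
    words = [''.join(blocks[i] for i in idx)
             for k in range(1, len(blocks) + 1)
             for idx in combinations(range(len(blocks)), k)]
    return len({color(w) for w in words}) == 1

# Toy coloring: color a word by the parity of its length.
parity = lambda w: len(w) % 2
```

With this coloring, blocks of even length always give a super-monochromatic prefix, while mixing parities does not.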
\section{Context of Ramsey theory}
In this section we put Conjecture 1 in the context of Ramsey theory. We state Hindman's theorem, prove the easy part of the conjecture, and show that for any coloring of $\mathcal{A}^+$ and any infinite word $x$ there is a $y$ in the subshift of $x$ having a super-monochromatic factorisation.
\bigskip
Denote by $\mathcal{P}(\mathbb{N})$ the set of finite subsets of $\mathbb{N}$. Write $A<B$ for $A,B\in \mathcal{P}(\mathbb{N})$ when $\max A < \min B$.
\begin{theo}[Hindman]
\begin{enumerate}[i)]
\item For any coloring on $\mathbb{N}$, there exists an infinite subset $M=\{m_1<m_2<m_3<\ldots\}$ of $\mathbb{N}$ such that the set
\begin{center}
$\displaystyle \left( m_{n_1}+m_{n_2}+\ldots + m_{n_k} \right)_{k\geq 1, n_1<n_2<\ldots < n_k}$
\end{center}
is monochromatic.
\item For any coloring on $\mathcal{P}(\mathbb{N})$, there exists an infinite subset $M=\{A_1<A_2<A_3<\ldots\}$ of $\mathcal{P}(\mathbb{N})$ such that the set
\begin{center}
$\displaystyle \left( A_{n_1}\cup A_{n_2}\cup\ldots \cup A_{n_k} \right)_{k\geq 1, n_1<n_2<\ldots < n_k}$
\end{center}
is monochromatic.
\end{enumerate}
\end{theo}
Part $i)$ is Hindman's theorem in its classical form, and the implication $i)\Rightarrow ii)$ may easily be found in the literature.
\bigskip
\begin{prop}
Let $x$ be an infinite word over $\mathcal{A}$, and $C$ a coloring of $\mathcal{A}^+$. If $x$ is ultimately-periodic, then $x$ admits a suffix having a super-monochromatic factorisation.
\end{prop}
\begin{proof}
Write $x=uvvv\ldots$ where $u$ and $v$ are finite words. Consider the coloring $\widetilde{C}$ of $\mathbb{N}^*$ defined for $i\geq 1$ by :
\begin{center}
$\widetilde{C}(i)=C(v^i)$.
\end{center}
By Hindman's theorem, there exists an infinite subset $M=\{m_1<m_2<m_3<\ldots\}$ such that the set of words
\begin{center}
$\displaystyle \left( v^{m_{n_1}+m_{n_2}+\ldots + m_{n_k}} \right)_{k\geq 1, n_1<n_2<\ldots < n_k}$
\end{center}
is monochromatic. Hence the suffix $v^{m_1}v^{m_2}v^{m_3}\ldots$ has a super-monochromatic factorisation.
\end{proof}
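A finite fragment of Hindman's theorem can be explored by brute force. The sketch below is our own toy illustration (the parity coloring is an example, not from the text): it searches $\{1,\ldots,N\}$ for $m_1<\ldots<m_s$ whose nonempty subset sums are monochromatic, the finite analogue of the set used in the proof above.

```python
from itertools import combinations

def finite_sums(ms):
    """All sums over nonempty subsets of ms (finite Hindman sums)."""
    return {sum(s) for k in range(1, len(ms) + 1) for s in combinations(ms, k)}

def find_sum_set(color, N, size):
    """Search {1, ..., N} for m_1 < ... < m_size whose finite subset sums
    all stay within {1, ..., N} and are monochromatic."""
    for ms in combinations(range(1, N + 1), size):
        sums = finite_sums(ms)
        if max(sums) <= N and len({color(s) for s in sums}) == 1:
            return ms
    return None

ms = find_sum_set(lambda n: n % 2, N=30, size=3)   # parity coloring
```

For the parity coloring, any odd entry ruins monochromaticity (its singleton sum is odd while pairwise sums of odd entries are even), so the search returns the first all-even triple.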
\bigskip
Endow $\mathcal{A}$ with the discrete topology, and denote by
\begin{center}
$\Omega (x) = \overline{\{T^{k}(x) \ | \ k\geq 0 \}}$
\end{center}
the subshift of $x$. Recall that a factor of $x$ is said to be recurrent if it appears infinitely many times in $x$. The word $x$ itself is said to be recurrent if every factor of $x$ is recurrent. If $y$ is an infinite recurrent word such that every factor of $y$ is a factor of $x$, then $y\in \Omega (x)$. If $\mathcal{A}$ is finite, then $\Omega(x)$ contains a recurrent word by a compactness argument.
\begin{prop}
Let $x$ be an infinite word over $\mathcal{A}$ such that $\Omega(x)$ admits a recurrent element, and $C$ a coloring of $\mathcal{A}^+$. Then there exists $y\in \Omega (x)$ such that $y$ has a super-monochromatic factorisation.
\end{prop}
\begin{proof}
Let $z\in \Omega (x)$ be a recurrent word. We build a sequence $(u_n)_{n\geq 1}$ of factors of $z$ such that :
\begin{center}
$\forall n\geq 1$ $u_1u_2\ldots u_n$ is a suffix of $u_{n+1}$,
\end{center}
a property that we call the suffix property, as follows. Take for $u_1$ any factor of $z$. Since $u_1$ is recurrent, there exists a finite word $v$ such that $u_1vu_1$ is a factor of $z$; define $u_2=vu_1$. If $u_1, u_2, \ldots, u_n$ are defined so that for all $k=1,\ldots,n$ the word $u_1u_2\ldots u_{k-1}$ is a suffix of $u_{k}$ and $u_1u_2\ldots u_n$ is a factor of $z$, then there exists a finite word $v$ such that $u_1u_2\ldots u_n v u_1u_2\ldots u_n$ is a factor of $z$; in this case, we set $u_{n+1}=v u_1u_2\ldots u_n$. It is clear that the sequence $(u_n)$ defined by this induction satisfies the desired property.
For a finite subset $A$ of $\mathbb{N}^*$, set
\begin{center}
$\displaystyle u_A = \prod_{i\in A} u_i$
\end{center}
and apply Hindman's theorem to the coloring $\widetilde{C}$ of $\mathcal{P}(\mathbb{N})$ for $A\subset \mathbb{N}^*$ a finite set by :
\begin{center}
$\widetilde{C}(A)=C(u_A)$
\end{center}
to obtain an infinite sequence $M=\{A_1<A_2< A_3 < \ldots\}$ such that the set of words
\begin{center}
$\displaystyle \left(u_{A_{n_1}\cup A_{n_2}\cup\ldots \cup A_{n_k}}\right)_{k\geq 1, n_1<n_2<\ldots < n_k}$
\end{center}
is monochromatic. The suffix property implies that these words are factors of $x$. And since $u_{A\cup B}=u_Au_B$ whenever $A<B$, the set of concatenations
\begin{center}
$\displaystyle \left(u_{A_{n_1}}u_{A_{n_2}}\ldots u_{A_{n_k}}\right)_{k\geq 1, n_1<n_2<\ldots < n_k}$
\end{center}
is monochromatic and is a subset of the set of factors of $x$. This shows that the infinite word
\begin{center}
$y=\displaystyle \prod_{n\geq 1}u_{A_n}$
\end{center}
belongs to $\Omega(x) $ and has a super-monochromatic factorisation.
\end{proof}
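The inductive construction in the proof above can be carried out concretely on a long prefix of a recurrent word, such as the Thue-Morse word. The Python sketch below is our own illustration: with $w=u_1\ldots u_n$, it locates an occurrence of $w\,v\,w$ in the prefix and sets $u_{n+1}=v\,u_1\ldots u_n$.

```python
def thue_morse(n):
    """First n letters of the Thue-Morse word, a recurrent word over {0,1}."""
    s = '0'
    while len(s) < n:
        s += s.translate(str.maketrans('01', '10'))
    return s[:n]

def suffix_property_blocks(z, count):
    """Build u_1, ..., u_count with u_1...u_n a suffix of u_{n+1}:
    find w v w inside z (where w = u_1...u_n) and set u_{n+1} = v w."""
    blocks = [z[0]]
    w = z[0]                                   # w = u_1 ... u_n
    for _ in range(count - 1):
        i = z.find(w)
        j = z.find(w, i + len(w))              # next non-overlapping occurrence
        assert j != -1, 'prefix of z is too short'
        v = z[i + len(w):j]
        blocks.append(v + w)                   # u_{n+1} = v u_1 ... u_n
        w += blocks[-1]                        # w becomes u_1 ... u_{n+1}
    return blocks

z = thue_morse(1 << 14)
blocks = suffix_property_blocks(z, 4)
```

By construction, each $u_1\ldots u_n$ is a suffix of $u_{n+1}$ and $u_1\ldots u_{n+1}$ is again a factor of $z$, so the induction can continue as long as the prefix is long enough.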
\section{A few reductions}
Let $x$ be an infinite word over $\mathcal{A}$. Let
\begin{center}
$T^k(x)= u_1u_2u_3u_4\ldots$ \quad $(k\geq 0)$
\end{center}
be a factorisation of a suffix of $x$.
In this section we define some colorings that allow us to reduce the problem to a certain extent. We present three reductions. The first allows us to reduce to the case where $x$ is recurrent, at the cost of one color. The second allows us to reduce to the case where all the $(u_{n_1}u_{n_2}\ldots u_{n_k})_{k\geq 1, n_1<n_2<\ldots < n_k}$ are factors of $x$, at zero cost. The third allows us to reduce to the case where the $(u_i)$ satisfy the suffix property, namely that for all $n\geq 1$, $u_1u_2\ldots u_n$ is a suffix of $u_{n+1}$, at the cost of one color.
In this section, every statement concerning monochromatic factorisations is made with respect to the most recently defined coloring. We start by considering first occurrences of factors of $x$.
\bigskip
Write $x=x_0x_1x_2x_3\ldots$ where the $(x_i)$'s are elements of $\mathcal{A}$. For $u$ a factor of $x$, let :
\begin{center}
$x_{A(u)}x_{A(u)+1}\ldots x_{B(u)-1}$
\end{center}
be the first occurrence of $u$ in $x$. In other words :
\begin{center}
$A(u)=\min \{ k\geq 0 \ | \ u=x_kx_{k+1}\ldots x_{k+|u|-1} \}=\min \{ k\geq 0 \ | \ u=\mathbb{P}_{|u|}(T^k(x)) \}$
\end{center}
where we recall that $\mathbb{P}_{n}(y)$ is the prefix of length $n$ of $y$.
We have the obvious relation $B(u)-A(u)=|u|$. Moreover, if $v$ is a prefix of $u$ then $A(v)\leq A(u)$, and if $v$ is a suffix of $u$ then $B(v)\leq B(u)$. These two functions encode the relative information between $u$ and $x$; they are relevant mostly in the non-ultimately-periodic case, as the next proposition shows.
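The functions $A$ and $B$ are straightforward to compute on a finite prefix of $x$. The Python sketch below is our own illustration (using a Thue-Morse prefix as the ambient word); it also exhibits the monotonicity $A(v)\leq A(u)$ when $v$ is a prefix of $u$.

```python
def first_occurrence(prefix, u):
    """(A(u), B(u)): start and end (exclusive) of the first occurrence of u,
    computed on a finite prefix of the infinite word x."""
    A = prefix.find(u)
    assert A != -1, 'u does not occur in this prefix'
    return A, A + len(u)

x = '0110100110010110'            # a prefix of the Thue-Morse word
A, B = first_occurrence(x, '010')
```

Here the factor `'010'` first occurs at position 3, so $A=3$ and $B=6$, consistent with $B(u)-A(u)=|u|$.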
\begin{prop}
Let $x$ be a non-ultimately-periodic word over $\mathcal{A}$. Then for all $k\geq 0$, there exists $N\geq 0$ such that for all $n\geq N$, we have
\begin{center}
$k=A(\mathbb{P}_n(T^k(x)))$
\end{center}
in other words, for any suffix $y$ of $x$, any sufficiently long prefix of $y$ has its first occurrence where $y$ starts.
\end{prop}
\begin{proof}
The infinite word $x$ is non-ultimately periodic if and only if
\begin{center}
$\forall n,m \in \mathbb{N}$, \quad $n\neq m$ $\ \Longrightarrow \ \ T^n(x)\neq T^m(x) $.
\end{center}
For $k\geq 0$, we have $T^k(x) \neq T^j(x)$ for all $j=0\ldots k-1$, so that
\begin{center}
$\forall j=0\ldots k-1$, $\exists N_j\geq 0$, $\forall n \geq N_j$, $\mathbb{P}_n(T^k(x))\neq \mathbb{P}_n(T^j(x))$.
\end{center}
This shows that for all $n\geq \max\{ N_j \ |\ j=0\ldots k-1 \}$, $\mathbb{P}_n(T^k(x))$ does not appear at a place $j$ for $j<k$. Hence the proposition.
\end{proof}
Thus in an expression of the form $T^k(x)=u_1u_2u_3\ldots$, if the $(u_i)$'s are long enough, we may assume that $A(u_1)=k$, and $B(u_i)=A(u_{i+1})$ for $i\geq 1$.
\bigskip
\begin{defi}
Let $C$ be the coloring defined on $\mathcal{A}^+$ for $u\in \mathcal{A}^+$ by :
\begin{itemize}
\item $C(u)=\textbf{red}$ if $u$ is a factor of $x$ that is not recurrent,
\item $C(u)=\textbf{blue}$ if $u$ is not a factor of $x$ or if $u$ is a recurrent factor of $x$.
\end{itemize}
\end{defi}
\begin{prop}
Assume that no suffix of $x$ is recurrent. Then no suffix of $x$ has a super-monochromatic factorisation for the coloring $C$ defined above.
\end{prop}
\begin{proof}
Assume by contradiction that $x$ has a suffix $T^k(x)= u_1u_2u_3u_4\ldots$ \quad $(k\geq 0)$ having a super-monochromatic factorisation. In view of the definition of a super-monochromatic factorisation, $u_1$ may be taken arbitrarily long. Since $T^k(x)$ is not recurrent, if $u_1$ is long enough then it contains a non-recurrent factor of $x$, and hence $u_1$ is $\textbf{red}$. This shows that the factorisation is $\textbf{red}$.
This implies that for all $n\geq 2$, the words $u_1u_n$ are $\textbf{red}$; in particular, they are factors of $x$. But $x$ is not ultimately periodic (otherwise $x$ would have a recurrent suffix), so $\lim_n A(u_1u_n)= +\infty$ and $u_1$ appears infinitely many times in $x$, contradicting the fact that $u_1$ contains a non-recurrent factor.
\end{proof}
\begin{defi}
Let $C_{NF}$ be the coloring defined on the set of non-factors of $x$ for $u\in\mathcal{A}^+$ by :
\begin{itemize}
\item $C_{NF}(u)=\textbf{red}$ if $u$ is not a factor of $x$ and every decomposition of $u$ as a product of factors of $x$ has at least three terms,
\item $C_{NF}(u)=\textbf{blue}$ if $u$ is not a factor of $x$ and may be written as the concatenation of two factors of $x$.
\end{itemize}
\end{defi}
Notice, in this definition, that we do not specify the color of factors of $x$.
\begin{prop}
Let $T^k(x)=u_1u_2u_3\ldots$ be a factorisation of a suffix of $x$, and for $A\subset \mathbb{N}^*$ a finite subset, set $u_A=\prod_{i\in A}u_i$. Assume that
\begin{center}
$\forall \nu \geq 0, \exists A \in \mathcal{P}(\mathbb{N}^*)$ \ such that \ $\min A \geq \nu$ and $u_A$ is not a factor of $x$.
\end{center}
Let $C$ be a coloring of $\mathcal{A}^+$ such that $C(u)=C_{NF}(u)$ whenever $u$ is not a factor of $x$. Then the set
\begin{center}
$\left( u_{n_1}u_{n_2}\ldots u_{n_k} \right)_{k\geq 1, n_1<n_2<\ldots < n_k}$
\end{center}
is not $C$-monochromatic.
\end{prop}
\begin{proof}
Assume by contradiction that the set $(u_A)_{A\in\mathcal{P}(\mathbb{N}^*)}$ is monochromatic. Take $\nu_1\geq 0$ and a finite set $A$ with $\min A \geq \nu_1$ such that $u_A$ is not a factor of $x$. Take $\nu_2 > \max A$ and a finite set $B$ with $\min B \geq \nu_2$ such that $u_B$ is not a factor of $x$, and set $z=u_Au_B$. The word $z$ belongs to the monochromatic set and is not a factor of $x$; moreover, if $z=v_1v_2$ where $v_1$ and $v_2$ are factors of $x$, then either $u_A$ is a factor of $v_1$ or $u_B$ is a factor of $v_2$, so at least one of $u_A$ or $u_B$ is a factor of $x$, which is impossible. This shows that any expression of $z$ as a concatenation of factors of $x$ must contain at least three terms, so that $z$ is \textbf{red} and the factorisation is \textbf{red}.
Now let $i,j\in\mathbb{N}^*$ be such that $i+2\leq j$. If $u_iu_j$ is not a factor of $x$, then it must be $\textbf{blue}$, contradicting the monochromatic assumption. Hence $u_iu_j$ is a factor of $x$, and the same proof shows that $u_A$ is a factor of $x$ whenever $A$ is the union of two intervals. Now if $A=I_1\cup I_2\cup I_3$ is the union of three intervals, then since $u_{I_1\cup I_2}$ is a factor of $x$, $u_A$ is either \textbf{blue} or a factor of $x$; by the monochromatic assumption, $u_A$ is a factor of $x$. Proceeding by induction, we see that $u_A$ is a factor of $x$ for every finite subset $A$ of $\mathbb{N}$, which contradicts our first hypothesis.
\end{proof}
\begin{defi}
Let $C$ be the coloring defined on the set of factors $u$ of $x$ by :
\begin{itemize}
\item $C(u)=\textbf{red}$ if and only if there exists a decomposition $u=v_1v_2$ with $A(v_1)=A(u)$ and $B(v_2)=B(u)$
\item $C(u)=\textbf{blue}$ otherwise.
\end{itemize}
\end{defi}
\begin{prop}
If a suffix $y$ of $x$ has a super-monochromatic factorisation, then this suffix admits a super-monochromatic factorisation $y=u_1u_2u_3\ldots$ with $\forall n \geq 1$, $u_1u_2\ldots u_n$ is a suffix of $u_{n+1}$.
\end{prop}
\begin{proof}
Write $y=T^k(x)=u_1u_2u_3\ldots$ a super-monochromatic factorisation, with the assumption that $A(u_1)=k$ and $B(u_i)=A(u_{i+1})$ for all $i\geq 1$. Assume also that $|u_{n+1}|\geq |u_1u_2\ldots u_n|$ for all $n\geq 1$. Under these conditions, $u_1u_2$ is \textbf{red}, so that the factorisation is \textbf{red}.
Let $n\geq 1$, and consider the \textbf{red} factor $u_1u_2\ldots u_n u_{n+2}$ of $x$. Write $u_1u_2\ldots u_n u_{n+2}=v_1v_2$ with $A(v_1)=A(u_1u_2\ldots u_n u_{n+2})$ and $B(v_2)=B(u_1u_2\ldots u_n u_{n+2})$. If $v_1$ is a prefix of $u_1u_2\ldots u_n$, then $A(v_1)\leq A(u_1u_2\ldots u_n)=A(u_1)$. Also in this case, $u_{n+2}$ is a suffix of $v_2$ so that $B(v_2)\geq B(u_{n+2})$. This implies, with $z=u_1u_2\ldots u_n u_{n+2}$,
\begin{center}
$|z|= B(z)-A(z) \geq B(u_{n+2}) - A(u_1) = |u_1u_2\ldots u_nu_{n+1}u_{n+2}|=|z|+|u_{n+1}|> |z|$
\end{center}
which is a contradiction.
So $v_2$ must be a suffix of $u_{n+2}$. So we have
\begin{center}
$B(v_2)\leq B(u_{n+2}) \leq B(u_1u_2\ldots u_nu_{n+2})=B(v_2)$
\end{center}
So that $B(u_1u_2\ldots u_nu_{n+2})=B(u_{n+2})$. But this implies that $u_1u_2\ldots u_n$ is a suffix of $u_{n+1}$.
\end{proof}
\section{The example of the Zimin word}
In this section we study the Zimin word, an infinite word over an infinite alphabet; no prior knowledge of this word is required. We build a coloring with 2 colors answering the conjecture. At the end of this section we explain how to adapt these results to the period-doubling word, which is a word over the alphabet $\{0,1\}$. At the time of writing, these are the only fully complete examples with 2 colors available.
\begin{defi}
Define the Zimin word $Z$ as the infinite word over the infinite alphabet $\mathcal{A}_x=\{x_1, x_2, x_3, \ldots \}$ by any of the three equivalent definitions:
\begin{enumerate}
\item $Z=\lim Z_n$ where $Z_1=x_1$, and $Z_{n+1}=Z_nx_{n+1}Z_n$ for $n\geq 1$
\item $Z=\displaystyle \prod_{n\geq 1}x_{\mathrm{val}_2(n)+1}$ where $\mathrm{val}_2$ is the $2$-adic valuation.
\item $Z$ is the fixed point of the morphism $\varphi : x_i \mapsto x_1x_{i+1}$ $(i\geq 1)$
\end{enumerate}
so that
\begin{center}
$Z=x_1x_2x_1x_3x_1x_2x_1x_4x_1x_2x_1x_3x_1x_2x_1x_5x_1x_2x_1x_3\ldots$
\end{center}
\end{defi}
We leave the proof of the equivalence of these definitions to the reader, and will use the first one in proofs.
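The equivalence of the three definitions is easy to check numerically on finite prefixes. The Python sketch below is our own illustration (the letter $x_i$ is encoded as the integer $i$); it implements all three and can be used to compare prefixes.

```python
def zimin_recursive(n):
    """Z_n via Z_1 = x_1 and Z_{k+1} = Z_k x_{k+1} Z_k."""
    Z = [1]
    for k in range(2, n + 1):
        Z = Z + [k] + Z
    return Z

def val2(n):
    """2-adic valuation of n."""
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

def zimin_valuation(length):
    """Prefix of Z of the given length via Z = prod_{n >= 1} x_{val2(n)+1}."""
    return [val2(n) + 1 for n in range(1, length + 1)]

def zimin_morphism(iters):
    """Iterate the morphism x_i -> x_1 x_{i+1} starting from x_1."""
    Z = [1]
    for _ in range(iters):
        Z = [c for i in Z for c in (1, i + 1)]
    return Z
```

For instance, $Z_4$ has length $2^4-1=15$ and agrees with the valuation formula, while three iterations of the morphism produce the prefix of length $8$.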
For a factor $u$ of $Z$, set
\begin{center}
$k(u)=\max\{k\geq 0 \ |\ x_k \text{ appears in }u \}$,
\end{center}
and notice that for $k\geq 1$, $k=k(u)$ if and only if $u$ is a factor of $Z_k$ but not a factor of $Z_{k-1}$. Since the letter $x_k$ appears only once in $Z_k$, the letter $x_{k(u)}$ appears only once in the factor $u$. More generally, between two occurrences of a letter $x_k$ there must be a letter $x_l$ with $l>k$.
\begin{defi}
Define the two sequences of words $(u_n)_{n\geq 1}$ and $(v_n)_{n\geq 1}$ by $u_1=v_1=x_1$ and for $n\geq 1$ :
\begin{center}
$u_{n+1}=x_{n+1}u_1u_2\ldots u_n$ \quad and \quad $v_{n+1}=v_nv_{n-1}\ldots v_1x_{n+1}$.
\end{center}
\end{defi}
For $A,B \subset \mathbb{N}$ two finite sets, let
\begin{center}
$\displaystyle u_A=\prod_{i\in A}^{\rightarrow}u_i=u_{i_1}u_{i_2}\cdots u_{i_k}$ \ where \ $A=\{i_1<\ldots<i_k\}$
\end{center}
and
\begin{center}
$\displaystyle v_B=\prod_{j\in B}^{\leftarrow}v_j=v_{j_k}v_{j_{k-1}}\cdots v_{j_1}$ \ where \ $B=\{j_1<\ldots<j_k\}$.
\end{center}
\begin{prop}
Let $n\geq 1$. Then
\begin{enumerate}
\item The proper suffixes of $u_n$ are exactly the words $u_A$ where $A\subset [1,n[$.
\item For all $A\subset [1,n[$, $Z_{n-1}=v_{[1,n[\backslash A}u_A$ \ and \ $u_n=x_nv_{[1,n[\backslash A}u_A$
\end{enumerate}
\end{prop}
\begin{proof}
From the relation $|Z_{n+1}|=1+2|Z_n|$ and $|Z_1|=1$ one derives $|Z_n|=2^n-1$. Similarly we get $|u_n|=2^{n-1}$. It is clear that each $u_A$ for $A\subset [1,n[$ is a proper suffix of $u_n$. Since $|u_A|=\sum_{i\in A} 2^{i-1}$, we see that $u_A$ is characterized by its length, and that every length is obtained in this way, proving the statement.
For the second relation, notice that $u_A$ is the reversal of $v_A$, and use the fact that $Z_n$ is a palindrome and $u_n$ is the suffix of length $2^{n-1}$ of $Z_n$.
\end{proof}
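These identities are easy to verify numerically for small $n$. The Python sketch below is our own illustration (letters encoded as integers, as above); it builds $u_k$ and $v_k$, and checks, for example, that $u_{\{1,3\}}$ and $u_{\{2,3\}}$ are suffixes of $u_4$ and that $v_{\{1,3\}}u_{\{2\}}=Z_3$.

```python
def build_uv(n):
    """u_1 = v_1 = x_1; u_{k+1} = x_{k+1} u_1...u_k; v_{k+1} = v_k...v_1 x_{k+1}.
    The letter x_i is encoded as the integer i."""
    u, v = [[1]], [[1]]
    for k in range(2, n + 1):
        u.append([k] + [c for w in u for c in w])
        v.append([c for w in reversed(v) for c in w] + [k])
    return u, v

def u_A(u, A):
    """u_A: concatenation of u_i over i in A, in increasing order (1-indexed)."""
    return [c for i in sorted(A) for c in u[i - 1]]

def v_B(v, B):
    """v_B: concatenation of v_j over j in B, in decreasing order (1-indexed)."""
    return [c for j in sorted(B, reverse=True) for c in v[j - 1]]

u, v = build_uv(4)
```

Note that $|u_k|=2^{k-1}$, so the suffix of $u_4$ of the same length as $u_A$ must be $u_A$ itself, as in the proof.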
\begin{prop}
Every factor $u$ of $Z$ can be written uniquely in the form:
\begin{center}
$u=u_Ax_{k(u)}v_B$
\end{center}
with $A,B\subset [1,k(u)[$.
\end{prop}
\begin{proof}
Uniqueness is clear by consideration of lengths. For existence, write $Z_{k(u)}=\lambda u \rho$ and use the previous proposition.
\end{proof}
\begin{prop}
Let $u,v$ be two factors of $Z$ such that $k(u)<k(v)$, and write :
\begin{center}
$u=u_{A_1}x_{k(u)}v_{B_1}$ \quad and \quad $v=u_{A_2}x_{k(v)}v_{B_2}$
\end{center}
with $A_1,B_1 \subset [1,k(u)[$ and $A_2,B_2 \subset [1,k(v)[$. Then :
\begin{enumerate}
\item $uv$ is a factor of $Z$ if and only if $k(u)\notin A_2$ and $B_1=[1,k(u)[\backslash ( A_2\cap [1,k(u)[ )$.
\item if $uv$ is a factor of $Z$ then
\begin{center}
$uv=u_{A_1\cup \{k(u)\}\cup (A_2\cap ]k(u),k(v)[)}x_{k(v)}v_{B_2}$
\end{center}
\item $u$ is a suffix of $v$ if and only if $k(u)\in B_2$ and $B_2\cap [1,k(u)[=B_1$
\end{enumerate}
\end{prop}
\begin{proof}
Assume that $uv$ is a factor of $Z$. Then we must have $k(u)\notin A_2$, for otherwise the factor $uv$ of $Z$ would contain two occurrences of $x_{k(u)}$ with no letter $x_l$, $l>k(u)$, between them. Now the recursive definition of $Z$ shows that each occurrence of a letter $x_k$ is followed by the word $Z_{k-1}$. This shows that $x_{k(u)}v_{B_1}u_{A_2\cap [1,k(u)[}=u_{\{k(u)\}}$, which implies that $B_1=[1,k(u)[\backslash ( A_2\cap [1,k(u)[ )$. Conversely, if $B_1=[1,k(u)[\backslash ( A_2\cap [1,k(u)[ )$, then $x_{k(u)}v_{B_1}u_{A_2\cap [1,k(u)[}=u_{\{k(u)\}}$ and we have:
\begin{center}
$uv=u_{A_1}u_{\{k(u)\}}u_{A_2 \cap ]k(u),k(v)[}x_{k(v)}v_{B_2}$
\end{center}
showing at once that $uv$ is a factor of $Z$ and the desired formula.
Since $k(u)<k(v)$, $u$ is a suffix of $v$ if and only if $u$ is a suffix of $v_{B_2}$. It is clear that $u$ is a suffix of $v_{\{k(u)\}}v_{B_1}$, so that it is enough to prove the property assuming $u=v_{\{k(u)\}}v_{B_1}$. Since $v_n$ ends with the letter $x_n$, $v_{\{k(u)\}}v_{B_1}$ is a suffix of $v_{B_2}$ if and only if it is a suffix of $v_{B_2\cap [1,k(u)]}$, and this shows that $k(u)\in B_2\cap [1,k(u)]$. A similar induction shows that $B_1\subset B_2\cap [1,k(u)[$. If $k\in B_2\cap [1,k(u)[$ and $k\notin B_1$, then the letter $x_k$ has two occurrences in $v$ without a letter $x_l$, $l>k$, between them, a contradiction.
\end{proof}
\begin{coro}
Let $u,v$ and $w$ be three factors of $Z$ with $k(u)<k(v)<k(w)$. Then:
\begin{enumerate}
\item if $uv$ and $vw$ are factors of $Z$, then $uvw$ is a factor of $Z$.
\item if $uw$ and $vw$ are factors of $Z$, then $u$ is a suffix of $v$.
\end{enumerate}
\end{coro}
\begin{proof}
Write $u=u_{A_1}x_{k(u)}v_{B_1}$, $v=u_{A_2}x_{k(v)}v_{B_2}$ and $w=u_{A_3}x_{k(w)}v_{B_3}$.
From the formula obtained for $uv$, and the fact that $vw$ being a factor of $Z$ relies only on relations between $k(v)$, $B_2$ and $A_3$, we see that, assuming $uv$ is a factor of $Z$, the word $vw$ is a factor of $Z$ if and only if $uvw$ is a factor of $Z$. Moreover we have:
\begin{center}
$uvw=u_{A_1\cup \{k(u)\}\cup (A_2\cap ]k(u),k(v)[)\cup \{k(v)\}\cup (A_3\cap ]k(v),k(w)[)}x_{k(w)}v_{B_3}$.
\end{center}
The second statement is similar: since $uw$ is a factor of $Z$, we have $k(u)\notin A_3$, and since $B_2=[1,k(v)[\backslash (A_3 \cap [1,k(v)[)$, it follows that $k(u)\in B_2$. Moreover,
\begin{center}
$B_2\cap [1,k(u)[=[1,k(u)[\backslash A_3\cap [1,k(u)[= B_1$
\end{center}
showing that $u$ is a suffix of $v$.
\end{proof}
Recall that if $A, B$ are finite subsets of $\mathbb{N}$, we write $A<B$ if $\max A < \min B$. We say that $A\subset\mathbb{N}$ is an interval if there are $k\leq l\in\mathbb{N}$ such that $A=[k,l]$.
\begin{lem}
Let $(A_n)_{n\geq 1}$ be a sequence of finite subsets of $\mathbb{N}^*$ with $A_n<A_{n+1}$ for all $n\geq 1$. Then the infinite word
\begin{center}
$Y=\displaystyle \prod_{n\geq 0}u_{A_n}$
\end{center}
is a suffix of $Z$ if and only if $\exists N\in\mathbb{N}, \forall n\geq N$, $A_n$ is an interval and $\min A_{n+1} = 1+\max A_n$.
\end{lem}
\begin{proof}
Notice first that
\begin{center}
$Z=\displaystyle \prod_{i\geq 1} u_i$.
\end{center}
If $M\subset \mathbb{N}^*$ is an infinite set, then the infinite word
\begin{center}
$Y_M=\displaystyle \prod_{m\in M} u_m$
\end{center}
belongs to $\Omega(Z)$. Conversely, for $Y\in \Omega(Z)$, there exists $M\subset \mathbb{N}^*$ such that $Y=Y_M$. To see this, build $M=\{m_1<m_2<\ldots\}$ as follows. Let $x_{m_1}$ be the first letter of $Y$. By the recursive definition of $Z$ we see that $u_{m_1}$ is a prefix of $Y$. Let $Y_1=Y$ and let $Y_2$ be the suffix of $Y$ starting where the prefix $u_{m_1}$ of $Y_1$ ends. If $x_{m_2}$ is the first letter of $Y_2$, we must have $m_2>m_1$, and $u_{m_2}$ is a prefix of $Y_2$, so that $u_{m_1}u_{m_2}$ is a prefix of $Y$. If $u_{m_1}u_{m_2}\ldots u_{m_k}$ is a prefix of $Y$ with $m_1<m_2<\ldots<m_k$ and $Y_{k+1}$ is defined as the suffix of $Y$ starting where $u_{m_1}u_{m_2}\ldots u_{m_k}$ ends, then let $m_{k+1}$ be such that $Y_{k+1}$ starts with the letter $x_{m_{k+1}}$. We must have $m_{k+1}>m_k$, and $u_{m_{k+1}}$ is a prefix of $Y_{k+1}$, so that $u_{m_1}u_{m_2}\ldots u_{m_k}u_{m_{k+1}}$ is a prefix of $Y$. The infinite set $M=\{m_1<m_2<m_3<\ldots\}$ defined this way is such that $Y=Y_M$. Moreover, this construction shows that $M$ is uniquely determined by $Y$.
Since every suffix of $Z$ is of the form $Y_M$ for some infinite $M\subset \mathbb{N}^*$ such that $\exists a\in \mathbb{N}^*$ with $[a,+\infty[\subset M \subset \mathbb{N}^*$, we see that if
\begin{center}
$Y=\displaystyle \prod_{n\geq 0}u_{A_n}$
\end{center}
with $A_n<A_{n+1}$ for all $n\geq 1$, we must have
\begin{center}
$\exists a\in \mathbb{N}^*$, \ such that \ $[a,+\infty[\subset \displaystyle \bigcup_{n\geq 1} A_n \subset \mathbb{N}^*$
\end{center}
proving the lemma.
\end{proof}
\begin{defi}
Let $u=u_Ax_{k(u)}v_B$ be a factor of $Z$, and set $\eta(u)=\max([1,k(u)[\backslash A)$, with the convention that $\eta(u)=0$ if $A=[1,k(u)[$. Let $C$ be the coloring defined by :
\begin{itemize}
\item $C(u)=\textbf{red}$ if $A\cap [1,\eta(u)[=[1,\eta(u)[\backslash(B\cap [1,\eta(u)[)$,
\item $C(u)=\textbf{blue}$ otherwise.
\end{itemize}
If $u$ is not a factor of $Z$, then we set $C(u)=C_{NF}(u)$ where the coloring $C_{NF}$ is defined in section $2$ with $x=Z$.
\end{defi}
\begin{prop}
Let $A\subset \mathbb{N}$ be a finite subset. Then $u_A$ is \textbf{red} if and only if $A$ is an interval.
\end{prop}
\begin{proof}
Let $k=k(u_A)=\max A$, and set $A_0=A\backslash \{k\}$. We have
\begin{center}
$u_A=u_{A_0}x_{k}v_{[1,k[}$
\end{center}
so that $u_A$ is \textbf{red} if and only if $A_0\cap[1,\eta(u_A)[=\emptyset$. But $\eta(u_A)=\max([1,k[\backslash A_0)$, so that $A_0\cap[1,\eta(u_A)[=\emptyset$ if and only if $A_0=]\eta(u_A),k[$, meaning that $A=]\eta(u_A),k]$ is an interval.
\end{proof}
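The characterisation given by this proposition can be checked exhaustively on small sets. The following is a minimal sketch (an illustration only; it uses the reduction of the \textbf{red} condition to $A_0\cap[1,\eta[\,=\emptyset$ established in the proof, with $[1,k[$ read as $\{1,\ldots,k-1\}$):

```python
from itertools import combinations

def is_red(A):
    """Color of u_A, reduced as in the proof: write u_A = u_{A0} x_k v_{[1,k[}
    with A0 = A \ {max A}; then u_A is red iff A0 has no element below eta."""
    k = max(A)
    A0 = A - {k}
    rest = set(range(1, k)) - A0          # [1,k[ \ A0
    eta = max(rest) if rest else 0        # convention: eta = 0 if A0 = [1,k[
    return not (A0 & set(range(1, eta)))  # A0 intersect [1,eta[ is empty

def is_interval(A):
    """A finite set of integers is an interval iff it has no gaps."""
    return max(A) - min(A) + 1 == len(A)

# exhaustive check over all nonempty subsets of {1,...,7}
for r in range(1, 8):
    for A in combinations(range(1, 8), r):
        assert is_red(set(A)) == is_interval(set(A))
```

The exhaustive loop passes, in agreement with the proposition.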
\begin{theo}
The Zimin word $Z$ admits no suffix $Y$ having a super-monochromatic factorisation.
\end{theo}
\begin{proof}
Assume by contradiction that there exists a suffix $Y$ of $Z$ having a super-monochromatic factorisation
\begin{center}
$Y=\displaystyle \prod_{n\geq 1}w_n$.
\end{center}
We may assume that $k(w_n)+2\leq k(w_{n+1})$ for all $n\geq 1$.
By proposition 5 we may assume that $\prod_{i\in A}w_i$ is a factor of $Z$ for all $A\subset \mathbb{N}^*$ finite. Let $n\geq 1$, and consider the factor $w_1w_2\ldots w_nw_{n+2}$ of $Z$. Since $w_{n+1}w_{n+2}$ is a factor of $Z$, we have by proposition 9 that $w_1w_2\ldots w_n$ is a suffix of $w_{n+1}$.
Write
\begin{center}
$w_n=u_{A_n}x_{k(w_n)}v_{B_n}$
\end{center}
for all $n\geq 1$. We have
\begin{center}
$Y=\displaystyle \prod_{n\geq1}u_{A_n}x_{k(w_n)}v_{B_n}=u_{A_1}\prod_{n\geq1}x_{k(w_n)}v_{B_n}u_{A_{n+1}}$
\bigskip
$\displaystyle Y=u_{A_1}\prod_{n\geq1}u_{\{k(w_n)\}\cup (A_{n+1}\cap]k(w_n),k(w_{n+1})[)}$.
\end{center}
By the Lemma, the sets $\{k(w_n)\}\cup (A_{n+1}\cap]k(w_n),k(w_{n+1})[)$ become intervals for large $n$, so there exists $N\geq 1$ such that for all $n\geq N$,
\begin{center}
$\{k(w_n)\}\cup (A_{n+1}\cap]k(w_n),k(w_{n+1})[)=[k(w_n),k(w_{n+1})[$
\end{center}
and $\emptyset \neq ]k(w_n),k(w_{n+1})[ \subset A_{n+1}$.
But the fact that $w_nw_{n+1}$ is a factor of $Z$ implies that $k(w_n)\notin A_{n+1}$, and all this shows that
\begin{center}
$\eta(w_{n+1})=k(w_n)$.
\end{center}
Now, since $w_n$ is a suffix of $w_{n+1}$, we have $B_{n+1}\cap[1,k(w_n)[=B_n$, and since $w_nw_{n+1}$ is a factor of $Z$, we have
\begin{center}
$A_{n+1}\cap [1,k(w_n)[ = [1,k(w_n)[ \backslash B_n = [1,k(w_n)[ \backslash (B_{n+1}\cap[1,k(w_n)[)$
\end{center}
and with the fact that $\eta(w_{n+1})=k(w_n)$ we see that $w_{n+1}$ is \textbf{red}. Thus the factorisation is super-monochromatic with respect to the color \textbf{red}.
This implies that $w_nw_{n+2}$ is \textbf{red}. We have
\begin{center}
$w_nw_{n+2}= u_{A_n\cup\{k(w_n)\}\cup(A_{n+2}\cap]k(w_n),k(w_{n+2})[)}x_{k(w_{n+2})}v_{B_{n+2}}$.
\end{center}
But $]k(w_{n+1}),k(w_{n+2})[ \subset A_{n+2}$ and $k(w_{n+1})\notin A_{n+2}$, so that $\eta(w_nw_{n+2})=k(w_{n+1})$. By the red condition, we have
\begin{align*}
& \ A_n\cup\{k(w_n)\}\cup(A_{n+2}\cap]k(w_n),k(w_{n+1})[) \\
=& \ [1,k(w_{n+1})[\backslash (B_{n+2}\cap[1,k(w_{n+1})[) \\
=& \ [1,k(w_{n+1})[\backslash B_{n+1} \\
=& \ A_{n+2}\cap [1,k(w_{n+1})[
\end{align*}
and we see that $k(w_n)\in A_{n+2}$. But since $w_nw_{n+2}$ is a factor of $Z$, $k(w_n)\notin A_{n+2}$, which is a contradiction.
\end{proof}
We end this section by producing a coloring for the doubling-period word $D$, with two colors, such that $D$ admits no suffix having a super-monochromatic factorisation. We mention \cite{damanik} for a computation of squares in the doubling-period word.
\begin{defi}
The doubling-period word $D$ is the infinite word over the alphabet $\mathcal{A}=\{0,1\}$ defined as $D=\psi(Z)$, where $\psi$ is the morphism $\mathcal{A}_x^+ \rightarrow \mathcal{A}^+$ defined by $\psi(x_n)=0$ if $n\geq 1$ is odd, and $\psi(x_n)=1$ if $n$ is even. We have:
\begin{center}
$D=01000101010001000100010\ldots$
\end{center}
\end{defi}
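As a sanity check, the displayed prefix of $D$ can be reproduced by applying $\psi$ to a prefix of $Z$; the sketch below assumes the standard recursive definition $Z_1=x_1$, $Z_k=Z_{k-1}x_kZ_{k-1}$ of the Zimin word:

```python
def zimin(n):
    """Prefix Z_n of the Zimin word, as a list of letter indices:
    Z_1 = x_1 and Z_k = Z_{k-1} x_k Z_{k-1}, so |Z_n| = 2^n - 1."""
    Z = [1]
    for k in range(2, n + 1):
        Z = Z + [k] + Z
    return Z

def psi(letters):
    """The morphism psi: x_n -> '0' if n is odd, '1' if n is even."""
    return ''.join('0' if n % 2 == 1 else '1' for n in letters)

D = psi(zimin(5))   # |Z_5| = 31 letters
print(D[:23])       # 01000101010001000100010
```

The printed prefix matches the one displayed in the definition.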
For a factor $u$ of $D$, define the sets $\psi^{-1}(u)=\{V \ \text{ factor of } Z | \ \psi(V)=u\}$ and $\psi_k^{-1}(u)=\{V \in \psi^{-1}(u) \ |\ k(V)=k \}$.
Let $W(u)$ be the element $V$ of $\psi^{-1}(u)$ such that $A_Z(V)$ is minimal. By minimality and existence, we have $A_Z(W(u))=A(u)$.
Let $C_Z$ be the coloring answering the conjecture for the Zimin word. We define the coloring $C$ on the set of factors of $D$ by
\begin{center}
$C(u)=C_Z(W(u))$
\end{center}
\begin{theo}
No suffix of $D$ admits a super-monochromatic factorisation for the coloring defined above.
\end{theo}
\begin{proof}
Let $k\geq 0$ and
\begin{center}
$y=T^k(D)=u_1u_2u_3\ldots$
\end{center}
be a suffix of $D$ together with a super-monochromatic factorisation, such that $k=A_D(u_1)=A_Z(W(u_1))$ and $B(u_i)=A(u_{i+1})$ for all $i\geq 1$.
Let $u$ be a factor of $D$ such that $A(u)=k$. Write $W(u)=u_Ax_{k(W(u))}v_B$ as a factor of $Z$. Assume that the three letters $x_{k(W(u))-2}$, $x_{k(W(u))-1}$ and $x_{k(W(u))}$ all appear in $W(u)$, in this order of apparition. If $V_1$ and $V_2$ are two factors of $Z$ such that $\psi(V_1)=\psi(V_2)$, then the occurrences of the letter $x_1$ in these words coincide. Indeed, $|W(u)|\geq 3$, and it is easily seen that this inequality is optimal in order to locate the possible positions of $x_1$ in $V_1$ and $V_2$. We can then erase the letters $x_1$ from $V_1$ and $V_2$ and proceed by induction to see that each occurrence of the letters $x_l$ with $l\leq k(W(u))-2$ is uniquely determined in $V_1$ and $V_2$. Hence the words $V_1$ and $V_2$ are equal to $W(u)$ up to the occurrences of the letters $x_{k(W(u))-1}$ and $x_{k(W(u))}$. But these two letters have different images under $\psi$, and since between two letters $x_l$ and $x_k$ all letters $x_j$ with $j< l$ occur, we see that $V_1$ and $W(u)$ are equal up to the letter $x_{k(W(u))}$. This means that if
\begin{center}
$W(u)=u_Ax_{k(W(u))}v_B$
\end{center}
then every $V\in\psi^{-1}(u)$ is of the form $V=u_Ax_{k(W(u))+2m}v_B$ for some $m\geq 0$.
Now consider the factor $W(u_iu_j)$ of $Z$ with $i\leq j-2$. Write $W(u_iu_j)=V_1V_2$ with $\psi(V_1)=u_i$ and $\psi(V_2)=u_j$. Assume that in $W(u_i)$ the three letters $x_{k(W(u_i))-2}$, $x_{k(W(u_i))-1}$ and $x_{k(W(u_i))}$ appear, in this order of apparition. Write
\begin{center}
$V_1=u_{A_1}x_{k(W(u_i))+2m}v_{B_1}$ and $V_2=u_{A_2}x_{k(W(u_j))+2m'}v_{B_2}$
\end{center}
Since $V_1V_2$ is a factor of $Z$, every letter $x_j$ with $j\leq \min\{k(W(u_i))+2m , k(W(u_j))+2m'\}$ must appear between the two letters $x_{k(W(u_i))+2m}$ and $x_{k(W(u_j))+2m'}$, so we must have $m=m'=0$. This shows that $V_1=W(u_i)$ and $V_2=W(u_j)$, so that $W(u_iu_j)=W(u_i)W(u_j)$.
This shows inductively that $W(u_{n_1}u_{n_2}\ldots u_{n_k})=W(u_{n_1})W(u_{n_2})\ldots W(u_{n_k})$ for all $k \geq 1, n_1<n_2<\ldots < n_k$. But this implies that $Z$ has a super-monochromatic factorisation for the coloring $C_Z$, leading to a contradiction.
\end{proof}
\section{Consecutive length}
Let $x$ be a non-ultimately periodic word. In this section, we introduce and study the consecutive length $L(u)$ of a factor $u$ of $x$.
Let $u$ be a factor of $x$. A decomposition $u=v_1v_2\ldots v_l$ with $l\geq 1$ terms is said to be consecutive if
\begin{center}
$A(v_1)=A(u)$, $B(v_l)=B(u)$ \ and \ $\forall i=1\ldots l-1, \ B(v_i)=A(v_{i+1})$.
\end{center}
Define the consecutive length $L(u)$ of a factor $u$ of $x$ as :
\begin{center}
$L(u)=\max\{ l \ | \ u \text{ admits a consecutive decomposition with } l \text{ terms} \}$
\end{center}
A factor $v$ of $x$ is said to be irreducible if $L(v)=1$. A consecutive decomposition is said to be irreducible if each of its terms is irreducible.
\begin{prop}
A consecutive decomposition of $u$ with $L(u)$ terms is irreducible.
\end{prop}
\begin{proof}
In such a decomposition $u=v_1v_2\ldots v_{L(u)}$, we must have $L(v_i)=1$ for all $i=1\ldots L(u)$ by maximality of the value of $L(u)$. Notice also that $L(v_i\ldots v_j)=j-i+1$ for all $i\leq j$.
\end{proof}
\begin{prop}
Let $u,v$ be two factors of $x$ with $B(u)=A(v)$. Then
\begin{center}
$L(u)+L(v) \leq L(uv) \leq L(u)+L(v)+1$.
\end{center}
\end{prop}
\begin{proof}
Let $u=u_1u_2\ldots u_{L(u)}$ and $v=v_1v_2\ldots v_{L(v)}$ be maximal decompositions of $u$ and $v$. Then $uv=u_1u_2\ldots u_{L(u)}v_1v_2\ldots v_{L(v)}$ is a consecutive decomposition, proving the first inequality.
On the other hand, let $uv=w_1w_2\ldots w_{L(uv)}$ be a maximal decomposition of $uv$. Let $i$ be such that $w_1w_2\ldots w_i$ is a prefix of $u$ and $w_{i+2}\ldots w_{L(uv)}$ is a suffix of $v$. Then we have $i\leq L(u)$ and $L(uv)-i-1\leq L(v)$ by the definitions of $L(u)$ and $L(v)$, so that $L(uv)-1\leq L(u)+L(v)$, proving the second inequality.
\end{proof}
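The two inequalities can be exercised with a toy instantiation of the maps (an assumption purely for illustration: $A(u)$ is taken to be the first letter of $u$ and $B(u)$ its last letter, which is not the definition of $A$ and $B$ used in this paper):

```python
from functools import lru_cache

# Toy oracles (assumption): A(u) = first letter, B(u) = last letter.
# A decomposition u = v_1 ... v_l is then consecutive iff the last
# letter of each v_i equals the first letter of v_{i+1}.

@lru_cache(maxsize=None)
def L(u):
    """Consecutive length: max number of terms in a consecutive decomposition."""
    best = 1
    for i in range(1, len(u)):
        if u[i - 1] == u[i]:               # B(u[:i]) == A(u[i:])
            best = max(best, L(u[:i]) + L(u[i:]))
    return best

u, v = "abba", "aab"                       # here B(u) == A(v) == 'a'
assert L(u) + L(v) <= L(u + v) <= L(u) + L(v) + 1
```

With this toy choice the upper bound is attained whenever $B(u)=A(v)$, since the junction point itself becomes an extra admissible split.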
\begin{prop}
Let $k\geq 0$. Then for all $l\geq 1$, there exists a factor $u$ of $x$ with $A(u)=k$ and $L(u)=l$.
\end{prop}
\begin{proof}
We first show that the result is true for arbitrarily large $l\geq 1$. For any factorisation $T^k(x)=u_1u_2u_3\ldots$, if the $(u_i)$'s are long enough then this factorisation is consecutive. This shows that $\lim_i L(u_1u_2\ldots u_i) = +\infty$.
For the remaining $l\geq 1$, let $u$ be a factor of $x$ with $L(u)\geq l$ and $A(u)=k$. Let $u=v_1v_2\ldots v_{L(u)}$ be a maximal consecutive decomposition of $u$. Then $L(v_1v_2\ldots v_l)=l$, proving the statement.
\end{proof}
\section{The case of words without arbitrarily large squares}
In this section we use the consecutive length to provide a coloring answering the conjecture for infinite words not containing arbitrarily large squares. We mention the construction in \cite{shallit} of a cube-free word over a 2-letter alphabet not containing arbitrarily large squares.
\bigskip
Let $u$ be a factor of $x$. Define the four sets :
\begin{itemize}
\item $\lambda_+(u)=\{ v \text{ irreducible with }B(vu)=B(u) \ | \ B(v)=A(u) \}$
\item $\lambda_-(u)=\{ v \text{ irreducible with }B(vu)=B(u) \ | \ B(v)<A(u) \}$
\item $\rho_+(u)=\{ v \text{ irreducible suffix of }u \ | \ B(v)=A(u) \}$
\item $\rho_-(u)=\{ v \text{ irreducible suffix of }u \ | \ B(v)<A(u) \}$
\end{itemize}
\bigskip
We have $\lambda_+(u)\neq \emptyset$ and $\rho_+(u)\neq \emptyset$, by the properties of the consecutive length.
\bigskip
We now proceed to the definition of the coloring. We use a third color, which we do not mention explicitly, to provide the suffix hypothesis. Namely, for a factor $u$ of $x$, $L(u)\geq 2$ if and only if there exist two factors $v,w$ of $x$ with $u=vw$, $A(u)=A(v)$ and $B(u)=B(w)$.
\bigskip
Define the coloring $C$ for a factor $u$ of $x$ with $L(u)\geq 2$ as:
\begin{itemize}
\item $C(u)=\textbf{red}$ \ if $\lambda_+(u)=\rho_+(u)$ and $\lambda_-(u)=\rho_-(u)$,
\item $C(u)=\textbf{blue}$ \ otherwise.
\end{itemize}
\begin{theo}
Assume that $x$ contains no arbitrarily large squares. Then no suffix of $x$ admits a super-monochromatic factorisation for the 3-coloring defined above.
\end{theo}
\begin{proof}
Let $Y=T^k(x)=u_1u_2u_3\ldots$ be a suffix of $x$ together with a super-monochromatic factorisation. We have, by the suffix hypothesis and the properties of the consecutive length:
\begin{center}
$\forall n\geq 1$, \ $u_1u_2\ldots u_{n}$ is a suffix of $u_{n+1}$, \\
$\forall n\geq 1$, \ $B(u_n)=A(u_{n+1})$.
\end{center}
We show that $u_{n+1}$ is \textbf{red}, beginning with the inclusion $\lambda_+(u_{n+1}) \subset \rho_+(u_{n+1})$.
Let $v\in \lambda_+(u_{n+1})$ so that $L(v)=1$, $B(vu_{n+1})= B(u_{n+1})$ and $B(v)=A(u_{n+1})$.
We have \begin{center}
$B(vu_{n+1})= B(u_{n+1}) \Longrightarrow$ $\left\{\begin{matrix} & v \text{ is a suffix of } u_n \\ & \text{or } u_n \text{ is a suffix of }v\end{matrix}\right.$
\end{center}
Since $v$ is irreducible, we see that $v$ must be a suffix of $u_n$. The suffix property implies that $v$ is a suffix of $u_{n+1}$, and since $B(v)=A(u_{n+1})$, we have $v\in \rho_+(u_{n+1})$.
By similar arguments we obtain the other inclusions: $\rho_+(u_{n+1}) \subset \lambda_+(u_{n+1})$, \ $\lambda_-(u_{n+1}) \subset \rho_-(u_{n+1})$, \ and $\rho_-(u_{n+1}) \subset \lambda_-(u_{n+1})$.
This shows that $u_{n+1}$ is \textbf{red}, so the factorisation is \textbf{red}.
By the monochromatic hypothesis, the word $u_nu_{n+2}$ is \textbf{red}. Let $v\in \lambda_+(u_nu_{n+2})$, so that $L(v)=1$ and $B(v)=A(u_nu_{n+2})$.
Since $u_nu_{n+2}$ is \textbf{red} and $v\in \lambda_+(u_nu_{n+2})$, we have $v\in \rho_+(u_nu_{n+2})$ (here $x=\ldots u_nu_{n+1}u_{n+2}\ldots$), so that $v$ is a suffix of $u_nu_{n+2}$. But obviously $|v|\leq |u_{n+1}|$, so by the suffix property, $v$ is a suffix of $u_{n+1}$, and hence $v\in \rho_-(u_{n+2})=\lambda_-(u_{n+2})$. But $A(v)> B(u_n)$ together with the suffix property imply that $u_n$ is a suffix of $v$. Now $vu_n$ is a suffix of $u_{n+1}$, so that $u_nu_n$ is a square factor of $x$; since $n$ is arbitrary, $x$ contains arbitrarily large squares, a contradiction.
\end{proof}
\section{Introduction}
The Very Energetic Radiation Imaging Telescope Array System (VERITAS) is composed of four 12\,m diameter imaging atmospheric Cherenkov telescopes (IACTs) sited at the Fred Lawrence Whipple Observatory (FLWO) in southern Arizona (USA) \cite{VERITAS}. Each dish is a tessellated reflector made up of 350 individual mirror facets in a Davies-Cotton configuration focused onto a 499-pixel photomultiplier tube camera for a total field of view of $3.5^\circ$.
The primary field of research of VERITAS is the ground-based detection of very high energy (VHE, E$\gtrsim$100\,GeV) gamma-rays under a seemingly overwhelming 1000:1 background of cosmic-rays. In both cases the primary very energetic particle will cause a particle cascade in the atmosphere known as an extensive air shower. As the air shower particles move through the atmosphere they in turn generate Cherenkov light which can be picked up and imaged by the telescopes on the ground. The cosmic ray background is reduced to approximately 1:1 through the stereo imaging technique, where cosmic rays will tend to have larger, blotchier, more variable light distribution patterns compared to the smooth, elliptical images expected in the camera for gamma rays. The cosmic ray showers will also result in a large number of muons being generated; muons that pass close to the optical axis of the telescope will cause ring-like images in the camera. As the muon path moves further from the centre of the camera, or as the direction of travel moves away from the optical axis, the images will only be partial rings, which can readily mimic the elliptical gamma-ray primary images, causing false-positive classifications. Muon rings, however, can also be useful: the amount of light expected in the ring is a well-known quantity and so they can be used to provide an absolute calibration of the camera digitisation. It is for these two reasons -- background event rejection and calibration event identification -- that we are interested in being able to pick out the muon ring images from the non-muon cosmic ray and gamma ray ones.
Classification is a common task in experimental physics, and with inherently large datasets it is easy to see why the use of machine learning algorithms has become increasingly popular. With a trigger rate of O(400\,Hz) and over 10,000 hours of observational data for the VERITAS telescopes it is not possible for any one person to sift through all of the data for events of interest. One powerful machine learning algorithm, the convolutional neural network (CNN), was used on a small batch of VHE gamma-ray data to detect and characterize muon events \cite{CNN}.
For developing supervised machine learning algorithms like the CNN, it is essential to assign correct labels to a large training dataset.
In this work, we describe Muon Hunter\footnote{\texttt{www.muonhunter.org}}, a citizen-science project where volunteers label and parameterize muon and non-muon images in VHE gamma-ray data to provide high purity machine learning training samples.
\section{Muon Hunter}
The Muon Hunter experiment was developed in collaboration with the ASTERICS Horizon2020 project\footnote{\texttt{www.asterics2020.eu}}, and the classification interface is hosted by the Zooniverse\footnote{\texttt{www.zooniverse.org}} platform, where researchers in many disciplines can easily design and deploy a citizen-science project through the Zooniverse Project Builder\footnote{\texttt{www.zooniverse.org/labs}}. The Zooniverse is the world's largest and most powerful platform for people-powered research. In the ten years since the launch of the first Zooniverse project, Galaxy Zoo~\cite{GalaxyZoo}, the platform has expanded to over 100 projects in a wide variety of research areas from the physical to the medical and social sciences, and liberal arts. Via the Zooniverse anyone can be a researcher; you do not need any specialised background, training, or expertise to participate (though training in the project task is given on the website). One further noteworthy aspect of the Zooniverse is that it is not simply an outreach tool, volunteers and professionals make real discoveries together -- many of which result in published research papers and open source data sets. The Zooniverse projects are constructed with the specific aim of converting volunteers' efforts into measurable results and scientifically significant discoveries \cite{GalaxyZoo, PlanetHunters, PhysicsToday, PlanetFour}.
The key components of a Zooniverse project are data and workflows. The workflows are a sequence of specific tasks tailored by the researchers to gain insights on the data. The workflows are streamlined and optimised to balance quality of experience for the volunteers with the need to obtain accurate results.
The workflow for identifying the presence of rings in an image is illustrated in Figure~\ref{fig:WorkFlow}. To help new users, a short tutorial, a mini course, and a detailed `About' page were available. Images were retired once a total of 15 volunteers had examined them and finished the workflow. A total of 137,515 VERITAS single-telescope images were served on the Muon Hunter website, with examples given in Figure~\ref{fig:MuonImages}. Most images were preprocessed using the standard VERITAS two-level cleaning \cite{VEGAS} based on the signal-to-noise ratio of each pixel, but a subset of uncleaned images was also uploaded to explore the effect of image cleaning on the results of classification. From 5,734 volunteers we received a total of $\sim$2.1~million classifications, half within the first week after the official launch of the project.
\begin{figure}
\begin{center}
\includegraphics[width=0.66\textwidth]{WorkFlow.png}
\end{center}
\caption{\label{fig:WorkFlow}The workflow for classifying images as muons or non-muons.}
\end{figure}
It was possible to draw multiple rings on one image, as sometimes more than one muon was recorded in an image. But this also allowed room for human error, especially when a user was unfamiliar with the workflow. After a user had completed the workflow of a subject image there was an option to discuss it in the Talk board. The Talk board enables interactions between experts and volunteers regarding specific images. Collections of interesting images are also established by users, including double muon rings and composite images with a muon ring and a background air shower, or particularly noteworthy, unexpected or unusual images that may be worthy of further investigation later.
\begin{figure}
\begin{center}
\includegraphics[width=0.25\textwidth]{run84628_evt567914_tel3_ring_comparison.pdf}
\includegraphics[width=0.25\textwidth]{run85061_evt1611_tel3_ring_comparison.pdf}
\includegraphics[width=0.25\textwidth]{run85061_evt8298_tel1_ring_comparison.pdf}
\end{center}
\caption{\label{fig:MuonImages}Example muon ring images. Standard analysis results are shown in magenta and Muon Hunter volunteer input in yellow.}
\end{figure}
Figure~\ref{fig:Classifications} summarizes the input we received from the volunteers of Muon Hunter. The distribution of the number of classifications each user made roughly follows a log-normal distribution (shown as the red dashed curve), with a median of 30 images per user. Assuming the number of classifications from one user is roughly proportional to the time spent, this log-normal distribution is of similar nature to the dwell time of internet users on social media articles \cite{MuonHunterICRC}. There are 16 volunteers who classified more than 10,000 images, while there are 724 volunteers who only classified one image.
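The quoted median can be read off directly from a log-normal fit: the fitted median is $e^{\mu}$, with $\mu$ the mean of the log-counts. A minimal sketch on synthetic per-user counts (the real Muon Hunter counts are not reproduced here; the numbers below are invented for illustration):

```python
import math

def lognormal_fit(counts):
    """Fit a log-normal by moments of log(counts); the fitted median is exp(mu)."""
    logs = [math.log(c) for c in counts]
    mu = sum(logs) / len(logs)
    var = sum((x - mu) ** 2 for x in logs) / len(logs)
    return mu, math.sqrt(var)

# synthetic per-user classification counts built around a median of 30
counts = [30 * math.exp(z) for z in (-1.2, -0.4, 0.0, 0.4, 1.2)]
mu, sigma = lognormal_fit(counts)
print(round(math.exp(mu)))   # 30  (fitted median)
```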
Of 134,000 images, 12\% had unanimous votes that a ring was present, 73\% had unanimous votes that a ring was not present, and the remainder were split. The 15 votes for each image allowed us to estimate the confidence of the users' classifications. The distribution of the fraction of votes for the presence of a ring in each image is shown in the right plot of Figure~\ref{fig:Classifications}.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{volunteer_distr_muon_hunter_lognormal_fit_v2.pdf}
\includegraphics[width=0.45\textwidth]{vote_distr_muon_hunter_v2.pdf}
\end{center}
\caption{\label{fig:Classifications}Left: the distribution of the number of classifications each user made. Right: distribution of votes for muon presence in the images.}
\end{figure}
The volunteers are also engaged through an active social media presence including an online blog\footnote{\texttt{muonhunterblog.wordpress.com} (to date 1301 reads from 55 countries)} giving details about the experiment from the researchers involved; Facebook\footnote{\texttt{www.facebook.com/muonhunters/ (to date 103 likes and 104 follows)}} and Twitter\footnote{\texttt{twitter.com/ZooniMuonHunter (to date 25 followers and 7 likes)}} feeds; and material like postcards for dispersal at public outreach events, e.g. school fairs, and the FLWO visitor center. The Muon Hunter Launch event was also advertised on Facebook for a week, the \$15 ad reached 9,299 people in 50 countries and received 80 event responses. The tracking aspects of these social media sites can also help us to get some insight into the demographics of the people we are reaching and engaging: for example as a function of gender and age group for our Facebook page, see Table~\ref{tab:Facebook}, we reached an audience composed close to parity of 47\% of women and 52\% men, but see that we engaged (with the Facebook page) only 35\% women and 64\% men. These insights can help us understand where we need to focus in the future to even out or encourage participation.
\begin{table}
\caption{\label{tab:Facebook}Muon Hunter Facebook page reach and engagement as a function of age group and gender.}
\begin{center}
\begin{tabular}{c|cc|cc}
\br
Age Group & \multicolumn{2}{c}{People Reached} & \multicolumn{2}{c}{People Engaged} \\
& Female & Male & Female & Male \\
\mr
13-17 & 20\% & 19\% & 23\% & 32\% \\
18-24 & 13\% & 17\% & 4\% & 14\% \\
25-34 & 7\% & 10\% & 5\% & 8\% \\
35-44 & 3\% & 3\% & 2\% & 5\% \\
45-54 & 2\% & 1\% & 0.629\% & 2\% \\
55-64 & 0.91\% & 0.836\% & 0.629\% & 2\% \\
65+ & 0.495\% & 0.579\% & 0\% & 0.629\% \\
\br
\end{tabular}
\end{center}
\end{table}
\section{CNN: training \& evaluation}
One purpose of this project is to train a reliable CNN model to classify muon rings.
Two sources of labels, one provided by the VERITAS analysis \cite{StandardAnalysis} and the other by the Muon Hunter user input, can be used for the training, validation, and testing of the models.
We randomly selected 16 observation runs, each 30 minutes in duration, and analysed them using one of the standard VERITAS data analysis packages, with the muon identification procedure outlined in \cite{MuonHunterICRC}. A detailed description of the CNN model can be found in \cite{CNN}.
Treating all images with 10 or more of the 15 votes for muons as muon events and the rest as non-muon events, we were able to train a CNN model from the Muon Hunter volunteers with a test accuracy of $\sim$97\%; comparing favourably to the VERITAS standard analysis labels test accuracy of $\sim$95\%.
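The vote-based labelling rule amounts to a simple threshold on the muon votes per image; a sketch with hypothetical vote counts (invented for illustration):

```python
def vote_labels(votes, n_voters=15, threshold=10):
    """Label an image as a muon iff at least `threshold` of the `n_voters`
    volunteers voted for the presence of a ring."""
    return [v >= threshold for v in votes]

def accuracy(predicted, truth):
    """Fraction of labels on which a classifier agrees with the reference."""
    return sum(p == t for p, t in zip(predicted, truth)) / len(truth)

votes = [15, 12, 10, 9, 3, 0]    # hypothetical muon votes per image
labels = vote_labels(votes)
print(labels)                    # [True, True, True, False, False, False]
```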
\section{Summary and future work}
Citizen science is a great resource for both outreach and practical science. Having a simple, clearly defined project task helped a lot with the success of the Muon Hunter project, for which we received a phenomenal response from our volunteers. The input from the volunteers not only helped us train a more efficient machine learning model and gain insight into where the standard analysis may be lacking; it also identified unexpected or unusual images that would be missed in a purely machine-processed context and which could, in turn, become the focus of future citizen-science projects.
\ack
The authors gratefully acknowledge all the Muon Hunter volunteers who contributed to this effort without whom this work would not be possible. Muon Hunters was developed with the help of the ASTERICS Horizon2020 project. ASTERICS is a project supported by the European Commission Framework Programme Horizon 2020 Research and Innovation action under grant agreement \textnumero 653477. VERITAS research is supported by grants from the U.S. Department of Energy Office of Science, the U.S. National Science Foundation and the Smithsonian Institution, and by NSERC in Canada. We acknowledge the excellent work of the technical support staff at the Fred Lawrence Whipple Observatory and at the collaborating institutions in the construction and operation of the instrument. The VERITAS Collaboration is grateful to Trevor Weekes for his seminal contributions and leadership in the field of VHE gamma-ray astrophysics, which made this study possible.
\section*{References}
\section{Introduction}
Since the issue of global warming and climate change, originating from human activities of massively mining and consuming fossil fuels, is one of the greatest challenges of our time~\cite{Paris}, clean energy technologies are becoming increasingly important. More and more people are interested in clean energy sources, such as wind and solar power, and in electric or hybrid electric vehicles. In such intermittent energy systems and electrified transport, electrochemical energy storage devices like rechargeable batteries are indispensable components. Lithium ion batteries (LIBs) have been dominating battery markets for portable electronic devices since their commercialization in 1991, due to their high specific capacity (low weight) and high energy density (low volume). However, the growing demand for LIBs and their introduction into vehicles have raised concerns about the low abundance and thus high cost of lithium resources~\cite{Tarascon10,Goodenough10}, together with other technical problems of safety, minimum charging time and cycle life~\cite{HZhang}. In recent years, therefore, numerous research studies have been made on alternatives with the same operational mechanism as LIBs, based on highly abundant and low cost elements such as sodium~\cite{JYHwang,Yabuuchi14,Sawicki} and potassium~\cite{Pramudita,BJi}, towards a large-scale energy storage device with a reasonable cost.
Electrodes (cathode and anode) are the key components that determine the performance of alkali ion batteries, where alkali cations (\ce{Li+}, \ce{Na+}, \ce{K+}) intercalate/de-intercalate into/from the electrodes during the charge/discharge process. The standard anode widely used in commercial alkali ion batteries is graphite, because of its well-layered structure, composed of two-dimensional (2D) graphene layers bound through the weak van der Waals (vdW) attraction, which allows easy formation of graphite intercalation compounds (GICs)~\cite{Dresselhaus}. As electron donors, alkali metals (AMs) can form donor-acceptor type AM-GICs with ionic bonding between the AM and the graphene layer, whose strength decreases with decreasing atomic number (increasing electropositivity) from Cs to Na, with the exception of Li due to the strong covalent bonding of Li$-$C~\cite{Moriwake,Nobuhara,ZWang}. From the theoretically calculated formation energies, it was found that for the stage 1 GICs the stability is in the order of \ce{KC6} $>$ \ce{LiC6} $>$ \ce{NaC6}, while \ce{NaC6} is thermodynamically unstable~\cite{Moriwake}. Moreover, the redox potential for K/\ce{K+} ($-2.93$ V) {\it vs.} the standard hydrogen electrode is lower than for Na/\ce{Na+} ($-2.71$ V) and close to Li/\ce{Li+} ($-3.04$ V), suggesting a higher cell voltage for the K-ion battery (KIB) compared to the Na-ion battery (NIB) and a similar value to the LIB. On the other hand, the larger radius ion was found to diffuse more smoothly in graphite, the activation barriers being in the order of Li $>$ Na $>$ K~\cite{Nobuhara,yucj06}. Together with the relatively high abundance, low cost and environmental friendliness of potassium, these findings indicate greater potential for KIBs than for NIBs and LIBs.
For KIB to become competitive with LIB in battery markets, however, a suitable cathode material with a high capacity and a high cell voltage should be developed. As for LIBs and NIBs, many kinds of materials, such as layered metal oxides~\cite{Vaalma,XRen} and polyanionic compounds~\cite{JHan,Recham}, have shown promising properties. Alternatively, based on the unique amphoteric character of graphite that can accommodate both cations (electron donor) and anions (acceptor), graphite was suggested to be used as the cathode host as well, namely in the dual-ion battery (DIB) or dual-graphite battery (DGB), where AM cations and complex anions simultaneously intercalate/de-intercalate into the graphite based anode and cathode during the charge/discharge process~\cite{YWang,Lebdeh,Carlin,Seel,Schmuelling,Beltrop2,Rothermel,Meister}. Here, the anions are from ionic liquids used as electrolyte in ion batteries, which are typically hexafluorophosphate (\ce{PF6-})~\cite{Seel}, perchlorate (\ce{ClO4-})~\cite{Santhanam}, fluorosulfonyl imide (FSI, \ce{[N(SO2F)2]-}), and bis(trifluoromethanesulfonyl) imide (TFSI, \ce{[N(SO2CF3)2]-})~\cite{Fujii,Umebayashi,Henderson,Henderson2,Bhatt,Siqueira,Borodin1,Borodin2,Nicotera}. Recently, Beltrop {\it et al.}~\cite{Beltrop1} reported a novel potassium-based DGB (K-DGB), composed of $N$-butyl-$N$-methyl-pyrrolidinium TFSI (\ce{Pyr14+TFSI-}) + 0.3 M potassium TFSI (\ce{K+TFSI-}) + 2 wt\% ethylene sulfite (ES) as electrolyte and graphite as both the anode and cathode hosts, showing a relatively high reversible capacity of $\sim$230 mAh g$^{-1}$ and a high potential range from 3.4 V to 5.0 V {\it vs.} K/\ce{K+}. This calls for revealing the formation mechanism and electrochemical properties of TFSI-GICs by theoretical and computational methods.
To the best of our knowledge, however, there has been no theoretical study of TFSI-GICs, although some simulation works for the TFSI molecule~\cite{Fujii,Umebayashi,Siqueira,Borodin1} and first-principles work for other anion GICs such as \ce{PF6}- and \ce{ClO4}-GICs~\cite{Tasaki} have been reported.
In this study, we apply the first-principles method within the density functional theory (DFT) framework to TFSI-GICs, together with graphite and the TFSI molecule, aiming to reveal their energetics, structures, electrochemical properties, and formation mechanism. Supercells of TFSI-C$_n$, where the carbon atom number $n$ is determined from unit cells of the graphene sheet with various sizes, are built, and their formation and interlayer binding energies are calculated. The electrode voltage, activation barrier for TFSI migration, and electronic conductance are considered. To obtain insights into GIC formation, we perform an analysis of the electronic density difference and atomic charge.
\section{Computational methods}
The DFT calculations were performed with the pseudopotential plane wave method as implemented in the Quantum ESPRESSO code (version 5.3.0)~\cite{QE}, using the ultrasoft pseudopotentials provided in the package.~\footnote{We used the Vanderbilt-type ultrasoft pseudopotentials C.pbe-van\_ak.UPF, N.pbe-van\_ak.UPF, S.pbe-van\_bm.UPF, O.pbe-van\_ak.UPF, and F.pbe-n-van.UPF, which are provided in the package.} Here, the valence electrons of C 2s$^2$2p$^2$, N 2s$^2$2p$^3$, O 2s$^2$2p$^4$, S 3s$^2$3p$^4$, and F 2s$^2$2p$^5$ were explicitly considered. For the exchange-correlation (XC) interaction between the valence electrons, the Perdew-Burke-Ernzerhof (PBE)\cite{PBE} functional within the generalized gradient approximation (GGA) was used, and the dispersive energy was added using the vdW-DF2 functional~\cite{vdwDF2}. The cutoff energies were set to 40 Ry for the plane wave basis set and 400 Ry for the electronic density. An isolated TFSI molecule was simulated using a cubic supercell with a lattice constant of 17 \AA, which is large enough to prevent artificial interaction between the neighbouring images. Only the $\Gamma$ point in the Brillouin zone (BZ) was used for this isolated molecule. To model stage 1 GICs, TFSI-C$_n$, one TFSI molecule was placed between the graphene sheets in $AA$-stacked graphite with various cell sizes of $(3\times 3)$, $(4\times 3)$, $(4\times 4)$, $(5\times 4)$, $(5\times 5)$, $(6\times 5)$ and $(6\times 6)$, which give the carbon atom number as $n$ = 18, 24, 32, 40, 50, 60 and 72, respectively. The special $k$-points for the BZ sampling were constructed using the $(3\times 3\times 3)$ and $(3\times 3\times 5)$ Monkhorst-Pack meshes for the $(3\times 3)$ $-$ $(5\times 4)$ and $(5\times 5)$ $-$ $(6\times 6)$ cells, respectively, for the atomic relaxations, while denser $(8\times 8\times 8)$ and $(8\times 8\times 10)$ meshes were used for the DOS calculations.
These computational parameters for the cutoff energies and $k$-point meshes give absolute total energies converged to better than 1 meV per atom. In the atomic relaxations, the forces on each atom were converged to within $5\times10^{-4}$ Ry Bohr$^{-1}$.
In order to study the TFSI intercalation into graphite, we assumed the following process~\cite{Tasaki},
\begin{gather}
\ce{TFSI-} - e^- \rightarrow \ce{TFSI} \\
\ce{TFSI}+\ce{C}_n \rightarrow \text{TFSI-C}_n \label{eq2}
\end{gather}
The formation energy of TFSI-C$_n$ compound per formula unit (four carbon atoms) can be calculated as follows,
\begin{equation}
E_f = \frac{4}{n}\left(E({\text{TFSI-C}_n}) - \frac{n}{4}E(\text{graphite}) - E(\text{TFSI})\right) \label{eq_Ef}
\end{equation}
where $E({\text{TFSI-C}_n})$, $E(\text{graphite})$, and $E(\text{TFSI})$ are the DFT total energies of the TFSI-C$_n$ supercell, the graphite unit cell, and the isolated TFSI supercell, respectively. A negative formation energy indicates that the GIC is thermodynamically stable, {\it i.e.} that its formation from graphite and the TFSI molecule is exothermic. The interlayer binding energy (or exfoliation energy) per carbon atom can be calculated as follows~\cite{yucj14},
\begin{equation}
E_b = \frac{1}{n}\left(E({\text{TFSI-C}_n})_{(d_i=d_e)} - E({\text{TFSI-C}_n})_{(d_i=d_\infty)}\right)\label{eq_Eb}
\end{equation}
where $d_i$ is the interlayer distance, $d_e$ the equilibrium interlayer distance, and $d_\infty$ is set to 20 \AA, beyond which the total energy changes negligibly. To calculate the activation barrier for TFSI migration inside graphite, the climbing image nudged elastic band (CI-NEB) method~\cite{NEB} was applied to the TFSI-\ce{C50} compound using seven image points and a convergence threshold of 0.05 eV \AA$^{-1}$ for the force on the band orthogonal to the path. The supercell dimensions were fixed at the optimized values, while all atoms were relaxed.
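As a minimal numerical sketch of Eqs.~\ref{eq_Ef} and \ref{eq_Eb}, the two energies can be evaluated from total energies as below; the values passed in are hypothetical placeholders, not the DFT totals computed in this work.

```python
# Sketch of Eq. (3) (formation energy per formula unit of 4 C atoms)
# and Eq. (4) (interlayer binding energy per C atom). All energies in eV;
# the inputs would come from DFT total-energy runs.

def formation_energy(e_gic, e_graphite, e_tfsi, n):
    """E_f = (4/n) * (E(TFSI-C_n) - (n/4) E(graphite) - E(TFSI)),
    with e_graphite the total energy of the 4-atom graphite unit cell."""
    return 4.0 / n * (e_gic - n / 4.0 * e_graphite - e_tfsi)

def binding_energy(e_at_de, e_at_dinf, n):
    """E_b = (E(TFSI-C_n; d_i = d_e) - E(TFSI-C_n; d_i = d_inf)) / n,
    negative when the layers bind (d_inf ~ 20 Angstrom)."""
    return (e_at_de - e_at_dinf) / n
```

A negative `formation_energy` signals exothermic GIC formation, matching the stability criterion stated above.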
\section{Results and discussion}
\subsection{Structure and energetics}
Since it is well known that standard XC functionals within GGA or LDA poorly describe graphite and other layered carbon materials, so that a vdW functional should be adopted~\cite{yucj06,yucj14,yucj04,Tsai}, we first assessed the reliability of various vdW functionals by calculating the lattice constants of graphite. Table S1$\dag$ shows that all PBE + vdW functionals adopted in this work, and even PBE itself, reproduced the experimental in-plane lattice constant (2.461 \AA~\cite{boettiger}), which is governed by the \ce{C-C} covalent ($\sigma$ and $\pi$) bonding, while the vdW-DF2 functional~\cite{vdwDF2} (relative error 1.64\%) yielded the best agreement with the experimental interlayer distance (3.353 \AA~\cite{boettiger}). Since the interlayer distance is perpendicular to the graphene sheet and thus along the direction of the vdW interaction, the vdW-DF2 functional was deemed the most reliable one for graphite and the GICs studied here. As a check of the stability of graphite, the interlayer binding energy was calculated to be $-$78 meV per atom. Compared with the experimental values in the range of $-$35 $\sim$ $-$52 meV~\cite{graphitexf1,graphitexf2}, our calculation gives a slight overestimation, but is close to other theoretical results, such as $-$69 meV with the inclusion of vdW energy by Tasaki~\cite{Tasaki}, and $-$61 $\sim$ $-$74 meV from the combination of DFT and perturbation theory by Dapper {\it et al.}~\cite{graphitexf3}.
\begin{table*}[!th]
\centering
\caption{\label{tab_tfsi}Bond length, bond angle, dihedral angle and molecular volume of TFSI in isolated state and TFSI-C$_n$ compounds ($n$ = 18, 24, 32, 40, 50), determined using the PBE + vdW-DF2 functional. Molecular volume is calculated based on the Connolly surface.}
\begin{tabular}{ccccccc}
\hline
& & \multicolumn{5}{c}{TFSI-C$_n$} \\
\cline{3-7}
& TFSI & 18 & 24 & 32 & 40 & 50 \\
\hline
\multicolumn{7}{l}{Bond length (\AA)} \\
S$-$N &1.642 &1.644 &1.623 &1.618 &1.616 &1.614 \\
S$-$O &1.463 &1.453 &1.451 &1.449 &1.448 &1.447 \\
S$-$C &2.027 &1.901 &1.905 &1.901 &1.901 &1.900 \\
C$-$F &1.343 &1.370 &1.377 &1.379 &1.381 &1.383 \\
\multicolumn{7}{l}{Bond angle (degree)} \\
S$-$N$-$S &130.06 &126.07 &126.19 &125.64 &125.68 &125.26 \\
N$-$S$-$O &112.28 &116.52 &115.57 &115.73 &115.88 &115.98 \\
&105.61 &106.55 &107.67 &108.09 &108.21 &108.51 \\
N$-$S$-$C &107.81 &100.60 &103.46 &103.33 &103.31 &103.20 \\
O$-$S$-$C &105.44 &105.73 &104.23 &104.02 &103.86 &103.69 \\
O$-$S$-$O &119.59 &119.58 &119.65 &119.53 &119.51 &119.46 \\
S$-$C$-$F &105.97 &109.38 &110.45 &110.90 &111.12 &111.33 \\
F$-$C$-$F &112.74 &109.56 &108.47 &108.00 &107.77 &107.54 \\
\multicolumn{7}{l}{Dihedral angle (degree)} \\
S$-$N$-$S$-$C&55.74 &84.94 &88.28 &87.77 &87.87 &87.70 \\
\multicolumn{7}{l}{Molecular volume (\AA$^3$)} \\
&164.4 &161.6 &163.7 &162.6 &163.7 &163.3 \\
\hline
\end{tabular}
\end{table*}
We then considered the isolated TFSI molecule, as assumed in this work according to Eq.~\ref{eq2}, although TFSI exists in the anion state in room-temperature ionic liquids such as \ce{Li+TFSI-}~\cite{Umebayashi,Henderson,Borodin2}, \ce{K+TFSI-}~\cite{Beltrop1} and \ce{Pyr14+TFSI-}~\cite{Henderson,Siqueira,Borodin1,Borodin2,Nicotera,Beltrop1}, used as electrolytes for polymer and alkali ion batteries. The molecular structure was optimized after a conformation search and compared with those in the TFSI-C$_n$ compounds. In addition, the electrostatic potential and the frontier molecular orbitals, namely the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO), were calculated. Since TFSI has the molecular structure \ce{F3C-S(O2)-N-S(O2)-CF3} and the terminal \ce{CF3} group can rotate about the \ce{S-N} bond to give rotational isomers, there are typically two conformations according to the dihedral \ce{C-S-N-C} angles: $C_1$ ($cis$) and $C_2$ ($trans$)~\cite{Fujii,Umebayashi}. In accordance with previous theoretical works employing the Gaussian orbital method~\cite{Fujii,Umebayashi} and force-field molecular dynamics~\cite{Siqueira,Borodin1}, our conformation search with the systematic grid scan method found the $C_1$ conformer to be the energetically lowest one.
\begin{figure}[!b]
\centering
\includegraphics[clip=true,scale=0.16]{fig1.eps}
\caption{\label{fig_tfsi}(a) The molecular structure of TFSI molecule with the lowest energy conformation, optimized using the PBE + vdW-DF2 functional, (b) 3D volumetric map of its electrostatic potential, and (c) isosurface plot of frontier molecular orbitals including HOMO and LUMO.}
\end{figure}
This conformer was placed in the cubic supercell with a lattice constant of 17~\AA~for atomic relaxation using the vdW-DF2 functional. The effective screening medium method~\cite{esm}, as implemented in the code, was applied to this isolated molecule. The optimized molecular structure of the TFSI molecule is depicted in Figure~\ref{fig_tfsi}(a), and the chemical bonding properties such as bond lengths, bond angles and the dihedral angle are presented in Table~\ref{tab_tfsi}. Compared with those from the Gaussian method~\cite{Fujii,Umebayashi}, the bond lengths were calculated to be slightly larger overall, especially the \ce{S-C} bond length, possibly due to the inclusion of the vdW interaction. To obtain qualitative insight into the electrochemical reduction stability, we present electrostatic potential maps in Figure~\ref{fig_tfsi}(b). The regions of lowest electrostatic potential (bluest regions in Figure~\ref{fig_tfsi}(b)) are found in the vicinity of the O atoms, whilst the regions of highest electrostatic potential (reddest regions) are found around the F, C and S atoms, indicating that electrochemical reduction will occur at the oxygen atoms. We also plot the frontier molecular orbitals, HOMO and LUMO, in Figure~\ref{fig_tfsi}(c). Interestingly, neither the HOMO nor the LUMO has weight around the S atoms, indicating a role of the S atoms different from that of the other atoms. In addition, intra-molecular charge separation upon excitation is not expected to occur, since the HOMO and LUMO distributions are not clearly separated.
\begin{figure}[!t]
\centering
\includegraphics[clip=true,scale=0.49]{fig2.eps}
\caption{\label{fig_lengef}(a) The interlayer distance ($d_i$) and relative volume expansion ratio ($r_\text{vol}$), and (b) formation energy per formula unit (4 carbon atoms) ($E_f$), as functions of carbon atom number (C$_n$) of graphene sheet in TFSI-C$_n$ compounds. Cell sizes of graphene sheets are marked in (b).}
\end{figure}
With the structural and energetic characteristics of graphite and the TFSI molecule in hand, we next applied the same DFT scheme to the TFSI-C$_n$ compounds under study. Because the TFSI molecule occupies a certain volume, a minimum cell size of the 2D graphene sheet is required to intercalate it into graphite. The molecular volume of TFSI was calculated to be 152.9 or 164.4 \AA$^3$ based on the vdW or Connolly surface, respectively, corresponding to a radius of 5.35 or 5.48 \AA~when reduced to an equivalent sphere. Hence, the $(3\times3)$ graphene sheet with a cell width of 7.41 \AA~is the minimal cell in which TFSI can be fully placed. On the other hand, it was found for ternary GICs with a co-intercalate consisting of an AM atom and a diglyme solvent molecule that the stability, as estimated by the formation energy, decreases with decreasing intercalate concentration, and that the GIC even becomes unstable below a certain concentration~\cite{yucj14}. Thus, we calculated the formation energies of TFSI-C$_n$ while systematically increasing the number of carbon atoms in the graphene sheet, {\it i.e.}, increasing the cell size from $(3\times3)$ ($n$ = 18) to $(6\times6)$ ($n$ = 72), thereby decreasing the TFSI concentration. Figure~\ref{fig_lengef} shows the formation energy as a function of the carbon atom number, together with the interlayer distances and volume expansion ratios relative to pristine graphite. The formation energy has its minimum of $-$2.07 eV in the \ce{C18} compound, increases roughly linearly with the carbon atom number from $n$ = 24 (whose value of $-$2.01 eV is similar to that of \ce{C18}), and becomes positive at the $(6\times5)$ cell ($n$ = 60). This indicates that TFSI-C$_n$ formation from graphite and the TFSI molecule is thermodynamically unfavourable for $n \geq 60$, and therefore TFSI-C$_n$ with $n$ = 18, 24, 32, 40, and 50 were considered for further calculation and analysis.
\begin{figure}[!t]
\centering
\includegraphics[clip=true,scale=0.27]{fig3.eps}
\caption{\label{fig_struct}Side and top views of supercells of TFSI-C$_n$ compounds ($n$ =18, 24, 32, 40, 50), optimized using the PBE + vdW-DF2 functional. Interlayer distance and height of N atom to graphene sheet in the unit of \AA~are marked.}
\end{figure}
Figure~\ref{fig_struct} shows the atomistic structures of these GICs, optimized with the PBE + vdW-DF2 functional. The interlayer distance, determined through variable-cell optimization, has its maximum of 7.89 \AA~in TFSI-\ce{C18} and decreases as a linear function of the carbon atom number, while the relative volume expansion ratio also decreases from 130\% (\ce{C18}) to 127\% (\ce{C50}) (see Figure~\ref{fig_lengef}(a)). The height of the N atom above the bottom graphene sheet decreases monotonically with increasing carbon atom number. Compared with the experimental values of $d_i\approx8.21$ \AA~and $r_\text{vol}\approx140$\%~\cite{Schmuelling,Beltrop1}, our calculation gives an underestimation, in accordance with the above-mentioned slight overestimation of the interlayer binding. In Table~\ref{tab_tfsi}, we present the bond lengths, bond angles, dihedral angle and molecular volume of the TFSI molecule intercalated into graphite, in comparison with those in the isolated state. The \ce{S-N}, \ce{S-O} and \ce{S-C} bonds shorten upon intercalation of TFSI into graphite, whilst the \ce{C-F} bond lengthens. The bond angles also change systematically overall, and in particular, the dihedral angle \ce{S-N-S-C} changes greatly from 56 degrees in the isolated state to over 84 degrees in the TFSI-C$_n$ GICs. These changes correlate well with the contraction of the molecular volume from 164.4 \AA$^3$ in the isolated state to 161.6 $\sim$ 163.7 \AA$^3$ in the GICs. Note that the contraction of the TFSI molecular volume upon intercalation is most remarkable in the $(3\times3)$ cell (\ce{C18}), which has the highest intercalate concentration.
\begin{figure}[!t]
\centering
\includegraphics[clip=true,scale=0.52]{fig4.eps}
\caption{\label{fig_exfol}The interlayer binding energy per carbon atom in TFSI-C$_n$ compounds with $n$ = 18, 24, 32, 40, and 50, calculated using the PBE + vdW-DF2 functional. Inset shows that of graphite.}
\end{figure}
As another energetic property of the GICs in addition to the formation energy, we calculated the interlayer binding energy (Eq.~\ref{eq_Eb}), representing the strength of binding between the graphene sheets, as shown in Figure~\ref{fig_exfol}. The interlayer binding energy of the TFSI-C$_n$ compounds decreases in magnitude with increasing cell size of the graphene sheet, indicating a decrease of binding strength, which correlates well with the trend of the formation energy. Moreover, the values for $n$ = 18 ($-$148 meV), 24 ($-$122 meV) and 32 ($-$93 meV) are larger in magnitude than that of graphite ($-$78 meV), indicating that GIC formation in these cases enhances the interlayer binding compared with graphite, due to a change of the bonding from vdW to ionic character. For carbon atom numbers above 32, {\it i.e.} $n$ = 40 ($-$73 meV) and 50 ($-$58 meV), however, the values are smaller in magnitude than that of graphite, indicating a weakening of the interlayer binding.
\subsection{Electrochemical characteristics}
In this subsection, we consider typical electrochemical characteristics of the TFSI-C$_n$ compounds, namely the electrode voltage, intercalate diffusion and electronic conductivity. Since the electrode voltage is one of the most crucial electrochemical properties, a better cathode material should possess a higher electrode voltage. Provided that, during the charge process in a DIB, the TFSI-GIC transforms from the compound with higher carbon atom number (lower TFSI concentration) to that with lower carbon atom number (higher TFSI concentration), we can change the notation from TFSI-C$_n$ to TFSI$_{x_i}$-C$_{60}$, where $x_i=60/n$, using the C$_{60}$ GIC as the starting compound. Then, using the total energies of the compounds, the electrode voltage can be calculated as follows,
\begin{equation}
V_\text{el}=-\frac{x_iE(\ce{TFSI}_{x_i}\text{-C}_{60})-x_jE(\ce{TFSI}_{x_j}\text{-C}_{60})-(x_i-x_j)E(\ce{TFSI})}{(x_i-x_j)e}
\end{equation}
where $e$ is the elementary charge. It should be noted that, although the TFSI-\ce{C60} compound was estimated to be unstable, it can be used as a reference compound owing to its low positive formation energy of 0.10 eV (Figure~\ref{fig_lengef}(b)).
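As a sketch, when all total energies are expressed in eV per TFSI$_{x}$-C$_{60}$ formula unit, the elementary charge drops out numerically and the voltage formula above reduces to the function below; the energies in a real evaluation would be the DFT totals of the two GIC compositions and the isolated TFSI molecule.

```python
def electrode_voltage(x_i, e_i, x_j, e_j, e_tfsi):
    """Average voltage between compositions TFSI_{x_j}-C60 and
    TFSI_{x_i}-C60 (x_i > x_j). Energies in eV per formula unit,
    so dividing by the elementary charge gives volts directly."""
    return -(x_i * e_i - x_j * e_j - (x_i - x_j) * e_tfsi) / (x_i - x_j)
```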
\begin{figure}[!t]
\centering
\includegraphics[clip=true,scale=0.14]{fig5.eps}
\caption{\label{fig_vel}The electrode voltage steps with decreasing carbon atom number and accordingly increasing specific capacity. Inset shows the experimental electrode potential obtained by Beltrop {\it et al.}~\cite{Beltrop1}, with the red rectangle indicating the range corresponding to the calculation.}
\end{figure}
\begin{table}[!b]
\caption{\label{tab_sum}Overview of interlayer distance ($d_i$), relative volume expansion ratio ($r_\text{vol}$), formation energy per formula unit ($E_f$), interlayer binding energy per carbon atom ($E_b$), specific capacity (SC) and average electrode voltage ($V_\text{el}$) in TFSI-C$_n$ GICs with $n$ from 18 to 50.}
\begin{tabular}{ccccccc}
\hline
& $d_i$ & $r_\text{vol}$ & $E_f$ & $E_b$ & SC & $V_\text{el}$ \\
C$_n$& (\AA) & (\%) & (eV) & (meV) & (mAh g$^{-1}$) & (V) \\
\hline
18 & 7.89 & 129.76 & $-$2.07 & $-$148 & 54.0 & 3.00 \\
24 & 7.85 & 128.44 & $-$2.01 & $-$122 & 47.2 & 3.42 \\
32 & 7.83 & 128.39 & $-$1.61 & $-$93 & 40.3 & 3.56 \\
40 & 7.82 & 128.11 & $-$1.15 & $-$73 & 35.2 & 3.64 \\
50 & 7.79 & 127.47 & $-$0.55 & $-$58 & 30.4 & 3.81 \\
\hline
\end{tabular}
\end{table}
Figure~\ref{fig_vel} shows the electrode voltage steps as the charge process progresses, {\it i.e.} as the carbon atom number of the graphene sheet decreases and accordingly the specific capacity increases. With decreasing carbon atom number, {\it i.e.} cell size of the graphene sheet, the specific capacity increases from 30.4 mAh g$^{-1}$ (TFSI-\ce{C50}) to 54.0 mAh g$^{-1}$ (TFSI-\ce{C18}), yielding voltage steps from 3.81 V to 2.23 V. Compared with the experimental voltage range from 5.0 V to 3.4 V {\it vs.} K/\ce{K+}, and especially from 4.4 V to 3.4 V over the capacity range from 30 to 45 mAh g$^{-1}$, our calculated values are in reasonable agreement with experiment. Table~\ref{tab_sum} presents an overview of the structural, energetic, and electrochemical characteristics of the TFSI-C$_n$ compounds with $n$ from 18 to 50, including the interlayer distance, relative volume expansion ratio, formation energy, interlayer binding energy, specific capacity and average electrode voltage.
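The specific capacities in Table~\ref{tab_sum} can be cross-checked from stoichiometry alone, assuming one electron exchanged per intercalated TFSI; this consistency check with standard atomic masses is our own illustration, not part of the reported methodology.

```python
# Specific capacity of TFSI-C_n assuming one electron per TFSI anion:
# SC = F / (3.6 * M), with F the Faraday constant and M the molar mass
# of one TFSI-C_n formula unit in g/mol (standard atomic masses).
F_mAh_PER_MOL = 96485.33 / 3.6   # Faraday constant converted to mAh/mol

def specific_capacity(n):
    # TFSI = N(SO2CF3)2: one N, two S, four O, two C, six F
    m_tfsi = 14.007 + 2 * 32.06 + 4 * 15.999 + 2 * 12.011 + 6 * 18.998
    return F_mAh_PER_MOL / (m_tfsi + n * 12.011)

# specific_capacity(18) ~ 54.0 and specific_capacity(50) ~ 30.4,
# reproducing the values listed in Table 2.
```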
\begin{figure}[!t]
\centering
\includegraphics[clip=true,scale=0.11]{fig6.eps}
\caption{\label{fig_neb}The activation barrier for TFSI migration inside the space between graphene sheets along the path across one carbon hexagon. Top views of starting and transition states are presented.}
\end{figure}
The mobility of the intercalate inside the space between the graphene sheets can determine the rate capability and cycling stability of a graphite based electrode. In this respect, we calculated the activation barriers for TFSI migration along possible paths across one carbon hexagon by applying the CI-NEB method~\cite{NEB} to the TFSI-\ce{C50} GIC. The activation barrier was calculated to be very low, at most $\sim$35 meV, as shown in Figure~\ref{fig_neb} (see Figures S1-S3$\dag$ for the other paths). Compared with the barriers for alkali cation (0.2 - 0.4 eV)~\cite{Nobuhara} and AM-solvent co-intercalate (0.1 $\sim$ 0.6 eV)~\cite{yucj06,yucj14} migration inside graphite, this is remarkably low, so that TFSI can migrate almost freely with a very long diffusion length. This suggests a very fast charging time and a very long cycling life in the DIB. It should be noted, however, that a much higher activation energy may be required for the first anion uptake, due to the initial widening of the interlayer distance between the graphene sheets against the attractive vdW forces.
\begin{figure}[!t]
\centering
\includegraphics[clip=true,scale=0.55]{fig7.eps}
\caption{\label{fig_dos}The electronic density of states (DOS) in isolated TFSI molecule, graphite, and TFSI-C$_n$ compounds with $n$ = 18, 24, 32, 40, and 50, calculated using the PBE + vdW-DF2 functional. In addition to total DOS, angular momentum dependent projected DOS for the case of graphite and partial DOS of TFSI and graphene sheet for the cases of GICs are presented. Fermi energy is set to zero as indicated by dotted vertical line.}
\end{figure}
The electronic conductivity is also an important property for understanding battery operation. As assumed for \ce{TFSI-} anion intercalation into graphite, the \ce{TFSI-} anion releases an electron before the intercalation starts, and this electron must be transmitted through the GIC to the current collector. Therefore, the TFSI-C$_n$ compound should be an electronic conductor, like many other electrode materials. The electronic conductivity of materials can be judged qualitatively by analysing the electronic density of states (DOS). Figure~\ref{fig_dos} presents the total DOS of the isolated TFSI molecule, graphite, and the TFSI-C$_n$ compounds, with the angular momentum projected DOS for graphite and the atom-resolved partial DOS for the GICs. As is well known, graphite was confirmed to have good in-plane electronic conductivity due to the existence of $p_z$ electron states across the Fermi level, but no inter-plane conductivity, the $s$, $p_x$ and $p_y$ states lying away from the Fermi level. Upon TFSI intercalation into graphite, the in-plane conductivity of the graphene sheet was observed to be well preserved, owing to the electronic states of the graphene sheets around the Fermi level in the TFSI-C$_n$ compounds. With increasing concentration of the TFSI intercalate (decreasing carbon atom number), the unoccupied states of graphene shift further away from the Fermi level, indicating a reduction of the electronic conductivity. In addition, an overlap of the electronic states of the graphene sheet and the TFSI molecule was observed below and above the Fermi level, indicating hybridization between the $\sigma$ bonding orbitals of the graphene sheet and the molecular orbitals of the TFSI molecule.
\subsection{Electronic charge transfer}
\begin{figure*}[!t]
\centering
\includegraphics[clip=true,scale=0.35]{fig8.eps}
\caption{\label{fig_chg}Isosurface plot of the electronic density differences at the value of 0.002 $|e|$ \AA$^{-3}$ upon TFSI intercalation into graphite forming TFSI-C$_n$ compounds with $n$ = 18, 24, 32, 40, and 50. Yellow colour is for positive value (electron gain), while cyan colour is for negative value (electron loss).}
\end{figure*}
Finally, we performed an analysis of the electronic density difference and atomic charges to obtain insights into the charge transfer occurring upon TFSI intercalation into graphite forming the TFSI-C$_n$ compounds, and into the chemical bonding between TFSI and the graphene layer. This is meaningful for understanding the mechanism of TFSI-GIC formation. The electronic density difference ($\Delta\rho$) is obtained as the difference between the electronic density of the TFSI-C$_n$ compound ($\rho(\ce{TFSI}\text{-C}_n)$) and those of the graphene sheet ($\rho(\text{C}_n)$) and the TFSI molecule ($\rho(\ce{TFSI})$) as follows,
\begin{equation}
\Delta\rho=\rho(\ce{TFSI}\text{-C}_n)-\rho(\text{C}_n)-\rho(\ce{TFSI})
\end{equation}
where the atomic positions are fixed at those optimized for the compound.
Figure~\ref{fig_chg} shows the electronic density differences as isosurface plots at the value of 0.002 $|e|$ \AA$^{-3}$. Given that positive values (yellow) indicate electron accumulation and negative values (cyan) electron depletion, it is found that valence electronic density is transferred from the graphene layer to the TFSI molecule, and that the extent of charge transfer decreases gradually with increasing cell size of the graphene layer, as seen from the gradual reduction in the amount of charge depletion on the graphene layer. Nevertheless, the electronic density difference around the carbon atoms facing the oxygen and nitrogen atoms of TFSI remains almost constant. Moreover, the distribution of the electronic density difference around the TFSI molecule also changes little. Inside the TFSI molecule, the major portion of the charge accumulation is found around the O and N atoms, confirming the ionic character of the bonding between the nitrogen and oxygen atoms of the TFSI molecule and the facing carbon atoms of the graphene layer.
To quantitatively estimate the amount of transferred charge, we analysed the atomic charges calculated by the L\"{o}wdin method. As shown in Table S2$\dag$, upon intercalation of TFSI into graphite, the graphene layer donates electrons, its average L\"{o}wdin charge per atom decreasing from 3.962 in graphite to 3.936 (\ce{C50}) $\sim$ 3.902 (\ce{C18}) in the GICs, while the TFSI molecule acts as an electron acceptor, its L\"{o}wdin charge increasing from 5.973 in the isolated state to 6.036 in the GICs. With increasing cell size of the graphene sheet, the number of electrons donated by the graphene layer decreases, as reflected by the increase of the L\"{o}wdin charge per carbon atom, whereas the average charge of the TFSI molecule per atom remains constant, in good agreement with the observations from the electronic density differences. Inside the TFSI molecule, only the charge of the S atoms decreases upon intercalation, while those of the other atoms increase, indicating that the S atoms also donate electrons, as already suggested by the analysis of the molecular orbitals.
\section{Conclusions}
Using first-principles calculations with the PBE + vdW-DF2 functional, we have studied the intercalation of the TFSI anion into graphite, aiming to identify the atomistic structures, energetics, electrochemical properties, and electronic charge transfer of TFSI-C$_n$ compounds for dual ion battery application. We modelled these compounds using graphene unit cells of various sizes from $(3\times3)$ to $(6\times6)$, giving carbon atom numbers $n$ from 18 to 72, and examined their formation energies, finding a linear increase of the formation energy with increasing cell size and a thermodynamic instability of the GICs with $n \geq 60$ due to positive formation energies. Together with the decrease of the interlayer binding energy, this indicates that TFSI-C$_n$ compounds cannot be formed at low concentrations of the TFSI intercalate. With increasing carbon atom number in the TFSI-C$_n$ compounds, the interlayer distance was found to decrease from 7.89 \AA~(\ce{C18}) to 7.79 \AA~(\ce{C50}), and accordingly the volume expansion ratio relative to graphite decreases from 130\% to 127\%, a slight underestimation compared with experiment. The average electrode potential during the charge process was determined to range from 3.8 V to 3.0 V over the specific capacity range from 30 mAh g$^{-1}$ to 54 mAh g$^{-1}$, in reasonable agreement with experiment. In particular, the activation barrier for TFSI migration inside graphite was calculated to be quite low, under 50 meV, suggesting a very fast charging time. The analysis of the DOS indicates that these compounds are electronic conductors, with a conductivity that decreases with increasing intercalate concentration. The analysis of the electronic density difference and atomic charges shows that the graphene layer and the TFSI molecule in these compounds play the roles of electron donor and acceptor, respectively.
These calculation results clarify the mechanism of TFSI-C$_n$ formation and open new prospects for developing graphite based cathode materials for alkali ion batteries.
\section*{Acknowledgments}
This work was supported partially from the State Committee of Science and Technology, Democratic People's Republic of Korea, under the fundamental research project ``Design of Innovative Functional Materials for Energy and Environmental Application'' (No. 2016-20). The calculations in this work were carried out on the HP Blade System C7000 (HP BL460c) that is owned by Faculty of Materials Science, Kim Il Sung University.
\section*{Appendix A. Supplementary data}
Supplementary data related to this article can be found at URL.
\section*{Notes}
The authors declare no competing financial interest.
\bibliographystyle{elsarticle-num-names}
\section{Introduction}
The accurate mathematical modeling of many important applications, e.g., composite materials, porous media and reservoir
simulation, calls for elliptic problems with heterogeneous coefficients. In order to adequately describe the intrinsic complex properties in
practical scenarios, the heterogeneous coefficients can have
both multiple inseparable scales and high-contrast. Due to the disparity of scales, the classical numerical treatment becomes prohibitively expensive
and even intractable for many multiscale applications. Nonetheless, motivated by the broad spectrum of practical applications, a large number of multiscale model reduction techniques, e.g., multiscale finite element methods (MsFEMs),
heterogeneous multiscale methods (HMMs), variational multiscale methods, flux norm approach, generalized multiscale
finite element methods (GMsFEMs) and localized orthogonal decomposition (LOD), have been proposed in the literature
\cite{MR1455261,MR1979846,MR1660141,MR2721592, egh12, MR3246801, li2017error} over
the last few decades. They have achieved
great success in the efficient and accurate simulation of heterogeneous problems. Amongst these numerical methods, the GMsFEM \cite{egh12} has
demonstrated extremely promising numerical results for a wide variety of problems, and thus it is becoming
increasingly popular. However, a mathematical understanding of the method remains largely missing, despite abundant
successful empirical evidence. The goal of this work is to provide a mathematical justification, by rigorously
establishing the optimal convergence of the GMsFEM in the energy norm without any restrictive assumptions or oversampling technique.
We first formulate the heterogeneous elliptic problem. Let $D\subset
\mathbb{R}^d$ ($d=1,2,3$) be an open bounded Lipschitz domain {with a boundary $\partial D$}. Then we seek a function $u\in V:=H^{1}_{0}(D)$ such that
\begin{equation}\label{eqn:pde}
\begin{aligned}
\mathcal{L}u:=-\nabla\cdot(\kappa\nabla u)&=f &&\quad\text{ in }D,\\
u&=0 &&\quad\text{ on } \partial D,
\end{aligned}
\end{equation}
where the force term $f\in L^2(D)$ and the permeability coefficient $\kappa\in L^{\infty}(D)$ with $\alpha\leq\kappa(x)
\leq\beta$ almost everywhere for some lower bound $\alpha>0$ and upper bound $\beta>\alpha$. We denote by $\Lambda:=
\frac{\beta}{\alpha}$ the ratio of these bounds, {which reflects the contrast of the coefficient $\kappa$}. Note that
the existence of multiple scales in the coefficient $\kappa$ renders directly solving Problem \eqref{eqn:pde} challenging, since
resolving the problem to the finest scale would incur huge computational cost.
The goal of the GMsFEM is to efficiently capture the large-scale behavior of the solution $u$ locally without
resolving all the microscale features within. To realize this desirable property, we first discretize the computational
domain $D$ into a coarse mesh $\mathcal{T}^H$. Over $\mathcal{T}^H$, we define the classical multiscale
basis functions $\{\chi_i\}_{i=1}^{N}$, with $N$ being the total number of coarse nodes. Let $\omega_i:=\text{supp}
(\chi_i)$ be the support of $\chi_i$, which is often called a local coarse neighborhood below. To
accurately approximate the local solution $u|_{\omega_i}$ (restricted to $\omega_i$), we construct
a local approximation space. In practice, two types of local multiscale spaces are frequently employed:
local spectral space ($V_{\text{off}}^{\si, \ell_i^{\roma}}$, of dimension $\ell_i^{\roma}$) and local harmonic space
$V_{\text{snap}}^{\hi}$. The dimensionality of the local harmonic space $V_{\text{snap}}^{\hi}$ is problem-dependent, and it can be
extremely large when the microscale within the coefficient $\kappa$ tends to zero. Hence, a further local model reduction based
on proper orthogonal decomposition (POD) in $V_{\text{snap}}^{\hi}$ is often employed. We denote the corresponding
local POD space of rank $\ell_i$ by $V_{\text{off}}^{\hi, \ell_i}$. In sum, in practice, we can have three types of local
multiscale spaces at our disposal: $V_{\text{off}}^{\si, \ell_i}$, $V_{\text{snap}}^{\hi}$ and $V_{\text{off}}^{\hi, \ell_i}$ on
$\omega_i$. These basis functions are then used in the standard finite element framework, e.g., continuous
Galerkin formulation, for constructing
a global approximate solution.
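For concreteness, this coupling amounts to the standard Galerkin projection of problem \eqref{eqn:pde} onto a global multiscale space; writing $V_{\mathrm{ms}}$ for the space spanned by the local basis functions multiplied by the partition of unity functions $\{\chi_i\}$ (a shorthand, made precise later), it reads: find $u_{\mathrm{ms}}\in V_{\mathrm{ms}}$ such that
\begin{equation*}
\int_D \kappa\nabla u_{\mathrm{ms}}\cdot\nabla v\,\mathrm{d}x=\int_D fv\,\mathrm{d}x\quad\text{for all } v\in V_{\mathrm{ms}}.
\end{equation*}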
One crucial part in the local spectral basis construction is to include local spectral basis functions ($V_{\text{off}}^{\ti, \ell_i^{\romb}}$, of dimension $\ell_i^{\romb}$) governed by Steklov eigenvalue problems \cite{MR2770439}, which was first applied to the context of the GMsFEMs in \cite{MR3277208}, to the best of our knowledge.
This was motivated by the decomposition of the local solution $u|_{\omega_i}$ into the sum of three components, cf. \eqref{eq:decomp}, where the first two components can be approximated efficiently by the local spectral space $V_{\text{off}}^{\si, \ell_i^{\roma}}$ and $V_{\text{off}}^{\ti, \ell_i^{\romb}}$, respectively, and the third component is of rank one and can be obtained by solving one local problem.
The good approximation property of these local multiscale spaces to the solution $u|_{\omega_i}$ of problem
\eqref{eqn:pde} is critical to ensure the accuracy and efficiency of the GMsFEM. We shall present relevant
approximation error results for the preceding three types of multiscale basis functions in Proposition \ref{prop:projection}, Lemma \ref{lemma:u2}, Lemma \ref{lem:energyHA} and Lemma
\ref{lem:5.2}. It is worth pointing out that the proof of Proposition \ref{prop:projection} relies crucially on the expansion of the
source term $f$ in terms of the local spectral basis function in Lemma \ref{lem:assF}. Thus the argument differs
substantially from the typical argument for such analysis, which employs the oversampling argument together with a Caccioppoli type
inequality \cite{babuska2011optimal,eglp13}, and it is of independent interest.
The proof of Lemma \ref{lemma:u2} is delicate. It relies essentially on the transposition method \cite{MR0350177},
which bounds the weighted $L^2$ error estimate in the domain by the boundary error estimate, since the latter can be
obtained straightforwardly. Most importantly, the involved constant is independent of the contrast in the
coefficient $\kappa$. This result is presented in Theorem \ref{lem:very-weak}.
To establish Lemmas \ref{lem:energyHA} and \ref{lem:5.2}, we make one mild assumption on the
geometry of the coefficient, cf. Assumption \ref{ass:coeff}, which enables the use
of the weighted Friedrichs inequality in the proof.
In addition, since the local multiscale basis functions in $V_{\text{off}}^{\hi, \ell_i}$ are $\kappa$-harmonic
and since the weighted $L^2(\omega_i)$ error estimate can be obtained directly from the POD, cf. Lemma \ref{lem:5.1}, we employ a
Caccioppoli type inequality \cite{MR717034} to prove Lemma \ref{lem:5.2}. Note that our analysis does not
exploit the oversampling strategy, which has played a crucial role for proving energy error estimates
in all existing works \cite{babuska2011optimal,eglp13,MR3246801,chung2017constraint}.
Together with the conforming Galerkin formulation and the partition of unity functions $\{\chi_i\}_{i=1}^N$
on the local domains $\{\omega_i\}_{i=1}^{N}$, we obtain three types of multiscale methods to solve
problem \eqref{eqn:pde}, cf. \eqref{cgvarform_spectral}--\eqref{cgvarform_pod}. Their energy error estimates
are presented in Propositions \ref{prop:Finalspectral}, \ref{prop:FinalSnap} and \ref{prop:Finalpod},
respectively. Specifically, their convergence rates are precisely characterized by the eigenvalues $\lambda_{\ell_i^{\roma}}^{\si}$, $\lambda_{\ell_i^{\romb}}^{\ti}$,
$\lambda_{\ell_i}^{\hi}$ and the coarse mesh size $H$ (see Section \ref{sec:error} for the definitions
of the eigenvalue problems). Thus, the decay/growth behavior of these eigenvalues plays an extremely
important role in determining the convergence rates; its analysis, however, is beyond the scope of the present work. We refer
readers to the works \cite{babuska2011optimal,li2017low} for results along this line.
Last, we put our contributions into context. The local spectral estimates in the energy norm in
Proposition \ref{prop:projection} and Lemma \ref{lemma:u2} represent the state-of-the-art in the sense that no restrictive
assumption on the problem data is made. Furthermore, we prove the convergence without the help
of the oversampling strategy in the analysis, which has played a crucial role in all existing studies
\cite{babuska2011optimal,EFENDIEV2011937,eglp13,chung2017constraint}. In practice, avoiding the oversampling
strategy saves computational cost, and this corroborates well with empirical observations \cite{EFENDIEV2011937}.
Due to the local estimates in Proposition \ref{prop:projection} and Lemma \ref{lemma:u2},
we are able to derive a global estimate in Proposition \ref{prop:Finalspectral}, which provides the much-needed
result for analyzing many multiscale methods \cite{MR1660141, MR2721592, MR3246801,li2017error}, cf. Remark
\ref{rem:spectral}. Recently, Chung et al.\ \cite{chung2017constraint} proved some convergence estimates
in a similar spirit to Proposition \ref{prop:projection}, by adapting the LOD technique \cite{MR3246801}.
Our result greatly simplifies the analysis and improves their result \cite{chung2017constraint} by avoiding
the oversampling. To the best of our knowledge, there is no known convergence estimate for
either the local harmonic space or the local POD space, and the results presented in Propositions
\ref{prop:FinalSnap} and \ref{prop:Finalpod} are the first such results.
The remainder of this paper is organized as follows. We formulate the heterogeneous problem in Section \ref{sec:pre},
and describe the main idea of the GMsFEM. We present in Section \ref{cgdgmsfem} the
construction of local multiscale spaces, harmonic extension space and discrete POD. Based upon them, we present three types of
global multiscale spaces. Together with the canonical conforming Galerkin formulation, we
obtain three types of numerical methods to approximate Problem \eqref{eqn:pde} in \eqref{cgvarform_spectral}
to \eqref{cgvarform_pod}. The error estimates of these multiscale methods are presented in Section \ref{sec:error},
which represent the main contributions of this paper. Finally, we provide concluding remarks in
Section \ref{sec:conclusion}. The regularity result for the elliptic problem with very rough boundary data is established in an appendix.
\section{Preliminary}\label{sec:pre}
In this section, we present basic facts related to Problem \eqref{eqn:pde}, fix the notation, and briefly describe the GMsFEM.
Let the space $V:=H^{1}_{0}(D)$ be equipped with the (weighted) inner product
\begin{align*}
\innerE{v_1}{v_2}_{D}=:a(v_1,v_2):=\int_{D}\kappa\nabla v_1\cdot\nabla v_2\;\dx\quad \text{ for all } v_1, v_2\in V,
\end{align*}
and the associated energy norm
\begin{align*}
\seminormE{v}{D}^2:=\innerE{v}{v}_{D}\quad \text{ for all } v\in V.
\end{align*}
Let $W:=L^2(D)$ be equipped with the usual norm $\normL{\cdot}{D}$ and inner product $(\cdot,\cdot)_{D}$.
The weak formulation for problem \eqref{eqn:pde} is to find $u\in V$ such that
\begin{align}\label{eqn:weakform}
a(u,v)=(f,v)_{D} \quad \text{for all
} v\in V.
\end{align}
The Lax-Milgram theorem implies the well-posedness of problem \eqref{eqn:weakform}.
To discretize problem \eqref{eqn:pde}, we first introduce fine and coarse grids.
Let $\mathcal{T}^H$ be a regular partition of the domain $D$ into
finite elements (triangles, quadrilaterals, tetrahedra, etc.) with a mesh size $H$. We refer to
this partition as the coarse grid, and accordingly its elements as coarse elements. Then each coarse element is further partitioned
into a union of connected fine grid blocks. The fine-grid partition is denoted by
$\mathcal{T}^h$ with $h$ being its mesh size. Over $\mathcal{T}^h$, let $V_h$ be the conforming piecewise
linear finite element space:
\[
V_h:=\{v\in \mathcal{C}(\overline{D}): v|_{T}\in \mathcal{P}_{1} \text{ for all } T\in \mathcal{T}^h\},
\]
where $\mathcal{P}_1$ denotes the space of linear polynomials. Then the fine-scale solution $u_h\in V_h$ satisfies
\begin{align}\label{eqn:weakform_h}
a(u_h,v_h)=(f,v_h)_{D} \quad \text{ for all } v_h\in V_h.
\end{align}
The Galerkin orthogonality implies the following optimal estimate in the energy norm:
\begin{align}\label{eq:fineApriori}
\seminormE{u-u_h}{D}\leq \min\limits_{v_h\in V_h}\seminormE{u-v_h}{D}.
\end{align}
The fine-scale solution $u_h$ will serve as a reference solution in multiscale methods. Note that due to the presence of multiple scales in the coefficient $\kappa$, the fine-scale mesh size $h$ should be commensurate with the smallest scale and thus it can be very small in order to obtain an accurate solution. This necessarily involves huge computational complexity, and more efficient methods are in great demand.
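For concreteness, the fine-scale solve \eqref{eqn:weakform_h} can be sketched in a one-dimensional setting (an illustrative simplification; the paper itself concerns multi-dimensional domains), using P1 elements and a piecewise constant high-contrast coefficient:

```python
import numpy as np

def solve_fine_scale_1d(kappa, f, n):
    """P1 FEM for -(kappa u')' = f on (0, 1) with u(0) = u(1) = 0.

    kappa, f : callables evaluated at element midpoints (kappa is
    treated as piecewise constant per fine element).
    Returns the nodal values of u_h at the n-1 interior nodes.
    """
    h = 1.0 / n
    x_mid = (np.arange(n) + 0.5) * h
    k = kappa(x_mid)                      # element-wise conductivity
    A = np.zeros((n - 1, n - 1))          # stiffness on interior nodes
    b = np.zeros(n - 1)
    for e in range(n):                    # loop over fine elements
        ke = k[e] / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
        fe = f(x_mid[e]) * h / 2.0
        for p in range(2):
            i = e + p - 1                 # global interior node index
            if not 0 <= i <= n - 2:
                continue
            b[i] += fe
            for q in range(2):
                j = e + q - 1
                if 0 <= j <= n - 2:
                    A[i, j] += ke[p, q]
    return np.linalg.solve(A, b)

# High-contrast coefficient: kappa = 1e4 in the middle, 1 elsewhere.
kappa = lambda x: np.where(np.abs(x - 0.5) < 0.2, 1e4, 1.0)
u_h = solve_fine_scale_1d(kappa, lambda x: 1.0, n=200)
```

An accurate $u_h$ requires the fine mesh to resolve the interfaces of $\kappa$, which is precisely the cost that the multiscale methods described below aim to avoid.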
In this work, we are concerned with flow problems with high-contrast heterogeneous coefficients,
which involve multiscale permeability fields, e.g., permeability fields with vugs and faults, and
furthermore, can be parameter-dependent, e.g., on the viscosity. In such scenarios, the computation of the
fine-scale solution $u_h$ incurs high computational
complexity, and one has to resort to multiscale methods. The GMsFEM has been extremely successful
for solving multiscale flow problems, which we briefly recap below.
The GMsFEM aims at solving Problem \eqref{eqn:pde} on the coarse mesh $\mathcal{T}^{H}$
cheaply, which, meanwhile, maintains a certain accuracy compared to the fine-scale solution $u_h$. To describe the
GMsFEM, we need some notation. The vertices of $\mathcal{T}^H$
are denoted by $\{O_i\}_{i=1}^{N}$, with $N$ being the total number of coarse nodes.
The coarse neighborhood associated with the node $O_i$ is denoted by
\begin{equation} \label{neighborhood}
\omega_i:=\bigcup\{ K_j\in\mathcal{T}^H: ~~~ O_i\in \overline{K}_j\}.
\end{equation}
The overlap constant $\Cov$ is defined by
\begin{align}\label{eq:overlap}
\Cov:=\max\limits_{K\in \mathcal{T}^{H}}\#\{O_i: K\subset\omega_i \text{ for } i=1,2,\cdots,N\}.
\end{align}
We refer to Figure~\ref{schematic} for an illustration of neighborhoods and elements subordinated to the coarse
discretization $\mathcal{T}^H$. Throughout, we use $\omega_i$ to denote a coarse neighborhood.
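The neighborhoods \eqref{neighborhood} and the overlap constant \eqref{eq:overlap} can be computed mechanically; the following sketch assumes a uniform quadrilateral coarse grid (purely for illustration) and recovers $\Cov=4$:

```python
import itertools

def coarse_neighborhoods(nx, ny):
    """Coarse neighborhoods omega_i on a uniform nx-by-ny quad mesh.

    Nodes are pairs (i, j) with 0 <= i <= nx and 0 <= j <= ny; element
    (e, f) has corner nodes (e, f), (e+1, f), (e, f+1), (e+1, f+1).
    Returns a dict {node: set of elements K with node in closure(K)}.
    """
    return {(i, j): {(e, f)
                     for e in (i - 1, i) if 0 <= e < nx
                     for f in (j - 1, j) if 0 <= f < ny}
            for i, j in itertools.product(range(nx + 1), range(ny + 1))}

def overlap_constant(omega):
    """C_ov: the maximal number of neighborhoods covering one element."""
    count = {}
    for elems in omega.values():
        for K in elems:
            count[K] = count.get(K, 0) + 1
    return max(count.values())

omega = coarse_neighborhoods(4, 4)
C_ov = overlap_constant(omega)   # each quad element has four corners
```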
\begin{figure}[htb]
\centering
\includegraphics[width=0.65\textwidth]{gridschematic}
\caption{Illustration of a coarse neighborhood and coarse element with an overlapping constant $\Cov=4$.}
\label{schematic}
\end{figure}
Next, we outline the GMsFEM with a continuous Galerkin (CG) formulation; see Section \ref{cgdgmsfem} for details. We denote by $\omega_i$
the support of the multiscale basis functions. These basis functions are denoted by $\psi_k^{\omega_i}$ for
$k=1,\cdots,\ell_i$ for some $\ell_i\in \mathbb{N}_{+}$, which is the number of local basis functions associated with $\omega_i$. Throughout,
the superscript $i$ denotes the $i$-th coarse node or coarse neighborhood $\omega_i$.
Generally, the GMsFEM utilizes multiple basis functions per coarse neighborhood $\omega_i$,
and the index $k$ represents the numbering of these basis functions.
In turn, the CG multiscale solution $u_{\text{ms}}$ is sought as $u_{\text{ms}}(x)=\sum_{i,k} c_{k}^i \psi_{k}^{\omega_i}(x)$.
Once the basis functions $\psi_k^{\omega_i}$ are identified, the CG global coupling is given through the variational form
\begin{equation}
\label{eq:globalG} a(u_{\text{ms}},v)=(f,v), \quad \text{for all} \, \, v\in
V_{\text{off}},
\end{equation}
where $V_{\text{off}}$ denotes the finite element space spanned by these basis functions.
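Algebraically, once the basis vectors are collected as columns of a matrix $\Phi$ on the fine grid, the coupled system \eqref{eq:globalG} is a small Galerkin projection of the fine-scale system. A minimal sketch, with a random SPD matrix and smooth modes standing in for the actual fine-scale stiffness matrix and multiscale basis (both assumptions made only for illustration):

```python
import numpy as np

def coarse_galerkin_solve(A, b, Phi):
    """Galerkin coupling on the multiscale space V_off = range(Phi).

    A   : (n, n) SPD fine-scale stiffness matrix,
    b   : (n,) fine-scale load vector,
    Phi : (n, m) multiscale basis vectors psi_k^{omega_i} as columns.
    Returns the fine-grid representation of u_ms.
    """
    A_c = Phi.T @ A @ Phi          # m-by-m coarse stiffness
    b_c = Phi.T @ b                # coarse load
    c = np.linalg.solve(A_c, b_c)  # coarse coefficients c_k^i
    return Phi @ c                 # expand back to the fine grid

# Toy stand-ins: an SPD matrix and a few globally smooth basis vectors.
rng = np.random.default_rng(0)
n, m = 50, 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
b = rng.standard_normal(n)
x = np.linspace(0.0, 1.0, n)
Phi = np.column_stack([np.sin((k + 1) * np.pi * x) for k in range(m)])
u_ms = coarse_galerkin_solve(A, b, Phi)
```

By construction, the residual $b - A u_{\text{ms}}$ is orthogonal to the range of $\Phi$, which is the discrete counterpart of the Galerkin orthogonality underlying \eqref{eq:globalG}.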
We conclude the section with the following assumption on $D$ and $\kappa$.
\begin{assumption}[Structure of $D$ and $\kappa$]\label{ass:coeff}
Let $D$ be a domain with a $C^{1,\alpha}$ $(0<\alpha<1)$ boundary $\partial D$,
and $\{D_i\}_{i=1}^m\subset D$ be $m$ pairwise disjoint strictly convex open subsets, {each with a $C^{1,\alpha}$ boundary
$\Gamma_i:=\partial D_i$,} and denote $D_0=D\backslash \overline{\cup_{i=1}^{m} D_i}$.
Let the permeability coefficient $\kappa$ be a piecewise regular function defined by
\begin{equation}
\kappa=\left\{
\begin{aligned}
&\eta_{i}(x) &\text{ in } D_{i},\\
&1 &\text{ in }D_0.
\end{aligned}
\right.
\end{equation}
Here $\eta_i\in C^{\mu}(\bar{D_i})$ with $\mu\in (0,1)$ for $i=1,\cdots,m$. Denote $\etamaxmin{min}:=\min_{i}\{\eta_i\}\geq 1$ and $\etamaxmin{max}:=\max_{i}\{\eta_i\}$.
\end{assumption}
Under Assumption \ref{ass:coeff}, the coefficient $\kappa$ is $\Gamma$-{\em quasi-monotone} on each coarse neighborhood $\omega_i$ and the global domain $D$
(see \cite[Definition 2.6]{pechstein2012weighted} for the precise definition) with either $\Gamma:=\partial \omega_i$
or $\Gamma:=\partial D$. Then the following weighted Friedrichs inequality \cite[Theorem 2.7]{pechstein2012weighted} holds.
\begin{theorem}[Weighted Friedrichs inequality]\label{thm:friedrichs}
Let $\text{diam}(D)$ be the diameter of the bounded domain $D$ and $\omega_i\subset D$. Define
\begin{align}
\Cpoin{\omega_i}&:=H^{-2}\sup\limits_{w\in H^1_0(\omega_i)}\frac{\int_{\omega_i}{\kappa}w^2\dx}{\int_{\omega_i}\kappa|\nabla w|^2\dx},\label{eq:poinConstant}\\
\Cpoin{D}&:=\text{diam}(D)^{-2}\sup\limits_{w\in H^1_0(D)}\frac{\int_{D}{\kappa}w^2\dx}{\int_{D}\kappa|\nabla w|^2\dx}.\label{eq:poinConstantG}
\end{align}
Then the positive constants $\Cpoin{\omega_i}$ and $\Cpoin{D}$ are independent of the contrast of $\kappa$.
\end{theorem}
\begin{remark}
Below we only require that the constants $\Cpoin{\omega_i}$ and $\Cpoin{D}$ be independent
of the contrast in $\kappa$. Assumption \ref{ass:coeff} is one sufficient condition to ensure this,
and it can be relaxed \cite{pechstein2012weighted}.
\end{remark}
\section{CG-based GMsFEM for high-contrast flow problems}
\label{cgdgmsfem}
In this section, we present the local spectral basis functions, local harmonic extension
basis functions and POD, and the global weak formulation based on these local multiscale basis functions.
\begin{comment}
In this section, we give a brief description of the GMsFEM based on the continuous Galerkin (CG) formulation
for high contrast flow problems. More details about the method can be found in \cite{egh12, eglp13}.
First we give a general outline of the GMsFEM.
\noindent
{{\bf Offline computations:}
\begin{itemize}
\item[Step 1] Generate coarse grids.
\item[Step 2] Construct snapshot space that will be used to compute an offline space.
\item[Step 3] Construct a small dimensional offline space by dimension reduction in the space of local snapshots.
\end{itemize}}
\noindent
{{\bf Online computations:}
\begin{itemize}
\item[Step 1] For each input parameter, compute multiscale basis functions (for parameter-dependent case only).
\item[Step 2] Solve a coarse-grid problem for any force term and boundary condition.
\end{itemize}}
{In the outline, the offline computations refer to the operations that are performed independently of the actual
simulations. Given the computational domain $\Omega$, a coarse grid $\mathcal{T}^H$ is generated
and local problems are solved on each coarse neighborhood $\omega_i$ to obtain the snapshot spaces; see
Section \ref{locbasis} below for further details. Then, lower-dimensional offline spaces are obtained
from the snapshot spaces by dimension reduction through solving suitable spectral problems. Online computations
refer to the operations that are needed for the actual simulations. Once the coefficient $\kappa$ and the other problem data
are given, we then can construct a (hopefully low-dimensional) online space from the offline space. The solutions
are then obtained using the online space, within a continuous Galerkin formulation. In passing, we note that
other finite element formulations can also be employed.}
{Note that the procedure can be applied to the general case where the coefficient is parameter-dependent. For the
high-contrast flow problem considered in this work,
the coefficient $\kappa$ is parameter-independent, and hence, the online computations part in the outline
can be skipped.
\end{comment}
\subsection{Local multiscale basis functions}
\label{locbasis}
First we present two principled approaches for constructing local multiscale functions: local spectral
bases and local harmonic extension bases, which represent the two main approaches within the GMsFEM framework.
The constructions are performed on each coarse neighborhood $\omega_i$ with $i=1,2,\cdots,N$, and can be carried out in
parallel, if desired. Since the dimensionality of the local harmonic extension bases is problem-dependent and inversely proportional
to the smallest scale in $\kappa$, in practice, we often perform an ``optimal'' local model order reduction based
on POD to further reduce the complexity at the online stage.
Before presenting the constructions, we first introduce some useful function spaces, which will play an important role in the analysis below.
Let $L^2_{\widetilde{\kappa}}(\omega_i)$ and $H^1_{\kappa}(\omega_i)$ be Hilbert spaces
with their inner products and norms defined respectively by
\begin{alignat*}{3}
(w_1,w_2)_{i}&:=\int_{\omega_i}\widetilde{\kappa}w_1\cdot w_2\;\dx&&\|{w_1}\|_{L^2_{\widetilde{\kappa}}(\omega_i)}^2:=(w_1,w_1)_{i}&\ \ \text{ for }w_1, w_2\in L^2_{\widetilde{\kappa}}(\omega_i),\\
\innerE{v_1}{v_2}_{i}&:=\int_{\omega_i}{\kappa}\nabla v_1\cdot \nabla v_2\;\dx \quad&&\normE{v_1}{\omega_i}^2:=(v_1,v_1)_i+\innerE{v_1}{v_1}_{i}&\text{ for } v_1,v_2\in H^1_{\kappa}(\omega_i).
\end{alignat*}
Next we define two subspaces $W_i\subset L^2_{ \widetilde{\kappa}}(\omega_i)$ and $V_i\subset H^1_{\kappa}(\omega_i)$ of codimension one by
\[
W_i:=\{v\in L^2_{ \widetilde{\kappa}}(\omega_i):\int_{\omega_i}\widetilde{\kappa}v\;\dx=0\}
\quad \mbox{and}\quad
V_i:=\{v\in H^1_{\kappa}(\omega_i):\int_{\omega_i}\widetilde{\kappa}v\;\dx=0\}.
\]
Furthermore, we introduce the following weighted Sobolev spaces:
\begin{align*}
L_{{\kappa}^{-1}}^{2}(\omega_i):=&\Big\{w:\|w\|_{L^2_{\kappa^{-1}(\omega_i)}}^2:=\int_{\omega_i}{\kappa}^{-1} w^2\dx<\infty \Big\},\\
H_{\kappa,0}^{1}(\omega_i):=&\Big\{w: w|_{\partial{\omega_i}}=0\text{ s.t. }\seminormE{w}{\omega_i}^2:=\int_{\omega_i}\kappa |\nabla w|^2\dx<\infty \Big\}.
\end{align*}
Similarly, we define the following weighted Sobolev spaces with their associated norms: $(L_{\widetilde{\kappa}^{-1}}^{2}(\omega_i),\|\cdot\|_{L^2_{\widetilde{\kappa}^{-1}(\omega_i)}})$, $(L_{{\kappa}^{-1}}^{2}(D),\|\cdot\|_{L^2_{\kappa^{-1}(D)}})$ and $(L_{\widetilde{\kappa}^{-1}}^{2}(D),\|\cdot\|_{L^2_{\widetilde{\kappa}^{-1}(D)}})$. The nonnegative weights $\widetilde{\kappa}$ and $\widetilde{\kappa}^{-1}$ will be defined in \eqref{defn:tildeKappa} and \eqref{eq:inv-tildeKappa} below, respectively.
Throughout, the superscripts $\si$, $\ti$ and $\hi$ refer to the two local
spectral spaces and the local harmonic space on $\omega_i$, respectively. Below we
describe the construction of the local multiscale basis functions on $\omega_i$.
\subsubsection*{Local spectral bases I}
To define the local spectral bases on $\omega_i$, we first introduce a local elliptic operator $\mathcal{L}_i$ on $\omega_i$ by
\begin{align}\label{eq:Li}
\left\{ \begin{aligned}
\mathcal{L}_i v&:=-\nabla\cdot(\kappa\nabla v)\quad \mbox{in }\omega_i,\\
\kappa\frac{\partial v}{\partial n}&=0\quad \mbox{on }\partial\omega_i.
\end{aligned}\right.
\end{align}
The Lax-Milgram theorem implies the well-posedness of the operator $\mathcal{L}_i:V_i\to V_i^*$,
the dual space $V_i^{*}$ of $V_i$.
Then the spectral problem can be formulated in terms of
$\mathcal{L}_i$, i.e., to seek $(\lambda_{j}^{\si}, v_{j}^{\si})\in \mathbb{R}\times V_i$ such that
\begin{alignat}{2}\label{eq:spectral}
\mathcal{L}_i v_{j}^{\si} &= \widetilde{\kappa}\lambda_{j}^{\si} v_{j}^{\si}
\quad &&\text{in} \, \, \, \omega_i,\\
\kappa\frac{\partial}{\partial n}v_{j}^{\si}&=0&&\text{ on } \partial \omega_i,\nonumber
\end{alignat}
where the parameter $\widetilde\kappa$ is defined by
\begin{equation}\label{defn:tildeKappa}
\widetilde{\kappa} =H^2 \kappa \sum_{i=1}^{N} | \nabla \chi_i |^2,
\end{equation}
with the multiscale function $\chi_i$ to be defined in \eqref{pou} below. Note that the use of $\widetilde{\kappa}$ in the local spectral problem \eqref{eq:spectral} instead of $\kappa$ is motivated by numerical considerations \cite{EFENDIEV2011937}. Furthermore, let $\widetilde{\kappa}^{-1}$ be defined by
\begin{equation}\label{eq:inv-tildeKappa}
\widetilde{\kappa}^{-1}(x)=
\left\{
\begin{aligned}
&\widetilde{\kappa}(x)^{-1}, \quad &&\text{ when } \widetilde{\kappa}(x)\ne 0,\\
&1, \quad &&\text{ otherwise}.
\end{aligned}
\right.
\end{equation}
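To illustrate \eqref{defn:tildeKappa}, consider the one-dimensional case with standard hat functions as the partition of unity (an assumption made only for this sketch; the GMsFEM employs the multiscale partition of unity $\chi_i$ of \eqref{pou}): each coarse element then supports exactly two $\chi_i$ with $|\nabla\chi_i|=1/H$, so $\widetilde{\kappa}=2\kappa$ elementwise.

```python
import numpy as np

def kappa_tilde_1d(kappa_elem, H):
    """Evaluate tilde(kappa) = H^2 * kappa * sum_i |grad chi_i|^2
    per coarse element for hat-function partition of unity on (0, 1).

    kappa_elem : kappa sampled per coarse element (n = 1/H elements).
    Each element supports exactly two hat functions chi_i, each with
    |grad chi_i| = 1/H, so the sum equals 2/H^2 on every element.
    """
    grad_sq_sum = 2.0 / H**2 * np.ones_like(kappa_elem)
    return H**2 * kappa_elem * grad_sq_sum

H = 0.1
kappa_elem = np.where(np.arange(10) % 2 == 0, 1.0, 1e6)  # high contrast
kt = kappa_tilde_1d(kappa_elem, H)   # equals 2 * kappa elementwise here
```

In particular, $\widetilde{\kappa}$ inherits the contrast of $\kappa$, while the factor $H^2$ cancels the $H^{-2}$ scaling of the partition-of-unity gradients.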
\begin{remark}
Generally, one cannot preclude the existence of critical points from
the multiscale basis functions $\chi_i$
\cite{MR1289138,alberti2017critical}. In the two-dimensional case, it was proved
that there are at most a finite number of isolated critical points.
To simplify our presentation, we will assume $|D\cap\{\widetilde{\kappa}=0\}|=0$.
\end{remark}
The next result gives the eigenvalue behavior of the local spectral problem \eqref{eq:spectral}.
\begin{theorem}\label{lem:eigenvalue-blowup}
Let $\{(\lambda_j^{\si},v_j^{\si})\}_{j=1}^{\infty}$ be the eigenpairs of the spectral problem \eqref{eq:spectral}, with the eigenfunctions normalized in $W_i$, listed according to their algebraic
multiplicities, and with the eigenvalues ordered nondecreasingly. Then there holds
\begin{align}\label{eq:spectral_eigenvalue}
\lambda_j^{\si}\to \infty\quad \text{ as } j\to \infty.
\end{align}
\end{theorem}
To prove Theorem \ref{lem:eigenvalue-blowup}, we need some notation. Let $\mathcal{S}_i:=\mathcal{L}_i^{-1}: V^{*}_i\to V_i$
be the inverse of the elliptic operator $\mathcal{L}_i$.
Let $T:W_i\to L^2_{\widetilde{\kappa}^{-1}}(\omega_i)$ be the multiplication operator defined by
\begin{align}\label{eq:T}
Tv:=\widetilde{\kappa}v \quad\text{ for all }\quad v\in W_i.
\end{align}
One can show directly from the definition that $T$ is a bounded operator with unit norm. Moreover, there holds
\[
\int_{\omega_i}Tv\;\dx=0 \quad\text{ for all }v\in W_i.
\]
Thus the range of $T$, $\mathcal{R}(T)$, is a subspace in $L^2_{\widetilde{\kappa}^{-1}}(\omega_i)$ with codimension one, and we have
\begin{align}\label{eq:R(T)}
\mathcal{R}(T)\hookrightarrow V_i^{*}.
\end{align}
For the proof of Theorem \ref{lem:eigenvalue-blowup}, we need the following compact embedding result.
\begin{lemma}\label{lem:embedding}
$V_i$ is compactly embedded into $W_i$, i.e.,
$V_i \hookrightarrow\hookrightarrow W_i.$
\end{lemma}
\begin{proof}
By Remark \ref{rem:chi}, the uniform boundedness of $\kappa$, the definition of $\widetilde{\kappa}$
and the overlap condition \eqref{eq:overlap}, we obtain the boundedness of $\widetilde{\kappa}$, i.e.,
\begin{align}\label{eq:upper_tilde}
\|\widetilde{\kappa}\|_{L^{\infty}(D)}\leq \Cov(HC_{0})^2\|\kappa\|_{L^{\infty}(D)}\leq \Cov(HC_{0})^2\beta.
\end{align}
Hence, there holds the following embedding inequalities:
\[
L^2_{\widetilde{\kappa}^{-1}}(\omega_i)\hookrightarrow L^2(\omega_i)\hookrightarrow L^2_{\widetilde{\kappa}}(\omega_i).
\]
This, the classical Sobolev embedding \cite{adams2003sobolev} and boundedness of $\kappa$ imply
the compactness of the embedding $V_i\hookrightarrow\hookrightarrow L^2(\omega_i)$ and thus, we
finally arrive at $V_i \hookrightarrow\hookrightarrow W_i$. This completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{lem:eigenvalue-blowup}]
By \eqref{eq:R(T)}, the multiplication operator $T: W_i\to V_i^*$ is bounded.
Similarly, the operator $\mathcal{S}_i:V_i^*\to W_i$ is compact, in view of Lemma \ref{lem:embedding}.
Let $\widetilde{\mathcal{S}}_i:=\mathcal{S}_i T$.
Then the operator $\widetilde{\mathcal{S}}_i:W_i\to W_i$ is nonnegative and {compact}.
Now we claim that $\widetilde{\mathcal{S}}_i$ is self-adjoint on $W_i$.
Indeed, for all $v,w\in W_i$, we have
\begin{align*}
(\widetilde{\mathcal{S}}_i v, w)_i&=(\mathcal{S}_i Tv, w)_i=\int_{\omega_i}\widetilde{\kappa}\mathcal{L}_i^{-1}(\widetilde{\kappa}v) w\;\dx\\
&=\int_{\omega_i}\mathcal{L}_i^{-1}(\widetilde{\kappa}v) (\widetilde{\kappa}w)\;\dx\\
&=(v,(\mathcal{S}_i T)w)_i=(v,\widetilde{\mathcal{S}}_iw)_i,
\end{align*}
where we have used the weak formulation for \eqref{eq:Li} to deduce
$
\int_{\omega_i}\mathcal{L}_i^{-1}(\widetilde{\kappa}v) (\widetilde{\kappa}w)\dx=\int_{\omega_i}(\widetilde{\kappa}v)\mathcal{L}_i^{-1} (\widetilde{\kappa}w)\dx.
$
By the standard spectral theory for compact operators \cite{yosida78}, the operator $\widetilde{\mathcal{S}}_i$ has at most countably many discrete eigenvalues, with zero being
the only possible accumulation point, and each nonzero eigenvalue has only finite multiplicity.
Noting that $\big\{\big((\lambda_j^{\si})^{-1},
v_j^{\si}\big)\big\}_{j=1}^{\infty}$ are the eigenpairs of $\widetilde{\mathcal{S}}_i$ completes the proof.
\end{proof}
Furthermore, by the construction, the eigenfunctions $\{ v_j^{\si}\}_{j=1}^{\infty}$
form a complete orthonormal basis (CONB) in $W_i$, and $\{(\lambda_j^{\si}+1)^{-1/2}{v_j^{\si}}\}_{j=1}^{\infty}$
forms a CONB in $V_i$. Further, we have $L^2_{\widetilde{\kappa}}(\omega_i)=W_i\oplus \{1\}$.
Hence, $\{ v_j^{\si}\}_{j=1}^{\infty}\oplus \{1\}$ is a complete orthogonal basis in
$L^2_{\widetilde{\kappa}}(\omega_i)$ \cite[Chapters 4 and 5]{laugesen}\footnote{We thank Richard S. Laugesen (University of Illinois, Urbana-Champaign) for clarifying the convergence in $H^1_{\kappa}(\omega_i)$.}.
\begin{lemma}\label{lem:L2Inv}
The sequence $\{ \widetilde{\kappa} v_j^{\si}\}_{j=1}^{\infty}\oplus \{\widetilde{\kappa}\}$ forms a complete
orthogonal basis in $L_{\widetilde{\kappa}^{-1}}^2(\omega_i)$.
\end{lemma}
\begin{proof}
First, we show that $\{ \widetilde{\kappa} v_j^{\si}\}_{j=1}^{\infty}\oplus \{\widetilde{\kappa}\}$ are orthogonal in $L_{\widetilde{\kappa}^{-1}}^2(\omega_i)$.
Indeed, by definition, we deduce that for all $j\in \mathbb{N}_{+}$
\begin{align*}
\int_{\omega_i}\widetilde{\kappa}^{-1}\widetilde{\kappa}\cdot \widetilde{\kappa} v_j^{\si}\dx
=\int_{\omega_i}\widetilde{\kappa} v_j^{\si}\dx=(v_j^{\si},1)_i=0.
\end{align*}
Meanwhile, for all $j,k\in \mathbb{N}_{+}$, there holds
\begin{align*}
\int_{\omega_i}\widetilde{\kappa}^{-1}\widetilde{\kappa}v_k^{\si}\cdot \widetilde{\kappa} v_j^{\si}\dx
=\int_{\omega_i}\widetilde{\kappa} v_j^{\si}\cdot v_k^{\si}\dx=(v_j^{\si},v_k^{\si})_i=\delta_{j,k}.
\end{align*}
Next we show that $\{ \widetilde{\kappa} v_j^{\si}\}_{j=1}^{\infty}\oplus \{\widetilde{\kappa}\}$ are complete in $L_{\widetilde{\kappa}^{-1}}^2(\omega_i)$.
Indeed, for any $v\in L_{\widetilde{\kappa}^{-1}}^2(\omega_i)$ such that
\begin{equation}\label{eq:9}
\begin{aligned}
\int_{\omega_i}\widetilde{\kappa}^{-1}v\cdot \widetilde{\kappa}\dx=0\quad
\text{and }\quad \forall j\in \mathbb{N}_{+}:
\int_{\omega_i}\widetilde{\kappa}^{-1}v\cdot \widetilde{\kappa} v_j^{\si}\dx=0,
\end{aligned}
\end{equation}
we deduce directly from the definition that
\begin{align*}
\int_{\omega_i}\widetilde{\kappa}(\widetilde{\kappa}^{-1}v)^2\dx
=\int_{\omega_i\cap\{\widetilde{\kappa}\ne 0\}}\widetilde{\kappa}^{-1}v^2\dx<\infty.
\end{align*}
This implies that $\widetilde{\kappa}^{-1}v\in L^2_{\widetilde{\kappa}}(\omega_i)$.
Furthermore, \eqref{eq:9} indicates that $\widetilde{\kappa}^{-1}v$
is orthogonal to the complete orthogonal basis $\{ v_j^{\si}\}_{j=1}^{\infty}\oplus \{1\}$ of $L^2_{\widetilde{\kappa}}(\omega_i)$. Therefore, $v=0$, which completes the proof.
\end{proof}
\begin{remark}\label{rem:dual}
Since $L^2_{\widetilde{\kappa}^{-1}}(\omega_i)$ is a Hilbert space, we can identify its dual with itself, and there exists an isometry between $L^2_{\widetilde{\kappa}}(\omega_i)$ and $L^2_{\widetilde{\kappa}^{-1}}(\omega_i)$, e.g., the operator $T$ in \eqref{eq:T}. We identify $L^2_{\widetilde{\kappa}}(\omega_i)$ as the dual of $L^2_{\widetilde{\kappa}^{-1}}(\omega_i)$.
\end{remark}
Now we define the local spectral basis functions on $\omega_i$ for all
$i=1,\cdots, N$. Let $\ell_i^{\roma}\in \mathbb{N}_{+}$ be a prespecified number, denoting the number of local
basis functions associated with $\omega_i$. We take the eigenfunctions corresponding to the first
$(\ell_i^{\roma}-1)$ smallest eigenvalues for problem \eqref{eq:spectral} in addition to the
kernel of the elliptic operator $\mathcal{L}_i$, namely, $\{1\}$, to construct the local spectral offline space:
\[
V_{\text{off}}^{\text{S}_i,\ell_i^{\roma}}= \text{span}\{ v_{j}^{\si}:~~ 1\leq j <\ell_i^{\roma}\}\oplus \{1\}.
\]
Then $\dim(V_{\text{off}}^{\text{S}_i,\ell_i^{\roma}})=\ell_i^{\roma}$. The choice of the truncation number
$\ell_i^{\roma}\in \mathbb{N}_{+}$ is determined by the eigenvalue decay rate or the presence of
a spectral gap. The space $V_{\text{off}}^{\text{S}_i,\ell_i^{\roma}}$ allows defining a finite-rank projection
operator $\mathcal{P}^{\si,\ell_i^{\roma}}: L^2_{\widetilde{\kappa}}(\omega_i)\to V_{\text{off}}^{\text{S}_i,
\ell_i^{\roma}}$ by (with the constant $c_0=\big(\int_{\omega_i}\widetilde{\kappa} \dx \big)^{-1}$):
\begin{align}\label{eq:FR_spec}
\mathcal{P}^{\si,\ell_i^{\roma}}v=c_0(v,1)_i+\sum\limits_{j=1}^{\ell_i^{\roma}-1}(v,v_j^{\si})_i v_j^{\si}\ \ \text{ for all } v\in L_{\widetilde{\kappa}}^2(\omega_i).
\end{align}
The operator $\mathcal{P}^{\si,\ell_i^{\roma}}$ will play a role in the convergence analysis.
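Discretely, the local problem \eqref{eq:spectral} becomes a generalized matrix eigenproblem $Av=\lambda \widetilde{M}v$ with the Neumann stiffness matrix $A$ and the $\widetilde{\kappa}$-weighted mass matrix $\widetilde{M}$, and the projection \eqref{eq:FR_spec} truncates the resulting eigenvector expansion. A one-dimensional finite-difference sketch (an illustrative analogue under these assumptions, not the actual multi-dimensional implementation):

```python
import numpy as np
from scipy.linalg import eigh

def local_spectral_projection(kappa, kappa_t, h, ell, v):
    """Rank-ell spectral projection on a 1D coarse neighborhood.

    Discretizes -(kappa w')' = lam * kappa_t * w with Neumann boundary
    conditions (n nodes, spacing h) and projects v onto the span of the
    eigenvectors belonging to the ell smallest eigenvalues.
    kappa : per-cell conductivity (n-1,), kappa_t : per-node weight (n,).
    """
    n = len(kappa_t)
    A = np.zeros((n, n))
    for e in range(n - 1):               # assemble Neumann stiffness
        w = kappa[e] / h
        A[e, e] += w
        A[e + 1, e + 1] += w
        A[e, e + 1] -= w
        A[e + 1, e] -= w
    M = np.diag(kappa_t * h)             # lumped weighted mass matrix
    lam, V = eigh(A, M)                  # columns of V are M-orthonormal
    basis = V[:, :ell]                   # first ell eigenvectors
    return basis @ (basis.T @ M @ v), lam

n, h = 50, 1.0 / 49
kappa, kappa_t = np.ones(n - 1), np.ones(n)
v = np.cos(np.pi * np.linspace(0.0, 1.0, n))  # a smooth local mode
Pv, lam = local_spectral_projection(kappa, kappa_t, h, 3, v)
```

The smallest eigenvalue vanishes (constant eigenvector, the kernel of $\mathcal{L}_i$), and a smooth mode is captured well by a low-rank truncation, mirroring the role of $\mathcal{P}^{\si,\ell_i^{\roma}}$ in the analysis.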
\subsubsection*{Local spectral bases II: a Steklov eigenvalue problem}
The local Steklov eigenvalue problem is to seek $(\lambda_{j}^{\ti}, v_{j}^{\ti})\in \mathbb{R}\times H^1_{\kappa}(\omega_i)$ such that
\begin{alignat}{2}\label{eq:steklov}
-\nabla\cdot(\kappa\nabla v_{j}^{\ti}) &= 0 &&\quad\text{in} \, \, \, \omega_i,\\
\kappa\frac{\partial}{\partial n}v_{j}^{\ti}&=\lambda_{j}^{\ti} v_{j}^{\ti}&&\quad\text{ on } \partial \omega_i.\nonumber
\end{alignat}
It is well known that the eigenvalues of the Steklov eigenvalue problem blow up \cite{MR2770439}:
\begin{theorem}\label{lem:steklov-blowup}
Let $\{(\lambda_j^{\ti},v_j^{\ti})\}_{j=1}^{\infty}$ be the eigenpairs of the spectral problem \eqref{eq:steklov}, with the eigenfunctions normalized in $L^2(\partial\omega_i)$, listed according to their algebraic
multiplicities, and with the eigenvalues ordered nondecreasingly. Then there holds
\begin{align*}
\lambda_j^{\ti}\to \infty\quad \text{ as } j\to \infty.
\end{align*}
\end{theorem}
Note that $\lambda_1^{\ti}=0$ and $v_1^{\ti}$ is a constant. Furthermore, the sequence $\big\{v_j^{\ti}\big\}_{j=1}^{\infty}$ forms a complete orthonormal basis in $L^2(\partial\omega_i)$. Below we use the notation $(\cdot,\cdot)_{\partial\omega_i}$ to denote the inner product on $L^2(\partial\omega_i)$.
Similarly, we define a local spectral space of dimension $\ell_i^{\romb}$ and the associated $\ell_i^{\romb}$-rank projection operator:
\begin{align}
V_{\text{off}}^{\text{T}_i,\ell_i^{\romb}}&= \text{span}\{ v_{j}^{\ti}:~~ 1\leq j \leq\ell_i^{\romb}\},\nonumber\\
\mathcal{P}^{\ti,\ell_i^{\romb}}v
&=\sum\limits_{j=1}^{\ell_i^{\romb}}( v,v_j^{\ti})_{\partial\omega_i} v_j^{\ti}\ \ \text{ for all } v\in L^2(\partial\omega_i).\label{eq:steklov_spec}
\end{align}
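A discrete analogue of the Steklov problem \eqref{eq:steklov} is obtained by condensing the interior unknowns into the Dirichlet-to-Neumann (Schur complement) matrix and solving a generalized eigenproblem against the boundary mass matrix. The sketch below assumes $\kappa\equiv 1$ on a square neighborhood with a five-point discretization, purely for illustration:

```python
import numpy as np
from scipy.linalg import eigh

def steklov_eigs(n, h):
    """Discrete Steklov eigenpairs on an n-by-n grid (kappa = 1).

    Builds the 5-point graph Laplacian K, condenses the interior
    unknowns into the Dirichlet-to-Neumann Schur complement
    S = K_bb - K_bi K_ii^{-1} K_ib, and solves S v = lam * M_b v with
    the lumped boundary mass matrix M_b = h * I.
    """
    idx = np.arange(n * n).reshape(n, n)
    K = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            p = idx[i, j]
            for q in (idx[i - 1, j] if i > 0 else None,
                      idx[i + 1, j] if i < n - 1 else None,
                      idx[i, j - 1] if j > 0 else None,
                      idx[i, j + 1] if j < n - 1 else None):
                if q is not None:
                    K[p, p] += 1.0
                    K[p, q] -= 1.0
    on_bdry = np.zeros(n * n, dtype=bool)
    on_bdry[idx[0, :]] = on_bdry[idx[-1, :]] = True
    on_bdry[idx[:, 0]] = on_bdry[idx[:, -1]] = True
    bd, it = np.where(on_bdry)[0], np.where(~on_bdry)[0]
    S = K[np.ix_(bd, bd)] - K[np.ix_(bd, it)] @ np.linalg.solve(
        K[np.ix_(it, it)], K[np.ix_(it, bd)])
    return eigh(S, h * np.eye(len(bd)))

lam, V = steklov_eigs(n=12, h=1.0 / 11)
```

The smallest discrete eigenvalue is zero with a constant eigenvector, consistent with $\lambda_1^{\ti}=0$ noted above, and the eigenvalues grow, reflecting the blow-up of the Steklov spectrum.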
In addition to these local spectral basis functions defined in
Problems \eqref{eq:spectral} and \eqref{eq:steklov}, we need one more local basis function defined by the following local problem:
\begin{equation}\label{eq:1-basis}
\left\{
\begin{aligned}
-\nabla\cdot(\kappa\nabla v^{i})&=\frac{\widetilde{\kappa}}{\int_{\omega_i}\widetilde{\kappa}\dx} \quad&&\text{ in } \omega_i,\\
-\kappa\frac{\partial v^{i}}{\partial n}&=|\partial\omega_i|^{-1}\quad&&\text{ on }\partial \omega_i.
\end{aligned}
\right.
\end{equation}
Note that the approximation property of $V_{\text{off}}^{\text{S}_i,\ell_i^{\roma}}$, $V_{\text{off}}^{\text{T}_i,
\ell_i^{\romb}}$ to the local solution $u|_{\omega_i}$ is of great importance to the analysis of multiscale
methods \cite{melenk1996partition,EFENDIEV2011937}. We present relevant results in Section \ref{subsec:spectral} below.
\subsubsection*{Local harmonic extension bases}
This type of local multiscale basis is defined by local solves over $\omega_i$, whose number is
problem-dependent. The snapshot space can consist of all fine-scale finite element basis functions or of the
solutions of local problems with suitable choices of boundary conditions. In this work, we consider the following
$\kappa$-harmonic extensions to form the local multiscale space, which have been extensively used
in the literature. Specifically, given a fine-scale piecewise linear function $\delta_j^h(x)$ defined on
the boundary $\partial\omega_i$, let $\phi_{j}^{\hi}$ be the solution to the following
Dirichlet boundary value problem:
\begin{alignat}{2} \label{harmonic_ex}
-\nabla\cdot(\kappa(x) \nabla \phi_{j}^{\hi} ) &= 0
\quad &&\text{in} \, \, \, \omega_i,\\
\phi_{j}^{\hi}&=\delta_j^h &&\text{ on }\partial\omega_i,\nonumber
\end{alignat}
where $\delta_j^h$ is defined by $\delta_j^h(x_k):=\delta_{j,k}$ for all fine-grid boundary nodes $x_k$ with $k\in \textsl{J}_{h}(\omega_i)$, where $\delta_{j,k}$ denotes the Kronecker
delta symbol and $\textsl{J}_{h}(\omega_i)$ denotes the index set of all fine-grid nodes on $\partial\omega_i$.
Let $L_i$ be the number of the local multiscale functions on $\omega_i$. Then the local multiscale space $V^{\hi}_{\text{snap}}$ on $\omega_i$ is defined by
\begin{align}\label{eq:Vharmonic}
V^{\hi}_{\text{snap}}:=\text{span}\{\phi_j^{\hi}: \quad 1\leq j\leq L_i\}.
\end{align}
Its approximation property will be discussed in Section \ref{subsec:harmonic}.
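As an illustration, the $\kappa$-harmonic extension \eqref{harmonic_ex} can be sketched numerically by a simple finite difference relaxation. The grid, the nodal sampling of $\kappa$, the arithmetic edge averaging, and the Jacobi solver below are illustrative assumptions, not the discretization used in this work:

```python
import numpy as np

def harmonic_extension(kappa, g, iters=5000, tol=1e-12):
    """Jacobi relaxation for -div(kappa grad phi) = 0 on a square grid,
    with Dirichlet data g prescribed on the boundary nodes.

    kappa : (n, n) nodal samples of the coefficient (illustrative choice)
    g     : (n, n) array; only its boundary entries are used.
    """
    n = g.shape[0]
    phi = g.astype(float).copy()
    for _ in range(iters):
        new = phi.copy()
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # edge coefficients by arithmetic averaging (an illustrative choice)
                kN = 0.5 * (kappa[i, j] + kappa[i - 1, j])
                kS = 0.5 * (kappa[i, j] + kappa[i + 1, j])
                kW = 0.5 * (kappa[i, j] + kappa[i, j - 1])
                kE = 0.5 * (kappa[i, j] + kappa[i, j + 1])
                new[i, j] = (kN * phi[i - 1, j] + kS * phi[i + 1, j]
                             + kW * phi[i, j - 1] + kE * phi[i, j + 1]) / (kN + kS + kW + kE)
        if np.max(np.abs(new - phi)) < tol:
            phi = new
            break
        phi = new
    return phi

# Boundary data: the "Kronecker delta" nodal function at one boundary node.
n = 9
kappa = np.ones((n, n)); kappa[3:6, 3:6] = 1e4   # a high-contrast inclusion
g = np.zeros((n, n)); g[0, 4] = 1.0
phi = harmonic_extension(kappa, g)
```

By construction each interior update is a convex combination of neighboring values, so the discrete maximum principle $0\leq \phi_j^{\hi}\leq 1$ mentioned in Remark \ref{rem:chi} is visible in the iterates.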
\subsection*{Discrete POD}
One challenge associated with the local multiscale space $V^{\hi}_{\text{snap}}$ lies in the fact that its
dimension can be very large, i.e., $L_i\gg1$, when the coefficient $\kappa$ involves many scales. Thus, the discrete POD is often employed on $\omega_i$ to reduce
the dimension of $V^{\hi}_{\text{snap}}$, while maintaining a certain accuracy.
The discrete POD proceeds as follows. After obtaining a large number of local multiscale functions $\{\phi_{j}^{\hi}\}_{j=1}^{L_i}$, with $L_i\gg 1$, by solving the local problem \eqref{harmonic_ex}, we generate a problem-adapted subset of much smaller size from these basis functions by means of the singular
value decomposition, keeping only the left singular vectors corresponding to the largest singular values. The resulting low-dimensional linear subspace spanned by $\ell_i$ singular vectors is termed the offline space of rank $\ell_i$.
The auxiliary spectral problem in the construction is to find $( \lambda_j^{\hi}, v_j)\in \mathbb{R}\times \mathbb{R}^{L_i}$ for $1\leq j\leq L_i$ with the eigenvalues $\{\lambda_j^{\hi}\}_{j=1}^{L_i}$ in a nondecreasing order (with multiplicity counted) such that
\begin{align} \label{offeig}
A^{\text{off}} v_j& = \lambda_j^{\hi} S^{\text{off}} v_j,\\
(S^{\text{off}} v_j,v_j)_{\ell^2}&=1\nonumber.
\end{align}
The matrices $A^{\text{off}}, S^{\text{off}}\in \mathbb{R}^{L_i\times L_i}$ are respectively defined by
\begin{equation*}
A^{\text{off}} = [a_{mn}^{\text{off}}]\quad\text{ with }\quad a_{mn}^{\text{off}} = \int_{\omega_i} \kappa\nabla \phi_m^{\hi} \cdot \nabla \phi_n^{\hi}\dx \quad\text{ and }\quad
S^{\text{off}} = [s_{mn}^{\text{off}}]\quad\text{ with }\quad s_{mn}^{\text{off}} = \int_{\omega_i} \widetilde{\kappa}\, \phi_m^{\hi} \phi_n^{\hi}\dx .
\end{equation*}
Let $\mathbb{N}_{+}\ni \ell_i\leq L_i$ be a truncation number. Then we define the discrete POD-basis of rank $\ell_i$ by
\begin{align}\label{eq:pod-basis}
v_j^{\hi}:=\sum\limits_{k=1}^{L_i}(v_j)_{k}\phi_{k}^{\hi}\;\quad
\text{ for }j=1,\cdots,\ell_i,
\end{align}
with $(v_j)_{k}$ being the $k^{\text{th}}$ component of the eigenvector $v_j\in\mathbb{R}^{L_i}$. By the definition of the
discrete eigenvalue problem \eqref{offeig}, we have
\begin{align}\label{eq:podNorm}
(v_j^{\hi}, v_k^{\hi})_i =\delta_{jk} \quad \text{ and } \quad \int_{\omega_i}\kappa \nabla v_j^{\hi}\cdot\nabla v_k^{\hi}\dx =\lambda_j^{\hi}\delta_{jk} \qquad\text{ for all } 1\leq j,k\leq \ell_i.
\end{align}
The local offline space $V^{\text{H}_i,\ell_i}_{\text{off}}$ of rank $\ell_i$ is spanned by the first $\ell_i$
eigenvectors corresponding to the smallest eigenvalues for problem \eqref{offeig}:
\begin{align*}
V^{\text{H}_i,\ell_i}_{\text{off}} := \text{span}\left\{v_j^{\hi}: \quad 1\leq j\leq \ell_i \right\}.
\end{align*}
Analogously, we can define a rank-$\ell_i$ projection operator $\mathcal{P}^{\hi,\ell_i}: V_{\text{snap}}^{\hi}\to
V_{\text{off}}^{\hi,\ell_i}$ for all $\mathbb{N}_{+}\ni \ell_i\leq L_i$ by
\begin{equation}\label{eqn:proj-pod}
\mathcal{P}^{\hi,\ell_i}v=\sum\limits_{j=1}^{\ell_i}(v,v_j^{\hi})_i v_j^{\hi}\ \ \text{ for all } v\in V_{\text{snap}}^{\hi}.
\end{equation}
This projection is crucial to derive the error estimate for the discrete POD basis.
Its approximation property will be discussed in Section \ref{sec:discretePOD}.
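The construction \eqref{offeig}--\eqref{eq:pod-basis} amounts to a generalized symmetric-definite eigenproblem. The following sketch uses random SPD matrices as stand-ins for $A^{\text{off}}$ and $S^{\text{off}}$ (an assumption for illustration only) and reduces the pencil to a standard eigenproblem via a Cholesky factorization of $S^{\text{off}}$:

```python
import numpy as np

rng = np.random.default_rng(0)
L_i, ell_i = 12, 4   # number of snapshots and truncation rank (illustrative)

# Random SPD stand-ins for the snapshot matrices A_off (stiffness-like)
# and S_off (mass-like).
B = rng.standard_normal((L_i, L_i)); A_off = B @ B.T + L_i * np.eye(L_i)
C = rng.standard_normal((L_i, L_i)); S_off = C @ C.T + L_i * np.eye(L_i)

# Reduce A v = lam S v to a standard symmetric eigenproblem via S_off = R R^T:
# (R^{-1} A R^{-T}) y = lam y,  with v = R^{-T} y.
R = np.linalg.cholesky(S_off)                   # lower triangular factor
Rinv = np.linalg.inv(R)
lam, Y = np.linalg.eigh(Rinv @ A_off @ Rinv.T)  # eigenvalues in ascending order
V = Rinv.T @ Y                                  # S_off-orthonormal eigenvectors

# Discrete POD basis of rank ell_i: coefficient vectors of the smallest modes.
V_pod = V[:, :ell_i]
```

The columns of `V_pod` are $S^{\text{off}}$-orthonormal and $A^{\text{off}}$-orthogonal, which is the discrete analogue of the orthogonality relations \eqref{eq:podNorm}.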
\subsection{Galerkin approximation}
\label{globcoupling}
Next we define three types of global multiscale basis functions from the local multiscale basis functions introduced in
Section \ref{locbasis}, using partition of unity functions subordinate to the set of coarse neighborhoods $\{\omega_i\}_{i=1}^N$.
This gives rise to three multiscale methods for solving Problem \eqref{eqn:pde} that approximate the exact solution $u$ (or
the fine-scale solution $u_h$) reasonably well.
We begin with an initial coarse space $V^{\text{init}}_0 = \text{span}\{ \chi_i \}_{i=1}^{N}$.
The functions $\chi_i$ are the standard multiscale basis functions on each coarse element $K\in \mathcal{T}^{H}$ defined by
\begin{alignat}{2} \label{pou}
-\nabla\cdot(\kappa(x)\nabla\chi_i) &= 0 &&\quad\text{ in }\;\;K, \\
\chi_i &= g_i &&\quad\text{ on }\partial K, \nonumber
\end{alignat}
where $g_i$ is affine over $\partial K$ with $g_i(O_j)=\delta_{ij}$ for all $i,j=1,\cdots, N$. Recall that $\{O_j\}_{j=1}^{N}$ is the set of coarse nodes of $\mathcal{T}^{H}$.
\begin{remark}[Properties of $\chi_i$]\label{rem:chi}
The definition \eqref{pou} implies that $\text{supp}(\chi_i)=\omega_i$. Thus, we have
\begin{align}\label{eq:chi_supp}
\chi_i=0\quad \text{ on }\partial \omega_i.
\end{align}
Furthermore, the maximum principle implies
$0\leq \chi_i\leq 1.$
Note that under Assumption \ref{ass:coeff}, the gradients of the multiscale basis functions
$\{\chi_i\}$ are uniformly bounded \cite[Corollary 1.3]{li2000gradient}:
\begin{align}\label{eq:gradientChi}
\|\nabla\chi_i\|_{L^{\infty}(\omega_i)}\leq C_0,
\end{align}
where the constant $C_0$ depends on $D$, the size and shape of $D_j$ for $j=1,\cdots,m$,
the space dimension $d$ and the coefficient $\kappa$, but it
is independent of the distances between the inclusions $D_k$ and $D_j$ for $k,j=1,\cdots, m$.
It is worth noting that the precise dependence of the constant $C_0$ on $\kappa$ is still
unknown. However, when the contrast $\Lambda=\infty$, it is
known that the constant $C_0$ will blow up as two inclusions approach each other, for
which the problem reduces to the perfect or insulated conductivity problem
\cite{bao2010gradient}. Such extreme cases are beyond the scope of the present work.
The constant $C_0$ also depends on the coarse grid size $H$, with a possible scaling of $H^{-1}$.
\end{remark}
Since the functions $\{\chi_i\}_{i=1}^{N}$ form a partition of unity subordinate
to $\{\omega_i\}_{i=1}^{N}$, we can construct global
multiscale basis functions from the local multiscale basis functions discussed in Section \ref{locbasis}
\cite{melenk1996partition,EFENDIEV2011937}. Specifically, the global multiscale spaces $V_{\text{off}}^{\text{S}}$,
$V_{\text{snap}}$ and $V_{\text{off}} ^{\text{H}}$ are respectively defined by
\begin{equation}\label{eq:globalBasis}
\begin{aligned}
V_{\text{off}}^{\text{S}} &:= \text{span} \{ \chi_i v_j^{\si},\chi_i v_{k}^{\ti},\chi_i v^{i}: \, \, 1 \leq i \leq N,\,\,\, 1 \leq j \leq \ell_i^{\roma} \text{ and } 1 \leq k \leq \ell_i^{\romb} \text{ with }\ell_i^{\roma}+\ell_i^{\romb}=\ell_i-1\},
\\
V_{\text{snap}} &:= \text{span}\{ \chi_i\phi_{j}^{ \hi}:~~~ 1\leq i\leq N \text{ and }1\leq j \leq {L_i} \},\\
V_{\text{off}} ^{\text{H}} &:= \text{span} \{ \chi_i v_j^{\hi}: \, \, 1 \leq i \leq N \, \, \, \text{and} \, \, \, 1 \leq j \leq \ell_i \}.
\end{aligned}
\end{equation}
Accordingly, the Galerkin approximations to Problem \eqref{eqn:pde} read:
find $u_{\text{off}}^{\text{S}}\in V_{\text{off}}^{\text{S}}$, $u_{\text{snap}}\in V_{\text{snap}}$ and
$u_{\text{off}}^{\text{H}}\in V_{\text{off}}^{\text{H}}$ such that
\begin{align}
a(u_{\text{off}}^{\text{S}}, v) &= (f, v)_{D} \quad\text{for all} \,\,\, v \in V_{\text{off}}^{\text{S}},\label{cgvarform_spectral}\\
a(u_{\text{snap}}, v) &= (f, v)_{D} \quad\text{for all} \,\,\, v \in V_{\text{snap}},\label{cgvarform_snap}\\
a(u_{\text{off}}^{\text{H}}, v) &= (f, v)_{D} \quad \text{for all} \,\,\, v \in V_{\text{off}}^{\text{H}}.\label{cgvarform_pod}
\end{align}
Note that, by its construction, we have the inclusion relation $V_{\text{off}}^{\text{H}}\subset V_{\text{snap}}$ for all
$1\leq \ell_i\leq L_i$ with $i=1,2,\cdots, N$. Hence, the Galerkin orthogonality property \cite[Corollary 2.5.10]{MR2373954} implies
\[
\seminormE{u-u_{\text{off}}^{\text{H}}}{D}^2= \seminormE{u-u_{\text{snap}}}{D}^2+\seminormE{u_{\text{snap}}-u_{\text{off}}^{\text{H}}}{D}^2.
\]
Furthermore, we will prove in Section \ref{sec:discretePOD} that
$u_{\text{off}}^{\text{H}}\to u_{\text{snap}}$ in $ H^1_0(D),$
and the convergence rate is determined by $\max_{i=1,\cdots,N}\big\{(H^2\lambda_{\ell_i+1}^{\hi})^{-1/2}\big\}$.
The main goal of this work is to derive bounds on the errors $\seminormE{u-u_{\text{off}}^{\text{S}}}{D}$,
$\seminormE{u-u_{\text{snap}}}{D}$ and $\seminormE{u-u_{\text{off}}^{\text{H}}}{D}$. This
will be carried out in Section \ref{sec:error} below.
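Before turning to the analysis, the Galerkin systems \eqref{cgvarform_spectral}--\eqref{cgvarform_pod} and the Pythagoras identity for the nested spaces $V_{\text{off}}^{\text{H}}\subset V_{\text{snap}}$ can be sketched in finite dimensions. The SPD matrix and the random basis columns below are illustrative stand-ins for the stiffness matrix and the multiscale basis functions, not the actual discretization:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # SPD stand-in for the stiffness matrix
f = rng.standard_normal(n)
u = np.linalg.solve(A, f)            # reference ("exact") discrete solution

def galerkin(Phi):
    """Galerkin approximation in the span of the columns of Phi:
    solve the projected system Phi^T A Phi uc = Phi^T f."""
    uc = np.linalg.solve(Phi.T @ A @ Phi, Phi.T @ f)
    return Phi @ uc

Phi_snap = rng.standard_normal((n, 10))   # stand-in for the snapshot basis
Phi_off = Phi_snap[:, :4]                 # nested offline subspace (POD-like)
u_snap, u_off = galerkin(Phi_snap), galerkin(Phi_off)

energy = lambda v: float(v @ A @ v)       # squared energy (semi)norm
# Nestedness V_off ⊂ V_snap gives the Pythagoras identity
# |u - u_off|^2 = |u - u_snap|^2 + |u_snap - u_off|^2.
lhs = energy(u - u_off)
rhs = energy(u - u_snap) + energy(u_snap - u_off)
```

The identity holds because the Galerkin solution is the $A$-orthogonal projection of $u$ onto the trial space, so the error in the larger space is $A$-orthogonal to every element of the smaller one.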
\section{Error estimates}\label{sec:error}
This section is devoted to the energy error estimates for the multiscale approximations.
The general strategy is as follows. First, we derive approximation properties to
the local solution $u|_{\omega_i}$, for the local multiscale spaces $V_{\text{off}}^{\text{S}_i,
\ell_i^{\roma}}$, $V_{\text{off}}^{\text{T}_i,\ell_i^{\romb}}$, $V_{\text{snap}}^{\text{H}_i}$
and $V_{\text{off}}^{\text{H}_i,\ell_i}$. Then we combine these local estimates together
with partition of unity functions to establish the desired global energy error estimates.
\subsection{Approximation error of the spectral bases}\label{subsec:spectral}
Note that the solution $u$ satisfies the following equation
\begin{equation*}
\left\{
\begin{aligned}
-\nabla\cdot(\kappa\nabla u)&=f \quad&&\text{ in } \omega_i,\\
-\kappa\frac{\partial u}{\partial n}&=-\kappa\frac{\partial u}{\partial n}\quad&&\text{ on }\partial \omega_i,
\end{aligned}
\right.
\end{equation*}
which can be split into three parts, namely
\begin{align}\label{eq:decomp}
u|_{\omega_i}=u^{i,\roma}+u^{i,\romb}+u^{i,\romc}.
\end{align}
Here, the three components $u^{i,\roma}$, $u^{i,\romb}$, and $u^{i,\romc}$ are respectively given by
\begin{equation}\label{eq:u-roma1}
\left\{
\begin{aligned}
-\nabla\cdot(\kappa\nabla u^{i,\roma})&=f-\bar{f}_i \quad&&\text{ in } \omega_i\\
-\kappa\frac{\partial u^{i,\roma}}{\partial n}&=0\quad&&\text{ on }\partial \omega_i,
\end{aligned}
\right.
\end{equation}
where $\bar{f}_i:=\Big(\int_{\omega_i}f\dx\Big)\frac{\widetilde{\kappa}}{\int_{\omega_i}\widetilde{\kappa}\dx}$,
\begin{equation*}
\left\{
\begin{aligned}
-\nabla\cdot(\kappa\nabla u^{i,\romb})&=0 \quad&&\text{ in } \omega_i\\
-\kappa\frac{\partial u^{i,\romb}}{\partial n}&=\kappa\frac{\partial u}{\partial n}-\dashint_{\partial\omega_i}\kappa\frac{\partial u}{\partial n}\quad&&\text{ on }\partial \omega_i,
\end{aligned}
\right.
\end{equation*}
and
\[
u^{i,\romc}=v^{i}\int_{\omega_i}f\dx
\]
with $v^i$ being defined in \eqref{eq:1-basis}. Clearly,
$u^{i,\romc}$ involves only one local solver.
We begin with an {\em a priori} estimate on $u^{i,\romb}$.
\begin{lemma}The following {\em a priori} estimate holds:
\begin{align}\label{eq:u2-bound}
\seminormE{u^{i,\romb}}{\omega_i}\leq \seminormE{u}{\omega_i}+H\Cpoin{\omega_i}^{1/2}\|f\|_{L^2_{\kappa^{-1}}(\omega_i)}.
\end{align}
\end{lemma}
\begin{proof}
Let $\widetilde{u}:=u^{i,\roma}+u^{i,\romc}$. Then it satisfies
\begin{equation*}\label{eq:u-roma}
\left\{
\begin{aligned}
-\nabla\cdot(\kappa\nabla \widetilde{u})&=f \quad&&\text{ in } \omega_i,\\
-\kappa\frac{\partial \widetilde{u}}{\partial n}&=\frac{1}{{|\partial\omega_i|}}\int_{\omega_i}f\;\dx\quad&&\text{ on }\partial \omega_i.
\end{aligned}
\right.
\end{equation*}
To make the solution unique, we require $\int_{\partial\omega_i}\widetilde{u}\;{\rm d}s=0$.
Testing the first equation with $\widetilde{u}$ gives
\begin{align*}
\seminormE{\widetilde{u}}{\omega_i}^2=\int_{\omega_i}f\widetilde{u}\;\dx.
\end{align*}
Now the Poincar\'{e} inequality \eqref{eq:poinConstant} and H\"{o}lder's inequality lead to
\begin{align*}
\seminormE{\widetilde{u}}{\omega_i}^2\leq \|f\|_{L^2_{\kappa^{-1}}(\omega_i)}\|\widetilde{u}\|_{L^2_{\kappa}(\omega_i)}
\leq H\Cpoin{\omega_i}^{1/2}\|f\|_{L^2_{\kappa^{-1}}(\omega_i)}\seminormE{\widetilde{u}}{\omega_i}.
\end{align*}
Therefore, we obtain
\begin{align*}
\seminormE{\widetilde{u}}{\omega_i}\leq H\Cpoin{\omega_i}^{1/2}\|f\|_{L^2_{\kappa^{-1}}(\omega_i)}.
\end{align*}
Finally, the desired result follows from the triangle inequality.
\end{proof}
Since $u^{i,\roma}\in L^{2}_{\widetilde{\kappa}}(\omega_i)$ and
$u^{i,\romb}\in L^{2}(\partial\omega_i)$,
and the systems $\{v_j^{\si}\}_{j=1}^{\infty}\oplus \{1\}$ and $\{v_j^{\ti}\}_{j=1}^{\infty}$ form complete orthogonal bases in $L^{2}_{\widetilde{\kappa}}(\omega_i)$ and $L^{2}(\partial\omega_i)$, respectively, $u^{i,\roma}$ and $u^{i,\romb}$ admit the following decompositions:
\begin{align}
u^{i,\roma}&=c_0(u^{i,\roma},1)_i+\sum\limits_{j=1}^{\infty}(u^{i,\roma},v_j^{\si})_i v_j^{\si},\label{eq:spectralU}\\
u^{i,\romb}&=\sum\limits_{j=1}^{\infty}(u^{i,\romb},v_j^{\ti})_{\partial\omega_i} v_j^{\ti}.
\label{eq:spectralu2}
\end{align}
For any $n\in \mathbb{N}_{+}$, we employ the $n$-term truncation $u^{i,\roma}_n$ and $u^{i,\romb}_n$ to approximate $u^{i,\roma}$ and $u^{i,\romb}$, respectively,
on $\omega_i$:
\begin{align*}
u^{i,\roma}_n:=\mathcal{P}^{\si,n}u^{i,\roma}\in V_{\text{off}}^{\text{S}_i,n}
\quad \mbox{and}\quad
u^{i,\romb}_n:=\mathcal{P}^{\ti,n}u^{i,\romb}\in V_{\text{off}}^{\text{T}_i,n}.
\end{align*}
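The $n$-term truncations above amount to keeping the first $n$ expansion coefficients in an orthonormal basis. The sketch below uses a discrete cosine basis as a stand-in for the eigenfunctions $\{v_j^{\si}\}$ (an illustrative assumption) and verifies the Parseval identity on which the truncation error analysis rests:

```python
import numpy as np

m = 200
x = (np.arange(m) + 0.5) / m
# Orthonormal DCT-II basis on m points: a stand-in for the eigenbasis {v_j}.
V = np.array([np.ones(m) / np.sqrt(m)] +
             [np.sqrt(2.0 / m) * np.cos(np.pi * k * x) for k in range(1, m)])

u = 1.0 / (1.0 + 25.0 * (x - 0.4) ** 2)    # a smooth test "local solution"
c = V @ u                                   # expansion coefficients (u, v_j)

def truncate(n):
    """n-term truncation: keep the first n expansion coefficients."""
    return V[:n].T @ c[:n]

# Parseval: the squared l2 truncation error equals the coefficient tail sum.
err2 = np.sum((u - truncate(20)) ** 2)
tail2 = np.sum(c[20:] ** 2)
```

In particular, the truncation error is nonincreasing in $n$, and its decay is governed entirely by the decay of the tail coefficients, mirroring the role of $\lambda_{\ell_i^{\roma}}^{\si}$ in the estimates below.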
\begin{lemma}\label{lem:assF}
Assume that $f\in L^2_{\widetilde{\kappa}^{-1}}(D)$. Then there holds
\begin{align}\label{eq:f_norm}
\|f-\bar{f}_i\|_{L^{2}_{\widetilde{\kappa}^{-1}}(\omega_i)}^2
=\sum\limits_{j=1}^{\infty}\Big(\lambda_j^{\text{S}_i}\Big)^2 \Big|(u^{i,\roma},v_{j}^{\si})_i\Big|^2<\infty.
\end{align}
\end{lemma}
\begin{proof}
Since $f\in L^2_{\widetilde{\kappa}^{-1}}(D)$, by Lemma \ref{lem:L2Inv}, $f-\bar f_i$ admits the following spectral decomposition:
\begin{align}\label{eq:99}
f-\bar{f}_i=\Big(\int_{\omega_i}\widetilde{\kappa}\dx\Big)^{-1}
\Big(\int_{\omega_i}
(f-\bar{f}_i)\;\dx\Big)\widetilde{\kappa}
+\sum\limits_{j=1}^{\infty}
\Big(\int_{\omega_i}(f-\bar{f}_i)
v_j^{\si}\dx\Big)\widetilde{\kappa}v_j^{\si}.
\end{align}
By the definition of $\bar f_i$, the first term vanishes.
Thus, it suffices to compute the $j^{\text{th}}$ expansion coefficient
$\int_{\omega_i}(f-\bar{f}_i)v_j^{\si}\dx$ for $j=1,2,\cdots$, which follows from \eqref{eq:u-roma1}.
Indeed, testing \eqref{eq:u-roma1} with $v_j^{\si}$ yields
\begin{align*}
\int_{\omega_i}\Big(f-\bar{f}_i\Big)v_j^{\si}\dx
&=\int_{\omega_i}\kappa\nabla u^{i,\roma}\cdot\nabla v_j^{\si}\dx=\lambda_{j}^{\si}\int_{\omega_i}\widetilde{\kappa}u^{i,\roma} v_j^{\si}\dx
=\lambda_{j}^{\si}(u^{i,\roma}, v_j^{\si})_i.
\end{align*}
Then the assertion \eqref{eq:f_norm} follows from Parseval's identity in $L^{2}_{\widetilde{\kappa}^{-1}}(\omega_i)$.
\end{proof}
Now we state an important approximation property of the operator $\mc{P}^{\si,\ell_i^{\roma}}$ of rank $\ell_i^{\roma}$ defined in \eqref{eq:FR_spec}.
\begin{proposition}\label{prop:projection}
Assume that $f\in L^2_{\widetilde{\kappa}^{-1}}(D)$ and $\ell_i^{\roma}\in \mathbb{N}_+$. Let
$u^{i,\roma}$ be the first component in \eqref{eq:decomp}. Then the projection
$\mc{P}^{\si,\ell_i^{\roma}}: L^2_{\widetilde{\kappa}}(\omega_i)\to V_{{\rm off}}^{\mathrm{S}_i,\ell_i^{\roma}}$ of rank
$\ell_i^{\roma}$ defined in \eqref{eq:FR_spec} has the following approximation properties:
\begin{align}
\normLT{u^{i,\roma}-\mc{P}^{\si,\ell_i^{\roma}}u^{i,\roma}}{\omega_i}&
\leq (\lambda_{\ell_i^{\roma}}^{\si})^{-1}\normLii{f}{\omega_i},\label{eq:3333}\\
\seminormE{u^{i,\roma}-\mc{P}^{\si,\ell_i^{\roma}}u^{i,\roma}}{\omega_i}&\leq ({\lambda_{\ell_i^{\roma}}^{\si}})^{-\frac12}\normLii{f}{\omega_i}. \label{eq:4444}
\end{align}
\end{proposition}
\begin{proof}
The definitions \eqref{eq:spectralU} and \eqref{eq:FR_spec}, and the orthonormality of $\{v_j^{\si}\}_{j=1}^{\infty}\oplus\{1\}$
in $L^2_{\widetilde{\kappa}}(\omega_i)$ directly yield
\begin{align*}
\normLT{u^{i,\roma}-\mc{P}^{\si,\ell_i^{\roma}}u^{i,\roma}}{\omega_i}^2
&=\sum\limits_{j=\ell_i^{\roma}}^{\infty}(u^{i,\roma},v_j^{\si})_i^2
=\sum\limits_{j=\ell_i^{\roma}}^{\infty}(\lambda_j^{\si})^{-2}
(\lambda_j^{\si})^{2}(u^{i,\roma},v_j^{\si})_i^2\\
&\leq (\lambda_{\ell_i^{\roma}}^{\si})^{-2}\sum\limits_{j=\ell_i^{\roma}}^{\infty}
(\lambda_j^{\si})^{2}(u^{i,\roma},v_j^{\si})_i^2\\
&\leq (\lambda_{\ell_i^{\roma}}^{\si})^{-2}\normLii{f-\bar{f}_i}{\omega_i}^2,
\end{align*}
where in the last step we have used \eqref{eq:f_norm}.
Next, since the first term in the expansion
\eqref{eq:99} vanishes, we deduce that $f-\bar{f}_i$ is the $L^2_{\widetilde{\kappa}^{-1}}(\omega_i)$-orthogonal projection
of $f$ onto the orthogonal complement of $\mathrm{span}\{\widetilde{\kappa}\}$ in $L^2_{\widetilde{\kappa}^{-1}}(\omega_i)$.
Thus,
\begin{align*}
\normLii{f-\bar{f}_i}{\omega_i}\leq \normLii{f}{\omega_i}.
\end{align*}
Plugging this inequality into the preceding estimate, we arrive at
\begin{align*}
\normLT{u^{i,\roma}-\mc{P}^{\si,\ell_i^{\roma}}u^{i,\roma}}{\omega_i}^2
\leq(\lambda_{\ell_i^{\roma}}^{\si})^{-2}\normLii{f}{\omega_i}^2.
\end{align*}
Taking the square root yields the first estimate. The second estimate can be derived in a similar manner.
\end{proof}
Next we give the approximation property of the finite-rank operator $\mc{P}^{\ti,\ell_i^{\romb}}$ applied to
the second component $u^{i,\romb}$ of the local solution, which relies on the regularity theory for
very weak solutions presented in the appendix.
\begin{lemma}\label{lemma:u2}
Let $\ell_i^{\romb}\in \mathbb{N}_+$ and let $u^{i,\romb}$ be the second component in \eqref{eq:decomp}. Then the projection $\mc{P}^{\ti,\ell_i^{\romb}}: L^2(\partial\omega_i)\to V_{\rm off}^{{\rm T}_i,\ell_i^{\romb}}$ of rank $\ell_i^{\romb}$ defined in \eqref{eq:steklov_spec} has the following approximation properties:
\begin{align}
\|{u^{i,\romb}-\mc{P}^{\ti,\ell_i^{\romb}}u^{i,\romb}}\|_{L^2(\partial\omega_i)}
&\leq (\lambda_{\ell_i^{\romb}+1}^{\ti})^{-\frac12}
\Big(\seminormE{u}{\omega_i}+ H\sqrt{\Cpoin{\omega_i}}\|f\|_{L^2_{\kappa^{-1}}(\omega_i)}\Big),
\label{eq:5555}\\
\normLT{u^{i,\romb}-\mc{P}^{\ti,\ell_i^{\romb}}u^{i,\romb}}{\omega_i}
&\leq \Cw(\lambda_{\ell_i^{\romb}+1}^{\ti})^{-\frac12}
\Big(\seminormE{u}{\omega_i}+
H\sqrt{\Cpoin{\omega_i}}\|f\|_{L^2_{\kappa^{-1}}(\omega_i)}\Big), \label{eq:56789}\\
\int_{\omega_i}\chi_i^2\kappa |\nabla (u^{i,\romb}-\mc{P}^{\ti,\ell_i^{\romb}}u^{i,\romb}) |^2\dx
&\leq 8H^{-2}\Cw^2(\lambda_{\ell_i^{\romb}+1}^{\ti})^{-1}
\Big(\seminormE{u}{\omega_i}^2+ H^2\Cpoin{\omega_i}\|f\|_{L^2_{\kappa^{-1}}(\omega_i)}^2\Big).
\label{eq:777}
\end{align}
\end{lemma}
\begin{proof}
The inequality \eqref{eq:5555} follows from the expansion \eqref{eq:spectralu2}, the definition \eqref{eq:steklov_spec},
the bound \eqref{eq:u2-bound}, and the fact that $u^{i,\romb}\in H^1_{\kappa}(\omega_i)$.
Indeed, we obtain from \eqref{eq:spectralu2} and the orthonormality of $\{v_j^{\ti}\}_{j=1}^{\infty}$ in $L^2(\partial\omega_i)$ that
\begin{align*}
\|{u^{i,\romb}-\mc{P}^{\ti,\ell_i^{\romb}}u^{i,\romb}}\|_{L^2(\partial\omega_i)}^2
&=\sum\limits_{j>\ell_i^{\romb}}|(u^{i,\romb},v_j^{\ti})_{\partial\omega_i}|^2
=\sum\limits_{j>\ell_i^{\romb}}(\lambda_j^{\ti})^{-1}\lambda_j^{\ti}|(u^{i,\romb},v_j^{\ti})_{\partial\omega_i}|^2\\
&\leq (\lambda_{\ell_i^{\romb}+1}^{\ti})^{-1}\sum\limits_{j>\ell_i^{\romb}}\lambda_j^{\ti}|(u^{i,\romb},v_j^{\ti})_{\partial\omega_i}|^2.
\end{align*}
Then the estimate \eqref{eq:5555} follows from \eqref{eq:u2-bound} and the identity
$
\langle u^{i,\romb},u^{i,\romb} \rangle_i=\sum_{j=1}^{\infty}\lambda_j^{\ti}|(u^{i,\romb},v_j^{\ti})_{\partial\omega_i}|^2.
$
To prove \eqref{eq:56789}, we first write the local error equation for
$e:=u^{i,\romb}-\mc{P}^{\ti,\ell_i^{\romb}}u^{i,\romb}$ by
\begin{equation}\label{eq:222}
\left\{
\begin{aligned}
-\nabla\cdot(\kappa\nabla e)&=0 \quad&&\text{ in } \omega_i,\\
e&=u^{i,\romb}-\mc{P}^{\ti,\ell_i^{\romb}}u^{i,\romb}\quad&&\text{ on }\partial \omega_i.
\end{aligned}
\right.
\end{equation}
Now Theorem \ref{lem:very-weak} yields
\begin{align*}
\normLT{u^{i,\romb}-\mc{P}^{\ti,\ell_i^{\romb}}u^{i,\romb}}{\omega_i}\leq \Cw\|{u^{i,\romb}-\mc{P}^{\ti,\ell_i^{\romb}}u^{i,\romb}}\|_{L^2(\partial\omega_i)}
\end{align*}
for some constant $\Cw$ independent of the coefficient $\kappa$. This, together with \eqref{eq:5555}, proves \eqref{eq:56789}.
To derive the energy error estimate from the $L^2_{\widetilde{\kappa}}(\omega_i)$ error estimate, we employ a Caccioppoli type inequality.
Note that $\chi_i=0$ on the boundary $\partial \omega_i$, cf. \eqref{eq:chi_supp}.
Multiplying the first equation in \eqref{eq:222} with $\chi_i^2 e$, integrating over $\omega_i$, and integrating by parts lead to
\begin{align*}
\int_{\omega_i}\chi_i^2\kappa|\nabla e|^2\dx=-2\int_{\omega_i}\kappa \nabla e\cdot \nabla \chi_i\,\chi_i e\;\dx.
\end{align*}
Together with H\"{o}lder's inequality and Young's inequality, we arrive at
\begin{align*}
\int_{\omega_i}\chi_i^2\kappa|\nabla e|^2\dx\leq 4\int_{\omega_i}\kappa|\nabla\chi_i|^2 e^2\,\dx.
\end{align*}
Further, the definition of $\widetilde{\kappa}$ in \eqref{defn:tildeKappa} yields
\begin{align*}
\int_{\omega_i}\chi_i^2\kappa|\nabla e|^2\dx\leq 4H^{-2}\int_{\omega_i}\widetilde{\kappa} e^2\,\dx.
\end{align*}
Now \eqref{eq:56789} and Young's inequality yield \eqref{eq:777}.
This completes the proof of the lemma.
\end{proof}
\begin{remark}
It is worth emphasizing that the local energy estimates \eqref{eq:4444} and \eqref{eq:777} are derived under almost no restrictive
assumptions besides the mild condition $f\in L^2_{\widetilde{\kappa}^{-1}}(D)$. This estimate is new to the best of
our knowledge. The authors \cite{EFENDIEV2011937} utilized the Cacciopoli inequality to derive similar
estimates, which, however, incurs some (implicit) assumptions on the problem. Hence, the estimates \eqref{eq:4444} and \eqref{eq:777}
are important for justifying the local spectral approach.
\end{remark}
Finally, we present the rank-$\ell_i$ approximation to $u|_{\omega_i}$, where $\ell_i:=\ell_i^{\roma}+\ell_i^{\romb}+1$ with $\ell_i^{\roma}, \ell_i^{\romb}\in \mathbb{N}$ for all $i=1,2,\cdots, N$:
\begin{align}\label{eq:spectral-finiteRank}
\widetilde{u}_i:=\mc{P}^{\si,\ell_i^{\roma}}u^{i,\roma}+\mc{P}^{\ti,\ell_i^{\romb}}u^{i,\romb}+u^{i,\romc}.
\end{align}
Now, we present an error estimate for the Galerkin approximation $u_{\text{off}}^{\text{S}}$ based on
the local spectral basis, cf. \eqref{cgvarform_spectral}. Our proof is inspired by the partition of unity finite
element method (FEM) \cite[Theorem 2.1]{melenk1996partition}.
\begin{lemma}\label{lem:spectralApprox}
Assume that $f\in L^2_{\widetilde{\kappa}^{-1}}(D)\cap L^2_{{\kappa}^{-1}}(D)$ and $\ell_i^{\roma}, \ell_i^{\romb}\in \mathbb{N}$ for all $i=1,2,\cdots, N$. Let $u$ be the solution to Problem \eqref{eqn:pde}. Denote $V_{{\rm off}}^{{\rm S}}\ni w_{{\rm off}}^{{\rm S}}:=\sum\limits_{i=1}^{N}\chi_i \widetilde{u}_i$. Then there holds
\begin{equation*}
\begin{aligned}
\seminormE{u-w_{{\rm off}}^{{\rm S}}}{D}
&\leq {2}C_{{\rm ov}}\max_{i=1,\cdots,N}\big\{({H\lambda_{\ell_i^{\roma}}^{\si}})^{-1}+
({\lambda_{\ell_i^{\roma}}^{\si}})^{-\frac12}
\big\}\normLii{f}{D}\\
&+7C_{{\rm ov}}\Cw{\rm C}_{{\rm poin}}\max_{i=1,\cdots,N}
\big\{(H^2\lambda_{\ell_i^{\romb}+1}^{\ti})^{-\frac12}\big\}
\|f\|_{L^2_{\kappa^{-1}}(D)},
\end{aligned}
\end{equation*}
where ${\rm C}_{\text{poin}}:={\rm diam}(D)\Cpoin{D}^{1/2}+H\max_{i=1,\cdots,N}\{\Cpoin{\omega_i}^{1/2}\}$.
\end{lemma}
\begin{proof}
Let $\locv{e}{}{}:=u-w_{\text{off}}^{\text{S}}$.
Then the partition of unity property of $\{\chi_i\}_{i=1}^{N}$ leads to
\[
\locv{e}{}{}=\sum\limits_{i=1}^{N}\chi_i\locv{e}{i}{} \qquad\text{ with }
\qquad\locv{e}{i}{}:=(u^{i,\roma}-\mc{P}^{\si,\ell_i^{\roma}}u^{i,\roma})+(u^{i,\romb}-\mc{P}^{\ti,\ell_i^{\romb}}u^{i,\romb})
=:\locv{e}{i}{\roma}+\locv{e}{i}{\romb}.
\]
Taking its squared energy norm and using the overlap condition \eqref{eq:overlap}, we arrive at
\begin{align*}
\int_{D}\kappa|\nabla \locv{e}{}{}|^2\dx&=\int_{D}\kappa|\sum\limits_{i=1}^{N}
\nabla(\chi_i\locv{e}{i}{})|^2\dx
\leq\Cov\sum\limits_{i=1}^{N}\int_{\omega_i}\kappa
|\nabla(\chi_i\locv{e}{i}{})|^2\dx.
\end{align*}
This and Young's inequality together imply
\begin{align}\label{eq:1111}
\int_{D}\kappa|\nabla \locv{e}{}{}|^2\dx
\leq 2\Cov\sum\limits_{i=1}^{N}
\Big(\int_{\omega_i}\kappa
|\nabla(\chi_i\locv{e}{i}{\roma})|^2\dx+\int_{\omega_i}\kappa
|\nabla(\chi_i\locv{e}{i}{\romb})|^2\dx\Big).
\end{align}
It remains to estimate the two integral terms in the bracket. By the Cauchy--Schwarz inequality and
the definition \eqref{defn:tildeKappa} of $\widetilde{\kappa}$, we obtain
\begin{align}
\int_{\omega_i}\kappa
|\nabla(\chi_i\locv{e}{i}{\roma})|^2\dx&\leq 2\Big( \int_{\omega_i}\big(\kappa\sum\limits_{j=1}^{N}
|\nabla\chi_j|^2\big)|\locv{e}{i}{\roma}|^2\dx+\int_{\omega_i}\kappa\chi_i^2
|\nabla\locv{e}{i}{\roma}|^2\dx\Big)\nonumber\\
&\leq 2\Big( H^{-2}\int_{\omega_i}\widetilde{\kappa}|\locv{e}{i}{\roma}|^2\dx+\int_{\omega_i}\chi_i^2\kappa
|\nabla\locv{e}{i}{\roma}|^2\dx\Big).\label{eq:444}
\end{align}
Then Proposition \ref{prop:projection} yields
\begin{align*}
\int_{\omega_i}\kappa
|\nabla(\chi_i\locv{e}{i}{\roma})|^2\dx
\leq 2\Big((H\lambda_{\ell_i^{\roma}}^{\si})^{-2}
+(\lambda_{\ell_i^{\roma}}^{\si})^{-1}\Big)\normLii{f}{\omega_i}^2.
\end{align*}
Analogously, we can derive the following upper bound for the second term:
\begin{align*}
\int_{\omega_i}\kappa
|\nabla(\chi_i\locv{e}{i}{\romb})|^2\dx
\leq 20\Cw^2(H^2\lambda_{\ell_i^{\romb}+1}^{\ti})^{-1}
\Big(\seminormE{u}{\omega_i}^2+H^2\Cpoin{\omega_i}\|f\|_{L^2_{\kappa^{-1}}(\omega_i)}^2\Big).
\end{align*}
Inserting these two estimates into \eqref{eq:1111} gives
\begin{align*}
\int_{D}\kappa|\nabla \locv{e}{}{}|^2\dx&\leq 4C_{\text{ov}}\sum\limits_{i=1}^{N}\Big((H\lambda_{\ell_i^{\roma}}^{\si})^{-2}
+(\lambda_{\ell_i^{\roma}}^{\si})^{-1}\Big)
\normLii{f}{\omega_i}^2\\
&+40C_{\text{ov}}\sum\limits_{i=1}^{N}\Cw^2(H^2\lambda_{\ell^{\romb}_{i}+1}^{\ti})^{-1}
\Big(\seminormE{u}{\omega_i}^2+\Cpoin{\omega_i}H^2 \|f\|_{L^2_{\kappa^{-1}}(\omega_i)}^2\Big).
\end{align*}
Finally, the overlap condition \eqref{eq:overlap} leads to
\begin{equation}\label{eq:999}
\begin{aligned}
\int_{D}\kappa|\nabla \locv{e}{}{}|^2\dx&\leq 4C_{\text{ov}}^2\max_{i=1,\cdots,N}\Big\{(H\lambda_{\ell_i^{\roma}}^{\si})^{-2}
+(\lambda_{\ell_i^{\roma}}^{\si})^{-1}\Big\}
\normLii{f}{D}^2\\
&+40C_{\text{ov}}^2\Cw^2\max_{i=1,\cdots,N}\{(H^2
{\lambda_{\ell_i^{\romb}+1}^{\ti}})^{-1}
\}\\
&\times\Big(\seminormE{u}{D}^2+ H^2\max_{i=1,\cdots,N}\{{\Cpoin{\omega_i}}
\}\|f\|_{L^2_{\kappa^{-1}}(D)}^2\Big).
\end{aligned}
\end{equation}
Furthermore, since $f\in L^2_{\kappa^{-1}}(D)$, we obtain from Poincar\'{e}'s inequality \eqref{eq:poinConstantG} the {\em a priori} estimate
\begin{align}\label{eq:888}
\seminormE{u}{D}\leq \text{diam}(D)\Cpoin{D}^{1/2}\normLi{f}{D}.
\end{align}
Indeed, by \eqref{eq:poinConstantG}, we have
\begin{align*}
\int_D \kappa u^{2}\dx\leq \text{diam}(D)^2{\Cpoin{D}}\int_{D}\kappa|\nabla u|^2\dx.
\end{align*}
Testing \eqref{eqn:pde} with $u\in V$ and applying H\"{o}lder's inequality leads to
\begin{align*}
\int_{D}\kappa|\nabla u|^2\dx&=\int_D fu\; \dx
\leq \|f\|_{L^2_{\kappa^{-1}}(D)}\|u\|_{L^2_{\kappa}(D)}.
\end{align*}
These two inequalities together imply \eqref{eq:888}. Inserting \eqref{eq:888} into \eqref{eq:999} shows the desired assertion.
\end{proof}
An immediate corollary of Lemma \ref{lem:spectralApprox}, after appealing to the Galerkin orthogonality
property \cite[Corollary 2.5.10]{MR2373954}, is the following energy error between $u$ and $u_{\text{off}}^{\text{S}}$:
\begin{proposition}\label{prop:Finalspectral}
Assume that $f\in L^2_{\widetilde{\kappa}^{-1}}(D)\cap L^2_{{\kappa}^{-1}}(D)$ and let $\ell_i^{\roma}, \ell_i^{\romb}\in \mathbb{N}_{+}$ for all $i=1,2,\cdots, N$. Let $u\in V$ and $u_{{\rm off}}^{{\rm S}}\in V_{{\rm off}}^{{\rm S}}$ be the solutions to Problems \eqref{eqn:pde} and \eqref{cgvarform_spectral}, respectively. There holds
\begin{equation}\label{eq:snapErr}
\begin{aligned}
\seminormE{u-u_{{\rm off}}^{{\rm S}}}{D}&=\min\limits_{w\in V_{{\rm off}}^{{\rm S}}}\seminormE{u-w}{D}\\
&\leq 2C_{{\rm ov}}\max_{i=1,\cdots,N}\big\{({H\lambda_{\ell_i^{\roma}}^{\si}})^{-1}+
({\lambda_{\ell_i^{\roma}}^{\si}})^{-\frac12}
\big\}\normLii{f}{D}\\
&+7C_{{\rm ov}}\Cw{\rm C}_{{\rm poin}}\max_{i=1,\cdots,N}
\big\{({H^2\lambda_{\ell_i^{\romb}+1}^{\ti}})^{-\frac12}\big\}
\|f\|_{L^2_{\kappa^{-1}}(D)}.
\end{aligned}
\end{equation}
\end{proposition}
\begin{remark}\label{rem:spectral}
According to Proposition \ref{prop:Finalspectral}, the convergence rate is essentially determined by two factors:
the eigenvalues $\lambda_{\ell_i^{\roma}}^{\si}$ and $\lambda_{\ell_i^{\romb}+1}^{\ti}$ entering the estimate, and
the coarse mesh size $H$. A proper balance between them is necessary for the convergence. For any
fixed $H>0$, in view of the eigenvalue problems \eqref{eq:spectral} and \eqref{eq:steklov},
a simple scaling argument implies
\begin{align*}
H^2\lambda^{\si}_{\ell_i^{\roma}}\to \infty \quad\text{ and }\quad H\lambda^{\ti}_{\ell_i^{\romb}}\to \infty,\quad \text{ as }\quad \ell_i^{\roma},\ell_i^{\romb} \to \infty.
\end{align*}
Hence, assuming that $\ell_i^{\roma}$ and $\ell_i^{\romb}$ are sufficiently large such that
$H^2\lambda^{\si}_{\ell_i^{\roma}}\geq 1$ and $H\lambda^{\ti}_{\ell_i^{\romb}}\geq H^{-3}$,
from Proposition \ref{prop:Finalspectral}, we obtain
\begin{align}\label{eq:aaaa}
\seminormE{u-u_{{\rm off}}^{{\rm S}}}{D}\lesssim H \Big(\normLii{f}{D}+\normLi{f}{D}\Big).
\end{align}
Note that an estimate of the type \eqref{eq:aaaa} is the main goal of the convergence analysis for many multiscale methods
\cite{MR1660141, MR2721592, li2017error}. In practice, the numbers $\ell_i^{\roma}$ and $\ell_i^{\romb}$ of local multiscale functions
fully determine the computational complexity of the multiscale solver for Problem \eqref{cgvarform_spectral}
at the offline stage. However, their optimal choice rests on the decay rates of the nonincreasing sequences
$\big\{(\lambda_{n}^{\si})^{-1}\big\}_{n=1}^{\infty}$ and $\big\{(\lambda_{n}^{\ti})^{-1}\big\}_{n=1}^{\infty}$. The precise characterization of eigenvalue decay
for heterogeneous problems remains poorly understood at present, and the topic is beyond the scope of the present work.
\end{remark}
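While no sharp decay estimates are available, the decay of $\big\{(\lambda_{n})^{-1}\big\}$ for a given coefficient can be inspected numerically. The sketch below sets up a hypothetical 1D analogue of the local eigenvalue problem, with a $\kappa$-weighted stiffness matrix and a lumped $\kappa$-weighted mass matrix; it is only an illustration and does not reproduce the exact eigenvalue problems \eqref{eq:spectral} or \eqref{eq:steklov}.

```python
import numpy as np

# hypothetical 1D analogue of a local eigenvalue problem: find (lambda, v) with
#   K v = lambda M v,
# K the kappa-weighted stiffness matrix, M a lumped kappa-weighted mass matrix.
n = 300
h = 1.0 / (n + 1)
kappa = np.ones(n + 1)
for lo, hi in [(60, 80), (150, 170), (230, 250)]:
    kappa[lo:hi] = 1e4            # three stiff inclusions (assumed pattern)

main = (kappa[:-1] + kappa[1:]) / h
off = -kappa[1:-1] / h
K = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
kappa_node = 0.5 * (kappa[:-1] + kappa[1:])
m_diag = h * kappa_node           # lumped mass diagonal

# reduce the generalized problem to a standard symmetric one: M^{-1/2} K M^{-1/2}
s = 1.0 / np.sqrt(m_diag)
lam = np.linalg.eigvalsh((K * s[None, :]) * s[:, None])   # ascending eigenvalues

decay = 1.0 / lam                 # the nonincreasing sequence {lambda_n^{-1}}
```

Plotting `decay` for various inclusion patterns and contrasts gives an empirical picture of how many local basis functions are needed before the tail eigenvalues stop limiting the error bound.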
\subsection{Harmonic extension bases approximation error}\label{subsec:harmonic}
By the definition of the local harmonic extension snapshot space $\locV{i}{snap}$ in \eqref{harmonic_ex}
and \eqref{eq:Vharmonic}, there exists $u^{i}_{\text{snap}}\in \locV{i}{snap}$ satisfying
\begin{align}\label{eq:usnap}
u^{i}_{\text{snap}}=u_{h} \qquad\text{ on }\quad \partial \omega_i.
\end{align}
In the error analysis below, the weighted Friedrichs
(or Poincar\'{e}) inequalities play an important role. These inequalities require certain conditions
on the coefficient $\kappa$ and domain $D$ that in general are not fully understood. Assumption
\ref{ass:coeff} is one sufficient condition for the weighted Friedrichs inequality \cite{galvis2010domain,pechstein2012weighted}.
Now we can derive the following local energy error estimate.
\begin{lemma}\label{lem:energyHA}
Let $\locv{e}{i}{snap}=u_h-u^{i}_{\text{snap}}$. Then there holds
\begin{align}\label{eq:energyHA}
\seminormE{\locv{e}{i}{snap}}{\omega_i}\leq {H}\sqrt{\Cpoin{\omega_i}}\normLi{f}{\omega_i}.
\end{align}
\end{lemma}
\begin{proof}
Indeed, by definition, the following error equation holds:
\begin{equation}\label{eq:locErr}
\left\{
\begin{aligned}
-\nabla\cdot(\kappa\nabla \locv{e}{i}{snap})&=f \quad&&\text{ in } \omega_i,\\
\locv{e}{i}{snap}&=0 \quad&&\text{ on }\partial \omega_i.
\end{aligned}\right.
\end{equation}
Then \eqref{eq:poinConstant} and H\"{o}lder's inequality give the assertion.
\end{proof}
\begin{lemma}
Assume that $f\in L^2_{{\kappa}^{-1}}(D)$ and $\ell_i\in \mathbb{N}_{+}$ for all $i=1,2,\cdots, N$. Let $u_h\in V_h$ be the unique solution to Problem \eqref{eqn:weakform_h}. Denote $V_{{\rm snap}}\ni w_{{\rm snap}}:=\sum_{i=1}^{N}\chi_i u^{i}_{{\rm snap}}$. Then there holds
\begin{align*}
\seminormE{u_h-w_{{\rm snap}}}{D}\leq \sqrt{2C_{{\rm ov}}}H\max_{i=1,\cdots,N}\Big\{ C_0H\Cpoin{\omega_i}+\sqrt{\Cpoin{\omega_i}}\Big\}\normLi{f}{D}.
\end{align*}
\end{lemma}
\begin{proof}
Let $\locv{e}{}{snap}:=u_h-w_{\text{snap}}$. Since $\{\chi_i\}_{i=1}^{N}$ forms a set of partition
of unity functions subordinated to the set $\{\omega_i\}_{i=1}^{N}$, we deduce
\[
\locv{e}{}{snap}=\sum\limits_{i=1}^{N}\chi_i\locv{e}{i}{snap},
\]
where $\locv{e}{i}{snap}:=u_h-u^{i}_{\text{snap}}$ is the local error on $\omega_i$.
Taking its squared energy norm and using the overlap condition \eqref{eq:overlap}, we arrive at
\begin{align}\label{eq:0000}
\int_{D}\kappa|\nabla \locv{e}{}{snap}|^2\dx&=\int_{D}\kappa|\sum\limits_{i=1}^{N}
\nabla(\chi_i\locv{e}{i}{snap})|^2\dx
\leq\Cov\sum\limits_{i=1}^{N}\int_{\omega_i}\kappa
|\nabla(\chi_i\locv{e}{i}{snap})|^2\dx.
\end{align}
It remains to estimate the integral term. Young's inequality gives
\begin{align*}
\int_{\omega_i}\kappa
|\nabla(\chi_i\locv{e}{i}{snap})|^2\dx&\leq 2\Big( \int_{\omega_i}\big(\kappa
|\nabla\chi_i|^2\big)|\locv{e}{i}{snap}|^2\dx+\int_{\omega_i}\kappa
|\nabla\locv{e}{i}{snap}|^2\dx\Big).
\end{align*}
Taking \eqref{eq:gradientChi} and \eqref{eq:poinConstant} into account, we get
\begin{align*}
\sum\limits_{i=1}^{N}\int_{\omega_i}\kappa
|\nabla(\chi_i\locv{e}{i}{snap})|^2\dx&\leq 2\sum\limits_{i=1}^{N}\Big( C_0^2H^2\Cpoin{\omega_i}+1\Big)\int_{\omega_i}\kappa
|\nabla\locv{e}{i}{snap}|^2\dx.
\end{align*}
This and \eqref{eq:energyHA} yield
\begin{align*}
\sum\limits_{i=1}^{N}\int_{\omega_i}\kappa
|\nabla(\chi_i\locv{e}{i}{snap})|^2\dx
\leq 2\sum\limits_{i=1}^{N}\Big(C_0^2H^2\Cpoin{\omega_i}+1\Big)\times H^2\Cpoin{\omega_i}\normLi{f}{\omega_i}^2.
\end{align*}
Finally, the overlap condition \eqref{eq:overlap} and inequality \eqref{eq:0000} show the desired assertion.
\end{proof}
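The first step \eqref{eq:0000} is a pointwise Cauchy--Schwarz bound: at every point at most $C_{\rm ov}$ of the terms $\nabla(\chi_i\locv{e}{i}{snap})$ are nonzero. The following sketch checks this mechanism in a hypothetical 1D setting with a hat-function partition of unity (overlap constant $C_{\rm ov}=2$) and randomly generated stand-ins for the local errors; the grid, patch layout, and error modes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# uniform grid on D = (0,1); N patch centers aligned with the grid
m = 1001
x = np.linspace(0.0, 1.0, m)
N = 5
centers = np.linspace(0.0, 1.0, N)
H = centers[1] - centers[0]

# hat-function partition of unity subordinate to the overlapping patches omega_i
chi = np.maximum(0.0, 1.0 - np.abs(x[None, :] - centers[:, None]) / H)
assert np.allclose(chi.sum(axis=0), 1.0)

# hypothetical smooth local errors e_i (stand-ins for u_h - u_snap^i)
modes = np.sin(np.outer(np.arange(1, 9), np.pi * x))
e = rng.standard_normal((N, 8)) @ modes

terms = chi * e                   # chi_i * e_i, supported in omega_i
total = terms.sum(axis=0)         # the global error sum_i chi_i e_i

dx = x[1] - x[0]

def grad(v):
    """Cellwise derivative on the uniform grid."""
    return np.diff(v, axis=-1) / dx

lhs = np.sum(grad(total) ** 2) * dx    # int |grad sum_i (chi_i e_i)|^2
rhs = np.sum(grad(terms) ** 2) * dx    # sum_i int |grad (chi_i e_i)|^2

C_ov = 2   # at most two patches overlap at any point in this 1D setup
assert lhs <= C_ov * rhs * (1 + 1e-9)
```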
Finally, we derive an energy error estimate for the conforming Galerkin approximation to Problem \eqref{eqn:pde}
based on the multiscale space $V_{\text{snap}}$.
\begin{proposition}\label{prop:FinalSnap}
Assume that $f\in L^2_{{\kappa}^{-1}}(D)$. Let $u\in V$ and $u_{{\rm snap}}\in V_{{\rm snap}}$ be the solutions to Problems \eqref{eqn:pde} and \eqref{cgvarform_snap}, respectively. Then there holds
\begin{align*}
\seminormE{u-u_{{\rm snap}}}{D}\leq \sqrt{2C_{{\rm ov}}}H\max_{i=1,\cdots,N}\Big\{ C_0H\Cpoin{\omega_i}&+\sqrt{\Cpoin{\omega_i}}\Big\}\normLi{f}{D}+\min\limits_{v_h\in V_h}\seminormE{u-v_h}{D}.
\end{align*}
\end{proposition}
\begin{proof}
This assertion follows directly from the Galerkin orthogonality property \cite[Corollary 2.5.10]{MR2373954},
the triangle inequality and the fine-scale {\em a priori} estimate \eqref{eq:fineApriori}.
\end{proof}
\subsection{Discrete POD approximation error}\label{sec:discretePOD}
Now we turn to the discrete POD approximation.
First, we present an {\em a priori} estimate for Problem \eqref{eqn:weakform_h}.
It will be used to derive the energy estimate for $u_{\text{snap}}^i$ defined in \eqref{eq:usnap}.
\begin{lemma}
Assume that $f\in L^2_{{\kappa}^{-1}}(D)$. Let $u_h\in V_h$ be the solution to Problem \eqref{eqn:weakform_h}. Then there holds
\begin{align}
\seminormE{u_h}{D}&\leq 2{\rm diam}(D)\sqrt{\Cpoin{D}}\normLi{f}{D}. \label{eq:uApriori}
\end{align}
\end{lemma}
\begin{proof}
In analogy to \eqref{eq:888}, we obtain
\begin{align*}
\seminormE{u}{D}&\leq \text{diam}(D)\sqrt{\Cpoin{D}}\normLi{f}{D},\\
\seminormE{u_h}{D}&\leq \text{diam}(D)\sqrt{\Cpoin{D}}\normLi{f}{D}.
\end{align*}
This and the triangle inequality lead to the desired assertion.
\end{proof}
Let $u^{i}_{\text{snap}}\in \locV{i}{snap}$ be defined in \eqref{eq:usnap}.
Then we deduce from \eqref{eq:energyHA} and the triangle inequality that
\begin{align}\label{usnap:apriori}
\seminormE{u_{\text{snap}}^i}{\omega_i}\leq {H}\Cpoin{\omega_i}^{1/2}\normLi{f}{\omega_i}+\seminormE{u_h}{\omega_i}.
\end{align}
Note that the family $\{v_j^{\hi}\}_{j=1}^{L_i}$ forms an orthogonal basis of $ \locV{i}{snap}$, cf.
\eqref{eq:podNorm}. Therefore, the function $u^{i}_{\text{snap}}\in \locV{i}{snap}$ admits the expansion
\begin{align}\label{eq:usnap_expand}
u^{i}_{\text{snap}}=\sum\limits_{j=1}^{L_i}(u^{i}_{\text{snap}},v_j^{\hi})_i v_j^{\hi}.
\end{align}
To approximate $u^{i}_{\text{snap}}$ in the space $V_{\text{off}}^{\hi,n}$ of dimension $n$ for some $\mathbb{N}_{+}\ni n\leq L_i$,
we take its first $n$-term truncation:
\begin{align}\label{eq_uin}
u^i_n:=\mathcal{P}^{\hi,n}u^{i}_{\text{snap}}=\sum\limits_{j=1}^{n}(u^{i}_{\text{snap}},v_j^{\hi})_i v_j^{\hi},
\end{align}
where the projection operator $\mathcal{P}^{\hi,n}$ is defined in \eqref{eqn:proj-pod}.
The next result provides the approximation property of $u^i_n$ to $u^{i}_{\text{snap}}$ in the $L^2_{\widetilde{\kappa}}(\omega_i)$ norm:
\begin{lemma}\label{lem:5.1}
Assume that $f\in L^2_{{\kappa}^{-1}}(D)$. Let $u^{i}_{{\rm snap}}\in \locV{i}{\rm snap}$ and $u^i_n\in V_{{\rm off}}^{\hi,n}$ be defined in \eqref{eq:usnap} and \eqref{eq_uin} for $\mathbb{N}_{+}\ni n\leq L_i$, respectively. Then there holds
\begin{align*}
\| u^{i}_{{\rm snap}}-u^i_n \|_{L^2_{\widetilde{\kappa}}(\omega_i)}\leq \sqrt{2}(\lambda_{n+1}^{\hi})^{-1/2}\Big( {H}\sqrt{\Cpoin{\omega_i}}\normLi{f}{\omega_i}+\seminormE{u_h}{\omega_i}\Big).
\end{align*}
\end{lemma}
\begin{proof}
It follows from the expansion \eqref{eq:usnap_expand} and \eqref{eq:podNorm} that
\begin{align*}
\int_{\omega_i }\kappa|\nabla u^{i}_{\text{snap}}|^2\dx=\sum\limits_{j=1}^{L_i}|(u^{i}_{\text{snap}},v_j^{\hi})_i|^2 \lambda_j^{\hi}.
\end{align*}
Together with \eqref{usnap:apriori}, we get
\begin{align}\label{eq:333}
\sum\limits_{j=1}^{L_i}|(u^i_{\rm snap},v_j^{\hi})_i|^2 \lambda_j^{\hi}\leq 2\Big( {H}^2{\Cpoin{\omega_i}}\normLi{f}{\omega_i}^2+\seminormE{u_h}{\omega_i}^2\Big).
\end{align}
Meanwhile, the combination of \eqref{eq_uin}, \eqref{eq:usnap_expand} and \eqref{eq:podNorm} leads to
\begin{align*}
\| u^{i}_{\text{snap}}-u^i_n \|_{L^2_{\widetilde{\kappa}}(\omega_i)}^2&=\sum\limits_{j=n+1}^{L_i}|(u^{i}_{\text{snap}},v_j^{\hi})_i|^2
=\sum\limits_{j=n+1}^{L_i}(\lambda_j^{\hi})^{-1}\lambda_j^{\hi}|(u^{i}_{\text{snap}},v_j^{\hi})_i|^2\\
&\leq (\lambda_{n+1}^{\hi})^{-1}\sum\limits_{j=n+1}^{L_i}\lambda_j^{\hi}|(u^{i}_{\text{snap}},v_j^{\hi})_i|^2.
\end{align*}
Further, an application of \eqref{eq:333} implies
\begin{align*}
\| u^{i}_{\text{snap}}-u^i_n \|_{L^2_{\widetilde{\kappa}}(\omega_i)}^2
\leq (\lambda_{n+1}^{\hi})^{-1}\times 2\Big( {H}^2{\Cpoin{\omega_i}}\normLi{f}{\omega_i}^2+\seminormE{u_h}{\omega_i}^2\Big).
\end{align*}
Finally, taking the square root on both sides shows the desired result.
\end{proof}
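The truncation estimate of Lemma \ref{lem:5.1} rests on weighting each tail coefficient by $\lambda_j^{\hi}/\lambda_{n+1}^{\hi}\geq 1$. The following sketch verifies this tail bound for synthetic (hypothetical) eigenvalues and expansion coefficients; no data from the paper enters.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 50                                       # hypothetical ensemble size
lam = np.sort(rng.uniform(0.5, 100.0, L))    # ascending eigenvalues lambda_j
c = rng.standard_normal(L)                   # coefficients (u_snap, v_j)_i

# = int_omega kappa |grad u_snap|^2, cf. the identity at the start of the proof
energy2 = np.sum(lam * c**2)

# for every truncation level n, the weighted-L^2 tail is bounded by
# energy2 / lambda_{n+1}, since lambda_j >= lambda_{n+1} for all j > n
for n in range(L):
    tail2 = np.sum(c[n:] ** 2)               # ||u_snap - u_n||^2 (weighted L^2)
    assert tail2 <= energy2 / lam[n] * (1 + 1e-12)
```

The loop checks the bound for every admissible truncation level at once; for a rapidly growing sequence $\{\lambda_j^{\hi}\}$ the tail shrinks correspondingly fast.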
Note that for all $\mathbb{N}_{+}\ni n\leq L_i$, both $u^{i}_{\text{snap}}$ and $u^i_n$ are $\kappa$-harmonic functions.
Thus, we can apply the argument in the proof of \eqref{eq:777} to get the following local energy error estimate.
\begin{lemma}\label{lem:5.2}
Let $u^{i}_{{\rm snap}}\in \locV{i}{\rm snap}$ and $u^i_n\in V_{{\rm off}}^{\hi,n}$ be defined in \eqref{eq:usnap}
and \eqref{eq_uin} for all $\mathbb{N}_{+}\ni n\leq L_i$. Then there holds
\begin{align*}
\int_{\omega_i}\chi_i^2\kappa |\nabla (u^{i}_{{\rm snap}}-u^i_n) |^2\dx
&\leq 4H^{-2}\int_{\omega_i}\widetilde{\kappa} (u^{i}_{{\rm snap}}-u^i_n)^2\,\dx.
\end{align*}
\end{lemma}
\begin{proof}
The proof is analogous to that for \eqref{eq:777}, and thus omitted.
\end{proof}
With the help of local estimates presented in Lemmas \ref{lem:5.1} and \ref{lem:5.2},
we can now bound the energy error for the POD method by means of the partition of unity
FEM \cite[Theorem 2.1]{melenk1996partition}.
\begin{lemma}\label{lem:4.6}
Assume that $f\in L^2_{\kappa^{-1}}(D)$.
For all $\mathbb{N}_{+}\ni \ell_i\leq L_i$, denote $V_{{\rm snap}}\ni w_{{\rm snap}}:
=\sum_{i=1}^{N}\chi_i u^{i}_{{\rm snap}}$ and $V_{{\rm off}}^{\text{H}}\ni w_{{\rm off}}^{{\rm H}}
:=\sum_{i=1}^{N}\chi_i u^{i}_{\ell_i}$. Then there holds
\begin{align*}
\seminormE{w_{{\rm snap}}-w_{{\rm off}}^{{\rm H}}}{D}\leq \sqrt{20C_{{\rm ov}}}&\max_{i=1,\cdots,N}
\Big\{{(H^{2}\lambda_{\ell_i+1}^{\hi})^{-1/2}}\Big\}
C_1\normLi{f}{D},
\end{align*}
where the constant $C_1$ is given by
$C_1:=H\max_{i=1,\cdots,N}\big\{\sqrt{\Cpoin{\omega_i}}\big\}+2{\rm diam}(D)\sqrt{\Cpoin{D}}.$
\end{lemma}
\begin{proof}
An argument similar to \eqref{eq:444} leads to
\begin{align*}
\seminormE{w_{\text{snap}}-w_{\text{off}}^{\text{H}}}{D}^2\leq 2\sum\limits_{i=1}^{N}\Big( H^{-2}\int_{\omega_i}\widetilde{\kappa}|u^{i}_{\text{snap}}-u^i_{\ell_i}|^2\dx+\int_{\omega_i}\chi_i^2\kappa
|\nabla (u^{i}_{\text{snap}}-u^i_{\ell_i})|^2\dx\Big).
\end{align*}
Together with Lemma \ref{lem:5.2}, we obtain
\begin{align*}
\seminormE{w_{\text{snap}}-w_{\text{off}}^{\text{H}}}{D}^2\leq 10H^{-2}\sum\limits_{i=1}^{N} \int_{\omega_i}\widetilde{\kappa}|u^{i}_{\text{snap}}-u^i_{\ell_i}|^2\dx.
\end{align*}
Then from Lemma \ref{lem:5.1}, we deduce
\begin{align*}
\seminormE{w_{\text{snap}}-w_{\text{off}}^{\text{H}}}{D}^2\leq 20 \max_{i=1,\cdots,N}\{{(H^{2}\lambda_{\ell_i+1}^{\hi})^{-1}}\}\sum\limits_{i=1}^{N}\Big( {H}^2{\Cpoin{\omega_i}}\normLi{f}{\omega_i}^2+\seminormE{u_h}{\omega_i}^2\Big).
\end{align*}
Finally, the overlap condition \eqref{eq:overlap} together with \eqref{eq:uApriori} shows the desired assertion.
\end{proof}
Finally, we derive an error estimate for the CG approximation to Problem \eqref{eqn:pde} based
on the discrete POD multiscale space $V_{\text{off}}^{\text{H}}$.
\begin{proposition}\label{prop:Finalpod}
Assume that $f\in L^2_{{\kappa}^{-1}}(D)$ and $\ell_i\in \mathbb{N}_{+}$ for all $i=1,2,\cdots, N$. Let $u\in V$ and
$u_{{\rm off}}^{{\rm H}}\in V_{{\rm off}}^{{\rm H}}$ be the solutions to Problems \eqref{eqn:pde} and \eqref{cgvarform_pod}, respectively. Then there holds
\begin{align}\label{eq:podErr}
\seminormE{u-u_{{\rm off}}^{{\rm H}}}{D}&\leq \sqrt{2C_{{\rm ov}}}H\max_{i=1,\cdots,N}\Big\{ C_0H\Cpoin{\omega_i}+\sqrt{\Cpoin{\omega_i}}\Big\}\normLi{f}{D}\\
&+\sqrt{20C_{{\rm ov}}}\max_{i=1,\cdots,N}\Big\{{(H^{2}\lambda_{\ell_i+1}^{\hi})^{-\frac12}}\Big\} C_1\normLi{f}{D}
+\min\limits_{v_h\in V_h}\seminormE{u-v_h}{D}. \nonumber
\end{align}
\end{proposition}
\begin{proof}
This assertion follows from the Galerkin orthogonality property \cite[Corollary 2.5.10]{MR2373954}, the
triangle inequality and the fine-scale {\em a priori} estimate \eqref{eq:fineApriori}, Proposition
\ref{prop:FinalSnap} and Lemma \ref{lem:4.6}.
\end{proof}
\begin{remark}
Since the discrete eigenvalue problem \eqref{offeig} is generated from the continuous eigenvalue problem \eqref{eq:spectral} with finite ensembles $\{\phi_j^{\hi}\}_{j=1}^{L_i}$, a scaling argument shows
\begin{align*}
H^{2}\lambda_{n}^{\hi}\to \infty \quad\text{ as } n\to \infty \text{ and } h\to 0.
\end{align*}
This and \eqref{eq:podErr} imply the convergence of the POD solution $u_{\rm off}^{\rm H}$ in the energy norm.
\end{remark}
\begin{comment}
The feasibility of such a strategy relies on the following simple observation: Under Assumption
\ref{A:3}, $U_{\kappa_M}$ is a compact subset of $V\equiv H_0^1(D)$. Hence, for any prescribed accuracy
$\epsilon>0$, there exists a finite subset $U_\epsilon\subset {V}$ such that
\begin{equation*}
{U_{\kappa_M}}\subset \mathop\cup\limits_{v\in U_\epsilon}B_\epsilon(v),
\end{equation*}
where $B_\epsilon(v)=\{v'\in V:\normHsemi{v'-v}{D}\leq \epsilon\}$ is the $\epsilon$-ball
centered at $v$ in $V$. Hence, theoretically, one can approximate accurately the solution space
$U_{\kappa_M}$ by the solutions to \eqref{eqn:spde_KL} corresponding to a finite number of sampling points
in the random domain $\Omega$. These solutions are known as snapshots, whose construction is beyond
the scope of this work.
Assumption \ref{A:3} ensures a coherence structure in the solution space $U_{\kappa_M}$, and
thereby an effective low-dimensional subspace from snapshots. Numerically, this can be achieved by several
different model reduction strategies, e.g., greedy algorithm \cite{DeVore2013} and discrete POD
\cite{Kunisch&Volkwein2002}, and we shall focus on
the discrete POD.
Our main goal is to approximate the snapshot space $V^{\hi}_{\text{snap}}$ by a low-dimensional subspace $V^{\text{H}_i,\ell_i}_{\text{off}} $, along
the line of research in \cite{Kunisch&Volkwein2002}. To this end, we employ
Kolmogorov $n$-width from classical approximation theory \cite{pinkus1985n,DiUl12}, which
characterizes the best (not necessarily linear) approximation error.
For $v\in V_i$ and a subspace $X\subset V_i$, we define the distance
$\mathrm{dist}(v,X)$ between $v$ and $X$ by
\[
\text{dist}(v,X):=\inf_{w\in X}\normE{v-w}{\omega_i}.
\]
The Kolmogorov $n$-width of $V^{\hi}_{\text{snap}}$, denoted by $d_n(V^{\hi}_{\text{snap}}; V_i)$, is then defined by
\begin{equation*}
d_n(V^{\hi}_{\text{snap}};V_i)=\inf\limits_{\substack{\dim(X)=n\\ X\subset V_i}}\sup_{u\in U_N}\text{dist}(u,X),\;\text{ for } n=0,1,\cdots, L_i.
\end{equation*}
Thus, by the definition of infimum, for any $\epsilon>0$, there exists an $n$-dimensional linear space
$X_n\subset V_i$ with $n=0,1,\cdots, L_i$, satisfying
\begin{align}\label{eqn:kolmogorovSpace}
\sup_{u\in V^{\hi}_{\text{snap}}}\text{dist}(u,X_n)\leq d_n(V^{\hi}_{\text{snap}};V_i)+\frac{\epsilon}{L_i}.
\end{align}
The following lemma provides the explicit expression of the POD-basis and the corresponding approximation error
at the ensemble level \cite[Proposition 3.3]{Kunisch&Volkwein2002}.
\begin{lemma}\label{lemma:POD}
Let $0<\lambda_1^{\hi}\leq \lambda_2^{\hi}\leq \cdots \leq \lambda_{L_i}^{\hi}$ be the positive eigenvalues of the spectral problem \eqref{offeig} and let $v_1,v_2,\cdots,v_{L_i}\in \mathbb{R}^{{L_i}}$ be the corresponding orthonormal eigenvectors. Then the POD-basis of rank $\ell_i\leq L_i$ \eqref{eq:pod-basis} have the following approximation property:
\[
\frac{1}{L_i}\sum\limits_{k=1}^{L_i}\normE{\phi_{k}^{\hi}-\sum\limits_{j=1}^{n}({\phi_{k}^{\hi}},{v_j^{\hi}})_{i}v_j^{\hi}}{\omega_i}^{2}=\sum\limits_{j=n+1}^{L_i}\frac{1}{1+\lambda_j^{\hi}}.
\]
\end{lemma}
Below we derive the convergence rate of discrete POD solution $u_{\text{off}}^{\text{H}}$ to the snapshot solution $u_{\text{snap}}$, cf. \eqref{cgvarform_snap}, by establishing the
approximation of $V^{\text{H}_i,\ell_i}_{\text{off}}$ to the compact set $V_{\text{snap}}^{\hi}$ on each coarse neighborhood $\omega_i$. To this end, we will first establish a few technical lemmas,
which may be of independent interest.
A first lemma provides a lower bound for $ \sup\limits_{v\in U^{\hi}}{\mathrm{dist}}(v,V_n)$ in terms of $\lambda_{n+1}^{\hi}$ for any $n$-dimensional subspace $V_n\subset V_i$.
\begin{lemma}\label{lemma:POD_1}
For any $n\in\mathbb{N}$, let $V_n\subset V_i$ be {any} $n$-dimensional linear subspace. There holds
\begin{align}
\sup_{v\in U^{\hi}}{\mathrm{dist}}(v,V_n)\geq \sqrt{\lambda_{n+1}^{\hi}}.
\end{align}
\end{lemma}
\begin{proof}
Given any $n$-dimensional linear subspace $V_n\subset V_i$, since $\dim(V_{\text{off}}^{\hi,n+1})
=n+1>n=\dim(V_n)$ and $V_{\text{off}}^{\hi,n+1}\subset V_i$, by application of \cite[Lemma 2.3]{KATO}, there exists
a vector $t\in V_{\text{off}}^{\hi,n+1}$ such that
\begin{align}\label{eq:80}
{\text{dist}}(t,V_n)=\normE{t}{\omega_i}>0.
\end{align}
Since $\{v_j^{\hi}\}_{j=1}^{n+1}$ forms an orthogonal basis in $V_{\text{off}}^{\hi,n+1}$ and
$\normHsemi{v_j^{\hi}}{\omega_i}=\sqrt{\lambda_j^{\hi}}$ cf. \eqref{eq:podNorm}, one can write
$t=\sum\limits_{j=1}^{n+1}z_j v_j^{\hi}$ for some $z=(z_j)_{j=1}^{n+1}\in \mathbb{R}^{n+1}$.
Furthermore, there holds
\begin{equation}\label{eq:88}
\normHsemi{t}{\omega_i}^2=\sum\limits_{j=1}^{n+1}\lambda_j^{\hi}z_j^{2}.
\end{equation}
In the meanwhile, due to \eqref{eq:pod-basis}, this vector $t$ also admits the following expansion:
\begin{align}
t=\sum\limits_{j=1}^{n+1}z_j \Big( \sum\limits_{k=1}^{L_i}(v_j)_{k}\phi_{k}^{\hi} \Big)
&=\sum\limits_{k=1}^{L_i} \Big( \sum\limits_{j=1}^{n+1}(v_j)_{k}z_j \Big)\phi_{k}^{\hi}\nonumber\\
&:=\sum\limits_{k=1}^{L_i}c_k\phi_{k}^{\hi}. \label{eq:555}
\end{align}
This, together with \eqref{offeig}, yields
\begin{align}\label{eq:666}
\sum\limits_{k=1}^{L_i}\sum\limits_{j=1}^{L_i}c_kc_j(S^{\text{off}})_{kj}=\sum\limits_{j=1}^{n+1}z_j^2.
\end{align}
We now define a new function $t^{*}:=\Big(\sum\limits_{j=1}^{n+1}z_j^2\Big)^{-1/2}t$. The identity \eqref{eq:666} indicates that $t^*\in U^{\hi}$. Thus, by \eqref{eq:88}, we have
\begin{align*}
\normHsemi{t^*}{\omega_i}^2&=\Big(\sum\limits_{j=1}^{n+1}z_j^2\Big)^{-1}\normHsemi{t}{\omega_i}^2
=\Big(\sum\limits_{j=1}^{n+1}z_j^2\Big)^{-1}\sum\limits_{j=1}^{n+1}z_j^{2}\lambda_j^{\hi}\\
&\geq\Big(\sum\limits_{j=1}^{n+1}z_j^2\Big)^{-1}\sum\limits_{i=1}^{n+1}z_{i}^{2}\lambda_{n+1}^{\hi}=\lambda_{n+1}^{\hi}.
\end{align*}
This and \eqref{eq:80} imply
\[
\sup\limits_{v\in {U}^{\hi}}{\text{dist}}(v,V_n)\geq {\text{dist}}(v^*,V_n)
\geq\sqrt{\lambda_{n+1}^{\hi}},
\]
thereby concluding the proof.
\end{proof}
The next lemma gives an explicit expression for $\sup\limits_{v\in U^{\hi}}\mathrm{dist}(v,V_{\text{off}}^{\hi,n})$.
\begin{lemma}\label{lemma:POD_2}
The following identity holds:
\begin{align}\label{eq:length_pod}
\sup\limits_{v\in U^{\hi}}\mathrm{dist}(v,V_{\text{off}}^{\hi,n})=(\lambda_{n+1}^{\hi})^{-1/2}.
\end{align}
\end{lemma}
\begin{proof}
By Lemma \ref{lemma:POD_1}, it suffices to show the inequality
$
\sup_{v\in S_{\tilde{U}_N}}\mathrm{dist}(v,V_{\text{off}}^{\hi,n})\leq\sqrt{\lambda_{n+1}^{\hi}}.
$
Given any $v\in {U}^{\hi}$, we have
$
v=\sum_{i=1}^{N}b_i(Uv_i)=\sum_{i=1}^{N}U(b_iv_i)$ for some $ b=(b_i)_{i=1}^{N}\in \mathbb{R}^{N}
$
with $1=\|W_Nb\|= \|b\|=1$, since $\{Uv_i\}_{i=1}^{N}$ is an orthonormal basis of $\tilde{U}_N$.
Now, let $t=\sum_{i=1}^{n}b_i(Uv_i)$ be a function in $\tilde{U}_N$. The property
$\{Uv_i\}_{i=1}^{n}\in V_{\text{off}}^{\hi,n}$ implies $t\in V_{\text{off}}^{\hi,n}$. Consequently,
\begin{align*}
\normHsemi{v-t}{D}^2=\normHsemi{\sum\limits_{i=n+1}^{N}b_i(Uv_i)}{D}^2=\sum\limits_{i=n+1}^{N}b_i^2\normHsemi{Uv_i}{D}^2
\end{align*}
due to the orthogonality of $\{Uv_i\}_{i=n+1}^{N}$. Meanwhile, since $b$ is a unit vector in $\mathbb{R}^{N}$,
$\normHsemi{Uv_i}{D}=\sqrt{N\lambda_{i}^{N}}$ and $\{\lambda_i^{N}\}_{i=1}^{N}$ is ordered nonincreasingly, we deduce
$
\normHsemi{v-t}{D}^2=\sum_{i=n+1}^{N}b_i^2N\lambda_i^{N}\leq N\lambda_{n+1}^{N},
$
which shows the desired assertion.
\end{proof}
\begin{remark}\label{rk:POD_tedian}
From Lemmas \ref{lemma:POD_1} and \ref{lemma:POD_2}, the POD space of rank-$n$ $V_{\text{off}}^{\hi,n}$ is an
optimal approximation space of the snapshot space $\tilde{U}_N$ in the sense that it is the
minimizer of the nonlinear optimization problem
\begin{align*}
\inf\limits_{\substack{Y\subset V\\ \dim(Y)=n}}\sup\limits_{v\in S_{\tilde{U}_N}}\mathrm{dist}(v,Y)= \sqrt{N\lambda_{n+1}^{N}}.
\end{align*}
Furthermore, we have the hierarchical structure
$
V_{\text{off}}^{\hi,n}\subset\tilde{U}_{N}\subset V.
$
\end{remark}
The next lemma gives an upper bound on $\sup_{v\in S_{\tilde{U}_N}}{\mathrm{dist}}(v,X_n)$.
\begin{lemma}\label{lemma:n_width}
Given $\epsilon>0$, let $X_n\subset V$ be the $n$-dimensional linear subspace defined by \eqref{eqn:kolmogorovSpace}. There holds
\begin{align*}
\sup\limits_{v\in S_{\tilde{U}_N}}{\mathrm{dist}}(v,X_n)\leq \sqrt{N}(d_n(U_N;V)+\frac{\epsilon}{N}).
\end{align*}
\end{lemma}
\begin{proof}
Given any $v\in S_{\tilde{U}_N}$, there exists a vector $c=\{c_i\}_{i=1}^{N}\in \mathbb{R}^{N}$ such that
$v=\sum_{i=1}^{N}c_i\mathcal{S}_M(y_i)f$ and $ \|c\|=1.$
In view of \eqref{eqn:kolmogorovSpace}, for each $\mathcal{S}_M(y_i)f\in U_{N}$, there exists $g_i\in X_n\subset V$, such that
\[\normHsemi{\mathcal{S}_M(y_i)f-g_i}{D}\leq d_n(U_{N};V)+\frac{\epsilon}{N}.\]
Let $u=\sum_{i=1}^{N}c_ig_i\in X_n$. Then, by Cauchy-Schwarz inequality, we have
\begin{align*}
\normHsemi{v-u}{D} &=\normHsemi{\sum\limits_{i=1}^{N}
c_i(\mathcal{S}_Mf(y_i)-g_i)}{D}
\leq \sum\limits_{i=1}^{N}|c_i|\normHsemi{\mathcal{S}_M(y_i)f-g_i}{D}\\
&\leq \sqrt{N} \|c\|
(d_n(U_N;V)+\frac{\epsilon}{N})
=\sqrt{N}(d_n(U_{N};V)+\frac{\epsilon}{N}),
\end{align*}
which shows the desired assertion by taking the supremum over $v\in S_{\tilde{U}_N}$.
\end{proof}
Finally, by combining Remark \ref{rk:POD_tedian} with Lemma \ref{lemma:n_width} and letting $\epsilon\rightarrow 0$, we
obtain the following decay rate on the eigenvalues of the correlation matrix $\mathcal{K}$.
\begin{theorem}\label{thm:dPOD_decay}
For the eigenvalues $\{\lambda_n^N\}_{n=1}^{N}$ of the correlation matrix $\mathcal{K}$ defined in \eqref{eqn:K}, there holds
\[
\sqrt{\lambda_{n+1}^{N}}\leq d_n(U_{N};V).
\]
\end{theorem}
\begin{remark}
In \cite[Section 3.2]{Kunisch&Volkwein2002}, the convergence of $\sum_{j=n+1}^N\lambda_{j}^{N}$
to $\sum_{j=n+1}^{\infty}\lambda_{j}$ as $N\rightarrow \infty$ was shown, by
interpreting $\mathcal{K}$ as the trapezoidal approximation of $\mathcal{R}$. In contrast, Theorem \ref{thm:dPOD_decay}
gives the decay rate of the eigenvalues $\{\lambda_{n}^{N}\}_{n=1}^{N}$, which is actually as fast as
Kolmogorov $n$-width $\{d_n(U_{N};V)\}_{n=1}^{N}$.
\end{remark}
Now we are ready to derive error estimates for the discrete Galerkin POD approximation, cf. \eqref{eqn:weakform_Mn}.
By means of the triangle inequality, we split the error into three parts
\begin{align*}
\normHsemi{\mathcal{S}(y)f-\mathcal{S}^{N}_{M,n}(y_0)f}{D}&\leq \normHsemi{\mathcal{S}(y)f-\mathcal{S}_M(y)f}{D}+\normHsemi{\mathcal{S}_M(y)f-\mathcal{S}_{M}(y_0)f}{D}\\
&\quad +\normHsemi{\mathcal{S}_M(y_0)f-\mathcal{S}^{N}_{M,n}(y_0)f}{D},
\end{align*}
where $\mathcal{S}_M(y_0)f\in U_N$ is a certain snapshot that approximates $\mathcal{S}_M(y)f$.
In the following, we estimate the three terms one by one. First, the error between the approximation $\mathcal{S}_M(y)f$ and the
true solution $\mathcal{S}(y)f$, due to truncating the infinite KL series after $M$ terms can be bounded by Theorem
\ref{thm:TruncationFinal}:
\begin{equation}\label{eq:error2}
\normHsemi{\mathcal{S}(y)f-\mathcal{S}_M(y)f}{D}\lesssim C(\alpha,\beta,D){1\over \alpha}M^{-\frac{2s}{dp_1}}\beta^{{p_1-2}\over p_1}\norm{f}_{W^{-1,p_2'}(D)}.
\end{equation}
For the second term, given any prescribed accuracy $\epsilon>0$ and $y\in \Omega$, Assumption \ref{A:4} ensures
that there exists some $\mathcal{S}_M(y_0)f\in U_N$ such that
\begin{align}\label{eq:error1}
\normHsemi{\mathcal{S}_M(y)f-\mathcal{S}_M(y_0)f}{D}\leq \frac{\epsilon}{3},
\end{align}
which reflects the approximation error of the snapshot ensemble $U_N$ to the
solution space $U_{\kappa_M}$. Last, the Galerkin orthogonality implies that the solution
$\mathcal{S}_{M,n}^{N}(y_0)f$ to \eqref{eqn:weakform_Mn} satisfies
\begin{align*}
\normHsemi{\mathcal{S}_M(y_0)f-\mathcal{S}_{M,n}^{N}(y_0)f}{D}\leq \frac{\beta}{\alpha}\min\limits_{v\in V^{\text{POD}}_{n}}\normHsemi{\mathcal{S}_M(y_0)f-v}{D}.
\end{align*}
By Lemma \ref{lemma:POD_2} and Theorem \ref{thm:dPOD_decay}, we obtain
\begin{align}\label{eq:error3}
\normHsemi{\mathcal{S}_M(y_0)f-\mathcal{S}_{M,n}^{N}(y_0)f}{D}\leq \frac{\beta}{\alpha}\sqrt{N\lambda_{n+1}^{N}}\leq \frac{\beta}{\alpha}\sqrt{N}d_n(U_{N};V).
\end{align}
Combining the preceding estimates yields
\begin{align*}
\normHsemi{\mathcal{S}(y)f-\mathcal{S}_{M,n}^{N}(y_0)f}{D}&\lesssim \frac{\epsilon}{3}+C(\alpha,\beta,D){\frac1\alpha}\theta^{\frac{2}{p_1}}(\frac{d}{2s})^{\frac{1}{p_1}}(M+1)^{-\frac{2s}{dp_1}}\beta^{{p_1-2}\over p_1}\norm{f}_{W^{-1,p_2'}(D)}\\
&+\frac{\beta}{\alpha}\sqrt{N}d_{n}(U_{N};V).\nonumber
\end{align*}
Upon requesting that all three error components in \eqref{eq:error1}, \eqref{eq:error2} and
\eqref{eq:error3} fall within a prescribed accuracy ${\epsilon}/{3}$, we deduce the following
complexity
\begin{align}\label{eq:cost_final}
{M=\mathcal{O}(\epsilon^{-\frac{dp_1}{2s}})}\quad \text{ and }\quad n=\min\left\{j\in \mathbb{N}: d_j(U_{N};V)\leq \frac{\alpha\epsilon}{3\beta\sqrt{N}}\right\}.
\end{align}
Therefore, we obtain the following complexity estimate on the POD approximation $\mathcal{S}_{M,n}^N(y_0)f$.
\begin{theorem}\label{thm:overall}
Let $\epsilon>0$ be any prescribed accuracy and $y\in \Omega$. Let $N$ be the number of snapshots
fulfilling \eqref{eq:error1}. Then condition \eqref{eq:cost_final} implies the following error estimate
between the solution $\mathcal{S}(y)f$ to \eqref{eqn:spde} and the approximation $\mathcal{S}_{M,n}^{N}(y_0)f$ via \eqref{eqn:weakform_Mn}:
\[
\normHsemi{\mathcal{S}(y)f-\mathcal{S}_{M,n}^{N}(y_0)f}{D}=\mathcal{O}(\epsilon).
\]
\end{theorem}
\end{comment}
\section{Concluding remarks}\label{sec:conclusion}
In this paper, we have analyzed three types of multiscale methods in the framework of the generalized multiscale finite
element methods (GMsFEMs) for elliptic problems with heterogeneous high-contrast coefficients. Their convergence rates
in the energy norm are derived under a very mild assumption on the source term, and are given in terms
of the eigenvalues and coarse grid mesh size. It is worth pointing out that the analysis does not rely on any oversampling
technique that is typically adopted in existing studies. The analysis indicates that the eigenvalue decay behavior of
eigenvalue problems with high-contrast heterogeneous coefficients is crucial for the convergence behavior
of the multiscale methods, including the GMsFEM. This motivates further investigations on such eigenvalue problems in order
to gain a better mathematical understanding of these methods. Some partial findings along this line have been presented
in \cite{li2017low}; however, much more work remains to be done.
\section*{Acknowledgements}
The work was partially supported by the Hausdorff Center for Mathematics, University of Bonn, Germany. The author acknowledges the
support from the Royal Society through a Newton international fellowship, and thanks Eric Chung (Chinese University of Hong Kong), Juan Galvis (Universidad Nacional de Colombia, Colombia), Michael Griebel (University of Bonn, Germany) and Daniel Peterseim (University of Augsburg, Germany) for fruitful discussions on the topic of the paper.
\section{Introduction}
Quadrotor unmanned aerial vehicles are characterized by a simple mechanical structure composed of two pairs of counter-rotating outrunner motors, each driving a dedicated propeller, resulting in a platform with a high thrust-to-weight ratio that is able to achieve vertical takeoff and landing (VTOL) maneuvers and operate in a broad spectrum of flight scenarios.
Quadrotors have good flight endurance characteristics and acceptable payload transporting potential for a plethora of applications.
Although the quadrotor UAV has six degrees of freedom, it is underactuated, since it has only four inputs and can track at most four commands.
Numerous theoretical and experimental works on quadrotors exist, including results demonstrating aerobatic maneuvers \cite{geomquadlee}, decentralized collision avoidance for multiple quadrotors \cite{Gillula}, safe passage schemes satisfying constraints on velocities, accelerations, and inputs \cite{Mellinger}, backstepping control laws \cite{MahonyHamel}, and hybrid global/robust controllers \cite{Casau}, \cite{Abdessameud}, \cite{NaldiRob}.
This work follows the geometric framework.
A geometric nonlinear control system (GNCS) for a quadrotor UAV is developed directly on the special Euclidean group, thus inherently entailing in the control design the characteristics of the nonlinear configuration manifold, and avoiding singularities and ambiguities associated with minimal attitude representations.
The key contributions of this work are:
(a) An attitude controller and a position controller are developed based on nonlinear surfaces composed of tracking errors that evolve directly on the nonlinear configuration manifold.
These controllers allow for precision pose tracking by tuning three gains per controller and are able to follow attitude and position tracking commands.
(b) In contrast to other GNCSs, such as \cite{geomquadlee}, \cite{geommac}--\cite{qeopidfar}, rigorous stability proofs are developed, and regions of attraction both with and without restrictions on the initial position/velocity error are identified.
A region of attraction independent of the initial position/velocity error is desired since it introduces simplicity in trajectory design.
The proposed strategies are validated in simulation in the presence of motor saturation and wind disturbances.
\section{Quadrotor Kinetics Model}
\IEEEpubidadjcol
The quadrotor studied comprises two pairs of counter-rotating outrunner motors, see Fig. \ref{Quadrotor}.
Each motor drives a dedicated propeller and generates thrust and torque normal to the plane produced by the centers of mass (CM) of the four rotors.
An inertial reference frame I$_{R}\big\{\mathbf{E}_1,\mathbf{E}_2,\mathbf{E}_3\big\}$ and a body-fixed frame I$_{b}\big\{\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3\big\}$ are employed, with the origin of the latter located at the quadrotor CM, which lies in the plane of the four rotor CMs.
Vectors $\mathbf{e}_{1}$ and $\mathbf{e}_{2}$ are collinear with the two quadrotor legs, see Fig. \ref{Quadrotor}.
\begin{figure}[h!]
\centering
\includegraphics[width=1\columnwidth]{./Quad-eps-converted-to.pdf}
\put(-82,74){\parbox{\columnwidth}{$\mathbf{e}_{1}$}}
\put(-59,101){\parbox{\columnwidth}{$\mathbf{e}_{2}$}}
\put(-101,142){\parbox{\columnwidth}{$\mathbf{e}_{3}$}}
\put(-143,71){\parbox{\columnwidth}{$\mathbf{x}$}}
\put(-177,37){\parbox{\columnwidth}{$\mathbf{E}_{1}$}}
\put(-147,58){\parbox{0.4\linewidth}{$\mathbf{E}_{2}$}}
\put(-196,107){\parbox{0.4\linewidth}{$\mathbf{E}_{3}$}}
\put(-95,112){\parbox{0.4\linewidth}{$f_{1}$}}
\put(-82,122){\parbox{0.4\linewidth}{$f_{2}$}}
\put(-113,127){\parbox{0.4\linewidth}{$f_{3}$}}
\put(-129,116){\parbox{0.4\linewidth}{$f_{4}$}}
\caption{Quadrotor with coordinate frames, and actuator forces.}
\label{Quadrotor}
\end{figure}
The following apply throughout the paper.
The actual control input is the thrust of each propeller, which is collinear with $\mathbf{e}_{3}$.
The first and third propellers generate positive thrust when rotating clockwise, while the second and fourth propellers generate positive thrust when rotating counterclockwise.
The magnitude of the total thrust is denoted by $f=\sum_{i=1}^{4} f_i\in\mathbb{R}$, where $f_{i}$ and other system variables are defined in Table \ref{Table}.
\begin{table}[h]
\caption{Definitions of variables.}
\label{Table}
\begin{center}
\begin{tabular}{|l|l|}
\hline
$\mathbf{x}\in\mathbb{R}^3$ & Quadrotor CM position wrt. I$_R$ in I$_R$\\
\hline
$\mathbf{v}\in\mathbb{R}^3$ & Quadrotor CM velocity wrt. I$_R$ in I$_R$\\
\hline
$^{b}\boldsymbol{\omega}\in\mathbb{R}^3$ & Quadrotor angular velocity wrt I$_R$ in I$_{b}$\\
\hline
$\mathbf{R}\in\text{SO}\left(3\right)$ & Rotation matrix from $\mathbf{I}_b$ to $\mathbf{I}_R$ frame\\
\hline
$^b\mathbf{u}\in\mathbb{R}^3$ & Control moment $^b\mathbf{u}{=}[{}^{b}u_{1};{}^{b}u_{2};{}^{b}u_{3}]$ in I$_b$\\
\hline
$f_i\in\mathbb{R}$ & Force produced by the $i$-th propeller along $\mathbf{e}_{3}$ \\
\hline
$b_T\in\mathbb{R}^{+}$ & Torque coefficient \\
\hline
$g\in\mathbb{R}$ & Gravity constant\\
\hline
$d\in\mathbb{R}^{+}$ & Distance between system CM and each motor axis\\
\hline
$\mathbf{J}\in\mathbb{R}^{3\times3}$ & Inertia matrix of the quadrotor in I$_b$\\
\hline
$m\in\mathbb{R}$ & Quadrotor total mass\\
\hline
$\lambda_{min,max}(.)$ & Minimum, maximum eigenvalue of $(.)$ respectively\\
\hline
\end{tabular}
\end{center}
\end{table}
The motor torques, $\boldsymbol{\tau}_{i}$, corresponding to each propeller are assumed to be proportional to thrust,
\begin{IEEEeqnarray}{C}
\boldsymbol{\tau}_{i}=(-1)^{i}b_{T}f_{i}\mathbf{e}_{3},\; i=1,\dots,4
\end{IEEEeqnarray}
where the $(-1)^{i}$ term accounts for the rotation direction (clockwise or counterclockwise) of each propeller.
The control inputs include the total propeller thrust $f$ and moment, $^{b}\mathbf{u}$, given by,
\begin{IEEEeqnarray}{C}
\begin{bmatrix}
f\\
^b\mathbf{u}
\end{bmatrix}
=
\begin{bmatrix}
1&1&1&1\\
0&d&0&-d\\
-d&0&d&0\\
-b_{T}&b_{T}&-b_{T}&b_{T}\\
\end{bmatrix}\!\!\mathbf{F},\;
\mathbf{F}=\begin{bmatrix}
f_{1}\\
f_{2}\\
f_{3}\\
f_{4}\\
\end{bmatrix}\IEEEeqnarraynumspace
\label{eq:distribution}
\end{IEEEeqnarray}
with $\mathbf{F}\in\mathbb{R}^{4}$ the thrust vector; the $4\times4$ matrix is always full rank for $d,b_{T}\in\mathbb{R}^{+}$.
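For implementation, the individual propeller thrusts are recovered from the commanded wrench by inverting (\ref{eq:distribution}); a minimal NumPy sketch, using the values of $d$ and $b_{T}$ reported later in Section \ref{simulation}:

```python
import numpy as np

d, b_T = 0.23, 0.0121  # arm offset [m] and torque coefficient [m]

# Allocation matrix mapping propeller thrusts F to (f, ^b u).
M = np.array([
    [ 1.0,  1.0,  1.0,  1.0],
    [ 0.0,  d,    0.0, -d  ],
    [-d,    0.0,  d,    0.0],
    [-b_T,  b_T, -b_T,  b_T],
])

def thrusts_from_wrench(f, u):
    """Recover F = [f1; f2; f3; f4] from total thrust f and moment u."""
    wrench = np.concatenate(([f], u))
    return np.linalg.solve(M, wrench)  # M is full rank for d, b_T > 0

F = thrusts_from_wrench(12.0, np.zeros(3))  # hover-like wrench: equal thrusts
```

For a pure-thrust command with zero moment, the solve distributes the load equally over the four rotors, as expected from the symmetry of the allocation matrix.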
The spatial configuration of the quadrotor UAV is described by the quadrotor attitude and the location of its center of mass, both with respect to $\mathbf{I}_{R}$.
The configuration manifold is the special Euclidean group SE(3)=$\mathbb{R}^{3}\times\text{SO(3)}$.
The total thrust produced by the propellers, in $\mathbf{I}_{R}$, is given by $\mathbf{R}f\mathbf{e}_{3}$.
The equations of motion of the quadrotor are given by,
\begin{IEEEeqnarray}{rCl}
\mathbf{\dot{x}}&=&{}\mathbf{v}\IEEEnonumber\IEEEeqnarraynumspace\\
m\dot{\mathbf{v}}&=&-mg\mathbf{E}_{3}+\mathbf{R}f\mathbf{e}_{3}+\boldsymbol{\delta}_{x}\IEEEyesnumber\label{eq:position}\\
\mathbf{J}{}^{b}\dot{\boldsymbol{\omega}}&=&{}^{b}\mathbf{u}-{}^{b}\boldsymbol{\omega}\times\mathbf{J}{}^{b}\boldsymbol{\omega}+\boldsymbol\delta_{R}\IEEEyesnumber\IEEEeqnarraynumspace\label{eq:attitude}\\
\dot{\mathbf{R}}&=&\mathbf{R}S({}^{b}\boldsymbol{\omega}) \IEEEyesnumber
\label{attitude_kinem}\end{IEEEeqnarray}
where $\boldsymbol\delta_{x}$, $\boldsymbol\delta_{R}$ are disturbance terms and $S(.):\mathbb{R}^{3}\rightarrow\mathfrak{so}(3)$ is the cross product map given by,
\begin{IEEEeqnarray}{rCl}
\begin{array}{c}
S(\mathbf{r}){=}[{0},{-r_{3}},{r_{2}};{r_{3}},{0},{-r_{1}};{-r_{2}},{r_{1}},0]\\
{S^{-1}}(S(\mathbf{r})){=}\mathbf{r}
\end{array}\label{iso}
\end{IEEEeqnarray}
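The maps in (\ref{iso}) translate directly to code; a minimal NumPy sketch:

```python
import numpy as np

def S(r):
    """Hat map R^3 -> so(3): S(r) @ v equals the cross product r x v."""
    return np.array([[0.0,  -r[2],  r[1]],
                     [r[2],  0.0,  -r[0]],
                     [-r[1], r[0],  0.0]])

def S_inv(A):
    """Vee map: recover r from the skew-symmetric matrix S(r)."""
    return np.array([A[2, 1], A[0, 2], A[1, 0]])
```

By construction, `S_inv(S(r))` returns `r`, and `S(r) @ v` reproduces `np.cross(r, v)`.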
\section{Quadrotor Tracking Controls\label{tracControls}}
Given the underactuated nature of quadrotors, in this paper two flight modes are considered:
\begin{itemize}
\item \textit{Attitude Control Mode}:
The controller achieves tracking for the attitude of the quadrotor UAV.
\item \textit{Position Control Mode}:
The controller achieves tracking for the quadrotor CM position and a pointing attitude associated with the quadrotor yaw.
\end{itemize}
Using these flight modes in suitable succession, a quadrotor can perform a complex desired flight maneuver.
Moreover it will be shown that each mode has stability properties that allow the safe switching between flight modes (end of Section \ref{tracControls}).
\newcounter{Prop1}
\addtocounter{Prop1}{1}
\newcounter{Prop2}
\addtocounter{Prop2}{2}
\newcounter{Prop3}
\addtocounter{Prop3}{3}
\newcounter{Prop4}
\addtocounter{Prop4}{4}
\newcounter{Prop5}
\addtocounter{Prop5}{5}
\newcounter{sub}
\addtocounter{sub}{1}
\subsection{Attitude Control Mode (ACM)\label{conmodatt}}
An attitude control system able to follow an arbitrary smooth desired orientation $\mathbf{R}_{d}(t)\in\text{SO(3)}$ and its associated angular velocity $^{b}\boldsymbol{\omega}_{d}(t)\in\mathbb{R}^{3}$ is developed next under the assumption that $\boldsymbol{\delta}_{R}=0_{3\times1}$.
\subsubsection{Attitude tracking errors}
For a given tracking command ($\mathbf{R}_{d}$, $^{b}\boldsymbol{\omega}_{d}$) and current attitude and angular velocity ($\mathbf{R}$, $^{b}\boldsymbol{\omega}$), two sets of geometric attitude tracking errors are considered.
Each set consists of an \textit{attitude error function} $\Psi:\text{SO(3)}\times\text{SO(3)}\rightarrow\mathbb{R}$, and an \textit{attitude error vector} $\mathbf{e}_{R}\in\mathbb{R}^{3}$, defined as follows.
The first set is, \cite{qeoadapclee}:
\begin{IEEEeqnarray}{rCl}
\Psi(\mathbf{R},\mathbf{R}_{d})&=&\frac{1}{2}tr[\mathbf{I}-\mathbf{R}^{T}_{d}\mathbf{R}]\geq 0
\label{error_function}\\
\mathbf{e}_{R}(\mathbf{R},\mathbf{R}_{d})&=&\frac{1}{2}S^{-1}(\mathbf{R}^{T}_{d}\mathbf{R}-\mathbf{R}^{T}\mathbf{R}_{d})
\label{att_error}
\end{IEEEeqnarray}
where $tr[.]$ is the trace function.
The second according to \cite{err_fun}:
\begin{IEEEeqnarray}{rCl}
\Psi(\mathbf{R},\mathbf{R}_{d})&=&2-\sqrt{1+tr[\mathbf{R}^{T}_{d}\mathbf{R}]}\geq 0
\label{error_function_A}\\
\mathbf{e}_{R}(\mathbf{R},\mathbf{R}_{d})&=&\frac{1}{2}S^{-1}(\mathbf{R}^{T}_{d}\mathbf{R}{-}\mathbf{R}^{T}\mathbf{R}_{d})(1{+}tr[\mathbf{R}^{T}_{d}\mathbf{R}])^{-\frac{1}{2}}
\label{att_error_A}
\end{IEEEeqnarray}
Both sets, (\ref{error_function})-(\ref{att_error_A}), share the same angular velocity error vector, $\mathbf{e}_{\omega}{\in}\mathbb{R}^{3}$,
\begin{IEEEeqnarray}{rCl}
\mathbf{e}_{\omega}(\mathbf{R},{}^{b}\boldsymbol{\omega},\mathbf{R}_{d},{}^{b}\boldsymbol{\omega}_{d})&=&{}^{b}\boldsymbol{\omega}-\mathbf{R}^{T}\mathbf{R}_{d}{}^{b}\boldsymbol{\omega}_{d}
\label{ang_vel_error}
\end{IEEEeqnarray}
For the ACM, the controller is designed to be compatible with both sets of $\mathbf{e}_{R}$.
This is because the first set, $\{(\ref{error_function}),(\ref{att_error})\}$, bestows excellent tracking properties on the controller as long as the orientation error remains below $90^{o}$ wrt. an axis-angle rotation; for larger orientation errors, however, the magnitude of the attitude error vector (\ref{att_error}) is no longer proportional to the orientation error, and performance deteriorates as the state approaches the antipodal equilibrium (see \cite{err_fun} for more details).
The second set, $\{(\ref{error_function_A}),(\ref{att_error_A})\}$, does not suffer from this problem but is marginally outperformed by the first set when the attitude error is below $90^{o}$.
Thus, depending on the flight conditions, the user can choose which set of attitude tracking errors to use.
Note that the maximum attitude difference, $180^{o}$ with respect to an equivalent axis-angle rotation between $\mathbf{R}$ and $\mathbf{R}_{d}$, occurs when the rotation matrices are antipodal; then (\ref{error_function}) or (\ref{error_function_A}) yields $\Psi(\mathbf{R},\mathbf{R}_{d})=2$, i.e., 100\% error.
If both rotation matrices express the same attitude, i.e., $\mathbf{R}=\mathbf{R}_{d}$, then $\Psi(\mathbf{R},\mathbf{R}_{d})=0$, i.e., 0\% error.
Important properties regarding (\ref{error_function})-(\ref{ang_vel_error}), including the associated attitude error dynamics used throughout this work are included in \text{Proposition \arabic{Prop1}} and \text{Proposition \arabic{Prop2}} found in Appendix \ref{appA}.
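Both error sets reduce to a few matrix operations on $\mathbf{R}^{T}_{d}\mathbf{R}$; a minimal NumPy sketch of (\ref{error_function})-(\ref{ang_vel_error}), where the helper \texttt{vee} implements $S^{-1}$:

```python
import numpy as np

def vee(A):
    """Extract r from the skew-symmetric matrix S(r)."""
    return np.array([A[2, 1], A[0, 2], A[1, 0]])

def errors_set1(R, Rd):
    """First error set: Psi = tr[I - Rd^T R]/2 and the unscaled e_R."""
    Q = Rd.T @ R
    Psi = 0.5 * np.trace(np.eye(3) - Q)
    e_R = 0.5 * vee(Q - Q.T)
    return Psi, e_R

def errors_set2(R, Rd):
    """Second error set: Psi = 2 - sqrt(1 + tr[Rd^T R]) and the scaled e_R."""
    Q = Rd.T @ R
    Psi = 2.0 - np.sqrt(1.0 + np.trace(Q))
    e_R = 0.5 * vee(Q - Q.T) / np.sqrt(1.0 + np.trace(Q))
    return Psi, e_R

def e_omega(R, omega, Rd, omega_d):
    """Angular velocity error, shared by both sets."""
    return omega - R.T @ Rd @ omega_d
```

For example, a $90^{o}$ yaw error gives $\Psi=1$ under the first set and $\Psi=2-\sqrt{2}$ under the second, with the error vector aligned with the rotation axis in both cases.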
\subsubsection{Attitude tracking controller}
A controller stabilizing $\mathbf{e}_{R}$, $\mathbf{e}_{\omega}$ to zero exponentially and almost globally is developed next, under the assumption that $\boldsymbol{\delta}_{R}=0_{3\times1}$.
\textbf{Proposition \arabic{Prop3}.}
For $\eta,k_{R},k_{\omega}\in\mathbb{R}^{+}$, with,
\begin{IEEEeqnarray}{C}
\eta>{k_{R}}/{k_{\omega}}^{2}\label{eq:hta}
\end{IEEEeqnarray}
and initial conditions satisfying,
\begin{IEEEeqnarray}{C}
\Psi(\mathbf{R}(0),\mathbf{R}_{d}(0))<2
\label{Psi_0}\\
\lVert\mathbf{e}_{\omega}(0)\rVert^{2}<2\eta k_{R}\left(2-\Psi(\mathbf{R}(0),\mathbf{R}_{d}(0))\right)\label{surface_0}
\end{IEEEeqnarray}
and for a desired arbitrary smooth attitude $\mathbf{R}_{d}(t)\in\text{SO(3)}$ in,
\begin{IEEEeqnarray}{rCl}
L_{2}&=&\{(\mathbf{R},\mathbf{R}_{d})\in\text{SO(3)}\times\text{SO(3)}|\Psi(\mathbf{R},\mathbf{R}_{d})<2 \}
\label{L_2}
\end{IEEEeqnarray}
under the assumption of perfect parameter knowledge, consider the following nonlinear surface-based controller,
\begin{IEEEeqnarray}{rCl}
^{b}\mathbf{u}&=&{}^{b}\boldsymbol{\omega}\times\mathbf{J}{}^{b}\boldsymbol{\omega}-\mathbf{J}\left(\frac{k_{R}}{k_{\omega}}\dot{\mathbf{e}}_{R}+\mathbf{a}_{d}+\eta\mathbf{s}_{R}\right)\label{att_contr}
\end{IEEEeqnarray}
where $\mathbf{a}_{d}$ is defined in App. A(\ref{E_ad}) and the surface $\mathbf{s}_{R}$ is,
\begin{IEEEeqnarray}{C}
\mathbf{s}_{R}=k_{R}\mathbf{e}_{R}+k_{\omega}\mathbf{e}_{\omega}\label{att_surface}
\end{IEEEeqnarray}
Then, the zero equilibrium of the quadrotor closed loop attitude tracking error $(\mathbf{e}_{R},\mathbf{e}_{\omega})=(\mathbf{0},\mathbf{0})$ is almost globally exponentially stable;
moreover there exist constants $\mu,\tau>0$ such that
\begin{IEEEeqnarray}{C}
\Psi(\mathbf{R},\mathbf{R}_{d})<\min\{2,\mu e^{-\tau t}\}
\label{Psi_bou}
\end{IEEEeqnarray}
\textbf{Proof.}
See Appendix \ref{appAtt}.
The convergence properties introduced by $\mathbf{s}_{R}$ to the developed attitude controller are analyzed at the end of Section \ref{tracControls} with the developed position controller.
The initial angular velocity error can be arbitrarily large, provided sufficiently large gains are used.
The region of attraction given by (\ref{Psi_0})-(\ref{surface_0}) ensures that the initial attitude error is less than $180^{o}$ with respect to an axis-angle rotation for a desired $\mathbf{R}_{d}$ (i.e., $\mathbf{R}_{d}(t)$ is not antipodal to $\mathbf{R}(t)$).
Consequently exponential stability is guaranteed almost globally.
This is the best that one can do since it has been shown that the topology of SO(3) prohibits the design of a smooth global controller, \cite{obstruction}.
Because (\ref{att_contr}) is developed directly on SO(3), it completely avoids the singularities and ambiguities associated with minimal attitude representations such as Euler angles or quaternions.
Moreover, this controller can be applied to the attitude dynamics of any rigid body, not only quadrotor systems.
Since attitude tracking does not depend on $f$, the ACM is more suited for short durations of time.
The thrust magnitude can be selected to achieve an additional objective compatible with the attitude tracking command, i.e. track a desired altitude command \cite{geomquadlee},\cite{geommac},\cite{geomquadlee_asian}.
Finally, although (\ref{att_contr}) was developed under the assumption that $\boldsymbol{\delta}_{R}=0_{3\times1}$, its robustness will be tested in simulation in the presence of motor saturation and wind disturbances.
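Assembling the control moment (\ref{att_contr}) is direct once the error quantities are available; in the sketch below the feedforward terms $\dot{\mathbf{e}}_{R}$ and $\mathbf{a}_{d}$ (defined in Appendix A) are supplied by the caller, and the default gains are those used later in the simulation section:

```python
import numpy as np

def attitude_moment(J, omega, e_R, e_omega, edot_R, a_d,
                    k_R=5625.0, k_omega=150.0, eta=0.8):
    """Surface-based moment: u = w x Jw - J(kR/kw * edot_R + a_d + eta * s_R)."""
    # Gain condition eta > k_R / k_omega^2 from the proposition.
    assert eta > k_R / k_omega**2, "gain condition violated"
    s_R = k_R * e_R + k_omega * e_omega  # nonlinear surface
    return np.cross(omega, J @ omega) - J @ (k_R / k_omega * edot_R
                                             + a_d + eta * s_R)
```

With all tracking errors and feedforward terms at zero, the moment reduces to the gyroscopic term $\boldsymbol{\omega}\times\mathbf{J}\boldsymbol{\omega}$ alone, as expected from (\ref{att_contr}).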
\subsection{Position Control Mode (PCM)\label{conmodpos}}
Under the assumption that $\boldsymbol{\delta}_{x}=0_{3\times1}$, a control system is developed for the position dynamics of the quadrotor, stabilizing the tracking errors to zero asymptotically, almost globally.
\subsubsection{Position tracking errors}
For an arbitrary smooth position tracking instruction $\mathbf{x}_{d}\in\mathbb{R}^{3}$, the tracking errors for the position and the velocity are taken as,
\begin{IEEEeqnarray}{C}
\mathbf{e}_{x}=\mathbf{x}-\mathbf{x}_{d},\; \mathbf{e}_{v}=\mathbf{v}-\dot{\mathbf{x}}_{d}\label{pos_error}
\end{IEEEeqnarray}
For $k_{x},k_{v}{\in}\mathbb{R}^{+}$ the position nonlinear surface is defined as,
\begin{IEEEeqnarray}{C}
\mathbf{s}_{x}=k_{x}\mathbf{e}_{x}+k_{v}\mathbf{e}_{v}\label{pos_surf}
\end{IEEEeqnarray}
In the PCM, the attitude dynamics must be compatible with the desired position tracking.
This results in the definition of a position-induced attitude matrix, $\mathbf{R}_{x}(t){\in}\text{SO(3)}$, for use as an attitude command.
To define this matrix, first the desired thrust direction of the quadrotor, $\mathbf{e}_{3_{x}}$, is computed by,
\begin{IEEEeqnarray}{C}
\mathbf{e}_{3_{x}}{=}\frac{mg\mathbf{E}_{3}-m\frac{k_{x}}{k_{v}}\mathbf{e}_{v}-a\mathbf{s}_{x}+m\ddot{\mathbf{x}}_{d}}{\lVert{mg\mathbf{E}_{3}-m\frac{k_{x}}{k_{v}}\mathbf{e}_{v}-a\mathbf{s}_{x}+m\ddot{\mathbf{x}}_{d}}\rVert}\in\text{S}^{2}, a{\in}\mathbb{R}^{+}\label{e_3_x}
\end{IEEEeqnarray}
where it is assumed that by selecting ${\mathbf{x}}_{d}$, $\dot{\mathbf{x}}_{d}$, $\ddot{\mathbf{x}}_{d}$ hereafter,
\begin{IEEEeqnarray*}{C}
{\lVert{mg\mathbf{E}_{3}-m\frac{k_{x}}{k_{v}}\mathbf{e}_{v}-a\mathbf{s}_{x}+m\ddot{\mathbf{x}}_{d}}\rVert}> 0
\end{IEEEeqnarray*}
Secondly the user defines a desired yaw direction $\mathbf{e}_{1_{d}}\in\text{S}^{2}$ of the $\mathbf{e}_{1}$ body-fixed axis of the quadrotor such that $\mathbf{e}_{1_{d}}\nparallel\mathbf{e}_{3_{x}}$.
This is used to find the position-induced heading, $\mathbf{e}_{1_{h}}$, \cite{geommac},
\begin{IEEEeqnarray*}{C}
\mathbf{e}_{1_{h}}=\frac{(\mathbf{e}_{3_{x}}\times\mathbf{e}_{1_{d}})\times\mathbf{e}_{3_{x}}}{\lVert(\mathbf{e}_{3_{x}}\times\mathbf{e}_{1_{d}})\times\mathbf{e}_{3_{x}}\rVert}
\end{IEEEeqnarray*}
The position-related attitude $\mathbf{R}_{x}(t){\in}\text{SO(3)}$ and angular velocity $^{b}\boldsymbol{\omega}_{x}(t){\in}\mathbb{R}^{3}$ are,
\begin{IEEEeqnarray}{C}
\mathbf{R}_{x}{=}\left[\mathbf{e}_{1_{h}},\frac{\mathbf{e}_{3_{x}}\times\mathbf{e}_{1_{h}}}{\lVert\mathbf{e}_{3_{x}}\times\mathbf{e}_{1_{h}}\rVert},\mathbf{e}_{3_{x}}\right],\;{}^{b}\boldsymbol{\omega}_{x}{=}{S^{-1}}\!(\mathbf{R}^{T}_{x}\dot{\mathbf{R}}_{x})\label{Rxbox}
\end{IEEEeqnarray}
and the attitude dynamics are guided to follow $\mathbf{R}_{x}(t)$, $^{b}\boldsymbol{\omega}_{x}(t)$.
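The construction (\ref{e_3_x})-(\ref{Rxbox}) can be sketched as follows, with the unnormalized thrust vector $mg\mathbf{E}_{3}-m\frac{k_{x}}{k_{v}}\mathbf{e}_{v}-a\mathbf{s}_{x}+m\ddot{\mathbf{x}}_{d}$ passed in precomputed (assumed nonzero and not parallel to $\mathbf{e}_{1_{d}}$, as in the text):

```python
import numpy as np

def position_attitude(b, e1_d):
    """Build R_x from the unnormalized thrust vector b and heading e1_d.

    b stands for mg*E3 - m*(kx/kv)*e_v - a*s_x + m*xdd_d (precomputed)."""
    e3x = b / np.linalg.norm(b)                 # desired thrust direction
    h = np.cross(np.cross(e3x, e1_d), e3x)      # project e1_d off e3x
    e1h = h / np.linalg.norm(h)                 # position-induced heading
    e2 = np.cross(e3x, e1h)
    e2 = e2 / np.linalg.norm(e2)
    return np.column_stack([e1h, e2, e3x])      # columns form R_x in SO(3)
```

For a purely vertical thrust vector and heading along $\mathbf{E}_{1}$ the construction returns the identity, and for any admissible inputs the result is orthonormal with unit determinant.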
\subsubsection{Position tracking controller\label{sec:thr_mag}}
Under the assumption that $\boldsymbol{\delta}_{x}=0_{3\times1}$, a control system is developed for the position dynamics of the quadrotor UAV, achieving almost global asymptotic stabilization of ($\mathbf{e}_{x}$,$\mathbf{e}_{v}$,$\mathbf{e}_{R}$,$\mathbf{e}_{\omega}$) to the zero equilibrium through the combined action of Propositions \arabic{Prop4} and \arabic{Prop5}, introduced below.
For a sufficiently smooth pointing direction $\mathbf{e}_{1_{d}}(t)\in\text{S}^{2}$, and a sufficiently smooth position tracking instruction $\mathbf{x}_{d}(t)\in\mathbb{R}^{3}$ the following position controller is defined,
\begin{IEEEeqnarray}{rCl}
\addtocounter{equation}{1}
\!\!\!\!f(\mathbf{x}_{d},\dot{\mathbf{x}}_{d},\ddot{\mathbf{x}}_{d})&{=}&(mg\mathbf{E}_{3}{-}m\frac{k_{x}}{k_{v}}\mathbf{e}_{v}{-}a\mathbf{s}_{x}{+}m\ddot{\mathbf{x}}_{d})^{T}\mathbf{R}\mathbf{e}_{3}\IEEEyessubnumber\label{f}\\
\!\!\!\!{}^{b}\mathbf{u}(\mathbf{R}_{x},{{}^{b}\boldsymbol{\omega}_{x}})&{=}&{}^{b}\boldsymbol{\omega}{\times}\mathbf{J}{}^{b}\boldsymbol{\omega}{-}\mathbf{J}\left(\frac{k_{R}}{k_{\omega}}\dot{\mathbf{e}}_{R_{x}}{+}\mathbf{a}_{d_{x}}{+}\eta\mathbf{s}_{R_{x}}\right)\IEEEyessubnumber\label{att_contrx}
\end{IEEEeqnarray}
where $\mathbf{s}_{R_{x}}$, $\mathbf{a}_{d_{x}}$, are given by (\ref{att_surface}), App. A(\ref{E_ad}), and $\dot{\mathbf{e}}_{R_{x}}$ is given by App. A(\ref{dot_Att_Error}) if $\{(\ref{error_function}),(\ref{att_error})\}$ are used and is given by App. A(\ref{dot_Att_Error_A}) if $\{(\ref{error_function_A}),(\ref{att_error_A})\}$ are used.
The desired attitude matrix that is used in all the components of (\ref{att_contrx}) is given by (\ref{Rxbox}).
The utilization of nonlinear surfaces results in the thrust feedback expression (\ref{f}), which involves three gains.
However, (\ref{f}) can be scaled to a PD form as in \cite{geomquadlee}.
Since (\ref{f}) is paired with the newly developed attitude controller (\ref{att_contrx}), it forms a new PCM controller with improved closed loop response wrt. \cite{geomquadlee}, see Sect. \ref{simulation}; its closed-loop stabilization properties are investigated next.
The closed loop system defined by (\ref{eq:position})-(\ref{attitude_kinem}) under the action of (\ref{f})-(\ref{att_contrx}) is shown to achieve almost global asymptotic stabilization of ($\mathbf{e}_{x}$,$\mathbf{e}_{v}$,$\mathbf{e}_{R}$,$\mathbf{e}_{\omega}$) to the zero equilibrium by the combined action of Propositions \arabic{Prop4} and \arabic{Prop5}.
Specifically (\ref{att_contrx}) drives $\mathbf{R}(t)$ to asymptotically track $\mathbf{R}_{x}(t)$ and combined with (\ref{f}), asymptotic position tracking is achieved.
The first result of exponential stability for a sub-domain of the quadrotor closed loop position dynamics is presented next.
\textbf{Proposition \arabic{Prop4}.}
Considering the controllers in (\ref{f}), (\ref{att_contrx}) and for initial conditions in the domain,
\begin{IEEEeqnarray}{rCl}
D_{x}&=&\{(\mathbf{e}_{x},\mathbf{e}_{v},\mathbf{e}_{R},\mathbf{e}_{\omega})\in\mathbb{R}^{3}\times\mathbb{R}^{3}\times\mathbb{R}^{3}\times\mathbb{R}^{3}|\IEEEnonumber\\
&{ }&\Psi(\mathbf{R}(0),\mathbf{R}_{x}(0))<\psi_{p}<1\}
\label{D_x}
\end{IEEEeqnarray}
and for $\ddot{\mathbf{x}}_{d}\in\mathbb{R}^{3\times1}$, $B\in\mathbb{R}^{+}$ such that the following holds,
\begin{IEEEeqnarray}{C}
\lVert mg\mathbf{E}_{3}+m\ddot{\mathbf{x}}_{d}\rVert \leq B \label{B}
\end{IEEEeqnarray}
We define $\mathbf{\Pi}_{1},\mathbf{\Pi}_{2}\in\mathbb{R}^{2\times2}$ as,
\begin{IEEEeqnarray}{C}
\mathbf{\Pi}_{1}{=}
\begin{bmatrix}
ak_{x}^{2}(1{-}\theta)&-ak_{x}k_{v}\theta{-}\frac{mk_{x}^{2}\theta}{2k_{v}}\\
-ak_{x}k_{v}\theta{-}\frac{mk_{x}^{2}\theta}{2k_{v}}&ak_{v}^{2}{-}\theta(mk_{x}{+}ak_{v}^{2})
\end{bmatrix},\IEEEnonumber\\
\mathbf{\Pi}_{2}=
\begin{bmatrix}
Bk_{x}&0\\
Bk_{v}&0
\end{bmatrix}\label{P}
\end{IEEEeqnarray}
where $\theta<\theta_{max}\in\mathbb{R}^{+}$ and $\theta_{max}$ is given by,
\begin{IEEEeqnarray}{rCl}
\theta_{max}&=&\min\{\frac{ak_{v}^{2}}{ak_{v}^{2}{+}mk_{x}},\delta_{1}+\delta_{2}\},\IEEEyesnumber\label{theta}\\
\delta_{1}&=&2{\frac {k_{v}^{2}\sqrt {4k_{x}^{4}k_{v}^{4}a^{4}+4k_{x}^{5}k_{v}^{2}{a
}^{3}m+2k_{x}^{6}{m}^{2}{a}^{2}}}{k_{x}^{4}{m}^{2}}}
\IEEEnonumber\\
\delta_{2}&=&-4{\frac {{a}^{2}k_{v}^{4}}{{m}^{2}k_{x}^{2}}}{-}2{\frac {ak_{v}^{2}}{mk_{x}}}\IEEEnonumber
\end{IEEEeqnarray}
If $\{(\ref{error_function}),(\ref{att_error})\}$ is used, the attitude error bound, $\psi_{p}$, satisfies,
\begin{IEEEeqnarray*}{C}
\theta_{max}=\sqrt{\psi_{p}(2-\psi_{p})}
\end{IEEEeqnarray*}
while if the set $\{(\ref{error_function_A}),(\ref{att_error_A})\}$ is used, $\psi_{p}$ satisfies,
\begin{IEEEeqnarray*}{C}
\theta_{max}=\sqrt{\psi_{p}(1-\frac{\psi_{p}}{4})}
\end{IEEEeqnarray*}
and in conjunction with suitable gains $\eta,k_{R},k_{\omega}\in\mathbb{R}^{+}$ such that,
\begin{IEEEeqnarray}{C}
\lambda_{min}(\mathbf{W}_{3})>\frac{\lVert\mathbf{\Pi}_{2}\rVert^{2}}{4\eta\lambda_{min}(\mathbf{\Pi}_{1})},\mathbf{W}_{3}=\begin{bmatrix}
k_{R}^{2}&0\\
0&k_{\omega}^{2}
\end{bmatrix}\label{w3}
\end{IEEEeqnarray}
then the zero equilibrium of the closed loop errors $(\mathbf{e}_{x},\mathbf{e}_{v},\mathbf{e}_{R},\mathbf{e}_{\omega})$ is exponentially stable in the domain given by (\ref{D_x}).
A region of attraction is identified by (\ref{D_x}), (\ref{theta}), and
\begin{IEEEeqnarray}{C}
\lVert\mathbf{e}_{\omega}(0)\rVert^{2}<2\eta k_{R}\left(\psi_{p}-\Psi(\mathbf{R}(0),\mathbf{R}_{x}(0))\right)\label{ep_0}
\end{IEEEeqnarray}
\textbf{Proof.}
See Appendix \ref{appB}.
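The gain conditions of the proposition can be checked numerically before flight; the sketch below computes $\theta_{max}$ from (\ref{theta}) and evaluates (\ref{w3}) at a chosen $\theta$, using the position gains and mass from Section \ref{simulation} and the hover bound $B=mg$ (an illustrative choice for (\ref{B})):

```python
import numpy as np

def theta_max(a, k_x, k_v, m):
    """Upper bound on theta from the proposition."""
    d1 = 2 * k_v**2 * np.sqrt(4 * k_x**4 * k_v**4 * a**4
                              + 4 * k_x**5 * k_v**2 * a**3 * m
                              + 2 * k_x**6 * m**2 * a**2) / (k_x**4 * m**2)
    d2 = -4 * a**2 * k_v**4 / (m**2 * k_x**2) - 2 * a * k_v**2 / (m * k_x)
    return min(a * k_v**2 / (a * k_v**2 + m * k_x), d1 + d2)

def gain_check(a, k_x, k_v, m, k_R, k_omega, eta, B, theta):
    """Check lambda_min(W3) > ||Pi2||^2 / (4 eta lambda_min(Pi1))."""
    off = -a * k_x * k_v * theta - m * k_x**2 * theta / (2 * k_v)
    Pi1 = np.array([[a * k_x**2 * (1 - theta), off],
                    [off, a * k_v**2 - theta * (m * k_x + a * k_v**2)]])
    Pi2_norm = B * np.hypot(k_x, k_v)   # spectral norm of Pi2
    lam_w3 = min(k_R**2, k_omega**2)    # lambda_min(W3)
    return lam_w3 > Pi2_norm**2 / (4 * eta * np.linalg.eigvalsh(Pi1).min())

a, k_x, k_v, m = 0.5071, 894.62, 59.82, 1.225
ok = gain_check(a, k_x, k_v, m, k_R=5625.0, k_omega=150.0, eta=0.8,
                B=m * 9.81, theta=0.05)
```

With these gains the bound $\theta_{max}$ is strictly between zero and one, and the eigenvalue condition holds for a small $\theta$, consistent with the tuned configuration used in the simulations.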
Proposition \arabic{Prop4} requires the norm of the initial attitude error to be less than $\theta_{max}$ to achieve exponential stability (the upper bound of $\theta$, (\ref{theta}), depends solely on the control gains and the quadrotor mass).
This corresponds to a slightly reduced region of attraction in comparison with the regions in \cite{geomquadlee}, \cite{geommac}-\cite{qeopidfar}, because no restriction on the initial position/velocity error was imposed during the stability proof.
This approach is not only novel wrt. the geometric quadrotor literature, but it also simplifies the trajectory design procedure.
In contrast, the regions of attraction in other geometric treatments include bounds on the initial position or velocity (see \cite{geomquadlee}, \cite{geommac}-\cite{qeopidfar}), meaning that the trajectory must comply with the position/velocity bounds in addition to the attitude bound, a more involved task.
If a user prefers a larger basin of exponential stability, this can be achieved by introducing bounds on the initial position/velocity (see Appendix \ref{appB}, Section (\ref{altROA}) for more details).
Then two new regions of attraction are produced involving larger initial attitude errors and are given by (\ref{ep_0}) and,
\begin{IEEEeqnarray}{C}
\Psi(\mathbf{R}(0),\mathbf{R}_{x}(0))<\psi_{p}<1, \lVert\mathbf{e}_{x/v}(0)\rVert<e_{x/v_{max}}\label{fra_the_xv}\\
\theta<\theta_{max}=\frac{ak_{v}^{2}}{ak_{v}^{2}{+}mk_{x}}\label{thetaxv}
\end{IEEEeqnarray}
where the second inequality in (\ref{fra_the_xv}) denotes either a bound on the initial position error, $e_{x_{max}}$, or a bound on the initial velocity error, $e_{v_{max}}$, but not on both (see Appendix \ref{appB}, Section (\ref{altROA}) for more details and expressions regarding $\Pi_{1}$, $\Pi_{2}$, that comply with (\ref{w3})).
Depending on user preference, the trajectory design procedure can be realized using any one of the three regions of attraction ($\{(\ref{D_x}), (\ref{theta}), (\ref{ep_0})\}$; $\{(\ref{ep_0}), (\ref{fra_the_xv}), (\ref{thetaxv})\}$ using $e_{x_{max}}$; or $\{(\ref{ep_0}), (\ref{fra_the_xv}), (\ref{thetaxv})\}$ using $e_{v_{max}}$), leading to favorable conditions for switching between flight modes.
For completeness, all three regions of exponential stability were derived;
however this work focuses on the region given by $\{(\ref{D_x}), (\ref{theta}), (\ref{ep_0})\}$.
Finally, the proposition that follows shows that the structure of the position controller is characterized by almost global exponential attractiveness.
This compensates for the reduced size of the position/velocity-free region of attraction and gives the user greater freedom with regard to control objectives, since the region of attraction does not depend explicitly on the initial position/velocity error.
If the quadrotor initial states are outside of (\ref{D_x}), with respect to the initial attitude, Proposition \arabic{Prop3} still applies due to the action of (\ref{att_contrx}).
Thus the attitude state enters (\ref{D_x}) in finite time $t^{*}$ and the results of Proposition \arabic{Prop4} take effect.
The result regarding the position mode is stated next.
\textbf{Proposition \arabic{Prop5}.}
For initial conditions satisfying (\ref{surface_0}), and
\begin{IEEEeqnarray}{C}
\psi_{p}\leq\Psi(\mathbf{R}(0),\mathbf{R}_{x}(0))<2\label{Psi_atr}
\end{IEEEeqnarray}
and a uniformly bounded desired acceleration (\ref{B}), the thrust magnitude defined in (\ref{f}), in conjunction with the control moment (\ref{att_contrx}), renders the zero equilibrium of $(\mathbf{e}_{x},\mathbf{e}_{v},\mathbf{e}_{R},\mathbf{e}_{\omega})$ almost globally exponentially attractive.
\textbf{Proof of Proposition \arabic{Prop5}.}
See Proposition 4 in \cite{geommac} but apply the thrust feedback expression (\ref{f}).
Proposition \arabic{Prop5} shows that during the finite time that it takes for the attitude states to enter the region of attraction for exponential stability (\ref{D_x}), (\ref{ep_0}), the position tracking errors (\ref{pos_error}) remain bounded.
The calculated region of exponential attractiveness given by (\ref{Psi_atr}) ensures that the initial attitude error is less than $180^{o}$ with respect to an axis-angle rotation for a desired $\mathbf{R}_{x}$ (i.e., $\mathbf{R}_{x}(t)$ is not antipodal to $\mathbf{R}(t)$).
Consequently the zero equilibrium of the tracking errors is almost globally exponentially attractive.
Note that for both control modes (Sections \ref{conmodatt}, \ref{conmodpos}), the nonlinear surfaces $\mathbf{s}_{R}$ ($\mathbf{s}_{x}$) alter the closed loop dynamics of the nonlinear system, enabling the user to shape the convergence to the zero equilibrium with three gains per surface:
the gains $\eta$ ($a$) affect the reaching time to the surface by penalizing the combined surface error, while the gains $k_{R},k_{\omega}$ ($k_{x},k_{v}$) affect the convergence time on/near the surface by independently penalizing the attitude and angular velocity (position and translational velocity) errors.
This is showcased in Fig. \ref{SMCc}, where the quadrotor response is shown during an attitude maneuver (Fig. \ref{attsur}), and a position maneuver (Fig. \ref{possur}).
In both cases, the same simulation is repeated but with larger gains $\eta$, ($a$), resulting in faster reaching times, see black solid lines in Fig. \ref{attsur},\ref{possur}.
In Fig. \ref{attsur}, doubling $\eta$ improves the reaching time from $t_{\mathbf{s}_{R}}{=}0.169$ to $t_{\mathbf{s}_{R}}{=}0.099$, and in Fig. \ref{possur}, quadrupling $a$ improves the reaching time from $t_{\mathbf{s}_{x}}{=}1.999$ to $t_{\mathbf{s}_{x}}{=}0.569$.
As a result, the strict algebraic relation to the gains imposed by the proposed controller design introduces ``sliding-like'' closed loop dynamics, see description in Fig. \ref{SMCc}, and allows finer control of the convergence rate to the zero equilibrium using the insights gained from the Lyapunov analysis.
Also the sliding behavior is achieved here without the signum function; thus chattering is avoided.
\begin{figure}[!h]
\centering
\subfloat[\label{attsur}]{\includegraphics[width=0.49\columnwidth]{./quat_att_sur-eps-converted-to.pdf}}
~
\subfloat[\label{possur}]{\includegraphics[width=0.49\columnwidth]{./quat_pos_sur-eps-converted-to.pdf}}
\caption{
Sliding behavior produced by, (\ref{att_contr}), ((\ref{f}), (\ref{att_contrx})) using $\{(\ref{error_function_A}),(\ref{att_error_A})\}$.
(\ref{attsur}) Convergence to $\mathbf{s}_{R}$ for a step of $179.9999^{o}$.
(\ref{possur}) Convergence to $\mathbf{s}_{x}$ for a position step to $\mathbf{x}_{d}{=}[1;1;1]cm$.
The black and dashed green lines indicate the reaching phase to $\mathbf{s}_{R,x}$ followed by sliding behavior indicated by blue lines.
The black lines indicate usage of higher sliding gains $\eta,a$.
The reaching times, $t_{\mathbf{s}_{R,x}}$, are colored accordingly.
}
\label{SMCc}
\end{figure}
Due to the combined action of (\ref{f}) with (\ref{att_contrx}), it was possible to identify, for the first time in the geometric literature, a region of attraction \textit{independent} of the initial position/velocity error.
Additionally, the third gain in the developed expression (\ref{f}) allows for more intuitive tuning, thus offering further refinement of the closed loop response.
Concluding, by the combined action of Propositions \arabic{Prop4} and \arabic{Prop5}, almost global asymptotic stabilization of ($\mathbf{e}_{x}$,$\mathbf{e}_{v}$,$\mathbf{e}_{R}$,$\mathbf{e}_{\omega}$) to the zero equilibrium is achieved.
Since both flight modes have almost global stability properties, the closed loop system is robust to switching between flight modes.
The only consideration in respect to trajectory planning is that the desired trajectory must agree with (\ref{Psi_0})-(\ref{surface_0}).
Despite developing (\ref{f}), (\ref{att_contrx}) under the assumption that $\boldsymbol{\delta}_{x}{=}0_{3\times1}$, the robustness of the controller will be tested during simulation in the presence of motor saturation and wind disturbances.
\section{Results\label{simulation}}
The effectiveness of the developed GNCS is verified through simulations.
First, a comparison with the GNCS in \cite{geomquadlee} verifies the claims from Section \ref{sec:thr_mag} regarding the thrust magnitude (\ref{f}); this is followed by an aggressive recovery/trajectory tracking maneuver in the presence of motor saturation and noise, testing the effectiveness and robustness of the developed GNCS.
To analyze GNCSs consisting of different structure and strategies, a criterion is needed for a commensurate comparison of their performance.
To this end the Root-Mean-Square (RMS) of the thrusts is used as a criterion, given by,
\begin{IEEEeqnarray}{C}
f_{RMS}(t)=\sqrt{\frac{1}{t}\int_{0}^{t}\sum_{i=1}^{4} [f_{i}(\tau)]^{2} d\tau}
\label{rms}
\end{IEEEeqnarray}
Specifically we use (\ref{rms}) to calculate the RMS control effort difference, ${\Delta}f_{RMS}(t)$, given by,
\begin{IEEEeqnarray}{C}
{\Delta}f_{RMS}(t)=f^{proposed}_{RMS}(t)-f^{benchmark}_{RMS}(t)
\label{Deltarms}
\end{IEEEeqnarray}
and tune the developed GNCS such that (\ref{Deltarms}) remains negative at all times during the simulation, so that the benchmark controller has equal or larger control authority.
Then, when comparing controller performance, if the developed GNCS produces the smallest error with \textit{less control effort}, it is deemed superior.
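In discrete time, (\ref{rms})-(\ref{Deltarms}) amount to a cumulative sum over the logged thrust histories; a simple Riemann-sum sketch (the sampling step and array layout are illustrative):

```python
import numpy as np

def f_rms(F, dt):
    """RMS control effort: F is an (N, 4) thrust history, dt the sample period."""
    t = dt * np.arange(1, F.shape[0] + 1)
    cum = np.cumsum(np.sum(F**2, axis=1)) * dt  # Riemann sum of sum_i f_i^2
    return np.sqrt(cum / t)

def delta_f_rms(F_proposed, F_benchmark, dt):
    """RMS control effort difference between two thrust histories."""
    return f_rms(F_proposed, dt) - f_rms(F_benchmark, dt)
```

A constant thrust of $c$ per rotor yields $f_{RMS}=2c$ at every instant, which serves as a quick sanity check of the implementation.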
The system parameters were taken from a real quadrotor described in \cite{Hinf}:
\begin{IEEEeqnarray*}{C}
\mathbf{J}=[0.0181,0,0;0,0.0196,0;0,0,0.0273]\;kg\,m^{2}\IEEEnonumber\\
m=1.225\;kg,
d=0.23\;m,
b_{T}=0.0121\;m\IEEEnonumber
\end{IEEEeqnarray*}
and the actuator constraints, see \cite{Hinf}, are given by:
\begin{IEEEeqnarray*}{C}
f_{i,min}=0\;\text{N},\quad f_{i,max}=6.9939\;\text{N}
\end{IEEEeqnarray*}
The wind profile shown in Fig. \ref{windpr} is used in conjunction with the drag equation, \cite{Batchelor}, with the drag coefficient and reference area matrices of the quadrotor to be given by,
\begin{IEEEeqnarray*}{C}
C_{D}{=}\text{diag}(0.2{,}0.22{,}0.5),
A_{D}{=}\text{diag}(0.0907{,}0.0907{,}0.4004)\text{m}^2
\end{IEEEeqnarray*}
The torque due to wind is calculated by assuming that the disturbance force is applied at $0.04\mathbf{e}_{3}$.
Finally all simulations were conducted using fixed-step integration with $dt{=}1{\cdot}10^{-3}$s.
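A minimal sketch of generating the wind disturbance from these parameters, assuming the standard quadratic drag equation with an air density of $1.225\,$kg/m$^3$; the exact per-axis form and the density used in the simulations are assumptions here.

```python
import numpy as np

RHO = 1.225                                   # assumed air density [kg/m^3]
C_D = np.diag([0.2, 0.22, 0.5])               # drag coefficient matrix
A_D = np.diag([0.0907, 0.0907, 0.4004])       # reference area matrix [m^2]
R_CP = np.array([0.0, 0.0, 0.04])             # assumed point of application [m]

def wind_disturbance(v_rel):
    """Per-axis quadratic drag force and the resulting torque about the
    center of mass for a relative wind velocity v_rel (body frame)."""
    f_d = 0.5 * RHO * C_D @ A_D @ (np.abs(v_rel) * v_rel)  # sign-preserving |v|v
    tau_d = np.cross(R_CP, f_d)                            # force applied at 0.04 e3
    return f_d, tau_d
```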
\subsection{Geometric-NCS comparison}
For this comparison, the GNCS in \cite{geomquadlee} was selected as a benchmark because it was the first quadrotor GNCS developed directly on SE(3), it demonstrates remarkable results in aggressive maneuvers, and it allows the claims of Sect. \ref{sec:thr_mag} to be validated.
The controllers use the first set of error vectors, given by $\{(\ref{error_function}),(\ref{att_error})\}$, and no saturation/disturbances are included, so that controller competence can be assessed in isolation.
The gains were tuned using (\ref{Deltarms}) as follows.
First the attitude gains were tuned for a desired pitch command of $90^{\circ}$, followed by tuning the position gains for a desired $\mathbf{x}_{d}{=}[1;1;1][cm]$.
Tuning the attitude controller first ensures that, during the PCM, the attitude controller embedded in the position control loop produces homogeneous control effort.
The gains must also comply with (\ref{eq:hta}), (\ref{w3}).
The developed controller gains are:
\begin{IEEEeqnarray*}{C}
k_{\omega}{=}150,
k_{R}{=}5625,
\eta{=}0.8\\
k_{v}{=}59.82,
k_{x}{=}894.62,
a{=}0.5071
\end{IEEEeqnarray*}
The benchmark controller \cite{geomquadlee} parameters used are:
\begin{IEEEeqnarray*}{C}
k_{\omega}=[2.1720,0,0;0,2.3520,0;0,0,3.2760]\\
k_{R}{=}[65.16,0,0;0,70.56,0;0,0,98.28],k_{v}{=}38.71,k_{x}{=}375.61
\end{IEEEeqnarray*}
The initial conditions (IC's) are: $\mathbf{x}(0)=\mathbf{v}(0)={}^{b}\boldsymbol{\omega}(0)=0_{3\times 1},\mathbf{R}(0)=\mathbf{I}$.
The results are presented in Fig. \ref{Tuning}.
Examining Fig. \ref{psi90tun}, the effectiveness of (\ref{att_contr}) (solid black line: 1) with respect to the benchmark controller (dashed blue line: 2) in performing attitude maneuvers is demonstrated as $\Psi$ converges to zero faster and with less control effort, see Fig. \ref{frmstun} inner plot.
The quadrotor response for a position command to $\mathbf{x}_{d}{=}[1;1;1][cm]$ is shown in Figs. \ref{psitun} and \ref{trajtun}.
Examining Fig. \ref{trajtun}, it is clear that the developed position controller ((\ref{f}), (\ref{att_contrx})) performs on par with the benchmark controller.
However, the attitude error during the position maneuver is handled better by the developed position controller, as $\Psi$ converges to zero faster and with a smaller overall error ($\Psi{<}0.078$ vs. $\Psi{<}0.1198$), an important advantage.
In Fig. \ref{frmstun} the value of (\ref{Deltarms}) is displayed for both the attitude (inner plot) and position (outer plot) maneuvers.
Notice that the benchmark controller underperforms despite using more control effort, see Fig. \ref{frmstun}.
\begin{figure}[!h]
\centering
\subfloat[\label{frmstun}]{\includegraphics[width=0.5\columnwidth]{./RMS_quad_pos-eps-converted-to.pdf}}
~
\subfloat[\label{psi90tun}]{\includegraphics[width=0.5\columnwidth]{./quad_R_att-eps-converted-to.pdf}}
\subfloat[\label{psitun}]{\includegraphics[width=0.5\columnwidth]{./quad_pos_psi-eps-converted-to.pdf}}
~
\subfloat[\label{trajtun}]{\includegraphics[width=0.5\columnwidth]{./quad_pos_N-eps-converted-to.pdf}}
\caption{Quadrotor response after the tuning procedure.
(\ref{frmstun}) RMS control effort by (\ref{Deltarms}).
(\ref{psi90tun}) Response for a step command of $90^{\circ}$.
(\ref{psitun},\ref{trajtun}) Response for a position command to $\mathbf{x}_{d}{=}[1;1;1][cm]$.
(\ref{psitun}) Attitude error given by (\ref{error_function}).
(\ref{trajtun})
Position error, $\lVert\mathbf{e}_{x}\rVert$.
Solid lines (1): Developed, Dashed lines (2): Benchmark.
}
\label{Tuning}
\end{figure}
The large values of (\ref{Deltarms}) in Fig. \ref{frmstun} are due to the high gains used to achieve precise trajectory tracking: because the controllers are fed step commands, very large control efforts are observed.
In view of the above, the ability of the developed PCM to achieve the position command on par with \cite{geomquadlee} but with less control effort, while simultaneously negotiating the attitude error more efficiently, again with less control effort, makes it more effective and validates the claims of Sect. \ref{sec:thr_mag}.
\subsection{Aggressive recovery/trajectory tracking maneuver}
A complex flight maneuver is conducted, in the presence of motor saturation and noise due to wind, involving transitions between flight modes.
In this simulation, the developed controllers utilize the second set of error vectors given by, $\{(\ref{error_function_A}),(\ref{att_error_A})\}$.
The maneuver was selected to showcase both the trajectory tracking for position and attitude, and the recovery capabilities of the developed GNCS.
The IC's are: $\mathbf{x}(0)=[0;0;5],\mathbf{v}(0)={}^{b}\boldsymbol{\omega}(0)=0_{3\times 1},\mathbf{R}(0)=\mathbf{I}$.
Since this simulation contains portions characterized by large error vectors, softer gains are needed to ensure smooth behavior and minimize motor saturation.
The gains used are:
\begin{IEEEeqnarray*}{C}
k_{\omega}{=}40,
k_{R}{=}400,
\eta{=}1.002\\
k_{v}{=}7.06,
k_{x}{=}12.46,
a{=}0.5081
\end{IEEEeqnarray*}
The flight scenario, to be achieved through the concatenation of the two flight modes, is described next:
\begin{enumerate}[(a)]
\item ($t < 4$): Position Mode: Translation from the IC's to $\mathbf{x}_{d}=[0;1;10],\mathbf{v}_{d}=[0;0;7],\mathbf{e}_{1d}=[1;0;0]$ using smooth polynomials of eighth degree (SP$8^{th}$).
\item ($4\leq t < 4.4$): Attitude Mode: The quadrotor performs a $180^{\circ}$ pitch maneuver, i.e. goes inverted.
$\mathbf{R}_{d}(t)$ was designed by defining the pitch angle using SP$8^{th}$.
\item ($4.4\leq t < 4.9$): Attitude Mode: The quadrotor recovers from its inverted state to $\mathbf{R}_{d}(t)=\mathbf{I}$, i.e. point to point command.
\item ($4.9\leq t \leq 10$): Position Mode: Translation to $\mathbf{x}_{d}=[-1;1.5;10],\mathbf{e}_{1d}=[1;0;0]$ using SP$8^{th}$ with IC's equal to the values of the states of the quadrotor at the end of the attitude mode.
\end{enumerate}
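An SP$8^{th}$ reference segment can be sketched by solving for the nine coefficients of a degree-8 polynomial from endpoint conditions. The particular boundary conditions used below (zero first-to-third derivatives at both ends, plus a zero fourth derivative at the start to close the linear system) are an assumption, since the text does not spell them out.

```python
import numpy as np
from math import factorial

def sp8_coeffs(T):
    """Coefficients a_0..a_8 of a degree-8 polynomial s(t) on [0, T] with
    s(0)=0, s(T)=1, zero 1st-3rd derivatives at both endpoints, and a
    zero 4th derivative at t=0 (assumed closing condition)."""
    n = 9
    A = np.zeros((n, n)); b = np.zeros(n)
    def row(t, k):
        # k-th derivative of sum_j a_j t^j, evaluated at t
        r = np.zeros(n)
        for j in range(k, n):
            r[j] = factorial(j) // factorial(j - k) * t**(j - k)
        return r
    conds = [(0.0, 0, 0.0), (0.0, 1, 0.0), (0.0, 2, 0.0), (0.0, 3, 0.0),
             (0.0, 4, 0.0),
             (T, 0, 1.0), (T, 1, 0.0), (T, 2, 0.0), (T, 3, 0.0)]
    for i, (t, k, val) in enumerate(conds):
        A[i] = row(t, k); b[i] = val
    return np.linalg.solve(A, b)

def sp8_eval(coeffs, t):
    return sum(a * t**j for j, a in enumerate(coeffs))
```

Scaling such a normalized segment by the desired start and end states yields the smooth position and pitch references used in the scenario above.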
Simulation results of the maneuver are illustrated in Fig. \ref{Aggressive} where the duration that the attitude mode is utilized is illustrated by the magenta colored intervals.
The percentage attitude error using (\ref{error_function_A}) is shown in Fig. \ref{PsiComp}.
It is observed that up to $t=4.4$, i.e. the beginning of the quadrotor recovery from the inverted position, the quadrotor attitude error is maintained below 5\% (below $9^{\circ}$ wrt. an axis-angle rotation).
During the recovery interval ($4.4<t<4.9$), despite the large attitude error of $77.64\%$ introduced by the attitude step command, the quadrotor successfully converges to the desired orientation undeterred by the disturbances due to wind and motor saturations, see Fig. \ref{thr_U}, \ref{windpr}.
The position response is shown in Fig. \ref{PosComp}.
During the position mode, i.e. $t<4$ and $t>4.9$, the states track the reference trajectories effectively, see Fig. \ref{PosComp}.
During the position mode intervals, $\lVert\mathbf{e}_{x}\rVert$ (not shown here due to space) exceeds 0.06\,m, reaching 0.5\,m, only during $3<t<4$, where the wind increases rapidly, see Fig. \ref{windpr} for the wind profile.
The effect of the wind at the same interval is evident also by the noisy motor thrusts, see Fig. \ref{thr_U} at $3<t<4$.
A simulation conducted in the absence of wind, not shown due to space, showed that the noisy behavior in Fig. \ref{thr_U} is eradicated and $\lVert\mathbf{e}_{x}\rVert<0.06$ throughout the position mode interval.
Concluding, the effectiveness of the proposed GNCS in performing precise trajectory tracking maneuvers (attitude/position) and recovery maneuvers in the presence of motor saturations and disturbances was shown.
The safe switching between flight modes, stated at the end of Section \ref{conmodpos}, was also demonstrated.
\begin{figure}[!h]
\centering
\subfloat[\label{PsiComp}]{\includegraphics[width=0.5\columnwidth]{./aggr_traj_01-eps-converted-to.pdf}}
~
\subfloat[\label{PosComp}]{\includegraphics[width=0.5\columnwidth]{./aggr_traj_02-eps-converted-to.pdf}}
\subfloat[\label{thr_U}]{\includegraphics[width=0.5\columnwidth]{./aggr_traj_03-eps-converted-to.pdf}}
~
\subfloat[\label{windpr}]{\includegraphics[width=0.5\columnwidth]{./aggr_traj_04-eps-converted-to.pdf}}
\caption{
Complex trajectory tracking.
(\ref{PsiComp}) Attitude error given by (\ref{error_function_A}).
(\ref{PosComp}) Position state $\mathbf{x}(t)$ (solid black line) and reference $\mathbf{x}_{d}(t)$ (blue dashed line).
(\ref{thr_U}) Thrusts (Developed).
(\ref{windpr}) Wind profile.}
\label{Aggressive}
\end{figure}
\section{Conclusion and Future Work}
In this paper, new controllers for a quadrotor unmanned micro aerial vehicle were developed, based on nonlinear surfaces and employing tracking errors that evolve directly on the nonlinear configuration manifold, inherently including in the control design the nonlinear characteristics of the SE(3) configuration space.
Through rigorous stability proofs, the developed controllers were shown to have desirable closed loop properties that are almost global.
A region of attraction, independent of the position error, was produced and analyzed for the first time, wrt. the geometric literature.
The effectiveness of the developed GNCS was validated by numerical simulations of aggressive maneuvers, in the presence of motor saturations and disturbances due to wind.
Our future work will include experimental trials and an investigation of the developed GNCS robustness properties.
\section{Introduction}
\label{intro}
I will first quickly summarize a few relevant concepts
discussed in much more detail in previous reports
\cite{888,learningtothink2015}. The reader might profit from being familiar
with some of our earlier work on
algorithmic transfer learning~\cite{Schmidhuber:04oops,powerplay2011and13,learningtothink2015} and
recurrent neural networks (RNNs)
for control and planning~\cite{Schmidhuber:90diffgenau,Schmidhuber:90sandiego,Schmidhuber:91nips,SchmidhuberHuber:91} and
hierarchical chunking~\cite{chunker91and92}.
To become a general problem solver that is able to run arbitrary
problem-solving programs, the controller of a robot or an artificial
agent must be a general-purpose
computer~\cite{Goedel:31,Church:36,Turing:36,Post:36}. Artificial RNNs fit this bill. A typical RNN
consists of many simple, connected processors called neurons, each
producing a sequence of real-valued activations. Input neurons get
activated through sensors perceiving the environment, other neurons
get activated through weighted connections or wires from previously
active neurons, and some neurons may affect the environment by
triggering actions. {\em Learning} or {\em credit assignment} is
about finding real-valued weights that make the RNN exhibit {\em
desired} behavior, such as driving a car.
The weight matrix of an RNN is
its program.
Many RNN-like models can be used to build general computers, e.g.,
RNNs controlling pushdown automata~\cite{Das:92,mozer1993connectionist}
or other types of differentiable memory~\cite{graves2016}
including differentiable fast
weights~\cite{Schmidhuber:92ncfastweights,Schmidhuber:93ratioicann}, as well as closely related
RNN-based meta-learners~\cite{Schmidhuber:93selfreficann,Hochreiter:01meta,scholarpedia2010}.
Using sloppy but convenient terminology, we refer to all of them as
RNNs~\cite{learningtothink2015}. In practical applications, most RNNs are {\em Long Short-Term Memory} (LSTM)
networks~\cite{lstm97and95,Gers:2000nc,Graves:09tpami,888},
now used billions of times per day for automatic translation~\cite{wu2016google,facebook2017}, speech recognition~\cite{googlevoice2015}, and many other tasks~\cite{888}.
If there are large 2-dimensional inputs such as
video images, the LSTM may have a front-end~\cite{vinyals2014caption} in form of a convolutional neural net
(CNN)~\cite{Fukushima:1979neocognitron,LeCun:89,weng1992,Behnke:LNCS,ranzato-cvpr-07,scherer:2010,ciresan2012cvpr,888}
implemented on fast GPUs~\cite{ciresan2012cvpr,888}.
Such a
CNN-LSTM combination is still an RNN.
Without a teacher, reward-maximizing programs of an RNN
must be learned through repeated trial and error,
e.g., through artificial evolution~\cite{miller:icga89,yao:review93,Sims:1994:EVC,moriarty:phd,gomez:phd,Gomez:03,wierstraCEC08,glasmachers:2010b,Sun2009a,sun:gecco13}
\cite[Sec.~6.6]{888}, or reinforcement learning~\cite{Kaelbling:96,Sutton:98,wiering2012,888}
through policy gradients~\cite{Williams:86,Sutton:99,baxter2001,aberdeenthesis,ghavamzadehICML03,stoneICRA04,wierstraCEC08,rueckstiess2008b,sehnke2009parameter,gruettner2010multi,wierstra2010,peters2010}
\cite[Sec.~6]{888}.
The search space can often be reduced dramatically by evolving
{\em compact encodings} of RNNs, e.g.,~\cite{Schmidhuber:97nn+,stanley09,koutnik:gecco13,steenkiste2016wavelet}\cite[Sec.~6.7]{888}.
Nevertheless, this is often much harder than imitating teachers
through gradient-based supervised learning
\cite{Werbos:88gasmarket,WilliamsZipser:92,RobinsonFallside:87tr}\cite{888}
for LSTM~\cite{lstm97and95,Gers:2000nc,Graves:09tpami}.
However, reinforcement learning RNN controllers can profit from gradient-based RNNs used as predictive
world models~\cite{Schmidhuber:90diffgenau,Schmidhuber:90sandiego,Schmidhuber:91nips,learningtothink2015}.
See previous papers for many additional references on this~\cite{888,learningtothink2015}.
In what follows, I will elaborate on such previous work.
\section{One Big RNN For Everything: Basic Ideas and Related Work}
\label{ONE}
I will focus on
the incremental training of an increasingly general problem solver
interacting with an environment, continually~\cite{Ring:94}
learning to solve new tasks (possibly without supervisor) and without forgetting any previous, still valuable skills.
The problem solver is a single RNN called ONE.
Unlike previous RNNs,
ONE or copies thereof or parts thereof are trained
in various ways, in particular, by {\bf (1)} black box optimization /
reinforcement learning / artificial evolution without a teacher,
or {\bf (2)} gradient descent-based supervised or
unsupervised learning (Sec. \ref{intro}).
{\bf (1)} is usually much harder than {\bf (2)}.
Here I combine {\bf (1)} and {\bf (2)} in a way that leaves much if not most of the work to {\bf (2)},
building on several ideas from previous work:
\begin{enumerate}[leftmargin=*]
\item {\bf Extra goal-defining input patterns to encode user-given tasks.}
A reinforcement learning neural controller of 1990 learned to control a fovea through
sequences of saccades to find particular objects in visual scenes,
thus learning sequential attention~\cite{SchmidhuberHuber:91}. User-defined goals were
provided to the system by special ``goal input vectors" that remained constant
\cite[Sec.~3.2]{SchmidhuberHuber:91}
while
the system shaped its incoming stream of standard
visual inputs through its fovea-shifting actions.
Also in 1990, gradient-based recurrent subgoal generators
~\cite{Schmidhuber:90compositional,Schmidhuber:91icannsubgoals,SchmidhuberWahnsiedler:92sab}
used special start and goal-defining input vectors, also for an evaluator network predicting the costs and rewards
associated with moving from starts to goals.
The later {\sc PowerPlay} system (2011)~\cite{powerplay2011and13} also used such task-defining
special inputs, actually selecting on its own new goals and tasks, to become
a more and more general problem solver in an active but unsupervised fashion.
In the present paper, variants of
ONE will also adopt this concept of extra goal-defining inputs to distinguish between numerous different tasks.
\item {\bf Incremental black box optimization of reward-maximizing RNN controllers}. If ONE already knows
how to solve several tasks, then a copy of ONE may profit from this prior knowledge,
learning a new task through additional weight changes more quickly
than learning the task from scratch, e.g.,~\cite{gomez:ab97,stoneML05,ghavamzadehICML03},
ideally through optimal {\em algorithmic transfer learning}, like in the
at least
{\em asymptotically Optimal Ordered Problem Solver}~\cite{Schmidhuber:04oops},
where new solution candidates in form of programs may exploit older ones in arbitrary computable fashion.
\item {\bf Unsupervised prediction and compression of all data of all trials.}
An RNN-based ``world model" M
of 1990~\cite{Schmidhuber:90diffgenau,Schmidhuber:90sandiego} learned to predict (and thus compress~\cite{chunker91and92})
future inputs including
vector-valued reward signals~\cite{Schmidhuber:90sandiego}
from the environment of an agent controlled by another RNN called C through
environment-changing actions. This was also done in more recent,
more sophisticated CM systems~\cite{learningtothink2015}.
Here we collapse both M and C into ONE,
very much like in Sec. 5.3 of the previous paper~\cite{learningtothink2015},
where C and M were bi-directionally connected such that they effectively became one big net
that ``learns to think"~\cite{learningtothink2015}.
In the present paper, however, we do not make any explicit difference any more between C and M.
\item {\bf Compressing all behaviors so far into ONE.}
The chunker-automatizer system of the neural history compressor of
1991~\cite{chunker91and92,schmidhuber1993}
used gradient descent to compress the learned behavior of a so-called {\em ``conscious"} chunker RNN
into a separate {\em ``subconscious"} automatizer RNN,
which not only learned to imitate the chunker network,
but also was continually retrained on its own previous tasks,
namely, (1) to predict teacher-given targets through supervised learning,
and (2) to compress through unsupervised learning all sequences of observations by predicting them (what is predictable does not have to be stored extra).
It was shown that this type of unsupervised pretraining for deep learning networks can greatly facilitate
the learning of additional user-defined tasks~\cite{chunker91and92,schmidhuber1993}.
Here we apply the basic idea to the incremental skill training of ONE.
Both the predictive skills acquired by gradient descent and the task-specific control skills acquired by black box optimization
can be collapsed into one single network (namely, ONE itself) through pure gradient descent,
by retraining ONE on all input-output traces of all previously learned
behaviors that are still deemed useful~\cite{powerplay2011and13}.
Towards this end, we simply retrain ONE to reproduce control behaviors of successful past versions of ONE,
but without really executing the behaviors in the environment (usually the expensive part).
Simultaneously, all input-output traces ever observed (including those of failed trials) can be used
to train ONE to become a better predictor of future inputs,
given previous inputs and actions.
Of course, this requires to store input-output traces of all trials~\cite{Schmidhuber:06cs,Schmidhuber:09sice,learningtothink2015}.
\end{enumerate}
That is,
once a new skill has been learned by a copy of ONE
(or even by another machine learning device),
e.g., through slow trial and error-based evolution or reinforcement learning,
ONE is simply retrained in {\sc PowerPlay} style~\cite{powerplay2011and13}
through well-known, feasible,
{\em gradient-based} methods on stored input/output traces~\cite[Sec.~3.1.2]{powerplay2011and13}
of all previously learned control and prediction skills still considered worth memorizing,
similar to the chunker-automatizer system of the neural history compressor of
1991~\cite{chunker91and92}.
In particular, standard
gradient descent through
backpropagation in discrete graphs of
nodes with differentiable activation
functions~\cite{Linnainmaa:1970,Werbos:81sensitivity}\cite[Sec.~5.5]{888}
can be used to squeeze many expensively evolved skills into the limited computational
resources of ONE.
Compare recent work on incremental skill learning~\cite{progressive2018}.
Well-known regularizers \cite[Sec.~5.6.3]{888} can be used to further compress ONE,
possibly shrinking it by pruning neurons and connections,
as proposed already in 1965 for deep learning multilayer perceptrons~\cite{ivakhnenko1965,ivakhnenko1971,learningtothink2015}.
This forces ONE even more to relate partially analogous skills
(with shared algorithmic information~\cite{Solomonoff:64,Kolmogorov:65,Chaitin:66,Levin:73a,Solomonoff:78,LiVitanyi:97,Schmidhuber:04oops}) to each other,
creating common sub-programs in form of shared subnetworks of ONE.
This may greatly speed up subsequent learning of
novel but algorithmically related skills,
through reuse of such subroutines created as by-products of data compression,
where the data are actually programs encoded in ONE's previous weight matrices.
So ONE continually collapses more and more skills and predictive knowledge into itself,
compactly encoding shared algorithmic information in re-usable form,
to learn new problem-solving programs more quickly.
\section{More Formally: ONE and its Self-Acquired Data}
\label{formally}
The notation below is similar but not identical to the one in previous work
on an RNN-based CM system called the RNNAI~\cite{learningtothink2015}.
Let $m,n,o,p,q,s$ denote positive integer constants, and
$i,k,h,t,\tau$ positive integer variables assuming ranges implicit
in the given contexts. The $i$-th component of any real-valued vector,
$v$, is denoted by $v_i$.
For convenience, let us assume that ONE's life span can be partitioned
into trials $T_1, T_2, \ldots$ In each trial, ONE attempts to solve a particular task,
trying to manipulate some unknown environment through a sequence of
actions to achieve some goal.
Let us consider one particular trial $T$ and its discrete sequence of
time steps, $t = 1,2,\ldots, t_{T}$.
At the beginning of a given time step, $t$, ONE receives a ``normal'' sensory
input vector, $in(t) \in \mathbb{R}^m$, and a reward input vector, $r(t)
\in \mathbb{R}^n$. For example, parts of $in(t)$ may represent the pixel
intensities of an incoming video frame, while components of $r(t)$ may
reflect external positive rewards, or negative values produced by pain
sensors whenever they measure excessive temperature or pressure or low battery load (hunger).
Inputs $in(t)$ may also encode user-given goals or tasks,
e.g., through commands spoken by a user.
Often, however, it is convenient to use an extra input vector
$goal(t) \in \mathbb{R}^p$ to uniquely encode user-given goals,
as we have done since 1990, e.g.,~\cite{SchmidhuberHuber:91,powerplay2011and13}.
Let
$sense(t) \in \mathbb{R}^{m+p+n}$ denote the concatenation of the
vectors $in(t)$, $goal(t)$ and $r(t)$. The total reward at time $t$ is $R(t)=
\sum_{i=1}^{n}r_i(t)$. The total cumulative reward up to time $t$ is
$CR(t)= \sum_{\tau=1}^{t}R(\tau)$. During time step $t$, ONE
computes during several micro steps (e.g.,~\cite[Sec.~3.1]{learningtothink2015}) an output action vector, $out(t) \in \mathbb{R}^o$, which may
influence the environment and thus future $sense(\tau)$ for $\tau >t$.
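The bookkeeping defined above can be sketched as a small container; the dimensions and the 1-indexed time convention follow the text, while the class and method names are of course hypothetical.

```python
import numpy as np

class Trial:
    """Per-trial bookkeeping following this section's notation:
    sense(t) concatenates in(t), goal(t), r(t); R(t) and CR(t) are the
    total and cumulative rewards. m, p, n, o are the dimensions of
    in(t), goal(t), r(t), out(t)."""
    def __init__(self, m, p, n, o):
        self.m, self.p, self.n, self.o = m, p, n, o
        self.senses, self.outs = [], []
    def step(self, inp, goal, r, out):
        sense = np.concatenate([inp, goal, r])   # sense(t) in R^{m+p+n}
        self.senses.append(sense)
        self.outs.append(out)
    def R(self, t):                              # total reward at step t (1-indexed)
        return float(np.sum(self.senses[t - 1][self.m + self.p:]))
    def CR(self, t):                             # cumulative reward up to step t
        return sum(self.R(tau) for tau in range(1, t + 1))
```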
\subsection{Training a Copy of ONE on New Control Tasks Without a Teacher}
\label{control}
One of ONE's goals is to maximize $CR(t_{T})$.
Towards this end, copies of successive instances of
ONE are trained in a series of trials through
a black box optimization method
in Step 3 of Algorithm~\ref{ONEalg},
e.g., through
incremental neuroevolution~\cite{gomez:ab97},
hierarchical neuroevolution~\cite{stoneML05,vanhoorn:09cig},
hierarchical policy gradient
algorithms~\cite{ghavamzadehICML03},
or asymptotically optimal ways of {\em algorithmic transfer learning}~\cite{Schmidhuber:04oops}.
Given a new task and a ONE trained on several previous tasks, such
hierarchical/incremental methods may create a copy of the current ONE, freeze its current weights,
then enlarge the copy of ONE by adding a few new units and connections~\cite{ivakhnenko1971} which are
trained until the new task is satisfactorily solved. This process can reduce the size of the search
space for the new task, while giving the new weights the opportunity to
learn to somehow use certain frozen parts of ONE's copy as subroutines.
(Of course, it is also possible to simply retrain {\em all} weights of the entire copy to solve the new task.)
Compare a recent study of incremental skill learning with feedforward networks~\cite{progressive2018}.
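The freeze-and-grow scheme described above can be sketched for a plain weight matrix; the initialization scale and the exact choice of which new blocks are trainable are illustrative assumptions.

```python
import numpy as np

def enlarge_copy(W_frozen, n_new, rng=np.random.default_rng(0)):
    """Enlarged copy of a recurrent weight matrix: the old n x n block
    is kept frozen, and only the weights wiring n_new added units into
    and out of the network are trainable, so the new task can reuse the
    frozen parts as subroutines."""
    n = W_frozen.shape[0]
    W = np.zeros((n + n_new, n + n_new))
    W[:n, :n] = W_frozen                       # frozen prior knowledge
    trainable = np.zeros_like(W, dtype=bool)
    trainable[n:, :] = True                    # incoming weights of new units
    trainable[:n, n:] = True                   # new connections into old units
    W[trainable] = 0.01 * rng.standard_normal(trainable.sum())
    return W, trainable
```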
In non-deterministic or noisy environments,
by definition
the task is considered solved once the latest version of
the RNN has performed satisfactorily on a statistically significant number of trials
according to a user-given criterion, which also implies that
the input-output traces of these trials (Sec.~\ref{collapse}) are sufficient
to retrain ONE in Step 4 of Algorithm~\ref{ONEalg}
without further interaction with the environment.
\subsection{Unsupervised ONE Learning to Predict/Compress Observations}
\label{compress}
ONE may further profit from
unsupervised learning that compresses the observed data~\cite{chunker91and92}
into a compact representation that may make subsequent learning of externally posed tasks easier~\cite{chunker91and92,learningtothink2015}.
Hence, another goal of ONE can be to compress ONE's entire growing
interaction history of all failed and successful trials~\cite{Schmidhuber:06cs,Schmidhuber:10ieeetamd}, e.g., through neural predictive coding~\cite{chunker91and92,SchmidhuberHeil:96}.
For this purpose, ONE has $m+n$ special output units to produce for $t<t_{T}$
a prediction $pred(t)
\in \mathbb{R}^{m+n}$ of $sense(t+1)$
\cite{Schmidhuber:90diffgenau,Schmidhuber:90sandiego,Schmidhuber:90sab,Schmidhuber:90cmss,Schmidhuber:91nips}
from ONE's previous observations and actions, which are in principle accessible to ONE through (recurrent) connections.
In one of the simplest cases, this contributes $\| pred(t) - sense(t +1) \|^2$
to the error function to be
minimized by gradient descent in ONE's weights,
in Step 4 of Algorithm~\ref{ONEalg}.
This will train $pred(t)$ to become more like the expected value of $sense(t+1)$, given the past.
See previous papers~\cite{SchmidhuberHeil:96,Schmidhuber:06cs,learningtothink2015} for
ways of translating such neural predictions into compression performance.
(Similar prediction tasks could also be specified through particular
prediction task-specific goal inputs $goal(t)$, like with other tasks.)
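As a minimal stand-in for this prediction objective, one can fit a simple map by gradient descent on the summed squared prediction errors of a stored trace; ONE's actual predictor is a recurrent network, so the linear model below only illustrates the loss $\| pred(t) - sense(t+1) \|^2$ and the update, not the architecture.

```python
import numpy as np

def train_predictor(senses, lr=0.01, epochs=200):
    """Fit a linear map W so that pred(t) = W @ sense(t) approximates
    sense(t+1), minimizing sum_t ||pred(t) - sense(t+1)||^2 by plain
    gradient descent. senses: array of shape (t_T, m+p+n)."""
    d = senses.shape[1]
    W = np.zeros((d, d))
    X, Y = senses[:-1], senses[1:]            # (sense(t), sense(t+1)) pairs
    for _ in range(epochs):
        E = X @ W.T - Y                       # pred(t) - sense(t+1)
        W -= lr * (E.T @ X) / len(X)          # squared-error gradient step
    return W
```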
\subsection{Training ONE to Predict Cumulative Rewards}
\label{cumulative}
We may give ONE yet another set of
$n+1$ special output units to produce for $t<t_{T}$ another prediction $PR(t)
\in \mathbb{R}^{n+1}$ of $r(t+1)+r(t+2)+\ldots + r(t_{T})$ and of the
total remaining reward $CR(t_{T})-CR(t)$~\cite{Schmidhuber:90diffgenau}.
Unlike in the present paper, predictions of
expected cumulative rewards are actually {\em essential} in {\em traditional} reinforcement learning~\cite{Kaelbling:96,Sutton:98,wiering2012,888} where they are usually
limited to the case of {\em scalar} rewards
(while ONE's rewards may be {\em vector-valued} like in old work of 1990~\cite{Schmidhuber:90diffgenau,Schmidhuber:90sandiego}).
Of course, in principle, such cumulative knowledge is already
implicitly present in a ONE that has learned to
predict only next step rewards $r(t+1)$.
However,
explicit predictions of expected cumulative rewards may represent redundant but useful derived secondary features that further
facilitate black box optimization in later incarnations of Step 3 of Algorithm~\ref{ONEalg},
which may discover useful subprograms of the RNN making
good use of those features.
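The targets for these special outputs can be computed directly from a stored reward trace; a sketch, assuming the rewards of a trial are logged as a $(t_T, n)$ array.

```python
import numpy as np

def cumulative_reward_targets(rewards):
    """For each t < t_T, the component-wise remaining reward
    r(t+1)+...+r(t_T), with the total remaining reward CR(t_T)-CR(t)
    appended as the (n+1)-th component. rewards: shape (t_T, n)."""
    remaining = np.cumsum(rewards[::-1], axis=0)[::-1]    # sum from t to t_T
    remaining = np.vstack([remaining[1:],                 # shift: sum from t+1
                           np.zeros_like(rewards[:1])])
    total = remaining.sum(axis=1, keepdims=True)          # scalar remainder
    return np.hstack([remaining, total])[:-1]             # defined for t < t_T
```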
\subsection{Adding Other Reasonable Objectives to ONE's Goals}
\label{other}
We can add additional objectives to ONE's goals. For example,
we may give ONE another set of
$q$ special output units and train them through unsupervised learning~\cite{Schmidhuber:92ncfactorial}
to produce for $t \leq t_{T}$ a vector $code(t)
\in \mathbb{R}^{q}$ that represents an ideal factorial code~\cite{Barlow:89review} of the observed history so far,
or that encodes the data in related ways generally considered useful, e.g.,~\cite{herault1984reseau,Jutten:91,Schuster:92,Schmidhuber:99zif,greff2017neural}.
\subsection{No Fundamental Problem with Bad Predictions of Inputs and Rewards}
\label{bad}
Note that like in work of 2015~\cite{learningtothink2015} but
unlike in earlier work on {\em learning to plan} of 1990~\cite{Schmidhuber:90diffgenau,Schmidhuber:90sandiego},
it is not that important that ONE becomes a good predictor of inputs
(Sec.~\ref{compress}) including cumulative rewards (Sec.~\ref{cumulative}).
In fact, in noisy environments, perfect prediction is impossible.
The learning of solutions of control tasks in Step 3 of Algorithm~\ref{ONEalg}, however, does not essentially {\em depend} on good predictions,
although it might profit from internal subroutines of ONE
(learned in Step 4) that at least occasionally
yield good predictions of expected future observations
in form of $pred(t)$ or $PR(t)$.
Likewise, control learning may profit from but
does not existentially {\em depend} on near-optimal codes
according to Sec.~\ref{other}.
To summarize, ONE's subroutines for making codes and predictions
may or may not help to solve control problems during Step 3, where
it is ONE's task to figure out when to use or ignore those subroutines.
\subsection{Store Behavioral Traces}
\label{store}
Like in previous work since 2006~\cite{Schmidhuber:06cs,Schmidhuber:09sice,learningtothink2015},
to be able to retrain ONE on all observations ever made,
{\em
we should store ONE's entire, growing, lifelong sensory-motor interaction
history including all inputs and goals and actions and reward signals observed
during all successful and failed trials~\cite{Schmidhuber:06cs,Schmidhuber:09sice,learningtothink2015},
including what initially looks like noise but later may turn out to
be regular}. This is normally not done, but feasible today.
Remarkably, as pointed out in 2009, even human brains may have enough storage capacity
to store 100 years of sensory input at a reasonable resolution~\cite{Schmidhuber:09sice}.
On the other hand, in some applications, storage space is limited, and we might want to store (and re-train on)
only some (low-resolution variants) of the previous observations, selected according to certain user-given criteria.
This does not fundamentally change the basic setup: ONE may still profit from subroutines
that encode such limited previous experiences, as long as they convey algorithmic
information about solutions for new tasks to be learned.
\subsection{Incrementally Collapse All Previously Learned Skills into ONE}
\label{collapse}
Let $all(t)$ denote the concatenation of $sense(t)$ and $out(t)$ and $pred(t)$ (and possibly $PR(t)$ and $code(t)$ if any).
Let $trace(T)$ denote the sequence $(all(1), all(2), \ldots, all(t_{T}))$.
To combine the objectives of the previous, very general
papers~\cite{powerplay2011and13,learningtothink2015},
we can use
simple, well understood, rather efficient, {\em gradient-based learning} to compress~\cite{chunker91and92}
all relevant aspects of
$trace(T_1), trace(T_2), \ldots$ into ONE, and thus compress all
control~\cite{rusu2016progressive} and prediction~\cite{chunker91and92} skills learned so far by previous instances of ONE (or even by separate machine learning methods),
not only preventing ONE from forgetting previous knowledge,
but also making ONE discover new relations and analogies and
other types of mutual algorithmic information
among subroutines implementing previous skills.
Typically, given a ONE that already knows many skills,
traces of a new skill learned by a copy of ONE are added to the relevant traces,
and compressed into ONE, which is also re-trained on traces of the previous skills.
See Step 4 of Algorithm~\ref{ONEalg}.
Note that
{\sc PowerPlay} (2011)~\cite{powerplay2011and13,Srivastava2013first} also
uses environment-independent replay of behavioral traces (or functionally equivalent but more efficient methods) to avoid forgetting and to compress or speed up
previously found, sub-optimal solutions. At any given time, an acceptable
(possibly self-invented)
task is to solve a previously solved task with fewer computational
resources such as time, space, energy, as long as this does not worsen
performance on other tasks.
In the present paper, we focus on pure gradient descent for ONE (which may have an LSTM-like architecture) to implement the {\sc PowerPlay} principle.
\subsection{Learning Goal Input-Dependence Through Compression}
\label{goals}
After Step 3 of Algorithm~\ref{ONEalg}, a copy of
ONE may have been modified and may have learned to control an agent in a video game such that it reaches a given goal in a maze,
indicated through a particular goal input, e.g., one that looks a bit like the goal~\cite[Sec.~3.2]{SchmidhuberHuber:91}.
However, the weight changes of ONE's copy may be insufficient to perform this behavior
{\em exclusively} when the corresponding goal input is on.
And it may have forgotten previous skills for finding other goals,
given other goal inputs. Nevertheless, the gradient-based~\cite{rusu2016progressive} dreaming phase of Step 4
can correct and fine-tune all those behaviors,
making them goal input-dependent in a way that would be hard for
typical black box optimizers such as neuroevolution.
\begin{algorithm}[H]
\begin{algorithmic}
\STATE {\bf 1.}
Access global variables (also accessible to calling procedures such as Algorithm~\ref{simplealg}):
the present ONE and its weights, positive real-valued variables $c, \lambda$ defining search time budgets,
and a control task description $ A \in \cal T$
from a possibly infinite set of possible task descriptions $\cal T$
~\cite[Sec.~2]{powerplay2011and13}.
\STATE {\bf 2.}
Unless goal descriptions are transmitted through normal input units,
e.g., in form of speech,
select a unique, task-specific~\cite{SchmidhuberHuber:91}
goal input $G(A) \in \mathbb{R}^p$ for ONE;
otherwise $G(A)$ is a vector of $p$ zeros.
\STATE {\bf 3 (Try to Solve New Task).}
Make a copy of the present ONE and call it ONE1;
make a copy of the original ONE (before training) and call it ONE0
(notation in both cases like for ONE; Sec. \ref{formally}).
The total search time budget~\cite{Schmidhuber:04oops} of the present Step 3 is $c$ seconds.
In parallel (or interleaving) fashion,
apply a trial-based black box optimization method (Sec. \ref{control})
to (all or some of the weights of) ONE0 and ONE1,
spending equal time on both,
until $c$ seconds have been spent without success (then go to Step 4), or until either ONE0 or ONE1
have learned task $A$ sufficiently well, according to some given termination criterion,
where for both ONE0 and ONE1
for all time steps $t$ of all trials,
$G(A)=goal(t)=const.$
In case of first success through ONE0, rename it ONE1.
If both ONE1 and the environment are deterministic, such that trials are repeatable exactly,
mark only the final ONE1's $trace(T)$ (Sec.~\ref{collapse}) as {\em relevant}, where $T$ is the final successful trial.
Otherwise, to gain statistical significance, mark as {\em relevant} the
traces of sufficiently many (Sec.~\ref{control}) successful trials conducted by the final
ONE1 on task $A$.
{\em Comment: Previously learned programs and subroutines already
encoded in the weight matrix of ONE at the beginning of Step 3
may help to greatly speed up ONE1's optimization process - see Sec.~\ref{control}.
ONE0, however, is trying to learn $A$ from scratch, playing the role of a safety belt in case
ONE1 has become ``too biased" through previous learning (following the algorithmic transfer learning approach of the
asymptotically Optimal Ordered Problem Solver~\cite{Schmidhuber:04oops}).}
\STATE {\bf 4 (Dream and Consolidate).}
Since ONE1 may have forgotten previous skills in Step 3,
and may not even have understood the goal input-dependence of the newly learned behavior for $A$ (Sec.~\ref{goals}),
spend $\lambda c$ seconds on:
retrain ONE by {\em standard gradient-based learning} (Sec.~\ref{intro},\ref{collapse}) to reproduce the input history-dependent outputs $out(t)$ in all traces of all previously learned
{\em relevant}
behaviors that are still deemed useful (including those for the most recent task $A$ learned by ONE1, if any).
Simultaneously, use all traces (including those of failed trials)
to retrain ONE to make better predictions $pred(t)$ (Sec.~\ref{compress}) and $code(t)$ (Sec.~\ref{other}) if any,
given previous inputs and actions
(but do not provide any target values for action outputs $out(t)$ and corresponding
$PR(t)$ (Sec.~\ref{cumulative}) in replays of formerly {\em relevant} traces of trials of unsuccessful or superseded controllers implemented by earlier incarnations of ONE - see Sec.~\ref{discard}).
Use regularizers to compactify and simplify ONE as much as possible~\cite{888,learningtothink2015}.
{\em Comment: This process collapses all previous prediction skills and still relevant goal-dependent control skills into ONE,
without requiring new expensive interactions with the environment. We may call this a
consolidation phase or sleep phase~\cite{Schmidhuber:09abials}
or dream phase or regularity detection phase.}
\end{algorithmic}
\caption{How ONE can learn (without a teacher) one more control skill as well as additional prediction skills,
using pure gradient-based learning to avoid forgetting previously learned skills and to learn goal input-dependent behavior. See Sec. \ref{formally} for details of Steps 3-4.}
\label{ONEalg}
\end{algorithm}
\newpage
The setup is also sufficient for high-dimensional spoken commands
arriving as input vector sequences at certain standard input units connected to a microphone.
The non-trivial pattern recognition required to recognize commands such as
{\em ``go to the north-east corner of the maze"}
will require a substantial subnetwork of ONE and many weights.
We cannot expect neuroevolution to learn such speech recognition within reasonable time.
However, a copy of ONE may rather easily learn by neuroevolution during Step 3 of Algorithm~\ref{ONEalg}
to always go to the north-east corner of the maze, ignoring speech inputs.
In a later incarnation of Step 3,
a copy of another instance of ONE may rather easily learn
to always go to the north-west corner of the maze, again ignoring corresponding spoken commands such as
{\em ``go to the north-west corner of the maze."}
In the consolidation phase of Step 4, ONE then may rather easily learn~\cite{fernandez:icann2007,googlevoice2015}
the speech command-dependence of these behaviors through gradient-based learning,
without having to interact with the environment again.
Compare the concept of {\em input injection}~\cite{progressive2018}.
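The consolidation idea can be illustrated with a deliberately tiny numerical sketch (all details here, i.e. the linear ``network'', the synthetic traces, and the learning rate, are illustrative assumptions rather than the paper's setup): two goal-dependent behaviors, recorded as input-output traces, are compressed into a single weight set by plain gradient descent.

```python
import numpy as np

# Hypothetical toy setup: inputs are [goal_one_hot(2), obs(1)], targets are
# scalar actions. Skill A (goal 0) -> action +1; skill B (goal 1) -> action -1.
rng = np.random.default_rng(0)

def make_traces(goal_id, action, n=50):
    goals = np.zeros((n, 2))
    goals[:, goal_id] = 1.0
    obs = rng.normal(size=(n, 1))       # irrelevant observations
    return np.hstack([goals, obs]), np.full((n, 1), action)

xa, ya = make_traces(0, +1.0)
xb, yb = make_traces(1, -1.0)
x = np.vstack([xa, xb])
y = np.vstack([ya, yb])

# "Consolidation": plain gradient descent on one linear weight set w, so that
# a single model reproduces the goal-dependent actions of both replayed skills.
w = np.zeros((3, 1))
for _ in range(500):
    grad = x.T @ (x @ w - y) / len(x)   # MSE gradient over all traces
    w -= 0.5 * grad

err = np.max(np.abs(x @ w - y))
print("max action error after consolidation:", err)  # small
```

The point of the sketch is only that goal input-dependence of several behaviors can be learned offline from replayed traces, without further environment interaction.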
\subsection{Discarding Sub-Optimal Previous Behaviors}
\label{discard}
Once ONE has learned to solve some control task in suboptimal fashion,
it may later learn to solve it faster, or with fewer computational resources.
That's why
Step 4 of Algorithm~\ref{ONEalg}
does not retrain ONE to generate action outputs $out(t)$
in replays~\cite{Lin:91} of formerly {\em relevant} traces of trials of superseded controllers implemented by earlier versions of ONE.
However, replays of unsuccessful trials can still be used to retrain ONE to become a better predictor or world model~\cite{learningtothink2015}, given past observations and actions (Sec.~\ref{compress}).
\subsection{Algorithmic Information Theory (AIT) Argument}
\label{ait}
As discussed in earlier work~\cite{learningtothink2015},
according to the Theory of Algorithmic Information (AIT) or Kolmogorov Complexity~\cite{Solomonoff:64,Kolmogorov:65,Chaitin:66,Levin:73a,Solomonoff:78,LiVitanyi:97},
given some universal computer, $U$, whose programs are
encoded as bit strings, the mutual information between two programs
$p$ and $q$ is expressed as $K(q \mid p)$,
the length of the shortest program
$\bar{w}$ that computes $q$, given $p$, ignoring an additive constant
of $O(1)$ depending on $U$ (in practical applications the computation
will be time-bounded~\cite{LiVitanyi:97}). That is, if $p$ is a
solution to problem $P$, and $q$ is a fast (say, linear time) solution
to problem $Q$, and if $K(q \mid p)$ is small, and $\bar{w}$ is both fast
and much shorter than $q$, then {\em asymptotically optimal universal
search}~\cite{Levin:73,Schmidhuber:04oops} for a solution to $Q$,
given $p$, will generally find $\bar{w}$ first (to compute $q$ and
solve $Q$), and thus solve $Q$ much faster than search for $q$ from
scratch~\cite{Schmidhuber:04oops}.
In the style of the previous report~\cite{learningtothink2015},
we can directly apply this AIT argument to ONE.
For example, suppose that ONE has learned to represent (e.g., through predictive coding~\cite{chunker91and92,SchmidhuberHeil:96})
videos of people placing toys in boxes,
or to summarize such videos through textual outputs.
Now suppose ONE's next task is to learn to control a robot that places toys in boxes.
Although the robot's actuators may be quite different from human arms and hands,
and although videos and video-describing texts are quite different from desirable trajectories of
robot movements, ONE's knowledge about videos is expected to convey algorithmic
information about solutions to ONE's new control task, perhaps in form of connected
high-level spatio-temporal feature detectors representing typical movements of hands and elbows independent of arm size.
Training ONE to address this information in its own subroutines
and partially reuse them to solve the robot's task may
be much faster than learning to solve the task from scratch with a fresh network.
\begin{algorithm}[t]
\begin{algorithmic}
\STATE {\bf 1.}
Initialize global variables ONE, a finite set $\cal T$ of task descriptions~\cite[Sec.~2]{powerplay2011and13}, and positive real-valued variables $c, \lambda$ used to define training time budgets.
\STATE {\bf 2.}
Spend $c$ seconds on trying to solve a 1st task in $\cal T$ through Algorithm~\ref{ONEalg},
then $c$ seconds on trying to solve the 2nd,
and so on (here a teacher may or may not suggest an initial ordering of tasks).
In line with Algorithm~\ref{ONEalg},
whenever a task gets solved within the allocated time,
spend $\lambda c$ seconds on compressing its traces into ONE,
while also retraining ONE on previous traces to reduce forgetting of older skills,
and even on traces of unsuccessful trials to improve ONE's predictions (if any).
\STATE {\bf 3.}
If no task in $\cal T$ got solved, set $c:=2c$ and go to 2.
\STATE {\bf 4.}
Set $\cal T$ equal to the set of still unsolved tasks. If $\cal T$ is empty, exit. Reset $c$ to its original value of Step 1. Go to 2 {\em (with a ``more sophisticated" ONE that already knows how to solve some tasks).}
\end{algorithmic}
\caption{Simple automatic ordering of ONE's tasks - see Sec. \ref{simple}.}
\label{simplealg}
\end{algorithm}
\subsection{Gaining Efficiency by Selective Replays}
\label{selective}
Instead of retraining ONE in a sleep phase
(Step 4 of Algorithm~\ref{ONEalg}) on all input-output traces of all trials ever,
we may also
retrain it on parts thereof, by selecting trials randomly or
otherwise, and replaying~\cite{Lin:91} them to retrain ONE in standard
fashion~\cite{learningtothink2015}.
Generally speaking, we cannot expect perfect compression of previously learned skills and knowledge within limited retraining time spent in a particular invocation of Step 4.
Nevertheless, repeated incarnations of Step 4 will over time improve ONE's performance on all tasks so far.
\subsection{Heuristics: Gaining Efficiency by Tracking Weight Variance}
\label{variance}
As a heuristic, we may track the variance of each weight's value at the ends of all trials.
Frequently used weights with low variance can be suspected
to be important for many tasks, and may get small or zero learning rates during Step 3 of Algorithm~\ref{ONEalg},
thus making them even more stable, such that the system does not easily forget them
during the learning of new tasks.
Weights with high variance, however, may get high learning rates in Step 3, and thus participate
easily in the learning of new skills.
Similar heuristics go back to the early days of neural network research.
They can protect ONE's earlier acquired skills and knowledge to a certain extent, to facilitate retraining in Step 4.
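A minimal sketch of this heuristic, assuming Welford-style running statistics and an illustrative (not prescribed) mapping from weight variance to learning rate:

```python
import numpy as np

class WeightVarianceTracker:
    """Track per-weight variance at trial ends (Welford's online method)."""
    def __init__(self, shape):
        self.n = 0
        self.mean = np.zeros(shape)
        self.m2 = np.zeros(shape)        # running sum of squared deviations

    def record_trial_end(self, weights):
        self.n += 1
        delta = weights - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (weights - self.mean)

    def variance(self):
        return self.m2 / max(self.n - 1, 1)

    def learning_rates(self, base_lr):
        # Low-variance ("consolidated") weights get tiny rates and stay
        # protected; high-variance ("plastic") weights stay trainable.
        v = self.variance()
        return base_lr * v / (v + v.mean() + 1e-12)

tracker = WeightVarianceTracker((2,))
for w in [np.array([1.0, 0.0]), np.array([1.0, 2.0]), np.array([1.0, -2.0])]:
    tracker.record_trial_end(w)

lr = tracker.learning_rates(0.1)
print(lr)  # first weight never changed -> zero rate; second varies -> larger rate
```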
\subsection{Gaining Efficiency by Tracking Which Weights Are Used for Which Tasks}
\label{tracking}
To avoid forgetting previous skills,
instead of replaying all previous traces of still relevant trials
(the simplest option to achieve the {\sc PowerPlay} criterion~\cite{powerplay2011and13}),
one can also implement ONE as a self-modularizing, computation
cost-minimizing, winner-take-all
RNN~\cite{Schmidhuber:89cs,Schmidhuber:12slimnn,Srivastava2013first}.
Then we can keep track of which weights of ONE are used
for which tasks.
That is, to test whether ONE has forgotten something
in the wake of recent modifications of some of its weights,
only input-output traces in the union of affected tasks have to
be re-tested~\cite[Sec.~3.3.2]{powerplay2011and13}.
First
implementations of this simple principle were described in previous work on
{\sc PowerPlay}~\cite{powerplay2011and13,Srivastava2013first}.
\subsection{Ordering Tasks Automatically}
\label{order}
So far the present paper has focused on user-given sequences of tasks. In general, however, given a set of tasks, no teacher knows the best sequential ordering of tasks
that makes ONE learn to solve all tasks as quickly as possible.
The {\sc PowerPlay} framework (2011)~\cite{powerplay2011and13} offers a general solution to the automatic task ordering problem.
Given is a set of tasks, which may actually be the set of {\em all} tasks with computable task descriptions, or a more limited set of tasks, some of them possibly given by a user. In unsupervised mode, one
{\sc PowerPlay} variant systematically searches the space of possible pairs of new tasks and modifications of the current problem solver,
until it finds a more powerful problem solver that solves all previously learned tasks plus the new one, while the unmodified predecessor does not.
The greedy search of typical {\sc PowerPlay} variants uses time-optimal program search to order candidate pairs of tasks and solver modifications by their conditional computational (time and space) complexity, given the stored experience so far. The new task and its corresponding task-solving skill are those first found and validated. This biases the search toward pairs that can be described compactly and validated quickly. The computational costs of validating new tasks need not grow with task repertoire size.
\subsubsection{Simple automatic ordering of ONE's tasks}
\label{simple}
A related, more naive, but easy-to-implement strategy is given by {\bf Algorithm~\ref{simplealg}, }
which temporarily skips tasks that it currently cannot solve within a given time budget,
trying to solve them again later after it has learned other skills,
eventually doubling the time budget if any unsolved tasks are left.
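The budget-doubling loop of Algorithm~\ref{simplealg} can be sketched as follows (the \texttt{try\_solve} interface and the toy difficulty model are hypothetical stand-ins for actual training trials of ONE):

```python
def order_tasks(tasks, try_solve, c0):
    """Repeatedly sweep over unsolved tasks; double the budget c whenever a
    full sweep solves nothing, reset it after progress; return solving order."""
    c, unsolved, order = c0, list(tasks), []
    while unsolved:
        solved_this_sweep = [t for t in unsolved if try_solve(t, c)]
        if solved_this_sweep:
            order += solved_this_sweep
            unsolved = [t for t in unsolved if t not in solved_this_sweep]
            c = c0                       # reset budget after progress
        else:
            c *= 2                       # no task solved: double budget
    return order

# Toy difficulty model: task t is solvable iff the budget reaches difficulty[t].
difficulty = {"easy": 1, "medium": 3, "hard": 9}
solve = lambda t, c: c >= difficulty[t]
print(order_tasks(["hard", "easy", "medium"], solve, 1))  # easy, medium, hard
```

Easier tasks get solved first automatically; skills acquired along the way would, in the full algorithm, speed up the later, harder tasks.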
\section{Conclusion}
Supervised learning in large LSTMs works so well that it has become highly commercial, e.g.,~\cite{googlevoice2015,wu2016google,amazon2016,facebook2017}.
True AI, however, must continually learn to solve more and more complex control problems in partially observable environments {\em without a teacher}.
In principle, this could be achieved by black box optimization through neuroevolution or related techniques.
Such approaches, however, are currently feasible only for networks much smaller than large commercial supervised LSTMs.
Here we combine the best of both worlds, and apply the AIT argument to
show how a single recurrent neural network called ONE can incrementally absorb more and more control and prediction skills
through rather efficient and well-understood gradient descent-based compression of desirable behaviors,
including behaviors of control policies learned by past instances of ONE through neuroevolution or similar general but slow techniques.
Ideally, none of the ``holy data'' from all trials is ever discarded;
all can be used to incrementally make ONE an increasingly general problem solver
able to solve more and more tasks.
Essentially, during ONE's dreams,
gradient-based compression of policies and
data streams simplifies ONE,
squeezing the essence of ONE's previously learned skills and knowledge into
the code implemented within the recurrent weight matrix of ONE itself.
This can improve ONE's ability to generalize and quickly learn new,
related tasks when it is awake.
\section{Introduction}
Open quantum dynamics---the study of the evolution of quantum systems interacting with an environment---has wide-ranging theoretical and experimental importance. It is fundamental in the study of quantum thermodynamics. Since thermalization is a non-unitary process, it requires an environment. Open dynamics is also critical in understanding the noise and decoherence modes ubiquitously present in experimental settings \cite{Nielsen:2000}.
The formalism of Gaussian Quantum Mechanics (GQM) (see, e.g., \cite{GQMRev}) simplifies the treatment of many quantum mechanical problems by making use of the phase space representation of quantum mechanics, focusing on states that can be fully characterized by a Gaussian Wigner function. Such states are theoretically and experimentally relevant, including coherent states, thermal states, and squeezed states. As long as all the relevant transformations preserve this Gaussianity (i.e., take Gaussian states to Gaussian states), GQM provides a significant decrease in the overhead of describing quantum states and transformations. One needs only track the system's first and second statistical moments instead of a vector in an infinite dimensional Hilbert space. The literature abounds with reviews on Gaussian quantum mechanics, in particular in its applications to quantum information; the reader is referred to \cite{weedbrook, adesso1, lami}.
In this paper, we consider the dynamics induced in a generic Gaussian system when rapidly bombarded by a series of Gaussian ancillae, a scenario we call \textit{Gaussian ancillary bombardment}. An intuitive example of such a scenario is a harmonic oscillator in a thermal bath of harmonic oscillators.
To study the general scenario, in Sec. \ref{InterpolateGQM} we adapt the rapid repeated interaction formalism developed in \cite{Grimmer2016a,Grimmer2017a} to the Gaussian setting. Specifically, we construct an interpolating master equation for the discrete time dynamics induced by the rapid interactions. In Sec. \ref{AncillaryBombardmentGQM} we apply this adapted formalism to a generic Gaussian ancillary bombardment scenario and analyze the resulting master equation. In this analysis, we make use of the partition of open Gaussian dynamics developed in \cite{ArXivGrimmer2017b} to characterize the dynamics in terms of unitarity, ability to cause energy flow, state-dependence, and mode mixing.
Finally, in Sec. \ref{Example}, we apply the tools built in this paper to the problem of understanding thermalization as resulting from the Markovian bombardment of a small system by the microconstituents of a thermal reservoir. We show that if we are to model equilibration and thermalization as resulting from this kind of dynamics then these processes critically depend on the system-environment coupling.
The methods and results we present not only add to a growing understanding of Gaussian open dynamics \cite{koga, nicacio, nicacio2} but also provide tools for investigating
the thermodynamics of systems that are repeatedly disturbed by an environment, particularly with regard to microscopic details connected with the flow of energy and information.
\section{Gaussian Quantum Mechanics}\label{ReviewGQM}
Consider a system composed of $N$ coupled modes (for example, harmonic oscillators) with the $n^{th}$ of these modes characterized by its quadrature operators, $\hat{q}_n$ and $\hat{p}_n$, which obey the canonical bosonic commutation relations,
\begin{equation}
[\hat{q}_n,\hat{q}_m]
=[\hat{p}_n,\hat{p}_m]
=0
\quad\text{and}\quad
[\hat{q}_n,\hat{p}_m]=\mathrm{i} \, \delta_{nm} \, \hat{\openone}.
\end{equation}
Such systems can be fully described in terms of a pseudo-probability distribution defined on the system's phase space \cite{Groenewold,Moyal}. In particular, a state with density matrix $\rho$ can be equivalently represented by its Wigner function,
\begin{equation}
W(\bm{q},\bm{p})=\frac{1}{\pi^N}\!\int_{-\infty}^\infty \d^N \bm{s}
\bra{\bm{q}+\bm{s}}\rho\ket{\bm{q}-\bm{s}}\exp(-2\mathrm{i} \, \bm{p}\cdot\bm{s}).
\end{equation}
Gaussian Quantum Mechanics (GQM) is the restriction of quantum mechanics to the class of states whose Wigner functions are Gaussian and to the class of transformations which preserve this Gaussianity. The following overview of GQM condenses the in-depth review given in \cite{ArXivGrimmer2017b}, in which many of the claims below are spelled out and demonstrated.
The main benefit of this restriction to Gaussian states and transformations is that it allows for a significantly simplified description of quantum states and transformations whilst still describing a wide variety of theoretically and experimentally relevant situations. In particular, a Gaussian distribution is completely determined by its first and second statistical moments. Thus collecting the system's quadrature operators into the vector
\bel{XhatDef}
\hat{\bm{X}}
\coloneqq
(\hat{q}_1,\hat{p}_1,\hat{q}_2,\hat{p}_2,\dots,\hat{q}_N,\hat{p}_N)^\intercal,
\end{equation}
a Gaussian state is fully described by (a) the mean of each of these operators, collected in the $2N$-dimensional mean vector
\bel{XDef}
\bm{X}
\coloneqq\langle\hat{\bm{X}}\rangle
=\big(\langle\hat{q}_1\rangle,\langle\hat{p}_1\rangle,\dots,\langle\hat{q}_N\rangle,\langle\hat{p}_N\rangle\big)^\intercal,
\end{equation}
and (b) by the covariances between them, collected in the $2N$ by $2N$ symmetric covariance matrix
\bel{Vdef}
\sigma_j{}^k
\coloneqq
\big\langle
\hat{X}_j \, \hat{X}^k
+ \hat{X}^k \, \hat{X}_j
\big\rangle
-2\big\langle\hat{X}_j\big\rangle
\big\langle\hat{X}^k\big\rangle.
\end{equation}
Note that any two quadrature operators, say $\hat{X}_j$ and $\hat{X}^k$, will either commute to $\mathrm{i} \, \hat{\openone}$ or to $0$ such that all of the system's commutation relations are captured by the phase space matrix $\Omega$, defined as
\begin{align}\label{OmegaDef}
[\hat{X}_j,\hat{X}^k]
&=\mathrm{i} \ \Omega_j{}^k \, \hat{\openone}.
\end{align}
This matrix, called the symplectic form, is given explicitly as
\bel{OmegaExplicit}
\Omega
=\bigoplus_{n=1}^N \omega
=\openone_N\otimes\omega; \ \ \ \ \omega
=\begin{pmatrix}
0 & 1\\
-1 & 0
\end{pmatrix},
\end{equation}
in the same representation as \eqref{XhatDef}. Note that $\Omega$ is real-valued, antisymmetric, and invertible, with \mbox{$\Omega^{-1}=\Omega^\intercal=-\Omega$}.
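These properties of the symplectic form are easy to confirm numerically; the following quick sanity check (for $N=3$ modes, not part of the paper) builds $\Omega=\openone_N\otimes\omega$ and verifies them:

```python
import numpy as np

# Build the symplectic form Omega = I_N (x) omega for N = 3 modes.
N = 3
omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
Omega = np.kron(np.eye(N), omega)

assert np.allclose(Omega.T, -Omega)                 # antisymmetric
assert np.allclose(np.linalg.inv(Omega), Omega.T)   # Omega^{-1} = Omega^T
assert np.allclose(Omega @ Omega, -np.eye(2 * N))   # hence Omega^2 = -1
print("Omega checks passed")
```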
As in standard quantum mechanics, in GQM the commutation relations underlie the uncertainty principle, which all valid states obey. For Gaussian states the uncertainty principle is \cite{Simon1994},
\bel{SigmaPosCond}
\sigma\geq\mathrm{i} \, \Omega.
\end{equation}
For a matrix $M$, the notation $M\geq 0$ indicates here that $M$ is positive semi-definite. Moreover $M_1\geq M_2$ here means $M_1-M_2\geq0$. The uncertainty bound \eqref{SigmaPosCond} implies that $\sigma\geq0$ (see Sec. II in \cite{ArXivGrimmer2017b}).
Gaussian unitary transformations are unitary transformations in the system's Hilbert space that preserve the Gaussianity of the state. Differential Gaussian unitary transformations are generated by Hamiltonians that are at most quadratic in the operator vector \cite{Schumaker1986}. Such Hamiltonians can always be cast in the form,
\bel{QuadHamForm}
\hat{H}=\frac{1}{2}\hat{\bm{X}}^\intercal F \, \hat{\bm{X}}
+\bm{\alpha}^\intercal\hat{\bm{X}},
\end{equation}
where $F$ is a $2N$ by $2N$ real symmetric matrix and $\bm{\alpha}$ is a real-valued $2N$ dimensional vector. From \eqref{QuadHamForm}, one can calculate the evolution of the mean vector, $\bm{X}$, and of the covariance matrix, $\sigma$, as
\begin{align}
\label{SymplecticDiffXUpHam}
\frac{\d}{\d t}\bm{X}(t)
&=\Omega (F \bm{X}(t)+\bm{\alpha}),\\
\label{SymplecticDiffVUpHam}
\frac{\d}{\d t}\sigma(t)
&=(\Omega \, F) \, \sigma(t)
+\sigma(t) \, (\Omega \, F)^\intercal.
\end{align}
For a time-independent Hamiltonian, integrating these equations for a time interval $[0,t]$ gives
\begin{align}
\label{SymplecticXUp}
\bm{X}(0)&\longrightarrow \bm{X}(t)=S(t) \, \bm{X}(0)+\bm{d}(t),\\
\label{SymplecticVUp}
\sigma(0)&\longrightarrow \sigma(t)=S(t) \, \sigma(0) \, S^\intercal(t)
\end{align}
where
\begin{align}
\label{SHamDef}
S(t)&=\text{exp}(\Omega F \, t),\\
\label{dHamDef}
\bm{d}(t)&=\frac{\text{exp}(\Omega F \, t)-\openone_{2N}}{\Omega F} \, \Omega\bm{\alpha}.
\end{align}
Note that \eqref{dHamDef} does not require $\Omega F$ to be invertible. Instead, the notation can be understood in terms of the following series expansion
\bel{(ExpX-1)byXDef}
\frac{\text{exp}(X \, t)-\openone}{X}
=\sum_{m=0}^\infty \frac{t^{m+1}}{(m+1)!}X^m.
\end{equation}
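The series \eqref{(ExpX-1)byXDef} can be checked numerically. The sketch below (with arbitrarily chosen matrices, not from the paper) confirms that it agrees with $(\exp(Xt)-\openone)X^{-1}$ when $X$ is invertible, and remains finite for a nilpotent, hence singular, $X$:

```python
import numpy as np

def phi(x_mat, t, terms=40):
    """Truncated series for (exp(X t) - 1)/X, well defined even for singular X."""
    out = np.zeros_like(x_mat)
    power = np.eye(len(x_mat))           # X^m, starting at m = 0
    fact = 1.0
    for m in range(terms):
        fact *= (m + 1)                  # (m+1)!
        out += t ** (m + 1) / fact * power
        power = power @ x_mat
    return out

def expm_series(m, terms=40):
    """Truncated power series for the matrix exponential."""
    out = np.eye(len(m))
    power = np.eye(len(m))
    for k in range(1, terms):
        power = power @ m / k            # m^k / k!
        out += power
    return out

x = np.array([[0.0, 1.0], [-1.0, 0.3]])  # invertible example
t = 0.7
direct = (expm_series(x * t) - np.eye(2)) @ np.linalg.inv(x)
assert np.allclose(phi(x, t), direct)

# For a nilpotent (singular) X with X^2 = 0, the series terminates:
n = np.array([[0.0, 1.0], [0.0, 0.0]])
assert np.allclose(phi(n, t), t * np.eye(2) + t**2 / 2 * n)
print("series identity verified")
```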
More generally, any transformation of the form \eqref{SymplecticXUp} and \eqref{SymplecticVUp} (i.e., with generic $S$ and $\bm{d}$) can be implemented by evolving under a (potentially time-dependent\footnote{Notice that in order to implement a general symplectic transformation a time-dependent generator is generally needed. This follows from the fact that the exponential map on the symplectic group is not surjective.}) quadratic Hamiltonian with the sole restriction that it preserves the symplectic form (i.e., the commutation relations) as
\bel{SympTranDef}
S \, \Omega \, S^\intercal=\Omega.
\end{equation}
Such a matrix $S$ implements a symplectic transformation. Together with $\bm{d}$, the update \eqref{SymplecticXUp} and \eqref{SymplecticVUp} constitutes a symplectic-affine transformation. Gaussian unitary transformations on the system's Hilbert space correspond to symplectic-affine transformations on the system's phase space.
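That $S(t)=\exp(\Omega F t)$ from \eqref{SHamDef} satisfies \eqref{SympTranDef} for any real symmetric $F$ can be verified numerically; the generator below is randomly chosen for illustration (single mode, so $\Omega=\omega$):

```python
import numpy as np

rng = np.random.default_rng(1)
Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
f_half = rng.normal(size=(2, 2))
F = f_half + f_half.T                    # arbitrary real symmetric generator

def expm_series(m, terms=60):
    """Truncated power series for the matrix exponential."""
    out = np.eye(len(m))
    power = np.eye(len(m))
    for k in range(1, terms):
        power = power @ m / k            # m^k / k!
        out += power
    return out

S = expm_series(Omega @ F * 0.5)         # S(t) at t = 0.5
assert np.allclose(S @ Omega @ S.T, Omega)   # S Omega S^T = Omega
print("S is symplectic")
```

The underlying reason is that $(\Omega F)\Omega+\Omega(\Omega F)^\intercal=0$ for symmetric $F$, so $S\Omega S^\intercal$ is constant in $t$.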
\begin{comment}
It is important to note here that not every symplectic transformation can be achieved by such a time-independent quadratic Hamiltonian evolution. Explicitly, there are symplectic matrices $S$ such that
\begin{equation}
S \neq \exp(\Omega \, F)
\end{equation}
for any real symmetric matrix $F$. For example
\bel{ProbS}
S=\begin{pmatrix}
-4 & 0\\ 0 & -1/4\\
\end{pmatrix}
\neq \exp(\Omega \, F)
\end{equation}
If it was then $\sqrt{S}=\exp(\Omega \, F/2)$ would be symplectic as well and in particular would have a real trace. But
\begin{equation}
\text{Tr}(\sqrt{S})=\pm 2 \, \mathrm{i}\pm\mathrm{i}/2
\end{equation}
where the two plus/minuses are independent, which cannot be real. Mathematically, this is just an example of the known fact that the exponential operation is not surjective in the symplectic Lie group.
However, every symplectic matrix $S$ can be written as
\begin{equation}
S = \exp(\Omega \, F_2) \, \exp(\Omega \, F_1)
\end{equation}
for some real symmetric matrices $F_1$ and $F_2$. Find Proof, Polar Decomposition of Symplectic Group. Thus by concatenating two phases of time-independent quadratic Hamiltonian evolution (or more generally by allowing for time-dependence) we can implement any symplectic transformation.
Thus, in the context of Gaussian unitary transformations, we see novel transformations by allowing for time-dependent generators. This is not true in the non-Gaussian context, any unitary transformation can be implemented with a time-independent Hamiltonian. This implies that the problematic symplectic transformations \eqref{ProbS} discussed above, can in fact be implemented by a time-independent Hamiltonian, but this Hamiltonian will not be quadratic, and thus the intermediate states will not be Gaussian. One can imagine that the subspace of unitaries which are Gaussian has corners in it which take two steps to walk around. Alternatively, one can take a direct shortcut by leaving the realm of Gaussianity.
\end{comment}
In addition to the Gaussian unitary transformations described above, one can implement non-unitary Gaussian transformations by allowing the system to interact with an environment. In direct analogy with the Stinespring dilation theorem, one can implement any completely positive trace preserving (CPTP) Gaussian transformation as a Gaussian unitary transformation in some larger Hilbert space (or equivalently as a symplectic-affine transformation in a larger phase space) \cite{GaussianDilation}. From this it follows that the most general form of Gaussian update on $\bm{X}$ and $\sigma$ is,
\begin{align}
\label{GeneralUpdateX}
\bm{X}(0)&\to \bm{X}(t)=T(t)\bm{X}(0)+\bm{d}(t),\\
\label{GeneralUpdateV}
\sigma(0)&\to \sigma(t)=T(t) \, \sigma(0) \, T^\intercal(t)+R(t),
\end{align}
where $\bm{d}(t)$ is a real $2N$-dimensional vector, $T(t)$ and $R(t)$ are $2N$ by $2N$ real matrices, $R(t)$ is symmetric, and $T(t)$ (unlike $S$) is not necessarily symplectic.
A transformation (given by $T$, $\bm{d}$, $R$) is CPTP if and only if it obeys the complete positivity condition \cite{GQMRev}
\bel{FiniteCPCond}
R\geq\mathrm{i} \, (T \, \Omega \, T^\intercal-\Omega),
\end{equation}
where a sketch of the proof appears in the appendix of \cite{ArXivGrimmer2017b}. Recall that the notation $M\geq 0$ indicates that $M$ is a positive semi-definite matrix.
We can take the update given by \eqref{GeneralUpdateX} and \eqref{GeneralUpdateV} to be differential, as
\begin{align}
T(\d t)&=\openone_{2N}+\d t \ \Omega \, A,\\
\bm{d}(\d t)&=\d t \ \Omega \, \bm{b},\\
R(\d t)&=\d t \ C,
\end{align}
where $\bm{b}$ is a real $2N$-dimensional vector, $A$ and $C$ are $2N$ by $2N$ real matrices, and $C$ is symmetric. Since $\Omega$ is invertible, and since $A$ and $\bm{b}$ are arbitrary, factoring $\Omega$ out in front of $A$ and $\bm{b}$ involves no loss of generality.
From this differential update one can find that the general form of the Gaussian master equations is
\begin{align}
\label{GeneralDiffXUp}
\frac{\d}{\d t}\bm{X}(t)
&=\Omega(A(t) \bm{X}(t)+\bm{b}(t)),\\
\label{GeneralDiffVUp}
\frac{\d}{\d t}\sigma(t)
&=(\Omega A(t)) \, \sigma(t)
+\sigma(t) \, (\Omega A(t))^\intercal
+C(t).
\end{align}
The differential version of the complete positivity condition \eqref{FiniteCPCond} is
\bel{DiffCPCond}
C\geq\mathrm{i} \, \Omega (A-A^\intercal)\Omega
\end{equation}
from which it follows that $C\geq0$.
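A small constructed single-mode example (the generator $A$ and the choice of isotropic noise are illustrative, not from the paper) shows the condition \eqref{DiffCPCond} and its consequence $C\geq0$ at work:

```python
import numpy as np

Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = np.array([[0.5, 1.0], [-1.0, 0.5]])       # generic real generator

# Hermitian lower bound i Omega (A - A^T) Omega; here A - A^T = 2 Omega,
# so the bound is -2i Omega with eigenvalues -2 and +2.
bound = 1j * Omega @ (A - A.T) @ Omega
evals = np.linalg.eigvalsh(bound)

# The smallest admissible isotropic noise is C = lambda_max * I:
C = evals.max() * np.eye(2)
assert np.all(np.linalg.eigvalsh(C - bound) >= -1e-12)  # CP condition holds
assert np.all(np.linalg.eigvalsh(C) >= 0)               # hence C >= 0
print("CP condition satisfied with C =", evals.max(), "* I")
```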
In \cite{ArXivGrimmer2017b} the dynamical effects of the $A$, $\bm{b}$, and $C$ terms were explored in detail. To summarize, the $A$ term implements rotations, squeezings, and amplifications in phase space, whereas the $\bm{b}$ term implements displacements and the $C$ term implements state-independent noise.
For time-independent generators ($A$, $\bm{b}$, and $C$), integrating these equations for a time interval $[0,t]$ gives an update of the form \eqref{GeneralUpdateX} and \eqref{GeneralUpdateV} with
\begin{align}
\label{TfromAbC}
T(t)&=\exp(\Omega A \, t),\\
\label{dfromAbC}
\bm{d}(t)&=\frac{\exp(\Omega A \, t)-\openone_{2N}}{\Omega A} \, \Omega \, \bm{b},\\
\label{RfromAbC}
R(t)&=\text{vec}^{-1}\Big(\frac{\exp((\Omega A\oplus\Omega A) \, t)-\openone_{4N^2}}{\Omega A\oplus\Omega A} \ \text{vec}(C)\Big),
\end{align}
where \mbox{$\Omega A\oplus\Omega A\coloneqq\Omega A\otimes\openone_{2N}+\openone_{2N}\otimes\Omega A$} is the Kronecker sum, which satisfies \mbox{$\exp((\Omega A\oplus\Omega A) \, t)=T(t)\otimes T(t)$}, and the $\vec$ operation is defined \cite{ArXivGrimmer2017b} to map outer products to tensor products as
\bel{OuterToTensor}
\vec(\lambda \ \bm{u}\bm{v}^\intercal)
\coloneqq\lambda \ \bm{u}\otimes\bm{v}
\end{equation}
for some scalar $\lambda$ and vectors $\bm{u}$ and $\bm{v}$. By linearity this defines its action on any matrix.
One quickly finds that for any matrices $X$, $Y$ and $Z$
\bel{VecIdentity}
\vec(X \, Y \, Z^\intercal)=(X\otimes Z)\vec(Y).
\end{equation}
Concretely, this operation maps a matrix to the vector formed by taking its entries in row order,
\begin{equation}
\vec\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}
=(a,b,c,d)^\intercal.
\end{equation}
Note that $\text{vec}^{-1}$ is trivially defined by ``restacking'' the matrix's entries.
Also, as before, note that it is not necessary that $\Omega A$ (or the corresponding generator in \eqref{RfromAbC}) be invertible for us to evaluate \eqref{dfromAbC} and \eqref{RfromAbC}, as we can make use of the series \eqref{(ExpX-1)byXDef}.
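As a concrete check of these closed-form solutions, the following minimal \texttt{numpy} sketch (not from the original treatment; the single-mode generator, noise matrix, and evolution time are arbitrary choices) verifies the identity \eqref{VecIdentity} and compares the vectorized solution for $R(t)$ against direct numerical integration of \eqref{GeneralDiffVUp}. Since \mbox{$\text{vec}(T\sigma T^\intercal)=(T\otimes T)\text{vec}(\sigma)$}, the generator acting on $\text{vec}(\sigma)$ is the Kronecker sum \mbox{$\Omega A\otimes\openone+\openone\otimes\Omega A$}.

```python
import numpy as np

def vec(M):
    # row-stacking vec: vec([[a, b], [c, d]]) = (a, b, c, d)
    return M.reshape(-1)

def phi(M, t, terms=50):
    # (exp(M t) - 1)/M via its power series; valid even for singular M
    out = np.zeros_like(M)
    term = t * np.eye(M.shape[0])
    for k in range(1, terms + 1):
        out = out + term
        term = term @ M * (t / (k + 1))
    return out

rng = np.random.default_rng(0)

# check vec(X Y Z^T) = (X kron Z) vec(Y)
X, Y, Z = rng.standard_normal((3, 2, 2))
assert np.allclose(vec(X @ Y @ Z.T), np.kron(X, Z) @ vec(Y))

# a single-mode generator Omega A and a symmetric PSD noise matrix C
Om = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = 0.3 * rng.standard_normal((2, 2))
C = rng.standard_normal((2, 2))
C = C @ C.T
L = Om @ A

# closed form: vec(R(t)) = [(exp(Ls t) - 1)/Ls] vec(C),
# with Ls = L kron 1 + 1 kron L, so that exp(Ls t) = T(t) kron T(t)
t = 0.7
Ls = np.kron(L, np.eye(2)) + np.kron(np.eye(2), L)
R_closed = (phi(Ls, t) @ vec(C)).reshape(2, 2)

# compare with Euler integration of sigma' = L sigma + sigma L^T + C, sigma(0)=0
sigma, dt = np.zeros((2, 2)), 1e-5
for _ in range(int(round(t / dt))):
    sigma = sigma + dt * (L @ sigma + sigma @ L.T + C)
assert np.allclose(R_closed, sigma, atol=1e-3)
```

The series form of $(\exp(Mt)-\openone)/M$ is used so that the check also goes through when $M$ is singular.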
\section{Rapid Repeated Gaussian Interaction}\label{InterpolateGQM}
In this section we build a Gaussian master equation of the general form \eqref{GeneralDiffXUp} and \eqref{GeneralDiffVUp} from rapid repeated application of a Gaussian channel of the general form \eqref{GeneralUpdateX} and \eqref{GeneralUpdateV}.
Specifically, we take a Gaussian system (characterized by its mean vector, $\bm{X}$, and its covariance matrix, $\sigma$) to be updated in discrete time steps of duration $\delta t$ via the Gaussian channel given by some $T(\delta t)$, $\bm{d}(\delta t)$, and $R(\delta t)$ as
\begin{align}
\label{UpdateSchemeXX}
\bm{X}((n+1)\delta t)
&=T(\delta t) \, \bm{X}(n \, \delta t)
+\bm{d}(\delta t),\\
\label{UpdateSchemeVV}
\sigma((n+1)\delta t)
&=T(\delta t) \, \sigma(n \, \delta t) \, T^\intercal(\delta t)
+R(\delta t).
\end{align}
Given the initial system state, $\bm{X}(0)$ and $\sigma(0)$, the above update scheme defines the system state at the discrete time points $t=n\,\delta t$. Note this update is Markovian since it is time-local (it only depends on the current state of the system).
Further we make the natural assumptions that
\bel{NothingNoTime}
T(0)=\openone_{2N}, \ \ \ \bm{d}(0)=0, \ \ \ \text{and} \ \ \ R(0)=0
\end{equation}
(nothing happens in no time) and that
\bel{FiniteRate}
T'(0), \ \ \ \bm{d}'(0), \ \ \ \text{and} \ \ \ R'(0) \ \ \ \text{exist}
\end{equation}
(things happen at a finite rate). Finally we assume that the update scheme is invertible. Ultimately, this means that $T(\delta t)$ is non-singular. Note that we automatically have this for small enough $\delta t$.
From the above update we seek to construct a Gaussian master equation of the general form
\begin{align}
\label{GQMInterpMasterEqsX}
\bm{X}'(t)
&=\Omega(A_{\delta t} \, \bm{X}(t)
+\bm{b}_{\delta t}),\\
\label{GQMInterpMasterEqsV}
\sigma'(t)
&=(\Omega A_{\delta t}) \, \sigma(t)
+\sigma(t) \, (\Omega A_{\delta t})^\intercal
+C_{\delta t}
\end{align}
for some generators $A_{\delta t}$, $\bm{b}_{\delta t}$, and $C_{\delta t}$ such that the dynamics it describes exactly matches the dynamics given by the discrete updater at every time point, $t=n \, \delta t$. As the dynamics generated by \eqref{GQMInterpMasterEqsX} and \eqref{GQMInterpMasterEqsV} is defined for all $t\geq0$ (not just $t=n\,\delta t$) this master equation constitutes an interpolation scheme (see \cite{Grimmer2016a} for details).
In general, such an interpolation scheme is not uniquely determined. However, as discussed in \cite{Grimmer2017a}, there is a unique interpolation scheme with time-independent generators which converge in the rapid interaction limit (as $\delta t\to0$).
This unique interpolation scheme is constructed in detail in Appendix \ref{AppGQMInterpolate}, yielding the interpolation generators
\begin{align}
\label{AdtDef}
\Omega A_{\delta t}
&=\frac{1}{\delta t}\text{Log}(T(\delta t)),\\
\label{bdtDef}
\Omega \, \bm{b}_{\delta t}
&=\frac{1}{\delta t}
\frac{\text{Log}(T(\delta t))}{T(\delta t)-\openone_{2N}}\bm{d}(\delta t),\\
\label{CdtDef}
C_{\delta t}
&=\vec^{-1}\Big(\frac{1}{\delta t}\frac{\text{Log}(T(\delta t) \otimes T(\delta t))}{T(\delta t) \otimes T(\delta t)-\openone_{4N^2}} \, \vec\big(R(\delta t)\big)\Big),
\end{align}
where we emphasize that
the expressions for $\bm{b}_{\delta t}$ and $C_{\delta t}$ are to be understood via the series expansion
\bel{LogSeries2}
\frac{\text{Log}(X)}{X-\openone}
=\sum_{m=0}^\infty\frac{(-1)^m}{m+1}(X-\openone)^m
\end{equation}
and so \mbox{$T(\delta t)-\openone_{2N}$} and \mbox{$T(\delta t)\, \otimes \, T(\delta t)-\openone_{4N^2}$} need not be invertible.
Finally, we note that in the above equations we take the logarithm's principal branch, such that $\text{Log}(\openone)=0$. This ensures that the interpolation generators converge as $\delta t\to0$.
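The construction of these interpolation generators can be sanity-checked numerically. The following minimal \texttt{numpy} sketch (the discrete channel here is hypothetical, generated at random) computes $\Omega A_{\delta t}$ and $\Omega\,\bm{b}_{\delta t}$ from a given $T(\delta t)$ and $\bm{d}(\delta t)$ via the series \eqref{LogSeries2}, and confirms that the resulting master equation reproduces the discrete mean-vector dynamics exactly at the times $t=n\,\delta t$.

```python
import math
import numpy as np

def matrix_series(coef, E, terms=60):
    # sum_m coef(m) E^m for a square matrix E
    out, P = np.zeros_like(E), np.eye(E.shape[0])
    for m in range(terms):
        out = out + coef(m) * P
        P = P @ E
    return out

rng = np.random.default_rng(1)
dt = 0.05
I = np.eye(2)
# a hypothetical single-mode discrete Gaussian update, close to the identity
T = I + dt * 0.5 * rng.standard_normal((2, 2))
d = dt * rng.standard_normal(2)

# Log(T) = sum_{m>=1} (-1)^(m+1) (T-1)^m / m      (principal branch)
LogT = matrix_series(lambda m: 0.0 if m == 0 else (-1) ** (m + 1) / m, T - I)
# Log(T)/(T-1) = sum_{m>=0} (-1)^m (T-1)^m/(m+1)  (the series in the text)
LogT_frac = matrix_series(lambda m: (-1) ** m / (m + 1), T - I)

OmA = LogT / dt           # Omega A_{dt}
Omb = LogT_frac @ d / dt  # Omega b_{dt}

# n discrete steps of X -> T X + d ...
n, X0 = 25, rng.standard_normal(2)
Xd = X0.copy()
for _ in range(n):
    Xd = T @ Xd + d

# ... equal the master-equation solution at t = n dt:
# X(t) = exp(OmA t) X0 + [(exp(OmA t) - 1)/OmA] (Omega b)
t = n * dt
expLt = matrix_series(lambda m: t ** m / math.factorial(m), OmA)
phiLt = matrix_series(lambda m: t ** (m + 1) / math.factorial(m + 1), OmA)
assert np.allclose(expLt @ X0 + phiLt @ Omb, Xd)
```

The agreement is exact (up to floating-point error) because $\exp(\Omega A_{\delta t}\,\delta t)=T(\delta t)$ by construction.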
If in addition to the minimal regularity assumed above --- \eqref{NothingNoTime} and \eqref{FiniteRate} --- we have that $T(\delta t)$, $\bm{d}(\delta t)$, and $R(\delta t)$ are analytic at $\delta t=0$, then we can expand them as series in $\delta t$ as
\begin{align}
\label{TSeries}
T(\delta t)
&=\openone_{2N}
+\delta t \, T_1
+\delta t^2 \, T_2
+\delta t^3 \, T_3
+\delta t^4 \, T_4
+\dots,\\
\label{dSeries}
\bm{d}(\delta t)
&=0
+\delta t \, \bm{d}_1
+\delta t^2 \, \bm{d}_2
+\delta t^3 \, \bm{d}_3
+\delta t^4 \, \bm{d}_4
+\dots,\\
\label{RSeries}
R(\delta t)
&=0
+\delta t \, R_1
+\delta t^2 \, R_2
+\delta t^3 \, R_3
+\delta t^4 \, R_4
+\dots \, .
\end{align}
Using these series expansions, through \eqref{AdtDef}, \eqref{bdtDef}, and \eqref{CdtDef}, we can expand each interpolation generator as a series in $\delta t$ as well,
\begin{align}
\label{ASeries}
A_{\delta t}
&=A_0
+\delta t \, A_1
+\delta t^2 \, A_2
+\delta t^3 \, A_3
+\dots,\\
\label{bSeries}
\bm{b}_{\delta t}
&=\bm{b}_0
+\delta t \, \bm{b}_1
+\delta t^2 \, \bm{b}_2
+\delta t^3 \, \bm{b}_3
+\dots,\\
\label{CSeries}
C_{\delta t}
&=C_0
+\delta t \, C_1
+\delta t^2 \, C_2
+\delta t^3 \, C_3
+\dots,
\end{align}
where the first few terms of the expansion of $A_{\delta t}$ are given by
\begin{align}
\label{A0def}
\Omega A_0=T_1,&\\
\label{A1def}
\Omega A_1=T_2
&-\frac{1}{2}T_1{}^2,\\
\label{A2def}
\Omega A_2=T_3
&-\frac{1}{2}(T_1 T_2+T_2 T_1)
+\frac{1}{3}T_1{}^3.
\end{align}
The first few terms of the expansion of $\bm{b}_{\delta t}$ are given by
\begin{align}
\Omega \, \bm{b}_0
=\bm{d}_1&,\\
\Omega \, \bm{b}_1
=\bm{d}_2
&-\frac{1}{2}T_1\bm{d}_1,\\
\Omega \, \bm{b}_2
=\bm{d}_3
&-\frac{1}{2}(T_1\bm{d}_2+T_2\bm{d}_1)
+\frac{1}{3}T_1^2\bm{d}_1.
\end{align}
Finally, the first few terms of the expansion of $C_{\delta t}$ are given by
\begin{align}
C_0&=R_1,\\
C_1&=R_2
-\frac{1}{2}(T_1 R_1+R_1 T_1^\intercal),\\
C_2&=R_3
-\frac{1}{2}(T_2 R_1+R_1 T_2^\intercal+T_1 R_2+R_2 T_1^\intercal)\\
\nonumber
&+\frac{1}{3}\big(T_1{}^2 R_1+R_1 (T_1^\intercal)^2\big)
+\frac{1}{6} T_1 R_1 T_1^\intercal.
\end{align}
Higher order terms in these series can be calculated but are not discussed in this paper.
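These coefficient formulas can be verified numerically. In the following sketch (a hypothetical channel built from randomly chosen $T_1$, $T_2$, $T_3$; all values arbitrary), the truncated series \mbox{$\Omega A_0+\delta t\,\Omega A_1+\delta t^2\,\Omega A_2$} from \eqref{A0def}--\eqref{A2def} is compared against the exact $\tfrac{1}{\delta t}\text{Log}(T(\delta t))$, with which it agrees up to $\mathcal{O}(\delta t^3)$.

```python
import numpy as np

def mlog(M, terms=30):
    # principal matrix log via Log(1+E) = sum_{m>=1} (-1)^(m+1) E^m / m
    E = M - np.eye(M.shape[0])
    out, P = np.zeros_like(M), E.copy()
    for m in range(1, terms + 1):
        out = out + ((-1) ** (m + 1) / m) * P
        P = P @ E
    return out

rng = np.random.default_rng(2)
T1, T2, T3 = 0.5 * rng.standard_normal((3, 2, 2))

dt = 1e-3
T = np.eye(2) + dt * T1 + dt**2 * T2 + dt**3 * T3

exact = mlog(T) / dt                       # Omega A_{dt}, exactly
truncated = (T1                            # Omega A_0
             + dt * (T2 - T1 @ T1 / 2)     # + dt * Omega A_1
             + dt**2 * (T3 - (T1 @ T2 + T2 @ T1) / 2
                        + T1 @ T1 @ T1 / 3))   # + dt^2 * Omega A_2
assert np.allclose(exact, truncated, atol=1e-7)
```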
\section{Gaussian ancillary bombardment}\label{AncillaryBombardmentGQM}
In this section we construct the Gaussian channel corresponding to a specific physically motivated situation that we refer to as \textit{Gaussian ancillary bombardment}, in analogy with the ancillary bombardment introduced in \cite{Grimmer2016a}. Following this we use the results of the previous section to calculate the interpolation generators and expand them as a series in $\delta t$. Finally, we will analyze these expansions order by order using the partition developed in \cite{ArXivGrimmer2017b}.
In a general Gaussian ancillary bombardment scenario, we consider a Gaussian system that is repeatedly bombarded by a series of Gaussian ancillae. Updating the system's state via \eqref{UpdateSchemeXX} and \eqref{UpdateSchemeVV} here corresponds to the system interacting with one of these Gaussian ancillae. An intuitive example of such a scenario (and one we analyze in Sec \ref{Example}) is a harmonic oscillator bombarded by a thermal bath of harmonic oscillators.
Let the system, $\text{S}$, be a Gaussian system composed of $N_\text{S}$ modes. Likewise let each ancilla, $\text{A}$, be a Gaussian system composed of $N_\text{A}$ modes. Together they form a joint system, $\text{SA}$, which is Gaussian and is composed of $N_\text{S}+N_\text{A}$ modes. Note that the dimensions of the phase spaces of $\text{S}$, $\text{A}$, and $\text{SA}$ are $2N_\text{S}$, $2N_\text{A}$, and $2N_\text{S}+2N_\text{A}$ respectively.
The system and ancilla's quadrature operators are collected together into the operator vector
\begin{equation}
\hat{\bm{X}}_\text{SA}=(\hat{\bm{X}}_\text{S},\hat{\bm{X}}_\text{A})^\intercal.
\end{equation}
Since the system's and the ancilla's observables act on different Hilbert spaces, they all commute with each other. Thus the joint system has the symplectic form,
\begin{equation}
\Omega_\text{SA}=
\begin{pmatrix}
\Omega_\text{S} & 0\\
0 & \Omega_\text{A}
\end{pmatrix}
\end{equation}
where $\Omega_\text{S}$ and $\Omega_\text{A}$ are the symplectic forms in the phase space of S and A respectively.
We assume that the system and ancilla are initially uncorrelated, having the initial joint mean vector,
\begin{equation}
\bm{X}_\text{SA}(0)=(\bm{X}_\text{S}(0),\bm{X}_\text{A}(0))^\intercal,
\end{equation}
and the initial joint covariance matrix,
\begin{equation}
\sigma_\text{SA}(0)=
\begin{pmatrix}
\sigma_\text{S}(0) & 0\\
0 & \sigma_\text{A}(0)
\end{pmatrix}.
\end{equation}
Further we assume that they evolve under a quadratic Hamiltonian,
\bel{HSADef}
\hat{H}_\text{SA}
=\frac{1}{2}\hat{\bm{X}}_\text{SA}^\intercal \, F_\text{SA} \, \hat{\bm{X}}_\text{SA}
+\bm{\alpha}^\intercal_\text{SA}\hat{\bm{X}}_\text{SA}
\end{equation}
where $F_\text{SA}$ is real and symmetric and $\bm{\alpha}_\text{SA}$ is real.
It is useful to divide this Hamiltonian into subblocks corresponding to the system and ancilla's phase spaces as,
\begin{equation}
F_\text{SA}=
\begin{pmatrix}
F_\text{S} & G\\
G^\intercal & F_\text{A}
\end{pmatrix},
\quad \quad
\bm{\alpha}_\text{SA}=
\begin{pmatrix}
\bm{\alpha}_\text{S}\\
\bm{\alpha}_\text{A}
\end{pmatrix}.
\end{equation}
Note that $F_\text{S}$ and $F_\text{A}$ are symmetric and that $G$ is not generally square, having dimensions $2 N_\text{S}$ by $2 N_\text{A}$.
Divided this way we can see that $F_S$ and $\bm{\alpha}_\text{S}$ correspond to the system's free Hamiltonian,
\begin{equation}
\hat{H}_\text{S}
=\frac{1}{2}\hat{\bm{X}}_\text{S}^\intercal \, F_\text{S} \, \hat{\bm{X}}_\text{S}
+\bm{\alpha}^\intercal_\text{S}\hat{\bm{X}}_\text{S}.
\end{equation}
Similarly $F_\text{A}$ and $\bm{\alpha}_\text{A}$ correspond to the ancilla's free Hamiltonian,
\begin{equation}
\hat{H}_\text{A}
=\frac{1}{2}\hat{\bm{X}}_A^\intercal \, F_\text{A} \, \hat{\bm{X}}_\text{A}
+\bm{\alpha}^\intercal_\text{A}\hat{\bm{X}}_\text{A}.
\end{equation}
Finally, we can see that the $G$ matrix contains all of the couplings between the system and the ancilla, corresponding to the interaction Hamiltonian,
\begin{equation}
\hat{H}_\text{I}
=\frac{1}{2}\hat{\bm{X}}_\text{S}^\intercal \, G \, \hat{\bm{X}}_\text{A}
+\frac{1}{2}\hat{\bm{X}}_\text{A}^\intercal \, G^ \intercal\, \hat{\bm{X}}_\text{S}.
\end{equation}
Next we compute the effect that evolving for a time $\delta t$ under this Hamiltonian has on the system (determining $T(\delta t)$, $\bm{d}(\delta t)$, and $R(\delta t)$). In order to do this we compute the evolution of the joint system then isolate the effect on the system. This evolution is unitary and therefore given by a symplectic-affine transformation in the joint phase space. Specifically,
\begin{align}
\label{SXUpdt}
\bm{X}_\text{SA}(\delta t)
&=S_\text{SA}(\delta t) \, \bm{X}_\text{SA}(0)+\bm{d}_\text{SA}(\delta t),\\
\label{SVUpdt}
\sigma_\text{SA}(\delta t)
&=S_\text{SA}(\delta t) \, \sigma_\text{SA}(0) \, S^\intercal_\text{SA}(\delta t)
\end{align}
where
\begin{align}
\label{AppSHamDef}
S_\text{SA}(\delta t)&=\text{exp}(\Omega_\text{SA} F_\text{SA} \, \delta t),\\
\label{AppdHamDef}
\bm{d}_\text{SA}(\delta t)&=\frac{\text{exp}(\Omega_\text{SA} F_\text{SA} \, \delta t)-\openone_{2 N_\text{S}+2 N_\text{A}}}{\Omega_\text{SA} F_\text{SA}} \, \Omega_\text{SA} \, \bm{\alpha}_\text{SA}.
\end{align}
In order to find the effective update on the system's state we can divide these into blocks as
\begin{equation}
\nonumber
S_\text{SA}(\delta t)
=\begin{pmatrix}
M_\text{SS}(\delta t) & M_\text{SA}(\delta t) \\
M_\text{AS}(\delta t) & M_\text{AA}(\delta t) \\
\end{pmatrix}
\ \ \text{and} \ \
\bm{d}_\text{SA}(\delta t)
=\begin{pmatrix}
\bm{d}_\text{S}(\delta t) \\ \bm{d}_\text{A}(\delta t) \\
\end{pmatrix}.
\end{equation}
Expanding \eqref{SXUpdt} and \eqref{SVUpdt} over the direct sum between the system and ancilla's phase spaces, one can identify that the reduced state of the system ($\bm{X}_\text{S}$ and $\sigma_\text{S}$) is updated as
\begin{align}
\bm{X}_\text{S}(\delta t)
&=T(\delta t) \, \bm{X}_\text{S}(0)
+\bm{d}(\delta t),\\
\sigma_\text{S}(\delta t)
&=T(\delta t) \, \sigma_\text{S}( 0) \, T^\intercal(\delta t)
+R(\delta t),
\end{align}
where
\begin{align}\label{TSMdSDef}
T(\delta t)&=M_\text{SS}(\delta t),\\
\bm{d}(\delta t)&=M_\text{SA}(\delta t) \ \bm{X}_\text{A}(0)+\bm{d}_\text{S}(\delta t),\\
R(\delta t)&=M_\text{SA}(\delta t) \, \sigma_\text{A}(0) \, M^\intercal_\text{SA}(\delta t).
\end{align}
With some effort, these can be expanded as a series in $\delta t$ (as in \eqref{TSeries}, \eqref{dSeries}, and \eqref{RSeries}). Using the results of the previous section, we can then write the interpolation generators $A_{\delta t}$, $\bm{b}_{\delta t}$, and $C_{\delta t}$ as a series in $\delta t$ (as in \eqref{ASeries}, \eqref{bSeries}, and \eqref{CSeries}) now with coefficients written explicitly in terms of the Hamiltonian \eqref{HSADef}.
This calculation is tedious but ultimately straightforward. For the first few terms of the expansion of $A_{\delta t}$ it yields
\begin{align}
A_0=&F_S,\\
\label{A1DefHam}
A_1=&\frac{1}{2}G \, \Omega_A G^\intercal,\\
A_2=&-\frac{1}{12} G \, \Omega_A G^\intercal \Omega_S F_S
-\frac{1}{12} F_S \Omega_S G \, \Omega_A G^\intercal\\
\nonumber
&+\frac{1}{6} G \, \Omega_A F_A \Omega_A G^\intercal.
\end{align}
For the first few terms of the expansion of $\bm{b}_{\delta t}$ we find
\begin{align}
\bm{b}_0&=
\bm{\alpha}_\text{S}
+G\bm{X}_\text{A}(0),\\
\bm{b}_1&=
\frac{1}{2} G \, \Omega_\text{A} F_\text{A}\bm{X}_\text{A}(0)
+\frac{1}{2} G \, \Omega_\text{A}\bm{\alpha}_\text{A},\\
\bm{b}_2&=-\frac{1}{12} F_\text{S} \Omega_\text{S} G \, \Omega_\text{A} \bm{\alpha}_\text{A}
+\frac{1}{6} \Omega_\text{S} G \, \Omega_\text{A} F_\text{A} \Omega_\text{A} \bm{\alpha}_\text{A}\\
\nonumber
&-\frac{1}{12} F_\text{S} \Omega_\text{S} G \, \Omega_\text{A} F_\text{A} \bm{X}_\text{A}(0)
+\frac{1}{6} G \, \Omega_\text{A} F_\text{A} \Omega_\text{A} F_\text{A} \bm{X}_\text{A}(0)\\
\nonumber
&-\frac{1}{12} G \, \Omega_\text{A} G^\intercal \Omega_\text{S} \bm{\alpha}_\text{S}
-\frac{1}{12} G \, \Omega_\text{A} G^\intercal \Omega_\text{S} G \bm{X}_\text{A}(0).
\end{align}
Finally, the first few terms of the expansion of $C_{\delta t}$ are
\begin{align}
C_0&=0,\\
\label{C1DefHam}
C_1&=\Omega_\text{S} G \sigma_\text{A}(0) G^\intercal \Omega^\intercal_\text{S},\\
\label{C2DefHam}
C_2&=\frac{1}{2} \Omega_\text{S} G \big(\Omega_\text{A} F_\text{A} \sigma_\text{A}(0)+\sigma_\text{A}(0) (\Omega_\text{A} F_\text{A})^\intercal\big) G^\intercal \Omega^\intercal_\text{S}.
\end{align}
It is worth noting the functional dependence of $A_{\delta t}$, $\bm{b}_{\delta t}$, and $C_{\delta t}$ on the parameters of the bombardment scenario. These include the system's free Hamiltonian ($F_\text{S}$ and $\bm{\alpha}_\text{S}$), the ancillae's free Hamiltonian ($F_\text{A}$ and $\bm{\alpha}_\text{A}$), the interaction Hamiltonian ($G$), and the initial state of the ancillae ($\bm{X}_\text{A}(0)$ and $\sigma_\text{A}(0)$). The interpolation generators depend on these (even non-perturbatively) as
\begin{align}
&A_{\delta t}(F_\text{S},F_\text{A},G),\\
&\bm{b}_{\delta t}(F_\text{S},F_\text{A},G,\bm{\alpha}_\text{S},\bm{\alpha}_\text{A},\bm{X}_\text{A}(0)),\\
&C_{\delta t}(F_\text{S},F_\text{A},G,\sigma_\text{A}(0)).
\end{align}
The $A_{\delta t}$ term (which implements rotation, squeezing, amplification, and relaxation \cite{ArXivGrimmer2017b}) depends neither on the linear parts of the Hamiltonians nor on the initial ancilla state. This means that the presence and strength of all of these effects are controlled solely by the nature of the coupling to the environment, and not by the particular state of the environment. Recall that this is true even in the regime of long-time interactions.
Additionally, since the dynamics of the mean vector is determined entirely by $A_{\delta t}$ and $\bm{b}_{\delta t}$ it is therefore independent of the initial covariance of the ancilla, $\sigma_\text{A}(0)$.
It is also interesting to note which types of dynamics become available at each order in the series. To do this we use the results of \cite{ArXivGrimmer2017b} which partitions the generators of Gaussian dynamics into 11 parts based on: (a) whether or not the dynamics allows for energy flow between the system and the environment, (b) whether it allows for entanglement to be created between the system and the environment, (c) whether the effect of the dynamics is state-dependent or state-independent and finally (d) whether it mixes different modes together.
The result of applying this partition to the dynamics generated by Gaussian ancillary bombardment is summarized in Table \ref{Table22} (for details see Appendix \ref{AppGBPart}).
Summarizing this analysis, at zeroth order we have access to all the types of dynamics present in the system's free Hamiltonian with the option to induce an additional displacement (coming from $\bm{b}_0$). At higher orders the dynamics will generically be able to access all types of displacement and noise. Past zeroth order, the rotation, squeezing and amplification effects (coming from $A$) that are available to the system alternate between unitary and non-unitary.
\begin{table*}
\begin{tabular}{||c|c|c|c|c||}
\hline Type of Dynamics & \quad 0th (Free) \quad & \quad 0th (Induced) \quad & \quad Odd ( $\geq$ 1st) \quad & \quad Even ( $\geq$ 2nd) \quad \\
\hline Single-mode Rotation & Yes & No & No & Yes \\
\hline Single-mode Squeezing & Yes & No & No & Yes \\
\hline Displacement & Yes & Yes & Yes & Yes \\
\hline Single-mode Squeezed Noise & No & No & Yes & Yes \\
\hline Amplification/Relaxation & No & No & Yes & No \\
\hline Thermal Noise & No & No & Yes & Yes \\
\hline Multi Mode Rotation & Yes & No & No & Yes \\
\hline Multi Mode Squeezing & Yes & No & No & Yes \\
\hline Multi Mode Counter-Rotation & No & No & Yes & No \\
\hline Multi Mode Noise & No & No & Yes & Yes \\
\hline Multi Mode Counter-Squeezing & No & No & Yes & No \\
\hline
\end{tabular}
\caption{The dynamics available to a bombarded Gaussian system at each order in $\delta t$. The eleven types of dynamics listed in this table are described in detail in \cite{ArXivGrimmer2017b}. The zeroth order effects are further divided into those available through the system's free Hamiltonian and those which can be induced through the interaction.
}\label{Table22}
\end{table*}
Finally, before analyzing each of these expansions order by order, we make some comments about when open Gaussian dynamics in general, and Gaussian ancillary bombardment in particular, can lead to purification. This is an important characterization because dynamics being able to increase the purity of at least one state is a prerequisite for the dynamics to be able to capture the process of thermalization (e.g. cooling through bombardment by a cold environment).
Following \cite{Grimmer2017a} we say that a map can purify if there exists a state whose purity increases under the map. The purity of a Gaussian state \cite{GPurity} is given in our notation by
\begin{equation}
\mathcal{P}=\text{Tr}(\rho^2)
=\frac{1}{\sqrt{\text{det}(\sigma)}}.
\end{equation}
A necessary and sufficient condition for Gaussian dynamics to be able to purify is
\bel{GaussianNandS}
\text{Tr}\big(\Omega A\big)<0.
\end{equation}
Within the partition described in \cite{ArXivGrimmer2017b}, only the Gaussian dynamics including amplification/relaxation effects are capable of purifying. From Table \ref{Table22} we can see that such effects are only available at odd orders. Thus if no purification effects are present at first order, the leading order purification effects will be at third order, generically two orders lower than the leading order noise term, $C_1$, with which they will compete. In subsection \ref{FirstOrderGaussian} we find that many commonly used interaction Hamiltonians cannot purify at first order.
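The role of condition \eqref{GaussianNandS} can be made explicit: by Jacobi's formula, \eqref{GeneralDiffVUp} gives \mbox{$\frac{\d}{\d t}\ln\det\sigma=2\,\text{Tr}(\Omega A)+\text{Tr}(\sigma^{-1}C)$}, and since both $\sigma^{-1}\geq0$ and $C\geq0$, the determinant can only decrease (and hence the purity increase) if $\text{Tr}(\Omega A)<0$. The following sketch illustrates this with a pure relaxation generator; the rates $\gamma$ and $c$ are illustrative choices, picked to satisfy the complete positivity condition \eqref{DiffCPCond}.

```python
import numpy as np

Om = np.array([[0.0, 1.0], [-1.0, 0.0]])
gamma, c = 0.5, 1.0
A = gamma * Om         # Omega A = -gamma * 1: pure relaxation, Tr(Om A) < 0
C = c * np.eye(2)

# complete positivity: C - i Om (A - A^T) Om >= 0
CPmat = C - 1j * Om @ (A - A.T) @ Om
assert np.linalg.eigvalsh(CPmat).min() > -1e-12
assert np.trace(Om @ A) < 0            # the purification condition

# evolve sigma' = (Om A) sigma + sigma (Om A)^T + C from a mixed thermal state
L = Om @ A
sigma, dt = 5.0 * np.eye(2), 1e-3
det0 = np.linalg.det(sigma)
for _ in range(2000):
    sigma = sigma + dt * (L @ sigma + sigma @ L.T + C)
# purity is a decreasing function of det(sigma), so a shrinking determinant
# means the state purifies (here it relaxes toward the vacuum, sigma = 1)
assert np.linalg.det(sigma) < det0
```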
\subsection{Zeroth Order Dynamics}
The zeroth order dynamics (i.e., in the continuum limit $\delta t\to 0$) is unitary, since $A_0$ is symmetric and $C_0$ vanishes. Specifically, at zeroth order we have the dynamics,
\begin{align}\label{GQMInterpMasterEqs}
\bm{X}_{S}'(t)
&=\Omega(F_{S} \, \bm{X}_S(t)
+\bm{\alpha}_S +G \, \bm{X}_A(0)),\\
\sigma_S'(t)
&=(\Omega F_{S}) \, \sigma_S(t)
+\sigma_S(t) \, (\Omega F_{S})^\intercal.
\end{align}
Comparing this to \eqref{SymplecticDiffXUpHam} and \eqref{SymplecticDiffVUpHam} we can see that this is just evolution under the effective Hamiltonian
\begin{align}\label{Heff0Gaussian}
\hat{H}_\text{eff}^{(0)}
&=\frac{1}{2}\hat{\bm{X}}_S^\intercal \, F_{S} \, \hat{\bm{X}}_S
+\hat{\bm{X}}_S^\intercal(\bm{\alpha}_S
+ G\bm{X}_A(0))\\
\nonumber
&=\hat{H}_S+\hat{\bm{X}}_S{}^\intercal G\bm{X}_A(0).
\end{align}
This is in line with the general result from \cite{Layden:2015b} showing that rapid repeated interaction (even in a non-Gaussian setting) produces unitary dynamics in the continuum limit.
In \cite{Layden:2015b} this result was interpreted as saying that in this regime the ancillae affect the system but do not entangle with it (they ``push'' the system, but do not ``talk'' to it). Further, it was shown in \cite{Layden:2015b} that by switching evolution between (non-commuting) $\hat{H}_\text{S}$ and $\hat{H}_\text{eff}^{(0)}$ one can generally gain full unitary control of the system. However, this cannot be done within the context of Gaussian ancillary bombardment.
In fact we will argue that only a limited range of Gaussian dynamics is available to the system at zeroth order. Specifically, unlike in \cite{Layden:2015b}, by turning on and off the environment, one can only adjust the system's Hamiltonian by a linear term in $\hat{\bm{X}}_\text{S}$, as can be seen from \eqref{Heff0Gaussian}. Such a modification of the system's Hamiltonian can only apply a displacement and cannot affect the dynamics of the system's covariance matrix. Thus while we are able to push the Gaussian state around as we like in phase space, we are not able to adjust its ``shape'' at will.
Finally, for completeness we note that since the zeroth order evolution is unitary it is trivially completely positive. Explicitly, from \eqref{DiffCPCond},
\bel{CPCheck0}
C_0=0\geq\mathrm{i}\Omega_S (A_0-A_0^\intercal)\Omega_S^\intercal=0.
\end{equation}
\subsection{First Order Dynamics}\label{FirstOrderGaussian}
At first order, we see a new displacement term (from $\bm{b}_1$), the first noise in the dynamics (from $C_1$), and several other non-unitary effects (from $A_1$). Specifically, from Table \ref{Table22} we can see that in addition to the displacement effects coming from $\bm{b}_1$ we can have all three kinds of noise (from $C_1$) as well as amplification/relaxation, multi-mode counter-rotation, and counter-squeezing coming from $A_1$. Note that we do not have access to single- or multi-mode rotation or squeezing at first order. Since noise is generically present at first order (see below), any single- or multi-mode rotation or squeezing appearing at higher orders will generally be subleading to the noise in the dynamics.
At this order the dynamics coming from both $A_1$ and $C_1$ is non-unitary ($A_1$ is antisymmetric, and noise is always non-unitary), thus the only unitary effects at first order come from $\bm{b}_1$. These effects give a first order correction to the effective Hamiltonian
\begin{equation}
\hat{H}_\text{eff}
=\hat{H}_\text{eff}^{(0)}
+\delta t \, \hat{H}_\text{eff}^{(1)}
+\mathcal{O}(\delta t^2)
\end{equation}
of
\begin{equation}
\hat{H}_\text{eff}^{(1)}
=\hat{\bm{X}}_\text{S}^\intercal \, \bm{b}_1
=\frac{1}{2}\hat{\bm{X}}^\intercal_\text{S}
G \, \Omega_A
\big(F_A\bm{X}_A(0)
+\bm{\alpha}_A\big).
\end{equation}
This correction can be understood as accounting for the ancilla freely evolving during the interaction.
The first order noise term is given by
\begin{equation}
C_1=\Omega_S G \, \sigma_A(0) \, G^\intercal \Omega_S^\intercal
\end{equation}
which we note is positive semi-definite \mbox{($C_1\geq 0$)}, since \mbox{$\sigma_\text{A}(0)\geq0$}. This noise vanishes only if $G=0$ (there is no interaction) or if $\sigma_A(0)$ is singular (i.e., infinitely squeezed) and $G^\intercal\Omega_S^\intercal$ maps entirely into the kernel of $\sigma_A(0)$.
As discussed above, a necessary and sufficient condition for Gaussian dynamics to cause purification is \eqref{GaussianNandS}. Since the zeroth order dynamics is unitary the first opportunity for purification is at first order. This can happen if and only if
\bel{FirstOrderPurifyNandS}
0>\text{Tr}\big(\Omega_\text{S} A_1\big)
= \frac{1}{2} \text{Tr}\big(\Omega_\text{S} \, G \, \Omega_\text{A} \, G^\intercal\big).
\end{equation}
In \cite{Grimmer2016a} a necessary and sufficient condition for dynamics to cause purification at leading order was given for a general (non-Gaussian) ancillary bombardment scenario, provided that the system is finite dimensional. As such, those results cannot be directly applied to Gaussian systems. It was concluded there that in order to cause purification at the leading possible order an interaction must be ``sufficiently complicated''. In particular it was found that a tensor product interaction Hamiltonian of the form
\begin{equation}
H_\text{I}=\hat{Q}_\text{S}\otimes \hat{R}_\text{A}
\end{equation}
will not purify at leading order. We will now prove that this result in fact does extend to the Gaussian context despite the infinite dimensional nature of the systems and ancillae.
Both $\hat{Q}_\text{S}$ and $\hat{R}_\text{A}$ must be linear in their respective quadrature operators, and so
\begin{equation}
\hat{Q}_\text{S}
=\bm{u}^\intercal\hat{\bm{X}}_\text{S}
=\hat{\bm{X}}_\text{S}^\intercal\bm{u}
\quad\text{and}\quad
\hat{R}_\text{A}
=\bm{v}^\intercal\hat{\bm{X}}_\text{A}
=\hat{\bm{X}}_\text{A}^\intercal\bm{v}
\end{equation}
for some real vectors $\bm{u}$ and $\bm{v}$ in order that $H_\text{I}$ be quadratic in these operators. Thus we can write
\begin{equation}
\hat{H}_\text{I}
=\frac{1}{2}\hat{\bm{X}}_\text{S}^\intercal \, G \, \hat{\bm{X}}_\text{A}
+\frac{1}{2}\hat{\bm{X}}_\text{A}^\intercal \, G^\intercal \, \hat{\bm{X}}_\text{S},
\end{equation}
with
\begin{equation}
G=\bm{u}\bm{v}^\intercal.
\end{equation}
Thus in Gaussian quantum mechanics, tensor product interactions correspond to rank one interaction matrices.
From \eqref{FirstOrderPurifyNandS} we can quickly see that a rank one interaction cannot purify at leading order since
\begin{align}
\text{Tr}\big(\Omega_\text{S} G \Omega_\text{A} G^\intercal\big)
&=\text{Tr}\big(\Omega_\text{S} \bm{u}\bm{v}^\intercal \Omega_\text{A} \bm{v}\bm{u}^\intercal\big)\\
\nonumber
&= \bm{u}^\intercal\Omega_\text{S} \bm{u} \ \bm{v}^\intercal \Omega_\text{A} \bm{v}\\
\nonumber
&=0
\end{align}
where the last equality follows from the antisymmetry of $\Omega_\text{S}$ and $\Omega_\text{A}$: the quadratic forms $\bm{u}^\intercal\Omega_\text{S}\bm{u}$ and $\bm{v}^\intercal\Omega_\text{A}\bm{v}$ both vanish.
Thus we have extended the result of \cite{Grimmer2016a} that ``simple'' interaction Hamiltonians cannot cause purification at leading order in rapid bombardment from finite dimensional systems to include Gaussian systems.
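This computation is easy to confirm numerically (a minimal sketch; the random vectors $\bm{u}$ and $\bm{v}$ and the single-mode phase spaces are illustrative):

```python
import numpy as np

Om = np.array([[0.0, 1.0], [-1.0, 0.0]])   # Om_S = Om_A for one mode each
rng = np.random.default_rng(5)
u, v = rng.standard_normal(2), rng.standard_normal(2)
G = np.outer(u, v)                          # rank-one (tensor-product) coupling

# quadratic forms of antisymmetric matrices vanish ...
assert abs(u @ Om @ u) < 1e-12
# ... so Tr(Om_S G Om_A G^T) = (u^T Om u)(v^T Om v) = 0:
# no purification at first order for a rank-one interaction
assert abs(np.trace(Om @ G @ Om @ G.T)) < 1e-12
```

Repeating the same check with a generic (full-rank) $G$ gives a nonzero trace, so the obstruction is specific to rank-one couplings.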
Moreover, for rank one interactions, purification will not arise at second order since all effects coming from $A_2$ are unitary. Thus the first purification effects can only arise at third order, generically two orders below the leading order noise terms that any purification effects would compete with.
Finally we show that up to first order the dynamics is completely positive. Assuming that the ancillae start in a valid state, we have
\begin{equation}
\sigma_A \geq\mathrm{i} \, \Omega_A.
\end{equation}
Multiplying this by $\Omega_S G$ and $G^\intercal \Omega_S^\intercal$ on either side maintains the inequality, yielding
\begin{equation}
\Omega_S G \sigma_A G^\intercal \Omega_S^\intercal
\geq \mathrm{i} \, \Omega_S G \, \Omega_A G^\intercal\Omega_S^\intercal,
\end{equation}
but here we can recognize $C_1$ and $A_1$ from \eqref{A1DefHam} and \eqref{C1DefHam}:
\bel{CPCheck1}
C_1\geq 2\mathrm{i} \, \Omega_S A_1\Omega_S^\intercal
=\mathrm{i} \, \Omega_S (A_1-A_1^\intercal)\Omega_S^\intercal,
\end{equation}
where we have used the antisymmetry of $A_1$. This is exactly the complete positivity condition, \eqref{DiffCPCond}, at first order. Adding this inequality to \eqref{CPCheck0} we confirm the dynamics is completely positive up to first order.
\subsection{Second Order Dynamics}
At second order the effective Hamiltonian is
\begin{equation}
\hat{H}_\text{eff}
=\hat{H}_\text{eff}^{(0)}
+\delta t \, \hat{H}_\text{eff}^{(1)}
+\delta t^2 \, \hat{H}_\text{eff}^{(2)}
+\mathcal{O}(\delta t^3)
\end{equation}
with
\begin{align}
\hat{H}_\text{eff}^{(2)}
&=\frac{1}{4}\hat{\bm{X}}_S^\intercal (A_2+A_2^\intercal) \hat{\bm{X}}_S
+\hat{\bm{X}}_S^\intercal \, \bm{b}_2
\end{align}
and there is a further correction coming from both $A_2$ and $\bm{b}_2$. This is the first order at which we have a correction to the effective Hamiltonian that is quadratic in the quadrature operators, allowing for single- and multi-mode rotations and squeezings.
At second order (and in fact at all even orders) the $A_2$ term does not contribute to the non-unitary dynamics. The only new non-unitary dynamics at this order comes from the new noise term $C_2$. As we can see from \eqref{C2DefHam}, this term can be interpreted as a correction to the $C_1$ noise term accounting for the ancilla's covariance matrix undergoing free evolution during the interaction.
Up to second order the dynamics is completely positive. Proving this amounts to showing that \eqref{DiffCPCond} is obeyed at second order
\begin{align}\label{CPCheck2}
C_0+\delta t \, C_1+\delta t^2 \, C_2 +\mathcal{O}(\delta t^3) &\geq
\mathrm{i} \, \Omega_S (A_0-A^\intercal_0)\Omega_S^\intercal\\
&\nonumber
+\delta t \, \mathrm{i} \, \Omega_S (A_1-A^\intercal_1)\Omega_S^\intercal\\
&\nonumber
+\delta t^2 \, \mathrm{i} \, \Omega_S (A_2-A^\intercal_2)\Omega_S^\intercal.
\end{align}
Removing several vanishing terms ($C_0=0$, $A_0-A^\intercal_0=0$, and $A_2-A^\intercal_2=0$) as well as a factor of $\delta t$, we have
\begin{align}
C_1+\delta t \, C_2 +\mathcal{O}(\delta t^2) \geq
\mathrm{i} \, \Omega_S (A_1-A^\intercal_1)\Omega_S^\intercal.
\end{align}
In order to prove this we consider the state of the ancilla after it evolves under its free Hamiltonian for a time $\delta t/2$. Since free evolution is a completely positive map, applying it to a valid initial state yields a state that satisfies \eqref{SigmaPosCond}. Computing the covariance matrix of this state to leading order yields,
\begin{align}
\sigma_A(0)+\frac{\delta t}{2}\big(\Omega_A F_A \sigma_A(0)+\sigma_A(0) (\Omega_A F_A)^\intercal\big)
+\mathcal{O}(\delta t^2)
\geq\mathrm{i}\Omega_A.
\end{align}
Multiplying on the left by $\Omega_S G$ and on the right by $G^\intercal \Omega_S^\intercal$ and using equations \eqref{A1DefHam}, \eqref{C1DefHam}, and \eqref{C2DefHam} yields
\begin{align}
C_1+\delta t C_2 +\mathcal{O}(\delta t^2)\geq 2\mathrm{i} \, \Omega_S A_1\Omega_S^\intercal
=\mathrm{i} \, \Omega_S (A_1-A^\intercal_1)\Omega_S^\intercal,
\end{align}
where in the last step we again employed the antisymmetry of $A_1$. This is the desired result.
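As a sanity check, the leading-order version of this inequality can also be probed numerically. The Python sketch below is illustrative and not part of the derivation: it uses the single-mode generators $A_1=\frac{1}{2}G\,\Omega_A\,G^\intercal$ and $C_1=\nu_A\,\Omega_S\,G\,G^\intercal\,\Omega_S^\intercal$ that arise in the thermalization example of the next section, and verifies that $C_1-\mathrm{i}\,\Omega_S(A_1-A_1^\intercal)\Omega_S^\intercal$ is positive semidefinite for random couplings whenever the ancilla state is valid, $\nu_A\geq1$.

```python
import numpy as np

# Single-mode symplectic form (same for system and ancilla).
Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])

rng = np.random.default_rng(0)
for _ in range(100):
    G = rng.normal(size=(2, 2))        # arbitrary real coupling matrix
    nu_A = 1.0 + rng.random()          # valid thermal ancilla: nu_A >= 1
    A1 = 0.5 * G @ Omega @ G.T         # antisymmetric by construction
    C1 = nu_A * Omega @ G @ G.T @ Omega.T
    # Leading-order CP condition: C1 >= i Omega_S (A1 - A1^T) Omega_S^T.
    lhs = C1 - 1j * Omega @ (A1 - A1.T) @ Omega.T   # Hermitian matrix
    min_eig = np.linalg.eigvalsh(lhs).min()
    assert min_eig > -1e-10
```

The check succeeds because the left-hand side equals $\Omega_S G(\nu_A\openone-\mathrm{i}\Omega_A)G^\intercal\Omega_S^\intercal$, and $\nu_A\openone-\mathrm{i}\Omega_A$ has eigenvalues $\nu_A\pm1\geq0$.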
\subsection{Higher Order Dynamics}
At third and higher orders the dynamics of the interpolation scheme is not always completely positive. This could indicate either the presence of non-Markovianity (specifically RHP non-Markovianity \cite{RHPnonMarkov}) in the interpolated dynamics or the breakdown of one of the assumptions underlying the construction of the interpolation scheme, for instance the time-independence of the interpolation generators.
Note that while the differential dynamics given by \eqref{GQMInterpMasterEqsX} and \eqref{GQMInterpMasterEqsV} may not be completely positive, the discrete dynamics described by \eqref{UpdateSchemeXX} and \eqref{UpdateSchemeVV} is guaranteed to be completely positive at every time step (i.e. when $t=n\, \delta t$) since the interpolated dynamics matches the discrete dynamics at those precise times. In the language of \cite{Layden:2015b,Grimmer2016a} this error is termed stroboscopic and can be bounded by a combination of the timescale $\delta t$ and the energy scale of the dynamics, $E$.
\begin{comment}
This non positivity is confirmed by calculating the third order noise
\begin{equation}
C^{(3)}=C_0+\delta t \, C_1+\delta t^2 \, C_2 +\delta t^3 C_3
\end{equation}
in the case of a single harmonic oscillator being bombarded by ground states oscillators via an \mbox{$\hat{q}_\text{S}\otimes \hat{q}_\text{A}$} coupling. The parameters of such an interaction are
\begin{align}
F_S
&\nonumber
=\omega_\text{S}
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix},
\quad
\bm{\alpha}_S=0,
\quad
F_A
=\omega_\text{A}
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix},
\quad
\bm{\alpha}_A=0\\
\sigma_A(0)
&\nonumber
=\nu_\text{A}
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix},
\quad
\bm{X}_A(0)=0,
\quad
G=g
\begin{pmatrix}
1 & 0\\
0 & 0
\end{pmatrix}.
\end{align}
One finds
\begin{equation}
C^{(3)}
=\begin{pmatrix}
\frac{-1}{12}\delta t^3 \, g^2 \, \omega_\text{S}^2 \, \nu_\text{A} & 0\\
0 & \delta t \, g^2 \, \nu_\text{A}-\frac{1}{12}\delta t^3 \, g^2 \, \omega_\text{S}^2 \, \nu_\text{A}
\end{pmatrix}
\end{equation}
to be non-positive semidefinite. This implies that the third order dynamics is not completely positive.
\end{comment}
\begin{comment}
At third order, there is a correction to the effective Hamiltonian of the system coming from $\bm{b}_3$. However, as noted above the $A_3$ is antisymmetric,
\begin{equation}
A_{1,\textsc{s}}=(A_1+A_1^\intercal)/2=0
\end{equation}
Therefore it does not contribute to the symplectic part of the dynamics or to the effective Hamiltonian. Thus at third order the effective Hamiltonian is
\begin{equation}
\hat{H}_\text{eff}
=\hat{H}_\text{eff}^{(0)}
+\delta t \, \hat{H}_\text{eff}^{(1)}
+\delta t^2 \, \hat{H}_\text{eff}^{(2)}
+\delta t^3 \, \hat{H}_\text{eff}^{(3)}
+\mathcal{O}(\delta t^4)
\end{equation}
with
\begin{align}
\hat{H}_\text{eff}^{(3)}
&=\frac{1}{4}\hat{\bm{X}}_S^\intercal (A_3+A_3^\intercal) \hat{\bm{X}}_S
+\hat{\bm{X}}_S^\intercal \, \bm{b}_3\\
&=\hat{\bm{X}}_S^\intercal \, \bm{b}_3.
\end{align}
In addition to a correction to the effective Hamiltonian, at third order we see new unsymplectic dynamics coming from $A$ and $C$. The forms of these are now too difficult to analyze in detail. However as we will see in the coming examples, at third order the dynamics may not be completely positive.
This ultimately leads us to conclude that we should have included time dependence in our interpolation schemes at least in the Gaussian setting.
\end{comment}
\section{Thermalization of a Harmonic Oscillator}\label{Example}
As a first relevant physical scenario that Gaussian ancillary bombardment can shed some light on, we consider the time evolution of a harmonic oscillator subject to short interactions with the components of a thermal reservoir. This is a picture usually associated with thermalization processes, and as such we would a priori expect this evolution to have fixed points compatible with the second law of thermodynamics.
More concretely, one might expect in such a scenario that the harmonic oscillator will thermalize to the temperature of the reservoir, in a way largely independent of the coupling between them. Perhaps surprisingly, we will show that the system does not always thermalize. Moreover, when it does thermalize its final temperature depends critically on the nature of the coupling to the bath (as well as the bath's temperature as expected).
Let us consider a single harmonic oscillator (the system, S) repeatedly interacting with a series of other harmonic oscillators (the ancillae, A) in thermal states with a fixed temperature.
At this point it is convenient to introduce the following basis for $2\times 2$ matrices:
\bel{2by2basis}
\openone_2
=\begin{pmatrix}
1 & 0 \\
0 & 1 \\
\end{pmatrix},
\,
\omega=\begin{pmatrix}
0 & 1 \\
-1 & 0 \\
\end{pmatrix},
\,
X=\begin{pmatrix}
0 & 1 \\
1 & 0 \\
\end{pmatrix},
\,
Z=\begin{pmatrix}
1 & 0 \\
0 & -1 \\
\end{pmatrix}.
\end{equation}
The system's free Hamiltonian is assumed to be
\begin{equation}
\hat{H}_\text{S}
=\frac{E_\text{S}}{2} (\hat{q}_\text{S}{}^2+\hat{p}_\text{S}{}^2)
=\frac{E_\text{S}}{2}\begin{pmatrix}
\hat{q}_\text{S} & \hat{p}_\text{S}
\end{pmatrix}
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix}
\begin{pmatrix}
\hat{q}_\text{S}\\
\hat{p}_\text{S}
\end{pmatrix},
\end{equation}
where $E_\text{S}$ is the energy gap of the oscillator. This Hamiltonian is represented in phase space as
\begin{equation}
F_\text{S}=E_\text{S}
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix}
=E_\text{S} \, \openone_2,
\quad\text{and}\quad
\bm{\alpha}_\text{S}=0.
\end{equation}
Similarly, the ancillae's free Hamiltonian is assumed to be
\begin{equation}
\hat{H}_\text{A}
=\frac{E_\text{A}}{2} (\hat{q}_\text{A}{}^2+\hat{p}_\text{A}{}^2)
=\frac{E_\text{A}}{2}\begin{pmatrix}
\hat{q}_\text{A} & \hat{p}_\text{A}
\end{pmatrix}
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix}
\begin{pmatrix}
\hat{q}_\text{A}\\
\hat{p}_\text{A}
\end{pmatrix},
\end{equation}
where $E_\text{A}$ is the energy gap of the ancilla.
This Hamiltonian is represented in phase space as
\begin{equation}
F_\text{A}=E_\text{A}
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix}
=E_\text{A} \, \openone_2,
\quad\text{and}\quad
\bm{\alpha}_\text{A}=0.
\end{equation}
The interaction Hamiltonian between the system and the ancillae is assumed to be a generic quadratic coupling,
\begin{equation}
\hat{H}_\text{int}
=\frac{1}{2}\hat{\bm{X}}_S^\intercal \, G \, \hat{\bm{X}}_A
+\frac{1}{2}\hat{\bm{X}}_A^\intercal \, G^ \intercal\, \hat{\bm{X}}_S
\end{equation}
for an arbitrary real $2\times 2$ matrix $G$. Further, each ancilla is taken to initially be in the thermal state (see \cite{GQMRev}),
\begin{equation}
\sigma_A(0)=\nu_A
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix}
=\nu_A \, \openone_2,
\quad\quad
\bm{X}_A(0)=0.
\end{equation}
The parameter $\nu$ is a temperature monotone related to the inverse temperature $\beta$ and the energy gap $E$ by
\bel{NuBetaRelation}
\nu=\frac{\exp(\beta E)+1}{\exp(\beta E)-1}.
\end{equation}
This represents a valid state as long as $\nu_A\geq1$.
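As a quick illustration (not part of the derivation), the relation \eqref{NuBetaRelation} and the validity condition can be checked directly; the Python sketch below confirms that $\nu\geq1$ for any positive temperature, that $\nu$ decreases monotonically in $\beta E$, and that $\nu\to1$ in the zero-temperature limit.

```python
import numpy as np

def nu(beta, E):
    """Temperature monotone nu = (exp(beta E) + 1) / (exp(beta E) - 1)."""
    x = beta * E
    return (np.exp(x) + 1.0) / (np.exp(x) - 1.0)

# nu >= 1 for any positive temperature and energy gap ...
betas = np.linspace(0.1, 10.0, 50)
vals = nu(betas, E=1.0)
assert np.all(vals >= 1.0)
# ... and nu decreases monotonically in beta*E (hotter states have larger nu).
assert np.all(np.diff(vals) < 0)
# Zero-temperature (beta -> infinity) limit: nu -> 1 (pure ground state).
assert abs(nu(50.0, 1.0) - 1.0) < 1e-12
```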
As discussed above (and in \cite{Layden:2015b}), at zeroth order the system's dynamics is unitary. In fact, in the Gaussian regime, the dynamics is just the system's free dynamics plus a potential displacement coming from the bombardment. In this case, because the ancilla state has $\bm{X}_A(0)=0$, no new displacement dynamics is induced at zeroth order. Therefore the system evolves freely at zeroth order. All new dynamical effects besides free evolution are higher order, thus associated with a finite interaction duration.
Explicitly computing the zeroth order interpolation generators one finds
\begin{align}
A_0&=E_{S} \ \openone_2,\\
\bm{b}_0&=0,\\
C_0&=0,
\end{align}
which simply describe the free rotation of the system.
We do however see novel dynamical effects at first order. We find
\begin{align}
A_1&=\frac{1}{2}G \, \Omega_A G^\intercal=\frac{1}{2}\text{det}(G) \, \omega,\\
\bm{b}_1&=0,\\
C_1&=\nu_A \, \Omega_S \, G \, G^\intercal \, \Omega_S^\intercal
\end{align}
for the first order interpolation generators.
These produce non-unitary dynamics in the system. In particular, using the partition developed in \cite{ArXivGrimmer2017b}, we can see that $A_1$ produces amplification or relaxation depending on the sign of $\text{det}(G)$ at a rate $\sim\delta t \, \text{det}(G)$. Specifically if $\text{det}(G)>0$ the effect of this term (alone) is to exponentially shrink the state's mean vector and covariance matrix towards zero. Alternatively if $\text{det}(G)<0$ this term alone would push the state's mean vector and covariance matrix to grow exponentially. If $\text{det}(G)=0$ this term has no effect.
This amplification/relaxation competes with the noise introduced at first order by $C_1$. Generically this will include both thermal noise and squeezed noise. If \mbox{$\text{det}(G)\leq 0$} then both the $A_1$ and $C_1$ terms serve to increase the uncertainty of the state. In this case no fixed point is reached, hence the system does not thermalize. However, if $\text{det}(G)>0$ then the two effects come to an equilibrium that is approximately thermal, as we will show below.
Explicitly the first order master equation for the covariance matrix is
\begin{align}
\frac{\d}{\d t}\sigma_\text{S}(t)
&=\Omega_\text{S}(A_0+\delta t A_1)\sigma_\text{S}(t)\\
&+\sigma_\text{S}(t)(\Omega_\text{S}(A_0+\delta t A_1))^\intercal
+C_0+\delta t \, C_1.
\end{align}
We can expand the system's covariance matrix over the basis \eqref{2by2basis} as
\begin{align}
\sigma_S(t)
=\nu_S(t)\openone_2
+s_\times(t) X
+s_+(t)Z,
\end{align}
where $\nu_S(t)$ captures the system's temperature and $s_\times(t)$ and $s_+(t)$ capture how the state is squeezed.
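Since $\openone_2$, $X$ and $Z$ are trace-orthogonal with $\text{Tr}(M^2)=2$, the coefficients of this expansion can be read off as $\nu_S=\frac{1}{2}\text{Tr}(\sigma_S)$, $s_\times=\frac{1}{2}\text{Tr}(X\sigma_S)$, and $s_+=\frac{1}{2}\text{Tr}(Z\sigma_S)$. A minimal sketch with illustrative values:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def decompose(sigma):
    """Coefficients of a symmetric 2x2 covariance matrix over {I, X, Z}.
    I, X, Z are trace-orthogonal with Tr(M^2) = 2, so each coefficient
    is Tr(M sigma) / 2; a symmetric sigma has no omega component."""
    return (np.trace(sigma) / 2.0,
            np.trace(X @ sigma) / 2.0,
            np.trace(Z @ sigma) / 2.0)

sigma = 1.7 * I2 + 0.3 * X - 0.2 * Z     # an example squeezed thermal state
nu_S, s_x, s_p = decompose(sigma)
assert np.allclose([nu_S, s_x, s_p], [1.7, 0.3, -0.2])
```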
In terms of these coefficients the first order master equation for the covariance matrix is
\begin{align}
\frac{\d}{\d t}\nu_S(t)
&=-\delta t \, \text{det}(G) \, \nu_\text{S}(t)
+ \frac{\delta t}{2} \text{Tr}(G^\intercal G) \, \nu_A\\
\frac{\d}{\d t}s_\times(t)
&=-2 \, E_{\text{S}} \, s_+(t)
-\delta t \, \text{det}(G) \, s_\times(t)\\
&\nonumber
-\frac{\delta t}{2} \text{Tr}(G^\intercal X G) \, \nu_A \\
\frac{\d}{\d t}s_+(t)
&=2 \, E_{\text{S}} \, s_\times(t)
-\delta t \, \text{det}(G) \, s_+(t)\\
&\nonumber
-\frac{\delta t}{2} \text{Tr}(G^\intercal Z G) \nu_A.
\end{align}
These equations have an attractive fixed point if and only if $\text{det}(G)>0$. In this case the final state of the system is
\begin{equation}
\sigma_S(\infty)
=\nu_S(\infty) \, \openone_2
+\mathcal{O}(\delta t)
\end{equation}
with
\begin{equation}
\nu_S(\infty)=\tilde{\nu}_A\coloneqq\frac{\text{Tr}(G^\intercal G)}{2\, \text{det}(G)}\nu_A
\end{equation}
where $\tilde{\nu}_A$ represents the effective temperature of the ancilla. The system approaches this state at a rate $\delta t \, \text{det}(G)$. Note that the final temperature of the system depends non-trivially on the coupling between the system and its environment.
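This convergence is easy to illustrate numerically. The sketch below (with illustrative parameters, not taken from the text) Euler-integrates the three coefficient equations for a coupling with $\text{det}(G)>0$ and confirms that $\nu_S$ approaches $\tilde{\nu}_A$ while the squeezing coefficients remain $\mathcal{O}(\delta t)$.

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

# Illustrative parameters: a coupling with det(G) > 0.
G = np.array([[1.0, 0.2], [-0.3, 0.8]])
dt_int = 0.05          # interaction duration delta t
E_S = 1.0              # system energy gap
nu_A = 2.0             # ancilla temperature monotone
detG = np.linalg.det(G)
assert detG > 0        # required for an attractive fixed point

nu, s_x, s_p = 1.0, 0.0, 0.0           # start in the ground state
h = 0.01                               # Euler step
for _ in range(20000):
    dnu = -dt_int * detG * nu + 0.5 * dt_int * np.trace(G.T @ G) * nu_A
    dsx = -2 * E_S * s_p - dt_int * detG * s_x \
          - 0.5 * dt_int * np.trace(G.T @ X @ G) * nu_A
    dsp = 2 * E_S * s_x - dt_int * detG * s_p \
          - 0.5 * dt_int * np.trace(G.T @ Z @ G) * nu_A
    nu, s_x, s_p = nu + h * dnu, s_x + h * dsx, s_p + h * dsp

nu_tilde = np.trace(G.T @ G) / (2 * detG) * nu_A
assert abs(nu - nu_tilde) < 1e-3             # converges to effective temperature
assert abs(s_x) < 0.05 and abs(s_p) < 0.05   # residual squeezing is O(dt)
```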
At this point, one may wonder if it is possible for the system to become colder than its environment through such a rapid bombardment process. Noting that every real $2\times 2$ matrix satisfies
\bel{Frob2Det}
\text{Tr}(G^\intercal G)\geq 2 \, \text{det}(G),
\end{equation}
we see that the system cannot be cooled to have $\nu_\text{S}(\infty)$ lower than $\nu_\text{A}$,
\bel{NuInequality}
\nu_S(\infty)=\tilde{\nu}_A\geq\nu_A.
\end{equation}
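The inequality \eqref{Frob2Det} and its saturation condition are easy to verify directly: writing $G=\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)$, one has $\text{Tr}(G^\intercal G)-2\,\text{det}(G)=(a-d)^2+(b+c)^2\geq0$, with equality iff $a=d$ and $b=-c$. An illustrative numerical check:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    a, b, c, d = rng.normal(size=4)
    G = np.array([[a, b], [c, d]])
    gap = np.trace(G.T @ G) - 2 * np.linalg.det(G)
    # gap = (a-d)^2 + (b+c)^2, manifestly non-negative
    assert abs(gap - ((a - d)**2 + (b + c)**2)) < 1e-10
    assert gap >= -1e-12

# Equality (maximal cooling) holds for G = g1*I + gw*omega, i.e. a=d, b=-c:
G_eq = np.array([[0.7, 0.4], [-0.4, 0.7]])
assert abs(np.trace(G_eq.T @ G_eq) - 2 * np.linalg.det(G_eq)) < 1e-12
```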
However, this does not mean that the system cannot become cooler than its environment. Recall from equation \eqref{NuBetaRelation} that $\nu$ is a monotone function of temperature (in fact, it is a monotonically decreasing function of $\beta E$). Thus \eqref{NuInequality} implies
\bel{BetaInequality}
\beta_\text{S}(\infty)\,E_\text{S}\leq \beta_\text{A}\,E_\text{A},
\end{equation}
or equivalently
\bel{BetaInequality2}
T_\text{S}(\infty)\geq \frac{E_\text{S}}{E_\text{A}}
T_\text{A}.
\end{equation}
If the ancilla has a larger energy gap than the system, the system can be cooled to a temperature below that of the ancillae.
This appears to be connected to the property of Gaussian passivity, introduced in \cite{Brown}. A quantum state is called Gaussian passive iff there exists no Gaussian unitary that can lower the state's energy.
In fact, if we assume that \(E_\text{S} < E_\text{A}\), then (\ref{BetaInequality}) is the necessary and sufficient condition for Gaussian passivity. Thus, under the condition \(E_\text{S} < E_\text{A}\), the result of bombardment is to evolve the system such that the joint system-ancilla state is Gaussian passive. However, in the case that the system energy gap is larger than that of the ancilla, this result implies that the joint system becomes explicitly Gaussian non-passive! The energetics of the bombardment steady state therefore depend strongly on the ordering of system and ancilla frequencies. This connection warrants further investigation.
The above inequalities, \eqref{Frob2Det}, \eqref{NuInequality} and \eqref{BetaInequality}, are saturated (i.e., we have maximal cooling) only for the following two-parameter family of interaction matrices
\bel{PerfectThermalizationGForm}
G=g_{1} \, \openone
+g_{w} \, \omega
=\begin{pmatrix}
g_{1} & g_{w}\\
-g_{w} & g_{1}
\end{pmatrix}
\end{equation}
whose associated Hamiltonians are
\begin{equation}
\hat{H}_\text{I}
=g_1(\hat{q}_S\hat{q}_A+\hat{p}_S\hat{p}_A)+g_w(\hat{q}_S\hat{p}_A-\hat{p}_S\hat{q}_A).
\end{equation}
Written in terms of the system and ancilla creation and annihilation operators, the maximally cooling interaction Hamiltonians are
\bel{MaxCoolaForm}
\hat{H}_\text{I}
=\begin{pmatrix}
\hat{a}_\text{S} & \hat{a}_\text{S}^\dagger
\end{pmatrix}
\begin{pmatrix}
0 & g\\
g^* & 0
\end{pmatrix}
\begin{pmatrix}
\hat{a}_\text{A} \\ \hat{a}_\text{A}^\dagger
\end{pmatrix},
\end{equation}
where $g=g_1+\mathrm{i} g_w$. Notice that these are exactly the interaction Hamiltonians that result from dropping all the $\hat{a}_\text{S} \, \hat{a}_\text{A}$ and $\hat{a}_\text{S}^\dagger \, \hat{a}_\text{A}^\dagger$ terms, as one does in the rotating wave approximation. Thus taking the rotating wave approximation can have significant phenomenological effects in rapid repeated interaction scenarios. For instance, $\hat{H}_\text{I}=\lambda \, \hat{q}_S \, \hat{q}_A$ does not lead to thermalization (since it has $\text{det}(G)=0$), but under the rotating wave approximation it causes maximal cooling.
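The determinant criterion makes this concrete. The sketch below (assuming the quadrature convention $\hat{q}=(\hat{a}+\hat{a}^\dagger)/\sqrt{2}$; the numerical value of $\lambda$ is illustrative) compares the bare $\hat{q}_S\hat{q}_A$ coupling, whose matrix $G$ has vanishing determinant, with its rotating-wave counterpart, which has $\text{det}(G)>0$ and saturates \eqref{Frob2Det}.

```python
import numpy as np

# A bare q_S q_A coupling has G = lambda * diag(1, 0), so det(G) = 0,
# and by the analysis above it produces no thermal fixed point.
lam = 0.5
G_qq = lam * np.array([[1.0, 0.0], [0.0, 0.0]])
assert np.linalg.det(G_qq) == 0.0

# Its rotating-wave counterpart keeps only the excitation-exchange part,
# (lam/2)(a_S a_A^dag + a_S^dag a_A), giving G = (lam/2) * identity.
# This has det(G) > 0 and saturates Tr(G^T G) = 2 det(G): maximal cooling.
G_rwa = (lam / 2.0) * np.eye(2)
assert np.linalg.det(G_rwa) > 0
assert abs(np.trace(G_rwa.T @ G_rwa) - 2 * np.linalg.det(G_rwa)) < 1e-12
```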
\\~\\
In order to see why the interaction Hamiltonians given by \eqref{MaxCoolaForm} cause the system to equilibrate with its environment, it is useful to look at their effect on definite number states. For instance
\begin{align}
\hat{H}_\text{I}\ket{n_S, \, n_A}
\nonumber
&=\big(
g \, \hat{a}_S\hat{a}_A^\dagger
+g^* \, \hat{a}_S^\dagger\hat{a}_A
\big)\ket{n_S, \, n_A}\\
&=g \, \sqrt{n_S}\sqrt{n_A+1}\ket{n_S-1, \, n_A+1}\\
&+g^* \, \sqrt{n_S+1}\sqrt{n_A}\ket{n_S+1, \, n_A-1}
\end{align}
such that the effect of this Hamiltonian is a superposition of transferring an excitation from S to A and from A to S. In general, these possibilities do not have the same amplitude. If $n_S>n_A$ then
\begin{equation}
\vert g \, \sqrt{n_S}\sqrt{n_A+1}\vert >
\vert g^* \, \sqrt{n_S+1}\sqrt{n_A}\vert
\end{equation}
such that the amplitude of an excitation being transferred from S to A is larger. Likewise if $n_A>n_S$ then the amplitude of an excitation to be transferred from A to S is larger. Thus this coupling will tend to transfer excitations from the more excited system to the less excited one. As we saw above this ultimately leads to an equilibrium of excitation profiles, $\nu_S=\nu_A$. Note that this is not a thermal equilibrium.
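This excitation-transfer asymmetry can be verified directly on the matrix elements (an illustrative check; the value of $g$ is arbitrary):

```python
import math

g = 0.3 + 0.4j   # arbitrary coupling amplitude
for n_S in range(6):
    for n_A in range(6):
        to_A = abs(g) * math.sqrt(n_S) * math.sqrt(n_A + 1)   # S -> A amplitude
        to_S = abs(g) * math.sqrt(n_S + 1) * math.sqrt(n_A)   # A -> S amplitude
        if n_S > n_A:
            assert to_A > to_S    # net flow out of the more excited oscillator
        elif n_A > n_S:
            assert to_S > to_A
        else:
            assert abs(to_A - to_S) < 1e-12
```

The comparison reduces to $n_S(n_A+1)$ versus $(n_S+1)n_A$, whose difference is $n_S-n_A$, so excitations always tend to flow from the more excited oscillator to the less excited one.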
\\~\\
On the other hand, the part of the Hamiltonian
\begin{equation}
\hat{H}_\text{I}
=h \, \hat{a}_S^\dagger\hat{a}_A^\dagger
+h^* \, \hat{a}_S\hat{a}_A
\end{equation}
that is eliminated by the rotating wave approximation does not lead to equilibration. Its effect on a definite number state is
\begin{align}
\hat{H}_\text{I}\ket{n_S, \, n_A}
&=\big(
h \, \hat{a}_S^\dagger\hat{a}_A^\dagger
+h^* \, \hat{a}_S\hat{a}_A
\big)\ket{n_S, \, n_A}\\
\nonumber
&=h \, \sqrt{n_S+1}\sqrt{n_A+1}\ket{n_S+1, \, n_A+1}\\
\nonumber
&+h^* \, \sqrt{n_S}\sqrt{n_A}\ket{n_S-1, \, n_A-1}.
\end{align}
That is, it produces a superposition of both oscillators becoming more excited and both becoming less excited. Notice however that for every $n_S$ and $n_A$,
\begin{equation}
\vert h \, \sqrt{n_S+1}\sqrt{n_A+1}\vert >
\vert h^* \, \sqrt{n_S}\sqrt{n_A}\vert
\end{equation}
such that joint excitation has a larger amplitude than joint de-excitation. This causes the system to become increasingly excited.
\\~\\
Given a general quadratic interaction Hamiltonian
\begin{align}
\hat{H}_\text{I}
=g \, \hat{a}_S\hat{a}_A^\dagger
+g^* \, \hat{a}_S^\dagger\hat{a}_A
+h \, \hat{a}_S^\dagger\hat{a}_A^\dagger
+h^* \, \hat{a}_S\hat{a}_A,
\end{align}
if $\vert h\vert>\vert g\vert$ the system does not equilibrate. However if $\vert g\vert>\vert h\vert$ then the system equilibrates to have
\begin{equation}
\nu_S(\infty)
=\frac{\text{Tr}(G^\intercal G)}{2\, \text{det}(G)}\nu_A
=\frac{\vert g \vert^2+\vert h \vert^2}{\vert g \vert^2-\vert h \vert^2}\nu_A.
\end{equation}
The final state of the system is determined by a competition between these equilibrating and exciting effects.
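As an explicit cross-check, one can build the quadrature coupling matrix $G$ from $g$ and $h$ (assuming the convention $\hat{q}=(\hat{a}+\hat{a}^\dagger)/\sqrt{2}$, $\hat{p}=(\hat{a}-\hat{a}^\dagger)/(\mathrm{i}\sqrt{2})$, under which $G=\left(\begin{smallmatrix}\mathrm{Re}(g+h)&\mathrm{Im}(g+h)\\ \mathrm{Im}(h-g)&\mathrm{Re}(g-h)\end{smallmatrix}\right)$) and verify that $\text{det}(G)=\vert g\vert^2-\vert h\vert^2$ and $\text{Tr}(G^\intercal G)=2(\vert g\vert^2+\vert h\vert^2)$, which together reproduce the formula above:

```python
import numpy as np

def G_from_gh(g, h):
    """Quadrature coupling matrix for
    H_I = g a_S a_A^dag + g* a_S^dag a_A + h a_S^dag a_A^dag + h* a_S a_A,
    assuming q = (a + a^dag)/sqrt(2), p = (a - a^dag)/(i sqrt(2))."""
    return np.array([[g.real + h.real, g.imag + h.imag],
                     [h.imag - g.imag, g.real - h.real]])

rng = np.random.default_rng(2)
for _ in range(100):
    g = complex(rng.normal(), rng.normal())
    h = complex(rng.normal(), rng.normal())
    G = G_from_gh(g, h)
    # det(G) = |g|^2 - |h|^2, so equilibration requires |g| > |h| ...
    assert abs(np.linalg.det(G) - (abs(g)**2 - abs(h)**2)) < 1e-10
    # ... and Tr(G^T G) = 2(|g|^2 + |h|^2), fixing the final temperature.
    assert abs(np.trace(G.T @ G) - 2 * (abs(g)**2 + abs(h)**2)) < 1e-10
```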
\begin{comment}
Ignoring the trivial case where $G=0$, since $\det{G}=0$ we know $G$ must be rank 1, that is,
\begin{equation}
G=(g_{x,S},g_{p,S})\otimes(g_{x,A},g_{p,A})^\intercal.
\end{equation}
By choosing our coordinates on the system and ancilla phase space appropriately we can bring $G$ to be an $XX$ coupling,
\begin{equation}
G=g
\begin{pmatrix}
1 & 0\\
0 & 0
\end{pmatrix}.
\end{equation}
Note that we could equivalently choose to rewrite $G$ as an $XP$ coupling.
These cover most of the standardly assumed interaction, $XX$, $XP$, $PP$.
\end{comment}
\begin{comment}
Consider a bound pair of harmonic oscillators (our system) bombarded by a gas of thermal harmonic oscillators.
In particular, we take the system to have free Hamiltonian,
\begin{equation}
F_S=\omega_{0,S}
\begin{pmatrix}
\openone_2 & 0\\
0 & \openone_2\\
\end{pmatrix},
\quad\quad
\bm{\alpha}_S=0.
\end{equation}
Notice that the two parts of the system are uncoupled. We take the ancillae to have free Hamiltonian,
\begin{equation}
F_A=\omega_{0,A}\openone_2,
\quad\quad
\bm{\alpha}_A=0.
\end{equation}
Moreover, the ancillae are taken to be in the thermal state,
\begin{equation}
\sigma_A=\nu_A\openone_2,
\quad\quad
\bm{X}_A=0
\end{equation}
with $\nu_A\geq1$.
Finally, we choose for each part of our system to interact with the constituents of the gas via the maximally cooling Hamiltonian identified in the previous example,
\begin{equation}
G_{12A}=\begin{pmatrix}
G_{1A}\\
G_{2A}\\
\end{pmatrix};
\qquad
G_{1A}
=G_{2A}
=g
\begin{pmatrix}
1 & 0\\
0 & 1\\
\end{pmatrix}.
\end{equation}
The zeroth order dynamics of the system are just the free system dynamics,
\begin{align}
A_0&=\omega_{0,S}
\begin{pmatrix}
\openone_2 & 0\\
0 & \openone_2
\end{pmatrix}\\
\bm{b}_0&=0\\
C_0&=0.
\end{align}
It is now convenient to identify the part of the covariance matrix which is fixed under the free dynamics. Taking the covariance matrix to be written as,
\begin{equation}
\sigma(t)=
\begin{pmatrix}
\sigma_1(t) & \gamma(t)\\
\gamma^\intercal(t) & \sigma_2(t)
\end{pmatrix}
\end{equation}
we find that under the free dynamics
\begin{align}\label{2QHOFreeDynamics}
\sigma_1'(t)&=\omega\sigma_1(t)-\sigma_1(t)\omega\\
\sigma_2'(t)&=\omega\sigma_2(t)-\sigma_2(t)\omega\\
\gamma'(t)&=\omega\gamma(t)-\gamma(t)\omega.
\end{align}
Thus the part of the covariance matrix which is fixed under the free dynamics is built out of subblocks which commute with $\omega$, namely $\openone_2$ and $\omega$ itself. The reduced states $\sigma_1$ and $\sigma_2$ must be symmetric and are therefore proportional to the identity and thus thermal.
Thus the fixed states of the free Hamiltonian form a four parameter family,
\begin{align}
\sigma_1&=(\bar{\nu}_S+\Delta\nu_S/2) \, \openone_2\\
\sigma_2&=(\bar{\nu}_S-\Delta\nu_S/2) \, \openone_2\\
\gamma&=\gamma_1 \, \openone_2+\gamma_w \, \omega
\end{align}
where $\bar{\nu}_S$ is the average temperature of the system, $\Delta\nu_S$ is the temperature spread of the system, and $\gamma_1$ and $\gamma_w$ capture the correlations between the two parts of the system.
The rest of the covariance matrix is built of $X$ and $Z$ subblocks. Noting that $\omega X-X\omega=2Z$ and $\omega Z-Z\omega=-2X$, we can see from equations \eqref{2QHOFreeDynamics} that these two parts of the covariance matrix are independent under the free Hamiltonian.
Next we consider the first order dynamics. Calculating the first order dynamics we have,
\begin{equation}
A_1=\frac{1}{2}
\text{det}(G_1)
\begin{pmatrix}
\omega & \omega\\
\omega & \omega
\end{pmatrix}
\end{equation}
and noise
\begin{equation}
C_1=\nu_A
\begin{pmatrix}
\omega G_1G_1^\intercal\omega^\intercal
& \omega G_1G_1^\intercal\omega^\intercal\\
\omega G_1G_1^\intercal\omega^\intercal
& \omega G_1G_1^\intercal\omega^\intercal\\
\end{pmatrix}.
\end{equation}
Noting that
\begin{equation}
\Omega_S A_1=\frac{1}{2}
\text{det}(G_1)
\begin{pmatrix}
-\openone_2 & -\openone_2\\
-\openone_2 & - \openone_2
\end{pmatrix}
\end{equation}
we once again we see the free stationary states, and the rest of the state are dynamically independent. Expand on this. We can thus restrict our attention to the free stationary states. The relevant part of the noise is
\begin{equation}
C_{1,\text{thermal}}=\frac{\nu_A}{2}
\text{Tr}(G_1G_1^\intercal)
\begin{pmatrix}
\openone_2 & \openone_2\\
\openone_2 & \openone_2
\end{pmatrix}.
\end{equation}
Moreover, we see that the first order dynamics generate dynamics within the fixed space of the zeroth order dynamics. Evolving under the first order dynamics gives,
\begin{align}
\bar{\nu}_S'(t)&=-\delta t \ \text{det}(G_1)(\bar{\nu}_S(t)+\gamma_1(t)-\tilde{\nu}_A)\\
\Delta\nu_S'(t)&=-\delta t \ \text{det}(G_1) \ \Delta\nu_S(t)\\
\gamma_1'(t)&=-\delta t \ \text{det}(G_1)(\gamma_1(t)+\bar{\nu}_S(t)-\tilde{\nu}_A)\\
\gamma_w'(t)&=-\delta t \ \text{det}(G_1) \ \gamma_w(t)
\end{align}
where
\begin{equation}
\tilde{\nu}_A=\frac{\text{Tr}(G_1G_1^\intercal)}{2 \, \text{det}(G_1)}\nu_A
\end{equation}
is the effective temperature of the bath. Note that as discussed earlier $\tilde{\nu}_A\geq\nu_A$, with equality only when $G_1$ has the form \eqref{PerfectThermalizationGForm}.
In order for these dynamics to converge we need $\text{det}(G_1)>0$. In which case we can immediately see that $\Delta\nu$ and $\gamma_w$ are exponentially suppressed at a rate $\Gamma=\delta t \, \text{det}(G_1)$ as,
\begin{align}
\Delta\nu_S(t)&=\Delta\nu_S(0)\exp(-\Gamma \ t)\\
\gamma_w(t)&=\gamma_w(0)\exp(-\Gamma \ t)
\end{align}
The coupled equations yield,
\begin{align}
\bar{\nu}_S(t)&=\frac{1}{2}\big(\bar{\nu}_S(0)-\gamma_1(0)+\tilde{\nu}_A\big)\\
&+\frac{\exp(-2\Gamma \ t)}{2}\big(\gamma_1(0)+\bar{\nu}_S(0)-\tilde{\nu}_A\big)\\
\gamma_1(t)&=\frac{1}{2}\big(\gamma_1(0)-\bar{\nu}_S(0)+\tilde{\nu}_A\big)\\
&+\frac{\exp(-2\Gamma \ t)}{2}\big(\gamma_1(0)+\bar{\nu}_S(0)-\tilde{\nu}_A\big).
\end{align}
Thus we see that the average temperature and the correlations are exponentially driven towards
\begin{align}
\bar{\nu}_S(\infty)&=\frac{1}{2}\big(\bar{\nu}_S(0)-\gamma(0)+\tilde{\nu}_A\big)\\
\gamma_1(\infty)&=\frac{1}{2}\big(\gamma(0)-\bar{\nu}_S(0)+\tilde{\nu}_A\big)
\end{align}
at a rate $2\Gamma$.
In the case where the parts of the system start off uncorrelated we see that the system averages its initial temperature with the effective temperature of the bath, developing correlations proportional to the temperature difference. The cooling process is halted early due to a build up of correlations. If the correlations are purged and then cooling recommences the termperature will lower again. Even after that we will only be at the effective termperature. Picking the right interaction gets us to the real temperature. But this isn't actually the "real" termperature as it is just the excitation number. The real temperature depends on the ancilla free Hamiltonian scale which the system doesn't know about.
Thus we see thermalization is frought with excess temperature can be converted into correlations. Is the reverse process possible? Can we extract heat our of correlations?
In particular we will look at the interplay of temperature and single mode squeezing with the correlation (multi-mode squeezings) between the two parts of the system.
In addition to being natural, this scenario is also interesting as a method of producing dynamics with squeezed fixed point. Assuming that there is no squeezing in the system's free dynamics, we would need the squeezing effects to arise at higher orders. If our system is a single harmonic oscillator, the only possible squeezing effects are symplectic. Such effects must can only arise at even order, and thus must be at least second order. Thus the squeezing effect will be dominated by the first order noise. Thus in designing a squeezing protocol we must consider a system of two oscillators, making use of the richer variety of unsymplectic and multi-mode squeezings it offers.
\end{comment}
\section{Conclusion}
We have considered the dynamics induced in a generic Gaussian system when rapidly bombarded (at a frequency $1/\delta t$) by a series of Gaussian ancillae, a scenario we call \textit{Gaussian ancillary bombardment}. This scenario covers (as a particular case) a harmonic oscillator bombarded by a thermal bath of harmonic oscillators.
We have applied this formalism to the relevant case of thermalization by interaction with an environment, investigating the particular case of a harmonic oscillator bombarded by the constituents of a thermal bath of harmonic oscillators.
We have explicitly shown that the equilibration of systems continually bombarded by the micro-constituents of a thermal reservoir is much richer than the naive expectation that `the system will evolve to reach the environment's temperature'. Namely, we analyzed in depth the effect that the coupling of the system to the ancillae composing the thermal bath has on the system's dynamics. In particular we have exactly characterized the couplings which cause the system to reach a thermal fixed point. Perhaps surprisingly, we showed that most couplings do not even lead to equilibration (e.g. \mbox{$H_\text{I}\sim q_\text{S}\otimes q_\text{E}$}). Furthermore, we analyzed the effect that the nature of the system-environment coupling has on whether the system equilibrates and on the final temperature it reaches. Remarkably, we find that in the space of possible couplings only an extremely limited set of interactions causes the system to thermalize to the temperature of its environment. We related such couplings to the rotating wave approximation.
We have found other, more general results that apply to Gaussian ancillary bombardment. For example, we found that a sufficiently complicated interaction Hamiltonian is required to cause purification in this context. We also found that in a general Gaussian bombardment scenario the presence and strength of any dynamics implementing rotation, squeezing and amplification are entirely independent of the state of the ancillae constituting the environment (even outside perturbation theory).
Expanding the dynamics as a series in $\delta t$ we found that different types of dynamics are available at each order in the inverse of the interaction frequency with the following consequences: (a) at zeroth order the evolution is unitary as predicted by the general results in \cite{Layden:2015b}; (b) however, unlike in \cite{Layden:2015b} in the Gaussian regime only a limited range of dynamics (only displacements) can be induced in the system at zeroth order; (c) past zeroth order noise and displacement effects are generically present; (d) rotations, squeezing and amplification effects alternate between unitary and non-unitary at each order.
Our work paves the way to addressing open questions related to the thermodynamics of systems bombarded by environments, and how the energy and information flows between system and environments depend on the particular microscopic details of the interaction.
\acknowledgments
AK, EMM and RBM acknowledge support through the Discovery program of the Natural Sciences and Engineering Research Council of Canada (NSERC). DG acknowledges support by NSERC through the Vanier Scholarship. EGB also acknowledges support by NSERC through their Postdoctoral Fellowship.
\subsection*{Acknowledgements}
The author would like to thank Jean-Louis Colliot-Th\'el\`ene for pointing out some erroneous statements in an early draft and for suggesting some useful references. The author would also like to thank Bruno Kahn for making some helpful suggestions.
\subsection*{Notation}
Throughout this note, we will let $X$ be a smooth projective variety of dimension $d$ over a field $k$, which we will assume to be of characteristic $0$ when necessary. We also let $\overline{k}$ be the separable closure of $k$ and $\overline{X}:= X \times_{k} \overline{k}$. We also let $G_{k}$ denote the absolute Galois group of $k$.
\section{Preliminaries}
\subsection{The isogeny category}
\begin{Def} Given an additive category $\mathcal{A}$, we define the associated {\em isogeny category of ${\mathcal A}$} to be the Serre localization ${\mathcal A}_{{\mathbb Q}}$ of ${\mathcal A}$ along the isogenies; more precisely, ${\mathcal A}_{{\mathbb Q}}$ is the category whose objects are the same as those of ${\mathcal A}$ and whose morphisms are given by:
\[ \mathop{\rm Hom}\nolimits_{{\mathcal A}_{{\mathbb Q}}} (A, B):= \mathop{\rm Hom}\nolimits_{{\mathcal A}} (A, B) \otimes {\mathbb Q} \]
\end{Def}
\noindent As a matter of convenience, we have the following straightforward lemma:
\begin{Lem}\label{lem-bas} Suppose that ${\mathcal A}$ is an additive category and that $\phi \in \mathop{\rm Hom}\nolimits_{{\mathcal A}} (A, B)$ is split-injective (resp., split-surjective) in the isogeny category ${\mathcal A}_{{\mathbb Q}}$. Then, the kernel (resp., cokernel) of $\phi$ is of finite-exponent.
\end{Lem}
\noindent There is also the following analogue of one of Deligne's decomposition theorems, set in the isogeny category:
\begin{Lem}\label{lem-gen} Let $\mathcal{A}$ be an Abelian category and let $D({\mathcal A})$ be the derived category of bounded complexes in ${\mathcal A}$. Suppose that there exists $A^{*} \in D({\mathcal A})$ and $\phi \in \mathop{\rm Hom}\nolimits_{D({\mathcal A})} (A^{*}, A^{*}[2])$ for which the map on cohomology
\[ H^{n-i}(\phi^{i}): H^{n-i}(A^{*}) \to H^{n+i}(A^{*}) \]
is an isogeny for all $i >0$. Then, there exists a non-canonical decomposition in $D({\mathcal A})_{{\mathbb Q}}$:
\[ A^{*} \cong \bigoplus_{i} H^{i} (A^{*})[-i] \]
\begin{proof} This is nothing more than \cite{D1} Theorem 1.5 after one realizes that $D({\mathcal A}_{{\mathbb Q}})$ is the isogeny category of $D({\mathcal A})$.
\end{proof}
\end{Lem}
\begin{Cor}\label{main-cor} Retain the notation and assumptions of the previous lemma and let $\Gamma: {\mathcal A} \to \textbf{Ab}$ be a left-exact functor to the category of Abelian groups. Then, for all $i \geq 0$ the edge map
\[ H^{i} (R\Gamma A^{*}) \to \Gamma H^{i} (A^{*})\]
is split-surjective in the isogeny category of Abelian groups; in particular, the cokernel is of finite exponent.
\begin{proof} The first statement follows from Lemma \ref{lem-gen} and \cite{D1} Prop. 1.2. The second statement follows from Lemma \ref{lem-bas}.
\end{proof}
\end{Cor}
\subsection{Some Lefschetz-type Theorems}
\begin{Lem}\label{hard-lef} Suppose that $h \in Pic(X)$ is the class of an ample divisor and $\ell$ is a prime different from the characteristic of $k$.
\begin{enumerate}[label=(\alph*)]
\item\label{l} For all $m\geq 0$, $H^{d-m}_{\text{\'et}} (\overline{X}, {\mathbb Z}_{\ell}) \xrightarrow{\cup h^{m}} H^{d+m}_{\text{\'et}} (\overline{X}, {\mathbb Z}_{\ell}(m))$
is an isogeny in the category of $G_{k}$-modules.
\item\label{no-l} Suppose further that $k$ has characteristic $0$ or that $X$ satisfies the standard conjectures. Also set
\[ H^{*}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}') := \prod_{\ell \neq \text{char } k} H^{*}_{\text{\'et}} (\overline{X}, {\mathbb Z}_{\ell}) \]
Then, for all $m\geq 0$, $H^{d-m}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}') \xrightarrow{\cup h^{m}} H^{d+m}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}'(m))$
is an isogeny in the category of $G_{k}$-modules.
\end{enumerate}
\begin{proof} Since $X$ is defined over a finitely generated field, we can assume (by invariance of \'etale cohomology under separably closed extensions, \cite{M} VI Cor. 2.6) that $k$ is some finitely generated field. When $k$ has characteristic $0$, one thus reduces to the case that $k \subset {\mathbb C}$. In this case, the classical hard Lefschetz theorem gives an isomorphism of singular cohomology:
\[ H^{d-m} (X_{{\mathbb C}}, {\mathbb Q}) \xrightarrow{\cup h^{m}} H^{d+m} (X_{{\mathbb C}}, {\mathbb Q}(m))\]
which means that the corresponding map with integral coefficients is an isogeny of Abelian groups. Both statements \ref{l} and \ref{no-l} then follow from the comparison isomorphism between singular and \'etale cohomology. Now suppose that $k$ has positive characteristic. Then, to prove statements \ref{l} and \ref{no-l}, note that by the main result of \cite{D2} we have an isomorphism:
\[H^{d-m}_{\text{\'et}} (\overline{X}, {\mathbb Q}_{\ell}) \xrightarrow{\cup h^{m}} H^{d+m}_{\text{\'et}} (\overline{X}, {\mathbb Q}_{\ell}(m)) \]
Since $H^{*}_{\text{\'et}} (\overline{X}, {\mathbb Z}_{\ell})$ is a finitely generated ${\mathbb Z}_{\ell}$ module, it follows that the corresponding map with ${\mathbb Z}_{\ell}$ coefficients is an isogeny of $G_{k}$-modules. To obtain the corresponding statement for $H^{*}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}})$, we use the Lefschetz standard conjecture to obtain a correspondence
\[ \Lambda_{m} \in CH^{d-m} (X \times X)\]
for which there exists some integer $M$ satisfying
\[ (\cup h^{m})\circ \Lambda_{m} = M\cdot \text{id}_{H^{d+m}_{\text{\'et}} (\overline{X}, {\mathbb Z}_{\ell}(m))}, \ \Lambda_{m}\circ(\cup h^{m}) = M\cdot \text{id}_{H^{d-m}_{\text{\'et}} (\overline{X}, {\mathbb Z}_{\ell})} \]
for all $\ell$. Thus,
\[\cup h^{m}: H^{d-m}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}') \to H^{d+m}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}'(m))\]
is an isogeny of $G_{k}$-modules, which gives statement \ref{no-l} in positive characteristic.
\end{proof}
\end{Lem}
\begin{Rem} The reader should note that the role of the Lefschetz standard conjecture in characteristic $p>0$ is to ensure that the degree of the isogeny
\[\cup h^{m}: H^{d-m}_{\text{\'et}} (\overline{X}, {\mathbb Z}_{\ell}) \to H^{d+m}_{\text{\'et}} (\overline{X}, {\mathbb Z}_{\ell}(m))\]
does not depend on $\ell$. The author is not sure how to prove this in the absence of the Lefschetz standard conjecture.
\end{Rem}
\begin{Cor}\label{hard-lef-cor} With the notation of Lemma \ref{hard-lef},
\begin{enumerate}[label=(\alph*)]
\item\label{l-2} For all $m\geq 0$, $H^{d-m}_{\text{\'et}} (\overline{X}, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}) \xrightarrow{\cup h^{m}} H^{d+m}_{\text{\'et}} (\overline{X}, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}(m))$
is an isogeny in the category of $G_{k}$-modules.
\item\label{no-l-2} Suppose further that $k$ has characteristic $0$ or that $X$ satisfies the standard conjectures and set
\[ H^{*}_{\text{\'et}} (\overline{X}, {\mathbb Q}/{\mathbb Z}') := \prod_{\ell \neq \text{char } k} H^{*}_{\text{\'et}} (\overline{X}, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}) \]
Then, for all $m\geq 0$, $H^{d-m}_{\text{\'et}} (\overline{X}, {\mathbb Q}/{\mathbb Z}') \xrightarrow{\cup h^{m}} H^{d+m}_{\text{\'et}} (\overline{X}, {\mathbb Q}/{\mathbb Z}'(m))$
is an isogeny in the category of $G_{k}$-modules.
\end{enumerate}
\begin{proof}
For statement \ref{l-2}, there is a commutative diagram with exact rows:
\[\begin{tikzcd} H^{d-m}_{\text{\'et}} (\overline{X}, {\mathbb Z}_{\ell}) \otimes {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell} \arrow[hook]{r} \arrow{d}{\cup h^{m}} & H^{d-m}_{\text{\'et}} (\overline{X}, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}) \arrow[two heads]{r} \arrow{d}{\cup h^{m}} & H^{d-m+1}_{\text{\'et}} (\overline{X}, {\mathbb Z}_{\ell})[\ell^{\infty}] \arrow{d}{\cup h^{m}} \\
H^{d+m}_{\text{\'et}} (\overline{X}, {\mathbb Z}_{\ell}) \otimes {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell} \arrow[hook]{r} & H^{d+m}_{\text{\'et}} (\overline{X}, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}) \arrow[two heads]{r} & H^{d+m+1}_{\text{\'et}} (\overline{X}, {\mathbb Z}_{\ell})[\ell^{\infty}]
\end{tikzcd}\]
(suppressing weights). Since $H^{*}_{\text{\'et}} (\overline{X}, {\mathbb Z}_{\ell})$ is a finitely generated ${\mathbb Z}_{\ell}$-module, both of the rightmost terms are finite. So, to prove that the middle vertical arrow is an isogeny, it suffices to prove that the left vertical arrow is one; the latter follows directly from Lemma \ref{hard-lef} \ref{l}. To prove statement \ref{no-l-2}, note that there is an identical diagram with coefficients in
\[ {\mathbb Q}/{\mathbb Z}' = \bigoplus_{\ell \neq \text{char } k} {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell} \]
and with the rightmost terms $H^{d-m+1}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}')_{tors}[\frac{1}{p}]$ and $H^{d+m+1}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}')_{tors}[\frac{1}{p}]$, where $p$ is the exponential characteristic of $k$. These latter groups are finite by the main result of \cite{G}. The argument from statement \ref{l-2} works mutatis mutandis (using Lemma \ref{hard-lef} \ref{no-l} this time).
\end{proof}
\end{Cor}
\begin{Cor}\label{natural} Retain the assumptions of Lemma \ref{hard-lef}.
\begin{enumerate}[label=(\alph*)]
\item\label{l-3} For all $r$ and $m$, the cokernel of the natural map
\begin{equation} H^{m}_{\text{\'et}} (X, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}(r)) \to H^{m}_{\text{\'et}} (\overline{X}, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}(r))^{G_{k}}\label{weak} \end{equation}
is finite.
\item\label{no-l-3} Suppose further that $k$ has characteristic $0$ or that $X$ satisfies the standard conjectures. Then, for all $r$ and $m$, the cokernel of the natural map
\begin{equation} H^{m}_{\text{\'et}} (X, {\mathbb Q}/{\mathbb Z}'(r)) \to H^{m}_{\text{\'et}} (\overline{X}, {\mathbb Q}/{\mathbb Z}'(r))^{G_{k}}\label{weak-2} \end{equation}
is finite.
\end{enumerate}
\begin{proof} For both statements, we first show that the cokernel is of finite exponent. This follows for (\ref{weak}) (resp., for (\ref{weak-2})) by Corollary \ref{hard-lef-cor} \ref{l-2} and Corollary \ref{main-cor} (resp., by Corollary \ref{hard-lef-cor} \ref{no-l-2} and Corollary \ref{main-cor}). The desired statement then follows from the lemma below, which is a straightforward group-theoretic fact (cf.\ \cite{CTS} \S 1.2).
\end{proof}
\end{Cor}
\begin{Lem}\label{nat-lem} Suppose that $A$ is an Abelian group of the form $({\mathbb Q}_{\ell}/{\mathbb Z}_{\ell})^{n} \oplus F$ (or $({\mathbb Q}/{\mathbb Z}')^{n} \oplus F$), where $n \geq 0$ and $F$ is finite. Then, any finite-exponent subquotient of $A$ is finite.
\end{Lem}
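\noindent A sketch of the subgroup case (quotients are similar, using that a nonzero divisible group admits no nonzero quotient of finite exponent): a subgroup of exponent $N$ lies in the $N$-torsion, and

```latex
\[ \left(({\mathbb Q}_{\ell}/{\mathbb Z}_{\ell})^{n} \oplus F\right)[N]
   \cong \left(\tfrac{1}{N}{\mathbb Z}_{\ell}/{\mathbb Z}_{\ell}\right)^{n} \oplus F[N] \]
```

is visibly finite.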
\begin{Rem} We remark that Corollary \ref{natural} is only interesting for $m = 2r, 2r-1$ since for all others, the right hand groups are already finite by the results of \cite{CTR}.
\end{Rem}
\subsection{\'Etale motivic cohomology}
\noindent Let $\tau$ denote either the Zariski or (small) \'etale topology. For $p \geq 0$, let $\Delta^{p}\subset \mathbb{A}^{p+1}_{k}$ denote the algebraic simplex defined by $x_{0} + \ldots + x_{p} = 1$, and let $\partial\Delta^{p}$ denote its boundary. As in \cite{RS}, for each integer $m \geq 0$ and each $i \geq 0$ one has a sheaf of Abelian groups for the $\tau$ topology on $X$, $U \mapsto z^{m} (U, i)$, where $z^{m} (U, i)$ denotes the free Abelian group of codimension $m$ cycles on $U \times \Delta^{i}$ that intersect $U \times \partial\Delta^{i}$ properly. These fit into a bounded above complex whose boundary maps are defined in the usual way. Then, for any Abelian group $A$, consider the corresponding {\em cycle complex}:
\[ A_{X}(m)_{\tau} := (z^{m} (-, *)_{\tau} \otimes A)[-2m] \]
and then define the corresponding hypercohomology groups, known as the motivic and \'etale motivic cohomology groups, respectively:
\[ \begin{split}
H^{p}_{M} (X, A(m)) & = \mathbb{H}^{p}_{\text{Zar}} (X, A_{X}(m)_{\text{Zar}})\\
H^{p}_{L} (X, A(m)) & = \mathbb{H}^{p}_{\text{\'et}} (X, A_{X}(m)_{\text{\'et}})
\end{split}
\]
For the Zariski motivic cohomology groups, there are isomorphisms with Bloch's higher Chow groups, $CH^{m} (X, 2m-p) \cong H^{p}_{M} (X, {\mathbb Z}(m))$ (see \cite{RS} p. 515). As per convention, we denote the Lichtenbaum Chow group by $CH^{m}_{L} (X) = H^{2m}_{L} (X, {\mathbb Z}(m))$. We also observe that $H^{p}_{M} (X, {\mathbb Q}(m)) \cong H^{p}_{L} (X, {\mathbb Q}(m))$ by \cite{K} Th\'eor\`eme 2.6.\\
\indent Geisser and Levine (in \cite{GL1} and \cite{GL2}) establish quasi-isomorphisms in the (bounded below) derived category of \'etale sheaves over $X$:
\begin{equation} ({\mathbb Z}/\ell^{r})_{X}(m)_{\text{\'et}} \xrightarrow{\sim} {\mathbb Z}/\ell^{r}(m) \label{Geisser-Levine} \end{equation}
for $\ell$ a prime and $r\geq 1$. In particular, the corresponding map on hypercohomology is an isomorphism:
\begin{equation}H^{p}_{L} (X, {\mathbb Z}/\ell^{r}(m)) \cong H^{p}_{\text{\'et}} (X, {\mathbb Z}/\ell^{r}(m))\label{iso-1} \end{equation}
\begin{Def} We denote by $N^{m} (X)_{\ell}$ (or when there is no confusion, $N^{m} (X)$) the image of the $\ell$-adic cycle class map:
\[\mathop{\lim_{\longleftarrow}}_{r\geq 0} c_{\ell^{r}}^{m}: CH^{m}_{L} (X) \to H^{2m}_{\text{\'et}} (X, {\mathbb Z}_{\ell}(m))\]
Also, define the $m^{th}$ higher Brauer group $Br^{m} (X) := H^{2m+1}_{L} (X, {\mathbb Z}(m))$.
\end{Def}
\begin{Thm}\label{props} With the above notation,
\begin{enumerate}[label=(\alph*)]
\item There exists a natural short exact sequence:
\[ 0 \to H^{p}_{L} (X, {\mathbb Z}(m)) \otimes {\mathbb Q}/{\mathbb Z} \to H^{p}_{\text{\'et}} (X, {\mathbb Q}/{\mathbb Z}(m)) \to H^{p}_{L} (X, {\mathbb Z}(m))_{tors} \to 0 \]
where the first non-zero arrow is the cycle class map.
\item $H^{p}_{L} (X, {\mathbb Z}(m))$ is torsion for $p>2m$.
\item When $k$ is algebraically closed of characteristic $0$, $N^{m} (X) \otimes {\mathbb Q} \subset A^{m} (X)$, the ${\mathbb Q}$-vector space of absolute Hodge cycles in $H^{2m}_{\text{\'et}} (X, {\mathbb Q}_{\ell}(m))$.
\end{enumerate}
\end{Thm}
\subsection{A splitting result}
\noindent For the applications to follow, we will need the following auxiliary results.
\begin{Lem}\label{above} $N^{m} (\overline{X})$ is a finitely generated Abelian group.
\begin{proof} This follows from the fact that
\[ CH^{m} (\overline{X}) \otimes {\mathbb Q} \cong CH^{m}_{L} (\overline{X}) \otimes {\mathbb Q} \]
and the fact that the image of the cycle class map
\[ CH^{m} (\overline{X}) \otimes {\mathbb Q} \to H^{2m}_{\text{\'et}} (\overline{X}, {\mathbb Q}_{\ell}(m))\]
is a finite-dimensional vector space.
\end{proof}
\end{Lem}
\noindent Now, we adopt the notation:
\[ \hat{{\mathbb Z}}' := \prod_{\ell \neq \text{char } k} {\mathbb Z}_{\ell}, \ \hat{{\mathbb Q}}' := \prod_{\ell \neq \text{char } k} {\mathbb Q}_{\ell}\]
\begin{Lem}\label{lattice} Suppose that $k$ has characteristic $0$. Then, there exists a $G_{k}$-invariant pairing:
\[ H^{2m}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}'(m)) \otimes H^{2m}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}'(m)) \xrightarrow{(,)} \hat{{\mathbb Z}}'\]
and a discrete subgroup $N^{m} (\overline{X})' \subset H^{2m}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}'(m))$ for which:
\begin{enumerate}[label=(\alph*)]
\item $N^{m} (\overline{X}) \subset N^{m} (\overline{X})'$
\item $N^{m} (\overline{X})'$ is $G_{k}$-stable
\item $(,)$ restricted to $N^{m} (\overline{X})'$ is integral and non-degenerate.
\end{enumerate}
\begin{proof} Select an ample divisor $h \in Pic(X)$. Then, for $2m \leq d$, we define the pairing as:
\[ (,): H^{2m}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}'(m)) \otimes H^{2m}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}'(m)) \to \hat{{\mathbb Z}}', \ (\alpha, \beta) = \alpha \cup \beta \cup h^{d-2m} \]
This is certainly $G_{k}$-invariant. Now, if $d < 2m$ we define the pairing as follows. By Lemma \ref{hard-lef} there is an isogeny
\[ \Lambda_{m}: H^{2m}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}'(m)) \to H^{2d-2m}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}'(d-m)) \]
which is inverse to $\cup h^{2m-d}$. In fact, since $k$ has characteristic $0$, this isogeny is defined for singular cohomology with integral coefficients. Thus, we can define the pairing as
\[ (,): H^{2m}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}'(m)) \otimes H^{2m}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}'(m)) \to \hat{{\mathbb Z}}', \ (\alpha, \beta) = \Lambda_{m}(\alpha) \cup \beta \]
Again, this is a $G_{k}$-invariant pairing since $\Lambda_{m}$ is the inverse isogeny of $\cup h^{2m-d}$. Now, we set $N^{m} (\overline{X})'$ to be a lattice in the ${\mathbb Q}$-vector space $A^{m}_{\text{\'et}} (\overline{X}) \subset H^{2m}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Q}}'(m))$ of absolute Hodge classes of degree $2m$. Note that $A^{m}_{\text{\'et}} (\overline{X})$ does not depend on $\ell$, contains $N^{m} (\overline{X})$ (Theorem \ref{props}) and is $G_{k}$-stable. Thus, upon refining this lattice, we can assume that it contains $N^{m} (\overline{X})$ and is $G_{k}$-stable. That the pairing $(,)$ is non-degenerate on $A^{m}_{\text{\'et}} (\overline{X})$ is well-known (see, for instance, \cite{A} Prop. 3.3).
\end{proof}
\end{Lem}
\begin{Lem}\label{split} Suppose that $\hat{V}$ is a continuous $\hat{{\mathbb Z}}'[G_{k}]$-module possessing a $G_{k}$-invariant pairing:
\[ (,): \hat{V} \otimes \hat{V} \to \hat{{\mathbb Z}}' \]
and $M \subset \hat{V}$ is a finitely generated Abelian group which is $G_{k}$-stable, discrete and for which $(,)$ restricted to $M$ is integral and non-degenerate. Then, the inclusion
\[ M \otimes \hat{{\mathbb Z}}' \hookrightarrow \hat{V} \]
is split-injective in the isogeny category of $G_{k}$-modules.
\begin{proof} As a first reduction, we may freely pass to a finite extension of $k$ whenever necessary. To see why, let $k'/k$ be a finite Galois extension and observe that if a splitting $\rho$ exists which is $G_{k'}$-invariant, then one can set
\[ \rho' := \sum_{h \in H} h^{*}\rho \]
where $H = G_{k}/G_{k'}$ to obtain a $G_{k}$-invariant splitting.\\
\indent Now, there is the composition of $G_{k}$-modules:
\[ \phi: M \otimes \hat{{\mathbb Z}}' \hookrightarrow \hat{V} \to \hat{V}^{\vee}=\mathop{\rm Hom}\nolimits_{\hat{{\mathbb Z}}'} (\hat{V}, \hat{{\mathbb Z}}') \xrightarrow{res_{M \otimes \hat{{\mathbb Z}}'}} \mathop{\rm Hom}\nolimits_{\hat{{\mathbb Z}}'} (M \otimes \hat{{\mathbb Z}}', \hat{{\mathbb Z}}')\]
defined by
\[ \alpha \mapsto (\alpha, -), \alpha \in M \]
Here, the middle arrow is induced by the pairing $(,)$. It suffices to show that $\phi$ is split-injective. By the non-degeneracy of $(,)$ on $M$, $\phi$ is injective. Upon passing to a finite extension of $k$, we can assume that $M$ is a trivial $G_{k}$-module (since $M$ is discrete by assumption). Certainly, $\phi$ is split-injective in the isogeny category of Abelian groups, from which it follows that $\phi$ is split-injective in the isogeny category of $G_{k}$-modules, as desired.
\end{proof}
\end{Lem}
\begin{Cor}\label{splitting} Suppose $k$ has characteristic $0$ or that $X$ satisfies the standard conjectures. Then, the natural inclusion
\[ N^{m} (\overline{X}) \otimes \hat{{\mathbb Z}}' \hookrightarrow H^{2m}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}'(m)) \]
is split-injective in the isogeny category of $G_{k}$-modules.
\begin{proof} First assume that $k$ has characteristic $0$. From Lemma \ref{split} it follows that the natural inclusion
\[ N^{m} (\overline{X})'\otimes \hat{{\mathbb Z}}' \hookrightarrow H^{2m}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}'(m)) \]
is split-injective, where $N^{m} (\overline{X})'$ is the discrete subgroup from Lemma \ref{lattice}. Again, we can assume (upon passing to a finite extension of $k$) that $N^{m} (\overline{X})'$ is a trivial $G_{k}$-module, from which it follows that the injection of (trivial) $G_{k}$-modules
\[ N^{m} (\overline{X}) \hookrightarrow N^{m} (\overline{X})'\]
is also split. It follows that the composite injection:
\[ N^{m} (\overline{X}) \otimes \hat{{\mathbb Z}}' \hookrightarrow N^{m} (\overline{X})' \otimes \hat{{\mathbb Z}}' \hookrightarrow H^{2m}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}'(m)) \]
is also split-injective, as desired.\\
\indent On the other hand, if $X$ satisfies the standard conjectures, then homological equivalence coincides with numerical equivalence, which means that the pairing $(,)$ is non-degenerate on $N^{m} (\overline{X})$. The above proof then works mutatis mutandis with $N^{m} (\overline{X}) = N^{m} (\overline{X})'$.
\end{proof}
\end{Cor}
\section{Proof of Proposition \ref{Tate}}
\noindent (Proof of $\Leftarrow$) Consider the short exact sequence of $G_{k}$-modules:
\begin{equation} 0 \to N^{m} (\overline{X}) \otimes {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell} \to H^{2m}_{\text{\'et}} (\overline{X}, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}(m)) \to Br^{m} (\overline{X})[\ell^{\infty}] \to 0\label{exact-1} \end{equation}
Applying the $G_{k}$-invariants functor to (\ref{exact-1}), we obtain an exact sequence:
\[ 0 \to N^{m} (\overline{X})^{G_{k}} \otimes {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell} \xhookrightarrow{\phi} H^{2m}_{\text{\'et}} (\overline{X}, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}(m))^{G_{k}} \to Br^{m} (\overline{X})[\ell^{\infty}]^{G_{k}}\]
Suppose that $Br^{m} (\overline{X})[\ell^{\infty}]^{G_{k}}$ is finite, so that $\text{coker } \phi$ is also. Then, there is a commutative diagram with rows exact:
\[ \begin{tikzcd}
N^{m} (\overline{X})^{G_{k}} \otimes {\mathbb Z}_{\ell} \arrow{r} \arrow{d} & N^{m} (\overline{X})^{G_{k}} \otimes {\mathbb Q}_{\ell} \arrow[two heads]{r} \arrow{d} & N^{m} (\overline{X})^{G_{k}} \otimes {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}\arrow{d}{\phi}\\
H^{2m}_{\text{\'et}} (\overline{X}, {\mathbb Z}_{\ell}(m))^{G_{k}}/tors \arrow[hook]{r} & H^{2m}_{\text{\'et}} (\overline{X}, {\mathbb Q}_{\ell}(m))^{G_{k}} \arrow{r} & H^{2m}_{\text{\'et}} (\overline{X}, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}(m))^{G_{k}}
\end{tikzcd} \]
A diagram chase then shows that there is an inclusion
\[ C \otimes {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell} \hookrightarrow \text{coker } \phi \]
where
\[ C:= \text{coker } \{ N^{m} (\overline{X})^{G_{k}} \to H^{2m}_{\text{\'et}} (\overline{X}, {\mathbb Z}_{\ell}(m))^{G_{k}} \} \]
It follows that $C \otimes {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}$ is finite. Since $C$ is a finitely generated ${\mathbb Z}_{\ell}$-module, it then follows that $C \otimes {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell} = 0$ and hence that $C$ is torsion; in particular, $TC^{m} (X)_{{\mathbb Q}_{\ell}}$ holds.\\
\indent (Proof of $\Rightarrow$) Suppose conversely that the Tate conjecture $TC^{m} (X)_{{\mathbb Q}_{\ell}}$ holds. Then, the proof proceeds as in \cite{SZ} Prop. 2.5. Indeed, let
\[ T_{\ell}(Br^{m} (\overline{X})) := \mathop{\rm Hom}\nolimits({\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}, Br^{m} (\overline{X})) =\mathop{\lim_{\longleftarrow}}_{r\geq 0} Br^{m}(\overline{X})[\ell^{r}]\]
Then, by Corollary \ref{splitting} there is a splitting of $G_{k}$-modules
\begin{equation} H^{2m}_{\text{\'et}} (\overline{X},{\mathbb Q}_{\ell}(m)) \cong N^{m}(\overline{X}) \otimes {\mathbb Q}_{\ell} \oplus T_{\ell}(Br^{m} (\overline{X})) \otimes {\mathbb Q}_{\ell}\label{split-exact} \end{equation}
Moreover, from the Tate conjecture there exists some finite Galois extension $k'/k$ for which
\[ N^{m}(\overline{X})^{Gal(\overline{k}/k')} \otimes {\mathbb Q}_{\ell} \xrightarrow{\cong} H^{2m}_{\text{\'et}} (\overline{X},{\mathbb Q}_{\ell}(m))^{Gal(\overline{k}/k')} \]
and hence $T_{\ell}(Br^{m} (\overline{X}))^{Gal(\overline{k}/k')}$ is finite. Finally, since ${\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}$ is a trivial $G_{k}$-module, it follows that
\[ T_{\ell}(Br^{m} (\overline{X}))^{Gal(\overline{k}/k')} = \mathop{\rm Hom}\nolimits({\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}, Br^{m} (\overline{X})^{Gal(\overline{k}/k')}) \]
and since this is finite, we deduce that so is $Br^{m} (\overline{X})^{Gal(\overline{k}/k')}$.
\begin{Rem} The main difficulty in extending this result to characteristic $p>0$ is that there is no notion of absolute Hodge classes in this case and so even with the Tate conjecture for $X$, it is not clear that the splitting of $G_{k}$-modules (\ref{split-exact}) holds. If one instead assumes the Tate conjecture for all products $X^{n}$, then the results of \cite{Mo} (for instance) show that $H^{2m}_{\text{\'et}} (\overline{X},{\mathbb Q}_{\ell}(m))$ is semi-simple as a $G_{k}$-module. This would then imply (\ref{split-exact}).
\end{Rem}
\section{Proof of Theorem \ref{main}}
\noindent Note that there is a commutative diagram with rows exact:
\[\begin{tikzcd}
& N^{m} (X)_{{\mathbb Q}/{\mathbb Z}'} \arrow{r} \arrow{d} & H^{2m}_{\text{\'et}} (X, {\mathbb Q}/{\mathbb Z}'(m)) \arrow{r} \arrow{d} & Br^{m} (X)[\frac{1}{p}] \arrow{r} \arrow{d}{\phi} &0\\
0 \arrow{r} & N^{m} (\overline{X})_{{\mathbb Q}/{\mathbb Z}'}^{G_{k}} \arrow{r} & H^{2m}_{\text{\'et}} (\overline{X}, {\mathbb Q}/{\mathbb Z}'(m))^{G_{k}} \arrow{r} & Br^{m} (\overline{X})^{G_{k}}[\frac{1}{p}] \arrow[two heads]{r} & K
\end{tikzcd}
\]
where $A_{{\mathbb Q}/{\mathbb Z}'} = A \otimes {\mathbb Q}/{\mathbb Z}'$ for any Abelian group $A$ and where
\[K:= \text{ker } \{H^{1} (k, N^{m} (\overline{X})_{{\mathbb Q}/{\mathbb Z}'}) \xrightarrow{\alpha} H^{1} (k, H^{2m}_{\text{\'et}} (\overline{X}, {\mathbb Q}/{\mathbb Z}'(m)))\}\]
Since the middle vertical arrow has finite cokernel by Corollary \ref{natural}, what remains is to show that $K$ is finite. By Lemma \ref{nat-lem}, it suffices to prove that $K$ has finite exponent. To this end, observe that $\alpha$ factors as
\[ H^{1} (k, N^{m} (\overline{X})_{{\mathbb Q}/{\mathbb Z}'}) \xrightarrow{\beta} H^{1} (k, H^{2m}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}'(m))_{{\mathbb Q}/{\mathbb Z}'}) \xrightarrow{\gamma} H^{1} (k, H^{2m}_{\text{\'et}} (\overline{X}, {\mathbb Q}/{\mathbb Z}'(m))) \]
From Corollary \ref{splitting},
\[ N^{m} (\overline{X}) \hookrightarrow H^{2m}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}'(m)) \]
is split-injective in the isogeny category of $G_{k}$-modules, from which it follows that $\beta$ is split-injective in the isogeny category of Abelian groups. From Lemma \ref{lem-bas} it follows that the kernel of $\beta$ is of finite exponent. On the other hand, there is a short exact sequence of $G_{k}$-modules:
\[ 0 \to H^{2m}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}'(m))_{{\mathbb Q}/{\mathbb Z}'} \to H^{2m}_{\text{\'et}} (\overline{X}, {\mathbb Q}/{\mathbb Z}'(m)) \to H^{2m+1}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}'(m))_{tors} \to 0 \]
By the comparison isomorphism with singular cohomology, $H^{2m+1}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}'(m))_{tors}$ is finite (since singular cohomology with integral coefficients is finitely generated). Applying $H^{*} (k, -)$ to this exact sequence, one obtains an exact sequence:
\[ H^{2m+1}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}'(m))_{tors}^{G_{k}} \to H^{1} (k, H^{2m}_{\text{\'et}} (\overline{X}, \hat{{\mathbb Z}}'(m))_{{\mathbb Q}/{\mathbb Z}'}) \xrightarrow{\gamma} H^{1} (k, H^{2m}_{\text{\'et}} (\overline{X}, {\mathbb Q}/{\mathbb Z}'(m))) \]
from which it follows that the kernel of $\gamma$ has finite exponent. We deduce that $K$ has finite exponent, as desired.
\section{Introduction}\label{sec1}
With the extraordinary photometric precision and near-continuous observations, the {\it Kepler} mission \citep{Kepler} has revolutionized the study of exoplanets and stellar physics. Before the loss of a second reaction wheel, {\it Kepler} had monitored more than $150,000$ stars for 4 years in the originally fixed field of view (FoV). The collected data enable one to study in unprecedented detail the internal structures of stars by means of asteroseismology \citep{DeRidder2009, Papics2014, Keen2015, Kurtzetal2015, Saio2015, Schmid2015, VanReeth2015}.
{\it Kepler} has revealed some surprising results for the classical pulsators along the main sequence (MS), such as $\gamma$ Dor stars, $\delta$ Sct stars, slowly pulsating B (SPB) stars, and $\beta$ Cep stars. The distribution of oscillating A and F stars extends well beyond the instability strips defined by ground-based observations and theoretical calculations, and hybrid pulsators are far more common than previously expected \citep{GABetal2010, UMGetal2011}. There are also non-pulsating stars within the instability strips \citep{BD2011, Balonaetal2011}.
Further investigations of these problems require reliable classification and statistics of different types of variable stars. However, it is difficult to discriminate between $\gamma$ Dor and SPB stars, or between $\delta$ Sct and $\beta$ Cep stars based solely on their oscillations, because the period ranges of these different types of stars overlap. Therefore accurately determined stellar parameters, such as effective temperature $T_\mathrm{eff}$, surface gravity $\log g$, and metallicity, are crucial for the identification of MS oscillations and precision asteroseismology. However, only a small fraction of these upper MS stars in the {\it Kepler} field have parameters derived from ground-based spectroscopic or photometric follow-up observations. Most of the classification and statistical analysis of variable stars still rely on the {\it Kepler} Input Catalog \citep[KIC;][]{KIC}. \citet{PAMZetal2012} found that the KIC systematically underestimates $T_\mathrm{eff}$. The underestimation increases toward high temperature and becomes rather significant for B stars \citep{Balonaetal2011, LTSetal2011, MNJMK2012, TLSU2013}. As warned by \citet{KIC}, ``the KIC $T_\mathrm{eff}$ estimates are untrustworthy for $T_\mathrm{eff} \ge 10^4$ K" due to the lack of $u$-band data.
Underestimated $T_\mathrm{eff}$ may lead to misidentification of B stars. In the case of oscillation, a $\beta$ Cep (or SPB) star could easily be misidentified as a $\delta$ Sct (or $\gamma$ Dor) candidate. This may further affect our conclusions on the distribution and proportion of different types of stellar oscillations. Therefore in order to fully exploit the large amount of {\it Kepler} data, it is desirable to obtain accurate stellar parameters efficiently from ground-based follow-up observations.
The Large Sky Area Multi-Object Fiber Spectroscopic Telescope \citep[LAMOST, also known as Guoshoujing Telescope;][]{LAMOST} has the potential to fulfill this purpose. LAMOST is a reflecting Schmidt telescope with a large aperture (4 m) and a wide FoV ($5^\circ$). The $4000$ fibers on the focal plane enable it to collect low-resolution ($R=1800$) spectra of thousands of objects down to magnitude 19 in a single exposure \citep{Lsurvey}. The LAMOST--{\it Kepler} project aims to determine the parameters of more than 1 million objects in the {\it Kepler} field \citep{DeCat2015}. Using $12,000$ stars, \citet{DZZetal2014} confirmed the systematic underestimation of metallicity in KIC, and gave a correction relation based on LAMOST metallicity. In this paper, we verify the discrepancy in B-star parameters between the KIC and ground-based observations using LAMOST data in Section \ref{sec2}, and report the identification of four misclassified MS B stars in Section \ref{sec3}. The oscillation properties of the four stars are studied in detail based on four years of {\it Kepler} observation in Section \ref{sec4}. In the last section, we present our conclusions.
\section{Comparison of stellar parameters}\label{sec2}
We have collected the published values of $T_\mathrm{eff}$ and $\log g$ for {\it Kepler} MS B stars, including those from high-resolution spectroscopy by \citeauthor{LTSetal2011} (2011; $R=32,000$), \citeauthor{BM2011} (2011; $R=15,000$--$80,000$), \citeauthor{TLSU2013} (2013; $R=32,000$), and \citeauthor{Papics2013} (2013; $R=85,000$), and low-resolution spectroscopy by \citeauthor{Balonaetal2011} (2011; $R=550$). \citet{Balonaetal2011} also derived $T_\mathrm{eff}$ and $\log g$ for 38 B stars from Str\"omgren photometry. We searched for these stars in LAMOST Data Release 2 (DR2)\footnote[1]{http://dr2.lamost.org}, and made comparisons with KIC and the follow-up observations.
LAMOST DR2 comprises all of the observations before 2014 June, including the pilot survey and the first two years of the regular survey. More than $56,000$ spectra of {\it Kepler} targets have been collected. The stellar parameters of OBA stars are determined using the ULySS package\footnote[2]{http://ulyss.univ-lyon1.fr} \citep{KPBW2009, WSPGK2011}, which minimizes the $\chi^2$ value between an observed and a model spectrum via full spectral fitting within the 3900--5700\,\AA\ wavelength range (avoiding the region where the blue-arm and red-arm spectra are joined). The ELODIE library \citep{ELODIE} was used as a reference. For normal FGK-type stars, the atmospheric parameters are determined using the LAMOST stellar parameter pipeline \citep{Lpipeline, Wu2014, Ren2016}. The estimation of stellar parameters is sensitive to the signal-to-noise ratio (S/N) of the spectra \citep{Lpipeline}; in our selection of B stars, we therefore required an S/N in the $g$ band of at least 50.
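The $\chi^2$ fitting step can be illustrated schematically. The sketch below is purely hypothetical (a brute-force grid search in pure Python, not the actual ULySS implementation, which interpolates within the model grid and fits additional multiplicative polynomials): for each template spectrum, compute $\chi^2 = \sum_i (O_i - M_i)^2/\sigma_i^2$ and keep the template that minimizes it.

```python
# Minimal sketch of grid-based chi-square spectral fitting (illustration only;
# all names and the toy template grid below are hypothetical).

def chi_square(observed, model, sigma):
    """Chi-square between an observed and a model spectrum."""
    return sum((o - m) ** 2 / s ** 2 for o, m, s in zip(observed, model, sigma))

def best_fit(observed, sigma, grid):
    """Return the parameters of the grid template minimizing chi-square.

    `grid` maps parameter tuples, e.g. (Teff, logg), to model flux lists.
    """
    return min(grid, key=lambda params: chi_square(observed, grid[params], sigma))

# Toy example: three hypothetical templates; the observation matches the middle one.
grid = {
    (10000, 4.0): [1.0, 0.8, 0.9, 1.0],
    (15000, 3.7): [1.0, 0.6, 0.7, 1.0],
    (20000, 3.5): [1.0, 0.4, 0.5, 1.0],
}
obs = [1.0, 0.61, 0.69, 1.0]
sigma = [0.02] * 4
print(best_fit(obs, sigma, grid))  # -> (15000, 3.7)
```

In practice the minimization runs over a continuous parameter space; the discrete grid here only conveys the idea.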
By cross identification, we found $18$ {\it Kepler} MS B stars in LAMOST DR2 with other spectroscopic or photometric follow-up observations. The KIC numbers and stellar parameters from KIC, LAMOST, and the literature for these stars are listed in Table~\ref{tab1}. \citet{Papics2013} identified KIC 4931738 as a double-lined spectroscopic binary system consisting of two MS B stars. The $T_\mathrm{eff}$ of KIC 4931738 in Table~\ref{tab1} is that of the primary derived from disentangled spectra. The errors in stellar parameters from LAMOST and high-resolution spectroscopy are also given in Table~\ref{tab1}. External calibrations show that the average uncertainties in LAMOST $T_\mathrm{eff}$, $\log g$, and [Fe/H] of OBA stars are $4.3\%$, 0.18\,dex, and 0.15\,dex, respectively \citep{MILES, WSPGK2011}. For the KIC, the typical errors in $T_\mathrm{eff}$ and $\log g$ are 200\,K and 0.5\,dex, respectively \citep{LTSetal2011}. For low-resolution spectroscopy, the formal fitting errors in $T_\mathrm{eff}$ and $\log g$ are 200\,K and 0.05\,dex, respectively \citep{Balonaetal2011}. The errors in parameters derived from Str\"omgren photometry were not provided \citep{Balonaetal2011}, but the calibration method \citep{Balona1994} gives typical errors of $5\%$ in $T_\mathrm{eff}$ and 0.1\,dex in $\log g$.
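As an aside, the internal fitting errors quoted in Table~\ref{tab1} and the external calibration uncertainty ($4.3\%$ in $T_\mathrm{eff}$) can be combined in quadrature to estimate a total error budget; this combination rule is our own illustrative assumption, as the paper quotes the two error sources separately.

```python
import math

def total_teff_error(teff, fit_err, calib_frac=0.043):
    """Combine the internal fitting error (K) with the external calibration
    uncertainty (a fraction of Teff) in quadrature. Illustrative only."""
    return math.hypot(fit_err, calib_frac * teff)

# KIC 3756031: LAMOST Teff = 15869 K with a formal fitting error of 73 K;
# the external 4.3% calibration term dominates the total budget.
print(round(total_teff_error(15869, 73)))  # -> 686
```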
\begin{deluxetable*}{ccclcrllcc}
\rotate
\tablewidth{0pt}
\tablecaption{Comparison of stellar parameters of {\it Kepler} B stars.\label{tab1}}
\tablehead{
\colhead{KIC Number} & \multicolumn{2}{c}{KIC} & \multicolumn{3}{c}{LAMOST} & \multicolumn{2}{c}{Spectroscopy} & \multicolumn{2}{c}{Photometry}\\
& \colhead{$T_\mathrm{eff}$ (K)} & \colhead{$\log g$} & \colhead{$T_\mathrm{eff}$ (K)} & \colhead{$\log g$} & \colhead{[Fe/H]} & \colhead{$T_\mathrm{eff}$ (K)} & \colhead{$\log g$} & \colhead{$T_\mathrm{eff}$ (K)} & \colhead{$\log g$}}
\startdata
\phn3629496 & \phn9796 & 4.512 & 11603\,$\pm$\,45 & 3.99\,$\pm$\,0.04 & -0.02\,$\pm$\,0.02 & 11320\,$\pm$\,210\,$^a$ & 3.75\,$\pm$\,0.10\,$^a$ & \nodata & \nodata \\
\phn3756031 & 11177 & 4.241 & 15869\,$\pm$\,73 & 3.69\,$\pm$\,0.06 & 0.01\,$\pm$\,0.02 & 15980\,$\pm$\,310\,$^b$ & 3.75\,$\pm$\,0.06\,$^b$ & 16310\,$^c$ & 4.19\,$^c$ \\
\phn3839930 & 11272 & 4.277 & 15407\,$\pm$\,63 & 3.70\,$\pm$\,0.05 & -0.09\,$\pm$\,0.03 & 16500\,$^c$ & 4.2\,$^c$ & 17160\,$^c$ & 4.51\,$^c$ \\
\phn3865742 & 11406 & 4.355 & 19007\,$\pm$\,144 & 3.87\,$\pm$\,0.07 & -0.06\,$\pm$\,0.04 & 19500\,$^c$ & 3.7\,$^c$ & 20190\,$^c$ & 4.20\,$^c$ \\
\phn4136285 & \nodata & \nodata & 15968\,$\pm$\,55 & 3.68\,$\pm$\,0.05 & 0.12\,$\pm$\,0.02 & 16000\,$\pm$\,1000\,$^d$ & 4.0\,$\pm$\,0.1\,$^d$ & \nodata & \nodata \\
\phn4931738 & 10136 & 4.420 & 12726\,$\pm$\,115 & 4.05\,$\pm$\,0.10 & 0.03\,$\pm$\,0.06 & 13730\,$\pm$\,200\,$^e$ & 3.97\,$\pm$\,0.05\,$^e$ & \nodata & \nodata \\
\phn5130305 & \phn9533 & 4.144 & 10497\,$\pm$\,69 & 3.91\,$\pm$\,0.08 & -0.34\,$\pm$\,0.05 & 10670\,$\pm$\,200\,$^b$ & 3.86\,$\pm$\,0.07\,$^b$ & 10190\,$^c$ & 4.38\,$^c$ \\
\phn7599132 & 10251 & 3.624 & 10687\,$\pm$\,43 & 3.94\,$\pm$\,0.04 & -0.30\,$\pm$\,0.03 & 11090\,$\pm$\,140\,$^b$ & 4.08\,$\pm$\,0.06\,$^b$ & 10560\,$^c$ & 4.39\,$^c$ \\
\phn8057661 & \phn8230 & 3.974 & 20063\,$\pm$\,126 & 3.84\,$\pm$\,0.04 & -0.11\,$\pm$\,0.03 & \nodata & \nodata & 21360\,$^c$ & 4.23\,$^c$ \\
\phn8177087 & \phn9645 & 4.104 & 14909\,$\pm$\,86 & 3.41\,$\pm$\,0.07 & 0.04\,$\pm$\,0.03 & 13330\,$\pm$\,220\,$^b$ & 3.42\,$\pm$\,0.06\,$^b$ & 13380\,$^c$ & 3.79\,$^c$ \\
\phn8381949 & \phn9782 & 4.394 & 20402\,$\pm$\,90 & 3.74\,$\pm$\,0.05 & -0.08\,$\pm$\,0.02 & 24500\,$^c$ & 4.3\,$^c$ & 21000\,$^c$ & 3.82\,$^c$ \\
\phn8389948 & \phn8712 & 3.612 & 10049\,$\pm$\,52 & 3.77\,$\pm$\,0.07 & -0.43\,$\pm$\,0.04 & 10240\,$\pm$\,340\,$^b$ & 3.86\,$\pm$\,0.12\,$^b$ & \phn9690\,$^c$ & 4.33\,$^c$ \\
\phn8714886 & \phn9142 & 4.093 & 16285\,$\pm$\,64 & 3.76\,$\pm$\,0.05 & -0.10\,$\pm$\,0.02 & 19000\,$^c$ & 4.3\,$^c$ & 18505\,$^c$ & 4.49\,$^c$ \\
\phn9964614 & \phn8915 & 4.067 & 19109\,$\pm$\,80 & 3.69\,$\pm$\,0.03 & -0.01\,$\pm$\,0.02 & 20300\,$^c$ & 3.9\,$^c$ & 19471\,$^c$ & 3.75\,$^c$ \\
10536147 & 12490 & 5.890 & 20605\,$\pm$\,97 & 3.78\,$\pm$\,0.05 & -0.03\,$\pm$\,0.02 & 20800\,$^c$ & 3.8\,$^c$ & \nodata & \nodata \\
10658302 & 14809 & 6.076 & 15179\,$\pm$\,91 & 3.75\,$\pm$\,0.07 & -0.02\,$\pm$\,0.05 & 15900\,$^c$ & 3.9\,$^c$ & \nodata & \nodata \\
10960750 & \nodata & \nodata & 20219\,$\pm$\,45 & 3.80\,$\pm$\,0.02 & -0.11\,$\pm$\,0.02 & 19960\,$\pm$\,880\,$^b$ & 3.91\,$\pm$\,0.11\,$^b$ & 20141\,$^c$ & 3.85\,$^c$ \\
11360704 & 12400 & 4.934 & 17644\,$\pm$\,78 & 3.70\,$\pm$\,0.07 & -0.10\,$\pm$\,0.02 & 20700\,$^c$ & 4.1\,$^c$ & 17644\,$^c$ & 3.89\,$^c$ \\
\enddata
\tablerefs{$^a$\,Tkachenko et al. (2013), $^b$\,Lehmann et al. (2011), $^c$\,Balona et al. (2011), $^d$\,Bohlender \& Monin (2011), $^e$\,P\'apics et al. (2013).}
\end{deluxetable*}
Figure~\ref{fig1} shows the temperature difference $\Delta T_\mathrm{eff}$ between other sources and LAMOST. As can be seen, the LAMOST values are in good agreement with the spectroscopic and photometric data, especially those from high-resolution spectroscopy. The KIC estimates are clearly too low. The underestimation increases almost linearly with increasing $T_\mathrm{eff}$ and exceeds $10,000$\,K at the high-temperature end. In the work of Lehmann et al. (2011), $\Delta T_\mathrm{eff}$ was fitted with a second-order polynomial with the zero point set at $7,000$\,K. Such an underestimation in $T_\mathrm{eff}$ can result in the misclassification of B stars. The three stars in Table~\ref{tab1} with $T\mathrm{_{eff}(KIC)}<10,000$\,K (KIC 3629496, KIC 5130305, and KIC 8389948) were indeed cataloged as A stars \citep{Balona2013}.
\begin{figure}[htbp]
\epsscale{}
\plotone{f1.png}
\caption{Difference in effective temperature between other observations and LAMOST as a function of LAMOST $T_\mathrm{eff}$. The open circles show $\Delta T_\mathrm{eff}$ between KIC and LAMOST for the four B stars reported in this paper.}
\label{fig1}
\end{figure}
The surface gravity difference $\Delta(\log g)$ between other observations and LAMOST is shown in Figure~\ref{fig2}. Although the results from low-resolution spectroscopy and Str\"omgren photometry appear closer to the KIC values, the comparison between LAMOST and the KIC shows that KIC $\log g$ is systematically higher, confirming the finding of \citet{LTSetal2011} based on high-precision spectroscopy. Given the large error in KIC $\log g$, the difference in $\log g$ is less significant than that in $T_\mathrm{eff}$. However, for two stars, KIC 10536147 and KIC 10658302, KIC $\log g$ does show a clear overestimation, large enough for them to be misclassified as compact objects. In fact, KIC 10658302 was selected as a compact-object candidate based on KIC data, but was excluded after spectroscopic observation \citep{Ostensen2010}. KIC 10536147 was listed as a white dwarf candidate \citep{MNJMK2012} and was thus omitted from the study of MS B stars \citep{BBDDDC2015}. We can rule out this possibility based on its LAMOST spectrum and updated parameters.
\begin{figure}[htbp]
\epsscale{}
\plotone{f2.png}
\caption{Difference in surface gravity between other observations and LAMOST as a function of LAMOST $\log g$.}
\label{fig2}
\end{figure}
Only eight stars in Table~\ref{tab1} have metallicities derived from high-resolution spectroscopy. The comparison between the LAMOST metallicities and the high-resolution spectroscopic values is shown in Figure~\ref{fig3}. The discrepancies are obvious, even when the external uncertainty in LAMOST [Fe/H] is taken into account. Unlike for $T_\mathrm{eff}$ and $\log g$, the estimation of metallicity from low-resolution spectra of B stars is less reliable owing to the scarcity of metallic lines. Nevertheless, the results in Figure~\ref{fig3} are surprising: the LAMOST and spectroscopic values are almost anticorrelated. Further systematic comparisons based on larger samples are needed to assess the reliability of the LAMOST metallicities of B stars.
\begin{figure}[htbp]
\epsscale{}
\plotone{f3.png}
\caption{Comparison of metallicity between LAMOST and high-resolution spectroscopy.}
\label{fig3}
\end{figure}
\section{Identification of four B-type variable stars}\label{sec3}
As shown in the previous section, the underestimation in KIC $T_\mathrm{eff}$ and overestimation in KIC $\log g$ resulted in the misclassification of B stars. The consistency between LAMOST and high-resolution spectroscopy proves that we can use LAMOST data to identify such misclassified stars. We searched in LAMOST DR2 for B stars in the {\it Kepler} field with S/N$(g) \ge 50$ and obtained another four MS B stars with large $T_\mathrm{eff}$ discrepancies between the KIC and LAMOST ($\Delta T_\mathrm{eff}/T_\mathrm{eff}\mathrm{(LAMOST)}\simeq -50\%$). Their stellar parameters are listed in Table~\ref{tab2}, including the radial velocity, $v_\mathrm{r}$, derived from LAMOST spectra. The $T_\mathrm{eff}$ differences between the KIC and LAMOST are shown in Figure~\ref{fig1} as open circles. The four stars have very similar $T_\mathrm{eff}$ and $\log g$, and their metallicities are close to the solar value.
\begin{deluxetable*}{ccccccccr}
\tablewidth{0pt}
\tablecaption{Stellar parameters of the four B stars.\label{tab2}}
\tablehead{
\colhead{KIC Number} & \multicolumn{3}{c}{KIC} && \multicolumn{4}{c}{LAMOST} \\
& \colhead{$T_\mathrm{eff}$ (K)} & \colhead{$\log g$} & \colhead{[Fe/H]} && \colhead{$T_\mathrm{eff}$ (K)} & \colhead{$\log g$} & \colhead{[Fe/H]} & \colhead{$v_\mathrm{r}$ (km s$^{-1}$)}}
\startdata
5309849 & 9084 & 4.058 & \phn0.048 && 16369\,$\pm$\,185 & 3.72\,$\pm$\,0.12 & -0.17\,$\pm$\,0.07 & -12.1\,$\pm$\,6.6\\
6462033 & 8387 & 4.315 & \phn0.276 && 16808\,$\pm$\,93\phn & 3.73\,$\pm$\,0.06 & -0.04\,$\pm$\,0.03 & 4.6\,$\pm$\,3.0\\
8255796 & 8583 & 3.971 & \phn0.023 && 16622\,$\pm$\,118 & 3.71\,$\pm$\,0.07 & \phn0.01\,$\pm$\,0.04 & -117.8\,$\pm$\,3.9\\
8324482 & 8159 & 3.819 & -0.017 && 16469\,$\pm$\,71\phn & 3.72\,$\pm$\,0.05 & -0.06\,$\pm$\,0.03 & -18.5\,$\pm$\,2.4\\
\enddata
\tablecomments{The LAMOST errors are internal ones (see Section \ref{sec2} for realistic error estimates).}
\end{deluxetable*}
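The $\simeq -50\%$ fractional differences quoted above can be verified directly from the KIC and LAMOST temperatures in Table~\ref{tab2}; the following is plain arithmetic, not part of any pipeline:

```python
# Fractional T_eff differences (KIC - LAMOST) / LAMOST for the
# four B stars, using the values listed in Table 2.
kic    = [9084, 8387, 8583, 8159]
lamost = [16369, 16808, 16622, 16469]
frac = [(k - l) / l for k, l in zip(kic, lamost)]
print([round(f, 2) for f in frac])  # roughly -0.45 to -0.50
```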
Figure~\ref{fig4} shows the LAMOST spectra of these stars. Following \citet{GC2009}, we compared the He I $\lambda4471$ and Mg II $\lambda4481$ lines in the four apparently B-type spectra and find that all four stars are earlier than $\sim$B5, consistent with the $T_\mathrm{eff}$ estimates. The relatively broad Balmer lines indicate that all of them are MS stars.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.4]{f4.png}
\caption{LAMOST spectra of the four B stars. The inserts are close-up views of the blue ends.}
\label{fig4}
\end{figure}
Among the four stars, KIC 8324482 shows unusually broad wings in H$\gamma$ and H$\delta$, a feature that is very rare in normal B-type stars. Although the Balmer lines show wide wings, the weak lines (e.g., the He I and Mg II lines) show no clear broadening, which is not yet fully understood.
\citet{USIU2014} carried out spectroscopic observations of KIC 6462033. By fitting the H$\alpha$ and H$\beta$ line profiles, they found that the KIC $T_\mathrm{eff}$ was too high; their best fit yielded $T_\mathrm{eff}=7150$\,K. However, fitting hydrogen line profiles alone does not necessarily yield a conclusive determination, because hydrogen lines reach maximum strength at $\sim$A2 on the MS and weaken toward both higher and lower temperatures. The presence of helium lines, on the other hand, provides strong evidence for a hot star.
\begin{figure}[htbp]
\epsscale{}
\plotone{f5.png}
\caption{Kiel diagram of MS B stars using LAMOST stellar parameters. The green stars show the four misclassified B stars reported in this paper. The black symbols are the 17 B stars in Table~1 as classified by \citet{Balonaetal2011} and \citet{MNJMK2012}: circles are SPB stars, squares are hybrid stars, and triangles are stars with rotational modulation. The dotted lines show evolutionary tracks for selected masses computed with \texttt{MESA} using the same input physics as in Section~3.1 of \citet{MESA2015}. The solid and long-dashed black lines represent the zero-age main sequence and terminal-age main sequence, respectively. The boundaries of the SPB (thick solid red line) and $\beta$ Cep (thick dashed blue line) instability strips are plotted for $M \le 9\mathrm{M}_\odot$ models (from \citealt{MESA2015}).}
\label{fig5}
\end{figure}
Using LAMOST $T_\mathrm{eff}$ and $\log g$, the four stars are placed in the Kiel diagram in Figure~\ref{fig5} as green stars. The evolutionary tracks (dotted lines) were calculated using the \texttt{MESA} code \citep{MESA2015} with OPAL opacities \citep{OPAL} and proto-solar abundances from \citet{AGSS2009}. \citet{MESA2015} derived the theoretical instability strips of massive stars for oscillation modes of spherical degree $l=0$--$3$. The boundaries of the combined instability strips for $l \le 3$ modes were converted from the H-R diagram through stellar models, and are plotted as a solid red line for SPB stars and a dashed blue line for $\beta$ Cep stars in Figure~\ref{fig5}. The four stars are located near the $6M_\odot$ track and well within the SPB instability strip.
For comparison, the positions and oscillation types of the 17 stars in Table~1 are shown as filled black symbols, based on their LAMOST stellar parameters and the classifications by \citet{Balonaetal2011} and \citet{MNJMK2012}. They are in good agreement with the theoretical instability strips. The three stars with rotational modulation are cooler than the red edge of the SPB instability strip. All of the other 14 stars lie within the SPB instability strip and indeed show low-frequency oscillations. The hybrid stars are generally hotter than pure SPB stars and lie mostly within the overlapping region of the SPB and $\beta$ Cep instability strips. The oscillations of MS B stars are excited by the $\kappa$ mechanism operating in the partial ionization zone of iron-group elements; the sizes of the theoretical instability strips and of their overlapping region are therefore sensitive to the adopted metallicity and to the opacity calculation in the Z-bump. Recent updates to the opacity calculations predict wider theoretical instability strips for MS B stars \citep{WFCKG2015, Moravveji2016}.
\section{Oscillation properties}\label{sec4}
The oscillation properties of all four stars have been previously studied. \citet{DBADR2011} carried out an automated variability analysis of all $\sim 150,000$ public {\it Kepler} light curves from the first quarter (Q1). KIC 5309849, KIC 6462033, KIC 8255796, and KIC 8324482 were assigned with the largest probability to the categories rotational modulation, active stars, miscellaneous, and $\gamma$ Dor stars, respectively. As noted by \citet{UMGetal2011}, the classifier used only three independent frequencies with the highest amplitudes, and no external information was taken into account. Based on KIC parameters and light curves with longer time coverage, KIC 6462033 \citep{UMGetal2011, Balona2014, USIU2014} and KIC 8324482 \citep{Balona2014} were identified as $\gamma$ Dor pulsators, while KIC 8255796 was classified as a rotating A star \citep{Balona2013}. As shown in Figure~\ref{fig5}, these four stars all fall into the instability strip of SPB stars with updated stellar parameters. We discuss their variability based on LAMOST parameters and new frequency analysis using all four years of the {\it Kepler} data.
\subsection{{\it Kepler} photometry}\label{sec4.1}
{\it Kepler} data are available in two cadences: long cadence (LC) with 29.4-min integrations and short cadence (SC) with 58.9-s integrations. LC data are delivered in quarters (Q0--Q17). Most quarters lasted for approximately one fourth of {\it Kepler}'s orbital period of 372.5\,d, which is the time interval between two satellite rolls performed to keep the solar panels pointing toward the Sun. Q0 (9.7\,d), Q1 (33.5\,d), and Q17 (31.8\,d) are short quarters. SC data are further divided into months, corresponding to the 32-d interval between two data downlinks. At any given time, only 512 targets were observed in SC mode. In our sample of four B stars, SC data are only available for KIC 6462033 for one month, while LC data are almost continuous throughout the mission for all four stars (Q1-Q17 for KIC 8255796 and Q0-Q17 for the other three). Therefore we use LC data in the following analysis. The time span $\Delta T$ is 1459.5\,d for Q1--Q17 data, and 1470.5\,d for Q0--Q17 data. The corresponding Rayleigh frequency resolution is $1/\Delta T \approx 0.0007$\,d$^{-1}$. The Nyquist frequency of LC data is 24.5\,d$^{-1}$.
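The resolution and Nyquist values quoted above follow directly from the time base and the cadence; as a quick check (plain arithmetic, not part of any pipeline):

```python
# Rayleigh resolution and Nyquist frequency of Kepler LC data,
# from the time spans and cadence quoted in the text.
T_q1_q17 = 1459.5                   # time span of Q1--Q17 data, days
lc_cadence_d = 29.4 / (60 * 24)     # 29.4-min LC integration, in days

rayleigh = 1.0 / T_q1_q17           # ~0.0007 d^-1
nyquist = 1.0 / (2 * lc_cadence_d)  # ~24.5 d^-1
print(round(rayleigh, 4), round(nyquist, 1))
```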
Both light curve files and target pixel files can be downloaded from the Mikulski Archive for Space Telescopes\footnote[3]{http://archive.stsci.edu}. The light curves were extracted from the target pixel files using standard masks, which sometimes miss pixels with significant flux. We therefore re-extracted the light curves from the target pixel files using custom masks, designed for each quarter separately so as to include all of the pixels with significant flux while avoiding possible contamination. The extraction produces new simple aperture photometry (SAP) light curves. Systematic artifacts were then removed from the SAP light curves by subtracting the cotrending basis vectors (CBVs). The use of at least two CBVs is recommended; in practice, we found that 2--4 CBVs usually sufficed for artifact removal. We then filtered out all of the bad data points with SAP\_QUALITY $\ne 0$, and converted the fluxes to parts per million (ppm) around the mean value of each quarter.
The creation of custom masks, extraction, and cotrending of new light curves were carried out using the software \texttt{PyKE} \citep{PyKE}. Figure~\ref{fig6} shows an example of the improvement in extracted light curves. As can be seen, the new light curves show significantly less instrumental trends and larger variability amplitude.
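The cotrending step can be sketched as a linear least-squares fit of the CBVs to each quarter's light curve. The function below is a minimal illustration (not the \texttt{PyKE} implementation), and its argument names \texttt{flux} and \texttt{cbvs} are hypothetical:

```python
import numpy as np

def cotrend(flux, cbvs):
    """Subtract a least-squares combination of cotrending basis vectors
    (CBVs) from a SAP light curve and convert the result to ppm.

    flux : 1-D array, SAP fluxes of one quarter
    cbvs : 2-D array of shape (n_points, n_cbvs), e.g. 2--4 CBVs
    """
    # Fit flux - <flux> ~ cbvs @ coeffs in the least-squares sense
    coeffs, *_ = np.linalg.lstsq(cbvs, flux - flux.mean(), rcond=None)
    detrended = flux - cbvs @ coeffs
    # Express as parts per million around the quarterly mean
    return (detrended / detrended.mean() - 1.0) * 1e6
```

A light curve whose only variation is a linear combination of the supplied CBVs is reduced to a flat ppm series by this fit.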
\begin{figure}[htbp]
\epsscale{}
\plotone{f6.png}
\caption{Comparison of light curves of Q3 LC data of KIC 6462033. Top panel: SAP light curve using the standard mask. Bottom panel: new SAP light curve (black) using the custom mask and cotrended light curve (red) using 3 CBVs. Notice the different flux levels in the two panels.}
\label{fig6}
\end{figure}
\subsection{Frequency analysis}\label{sec4.2}
We used the software \texttt{SigSpec} \citep{SigSpec} to carry out the frequency analysis. It performs the discrete Fourier transform and least-squares fitting iteratively and automatically, which greatly facilitates the extraction of a large number of frequencies, as is common in the analysis of space photometry. For a given time series, \texttt{SigSpec} carries out step-by-step prewhitening of the most significant signal components. When a new peak is extracted at each step, the software fits all of the detected frequencies, amplitudes, and phases simultaneously. The error estimation of frequency, amplitude, and phase follows the formulas given by \citet{KRW2008}. If the frequency separation is less than three times the Rayleigh frequency resolution, the appropriate upper limit of the frequency error is $1/(4\Delta T) \approx 0.00017$\,d$^{-1}$.
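A minimal sketch of such an iterative prewhitening loop is given below. This is not \texttt{SigSpec} itself (which also refits all components simultaneously and computes the {\it sig} statistic); the frequency grid and fitting choices here are illustrative assumptions:

```python
import numpy as np

def prewhiten(t, y, n_freq, fmax, oversample=10):
    """Iteratively locate the strongest peak in a discrete Fourier
    transform and subtract the best-fitting sinusoid (simplified)."""
    T = t[-1] - t[0]
    freqs = np.arange(1, int(fmax * T * oversample)) / (T * oversample)
    resid = y - y.mean()
    detected = []
    for _ in range(n_freq):
        # amplitude spectrum of the current residuals
        amp = 2 * np.abs([np.mean(resid * np.exp(-2j * np.pi * f * t))
                          for f in freqs])
        f0 = freqs[np.argmax(amp)]
        # least-squares sinusoid fit at the detected frequency
        A = np.column_stack([np.sin(2 * np.pi * f0 * t),
                             np.cos(2 * np.pi * f0 * t)])
        (a, b), *_ = np.linalg.lstsq(A, resid, rcond=None)
        resid = resid - A @ np.array([a, b])
        detected.append((f0, np.hypot(a, b)))
    return detected
```

Applied to a noise-free sinusoid, the loop recovers the input frequency and amplitude to within the grid spacing.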
\texttt{SigSpec} uses the parameter {\it sig} to quantify the spectral significance of an amplitude. In the analysis of ground-based data, a frequency is usually accepted as significant if its amplitude S/N $\ge 4.0$ (Breger et al. 1993), which is approximately equivalent to {\it sig} $\ge 5.46$ \citep{SigSpec}. However, the S/N $\ge 4.0$ criterion may not be suitable for space-based observations because it often yields a huge number of closely spaced low-amplitude frequencies, which may not all be physical \citep[e.g.][]{Papics2013}. The cumulative errors in the iterative prewhitening process influence subsequent frequency extraction and give rise to spurious frequencies \citep{VanReeth2015}; the number of low-amplitude frequencies is therefore likely overestimated under the S/N $\ge 4.0$ criterion. In practice, a more stringent limit on S/N \citep[e.g.][]{Papics2013} or {\it sig} \citep[e.g.][]{Chapellier2011} is often used to avoid false detections and reduce the number of frequencies to a manageable amount. In this work, we adopted the {\it sig} $> 20$ criterion \citep{Chapellier2011} and extracted $\sim\,500$ frequencies for each star.
The extracted frequencies were then checked for combinations (including harmonics). A frequency was accepted as a combination if it differed from a combination of independent frequencies by less than the Rayleigh frequency resolution $1/\Delta T$. If two frequencies were separated by less than $1/\Delta T$, they were considered indistinguishable \citep{Papics2012}. The chance of finding spurious combinations, which result from mere mathematical coincidence, increases with the number of detected frequencies \citep{Papics2012}. Except for KIC 8255796, we only considered second- or third-order combinations formed by two independent frequencies \citep{Papics2013, Papics2014}.
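The combination search described above can be sketched as follows; the function and its arguments are illustrative, not the actual code used:

```python
from itertools import product

def find_combinations(freqs, parents, rayleigh, max_order=3):
    """Flag frequencies matching low-order combinations n*f_i + m*f_j of
    independent parent frequencies to within the Rayleigh resolution.
    Returns {frequency: (n, i, m, j)} for each match found."""
    combos = {}
    for f in freqs:
        for (i, fi), (j, fj) in product(enumerate(parents), repeat=2):
            if i >= j:
                continue
            for n, m in product(range(-max_order, max_order + 1), repeat=2):
                order = abs(n) + abs(m)
                if order < 2 or order > max_order:
                    continue
                if abs(f - (n * fi + m * fj)) < rayleigh:
                    combos[f] = (n, i, m, j)
    return combos
```

For example, with $f_1=0.11894$\,d$^{-1}$ and $f_2=0.13198$\,d$^{-1}$ (Table~\ref{tab:8255796}), the frequency 0.36942\,d$^{-1}$ is matched as $2f_1+f_2$, as noted in that table.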
High-order g modes of the same degree in SPB stars are expected to show equidistant period spacings in the asymptotic approximation for a non-rotating, chemically homogeneous star \citep{Tassoul1980}. The chemical gradient near the core causes oscillatory deviations from the constant period spacing, which can be further modified by extra mixing \citep{Miglio2008}, and stellar rotation introduces shifts in the period spacing series \citep{Bouabid2013}. The period spacing patterns in SPB stars are therefore sensitive probes of the stellar interior. After excluding true combinations, we searched for possible quasi-equidistant period spacings and possible distorted period series. The former can be detected with the autocorrelation function \citep{Papics2014}, while the latter requires manual identification \citep{Papics2017}.
\subsection{Results for individual stars}\label{sec4.3}
Figure~\ref{fig7} shows light-curve segments and the amplitude spectra of the four stars using all four years of the data. Low frequencies are clearly seen in each panel. Although the stars are very close in the Kiel diagram (Figure~\ref{fig5}), their amplitude spectra show different characteristics. We describe the results of the frequency analysis for each star below.
\begin{figure*}[htbp]
\centering
\gridline{\fig{f7.png}{0.8\textwidth}{}}
\caption{Light-curve segments (left panels) and amplitude spectra (right panels) of the four B stars. The inserts in the right panels are close-up views of low frequencies.}
\label{fig7}
\end{figure*}
\subsubsection{KIC 5309849}\label{sec4.3.1}
We extracted 512 significant frequencies with {\it sig} $> 20$. The light curve and amplitude spectrum of KIC 5309849 are dominated by the most significant frequency, $f_1=0.18505$\,d$^{-1}$, which has a much higher amplitude and significance than the other frequencies. $f_1$ was previously identified as the rotation frequency \citep{NGSK2013, RRB2013}, and we do find its harmonic $2f_1$, which is usually taken as a sign of rotation \citep{Balona2013}. However, there is also the possibility of ellipsoidal variability, in which case KIC 5309849 would be a close binary and the very stable $f_1$ would be related to the orbital motion. Unfortunately, we do not have multi-epoch LAMOST observations to confirm its binary nature.
Rotational modulation or ellipsoidal variability alone is not sufficient to describe the variability of KIC 5309849. After extracting $f_1$, there remain many significant low frequencies below 2\,d$^{-1}$ that are most likely caused by pulsation. We detected a short tilted period series consisting of nine consecutive radial orders among the high-amplitude frequencies, as listed in Table~\ref{tab:5309849} and shown in Figure~\ref{fig8}. The upward slope indicates that these modes are retrograde modes shifted by rotation \citep{VanReeth2015}. Given the stellar parameters, the detected period spacing values are unlikely to be between consecutive dipole modes, but rather between modes of higher degree. The compatibility of $f_1$ being the rotation frequency with the observed period spacing needs further confirmation from theoretical modeling.
\begin{table}
\centering
\caption{Period spacing series in KIC 5309849.}\label{tab:5309849}
\begin{tabular}{ccccc}
\tablewidth{0pt}
\hline
\hline
& \colhead{Frequency} & \colhead{Period} & \colhead{Amplitude} & \colhead{{\it sig}} \\
& \colhead{(d$^{-1}$)} & \colhead{(d)} & \colhead{(ppm)} & \\
& \colhead{$\pm$\,0.00017}& & & \\
\hline
1 & 1.15055 & 0.86915 & 209\,$\pm$\,8\phn & 628 \\
2 & 1.21977 & 0.81983 & 156\,$\pm$\,7\phn & 450 \\
3 & 1.30480 & 0.76640 & 178\,$\pm$\,7\phn & 508 \\
4 & 1.38798 & 0.72047 & 194\,$\pm$\,6\phn & 856 \\
5 & 1.46852 & 0.68096 & 166\,$\pm$\,12 & 177 \\
6 & 1.54803 & 0.64598 & 158\,$\pm$\,6\phn & 619 \\
7 & 1.64623 & 0.60745 & 905\,$\pm$\,15 & 3417 \\
8 & 1.75519 & 0.56974 & 552\,$\pm$\,11 & 2228 \\
9 & 1.84875 & 0.54091 & \phn27\,$\pm$\,3\phn & 67 \\
\hline
\end{tabular}
\end{table}
\begin{figure}[htbp]
\epsscale{}
\plotone{f8.png}
\caption{Period spacing series in KIC 5309849. Top panel: a close-up view of the amplitude spectrum of KIC 5309849. The peaks of the period spacing series are marked with dashed lines. Bottom panel: observed period spacings. Error bars are smaller than the symbol size.}
\label{fig8}
\end{figure}
Besides the large number of low frequencies, we also extracted six high frequencies, listed in Table~\ref{tab:hfreq}. They are well separated from the low frequencies and cannot be explained as low-order combinations. The first three frequencies in Table~\ref{tab:hfreq} were detected in each of the long quarters Q2--Q16; they are thus stable over time, although their amplitudes are relatively low. Therefore, KIC 5309849 is a possible SPB/$\beta$ Cep hybrid. In Figure~\ref{fig5}, KIC 5309849 lies outside the plotted theoretical $\beta$ Cep instability strip. Even when we consider the new theoretical instability strips of $\beta$ Cep and SPB stars by \citet{Moravveji2016}, which were calculated with improved opacity tables, KIC 5309849 is still cooler than the red edge of the overlapping region where the hybrids are expected to occur.
\begin{table}
\centering
\caption{High frequencies of KIC 5309849.}\label{tab:hfreq}
\begin{tabular}{cccc}
\tablewidth{0pt}
\hline
\hline
& \colhead{Frequency} & \colhead{Amplitude} & \colhead{{\it sig}} \\
& \colhead{(d$^{-1}$)} & \colhead{(ppm)} & \\
& \colhead{$\pm$\,0.00017}& & \\
\hline
1 & 23.80189 & 32\,$\pm$\,4 & 79 \\
2 & 23.67482 & 30\,$\pm$\,4 & 70 \\
3 & 18.14375 & 15\,$\pm$\,3 & 36 \\
4 & 23.26568 & 13\,$\pm$\,2 & 32 \\
5 & 22.89368 & 11\,$\pm$\,2 & 27 \\
6 & 23.38008 & 10\,$\pm$\,2 & 22 \\
\hline
\end{tabular}
\end{table}
\subsubsection{KIC 6462033}\label{sec4.3.2}
KIC 6462033 shows a rich spectrum of low frequencies. \citet{UMGetal2011} reported $46$ independent frequencies, whereas \citet{USIU2014} identified five independent frequencies and hundreds of combinations and harmonics. Our selection criterion resulted in 509 frequencies. We also checked the amplitude spectrum of SC data in Q3.3, and did not find significant peaks beyond 24.5\,d$^{-1}$.
The frequencies of KIC 6462033 show a clear quasi-equidistant period spacing pattern in the period range of 0.2\,d to 2.4\,d. The black line in Figure~\ref{fig9} shows the autocorrelation function of the original periodogram (transformed into period space). Although the bumps of the real period spacings are visible, the three highest peaks at 0.104\,d, 0.278\,d, and 0.382\,d correspond to the period differences between the three most significant frequencies. To avoid the strong influence of a few high-amplitude frequencies on the autocorrelation function, we gave the same artificial power to all of the frequencies with {\it sig} $> 50$ and set the power of the other frequencies to zero. The autocorrelation function of the modified periodogram is shown in red in Figure~\ref{fig9}; we can easily identify the peaks at 0.109\,d and at approximately two and three times this value.
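The modified-periodogram autocorrelation can be sketched as follows; the function name, binning, and lag range are illustrative assumptions, not the actual code used:

```python
import numpy as np

def spacing_autocorrelation(periods, sigs, sig_cut=50,
                            p_range=(0.2, 2.4), n_bins=2200, n_lags=500):
    """Autocorrelate a binned periodogram in period space after giving
    unit power to every peak above the significance cut and zero power
    to everything else."""
    bw = (p_range[1] - p_range[0]) / n_bins   # bin width in days
    grid = np.zeros(n_bins)
    for p, s in zip(periods, sigs):
        if s > sig_cut and p_range[0] <= p <= p_range[1]:
            idx = min(int(round((p - p_range[0]) / bw)), n_bins - 1)
            grid[idx] = 1.0                   # artificial unit power
    acf = np.array([np.dot(grid[:-k], grid[k:]) for k in range(1, n_lags)])
    lags = np.arange(1, n_lags) * bw          # period lag in days
    return lags, acf
```

A strictly equidistant series spaced by 0.109\,d then produces the highest autocorrelation peak at a lag of 0.109\,d, with secondary peaks at multiples of that value.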
\begin{figure}[htbp]
\epsscale{}
\plotone{f9.png}
\caption{Autocorrelation function of the original periodogram (black) and a modified periodogram where frequencies with $sig > 50$ were given an artificial power, while all others were given zero power (red), both calculated from the period range of 0.2\,d to 2.4\,d.}
\label{fig9}
\end{figure}
\begin{figure}[htbp]
\epsscale{}
\plotone{f10.png}
\caption{The same as Figure~\ref{fig8}, but for KIC 6462033.}
\label{fig10}
\end{figure}
\begin{table}
\centering
\caption{Period spacing series in KIC 6462033.}\label{tab:6462033}
\begin{tabular}{ccccr}
\tablewidth{0pt}
\hline
\hline
& \colhead{Frequency} & \colhead{Period} & \colhead{Amplitude} & \colhead{{\it sig}} \\
& \colhead{(d$^{-1}$)} & \colhead{(d)} & \colhead{(ppm)} & \\
& \colhead{$\pm$\,0.00017}& & & \\
\hline
\phn1 & 0.43334 & 2.30764 & 132\,$\pm$\,12 & 108\\
\phn2 & 0.45481 & 2.19870 & \phn40\,$\pm$\,4\phn & 86\\
\phn3 & 0.47873 & 2.08887 & \phn55\,$\pm$\,4\phn & 125\\
\phn4 & 0.50651 & 1.97431 & \phn77\,$\pm$\,9\phn & 69\\
\phn5 & 0.53614 & 1.86518 & \phn57\,$\pm$\,4\phn & 137\\
\phn6 & 0.56872 & 1.75834 & \phn71\,$\pm$\,5\phn & 176\\
\phn7 & 0.60910 & 1.64177 & \phn17\,$\pm$\,2\phn & 44\\
\phn8 & 0.65686 & 1.52240 & \phn41\,$\pm$\,4\phn & 95\\
\phn9 & 0.70352 & 1.42143 & \phn47\,$\pm$\,4\phn & 110\\
10 & 0.76069 & 1.31459 & \phn46\,$\pm$\,4\phn & 114\\
11 & 0.83496 & 1.19766 & \phn34\,$\pm$\,4\phn & 69\\
12 & 0.92521 & 1.08084 & 585\,$\pm$\,14 & 1677\\
13 & 1.02317 & 0.97736 & 483\,$\pm$\,12 & 1481\\
14 & 1.18847 & 0.84142 & 186\,$\pm$\,7\phn & 614\\
15 & 1.42972 & 0.69944 & 613\,$\pm$\,14 & 1778\\
16 & 1.68451 & 0.59365 & \phn20\,$\pm$\,3\phn & 44\\
17 & 2.03655 & 0.49103 & 462\,$\pm$\,12 & 1302\\
18 & 2.60128 & 0.38443 & \phn74\,$\pm$\,5\phn & 207\\
19 & 3.43849 & 0.29083 & 443\,$\pm$\,11 & 1400\\
\hline
\end{tabular}
\end{table}
We found a series of 19 consecutive periods with quasi-equidistant spacing, as listed in Table~\ref{tab:6462033} and marked in the top panel of Figure~\ref{fig10}. The period series includes the high-amplitude frequencies between 0.2\,d and 1.1\,d and extends to longer periods with lower amplitudes. The mean period spacing is 9681\,s, compatible with the period spacing of dipole modes of a $\sim 6M_\odot$ MS star \citep{Miglio2008, Degroote2010}. The deviation from uniform period spacing is characterized by a periodic component, as shown in the bottom panel of Figure~\ref{fig10}. The period spacing pattern provides rich information about the evolutionary status and internal structure of the star \citep{Miglio2008}. The mean spacing and the periodicity indicate that KIC 6462033 is in the middle stage of core hydrogen burning. The amplitude of the periodic component is very large ($\sim$ 12,000\,s) at $\sim$ 0.8\,d and decreases as the period increases. The decreasing amplitude is a signature of extra mixing outside the convective core and a smooth gradient of chemical composition; otherwise we would see no amplitude variation with period. The detailed shape of the chemical composition gradient determines the amplitude of the periodic component \citep{Degroote2010}. However, it is also possible that the period 0.84142\,d is not a member of the period series, while the real peaks at $\sim$ 0.8\,d and $\sim$ 0.9\,d are not excited to detectable amplitudes. In that scenario, the period spacings around these peaks would be much smaller, and so would the mean period spacing of the series.
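As a quick arithmetic check, the quoted mean spacing follows from the first and last periods of the 19-member series in Table~\ref{tab:6462033}:

```python
# Mean period spacing of the 19-period series of KIC 6462033,
# from the first and last periods listed in the table.
P_first, P_last = 2.30764, 0.29083         # days
mean_dP = (P_first - P_last) / 18 * 86400  # 18 spacings, in seconds
print(round(mean_dP))  # 9681
```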
Quasi-equidistant period spacings in MS B stars have previously been detected in the CoRoT targets HD 50230 \citep{Degroote2010} and HD 43317 \citep{HD43317}, with smaller numbers of consecutive dipole modes. So far, the slow rotator KIC 10526294 has been the only well-studied SPB star showing quasi-equidistant period spacings \citep{Papics2014}. Its 19 consecutive dipole modes enabled \citet{Moravveji2015} to put stringent constraints on core overshooting and diffusive mixing, and \citet{Triana2015} derived its internal rotation profile using the rotationally split multiplets. Rotationally affected period series have also been detected in SPB stars using space photometry \citep{Papics2015,Papics2017,Zwintz2017}, providing rich information about stellar internal structure \citep{MTAM2016}. With the detected long period spacing series, detailed seismic modeling of KIC 6462033 is also possible. However, we have not detected reliable rotational splitting in its periodogram.
\subsubsection{KIC 8255796}\label{sec4.3.3}
For KIC 8255796, we extracted 472 significant frequencies. The amplitude spectrum shows multiple frequency groups (FGs) with quasi-equidistant distribution. The most significant peaks are in the first and third groups at $\sim 0.12$\,d$^{-1}$ and $\sim 0.37$\,d$^{-1}$, respectively. An inspection of the quarterly amplitude spectra revealed unresolved FGs and amplitude variations. Table~\ref{tab:8255796} lists the 55 frequencies with {\it sig} $> 100$ using all four years of the data. The frequencies are arranged according to their FGs.
\begin{deluxetable*}{crrcccrrc}
\renewcommand\arraystretch{1.1}
\tabletypesize{\small}
\tablewidth{0pt}
\tablecaption{Significant frequencies of KIC 8255796.\label{tab:8255796}}
\tablehead{
\colhead{Frequency} & \colhead{Amplitude} & \colhead{{\it sig}} & \colhead{Note} & & \colhead{Frequency} & \colhead{Amplitude} & \colhead{{\it sig}} & \colhead{Note}\\
\colhead{(d$^{-1}$)} & \colhead{(ppm)} & & & & \colhead{(d$^{-1}$)} & \colhead{(ppm)} & & \\
\colhead{$\pm$\,0.00017}& & & & & \colhead{$\pm$\,0.00017}& & & }
\startdata
\multicolumn{4}{c}{FG0} & & 0.36942 &1410\,$\pm$\,31 &2051 & $2f_1+f_2$ \\
0.02191 & 80\,$\pm$\,8\phn & 114 & $-f_1+f_2-f_3+f_4$ & & 0.37445 & 133\,$\pm$\,8\phn & 222 & $f_1+2f_2+f_3-f_4$ \\
\multicolumn{4}{c}{FG1} & & 0.37702 & 119\,$\pm$\,8\phn & 183 & $2f_1-f_2+2f_3$ \\
0.11305 & 88\,$\pm$\,8\phn & 121 & $f_1-f_2+2f_3-f_4$ & & 0.38178 & 173\,$\pm$\,10 & 288 & $2f_1+f_4$ \\
0.11894 & 779\,$\pm$\,17 &2068 & $f_1$ & & 0.38238 & 513\,$\pm$\,14 &1321 & $f_1+2f_2$ \\
0.12243 & 147\,$\pm$\,9\phn & 252 & $f_1-f_2+f_3$ & & 0.38856 & 338\,$\pm$\,18 & 330 & $2f_1-2f_2+2f_3+f_4$ \\
0.12554 & 210\,$\pm$\,11 & 330 & $f_1-2f_2+2f_3$ & & 0.38886 &1008\,$\pm$\,23 &1887 & $2f_1-2f_2+2f_3+f_4$ \\
0.12603 & 292\,$\pm$\,10 & 713 & $f_1-2f_2+2f_3$ & & 0.38947 & 118\,$\pm$\,11 & 110 & $f_1+2f_3$ \\
0.12881 & 378\,$\pm$\,12 & 968 & $2f_2-f_3$ & & 0.39560 & 101\,$\pm$\,8\phn & 127 & $3f_2$ \\
0.13198 & 762\,$\pm$\,17 &1885 & $f_2$ & & 0.39762 & 92\,$\pm$\,7\phn & 144 & $4f_3-f_4$ \\
0.13530 & 378\,$\pm$\,12 & 950 & $f_3$ & & \multicolumn{4}{c}{FG4} \\
0.13827 & 208\,$\pm$\,10 & 416 & $-f_2+2f_3$ & & 0.47690 & 81\,$\pm$\,7\phn & 112 & $3f_1+2f_2-f_4$ \\
0.14133 & 125\,$\pm$\,9\phn & 191 & $f_2-f_3+f_4$ & & 0.48852 & 86\,$\pm$\,7\phn & 118 & $3f_1+f_2$ \\
0.14424 & 137\,$\pm$\,9\phn & 219 & $f_4$ & & 0.50775 & 101\,$\pm$\,8\phn & 149 & $f_1+5f_2-2f_3$ \\
\multicolumn{4}{c}{FG2} & & 0.51452 & 108\,$\pm$\,8\phn & 167 & $f_1+3f_2$ \\
0.22606 & 114\,$\pm$\,8\phn & 182 & $2f_1+f_2-f_4$ & & 0.52086 & 146\,$\pm$\,9\phn & 258 & $f_1+f_2+2f_3$ \\
0.23551 & 100\,$\pm$\,8\phn & 144 & $f_1+3f_2-f_3-f_4$ & & \multicolumn{4}{c}{FG5} \\
0.23748 & 105\,$\pm$\,8\phn & 144 & $2f_1$ & & 0.58714 & 87\,$\pm$\,8\phn & 121 & $4f_1+2f_2+f_3-2f_4$\\
0.24055 & 76\,$\pm$\,7\phn & 104 & $2f_1-f_2+f_3$ & & 0.59291 & 90\,$\pm$\,8\phn & 139 & $4f_1+3f_3-2f_4$\\
0.24229 & 93\,$\pm$\,8\phn & 130 & $f_1+f_2+f_3-f_4$ & & \multicolumn{4}{c}{FG6} \\
0.24529 & 184\,$\pm$\,9\phn & 350 & $f_1+2f_3-f_4$ & & 0.74270 & 86\,$\pm$\,8\phn & 121 & $4f_1+f_2+f_3$\\
0.25440 & 147\,$\pm$\,9\phn & 222 & $f_1+f_3$ & & \multicolumn{4}{c}{FG7} \\
0.25572 & 127\,$\pm$\,8\phn & 248 & $2f_2+f_3-f_4$ & & 0.86648 & 82\,$\pm$\,8\phn & 115 & $4f_1+2f_2+2f_3-f_4$\\
0.25692 & 111\,$\pm$\,8\phn & 168 & $f_1-f_2+2f_3$ & & 0.90579 & 83\,$\pm$\,8\phn & 115 & $f_1+5f_2+2f_3-f_4$\\
0.26371 & 84\,$\pm$\,7\phn & 119 & $2f_2$ & & 0.91529 & 133\,$\pm$\,9\phn & 222 & $f_1+2f_2+5f_3-f_4$\\
\multicolumn{4}{c}{FG3} & & \multicolumn{4}{c}{FG8} \\
0.34734 & 97\,$\pm$\,8\phn & 146 & $3f_1+f_3-f_4$ & & 1.07887 & 87\,$\pm$\,8\phn & 127 & $-f_1+6f_2+3f_3$\\
0.35253 & 141\,$\pm$\,8\phn & 249 & $4f_1-f_2-f_3+f_4$ & & \multicolumn{4}{c}{FG9} \\
0.35771 & 692\,$\pm$\,34 & 401 & $2f_1+2f_2-f_4$ & & 1.16084 & 121\,$\pm$\,9\phn & 192 & $3f_1+5f_2+f_4$\\
0.35788 &1220\,$\pm$\,24 &2449 & $2f_1+2f_2-f_4$ & & 1.17314 & 174\,$\pm$\,10 & 308 & $f_1+6f_2+3f_3-f_4$\\
0.36348 & 495\,$\pm$\,37 & 171 & $3f_1-2f_2+2f_3$ & & 1.18445 & 76\,$\pm$\,8\phn & 101 & $f_1+5f_2+3f_3$\\
0.36367 & 832\,$\pm$\,23 &1287 & $2f_1+2f_3-f_4$ & & 1.21078 & 208\,$\pm$\,10 & 397 & $7f_1+3f_2+2f_3-2f_4$\\
0.36923 & 790\,$\pm$\,28 & 751 & $2f_1+f_2$ & & & & &\\
\enddata
\end{deluxetable*}
B stars that show FGs in their periodograms were classified as a separate type of variable by \citet{Balonaetal2011}. The origin of the FGs was suspected to be related to rotation, because of their resemblance to Be stars. \citet{Kurtzetal2015} demonstrated that the FGs in $\gamma$ Dor, SPB, and Be stars can be explained by a few base frequencies and their combination frequencies (including harmonics). They interpreted the base frequencies as g-mode oscillations, and showed that ``the combination frequencies can have amplitudes greater than the base frequency amplitudes." For KIC 8255796, we have found that all the frequencies in Table~\ref{tab:8255796} can be explained using only four independent base frequencies in FG1 and their combinations, although high-order combinations are needed to explain the higher FGs.
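As a toy illustration of this combination-frequency identification, the brute-force search below tests low-order integer combinations $n_1 f_1 + \dots + n_4 f_4$ of the four base frequencies against an observed frequency. The base values follow Table~\ref{tab:8255796}, but the coefficient bound and matching tolerance are illustrative choices, not the criteria used in this work.

```python
from itertools import product

# Toy search for low-order integer combinations of base frequencies.
# Base frequencies f1..f4 (in d^-1) follow the table; max_coeff and tol
# are illustrative assumptions.
base = [0.11894, 0.13198, 0.13530, 0.14424]

def find_combination(freq, base, max_coeff=3, tol=5e-4):
    """Return the lowest-order coefficient tuple (n1, ..., nK), with each
    |ni| <= max_coeff, whose combination matches freq within tol, or None."""
    best = None
    for coeffs in product(range(-max_coeff, max_coeff + 1), repeat=len(base)):
        if all(n == 0 for n in coeffs):
            continue
        value = sum(n * f for n, f in zip(coeffs, base))
        if abs(value - freq) < tol:
            order = sum(abs(n) for n in coeffs)
            if best is None or order < best[0]:
                best = (order, coeffs)
    return None if best is None else best[1]

# 0.36942 d^-1 is identified as 2f1 + f2 in the table
print(find_combination(0.36942, base))
```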
Amplitude modulation adds complexity to the amplitude spectrum of KIC 8255796. As can be seen in Table~\ref{tab:8255796}, there are several pairs of indistinguishable frequencies in FG1 and FG3 with frequency separations smaller than the Rayleigh frequency resolution, e.g. 0.36923\,d$^{-1}$ and 0.36942\,d$^{-1}$. These frequency pairs were derived from unresolved peaks in the amplitude spectrum, a result of amplitude modulation \citep{BK2014}. A single frequency value could not completely remove the signal from the amplitude spectrum, because both the frequency value and the amplitude changed over the four years, producing an unresolved peak when all data were used.
In order to track the variations, we carried out time-resolved analysis by calculating the amplitude spectra of 360-d subsets of the data moved in 20-day steps. For each subset, the amplitudes and phases were calculated with the frequencies fixed at the values in Table~\ref{tab:8255796} using the software \texttt{Period04} \citep{Period04}. Figure~\ref{fig11} shows the amplitude and phase variations of the four base frequencies in FG1, i.e. $f_1=0.11894$\,d$^{-1}$ (black dots), $f_2=0.13198$\,d$^{-1}$ (purple triangles), $f_3=0.13530$\,d$^{-1}$ (blue diamonds), and $f_4=0.14424$\,d$^{-1}$ (green squares), as well as the three most significant frequencies in FG3, i.e. 0.36942\,d$^{-1}$ (red dotted line), 0.38886\,d$^{-1}$ (orange dashed line), and 0.35788\,d$^{-1}$ (yellow solid line). The differences between FG1 and FG3 are evident. Except for $f_4$, which has a relatively low amplitude, the base frequencies have quite stable amplitudes and phases, while the frequencies in FG3 show significant and almost continuous amplitude growth over 4 yr. The amplitudes of 0.36942\,d$^{-1}$ and 0.35788\,d$^{-1}$ have increased by more than $100\%$, and their phases show very similar decreases. The significant frequencies in other FGs also show amplitude variations, but at lower amplitude levels. The coexistence of stable frequencies with frequencies having large amplitude variations may indicate a more complex origin of the FGs than combination effects, because one expects to see related variations of the base frequencies and their combinations, and the combination frequencies usually have much lower amplitudes than the base frequencies \citep{BKBMH2016}.
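The time-resolved analysis can be sketched as a sliding-window least-squares fit. The window length (360 d) and step (20 d) follow the text; the fitting routine below is a simplified stand-in for the fixed-frequency fits performed with \texttt{Period04}, intended for synthetic toy data rather than the actual light curve.

```python
import math

# Simplified sketch of the time-resolved analysis: for a fixed frequency,
# fit amplitude and phase by linear least squares in 360-d windows stepped
# by 20 d. This mimics fixed-frequency fitting, but on toy data.

def amp_phase(times, flux, freq):
    """Least-squares fit of flux ~ a*sin(2*pi*f*t) + b*cos(2*pi*f*t);
    returns (amplitude, phase)."""
    s = [math.sin(2 * math.pi * freq * t) for t in times]
    c = [math.cos(2 * math.pi * freq * t) for t in times]
    ss = sum(x * x for x in s)
    cc = sum(x * x for x in c)
    sc = sum(x * y for x, y in zip(s, c))
    fs = sum(f * x for f, x in zip(flux, s))
    fc = sum(f * x for f, x in zip(flux, c))
    det = ss * cc - sc * sc  # 2x2 normal equations
    a = (fs * cc - fc * sc) / det
    b = (fc * ss - fs * sc) / det
    return math.hypot(a, b), math.atan2(b, a)

def sliding_windows(times, flux, freq, window=360.0, step=20.0):
    """Amplitude/phase of `freq` in sliding subsets of the light curve."""
    t0, t1 = min(times), max(times)
    out, start = [], t0
    while start + window <= t1:
        sel = [(t, f) for t, f in zip(times, flux) if start <= t < start + window]
        if sel:
            ts, fl = zip(*sel)
            amp, ph = amp_phase(ts, fl, freq)
            out.append((start + window / 2, amp, ph))
        start += step
    return out
```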
\begin{figure}[htbp]
\epsscale{}
\plotone{f11.png}
\caption{Amplitude (top panel) and phase (bottom panel) variations of the four base frequencies in FG1 and three most significant frequencies in FG3: 0.11894\,d$^{-1}$ (black dots), 0.13198\,d$^{-1}$ (purple triangles), 0.13530\,d$^{-1}$ (blue diamonds), 0.14424\,d$^{-1}$ (green squares), 0.36942\,d$^{-1}$ (red dotted line), 0.38886\,d$^{-1}$ (orange dashed line), and 0.35788\,d$^{-1}$ (yellow solid line).}
\label{fig11}
\end{figure}
Amplitude modulation of p modes has been found in {\it Kepler} A stars \citep[e.g.][]{BK2014}. The ensemble study by \citet{BKBMH2016} shows that amplitude modulation is common among $\delta$ Sct stars. Various mechanisms can cause amplitude and/or phase variations, but the origin of variations in some stars remains unclear. Amplitude modulation has not been previously reported for B stars with FGs. A similar ensemble study would reveal whether such a phenomenon is common among these stars, and help the theoretical interpretation of FGs.
\subsubsection{KIC 8324482}\label{sec4.3.4}
We extracted 446 frequencies for KIC 8324482. The dominant frequencies are mostly within the period range of 0.3\,d--1.1\,d. Due to the dense frequency spectrum, we were unable to detect clear period spacing using the modified autocorrelation function. We manually searched for possible period spacing patterns among high-amplitude frequencies, and managed to find a period series consisting of 14 consecutive dipole modes that show quasi-equidistant period spacing. The frequencies are listed in Table~\ref{tab:8324482} and marked in the top panel of Figure~\ref{fig12}.
The average spacing of the detected period series is 8108\,s, indicating a lower effective temperature compared with KIC 6462033. Again, we see periodicity in the observed period spacings, as shown in the bottom panel of Figure~\ref{fig12}. The period series is not long enough to draw conclusions on the amplitude variation of the periodic component, but the longer period suggests that KIC 8324482 is at an earlier evolutionary stage than KIC 6462033.
Our search for period spacing started with the most significant peaks in the periodogram \citep{Papics2017}. Less significant frequencies were then added after identifying possible period spacing. Therefore, dominant frequencies are selected with priority in this approach. It is possible that real peaks are ignored due to low significance, as discussed in Section \ref{sec4.3.2}.
\begin{figure}[htbp]
\epsscale{}
\plotone{f12.png}
\caption{The same as Figure~\ref{fig8}, but for KIC 8324482.}
\label{fig12}
\end{figure}
\begin{table}
\centering
\caption{Period spacing series in KIC 8324482.}\label{tab:8324482}
\begin{tabular}{cccrr}
\tablewidth{0pt}
\hline
\hline
& \colhead{Frequency} & \colhead{Period} & \colhead{Amplitude} & \colhead{{\it sig}} \\
& \colhead{(d$^{-1}$)} & \colhead{(d)} & \colhead{(ppm)} & \\
& \colhead{$\pm$\,0.00017}& & & \\
\hline
\phn1 & 2.88685 & 0.34640 & 426\,$\pm$\,18 & 559\\
\phn2 & 2.23827 & 0.44677 & 221\,$\pm$\,11 & 347\\
\phn3 & 1.80946 & 0.55265 & 747\,$\pm$\,21 & 1206\\
\phn4 & 1.60262 & 0.62398 & 914\,$\pm$\,34 & 686\\
\phn5 & 1.38497 & 0.72204 & 1131\,$\pm$\,28 & 1553\\
\phn6 & 1.20177 & 0.83211 & 28\,$\pm$\,5\phn & 22\\
\phn7 & 1.06125 & 0.94229 & 1116\,$\pm$\,28 & 1550\\
\phn8 & 0.97479 & 1.02587 & 646\,$\pm$\,19 & 1075\\
\phn9 & 0.91047 & 1.09833 & 1849\,$\pm$\,31 & 3433\\
10 & 0.84479 & 1.18372 & 147\,$\pm$\,10 & 209\\
11 & 0.78391 & 1.27566 & 76\,$\pm$\,7\phn & 99\\
12 & 0.72497 & 1.37936 & 76\,$\pm$\,8\phn & 86\\
13 & 0.67699 & 1.47712 & 61\,$\pm$\,8\phn & 47\\
14 & 0.63845 & 1.56631 & 56\,$\pm$\,7\phn & 56\\
\hline
\end{tabular}
\end{table}
The use of KIC parameters led to some confusion about the oscillation properties of KIC 6462033 and KIC 8324482. They were among the 12 high-temperature $\gamma$ Dor stars listed by \citet{Balona2014}. It was difficult to explain the existence of pure g-mode oscillators at $T_\mathrm{eff} \sim 8000$\,K, so \citet{Balona2014} suggested the possibility of binarity or incorrect $T_\mathrm{eff}$. We confirm here that the latter is the case for these two stars.
\section{Conclusion}\label{sec5}
Using LAMOST data, we have confirmed the underestimation of KIC $T_\mathrm{eff}$ and the overestimation of KIC $\log g$ for B stars. The consistency of stellar parameters between LAMOST and other follow-up observations demonstrates the potential of LAMOST spectra in the variability classification and statistical study of early-type stars in the {\it Kepler} field.
Using LAMOST estimates of stellar parameters, our search for misclassifications led to the finding of four MS B stars which had been previously cataloged as A stars. We examined the spectra and confirmed their identities as MS B stars. Starting from {\it Kepler} target pixel files, we carried out detailed frequency analysis of the four stars. Although they lie very close to each other in the Kiel diagram, their amplitude spectra show very different characteristics. Based on the results of our analysis, we classify KIC 6462033 and KIC 8324482 as SPB stars, and KIC 5309849 as a candidate SPB/$\beta$ Cep hybrid with rotational or ellipsoidal variability. The amplitude spectrum of KIC 8255796 shows both FGs and amplitude modulation. The cause of amplitude modulation and its relation to FGs require further investigation.
Our search for period spacing patterns led to the detection of period series consisting of consecutive g modes in KIC 5309849, KIC 6462033, and KIC 8324482. The short tilted period series in KIC 5309849 indicates internal rotation, while the quasi-equidistant period spacing in KIC 6462033 and KIC 8324482 is a sign of slow rotation. The long period series in KIC 6462033 provides the opportunity for further seismic modeling and detailed studies of its near-core structure and internal mixing.
Our results demonstrate the necessity of accurate stellar parameters in the study of stellar variability in the {\it Kepler} field. The combination of asteroseismology and ground-based follow-up observations will help obtain more reliable estimates of stellar parameters \citep{DZZetal2014, Liuetal2015} and fully exploit the potential of {\it Kepler}'s high-precision photometry.
\acknowledgments
We thank the anonymous referee for helpful comments and suggestions that greatly improved the paper. This work was supported by National Natural Science Foundation of China (NSFC) through grants 11403039, 11633005, 11373032, 11333003, and 11403056, and the Joint Fund of Astronomy of NSFC and Chinese Academy of Sciences (CAS) through grants U1231119 and U1231202. C.Z. acknowledges support from the Young Researcher Grant of National Astronomical Observatories, CAS (NAOC). C.L. acknowledges support from the Strategic Priority Research Program ``The Emergence of Cosmological Structures" of CAS (Grant No. XDB09000000). J.F. acknowledges support from the National Basic Research Program of China (973 Program 2014CB845700 and 2013CB834900). Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope LAMOST) is a National Major Scientific Project built by CAS. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by NAOC.
\section{Introduction}
Many physical models and engineering problems are described by partial differential equations (PDEs). These equations usually depend on a number of parameters, which can correspond to physical or geometrical properties of the model as well as to initial or boundary conditions of the problem, and can be deterministic or subject to some source of uncertainty. Many applications require either computing the solutions in real time, i.e., almost instantaneously for given input parameters, or computing a huge number of solutions corresponding to diverse parameters. The latter is, for example, the case when we want to evaluate statistics of some features associated with the solution. In both cases, methods that deliver very fast, yet reliable, numerical solutions to these equations are necessary. \\
Reduced order models \cite{quarteroni2011certified,rozza:book} were developed to face this problem. The main idea of these methods is that in many practical cases the solutions obtained for different values of the parameters belong to a lower-dimensional manifold of the functional space they naturally belong to. Therefore, instead of looking for the solution in a high-dimensional approximation subspace of this functional space, one can look for it in a low-dimensional, suitably tailored, \emph{reduced order space}. Obviously, this considerably reduces the computational effort, once the reduced spaces are available. Different methods have been developed to construct these subspaces in an efficient and reliable way: two popular techniques are the greedy method and the Proper Orthogonal Decomposition (POD) method. \\
As said above, many of the parameters describing the problem at hand can be subject to uncertainties. In several applications, keeping track of these uncertainties could make the model more flexible and more reliable than a deterministic model. In this case we talk about PDEs with random inputs, or \emph{stochastic} PDEs.
Many different methods are available to solve such problems numerically, such as stochastic Galerkin \cite{sullivan:uq}, stochastic collocation \cite{nobile:collocation,hestaven:collocation,lorenzo}, as well as Monte-Carlo \cite{sullivan:uq}. More recently, methods based on reduced order models have been developed \cite{peng:thesis,peng:w_elliptic_rnd,peng:comparison,chen2017reduced,spannring2017weighted,torlo2017stabilized}. In practice, they aim to construct low-dimensional approximation subspaces in a \emph{weighted} fashion, i.e. more weight is given to more relevant values of the parameters, according to an underlying probability distribution. Nevertheless, to our knowledge, no reduced order method for stochastic problems based completely on the POD algorithm has been proposed so far.
\\
In this paper we consider the case of linear elliptic PDEs under the same assumptions as in \cite{peng:w_elliptic_rnd}. We briefly discuss the framework and introduce the weighted greedy algorithm. The rest of the work is devoted to defining a weighted proper orthogonal decomposition method, which is the original contribution of this work. While the greedy algorithm aims to minimize the error in an $L^\infty$ norm over the parameter space, the POD method minimizes the $L^2$ norm of the error, which, in the stochastic case, is the mean square error. The integral of the squared error is discretized according to some quadrature rule, and the reduced order space is found by minimizing this approximated error. From the algorithmic point of view, this corresponds to applying a principal component analysis to a weighted (pre-conditioned) matrix. The main difference from the greedy algorithm is that our method does not require the availability of an upper bound on the error. On the other hand, our method requires the \emph{offline} computation of a higher number of high-fidelity solutions, which may be costly. Therefore, to keep the computational cost low, we adopt sparse quadrature rules, which reach the same accuracy at a significantly lower computational cost, especially in the case of a high-dimensional parameter space. The resulting weighted POD method retains its accuracy as long as the underlying quadrature rule is properly chosen, i.e., according to the distribution. Indeed, since the reduced order space is a subspace of the space spanned by the solutions at the parameter nodes given by the quadrature rule, these training parameters need to be representative of the probability distribution on the parameter space. Thus the choice of a proper quadrature rule plays a fundamental role.
\\
The work is organized as follows. An elliptic PDE with random input data is set up, with appropriate assumptions on both the random coefficient and the forcing term, in Section 2.
In Section 3 we describe the affinity assumptions that must be made on our model to allow the use of the reduced basis methods; the greedy algorithm is presented in its weighted version. Our proposed method is then described as based on the minimization of an approximated mean square error. Sparse interpolation techniques for a more efficient integral approximation in higher dimensions are also presented.
Numerical examples (Section 4) for both a low-dimensional ($N=4$) problem and its higher dimensional counterpart ($N=9$) are presented as verification of the efficiency and convergence properties of our method.
Some brief concluding remarks are made in Section 5.
\section{Problem setting}
Let $(\Omega,\mathcal{F},P)$ be a complete probability space, where $\Omega$ is the set of possible outcomes, $\mathcal{F}$ is the $\sigma$-algebra of events and $P$ is the probability measure. Moreover, let $D\subseteq\mathbb{R}^d$ ($d=1,2,3$) be an open, bounded and convex domain with Lipschitz boundary $\partial D$. We consider the following stochastic elliptic problem: find $u:\Omega \times \overline{D} \to \mathbb{R}$, such that it holds for $P$-a.e. $\omega\in\Omega$ that
\begin{align}
- \nabla \cdot (a(\omega,x)\cdot\nabla u(\omega, x)) & = f(\omega,x), \qquad x \in D, \label{eq:stoc_ell_pde} \\
u(\omega,x) & = 0, \qquad x \in \partial D. \notag
\end{align}
Here $a$ is a strictly positive random diffusion coefficient and $f$ is a random forcing term; the operator $\nabla$ is considered w.r.t. the spatial variable $x$.
Motivated by the regularity results obtained in \cite{peng:w_elliptic_rnd}, we make the following assumptions:
\begin{enumerate}
\item The forcing term $f$ is square integrable in both variables, i.e.
$$
\norm{f}^2_{L^2_P(\Omega)\otimes L^2(D)} \doteq \int_{\Omega \times D} f^2(\omega,x)\,dP(\omega)dx < \infty.
$$
\item The diffusion term is $P$-a.s. uniformly bounded, i.e. there exist $0<a_{\text{min}}<a_{\text{max}}<\infty$ such that
$$
P\parr*{a(\cdot,x)\in (a_{\text{min}},a_{\text{max}}) \,\forall\, x \in D}=1.
$$
\end{enumerate}
If we introduce the Hilbert space $\mathbb{H} = L^2_P(\Omega)\otimes H^1_0(D)$, we can consider the weak formulation of problem \eqref{eq:stoc_ell_pde}: find $u\in \mathbb{H}$ s.t.
\begin{equation}\label{eq:stoc_ell_pde_ww}
\mathbb{A}(u,v) = \mathbb{F}(v), \qquad \text{for every }v\in\mathbb{H},
\end{equation}
where $\mathbb{A}:\mathbb{H}\times\mathbb{H}\to\mathbb{R}$ and $\mathbb{F}:\mathbb{H}\to\mathbb{R}$ are, respectively, the bilinear and linear forms
$$
\mathbb{A}(u,v) = \int_{\Omega\times D} a\, \nabla u \cdot \nabla v \,dP(\omega) dx, \qquad \mathbb{F}(v) = \int_{\Omega\times D} v\cdot f\,dP(\omega) dx.
$$
The above is called the weak-weak formulation. Thanks to assumptions 1 and 2, the Lax-Milgram theorem \cite{brezis} guarantees the existence of a unique solution $u\in\mathbb{H}$.
More than the solution itself, we will be interested in statistics of values related to the solution, e.g., the expectation
$\expval[s(u)]$, where $s(u)$ is some linear functional of the solution. In particular, the numerical experiments in the following are all performed for the computation of $\expval\parq{u}$.
\subsection{Weak-strong formulation}
To numerically solve problem \eqref{eq:stoc_ell_pde_ww} we first need to reduce $(\Omega,\mathcal{F},P)$ to a finite dimensional probability space. This can be accomplished, up to a desired accuracy, through, for example, the Karhunen--Lo\`eve expansion \cite{loeve:probability}. The random input is in this case characterized by $K$ uncorrelated random variables $Y_1(\omega),\dots,Y_K(\omega)$, so that we can write
$$
a(\omega,\cdot) = a(\mathbf{Y}(\omega),\cdot), \qquad
f(\omega,\cdot) = f(\mathbf{Y}(\omega),\cdot),
$$
and hence (thanks to the Doob-Dynkin lemma \cite{oksendal})
$$
u(\omega,\cdot) = u(\mathbf{Y}(\omega),\cdot),
$$
where $\mathbf{Y}(\omega) = (Y_1(\omega),\dots,Y_K(\omega))$. We furthermore assume that $\mathbf{Y}$ has a continuous probability distribution with density function $\rho:\mathbb{R}^K\to\mathbb{R}^+$ and that $Y_k(\Omega) \subseteq \Gamma_k$ for some compact sets $\Gamma_k\subset \mathbb{R}$, $k=1,\dots,K$.
In case the initial probability density is not compactly supported, we can reduce to this case by truncating the function $\rho$ on a compact set, up to a desired accuracy. Our problem can then be reformulated in terms of the parameter $y\in\Gamma\doteq \prod_{k=1}^K\Gamma_k$: find $u:\Gamma\to \mathbb{V} \doteq H_0^1(D)$ such that
\begin{equation}\label{eq:stoc_ell_pde_ws}
A(u(y),v;y) = F(v;y) \qquad \text{for all } v\in\mathbb{V},
\end{equation}
for a.e. $y\in \Gamma$, where $A(\cdot,\cdot;y)$ and $F(\cdot;y)$ are, respectively, the parametrized bilinear and linear forms defined as
$$
A(u,v;y) = \int_D a(y,x)\, \nabla u(x) \cdot \nabla v(x) \,dx, \qquad F(v;y) = \int_D v(x) f(y,x)\,dx,
$$
The parameter $y$ is distributed according to the probability measure $\rho(y)dy$. Problem \eqref{eq:stoc_ell_pde_ws} is called the weak-strong formulation of problem \eqref{eq:stoc_ell_pde_ww}. Again, the existence of a solution is guaranteed by the Lax-Milgram theorem.
Given an approximation space $\mathbb{V}_\delta \subseteq \mathbb{V}$ (typically a finite element space), with $\mathrm{dim}(\mathbb{V}_\delta) = N_\delta < \infty$, we consider the approximate problem: find $u_{N_\delta}:\Gamma \to \mathbb{V}_\delta$ such that
\begin{equation}\label{eq:stoc_ell_pde_fe}
A(u_{N_\delta}(y),v;y) = F(v;y) \qquad \text{for all } v\in\mathbb{V_\delta},
\end{equation}
for a.e. $y\in \Gamma$. We refer to problem \eqref{eq:stoc_ell_pde_fe} as the \emph{truth} (or high dimensional) problem and $u_{N_\delta}$ as the \emph{truth} solution. Consequently we approximate the output of interest as $s_{N_\delta} = s(u_{N_\delta}) \simeq s(u)$ and its statistics, e.g. $\expval[s_{N_\delta}] \simeq \expval[s(u)]$.
\subsection{Monte-Carlo methods}
A typical way to numerically solve problem \eqref{eq:stoc_ell_pde_ws} is to use a Monte-Carlo simulation. This procedure takes the following steps:
\begin{enumerate}
\item generate $M$ (given number of samples) independent and identically distributed (i.i.d.) copies of $\mathbf{Y}$, and store the obtained values $y^j$, $j=1,\dots,M$;
\item solve the deterministic problem \eqref{eq:stoc_ell_pde_ws} with $y^j$ and obtain solution $u_j=u(y^j)$, for $j=1,\dots,M$;
\item evaluate the solution statistics as averages, e.g.,
$$
\mathbb{E}[u] \simeq \langle u \rangle = \frac{1}{M}\sum_{j=1}^M u_j \qquad \textrm{or} \qquad
\mathbb{E}[s(u)] \simeq \langle s(u) \rangle = \frac{1}{M}\sum_{j=1}^M s(u_j)
$$
for some suitable output function $s$.
\end{enumerate}
Although the convergence rate of a Monte Carlo method is formally independent of the dimension of the random space, it is relatively slow (typically $1/\sqrt{M}$). Thus, one needs to solve a large number of deterministic problems to reach a desired accuracy, which implies a very high computational cost. In this framework, reduced order methods turn out to be very useful for reducing the computational cost, at the price of a (possibly small) additional error.
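The three steps above, and the slow $1/\sqrt{M}$ decay of the error, can be illustrated with a toy surrogate. Here \texttt{solve()} is a hypothetical closed-form stand-in for the deterministic PDE solve at parameter $y$, and $Y\sim U(0,1)$ is an assumed distribution for illustration.

```python
import math
import random
import statistics

# Sketch of the Monte-Carlo procedure with a cheap surrogate in place of
# the finite element solve. Everything here is illustrative.

def solve(y):
    return 1.0 / (1.0 + y)  # hypothetical stand-in for u(y)

def mc_expectation(M, seed=0):
    rng = random.Random(seed)
    # step 1: draw M i.i.d. samples of Y ~ U(0, 1)
    ys = [rng.uniform(0.0, 1.0) for _ in range(M)]
    # step 2: solve the deterministic problem at each sample
    us = [solve(y) for y in ys]
    # step 3: average
    return statistics.fmean(us)

# E[1/(1+Y)] = log(2) for Y ~ U(0,1); the error decays like 1/sqrt(M)
exact = math.log(2.0)
for M in (100, 10000):
    print(M, abs(mc_expectation(M) - exact))
```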
\section{Weighted reduced order methods}\label{section:w_rom}
Given the truth approximation space $\mathbb{V}_\delta$, reduced order algorithms look for an approximate solution of \eqref{eq:stoc_ell_pde_fe} in a reduced order space $\mathbb{V}_N\subseteq\mathbb{V}_\delta$, with $N\ll N_\delta$. Reduced order methods consist of an \emph{offline} phase and an \emph{online} phase.
During the offline phase, a hierarchical sequence of reduced order spaces $\mathbb{V}_1\subseteq\cdots\subseteq\mathbb{V}_{N_{\mathrm{max}}}$ is built. These spaces are sought within the subspace spanned by the solutions for a discrete training set of parameters $\Xi_t = \bra{y^1,\dots,y^{n_t}}\subseteq \Gamma$,
according to some specified algorithm.
In this work we focus on the greedy and POD algorithm, as we will detail in Sections 3.2 and 3.3, respectively.
During the online phase, the reduced order problem is solved: find $u_N:\Gamma \to \mathbb{V}_N$ such that
\begin{equation}\label{eq:stoc_ell_pde_ro}
A(u_N(y),v;y) = F(v;y) \qquad \text{for all } v\in\mathbb{V}_N,
\end{equation}
for a.e. $y\in \Gamma$. At this point, we can approximate the output of interest as $s_N = s(u_N) \simeq s(u)$ and its statistics, e.g. $\expval[s_N] \simeq \expval[s(u)]$. In the stochastic case, we would also like to take into account the probability distribution of the parameter $y\in\Gamma$. \emph{Weighted} reduced order methods consist of slight modifications of the offline phase so that a \emph{weight} $w(y_i)$ is associated to each sample parameter $y_i$, according to the probability distribution $\rho(y)dy$. As we will discuss, another crucial point is the choice of the training set $\Xi_t$.
For the sake of simplicity, from now on we omit the indices $\delta$, $N_\delta$, and we assume that our original problem \eqref{eq:stoc_ell_pde_ws} coincides with the truth problem \eqref{eq:stoc_ell_pde_fe}.
\subsection{Affine Decomposition assumption}
In order to ensure efficiency of the online evaluation, we need to assume that the bilinear form $A(\cdot,\cdot;y)$ and linear form $F(\cdot;y)$ admit an \emph{affine decomposition}. In particular, we require the diffusion term $a(y,x)$ and the forcing term $f(y,x)$ to have the following form:
\begin{align*}
a(y,x) & = a_0(x) + \sum_{k=1}^K a_k(x)y_k, \\
f(y,x) & = f_0(x) + \sum_{k=1}^K f_k(x)y_k,
\end{align*}
where $a_k\in L^\infty(D)$ and $f_k\in L^2(D)$, for $k=1,\dots,K$, and $y = (y_1,\dots,y_K)\in\Gamma$. Thus, the bilinear form $A(\cdot,\cdot;y)$ and linear form $F(\cdot;y)$ can be written as
\begin{equation}
A(u,v;y) = A_0(u,v) + \sum_{k=1}^K y_k A_k(u,v), \qquad F(v;y) = F_0(v) + \sum_{k=1}^K y_k F_k(v) \label{eq:aff_dec}
\end{equation}
with
$$
A_k(u,v) = \int_D a_k\, \nabla u \cdot \nabla v \,dx, \qquad F_k(v) = \int_D v f_k\,dx,
$$
for $k=1,\dots,K$. In the more general case that the functions $a(\cdot; y)$, $f(\cdot; y)$ do not depend on $y$ linearly, one can reduce to this case by employing the empirical interpolation method \cite{patera:eim}. A weighted version of this algorithm has also been proposed in \cite{peng:eim}.
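The computational payoff of the affine decomposition can be sketched as follows: the parameter-independent terms $A_k$, $F_k$ are assembled (and Galerkin-projected) once offline, and each online solve only combines them and solves a small $N\times N$ system. The matrices below are random stand-ins, not actual projected finite element operators.

```python
import numpy as np

# Sketch of the online assembly enabled by the affine decomposition.
# A_terms / F_terms are random placeholders for the reduced operators.
rng = np.random.default_rng(0)
N, K = 5, 3  # reduced dimension and number of affine terms (illustrative)

A_terms = [np.eye(N)] + [0.1 * rng.standard_normal((N, N)) for _ in range(K)]
F_terms = [rng.standard_normal(N) for _ in range(K + 1)]

def online_solve(y):
    """Assemble A(y) = A_0 + sum_k y_k A_k and F(y) = F_0 + sum_k y_k F_k,
    then solve the small reduced system A(y) u = F(y)."""
    A = A_terms[0] + sum(yk * Ak for yk, Ak in zip(y, A_terms[1:]))
    F = F_terms[0] + sum(yk * Fk for yk, Fk in zip(y, F_terms[1:]))
    return np.linalg.solve(A, F)

u_N = online_solve([0.2, -0.1, 0.4])
```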
\subsection{Weighted greedy algorithm}
The greedy algorithm ideally aims to find the $N$-dimensional subspace minimizing the error in the $L^\infty(\Gamma)$ norm, i.e., $\mathbb{V}_N\subseteq \mathbb{V}$ such that the quantity
\begin{equation}
\sup_{y\in\Gamma} \norm{u(y)-u_N(y)}_\mathbb{V} \label{eq:gre_min_qua}
\end{equation}
is minimized.
The reduced order spaces are built hierarchically as
$$
\mathbb{V}_N = \mathrm{span}\bra{u(y^1),\dots,u(y^N)},
$$
for $N=1,\dots,N_{\mathrm{max}}$, where the parameters $y^N$ are sought as solution of an $L^\infty$ optimization problem in a greedy way:
$$
y^N = \argsup_{y\in\Gamma} \norm{u(y)-u_{N-1}(y)}_\mathbb{V}.
$$
To actually make the optimization problem above computable, the parameter domain $\Gamma$ is replaced with the training set $\Xi_t$. The greedy scheme is strongly based on the availability of an error estimator $\eta_N(y)$ such that
$$
\norm{u(y)-u_N(y)}_\mathbb{V} \leq \eta_N(y).
$$
Such an estimator needs to be both \emph{sharp}, meaning that there exists a constant $c$ (possibly depending on $N$), as close as possible to $1$, such that $\norm{u(y)-u_N(y)}_\mathbb{V} \geq c\cdot\eta_N(y)$, and \emph{efficiently} computable. An efficient way to compute $\eta_N$ is described in detail, e.g., in \cite{rozza:book} (the affine decomposition \eqref{eq:aff_dec} plays an essential role here). The greedy scheme actually looks for the maximum of the estimator $\eta_N$ instead of the quantity in \eqref{eq:gre_min_qua} itself. In the case of a stochastic parameter $y\in\Gamma$, the idea is to modify $\eta_N(y)$ by multiplying it by a weight $w(y)$, chosen according to the distribution $\rho(y)dy$. The estimator $\eta_N(y)$ is thus replaced by $\widehat{\eta}_N(y) \doteq w(y)\eta_N(y)$ in the so-called weighted scheme. Weighted greedy methods were originally proposed and developed in \cite{peng:thesis,peng:w_elliptic_rnd,peng:comparison,chen2017reduced,peng:multilevel}. An outline of the weighted greedy algorithm for the construction of the reduced order spaces is reported in Algorithm \ref{alg:w_gre}.
\begin{algorithm}[tb]
\caption{Weighted greedy algorithm}\label{alg:w_gre}
\begin{algorithmic}[1]
\STATE sample a training set $\Xi_t \subseteq \Gamma$;
\STATE choose the first sample parameter $y^1\in\Xi_t$;
\STATE solve problem \eqref{eq:stoc_ell_pde_fe} at $y^1$ and construct $\mathbb{V}_1 = \mathrm{span}\bra{u(y^1)}$;
\FOR{$N=2,\dots,N_{\mathrm{max}}$}
\STATE{compute the weighted error estimator $\widehat{\eta}_N(y)$, for all $y\in\Xi_t$;}
\STATE{choose $y^N = \argsup_{y\in\Xi_t} \widehat{\eta}_N(y)$;}
\STATE{solve problem \eqref{eq:stoc_ell_pde_fe} at $y^N$ and construct $\mathbb{V}_N = \mathbb{V}_{N-1}\oplus\mathrm{span}\bra{u(y^N)}$;}
\ENDFOR
\end{algorithmic}
\end{algorithm}
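The selection loop of Algorithm \ref{alg:w_gre} can be sketched in Python as follows; the estimator \texttt{eta}, the weight function \texttt{weight}, and the truth solver \texttt{solve} are hypothetical callables, and snapshot orthonormalization is omitted for brevity:

```python
import numpy as np

def weighted_greedy(Xi_t, eta, weight, solve, N_max):
    """Sketch of the weighted greedy loop.

    eta(y, S): hypothetical error estimator given selected parameters S;
    weight(y): e.g. sqrt(rho(y)); solve(y): truth snapshot at parameter y."""
    selected = [Xi_t[0]]           # first sample parameter y^1
    basis = [solve(Xi_t[0])]
    for _ in range(1, N_max):
        # maximize the weighted estimator hat_eta(y) = weight(y) * eta_N(y)
        scores = [weight(y) * eta(y, selected) for y in Xi_t]
        y_star = Xi_t[int(np.argmax(scores))]
        selected.append(y_star)
        basis.append(solve(y_star))
    return selected, basis
```

In practice each call to \texttt{solve} is a full truth solve, while \texttt{eta} must be cheap to evaluate over the whole training set.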
A greedy routine with estimator $\widehat{\eta}_N$ hence aims to minimize the distance of the solution manifold from the reduced order space in a weighted $L^\infty$ norm on the parameter space. Now, depending on what one wants to compute, different choices can be made for the weight function $w$. For example, if we are interested in a statistic of the solution, e.g., $\mathbb{E}[u]$, we can choose $w(y)=\rho(y)$. Thus, for the error committed when computing the expected value using the reduced basis, the following estimate holds:
$$
\big\lVert \mathbb{E}[u]-\mathbb{E}[u_N] \big\rVert_\mathbb{V} \leq \int_\Gamma \norm{u(y)-u_N(y)}_\mathbb{V} \rho(y) \, dy
\leq \abs{\Gamma} \sup_{y\in\Gamma}\widehat{\eta}_N(y).
$$
If we are interested in evaluating the expectation of a linear output $s(u)$, $\mathbb{E}[s(u)]$, using the same weight function, we get the error estimate:
$$
\big| \mathbb{E}[s(u)]-\mathbb{E}[s(u_N)]\big| \leq \int_\Gamma \norm{s}_\mathbb{V'}\norm{u(y)-u_N(y)}_\mathbb{V} \rho(y) \, dy
\leq \norm{s}_\mathbb{V'}\cdot\abs{\Gamma}\sup_{y\in\Gamma}\widehat{\eta}_N(y).
$$
Instead, taking $w(y)=\sqrt{\rho(y)}$ one gets the estimate for the quadratic error:
\begin{equation}
\label{eq:w_gre_sqrt_est}
\norm{u(\mathbf{Y})-u_N(\mathbf{Y})}_\mathbb{H}^2 = \expval\norm{u-u_N}_\mathbb{V}^2 \leq \abs{\Gamma} \sup_{y\in\Gamma} \widehat{\eta}_N(y)^2.
\end{equation}
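Indeed, with $w=\sqrt{\rho}$ one has $\widehat{\eta}_N(y)^2=\rho(y)\,\eta_N(y)^2$, so that
$$
\expval\norm{u-u_N}_\mathbb{V}^2
= \int_\Gamma \norm{u(y)-u_N(y)}_\mathbb{V}^2\,\rho(y)\,dy
\leq \int_\Gamma \widehat{\eta}_N(y)^2\,dy
\leq \abs{\Gamma}\,\sup_{y\in\Gamma}\widehat{\eta}_N(y)^2.
$$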
\subsection{Weighted POD algorithm}
The Proper Orthogonal Decomposition is a different method to build reduced order spaces, which does not require the evaluation of an error bound. The main idea in the deterministic (or uniform, i.e. when $\rho\equiv 1$) case is to find the $N$-dimensional subspace that minimizes the error in the $L^2(\Gamma)$ norm, i.e. $\mathbb{V}_N\subseteq \mathbb{V}$ such that the quantity
\begin{equation}
\int_\Gamma \norm{u(y)-u_N(y)}_\mathbb{V}^2\,dy \label{eq:pod_min_qua}
\end{equation}
is minimized. As with the greedy algorithm, the method does not aim to minimize the quantity \eqref{eq:pod_min_qua} directly, but a discretization of it, namely
$$
\sum_{y\in\Xi_t}\norm{u(y)-u_N(y)}_\mathbb{V}^2 = \sum_{i=1}^{n_t}\norm{\varphi_i-P_N(\varphi_i)}_\mathbb{V}^2,
$$
where $\varphi_i = u(y^i)$, $i=1,\dots,n_t$, and $P_N:\mathbb{V}\to\mathbb{V}_N$ is the projection operator associated with the subspace $\mathbb{V}_N$. One can show (see e.g. \cite{rozza:book}) that the minimizing $N$-dimensional subspace $\mathbb{V}_N$ is the subspace spanned by the $N$ leading eigenvectors of the linear map
$$
c : \mathbb{V} \to \mathbb{V}, \quad v \mapsto c(v) = \sum_{i=1}^{n_t} \prodscal{v,\varphi_i}_\mathbb{V}\cdot \varphi_i,
$$
where $\prodscal{\cdot, \cdot}_\mathbb{V}$ denotes the inner product in $\mathbb{V}$.
Computationally, this is equivalent to finding the $N$ leading eigenvectors of the symmetric matrix $C\in\mathbb{R}^{n_t\times n_t}$ defined as $C_{ij} = \prodscal{\varphi_i,\varphi_j}_\mathbb{V}$. In the case of stochastic inputs, we would instead like to find the $N$-dimensional subspace that minimizes the following error
\begin{equation}
\norm{u(\mathbf{Y})-u_N(\mathbf{Y})}_\mathbb{H}^2 = \int_\Gamma \norm{u(y)-u_N(y)}_\mathbb{V}^2\rho(y)\,dy.
\label{eq:w_pod_min_qua}
\end{equation}
Based on this observation, we propose a weighted POD method, which is based on the minimization of a discretized version of \eqref{eq:w_pod_min_qua}, namely
\begin{equation}
\sum_{y\in\Xi_t}w(y)\norm{u(y)-u_N(y)}_\mathbb{V}^2 = \sum_{i=1}^{n_t}w_i\norm{\varphi_i-P_N(\varphi_i)}_\mathbb{V}^2,
\label{eq:w_pod_min_qua_dis}
\end{equation}
where $w:\Xi_t \to [0,\infty)$ is a weight function prescribed according to the parameter distribution $\rho(y)dy$, and $w_i=w(y^i)$, $i=1,\dots,n_t$. Again, this is computationally equivalent to finding the $N$ leading eigenvectors of the preconditioned matrix $\hat{C} \doteq P \cdot C$, where $C$ is the same matrix as defined before and $P=\mathrm{diag}(w_1,\dots,w_{n_t})$. We note that the matrix $\hat{C}$ is not symmetric in the usual sense, but it is self-adjoint with respect to the scalar product induced by the matrix $C$ (i.e. it holds that $\hat{C}^TC = C\hat{C}$).
Thus, the spectral theorem still applies and there exists an orthonormal basis of eigenvectors, i.e., $\widehat{C}$ is diagonalizable with an orthogonal change of basis matrix. The discretized parameter space $\Xi_t$ can be selected with a sampling technique, e.g., using an equispaced tensor product grid on $\Gamma$ or taking $M$ realizations of a uniform distribution on $\Gamma$. Note that if we build $\Xi_t$ as the set of $M$ realizations of a random variable on $\Gamma$ with distribution $\rho(y)dy$, and we set $w\equiv 1$, the minimized quantity \eqref{eq:w_pod_min_qua_dis} is just a Monte Carlo approximation of the integral \eqref{eq:w_pod_min_qua}. Following this observation, a possible approach is to select $\Xi_t$ and $w$ as the nodes and the weights of a quadrature rule that approximates the integral \eqref{eq:w_pod_min_qua}.
That is, if we consider a quadrature operator $\mathcal{U}$, defined as
$$
\mathcal{U}(f) = \sum_{i=1}^{n_t} \omega_i f(x^i)
$$
for every integrable function $f:\Gamma\to\mathbb{R}$, where $\bra{x^1,\dots,x^{n_t}}\subset\Gamma$ are the nodes of the algorithm and $\omega_1,\dots,\omega_{n_t}$ the respective weights, then we can take $\Xi_t = \bra{x^1,\dots,x^{n_t}}$ and%
\footnote{We assume that $\mathcal{U}$ is a quadrature rule for integration with respect to $dy$. If a quadrature rule $\mathcal{U}_{\rho}$ for integration with respect to the weighted measure $\rho\,dy$ is used instead, it suffices to take $w_i = \omega_i$.}
$w_i = \omega_i\rho(x^i)$.
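As a concrete one-dimensional illustration (using the shifted Beta density on $\Gamma=[1,3]$ employed in the numerical tests below; the function names are ours), the nodes $x^i$ and weights $w_i=\omega_i\rho(x^i)$ of a Gauss-Legendre rule can be assembled as:

```python
import math
import numpy as np

def rho(y, a=10.0, b=10.0):
    """Density of 2*Beta(a, b) + 1 on [1, 3] (illustrative parameter law)."""
    t = (np.asarray(y) - 1.0) / 2.0
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return t ** (a - 1) * (1.0 - t) ** (b - 1) / B / 2.0

def gl_training_set(n):
    """Nodes x_i and weights w_i = omega_i * rho(x_i) from an n-point
    Gauss-Legendre rule, mapped from [-1, 1] to [1, 3] (unit Jacobian)."""
    x, om = np.polynomial.legendre.leggauss(n)
    return x + 2.0, om * rho(x + 2.0)

nodes, w = gl_training_set(20)
print(w.sum())  # ~1: the weights discretize integration against rho(y)dy
```

Since $\rho$ is here a polynomial of degree $18$ in $y$, the $20$-point rule integrates it exactly, and the weights sum to $1$ up to round-off.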
Therefore, by varying the quadrature rule $\mathcal{U}$, one obtains different ways of simultaneously sampling the parameter space and preconditioning the matrix $C$. An outline of the weighted POD algorithm for the construction of the reduced order spaces is reported in Algorithm \ref{alg:w_pod}. \\
\begin{algorithm}[tb]
\caption{Weighted POD algorithm}\label{alg:w_pod}
\begin{algorithmic}[1]
\STATE choose a training set $\Xi_t = \bra{y^1,\dots,y^{n_t}} \subseteq \Gamma$ and a weight function $w$ according to some quadrature method;
\STATE compute the solutions $\varphi_i$ by solving problem \eqref{eq:stoc_ell_pde_fe} at $y^i$, $i=1,\dots,n_t$;
\STATE assemble the matrix $\hat{C}_{ij} = w_i\prodscal{\varphi_i,\varphi_j}_\mathbb{V}$ and compute its $N$ leading eigenvectors $n^1,\dots,n^N$;
\STATE compute the basis functions $\xi^1,\dots,\xi^N\in\mathbb{V}$ as $\xi^i = \sum_{j=1}^{n_t} n^i_j \varphi_j$ and construct $\mathbb{V}_N = \mathrm{span}\bra{\xi^1,\dots,\xi^N}$;
\end{algorithmic}
\end{algorithm}
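Steps 3--4 of Algorithm \ref{alg:w_pod} admit a compact NumPy sketch (a generic illustration, not the actual RBniCS implementation; since $\hat{C}=PC$ has the same spectrum as the symmetric matrix $P^{1/2}CP^{1/2}$ for positive weights, the latter is used for a stable eigensolve):

```python
import numpy as np

def weighted_pod(S, w, X, N):
    """Weighted POD sketch. S: (n_dof, n_t) snapshot matrix, w: positive
    quadrature-based weights, X: Gram matrix of the V inner product."""
    C = S.T @ (X @ S)                    # C_ij = <phi_i, phi_j>_V
    s = np.sqrt(w)
    A = (s[:, None] * C) * s[None, :]    # W^{1/2} C W^{1/2}, same spectrum as W C
    lam, U = np.linalg.eigh(A)
    idx = np.argsort(lam)[::-1][:N]      # N leading eigenpairs
    V = s[:, None] * U[:, idx]           # eigenvectors of C_hat = W C
    return S @ V                         # xi^i = sum_j n^i_j phi_j
```

The returned columns span $\mathbb{V}_N$; in a full implementation they would additionally be orthonormalized in the $\mathbb{V}$ inner product.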
Since the proposed weighted POD method requires performing $n_t$ truth solves and computing the eigenvalues of an $n_t\times n_t$ matrix, the dimension $n_t$ of $\Xi_t$ should not be too large. A possible way to keep the sample size $n_t$ low is to adopt a sparse grid quadrature rule, as we describe in the following.
\subsection{Sparse grid interpolation}
In order to make the weighted POD method more efficient as the dimension of the parameter space increases, we use Smolyak type sparse grids instead of full tensor product ones for the quadrature operator $\mathcal{U}$. These types of grids have already been used in the context of weighted reduced order methods \cite{peng:thesis,peng:comparison}, as well as in other numerical methods for stochastic partial differential equations, e.g. in stochastic collocation \cite{nobile:collocation,hestaven:collocation}. Full tensor product quadrature operators are simply constructed as products of univariate quadrature rules. If $\Gamma = \prod_{k=1}^K \Gamma_k$ and $\bra{\mathcal{U}_i^{(k)}}_{i=1}^\infty$ are sequences of univariate quadrature rules on $\Gamma_k$, $k=1,\dots,K$, the tensor product multivariate rule on $\Gamma$ of order $q$ is given by
$$
\mathcal{U}^K_q \doteq \bigotimes_{k=1}^K\,\mathcal{U}_q^{(k)} = \sum_{\substack{\abs{\alpha}_\infty \leq q \\ \alpha \in \mathbb{N}^K}} \bigotimes_{k=1}^K \Delta_{\alpha_k}^{(k)},
$$
where we introduced the \emph{difference operators} $\Delta_0^{(k)}=0$, $\Delta_{i+1}^{(k)}=\mathcal{U}_{i+1}^{(k)}-\mathcal{U}_i^{(k)}$ for $i\geq 0$ (with the convention $\mathcal{U}_0^{(k)}=0$). Given this, the Smolyak quadrature rule of order $q$ is defined as
$$
\mathcal{Q}_q^K \doteq \sum_{\substack{\abs{\alpha}_1 \leq q \\ \alpha \in \mathbb{N}^K}} \bigotimes_{k=1}^K \Delta_{\alpha_k}^{(k)}.
$$
Therefore, Smolyak type rules can be seen as a delayed sum of ordinary full tensor product rules. One of their main advantages is that the number of evaluation points is drastically reduced as $K$ increases. Indeed, if $X^{(k)}_i$ is the set of evaluation points for the rule $\mathcal{U}_i^{(k)}$, then the set of evaluation points for the rule $\mathcal{U}_q^K$ is given by
$$
\Theta_F^{q,K} = X_q^{(1)} \times \dots \times X_q^{(K)},
$$
while the set of evaluation points for the rule $\mathcal{Q}_q^K$ is given by
$$
\Theta_S^{q,K} = \bigcup_{\substack{\abs{\alpha}_1 = q \\ \alpha \in \mathbb{N}^K, \; \alpha \geq \mathbf{1}}} X_{\alpha_1}^{(1)} \times \dots \times X_{\alpha_K}^{(K)}.
$$
In practice, for large $K$, $\abs{\Theta_S^{q,K}}$ grows much more slowly with $q$ than $\abs{\Theta_F^{q,K}}$, which grows exponentially (see Fig.~1 for a comparison using a Clenshaw-Curtis quadrature rule). This implies that the adoption of Smolyak quadrature rules has a much lower computational cost than common full tensor product rules for high-dimensional problems, while their accuracy remains comparable with that of standard rules. For more details about error estimates and computational cost of Smolyak rules we refer e.g. to \cite{novak:sparse_grids,gerstner:sparse_grids,holtz:sparse_grids,novak:sparse_grids_quadrature,wasil:sparse_grids}.
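The node-count comparison can be reproduced with a few lines of Python. Conventions for the order of a Smolyak rule vary; in the sketch below (our own indexing, not necessarily the one above) the sparse rule of order $q$ collects the multi-indices with $\abs{\alpha}_1 \leq q + K - 1$, so that it contains the univariate rule of order $q$ and reproduces the counts in the figure caption ($145$ vs $1089$ nodes for nested Clenshaw-Curtis at $q=6$, $K=2$):

```python
import numpy as np
from itertools import product

def cc_nodes(level):
    """Nested Clenshaw-Curtis nodes on [-1, 1]: m = 1, 3, 5, 9, 17, 33, ..."""
    if level == 1:
        return np.array([0.0])
    m = 2 ** (level - 1) + 1
    return np.cos(np.pi * np.arange(m) / (m - 1))

def tensor_grid(levels):
    """Full tensor product of the 1-D node sets at the given levels."""
    axes = [cc_nodes(l) for l in levels]
    return {tuple(round(c, 10) for c in pt) for pt in product(*axes)}

def sparse_grid(q, K):
    """Union of tensor grids over multi-indices with |alpha|_1 <= q + K - 1."""
    pts = set()
    for alpha in product(range(1, q + 1), repeat=K):
        if sum(alpha) <= q + K - 1:
            pts |= tensor_grid(alpha)
    return pts

q, K = 6, 2
print(len(sparse_grid(q, K)), len(tensor_grid((q,) * K)))  # 145 vs 1089
```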
\begin{figure}[tb]
\centering
\begin{minipage}[c]{.47\textwidth}
\includegraphics[width=\textwidth,
keepaspectratio]{Fig1.pdf}
\end{minipage}
\hspace{4mm}
\begin{minipage}[c]{.47\textwidth}
\includegraphics[width=\textwidth,
keepaspectratio]{Fig2.pdf}
\end{minipage}
\caption{Two dimensional grids based on nested Clenshaw-Curtis nodes for $q=6$. The left one is based on a Smolyak rule ($145$ nodes), while the right one on a full tensor product rule ($1089$ nodes).}
\end{figure}
\section{Numerical tests}
We tested and compared the weighted greedy algorithm and the weighted POD algorithm for the solution of problem \eqref{eq:stoc_ell_pde} on the unit square domain $D = [0,1]^2$. To solve the problems below we used the $\mathtt{RBniCS}$ library \cite{rbnics:link}, built on top of $\mathtt{FEniCS}$ \cite{logg2012automated}.
\subsection{Thermal block problem}\label{section:4dz}
In this section we describe the test case we considered to assess the numerical performance of the various algorithms. Let $D = [0,1]^2 =\cup_{k=1}^K \Omega_k$ be a decomposition of the spatial domain. We consider problem \eqref{eq:stoc_ell_pde} with $f \equiv 1$ and
$$
a(x;y) = \sum_{k=1}^K y_k \mathbbm{1}_{\Omega_k}(x), \qquad \textrm{for $x\in D$}
$$
and $y=(y_1,\dots,y_K)\in \Gamma = [1,3]^K$.
In other words, we are considering a diffusion problem on $D$, where the diffusion coefficient is constant on the elements of a partition of $D$ in $K$ zones.
We study the case of a stochastic parameter $\mathbf{Y}=(Y_1,\dots,Y_K)$ where $Y_k$'s are independent random variables with a shifted and re-scaled Beta distribution:
$$
Y_k \sim 2\cdot\mathrm{Beta}(\alpha_k,\beta_k) + 1,
$$
for some positive distribution parameters $\alpha_k,\beta_k$, $k=1,\dots,K$. We consider a uniform decomposition of the domain $D$ for either $K=4$ (low-dimensional case) or $K=9$ (higher-dimensional case), as illustrated in Figure \ref{fig:problem_domains}.
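For reference, sampling the stochastic parameter and evaluating the coefficient for the $K=4$ decomposition can be sketched as follows (the row-major subdomain numbering is our own choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

K = 4                    # 2x2 uniform decomposition of D = [0,1]^2
alpha = beta = 10.0

def sample_Y():
    """One realization of Y = (Y_1,...,Y_K), Y_k ~ 2*Beta(alpha, beta) + 1."""
    return 2.0 * rng.beta(alpha, beta, size=K) + 1.0

def a(x, y):
    """Diffusion coefficient a(x; y): constant y_k on subdomain Omega_k."""
    i = min(int(2.0 * x[0]), 1)    # column of the 2x2 partition
    j = min(int(2.0 * x[1]), 1)    # row
    return y[2 * j + i]

y = sample_Y()
print(y, a((0.25, 0.25), y))
```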
\begin{figure}[tb]
\centering
\begin{minipage}[c]{.3\textwidth}
\includegraphics[width=\textwidth,
keepaspectratio]{Fig25.pdf}
\end{minipage}
\hspace{4mm}
\centering
\begin{minipage}[c]{.38\textwidth}
\includegraphics[width=\textwidth,
keepaspectratio]{Fig26.pdf}
\end{minipage}
\caption{The considered decompositions of the spatial domain $D = [0,1]^2$. \label{fig:problem_domains}}
\end{figure}
\subsection{Weighted reduced order methods}
We implemented the standard (non-weighted) and the weighted greedy algorithm for the construction of reduced order spaces. We took $w = \sqrt{\rho}$ in the weighted case (this choice being motivated by \eqref{eq:w_gre_sqrt_est}) and we sampled $\Xi_t$ using various techniques:
\begin{itemize}
\item sampling from uniform distribution;
\item sampling from equispaced grid;
\item sampling from the (Beta) distribution.
\end{itemize}
We chose $y^1 = (2,\dots,2)\in\mathbb{R}^K$ as the first parameter in Algorithm \ref{alg:w_gre}, since it is the mode of the distribution of $\mathbf{Y}$. The size of the training set was chosen to be $n_t = 1000$ for the case $K = 4$ and $n_t = 2000$ for the case $K = 9$.
Furthermore, we implemented several versions of the weighted POD algorithm and compared their performance. As explained above, different versions of the weighted POD algorithm are based on different quadrature formulae. Each formula is specified by a set of nodes $\Xi_t = \bra{y_1,\dots,y_{n_t}}$ and a respective set of weights $w_i$ (note that these can be different from the weights $\omega_i$ of the quadrature formula; below we also denote $\rho_i=\rho(y_i)$). In particular, we experimented with the following weighted variants of the POD algorithm with non-sparse grids:
\begin{itemize}
\item Uniform Monte-Carlo: the nodes are uniformly sampled and the weights are given by $w_i = \rho_i/n_t$;
\item Monte-Carlo: the nodes are sampled from the distribution of $\mathbf{Y}$ and uniformly weighted;
\item Clenshaw-Curtis (Gauss-Legendre, respectively): the nodes are the ones of the Clenshaw-Curtis (Gauss-Legendre, resp.) tensor product quadrature formula and the weights are given by $w_i = \rho_i\omega_i$;
\item Gauss-Jacobi: the nodes are the ones of the Gauss-Jacobi tensor product quadrature formula and the weights are given by $w_i = \omega_i$.
\end{itemize}
In the low-dimensional case ($K=4$) we tested the methods for two different values of the training set size, $n^1_t$ and $n^2_t$. In the first test we chose $\Xi_t=\Xi_t^1$ to be the smallest\footnote{When using a tensor product quadrature rule, we cannot impose the cardinality of $\Xi_t$ a priori. We also note that when we use the Clenshaw-Curtis approximation, the majority of the points in $\Xi_t$ lies on $\partial\Gamma$: these points are completely negligible, since $\rho|_{\partial\Gamma} \equiv 0$. So, in this case, we need to take a considerably larger value for $n_t$ to reach the desired cardinality of nodes in the interior $\mathring{\Gamma}$.} possible set such that $\abs{\Xi_t^1 \setminus \partial\Gamma}\geq 100$ and in the second test we chose $\Xi_t=\Xi_t^2$ to be the smallest possible set such that $\abs{\Xi_t^2 \setminus \partial\Gamma}\geq 500$. For the higher-dimensional case ($K=9$), the sizes of the sets of nodes of the Gauss-Legendre and Gauss-Jacobi rules were $n_t=2^9=512$; indeed, the next possible choice $n_t=3^9=19683$ was computationally impracticable. Moreover, the Clenshaw-Curtis formula was not tested, because a very large training set would have been required: $\abs{\Xi_t \cap \mathring{\Gamma}}=1$ already for the impracticable choice $n_t=3^9$. In this case, the use of sparse Gauss-Legendre/Gauss-Jacobi quadrature rules provided a more representative set of nodes consisting of just $n_t = 181$ nodes.
A summary of the formulae and training set sizes used is reported in Table \ref{table:4dz_pod}.
\begin{table}[tbp]
\centering
{\renewcommand\arraystretch{1.2}
\begin{tabular}{|l|c|c|c|c|}
\hline
& \multirow{2}{*}{$w_i$} & \multicolumn{2}{|c|}{$K=4$} & \multicolumn{1}{|c|}{$K=9$} \\
\cline{3-5}
& & $n_t^1$ & $n_t^2$ & $n_t$ \\
\hline
Standard & $1/n_t$ & $100$ & $500$ & $500$ \\
\hline
Monte-Carlo & $1/n_t$ & $100$ & $500$ & $500$ \\
\hline
Uniform Monte-Carlo & $\rho_i/n_t$ & $100$ & $500$ & $2000$ \\
\hline
Clenshaw-Curtis & $\omega_i\rho_i$ & $1296$ & $2401$ & $-$\\
\hline
Gauss-Legendre & $\omega_i\rho_i$ & $256$ & $625$ & $512$ \\
\hline
Gauss-Jacobi & $\omega_i$ & $256$ & $625$ & $512$ \\
\hline
Sparse Gauss-Legendre & $\omega_i\rho_i$ & $-$ & $-$ & $181$ \\
\hline
Sparse Gauss-Jacobi & $\omega_i$ & $-$ & $-$ & $181$ \\
\hline
\end{tabular}}
\caption{Weights and sizes of training sets in the diverse weighted POD variants tested.\label{table:4dz_pod}}
\end{table}
\subsection{Results}
We compared the performance of the weighted greedy and weighted POD algorithms for computing the expectation of the solution. In particular, we computed the error \eqref{eq:w_pod_min_qua}
using a Monte Carlo approximation, i.e.,
\begin{equation}
\label{eq:w_pod_err}
\norm{u(\mathbf{Y})-u_N(\mathbf{Y})}^2_\mathbb{H}
\simeq \frac{1}{M} \sum_{m=1}^M \norm{u(y^m)-u_N(y^m)}^2_\mathbb{V},
\end{equation}
where $y^1,\dots,y^M$ are $M$ independent realizations of $\mathbf{Y}$. For the truth solutions, we took $\mathbb{V}_\delta$ to be the classical $\mathbb{P}^1$ finite element approximation space. Three test cases were considered:
\begin{enumerate}
\item distribution parameters $(\alpha_i,\beta_i)=(10,10)$, for $i = 1, \hdots, K = 4$;
\item distribution parameters $(\alpha_i,\beta_i)=(10,10)$, for $i = 1, \hdots, K = 9$;
\item distribution parameters $(\alpha_i,\beta_i)=(75,75)$, for $i = 1, \hdots, K = 9$, resulting in a more concentrated distribution than case 2.
\end{enumerate}
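A minimal sketch of the Monte Carlo error evaluation \eqref{eq:w_pod_err} (the solver names are placeholders for the truth and reduced solvers, not the actual RBniCS calls):

```python
import numpy as np

def mc_squared_error(solve, solve_N, sample, X, M=100):
    """Monte Carlo estimate of ||u(Y) - u_N(Y)||_H^2.

    solve / solve_N: hypothetical truth and reduced solvers y -> dof vector;
    sample(): one draw of Y; X: Gram matrix of the V inner product."""
    total = 0.0
    for _ in range(M):
        y = sample()
        e = solve(y) - solve_N(y)
        total += float(e @ (X @ e))   # squared V-norm of the error
    return total / M
```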
Figures \ref{figure:LOW:wPOD}--\ref{figure:HIGH:MIX} show the graphs of the error \eqref{eq:w_pod_err} (in $\log_{10}$ scale) as a function of the reduced order space dimension $N$, for the different methods. In particular, Figures 3--5 collect the results for case 1, Figures 6--8 those for case 2, and Figures 9--11 those for case 3. From these plots, we can draw the following conclusions:
\begin{itemize}
\item \emph{Weighted vs standard}. The weighted versions of the POD and greedy algorithms outperform their standard counterparts in the stochastic setting, see Figures \ref{figure:LOW:wPOD} (POD) and \ref{figure:LOW:WG+MIX} (greedy) for case 1. The difference is even more evident for higher parametric dimension (compare Figure \ref{figure:LOW:wPOD} to Figure \ref{figure:HIGH:wPOD} for the POD case) or in the presence of more concentrated parameter distributions (compare Figure \ref{figure:HIGH:wPOD} to Figure \ref{figure:HIGH:cPOD} for the POD case, and see Figures \ref{figure:HIGH:GR} and \ref{figure:HIGH:MIX} for the greedy case).
\item \emph{Importance of representative training set}.
The Monte-Carlo and Gauss-Jacobi POD algorithms outperform the other weighted POD variants (Figure \ref{figure:LOW:qPOD} for case 1, and Figures \ref{figure:HIGH:wPOD} and \ref{figure:HIGH:qPOD} for case 2). In the low-dimensional case 1, we can still recover the same accuracy with the other weighted variants, at the cost of using a larger training set $\Xi_t$ (Figure \ref{figure:LOW:qPOD}). However, in the higher-dimensional case 2, the choice of the nodes plays a much more fundamental role (see Figures \ref{figure:HIGH:wPOD} and \ref{figure:HIGH:qPOD}). The Monte-Carlo and Gauss-Jacobi methods perform significantly better because the underlying quadrature rule is designed for the specific distribution $\rho(y)\,dy$ (a Beta distribution). For more concentrated distributions, methods lacking a representative training set may lead to a very inaccurate sampling of the parameter space, even resulting in numerically singular reduced order matrices despite orthonormalization of the snapshots. This is because the subspace built by the reduced order methods is a subspace of $\mathrm{span}\bra{u(y) \st y \in \Xi_t }$. Thus, if $\Xi_t$ contains only points $y$ with low probability density $\rho(y)$, adding a linear combination of the solutions $\bra{u(y) \st y \in\Xi_t }$ to the reduced space does not increase the accuracy of the computed statistics. Instead, the weighting procedure tends to neglect such solutions, resulting in linearly dependent vectors defining the reduced order subspace. This can also be observed for the weighted greedy algorithm (Figure \ref{figure:HIGH:GR}).
\item \emph{Breaking the curse of dimensionality through sparse grids}. The presented algorithms also suffer from the fact that the number of nodes $n_t$ increases exponentially with the parameter space dimension $K$. In the weighted POD algorithm we can mitigate this problem by adopting a sparse quadrature rule.
Figure \ref{figure:HIGH:sPOD} shows the performance of the (tensor product) Gauss-Jacobi POD versus its sparse version. Not only does the use of sparse grids make the method much more efficient, significantly reducing the size of the training set (in our specific case, from $n_t = 512$ to $n_t=181$), but it even attains a marginally higher accuracy.
On the other hand, the importance of a well-representative training set is highlighted by the use of sparse grids.
Sparse Gauss-Legendre results in numerically singular matrices at lower $N$ than its tensor product counterpart: sparsity makes its associated training set even less representative. For this reason, the error plots obtained with this method are not reported.
\item \emph{Weighted POD vs weighted greedy}. The weighted POD seems to work slightly better than the weighted greedy (see Figure \ref{figure:LOW:WG+MIX} (right) for case 1 and Figure \ref{figure:HIGH:MIX} for case 3). This is because the weighted POD is designed to minimize (in some sense) the quantity \eqref{eq:w_pod_min_qua}; however, the difference in terms of error is practically negligible. The main difference between the two algorithms lies in the training procedure. Thanks to the availability of an inexpensive error estimator, the greedy algorithm can use large training sets while keeping a moderate computational load during the training phase. On the other hand, the availability of different quadrature techniques for the POD algorithm allows control of its computational cost without requiring the construction of an ad-hoc error estimator, making it more suitable for problems for which such an error estimation procedure is not yet available.
\end{itemize}
\begin{figure}[p]
\centering
\begin{minipage}[c]{0.45\textwidth}
\includegraphics[width=\textwidth,
keepaspectratio]{Fig4.pdf}
\end{minipage}
\hspace{4mm}
\begin{minipage}[c]{.45\textwidth}
\includegraphics[width=\textwidth,
keepaspectratio]{Fig5.pdf}
\end{minipage}
\caption{\emph{Low-dimensional case ($K=4$).} Plots of the error \eqref{eq:w_pod_err} obtained using standard (POD - uniform), uniform Monte-Carlo (Weighted POD - Uniform) and Monte-Carlo (POD - distribution) POD algorithms. Left: $\Xi_t^1$. Right: $\Xi_t^2$.\label{figure:LOW:wPOD}}
\end{figure}
\begin{figure}[p]
\centering
\begin{minipage}[c]{.45\textwidth}
\includegraphics[width=\textwidth,
keepaspectratio]{Fig6.pdf}
\end{minipage}
\hspace{4mm}
\begin{minipage}[c]{.45\textwidth}
\includegraphics[width=\textwidth,
keepaspectratio]{Fig7.pdf}
\end{minipage}
\caption{\emph{Low-dimensional case ($K=4$).} Plots of the error \eqref{eq:w_pod_err} obtained using Clenshaw-Curtis, Gauss-Legendre and Gauss-Jacob POD algorithms. Left: $\Xi_t^1$. Right: $\Xi_t^2$.\label{figure:LOW:qPOD}}
\end{figure}
\begin{figure}[p]
\centering
\begin{minipage}[c]{0.45\textwidth}
\includegraphics[width=\textwidth,
keepaspectratio]{Fig3.pdf}
\end{minipage}
\hspace{4mm}
\begin{minipage}[c]{0.45\textwidth}
\includegraphics[width=\textwidth,
keepaspectratio]{Fig9.pdf}
\end{minipage}
\caption{\emph{Low-dimensional case ($K=4$).} Left: Plots of the error \eqref{eq:w_pod_err} obtained using standard greedy algorithm and weighted greedy algorithm with uniform sampling and sampling of the distribution. Right: Plots of the error \eqref{eq:w_pod_err} obtained using standard and weighted greedy and standard and Gauss-Jacobi POD algorithms. \label{figure:LOW:WG+MIX}}
\end{figure}
\begin{figure}[p]
\centering
\begin{minipage}[c]{.45\textwidth}
\includegraphics[width=\textwidth,
keepaspectratio]{Fig12.pdf}
\end{minipage}
\hspace{4mm}
\begin{minipage}[c]{.45\textwidth}
\includegraphics[width=\textwidth,
keepaspectratio]{Fig13.pdf}
\end{minipage}
\caption{\emph{Higher-dimensional case ($K=9$).} Plots of the error \eqref{eq:w_pod_err} obtained using standard, uniform Monte-Carlo and Monte-Carlo POD algorithms. Left: $\alpha_i=\beta_i=10$. Right: $\alpha_i=\beta_i=75$. \label{figure:HIGH:wPOD}}
\end{figure}
\begin{figure}[p]
\centering
\begin{minipage}[c]{.45\textwidth}
\includegraphics[width=\textwidth,
keepaspectratio]{Fig14.pdf}
\end{minipage}
\hspace{4mm}
\begin{minipage}[c]{.45\textwidth}
\includegraphics[width=\textwidth,
keepaspectratio]{Fig15.pdf}
\end{minipage}
\caption{\emph{Higher-dimensional case ($K=9$).} Plots of the error \eqref{eq:w_pod_err} obtained using (tensor product) Gauss-Legendre and Gauss-Jacobi POD algorithms. Left: $\alpha_i=\beta_i=10$. Right: $\alpha_i=\beta_i=75$. \label{figure:HIGH:qPOD}}
\end{figure}
\begin{figure}[p]
\centering
\begin{minipage}[c]{.45\textwidth}
\includegraphics[width=\textwidth,
keepaspectratio]{Fig20.pdf}
\end{minipage}
\hspace{4mm}
\begin{minipage}[c]{.45\textwidth}
\includegraphics[width=\textwidth,
keepaspectratio]{Fig21.pdf}
\end{minipage}
\caption{\emph{Higher-dimensional case ($K=9$).} Plots of the error \eqref{eq:w_pod_err} obtained using Gauss-Legendre POD and sparse Gauss-Legendre POD algorithms. Left: $\alpha_i=\beta_i=10$. Right: $\alpha_i=\beta_i=75$. \label{figure:HIGH:sPOD}}
\end{figure}
\begin{figure}[p]
\centering
\begin{minipage}[c]{.45\textwidth}
\includegraphics[width=\textwidth,
keepaspectratio]{Fig16.pdf}
\end{minipage}
\hspace{4mm}
\begin{minipage}[c]{.45\textwidth}
\includegraphics[width=\textwidth,
keepaspectratio]{Fig17.pdf}
\end{minipage}
\caption{\emph{Higher-dimensional case ($K=9$).} Plots of the error \eqref{eq:w_pod_err} obtained using standard, Monte-Carlo and Gauss-Jacobi POD algorithms. Left: $\alpha_i=\beta_i=10$. Right: $\alpha_i=\beta_i=75$. \label{figure:HIGH:cPOD}}
\end{figure}
\begin{figure}[p]
\centering
\begin{minipage}[c]{.45\textwidth}
\includegraphics[width=\textwidth,
keepaspectratio]{Fig10.pdf}
\end{minipage}
\hspace{4mm}
\begin{minipage}[c]{.45\textwidth}
\includegraphics[width=\textwidth,
keepaspectratio]{Fig11.pdf}
\end{minipage}
\caption{\emph{Higher-dimensional case ($K=9$).} Plots of the error \eqref{eq:w_pod_err} obtained using standard and weighted greedy algorithms with uniform sampling and from the distribution. Left: $\alpha_i=\beta_i=10$. Right: $\alpha_i=\beta_i=75$. \label{figure:HIGH:GR}}
\end{figure}
\begin{figure}[p]
\centering
\begin{minipage}[c]{.45\textwidth}
\includegraphics[width=\textwidth,
keepaspectratio]{Fig18.pdf}
\end{minipage}
\hspace{4mm}
\begin{minipage}[c]{.45\textwidth}
\includegraphics[width=\textwidth,
keepaspectratio]{Fig19.pdf}
\end{minipage}
\caption{\emph{Higher-dimensional case ($K=9$).} Plots of the error \eqref{eq:w_pod_err} obtained using standard and weighted Greedy and standard and Monte-Carlo POD algorithms. Left: $\alpha_i=\beta_i=10$. Right: $\alpha_i=\beta_i=75$. \label{figure:HIGH:MIX}}
\end{figure}
\afterpage{\clearpage}
\section{Conclusion and Perspectives}
In this work we developed a weighted POD method for elliptic PDEs with random inputs. The algorithm was introduced alongside the previously developed weighted greedy algorithm. While the latter aims to minimize the error in a weighted $L^\infty$ norm, the former minimizes an approximation of the mean squared error, which has a clearer interpretation in the weighted setting. Unlike the greedy algorithm, the introduced method does not require the availability of an error estimator. It is instead based on a quadrature rule, which can be chosen according to the parameter distribution. In particular, this allows the use of sparse quadrature rules, reducing the computational cost of the offline phase as well. \\
A numerical example on a thermal block problem was carried out to test the proposed reduced order method, both for a low-dimensional parameter space and for two higher-dimensional parameter settings. For this problem, we assessed that the weighted POD method is an efficient alternative to the weighted greedy algorithm. The numerical tests also highlighted the importance of a training set which is representative of the underlying parameter distribution. When based on a representative rule, the sparse quadrature algorithm performed better both in accuracy and in the number of required training snapshots.
\\
Possible future investigations could concern applications to problems with more involved stochastic dependence, as well as non-affinely parametrized problems. The latter could require the use of an ad-hoc weighted empirical interpolation technique \cite{peng:eim}. Another extension, especially in the greedy case, would be to provide accurate error estimates. Such estimates were obtained for linear elliptic coercive problems in \cite{peng:w_elliptic_rnd}, but it would be useful to generalize them to other classes of problems.
Finally, the proposed tests and methodology could also be used as the first step to study non-linear problems.
\paragraph{Acknowledgements}
We acknowledge the support by European Union Funding for Research and Innovation - Horizon 2020 Program - in the framework of European Research Council Executive Agency: H2020 ERC Consolidator Grant 2015 AROMA-CFD project 681447 ``Advanced Reduced Order Methods with Applications in Computational Fluid Dynamics''. We also acknowledge the INDAM-GNCS projects ``Metodi numerici avanzati combinati con tecniche di riduzione computazionale per PDEs parametrizzate e applicazioni'' and ``Numerical methods for model order reduction of PDEs''.
The computations in this work have been performed with RBniCS \cite{rbnics:link} library, developed at SISSA mathLab, which is an implementation in FEniCS \cite{logg2012automated} of several reduced order modelling techniques; we acknowledge developers and contributors to both libraries.
\bibliographystyle{plain}
\section{Introduction}
In models of computation, various notions of \emph{guardedness} serve
to control cyclic behaviour by allowing only guarded cycles, with the
aim to ensure properties such as solvability of recursive equations or
productivity. Typical examples are guarded process algebra
specifications~\cite{Milner89,BaetenBastenEtAl10}, coalgebraic guarded
(co-)recursion~\cite{Rutten00,Milius05}, finite delay in online Turing
machines~\cite{BookGreibach70}, and productive definitions in
intensional type theory~\cite{AbelPientka13,Mgelberg14}, but also
contractive maps in (ultra-)metric spaces~\cite{KrishnaswamiBenton11}.
A highly general model for unrestricted cyclic computations, on the
\begin{wrapfigure}{r}{0.3\textwidth}
\vspace{-20pt}
\begin{center}
\includegraphics[align=c,scale=.3]{feedback_ax.pdf}
\end{center}
\vspace{-10pt}
\caption{Guarded trace}\label{fig:guarded-trace}
\vspace{-20pt}
\end{wrapfigure}
other hand, is given by \emph{traced monoidal
categories}~\cite{JoyalStreetEtAl96}; besides \emph{recursion} and
\emph{iteration}, they cover further kinds of cyclic behaviour, e.g.\
in Girard's \emph{Geometry of
Interaction}~\cite{Girard89,AbramskyHaghverdiEtAl02} and quantum
programming~\cite{AbramskyCoecke04,Selinger04}. In the present paper
we parametrize the framework of traced symmetric monoidal categories
with a notion of guardedness, arriving at \emph{(abstractly) guarded
traced categories}, which effectively vary between two extreme
cases: symmetric monoidal categories (nothing is guarded) and traced
symmetric monoidal categories (everything is guarded). In terms of the
standard diagrammatic language for traced monoidal categories, we
decorate input and output gates of boxes to indicate guardedness; the
diagram governing trace formation would then have the general form
depicted in Figure~\ref{fig:guarded-trace}
-- that is, we can only form traces connecting guarded (black) output
gates to input gates that are unguarded (black), i.e.\ not assumed to
be already guarded.
We provide basic structural results on our notion of abstract
guardedness, and identify a wide array of examples. Specifically, we
establish a geometric characterization of guardedness in terms of
paths in diagrams; we identify a notion of \emph{guarded ideal}, along
with a construction of guardedness structures from guarded ideals and
simplifications of this construction for the (co-)Cartesian and the
Cartesian closed case; and we describe `vacuous' guardedness
structures where traces do not actually generate proper diagrammatic cycles. In
terms of examples, we begin with the case where the monoidal structure
is either product (Cartesian), corresponding to guarded recursion, or
coproduct (co-Cartesian), for guarded iteration; the axioms for
guardedness allow for a basic duality that indeed makes these two
cases precisely dual. For total traces in Cartesian categories,
Hasegawa and Hyland observed that trace operators are in one-to-one
correspondence with \emph{Conway fixpoint
operators}~\cite{Hasegawa97,Hasegawa99}; we extend this
correspondence to the guarded case, showing that guarded trace
operators on a Cartesian category are in one-to-one correspondence
with guarded Conway operators. In a more specific setting, we relate
\emph{guarded} traces in Cartesian categories to \emph{unguarded}
categorical uniform fixpoints as studied by Crole and
Pitts~\cite{CrolePitts90} and by Simpson and
Plotkin~\cite{Simpson92,SimpsonPlotkin00}. Concluding with a case
where the monoidal structure is a proper tensor product, we show that
the partial trace operation on (infinite-dimensional) Hilbert spaces is an instance of
vacuous guardedness; this result relates to work by Abramsky, Blute,
and Panangaden on traces over nuclear ideals, in this case over
\emph{Hilbert-Schmidt operators}~\cite{AbramskyBluteEtAl99}.\medskip
\noindent\textbf{Related work}\quad Abstract guardedness serves to determine
definedness of a guarded trace operation, and thus relates to work on
partial traces. We discuss work on nuclear
ideals~\cite{AbramskyBluteEtAl99} in Section~\ref{sec:dag}. In
\emph{partial traced
categories}~\cite{HaghverdiScott10,MalherbeScottEtAl12}, traces are
governed by a partial equational version (consisting of both strong
and directed equations) of the Joyal-Street-Verity axioms; morphisms
for which trace is defined are called \emph{trace class}. A key
difference to the approach via guardedness is that being trace class
applies only to morphisms with inputs and outputs of matching types
while guardedness applies to arbitrary morphisms, allowing for
compositional propagation. Also, the axiomatizations are incomparable:
Unlike for trace class morphisms~\cite[Remark 2.2]{HaghverdiScott10},
we require guardedness to be closed under composition with arbitrary
morphisms (thus covering contractivity but not, e.g., monotonicity as
in the modal $\mu$-calculus); on the other hand, as noted by
Jeffrey~\cite{Jeffrey12}, guarded traces, e.g.\ of contractions, need
not satisfy Vanishing II as a Kleene equality as assumed in partial
traced categories. Some approaches treat traces as partial over
objects~\cite{BluteCockettEtAl00,Jeffrey97}. In concrete algebraic
categories, partial traces can be seen as induced by total traces in
an ambient category of relations~\cite{ArthanMartinEtAl09}. We discuss
work on guardedness via endofunctors in Remark~\ref{rem:ml}.
\section{Preliminaries}\label{sec:prelim}
We recall requisite categorical notions; see~\cite{MacLane71} for a
comprehensive introduction.\medskip
\noindent\textbf{Symmetric Monoidal Categories}\quad
A \emph{symmetric monoidal category} $(\BC,\tensor, I)$ consists of a
category $\BC$ (with object class $|\BC|$), a bifunctor $\tensor$
(\emph{tensor product}), and a \emph{(tensor) unit} $I\in |\BC|$, and
coherent isomorphisms witnessing that $\tensor$ is, up to isomorphism,
a commutative monoid structure with unit~$I$. For the latter, we
reserve the notation
$\alpha_{A,B,C}:(A\tensor B)\tensor C\cong A\tensor (B\tensor C)$
(\emph{associator}), $\gamma_{A,B}: A\tensor B\cong B\tensor A$
(\emph{symmetry}), and $\upsilon_A:I\tensor A\cong A$ (\emph{left
unitor}); the \emph{right unitor} $\hat\upsilon_A:A\tensor I\cong A$
is expressible via the symmetry.
A symmetric monoidal category is \emph{Cartesian} if the monoidal
structure is finite product (i.e.\ $\tensor =\times$, and $I=1$ is a
terminal object), and, dually, \emph{co-Cartesian} if the monoidal
structure is finite coproduct (i.e.\ $\tensor=+$, and $I=\iobj$ is an
initial object). Coproduct injections are written
$\inj_i:X_i\to X_1+X_2$ ($i=1,2$), and product projections
$\pr_i:X_1\times X_2\to X_i$. Various notions of algebraic tensor
products also induce symmetric monoidal structures; see
Section~\ref{sec:dag} for the case of Hilbert spaces. One has an
obvious expression language for objects and morphisms in symmetric
monoidal categories~\cite{Selinger11}, the former obtained by
postulating basic objects and closing under $I$ and $\tensor$, and the
latter by postulating basic morphisms of given profile and closing
under $\tensor$, $I$, composition, identities, and the monoidal
isomorphisms, subject to the evident notion of
\emph{well-typedness}. Morphism expressions are conveniently
represented as \emph{diagrams} consisting of boxes representing the
basic morphisms, with input and output gates corresponding to the
given profile. Tensoring is represented by putting boxes on top of
each other, and composition by wires connecting outputs to
inputs~\cite{Selinger11}. In a \emph{traced symmetric monoidal
category} one has an additional operation (\emph{trace}) that
essentially enables the formation of loops in diagrams, as in
Figure~\ref{fig:guarded-trace} (but without decorations).
\textbf{Monads and (Co-)algebras}\quad A(n \emph{$F$-)coalgebra} for a
functor $F:\BC\to\BC$ is a pair $(X,f:X\to FX)$ where $X\in |\BC|$,
thought of as modelling states and generalized
transitions~\cite{Rutten00}. A \emph{final coalgebra} is a final
object in the category of coalgebras (with $\BC$-morphisms $h:X\to Y$ such
that $(Fh) f = g h$ as morphisms $(X,f)\to (Y,g)$), denoted
$(\nu F,\oname{out}:\nu F\to F\nu F)$ if it exists.
Dually, an \emph{$F$-algebra} has the form $(X,f:FX\to X)$. %
A \emph{monad} $\BBT=(T,\mu,\eta)$ on a category $\BC$ consists of an
endofunctor $T$ on $\BC$ and natural transformations $\eta:\Id\to T$
(\emph{unit}) and $\mu:T^2\to T$ (\emph{multiplication}) subject to
standard equations~\cite{MacLane71}. As observed by
Moggi~\cite{Moggi91a}, monads can be seen as capturing
\emph{computational effects} of programs, with $TX$ read as a type of
computations with side effects from $T$ and results in~$X$. %
In this view, the \emph{Kleisli category} $\BC_\BBT$ of $\BBT$, which
has the same objects as $\BC$ and
$\Hom_{\BC_\BBT}(X,Y)= \Hom_{\BC}(X,TY)$, is a category of
side-effecting programs. A monad is \emph{strong} if it is equipped
with a \emph{strength}, i.e.\ a natural transformation
$X\times TY\to T(X\times Y)$ satisfying evident coherence conditions
(e.g.~\cite{Moggi91a}). A $T$-algebra $(A,a)$ is an
\emph{(Eilenberg-Moore) $\BBT$-algebra} (for the \emph{monad}~$\BBT$)
if additionally $a\comp\eta=\id$ and
$a\comp (Ta) = a\mu_A$; the category of
$\BBT$-algebras is denoted $\BC^{\BBT}$.
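Moggi's reading of Kleisli morphisms as side-effecting programs can be illustrated executably. The following Python sketch (our illustration, not part of the paper's formal development) models a Maybe-style monad, with \texttt{None} playing the role of the failure effect, and implements Kleisli composition:

```python
def kleisli(g, f):
    """Kleisli composite g <=< f for a Maybe-style monad:
    None (the failure effect) propagates through composition."""
    def composite(x):
        y = f(x)
        return None if y is None else g(y)
    return composite

eta = lambda x: x                                   # unit: values as pure computations
safe_inv = lambda x: None if x == 0 else 1.0 / x    # a program that may fail
half = lambda x: x / 2.0                            # a total program
inv_then_half = kleisli(half, safe_inv)
```

The unit laws of the monad then say precisely that \texttt{eta} is an identity for \texttt{kleisli}.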
\section{Guarded Categories}\label{sec:guarded}
\vspace{-0.09em}
\noindent We now introduce our notion of guarded structure. A standard
example of guardedness is guarded definitions in process
algebra. E.g.\ in the definition $P = a.P$, the right-hand occurrence
of $P$ is guarded, ensuring unique solvability (by a process that
keeps outputting $a$). A further example is contractivity of maps
between complete metric spaces. We formulate abstract closure
properties for \emph{partial} guardedness where only some of the
inputs and outputs of a morphism are guarded. Specifically, we
distinguish \emph{guarded outputs} and \emph{guarded inputs} ($D$
and~$B$, respectively, in the following definition), with the intended
reading that guarded outputs yield guarded data \emph{provided}
guarded data is already provided at guarded inputs, while unguarded
inputs may be fed arbitrarily.
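The process-algebra intuition can be made concrete in code: the guarded equation $P = a.P$ has a unique, productive solution because an action is emitted before each recursive unfolding. A minimal Python sketch (ours, using a generator as a stand-in for a process):

```python
def solve_P():
    """The unique solution of P = a.P: emit 'a', then behave as P again.
    The emitted action guards the recursive occurrence, ensuring
    productivity (the unguarded equation P = P yields no output)."""
    while True:
        yield 'a'

def take(n, proc):
    """Observe the first n actions of a process."""
    it = iter(proc)
    return [next(it) for _ in range(n)]
```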
\begin{figure}[t]
\begin{center}%
\small%
\includegraphics[scale=0.23]{assoc_ax.pdf}
\qquad
\includegraphics[scale=0.23]{inter_ax.pdf}
\qquad
\includegraphics[scale=0.23]{seq_ax.pdf}
\qquad
\includegraphics[scale=0.23]{par_ax.pdf}
\caption{Axioms of guarded categories}
\label{fig:gmon}
\end{center}
\vspace{-5ex}
\end{figure}
\begin{defn}[Guarded category]
\label{def:guard_sm}
An \emph{(abstractly) guarded category} is a symmetric monoidal
category $(\BC, \tensor, I)$ equipped with distinguished subsets
$\Hom^{\kern-1pt\bullet}(A\tensor B,C\tensor D)\subseteq\Hom({A\tensor B},C\tensor D)$
of \emph{partially guarded morphisms} for $A,B,C,D\in|\BC|$,
satisfying the following conditions:
\begin{description}
\item[(uni${}_{\tensor}$)] $\gamma_{I,A}\in\Hom^{\kern-1pt\bullet}(I\tensor A, A\tensor I)$;
\item[(vac${}_{\tensor}$)] $f\tensor g\in \Hom^{\kern-1pt\bullet}(A\tensor B, C\tensor D)$ for all $f:A\to C$, $g:B\to D$;
\item[(cmp${}_{\tensor}$)] $g\in\Hom^{\kern-1pt\bullet}(A\tensor B, E\tensor F)$ and\/
$f\in\Hom^{\kern-1pt\bullet}(E\tensor F, C\tensor D)$ imply $f\comp g\in \Hom^{\kern-1pt\bullet}(A\tensor B, C\tensor D)$;
\item[(par${}_{\tensor}$)] for $f\in\Hom^{\kern-1pt\bullet}(A\tensor B, C\tensor D)$,
$g\in\Hom^{\kern-1pt\bullet}(A'\tensor B', C'\tensor D')$, the evident transpose of
$f\tensor g%
%
$ %
%
%
%
%
%
is in $\Hom^{\kern-1pt\bullet}((A\tensor A')\tensor (B\tensor B'), (C\tensor C')\tensor (D\tensor D'))$.
\end{description}
We emphasize that $\Hom^{\kern-1pt\bullet}(A\tensor B,C\tensor D)$ is meant to depend
individually on $A$, $B$, $C$, $D$ and not just on $A\tensor B$ and
$C\tensor D$.
\end{defn}
\noindent
\begin{wrapfigure}{r}{0.3\textwidth}
\vspace{-30pt}
\begin{center}
\includegraphics[align=c,scale=.3]{g-morph.pdf}
\end{center}
\vspace{-30pt}
\end{wrapfigure}
One easily derives a \emph{weakening} rule stating that if
$f\in\Hom^{\kern-1pt\bullet}((A\tensor A')\tensor B,C\tensor (D'\tensor D))$, then the
obvious transpose of $f$ is in
$\Hom^{\kern-1pt\bullet}(A\tensor (A'\tensor B),(C\tensor D')\tensor D)$.
We extend the standard diagram language for symmetric monoidal
categories (Section~\ref{sec:prelim}), representing morphisms
$f\in\Hom^{\kern-1pt\bullet}(A\tensor B,C\tensor D)$ by \emph{decorated boxes} as shown
on the right, with black bars marking the \emph{unguarded input} gates
$A$ and the \emph{guarded output} gates $D$. Weakening then
corresponds to shrinking the black bars of decorated boxes.
Figure~\ref{fig:gmon} depicts the above axioms in this language.
Solid boxes represent the assumptions, while dashed boxes represent
the conclusions. The latter only occur in the derivation process and
do not form part of the actual diagrams representing concrete
morphisms. We silently identify object expressions and sets of gates
in diagrams. Given a (well-typed) morphism expression $e$, a judgement
$e\in\Hom^{\kern-1pt\bullet}(A\tensor B,C\tensor D)$, called a \emph{guardedness typing}
of~$e$, is \emph{derivable} if it can be derived from the assumed
guardedness typing of the constituent basic boxes of $e$ using the
rules in Definition~\ref{def:guard_sm}. We have an obvious notion of
(directed) \emph{paths} in diagrams; a path is
\emph{guarded}\label{def:guarded-path} if it passes some basic box~$f$
through an unguarded input gate and a guarded output gate
(intuitively, guardedness is then introduced along the path as the
passage through~$f$ will guarantee guarded output without assuming
guarded input). We then have the following geometric characterization
of guardedness typing:
\begin{thm}\label{thm:gcompl}
For a well-typed morphism expression
$e\in\Hom(A\tensor B,C\tensor D)$, the guardedness typing
$e\in\Hom^{\kern-1pt\bullet}(A\tensor B,C\tensor D)$ is derivable iff in the diagram
of~$e$, every path from an input gate in $A$ to an output gate in
$D$ is guarded.
\end{thm}
\noindent Every symmetric monoidal category has both a largest
($\Hom^{\kern-1pt\bullet}(A\tensor B,C\tensor D)=\Hom(A\tensor B,C\tensor D)$) and a
least guarded structure:
\begin{lemdefn}[Vacuous guardedness]\label{lem:triv}
Every symmetric mono\-idal category is guarded under taking
$f\in\Hom^{\kern-1pt\bullet}(A\tensor B,C\tensor D)$ iff $f$ factors~as
\begin{align*}
A\tensor B \xto{\id_A\tensor g} A\tensor E\tensor D \xto{h\tensor\id_D} C\tensor D
\end{align*}
(eliding associativity) with $g:B\to E\tensor D$, $h:A \tensor E\to C$. This is the least
guarded structure on~$\BC$, the \emph{vacuous guarded structure}.
\end{lemdefn}
E.g.\ the natural guarded structure on Hilbert spaces
(Section~\ref{sec:dag}) is vacuous.
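For finite-dimensional spaces, the trace in question is the familiar partial trace over a tensor factor. The following pure-Python sketch (a finite-dimensional stand-in for the Hilbert space setting of Section~\ref{sec:dag}; our illustration) traces out the second factor and checks the characteristic identity $\operatorname{tr}_U(A\otimes U)=\operatorname{tr}(U)\cdot A$ on a pure tensor:

```python
def kron(A, B):
    """Kronecker product of square matrices given as nested lists."""
    n, m = len(A), len(B)
    return [[A[i][j] * B[u][v] for j in range(n) for v in range(m)]
            for i in range(n) for u in range(m)]

def partial_trace(M, dim_a, dim_u):
    """Trace out the second tensor factor U of an operator on A (x) U."""
    return [[sum(M[i * dim_u + u][j * dim_u + u] for u in range(dim_u))
             for j in range(dim_a)] for i in range(dim_a)]

A = [[1.0, 2.0], [3.0, 4.0]]
U = [[2.0, 0.0], [0.0, 3.0]]
M = kron(A, U)   # a pure tensor A (x) U; here tr(U) = 5
```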
\begin{rem}[Duality]\label{rem:dual}
The rules and axioms in Figure~\ref{fig:gmon} are stable under
$180^\degree$\dash rotation, that is, under reversing arrows and
applying the monoidal symmetry on both sides (this motivates
decorating the \emph{unguarded} inputs). Consequently, if $\BC$ is
guarded, then so is the dual category $\BC^{op}$, with guardedness
given by $f\in\Hom^{\kern-1pt\bullet}_{\BC^{op}}(A\tensor B, C\tensor D)$ iff the
obvious transpose of $f$ is in $\Hom^{\kern-1pt\bullet}_{\BC}(D\tensor C,B\tensor A)$.
\end{rem}
\noindent In case $\tensor$ is coproduct, we can simplify the
description of partial guardedness:
\begin{prop}\label{prop:guard_equiv}
Partial guardedness in a co-Cartesian category\/ $(\BC, +,\iobj)$ is
equivalently determined by distinguished subsets
$\Hom_{\sigma}(X,Y)\subseteq\Hom(X,Y)$ with $\sigma$ ranging over
coproduct injections $Y_2\to Y_1+Y_2\cong Y$, subject to the rules
on the right hand side of Figure~\ref{fig:co-cart-g}, where
$f:X\to_\sigma Y$ denotes $f\in\Hom_\sigma(X,Y)$,
with $f\in\Hom^{\kern-1pt\bullet}(X_1+X_2,Y_1+Y_2)$ iff $(f\inl)\in\Hom_{\inr}(X_1,Y_1+Y_2)$.
\end{prop}
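The intended use of a morphism $f\in\Hom_{\inr}(X,Y+X)$, anticipating the guarded iteration operators discussed later, is that outputs landing in the right summand $X$ can be fed back into $f$ until a value in $Y$ appears. A set-level Python sketch (ours; coproduct values encoded as tagged pairs):

```python
def iterate(f, x):
    """Iteration f# : X -> Y of f : X -> Y + X, where a coproduct value
    is encoded as ('done', y) for inl or ('again', x1) for inr."""
    tag, val = f(x)
    while tag == 'again':
        tag, val = f(val)
    return val

# guarded in the right summand: each 'again' step strictly decreases n
halve = lambda n: ('done', n) if n % 2 else ('again', n // 2)
```

The fixpoint law $f^{\#} = [\id,f^{\#}]\comp f$ can be checked pointwise on examples.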
\begin{figure}[t!]
\begin{center}%
\small%
\begin{flalign*}
&
\lrule{\textbf{(vac${}_{\times}$)}}{f:X\to Z}{f\pr_1:X\times Y\to^{\pr_2} Z} &&
\lrule{\textbf{(vac${}_{\mplus}$)}}{f:X\to Z}{\inl f:X\to_{\inr} Z+Y}\\[2ex]
&
\lrule{\textbf{(cmp${}_{\times}$)}}{
\begin{array}{rl}
f:&X\times Y\to^{\pr_2} Z\\[1ex]
g:&V\to^{\sigma} X\qquad h:V\to Y
\end{array}}
{f\comp\brks{g,h}:V\to^{\sigma} Z}&&
\lrule{\textbf{(cmp${}_{\mplus}$)}}{
\begin{array}{rl}
f:&X\to_{\inr} Y+Z\\[1ex]
g:&Y\to_{\sigma}V\qquad h:Z\to V
\end{array}}
{[g,h]\comp f:X\to_{\sigma} V}
\\[2ex]
&
\lrule{\textbf{(par${}_{\times}$)}}{f:X\to^{\sigma} Y\qquad g:X\to^{\sigma} Z}{\brks{f,g}:X\to^\sigma Y\times Z}&&
\lrule{\textbf{(par${}_{\mplus}$)}}{f:X\to_{\sigma} Z\qquad g:Y\to_{\sigma} Z}{[f,g]:X+Y\to_{\sigma} Z}
\end{flalign*}
\caption{Axioms of Cartesian (left) and co-Cartesian (right) guarded categories}
\label{fig:co-cart-g}
\end{center}
\vspace{-5ex}
\end{figure}
We have used the mentioned rules for $\to_\sigma$ in previous work on
guarded iteration~\cite{GoncharovSchroderEtAl17} (with
\textbf{(vac${}_{\mplus}$)} called \textbf{(trv)}, and together with
weakening, which as indicated above turns out to be derivable).
By duality (Remark~\ref{rem:dual}), we immediately have a corresponding
description for the Cartesian case:
\begin{corollary}\label{cor:guard_equiv}
Partial guardedness in a Cartesian category\/ $(\BC,\times,1)$ is
equivalently determined by distinguished subsets
$\Hom^{\sigma}(X,Y)\subseteq\Hom(X,Y)$ with $\sigma$ ranging over
product projections $X\cong X_1\times X_2 \to X_1$, subject to the
rules on the left hand side of Figure~\ref{fig:co-cart-g}, where
$f:X\to^\sigma Y$ denotes $f\in\Hom^\sigma(X,Y)$, with
$f\in\Hom^{\kern-1pt\bullet}(X_1\times X_2, Y_1\times Y_2)$ iff
$\pr_2 f\in\Hom^{\pr_1}(X_1\times X_2 ,Y_2)$.
\end{corollary}
\begin{rem}\label{rem:triv-cocartesian}
In a co-Cartesian category, vacuous guardedness
(Lemma~\ref{lem:triv}) can equivalently be described by
$f\in\Hom^{\kern-1pt\bullet}(A+B,C+D)$ iff $f$ decomposes as $f=[\inl h,g]$ (uniquely
provided that $\inl$ is monic), or in terms of the description from
Proposition~\ref{prop:guard_equiv}, $u\in\Hom_{\inr}(X,Y+Z)$ iff $u$
factors through $\inl$. Of course, the dual situation obtains in
Cartesian categories.
\end{rem}
\begin{expl}[Process algebra]\label{expl:pa}
Fix a monad $\BBT$ on $(\BC,+,\iobj)$ and an endofunctor
$\Sigma:\BC\to\BC$ such that the generalized coalgebraic resumption
transform $T_{\Sigma}= \nu\gamma.\ T(\argument+\Sigma\gamma)$
exists; we think of $T_\Sigma X$ as a type of processes that have
side-effects in $\BBT$ and perform communication actions from
$\Sigma$, seen as a generalized signature. The Kleisli category
$\BC_{\BBT_{\Sigma}}$ of $\BBT_{\Sigma}$ is again
co-Cartesian. Putting
\begin{equation*}
f:X\to_{\inr} T_\Sigma (Y+Z) \iff \oname{out} f\in\{T(\inl+\id)\comp g\mid g:X\to T(Y+\Sigma T_{\Sigma} (Y+Z))\}
\end{equation*}
(cf.\ Section~\ref{sec:prelim} for notation), we make
$\BC_{\BBT_{\Sigma}}$ into a guarded
category~\cite{GoncharovSchroderEtAl17}. The standard motivating
example of finitely nondeterministic processes is obtained by taking
$\BBT=\FSet$ (finite powerset monad) and $\Sigma = A\times\argument$
(action prefixing).
\end{expl}
\begin{expl}[Metric spaces]\label{expl:m-space}
Let $\BC$ be the Cartesian category of metric spaces and
non-expansive maps. Taking $f:X\times Y\to^{\pr_2} Z$ iff
$\lambda y.\, f(x,y)$ is contractive for every $x\in X$ makes $\BC$
into a guarded Cartesian category.
\end{expl}
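Contractivity is exactly what makes the associated fixpoint equations uniquely solvable (Banach's fixpoint theorem). A quick numeric Python check (our illustration), using the map $\cos$, which is contractive on the interval the iteration settles into:

```python
import math

def fix_contractive(f, x0, tol=1e-12, max_iter=10_000):
    """Approximate the unique fixpoint of a contractive map by iterating
    from x0; successive iterates form a Cauchy sequence."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence; is f really contractive?")
```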
\section{Guardedness via Guarded Ideals}\label{sec:ideals}
Most of the time, the structure of a guarded category is determined by
morphisms with only unguarded inputs and guarded outputs, which form
an \emph{ideal}:
\begin{defn}[Guarded morphisms]
A morphism $f:X\to Y$ in a guarded category is \emph{guarded} (as
opposed to only partially guarded) if
$\upsilon^\mone_{Y}\comp f\comp\hat\upsilon_{X}\in\Hom^{\kern-1pt\bullet}(X\tensor
I\comma I\tensor Y)$;
we write $\Hom^{\kern-.2pt\scalebox{.58}{$\grd$}}(X,Y)$ for the set of guarded morphisms $f:X\to Y$.
\end{defn}
\begin{defn}[Guarded ideal]
A family~$G$ of subsets $G(X,Y)\subseteq\Hom(X,Y)$ ($X,Y\in |\BC|$)
in a monoidal category $(\BC,\tensor,I)$ is a \emph{guarded ideal}
if it is closed under $\tensor$ and under composition with arbitrary
$\BC$-morphisms on both sides, and $G(I,I)=\Hom(I,I)$.
\end{defn}
There is always a \emph{least guarded ideal},
$G(X,Y) = \{g\comp f\mid f:X\to I, g:I\to Y\}$. Moreover, as indicated
above:
\begin{lemdefn}\label{lem:total}
In a guarded category, the sets $\Hom^{\kern-.2pt\scalebox{.58}{$\grd$}}(X,Y)$ form a guarded ideal,
the guarded ideal \emph{induced} by the guarded structure.
\end{lemdefn}
\noindent Conversely, it is clear that every guarded ideal
\emph{generates} a guarded structure by just closing under the rules
of Definition~\ref{def:guard_sm}.
\begin{defn}[Ideally guarded category]\label{def:ideally-guarded}
A guarded category is \emph{ideal} or \emph{ideally guarded} (over
$G$) if it is generated by some guarded ideal~($G$).
\end{defn}
\noindent We give a more concrete description:
\begin{thm}\label{thm:t_to_p}
Let $(\BC,\tensor,I)$ be ideally guarded over $G$. Then
$\Hom^{\kern-1pt\bullet}(A\tensor B,C\tensor D)$ consists of the morphisms of the form
\begin{equation*}%
\vcenter{\hbox{\includegraphics[scale=.3]{norm_long.pdf}}}
\medskip
\end{equation*}
for $g_i$ in $G$ and arbitrary $p$, $q$, $f_i$, $h_i$.
\end{thm}
The transitions between guarded ideals and guarded structures are not
in general mutually inverse: The guarded structure generated by the
guarded ideal induced by a guarded structure may be smaller than the
original one (Example~\ref{expl:non-ideal}), and the guarded ideal
induced by the guarded structure generated by a guarded ideal $G$ may
be larger than~$G$ (Remark~\ref{rem:triv-ideal}). We proceed to
analyse details.
\begin{prop}\label{prop:triv-ideal}
On every symmetric monoidal category, the least guarded structure
(Lemma~\ref{lem:triv}) is ideal.
\end{prop}
\begin{rem}\label{rem:triv-ideal}
Vacuously guarded categories need not induce the least guarded ideal
(although by the next results, this does hold in the Cartesian and
the co-Cartesian case). In fact, by Lemma~\ref{lem:triv}, the
guarded ideal induced by the vacuous guarded structure consists of
the morphisms of the form $(h\tensor\id_D)(\id_A\tensor g)$ (eliding
associativity and the unitor) where $g:I\to E\tensor D$,
$h:A \tensor E\to I$:
\begin{equation}\label{eq:trivial-ideal}
\includegraphics[align=c,scale=.3]{trivial-ideal.pdf}
\end{equation}
This ideal will resurface in the discussion of Hilbert spaces
(Section~\ref{sec:dag}).
\end{rem}
\noindent The situation is simpler in the Cartesian and, dually, in the
co-Cartesian case.
\begin{lem}\label{lem:induce} Let $\BC$ be ideally guarded over $G$, and
suppose that every $f\in G({X\tensor Y}\comma Z)$ factors through
$\hat f\tensor\id:X\tensor Y\to V\tensor Y$ for some $\hat f\in G(X,V)$. Then
the guardedness structure of $\BC$ induces~$G$.
\end{lem}
If $\tensor=+$, the premise of the lemma is automatic, since
$f\in G(X+Y,Z)$ can be represented as
$[f\inl,f\inr] = [\id,f\inr]\comp (f\inl+\id)$ where
$f\inl\in G(X,Z)$ by the closure properties of guarded ideals. Hence, we
obtain
\begin{thm}\label{thm:cart-total}
The guarded structure generated by a guarded ideal $G$ on a
co-Cartesian category is equivalently described by
$\Hom_{\inr}(X,Y+Z)= \{[\inl,g] h\mid g\in G(W, Y+Z), h:X\to Y+
W\}$, and hence induces $G$.
\end{thm}
\begin{corollary}\label{cor:cart-total1}
The guarded structure generated by a guarded ideal $G$ on a Cartesian
category is equivalently described by
$\Hom^{\fst}(X\times Y,Z)= \{h\comp \brks{g, \snd}\mid g\in
G(X\times Y,W), h:W\times Y\to Z\}$, and hence induces $G$.
\end{corollary}
The description can be further simplified in the Cartesian closed
case.
\begin{corollary}\label{cor:cart-total2}
Given a guarded ideal $G$ on a Cartesian closed category, put
$f:X\times Y\to^{\pr_1} Z$ iff\/ $\curry f\in G(X,Z^Y)$. This
describes the guarded structure induced by $G$ iff $G$ is
\emph{exponential}, i.e.\ $f\in G(X,Y)$ implies $f^V\in G(X^V,Y^V)$.
\end{corollary}
(We leave it as an open question whether a similar characterization
holds in the monoidal closed case.) Natural examples of both ideal and
non-ideal guardedness are found in metric spaces:
\begin{expl}[Metric spaces]\label{expl:non-ideal}
The guarded structure on metric spaces from
Example~\ref{expl:m-space} fails to be ideal: It induces the guarded
ideal of contractive maps, which however generates the (ideal)
guarded structure described by $f:X\times Y\to^{\snd} Z$ iff
$f(x,y)$ is \emph{uniformly} contractive in $y$, i.e.\ there is
$c<1$ such that for every $x$, $\lambda y.\,f(x,y)$ is contractive
with contraction factor $c$.
\end{expl}
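The gap between pointwise and uniform contractivity can be seen concretely: take $f(x,y) = (1 - 1/(x+2))\cdot y$ on $X = [0,\infty)$, $Y = [0,1]$ (so that $f$ is non-expansive overall). Each $\lambda y.\,f(x,y)$ is contractive with factor $1 - 1/(x+2) < 1$, but the factors approach $1$ as $x$ grows, so no single $c < 1$ works. A small Python check (our illustration; names \texttt{f}, \texttt{factor} are ours):

```python
def f(x, y):
    """Contractive in y for each fixed x, but not uniformly so."""
    return (1.0 - 1.0 / (x + 2.0)) * y

def factor(x):
    """Lipschitz constant of f(x, .) -- f is linear in y."""
    return 1.0 - 1.0 / (x + 2.0)
```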
\noindent A large class of ideally guarded structures arises as
follows.
\begin{prop}\label{prop:mult-guard}
Let\/ $\BC$ be a Cartesian category equipped with an endofunctor \mbox{$\operatorname{\blacktriangleright}:\BC\to\BC$} and
a natural transformation $\oname{next}:\Id\to\operatorname{\blacktriangleright}$. Then the following definition
yields a guarded ideal in $\BC$:
$G(X,Y) = \{f\oname{next}\mid f:\operatorname{\blacktriangleright} X\to Y\}$.
The arising guarded structure is
$\Hom^{\fst}(X\times Y,Z) = \{f\brks{\oname{next},\snd}\mid f:\operatorname{\blacktriangleright}
(X\times Y)\times Y\to Z\}$.
If moreover $\oname{next}:X\times Y\to\operatorname{\blacktriangleright}(X\times Y)$ factors through
$\oname{next}\times\id:X\times Y\to\operatorname{\blacktriangleright} X\times Y$, then
$\Hom^{\fst}(X\times Y,Z) = \{f\comp(\oname{next}\times\id)\mid f:\operatorname{\blacktriangleright}
X\times Y\to Z\}$.
\end{prop}
\begin{rem}\label{rem:ml}
Proposition~\ref{prop:mult-guard} connects our approach to previous
work based precisely on the assumptions of the
proposition~\cite{MiliusLitak17} (in fact, the term guarded traced
category is already used there, with different meaning). A
limitation of the approach via a functor $\operatorname{\blacktriangleright}$ arises from the
need to fix $\operatorname{\blacktriangleright}$ globally, so that, e.g., the ideal guarded
structure on metric spaces (Example~\ref{expl:non-ideal}) is not
covered -- capturing contractivity via $\operatorname{\blacktriangleright}$ requires fixing a
single global contraction factor.
\end{rem}
\noindent The following instance of Proposition~\ref{prop:mult-guard}
has received extensive recent interest in programming semantics:
\begin{expl}[Topos of Trees]\label{expl:tt}
Let $\BC$ be the \emph{topos of
trees}~\cite{BirkedalMgelbergEtAl12}, i.e.\ the presheaf category
$\catname{Set}^{\omega^{op}}$ where~$\omega$ is the preorder of natural
numbers (starting from $1$) ordered by inclusion. An object $X$ of
$\BC$ is thus a family $(X(n))_{n=1,2\ldots}$ of sets with
restriction maps $r_n:X(n+1)\to X(n)$. The \emph{later}-endofunctor
$\operatorname{\blacktriangleright}:\BC\to\BC$ is defined by $\operatorname{\blacktriangleright} X(1) = \{\unit\}$ and
$\operatorname{\blacktriangleright} X(n+1) = X(n)$, and the natural transformation
$\oname{next}_X:X\to\,\operatorname{\blacktriangleright} X$ by
$\oname{next}_X (1)=\bang:X(1)\to \{\unit\}$,
$\oname{next}_X (n+1)=r_{n+1}:X{(n+1})\to X(n)$. Guarded morphisms
according to Proposition~\ref{prop:mult-guard} are called \emph{contractive},
generalizing the metric setup. Contractive morphisms form an
exponential ideal, so partial guardedness is described as in
Corollary~\ref{cor:cart-total2}, and hence agrees with contractivity
in part of the input as in
\cite[Definition~2.2]{BirkedalMgelbergEtAl12}.
%
%
%
%
%
%
%
%
%
%
\end{expl}
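A finite-depth Python model of the object part of $\operatorname{\blacktriangleright}$ (our simplification of the presheaf picture; restriction maps are omitted):

```python
def truncated_object(depth, stage):
    """An object of the topos of trees truncated at `depth`: the family
    n |-> X(n) for n = 1..depth (restriction maps omitted in this sketch)."""
    return {n: stage(n) for n in range(1, depth + 1)}

def later(X):
    """Object part of the later functor: (>X)(1) is a singleton,
    (>X)(n+1) = X(n) -- stages are shifted by one."""
    out = {1: {'*'}}
    for n in range(1, max(X)):
        out[n + 1] = X[n]
    return out
```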
\section{Guarded Traces}%
As indicated previously, the main purpose of our notion of abstract
guardedness is to enable fine-grained control over the formation of
feedback loops, viz.\ \emph{traces}.
\begin{figure}[t!]
\begin{center}%
\begin{subfigure}[t]{0.35\textwidth}
\flushleft
\includegraphics[scale=0.2]{vanishing1_ax.pdf}
\caption{Vanishing I}
\end{subfigure}%
\qquad
\begin{subfigure}[t]{0.55\textwidth}
\flushright
\includegraphics[scale=0.2]{sliding_ax1.pdf}
\caption{Sliding I}
\end{subfigure}%
\\[1em]
\begin{subfigure}[t]{0.4\textwidth}
\flushleft
\includegraphics[scale=0.2]{vanishing2_ax.pdf}
\caption{Vanishing II}
\end{subfigure}%
\quad
\begin{subfigure}[t]{0.55\textwidth}
\flushright
\includegraphics[scale=0.2]{sliding_ax2.pdf}
\caption{Sliding II}
\end{subfigure}%
\\[1em]
\begin{subfigure}[t]{0.41\textwidth}
\flushleft
\includegraphics[scale=0.2]{superposing_ax.pdf}
\caption{Superposing}
\end{subfigure}%
\quad
\begin{subfigure}[t]{0.53\textwidth}
\flushright
\includegraphics[scale=0.2]{sliding_ax3.pdf}
\caption{Sliding III}
\end{subfigure}%
\\[1em]
\begin{subfigure}[t]{0.58\textwidth}
\centering
\includegraphics[scale=0.2]{tightening_ax.pdf}
\caption{Tightening}
\end{subfigure}%
\qquad
\begin{subfigure}[t]{0.35\textwidth}
\centering
\includegraphics[scale=0.2]{yanking_ax.pdf}
\caption{Yanking}
\end{subfigure}%
\\[1em]
\caption{Axioms of guarded traced categories}
\label{fig:gtrace}
\end{center}
\vspace{-5ex}
\end{figure}%
\begin{defn}[Guarded traced category]\label{def:guarded-trace}
We call a guarded category $(\BC,\tensor,I)$ \emph{guarded traced} if it is
equipped with a \emph{guarded trace operator}
\begin{displaymath}
\oname{tr}_{A,B,C,D}^U:\Hom^{\kern-1pt\bullet}((A\tensor U)\tensor B,C\tensor (D\tensor U)) \to \Hom^{\kern-1pt\bullet}(A\tensor B,C\tensor D),
\end{displaymath}
visually corresponding to the diagram formation rule in Figure~\ref{fig:guarded-trace},
so that the adaptation of the Joyal-Street-Verity axiomatization of
traced symmetric monoidal categories~\cite{JoyalStreetEtAl96} shown in
Figure~\ref{fig:gtrace} is satisfied.
\end{defn}
\begin{rem}
The versions of the sliding axiom in Figure~\ref{fig:gtrace} differ in
the way the loop is guarded. They are in line with duality
(Remark~\ref{rem:dual}): Sliding II arises from Sliding I by
$180^\degree$ rotation, and Sliding III is symmetric under
$180^\degree$ rotation.
\end{rem}
We proceed to investigate the geometric properties of guarded traced
categories, partly extending Theorem~\ref{thm:gcompl}. The
syntactic setting extends the one for guarded categories by
additionally closing morphism expressions under the trace operator
(interpreted diagrammatically as in Figure~\ref{fig:guarded-trace}),
obtaining \emph{traced morphism expressions}. Term formation thus
becomes mutually recursive with guardedness typing: if $e$ is a traced
morphism expression such that
$e\in\Hom^{\kern-1pt\bullet}((A\tensor U)\tensor B,C\tensor (D\tensor U))$ is derivable,
then $\oname{tr}^U_{A,B,C,D}(e)$ is a traced morphism expression, and
$\oname{tr}^U_{A,B,C,D}(e)\in\Hom^{\kern-1pt\bullet}(A\tensor B,C\tensor D)$ is derivable.
\emph{Traced diagrams} consist of finitely many (decorated) basic
boxes and wires connecting output gates of basic boxes to input gates,
with each gate attached to at most one wire; open gates are regarded
as inputs or outputs, respectively, of the whole diagram. Of course,
acyclicity is not required. %
We first note that the easy direction of
Theorem~\ref{thm:gcompl} adapts straightforwardly to the setting
with traces:
\begin{prop}\label{prop:ggcompl}
Let $e$ be a traced morphism expression such that
$e\in\Hom^{\kern-1pt\bullet}(A\tensor B,C\tensor D)$ is derivable. Then in the diagram
of $e$,
all loops and all paths from input gates in $A$ to
output gates in $D$ are guarded (p.~\pageref{def:guarded-path}).
\end{prop}
\noindent Remarkably, the converse of Proposition~\ref{prop:ggcompl}
in general fails in several ways:
\begin{expl}\label{expl:n-idl-counter}
The left diagram below
\begin{equation}\label{eq:ctr-expls}
\includegraphics[align=c,scale=.25]{ctr_ex-new.pdf}\qquad\qquad
\includegraphics[align=c,scale=.25]{ctr_ex2.pdf}
\end{equation}
shows that guardedness typing is not closed under equality of traced
morphism expressions: Write $e$ for the expression inducing the
dashed box. By Proposition~\ref{prop:ggcompl},~$e$, and hence
$\oname{tr}(e)$, fail to type as indicated. However, $\oname{tr}(e)=gf$, for which
the overall guardedness typing indicated is easily derivable.
Moreover, the diagram on the right above satisfies the necessary
condition from Proposition~\ref{prop:ggcompl} but is not induced by
an expression for which the indicated guardedness typing is
derivable, essentially because both ways of cutting the loop violate
the necessary condition from Proposition~\ref{prop:ggcompl}.
\end{expl}
\noindent However, if $\BC$ is ideally guarded over a guarded
ideal $G$, we do have a converse to Proposition~\ref{prop:ggcompl}: By
Theorem~\ref{thm:t_to_p}, we can then restrict basic boxes in diagrams
to be either \emph{guarded}, i.e.\ have only black gates, or
\emph{unguarded}, i.e.\ have only white gates. %
We call
the correspondingly restricted diagrams \emph{ideally guarded}. (We
emphasize that the guardedness typing of \emph{composite} ideally
guarded diagrams still needs to mix guarded and unguarded inputs and
outputs.) A path in an ideally guarded diagram is guarded iff it passes
through a guarded basic box.
The left-hand diagram in~\eqref{eq:ctr-expls} is in fact ideally
guarded, so guardedness typing fails to be closed under equality also
in the ideally guarded case. However, for ideally guarded diagrams we
have the following converse of Proposition~\ref{prop:ggcompl}.
\begin{thm}\label{thm:ggcompl-conv}
Let $\Delta$ be an ideally guarded diagram, with sets of input and
output gates disjointly decomposed as $A\charfusion[\mathbin]{\cup}{\cdot} B$ and $C\charfusion[\mathbin]{\cup}{\cdot} D$,
respectively. If every loop in $\Delta$ and every path from a gate
in $A$ to a gate in $D$ is guarded, then $\Delta$ is induced by a
traced morphism expression $e$ such that
$e\in\Hom^{\kern-1pt\bullet}(A\tensor B,C\tensor D)$ is derivable.
\end{thm}
We next take a look at the Cartesian and co-Cartesian cases. Recall
that by Proposition~\ref{prop:guard_equiv}, the definition of guarded
category can be simplified if $\tensor=+$ (and dually if
$\tensor=\times$). This simplification extends to guarded traced
categories by generalizing Hyland-Hasegawa's equivalence between
Cartesian trace operators and Conway fixpoint
operators~\cite{Hasegawa97,Hasegawa99}.
\begin{defn}[Guarded Conway operators]
Let $\BC$ be a guarded co-Cartesian category. We call an operator $(\argument)^\istar$
of profile
\begin{align}\label{eq:conway}
f\in\Hom_{\sigma+\id}(X,Y+X)\mapsto f^\istar\in \Hom_{\sigma}(X,Y)
\end{align}
a \emph{guarded iteration operator} if it satisfies
\begin{itemize}
\item\emph{fixpoint:} $f^{\istar} = [\id,f^\istar]\comp f$ for $f:X\to_{\inr} Y+X$;
\end{itemize}
and a \emph{Conway iteration operator} if it additionally satisfies
\begin{itemize}
\item\emph{naturality:} $g\comp f^{\istar} = ((g+\id) \comp f)^{\istar}$ for $f:X\to_{\inr} Y+X$, $g : Y \to Z$;
\item\emph{dinaturality:} $([\inl, h]\comp g)^{\istar} = [\id, ([\inl, g] \comp h)^{\istar}] \comp g$ for $g : X \to_{\inr} Y + Z$ and $h:Z\to Y+X$ or $g : X \to Y + Z$ and $h:Z\to_{\inr} Y+X$;
\item\emph{(co)diagonal:} $([\id,\inr] \comp f)^{\istar} = f^{\istar\istar}$ for $f : X \to_{\inr+\id} (Y + X) + X$.
\end{itemize}
Furthermore, we distinguish the following principles:
\begin{itemize}
\item\emph{squaring~\cite{Esik99}:} $f^{\istar} = ([\inl,f]\comp f)^\istar$ for $f:X\to_{\inr} Y+X$;
\item\emph{uniformity w.r.t.\ a subcategory $\BS$ of $\BC$}:
$(\id+ h)\comp f = g\comp h$ implies $f^{\istar} = g^{\istar}\comp h$
for all $f:X\to_{\inr} Z+X$, $g:Y\to_{\inr} Z+Y$ and $h: Y\to X$ from $\BS$;
\end{itemize}
and call $(\argument)^\istar$ \emph{squarable} or \emph{uniform}
if it satisfies squaring or uniformity, respectively.
\end{defn}
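To make the fixpoint law concrete, here is a minimal executable sketch (Python, with a hypothetical tagged-pair encoding of coproducts; the encoding does not track guardedness, which in the examples below simply ensures that the unfolding terminates):

```python
def iterate(f):
    """Realize the guarded iteration operator: from f : X -> Y + X,
    produce f_star : X -> Y by repeated unfolding, i.e. the fixpoint
    law f_star = [id, f_star] . f unrolled into a loop."""
    def f_star(x):
        tag, val = f(x)
        while tag == "inr":          # output still in X: unfold again
            tag, val = f(val)
        return val                   # landed in Y
    return f_star

# Example: count up to a result; "guardedness" here amounts to progress.
f = lambda n: ("inl", n) if n >= 10 else ("inr", n + 1)
f_star = iterate(f)
# f_star(0) -> 10
```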
\emph{Guarded (Conway) recursion operators} $(\argument)_{\istar}$ on
guarded Cartesian categories are defined dually in a straightforward
manner. %
We collect the following facts about guarded iteration operators for further
reference.
\begin{lem}\label{lem:iter}
Let $(\argument)^\istar$ be a guarded iteration operator on $(\BC,+,\iobj)$.
\begin{enumerate}
\item If $(\argument)^\istar$ is uniform w.r.t.\ some co-Cartesian
subcategory of $\BC$ and satisfies the codiagonal identity then
it is squarable.
\item If $(\argument)^\istar$ is squarable and uniform w.r.t.\
coproduct injections then it is dinatural.
\item If $(\argument)^\istar$ is Conway then
it is uniform w.r.t.\ coproduct injections.
\end{enumerate}
\end{lem}
\begin{prop}\label{prop:g-co-cart}
A guarded co-Cartesian category\/ $\BC$ is traced iff it is equipped
with a guarded Conway iteration operator $(\argument)^\istar$, with
mutual conversions like in the total case~\cite{Hasegawa97,Hasegawa99}.
\end{prop}
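The conversion from iteration to trace in the co-Cartesian case can be sketched in the same hypothetical tagged-pair encoding: $\oname{tr}(f)$ enters $f$ through $X$ and feeds every $U$-output back into the $U$-input until a $Y$-output appears. The following is only the set-level shadow of the categorical construction:

```python
def trace(f):
    """From f : X + U -> Y + U, build tr(f) : X -> Y by feeding
    the U-output back into the U-input until a Y-output appears."""
    def tr(x):
        tag, val = f(("inl", x))          # enter through X
        while tag == "inr":               # output in U: feed back
            tag, val = f(("inr", val))
        return val
    return tr

# Example: f routes the X-input into the loop and decrements in U until 0.
def f(arg):
    tag, v = arg
    if tag == "inl":
        return ("inr", v)
    return ("inl", "done") if v == 0 else ("inr", v - 1)

# trace(f)(3) -> "done"
```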
\begin{expl}[Guarded Conway operators]
We list some examples of guarded Conway iteration/recursion
operators. In all cases except~\ref{item:conway}, Conwayness follows
from uniqueness of fixpoints~\cite[Theorem
17]{GoncharovSchroderEtAl17}.
\begin{cenumerate}
\item In a vacuously guarded co-Cartesian category
(Remark~\ref{rem:triv-cocartesian}), $f:X\to_{\inr} Y+Z$ iff
$f=\inl g$ for some $g:X\to Y$. If coproduct injections are monic,
then $g$ is uniquely determined, and $f^\istar = g$ defines a
guarded Conway operator.
\item\label{item:conway} Every Cartesian category $\BC$ is guarded
under $\Hom^{\pi}(X,Y) = \Hom(X,Y)$ (making every morphism
guarded). Then $\BC$ has a guarded Conway recursion operator iff
$\BC$ is a \emph{Conway category}~\cite{Esik15}, i.e.\ models
standard total recursion.
\item The guarded Cartesian category of complete metric spaces as in
Example~\ref{expl:m-space} is traced: For
$f:X\times Y\to^{\pr_2} Y$, define $f_{\istar}(x)$ as the unique
fixpoint of $\lambda y.\,f(x,y)$ according to Banach's fixpoint
theorem.
\item Similarly, the topos of trees, ideally guarded as in
Example~\ref{expl:tt}, has a guarded Conway recursion operator
obtained by taking unique
fixpoints~\cite[Theorem~2.4]{BirkedalMgelbergEtAl12}.
\item The guarded co-Cartesian category $\BC_{\BBT_{\Sigma}}$ of
side-effecting processes (Example~\ref{expl:pa}) has a guarded
Conway iteration operator obtained by taking unique fixpoints,
thanks to the universal property of the final coalgebra
$T_{\Sigma}X$~\cite{PirogGibbons14}.
\end{cenumerate}
\end{expl}
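The Banach-fixpoint recursion operator of the metric-space example can be illustrated numerically; the code below is an illustrative sketch assuming a contraction on the reals, with the unique fixpoint computed by plain iteration:

```python
import math

def banach_fixpoint(g, y0=0.0, tol=1e-12, max_iter=10000):
    """Unique fixpoint of a contraction g on the reals (Banach):
    iterate until successive values agree up to tol."""
    y = y0
    for _ in range(max_iter):
        y_next = g(y)
        if abs(y_next - y) < tol:
            return y_next
        y = y_next
    raise RuntimeError("no convergence; g may not be a contraction")

# Recursion operator sketch: f : X x Y -> Y contractive in Y;
# the operator sends x to the unique fixpoint of lambda y: f(x, y).
f = lambda x, y: 0.5 * math.cos(y) + x   # |d/dy| <= 0.5, a contraction
f_rec = lambda x: banach_fixpoint(lambda y: f(x, y))
y = f_rec(1.0)
# fixpoint property: f(1.0, y) == y up to tolerance
```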
\noindent\textbf{Guarded vs. unguarded recursion}
We proceed to present a class of examples relating guarded and unguarded
recursion. For motivation, consider the category $(\catname{Cpo},\times,1)$ of
complete partial orders (cpos) and continuous maps. This category nearly
supports recursion via least fixpoints, except that, e.g.,
$\id:X\to X$ only has a least fixpoint if $X$ has a bottom. The following equivalent approaches involve the \emph{lifting
monad} $(\argument)_{\bot}$, which adjoins a fresh bottom
$\bot$ to a given \mbox{$X\in |\catname{Cpo}|$}.
\begin{citemize}
\item[]\hspace{-1ex}\emph{Classical
approach}~\cite{Winskel93,SimpsonPlotkin00}: Define a total
recursion operator $(-)_\iistar$ on the category $\catname{Cpo}_{\bot}$ of
\emph{pointed cpos} and continuous maps, using least fixpoints.
\item[]\hspace{-1ex}\emph{Guarded approach} (cf.\,\cite{MiliusLitak17}):
Extend $\catname{Cpo}$ to a guarded category: $f:X\times Y\to^{\snd} Z$ iff
$f\in \{g\comp (\id\times\eta)\mid g:X\times Y_{\bot}\to Z\}$ (see
Proposition~\ref{prop:mult-guard}), and define a guarded recursion
operator sending $f=g\comp (\id\times\eta):Y\times X\to^{\snd} X$ to
$f_{\istar} = g\comp\brks{\id,\hat f} :Y\to X$ with
$\hat f(y)\in X_{\bot}$ calculated as the least fixpoint of
$\lambda z.\, \eta g(y, z)$.
\end{citemize}
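The guarded approach can be mimicked on a flat domain, with \texttt{None} standing in for the adjoined bottom and the least fixpoint computed by Kleene iteration. This is only an illustrative sketch: the names and the example function are ours, and continuity/monotonicity are assumed rather than checked.

```python
BOT = None   # adjoined bottom of the lifting monad (-)_bot on a flat cpo

def kleene_lfp(g, steps=100):
    """Least fixpoint of a (continuous) g on the lifted flat domain:
    iterate from bottom; on a flat domain the ascending chain
    stabilizes as soon as g produces a defined value."""
    x = BOT
    for _ in range(steps):
        x_next = g(x)
        if x_next == x:
            return x
        x = x_next
    return x

# Mirroring the guarded approach: hat_f(y) in X_bot is the least
# fixpoint of lambda z: eta(g(y, z)).
g = lambda y, z: y + 1 if z is BOT else z   # defined even on bottom input
hat_f = lambda y: kleene_lfp(lambda z: g(y, z))
# hat_f(41) -> 42
```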
Pointed cpos always happen to be of the form $X_{\bot}$ with
$X\in |\catname{Cpo}|$, which indicates that $(\argument)_{\iistar}$ is a
special case of $(\argument)_{\istar}$. This is no longer true in more
general cases when the connection between $(\argument)_{\iistar}$ and
$(\argument)_{\istar}$ is more intricate. We show that
$(\argument)_{\iistar}$ and $(\argument)_{\istar}$ are nevertheless
equivalent under reasonable assumptions.
\begin{defn}[\cite{CrolePitts90}] \label{def:let-ccc}
A \emph{let-ccc with a fixpoint object} is a tuple $(\BC, \BBT, \Omega, \omega)$,
consisting of a Cartesian closed category $\BC$, a strong monad $\BBT$ on it,
an initial $T$-algebra $(\Omega,\oname{in})$ and an equalizer $\omega:1\to\Omega$
of $\oname{in}\eta:\Omega\to\Omega$ and $\id:\Omega\to\Omega$.
\end{defn}
The key requirement is the last one, satisfied, e.g., for $\catname{Cpo}$ and
the lifting monad.
Given a monad $\BBT$ on $\BC$, $\BC^{\BBT}_{\star}$ denotes the
category of $\BBT$-algebras and $\BC$-morphisms (instead of
$\BBT$-algebra homomorphisms).
\begin{prop}[{\cite[Theorem~4.6]{Simpson92}}]\label{prop:simpson}
Let $(\BC, \BBT, \Omega, \omega)$ be a let-ccc with a fixpoint object.
Then $\BC^{\BBT}_{\star}$ has a unique $\BC^{\BBT}$-uniform recursion operator $(\argument)_{\iistar}$.
\end{prop}
By~\cite[Theorem 4]{SimpsonPlotkin00}, the operator~$(\argument)_{\iistar}$
in Proposition~\ref{prop:simpson} is Conway, in particular, by Lemma~\ref{lem:iter}, squarable,
if $\BC$ has a natural numbers object and $\BBT$ is an \emph{equational lifting
monad}~\cite{BucaloFuhrmannEtAl03}, such as $(-)_\bot$. %
There are however further squarable operators obtained via
Proposition~\ref{prop:simpson}, e.g.\ for the partial state monad
$TX = {(X\times S)^S_{\bot}}$~\cite{CrolePitts90}. By
Lemma~\ref{lem:iter}, the following result applies in particular in
the setup of Proposition~\ref{prop:simpson} under the additional
assumption of squarability.
\begin{thm}\label{thm:grec_from_rec}
Let $\BBT$
be a strong monad on a Cartesian category~$\BC$.
The following gives a bijective correspondence between squarable
dinatural recursive operators $(\argument)_{\iistar}$
on $\BC^\BBT_\star$
and squarable dinatural guarded recursive operators
$(\argument)_{\istar}$
on $\BC$
ideally guarded over $\Hom^{\kern-.2pt\scalebox{.58}{$\grd$}}(X,Y) = \{f\comp\eta\mid f:TX\to Y\}$:
\begin{align}
\label{eq:def-iistar}(f:B\times A\to A)_{\iistar} =\;&a\comp(\eta f(\id\times a))_{\istar} && \text{for $(A,a)\in|\BC^\BBT_\star|$}\\
\label{eq:def-istar}(f=g\comp (\id\times\eta):Y\times X\to X)_{\istar} =\;& g\brks{\id,(\eta g)_{\iistar}}
\end{align}
(in~\eqref{eq:def-istar} we call on a slight extension of
$(\argument)_{\iistar}$
(Lemma~\ref{lem:par_ext}); the right hand side
of~\eqref{eq:def-iistar} is defined because $\eta
f(\id\times a)$ factors as $\eta f(\id\times a
(Ta)\eta)$). Moreover,
$(\argument)_\istar$ is Conway iff so is $(\argument)_\iistar$.
\end{thm}
\section{Vacuous Guardedness and Nuclear Ideals}\label{sec:dag}
\noindent We proceed to discuss traces in vacuously guarded categories
(Lemma~\ref{lem:triv}), and show that the partial trace operation in
the category of (possibly infinite-dimensional) Hilbert
spaces~\cite{AbramskyBluteEtAl99} in fact lives over the vacuous
guarded structure. We first note that vacuous guarded structures are
traced as soon as a simple rewiring operation satisfies a suitable
well-definedness condition (similar to one defining traced nuclear
ideals~\cite[Definition~8.14]{AbramskyBluteEtAl99}):
\begin{prop}\label{prop:trivial-trace}
Let $(\BC,\tensor,I)$ be vacuously guarded. If for
$f\in\Hom^{\kern-1pt\bullet}(A\tensor B\comma{C\tensor D})$ with factorization
$f=(h\tensor\id_{D\tensor U})(\id_{A\tensor U}\tensor g)$ (eliding
associativity), $g:B\to E\tensor D\tensor U$,
$h:A \tensor U\tensor E\to C$ as per Lemma~\ref{lem:triv}, the
composite
\begin{equation}\label{eq:trivial-trace}
A\tensor B\xto{\id_A\tensor g} A\tensor E\tensor D\tensor U \cong A\tensor U\tensor E\tensor D\xto{h\tensor\id_D} C\tensor D
\end{equation}
depends only on $f$, then $\BC$ is guarded traced, with
$\oname{tr}^U_{A,B,C,D}(f)$ defined as~\eqref{eq:trivial-trace}.
\end{prop}
Diagrammatically, the trace in a vacuously guarded category is thus given
by
\begin{center}
\includegraphics[scale=.3]{nuclear_tr.pdf}
\end{center}
We proceed to instantiate the above to Hilbert spaces. On a more
abstract level, a \emph{dagger symmetric monoidal
category}~\cite{Selinger07} (or \emph{tensored
$*$-category}~\cite{AbramskyBluteEtAl99}) is a symmetric monoidal
category~$(\BC,\tensor,I)$ equipped with an identity-on-objects
strictly involutive functor $(\argument)^\dagger:\BC\to\BC^{op}$
coherently preserving the symmetric monoidal structure. %
The main motivation for
dagger symmetric monoidal categories is to capture categories that are
similar to (dagger) compact closed categories in that they admit a
canonical trace construction for certain morphisms, but fail to be
closed, much less compact closed. The ``compact closed part'' of a
dagger symmetric monoidal category is axiomatized as follows.
\begin{defn}[Nuclear Ideal,~\cite{AbramskyBluteEtAl99}]\label{def:nucl}
A \emph{nuclear ideal} $\oname{N}$ in a dagger symmetric monoidal category
$(\BC,\tensor, I, (\argument)^\dagger)$ is a family of subsets
$\oname{N}(X,Y)\subseteq\Hom_{\BC}(X,Y)$, $X,Y\in |\BC|$, satisfying the
following conditions:
\begin{cenumerate}
\item $\oname{N}$ is closed under $\tensor$, $(\argument)^\dagger$, and
composition with arbitrary morphisms on both sides;
\item There is a bijection
$\theta:\oname{N}(X,Y)\to\Hom_{\BC}(I,X^\dagger\tensor Y)$, natural in~$X$
and~$Y$, coherently preserving the dagger symmetric monoidal
structure. %
\item (\emph{Compactness}) For $f\in\oname{N}(B,A)$ and $g\in\oname{N}(B,C)$, the
following diagram commutes:
\begin{equation*}
\begin{tikzcd}
A
\ar[r, "\cong"]
\ar[d, "g\comp f^\dagger"']&
A\tensor I
\ar[r, "\id_A\tensor\theta(g)"] &[3em]
A\tensor (B^\dagger\tensor C)
\ar[d, "\cong"] \\[-.5em]
C&
I\tensor C
\ar[l, "\cong"']&
( B^\dagger\tensor A)\tensor C
\ar[l, "(\theta(f))^\dagger\tensor\id_C"]
\end{tikzcd}
\end{equation*}
\end{cenumerate}
\end{defn}
The above definition is slightly simplified in that we elide a
covariant involutive functor $\overline{(\argument)}:\BC\to\BC$,
capturing, e.g.\ complex conjugation; i.e., we essentially restrict to
spaces over the reals.
We proceed to present a representative example of a nuclear ideal in
the category of Hilbert
spaces. %
Recall that a \emph{Hilbert space}~\cite{KadisonRingrose97} $H$ over
the field $\bm{\mathsf{R}}$ of reals is a vector space with an \emph{inner
product} $\brks{\argument,\argument}:H\times H\to \bm{\mathsf{R}}$ that is
complete as a \emph{normed space} under the induced \emph{norm}
$\norm{x}=\sqrt{\brks{x,x}}$. %
Let $\catname{Hilb}$ be the category of Hilbert spaces and bounded linear
operators. %
Clearly, $\bm{\mathsf{R}}$ itself is a Hilbert space; linear operators $X\to\bm{\mathsf{R}}$
are conventionally called \emph{functionals}. More generally, we
consider \emph{(multi-)linear} functionals
$X_1\times \ldots\times X_n\to \bm{\mathsf{R}}$, i.e.\ maps that are linear in every
argument. Such a functional is \emph{bounded} if
$|f(x_1,\ldots,x_n)|\leq c\comp \norm{x_1}\cdots \norm{x_n}$ for some
constant $c\in\bm{\mathsf{R}}$. %
We can move between bounded linear operators and bounded linear
functionals, much as we can move between relations and functions into
the Booleans:
\begin{prop}[{\cite[Theorem 2.4.1]{KadisonRingrose97}}]\label{prop:fun_op}
Given a bounded linear operator $f:X\to Y$,
$f^\circ(x,y) = \brks{fx, y}$ defines a bounded linear
functional~$f^\circ$, and every bounded linear functional
$X\times Y\to\bm{\mathsf{R}}$ arises in this way.
\end{prop}
\begin{defn}[Hilbert-Schmidt operators/functionals]
A bounded linear functional $f:X_1\times \ldots\times X_n\to \bm{\mathsf{R}}$ is
\emph{Hilbert-Schmidt} if the sum
%
\begin{displaymath}\textstyle
\sum\nolimits_{x_1\in B_1}\ldots\sum\nolimits_{x_n\in B_n} (f(x_1, \ldots, x_n))^2
\end{displaymath}
is finite for some, and then any, orthonormal bases $B_1,\ldots,B_n$
of $X_1,\ldots,X_n$, respectively. A bounded linear operator
$f:X\to Y$ is \emph{Hilbert-Schmidt} if the induced functional
$f^\circ$ (Proposition~\ref{prop:fun_op}) is Hilbert-Schmidt,
equivalently if $\sum_{x\in B} \norm{f x}^2$ is finite for some, and
then any, orthonormal basis $B$ of $X$. We denote by $\oname{HS}(X,Y)$ the
space of all Hilbert-Schmidt operators from $X$ to $Y$.
\end{defn}
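In finite dimension every bounded operator is Hilbert-Schmidt, and over the standard basis the sum $\sum_{x\in B}\norm{fx}^2$ is just the sum of squared matrix entries (the squared Frobenius norm). A small self-contained check (Python, purely illustrative):

```python
import math

def apply(M, v):
    """Apply the matrix M to the vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def hs_norm(M):
    """||f||_2 = sqrt of the sum over an orthonormal basis of ||f x||^2;
    with the standard basis this is the Frobenius norm of M."""
    n = len(M[0])
    basis = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    return math.sqrt(sum(sum(c * c for c in apply(M, e)) for e in basis))

M = [[3.0, 0.0], [4.0, 0.0]]
# hs_norm(M) -> 5.0
```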
For $X,Y\in|\catname{Hilb}|$, the space of Hilbert-Schmidt functionals
$X\times Y\to\bm{\mathsf{R}}$ is itself a Hilbert space, denoted $X\tensor Y$,
with the pointwise vector space structure and the inner product
$\brks{f,g} = \sum\nolimits_{x\in B}\sum\nolimits_{y\in B'} f(x,
y)\comp g(x, y)$, %
where $B$ and $B'$ are orthonormal bases of $X$ and $Y$, respectively.
By virtue of the equivalence between $f$ and $f^\circ$, this induces a
Hilbert space structure on $\oname{HS}(X,Y)$, with induced norm
$\norm{f}_2 =\sqrt{\sum\nolimits_{x\in B} \norm{fx}^2}$. The operator
$\tensor$ forms part of a dagger symmetric monoidal structure on
$\catname{Hilb}$, with unit $\bm{\mathsf{R}}$. For a bounded linear operator $f:X\to Y$,
$f^\dagger:Y\to X$ is the \emph{adjoint operator} uniquely determined
by equation $\brks{x, f^\dagger y} = \brks{fx, y}$. The tensor
product of $f:A\to B$ and $g:C\to D$ is the functional sending
$h:A\times C\to\bm{\mathsf{R}}$ to
$h\comp (f^\dagger\times g^\dagger):B\times D\to\bm{\mathsf{R}}$. Given
$a\in A$ and $c\in C$, let us denote by $a\tensor c\in A\tensor C$ the
functional $(a',c')\mto \brks{a,a'}\comp\brks{c,c'}$, and so, with the
above $f$ and $g$, $(f\tensor g)(a\tensor c) = f(a)\tensor g(c)$.
\begin{prop}\label{prop:hs-nucl}
\cite{AbramskyBluteEtAl99} The Hilbert-Schmidt operators form a
nuclear ideal in $\catname{Hilb}$ with
$\theta:\oname{HS}(X,Y)\cong\Hom(\bm{\mathsf{R}},X^\dagger\tensor Y)$ defined by
\begin{equation*}
\theta(f:X\to Y)(r:\bm{\mathsf{R}})(x:X, y:Y) = r\comp\brks{fx,y}.
\end{equation*}
\end{prop}
A crucial fact underlying the proof of Proposition~\ref{prop:hs-nucl}
is that $\oname{HS}(X,Y)$ is isomorphic to $X^\dagger\tensor Y$, naturally in
$X$ and $Y$. We emphasize that what makes the case of $\catname{Hilb}$
significant is that we do not restrict to finite-dimensional Hilbert
spaces. In that case all bounded linear operators would be
Hilbert-Schmidt and the corresponding category would be (dagger)
compact closed~\cite{Selinger07}. In the infinite-dimensional case,
identities need not be Hilbert-Schmidt, so $\oname{HS}$ is indeed only an
ideal and not a subcategory.
Let $\oname{N}^2(X,Y) = \{g^\dagger h:X\to Y\mid h\in\oname{N}(X,Z), g\in\oname{N}(Y,Z)\}$ for any
nuclear ideal $\oname{N}$. The main theorem of the section now can be stated as follows.
\begin{thm}\label{thm:hilbert-schmidt}
\begin{cenumerate}
\item\label{item:hs2} The guarded ideal induced by the vacuous guarded
structure on $\catname{Hilb}$ (see~\eqref{eq:trivial-ideal}) is precisely $\oname{HS}^2$, and $\catname{Hilb}$ is guarded
traced over $\oname{HS}^2$.
\item\label{item:hilb-trace-dagger} Guarded traces in $\catname{Hilb}$ commute
with $(-)^\dagger$ in the sense that if
$f\in\Hom^{\kern-1pt\bullet}((A\tensor U)\tensor B,C\tensor (D\tensor U))$, then
$\gamma_{B,A\tensor U} f^\dagger \gamma_{D\tensor U,C}\in
\Hom^{\kern-1pt\bullet}((D\tensor U)\tensor C,B\tensor (A\tensor U))$
and
$\oname{tr}^U_{D,C,B,A}(\gamma_{B,A\tensor U} f^\dagger \gamma_{D\tensor
U,C}) =
\gamma_{A,B}\comp(\oname{tr}_{A,B,C,D}^U(f))^\dagger\comp\gamma_{C,D}$.
\end{cenumerate}
\end{thm}
Clause~\ref{item:hs2} is a generalization of the result
in~\cite[Theorem 8.16]{AbramskyBluteEtAl99} to parametrized
traces. Specifically, we obtain agreement with the conventional
mathematical definition of trace:
given $f\in\oname{HS}^2(X,X)$, $\oname{tr}(f) = \sum_i \brks{f(e_i), e_i}$
for any choice of an orthonormal basis $(e_i)_i$, and $\oname{HS}^2(X,X)$
contains precisely those $f$ for which this sum is absolutely
convergent independently of the basis.
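The basis independence of this trace is easy to check in finite dimension, where every operator lies in $\oname{HS}^2$. The following Python sketch (our own illustration) evaluates $\sum_i\brks{f(e_i),e_i}$ for a $2\times 2$ matrix over the standard basis and over a rotated orthonormal basis:

```python
import math

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def tr_via_basis(M, basis):
    """tr(f) = sum_i <f(e_i), e_i> over an orthonormal basis (e_i)."""
    return sum(dot(apply(M, e), e) for e in basis)

M = [[2.0, 1.0], [0.5, 3.0]]
std = [[1.0, 0.0], [0.0, 1.0]]
th = 0.7
rot = [[math.cos(th), math.sin(th)], [-math.sin(th), math.cos(th)]]
# Both orthonormal bases give the same value, tr(M) = 5.0.
```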
\section{Conclusions and Further Work}
\noindent We have presented and investigated a notion of abstract
\emph{guardedness} and guarded \emph{traces}, focusing on foundational
results and important classes of examples. We have distinguished a
more specific notion of \emph{ideal guardedness}, which in many
respects appears to be better behaved than the unrestricted one, in
particular ensures closer agreement between structural and geometric
guardedness. An unexpectedly prominent role is played by `vacuous'
guardedness, characterized by the absence of paths connecting
unguarded inputs to guarded outputs; e.g., partial traces in Hilbert
spaces~\cite{AbramskyBluteEtAl99} turn out to be based on this form of
guardedness. Further research will concern a coherence theorem for
guarded traced categories generalizing the well-known unguarded
case~\cite{JoyalStreetEtAl96,Selinger04}, and a generalization of the
Int-construction~\cite{JoyalStreetEtAl96}, which would relate guarded
traced categories to a suitable guarded version of compact closed
categories. Also, we plan to investigate guarded traced categories as
a basis for generalized Hoare logics, extending and unifying previous
work~\cite{ArthanMartinEtAl09,GoncharovSchroder13a}.
\newpage
\bibliographystyle{myabbrv}
\section{Introduction}
Semi-classical approaches to investigating quantum gravity phenomena are enormously useful, and any achievement coming from them should be expected to be present in the not yet known fully-fledged theory of quantum gravity. In this sense, an interesting phenomenological framework for studying quantum gravity effects is that of Doubly General Relativity or, as it is mostly known, Gravity's Rainbow \cite{Magueijo:2002am,Magueijo:2002xx, AmelinoCamelia:2008qg}. In this framework, the definition of a nontrivial dual space, as a consequence of a nonlinear Lorentz transformation in momentum space, implies that the metric describing the spacetime should be energy-dependent, leading to a modification of the relativistic dispersion relation. This energy is that of the particle(s) ultimately probing the spacetime: for each value of the particle frequency, a different geometry is felt.
In the context of Gravity's Rainbow, there are in fact two observer-independent (invariant) quantities: the speed of light and the Planck length (energy) \cite{AmelinoCamelia:2000mn,Magueijo:2001cr, Galan:2004st}. Therefore, at high-energy scales (of the order of the Planck scale) the relativistic dispersion relation should acquire corrections \cite{Jacob:2010vr, AmelinoCamelia:1997gz}. This is in accordance, for instance, with the ultra-high-energy cosmic ray and TeV photon phenomena detected in experiments, which suggest the need for a modification of the relativistic dispersion relation \cite{AmelinoCamelia:1997gz,Magueijo:2002xx,AmelinoCamelia:2000mn, AmelinoCamelia:2008qg}.
The semi-classical approach of Gravity's Rainbow has been used, for instance, to investigate cosmological and astrophysical phenomena in a variety of contexts, ranging from the Friedmann-Robertson-Walker Universe \cite{Khodadi:2016bcx, Awad:2013nxa, Majumder:2013ypa, Hendi:2017vgo} and black hole thermodynamics \cite{Hendi:2016dmh, Hendi:2015cra, Hendi:2015bba, Hendi:2016vux, Hendi:2016njy} to neutron stars' properties \cite{Hendi:2015vta}, a massive scalar field in the Schwarzschild metric \cite{Leiva:2008fd, Li:2008gs, Bezerra:2017hrb} and the Casimir effect \cite{Bezerra:2017zqq}. These works show a growing interest in the semi-classical approach of Gravity's Rainbow as a means of investigating quantum gravity effects through corrections to the dispersion relation of relativistic quantum fields. Hence, in this work, we discuss, within the context of gravity's rainbow, the effects that stem from high-order corrections to the relativistic dispersion relation of the Dirac oscillator in the cosmic string spacetime. The Dirac oscillator \cite{Moshinsky,Villalba:1993fq} is a relativistic model for the well-known harmonic oscillator \cite{Landau, Griffiths}, characterized by a coupling that keeps the Dirac equation linear in both spatial coordinates and momenta. Moreover, in the nonrelativistic limit of the Dirac equation, it recovers the energy spectrum of the harmonic oscillator with a strong spin-orbit coupling. It has inspired a great deal of work \cite{Rozmej:1999jv, Boumali2013, Quesne:2004pp, 2008PhRvA..77c3832B, Karwowski2007, Mandal:2009we, PhysRevA.84.032109, Bakke:2013wla, Bakke:2012zz, Bakke:2013sla, Bakke:2011qr, Hassanabadi:2015hxa}. Our interest is therefore to extend the discussion of the semi-classical approach of gravity's rainbow by analysing the influence of backgrounds determined by gravity's rainbow on the energy spectrum of the Dirac oscillator.
The structure of this paper is as follows: in Sec.~2 we introduce the cosmic string spacetime and the essential aspects of the gravity's rainbow framework that will modify the spacetime considered. Next, we solve the Dirac equation in the modified cosmic string spacetime, obtain the energy levels in the two gravity's rainbow scenarios presented, and discuss the results by comparing them with the energy levels of the Dirac oscillator obtained previously without gravity's rainbow. Finally, in Sec.~3 we present the conclusions. Throughout the paper we use natural units $G = \hbar = c = 1.$
\section{Dirac equation in the cosmic string gravity's rainbow}
In this section, we study the behaviour of the Dirac oscillator in two gravity's rainbow scenarios, which we shall specify later. We start by introducing the spacetime background that we wish to work with, namely, the cosmic string spacetime described by the following line element \cite{VS,hindmarsh}:
\begin{eqnarray}
ds^{2}=-dt^{2}+dr^{2}+\eta^{2}r^{2}d\varphi^{2}+dz^{2},
\label{1.1}
\end{eqnarray}
where the parameter $\eta$ is related to the deficit angle and is defined as $\eta=1-4\varpi$, with $\varpi$ being the linear mass density of the cosmic string. In the cosmic string spacetime, we have $\eta<1$ \cite{Katanaev:1992kh,Furtado:1994nq}. Furthermore, in these cylindrical coordinates, $0\,<\,r\,<\,\infty$, $0\leq\varphi\leq2\pi$ and $-\infty\,<\,z\,<\,\infty$. A cosmic string is a topological defect characterized by a spacetime with conical topology; cosmic strings may have been produced by phase transitions in the early universe, as predicted in extensions of the Standard Model of Particle Physics \cite{VS,hindmarsh}. They are also predicted in the framework of String Theory \cite{Hindmarsh:2011qj}. Once formed, cosmic strings can evolve in the Universe and contribute to a variety of astrophysical, cosmological and gravitational phenomena \cite{VS,hindmarsh,escidoc:153364, Mota:2014uka}, making the physics associated with them highly relevant.
Let us now consider the framework of gravity's rainbow. In the latter, as it has been said previously, the high-energy scale regime dictates that the relativistic dispersion relation, associated with a given quantum field of mass $m$ and momentum $p$, has to be modified according to \cite{Magueijo:2002am,Magueijo:2002xx,Bezerra:2017zqq}
\begin{eqnarray}
E^{2}\,g_{0}^{2}\left(x\right)-p^{2}\,g_{1}^{2}\left(x\right)=m^{2}.
\label{1.2}
\end{eqnarray}
The functions $g_{0}\left(x\right)$ and $g_{1}\left(x\right)$ are called rainbow functions and their argument, $x=E/E_{P}$, is the ratio of the energy of the probe particle to the Planck energy $E_{P}$. This ratio regulates how strongly the probe particle and the spacetime background affect each other. Therefore, in low-energy regimes we must have
\begin{equation}
\lim_{x\rightarrow 0} g_{i}(x) = 1, \qquad \text{with} \qquad i=0,1.
\end{equation}
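For a fixed probe momentum, the modified dispersion relation determines the energy only implicitly, since the rainbow functions themselves depend on $x=E/E_P$. The following numerical sketch (Python, Planck units $E_P=1$; the solver and its parameters are our own illustration, not taken from the literature) solves Eq. \eqref{1.2} for $E>0$ by bisection and recovers the usual relation $E=\sqrt{p^2+m^2}$ in the low-energy limit $g_0=g_1=1$:

```python
def solve_dispersion(p, m, g0, g1, E_max=1e6, iters=200):
    """Solve E^2 g0(x)^2 - p^2 g1(x)^2 = m^2 for E > 0, with
    x = E/E_P in Planck units (E_P = 1), by bisection."""
    f = lambda E: (E * g0(E))**2 - (p * g1(E))**2 - m**2
    lo, hi = 0.0, E_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Low-energy limit g0 = g1 = 1 recovers E = sqrt(p^2 + m^2).
one = lambda x: 1.0
# solve_dispersion(3.0, 4.0, one, one) -> 5.0 (approximately)
```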
It should be mentioned that it is possible to get gravity's rainbow scenarios from theories with non-commutative geometries or other theories for quantum gravity such as loop quantum gravity \cite{Ashour:2016cay, 2016arXiv160806824F, Alsaleh:2017oae}. It means that the rainbow functions can also be derived from these theories.
In the context of gravity's rainbow, the line element of the cosmic string (\ref{1.1}) becomes
\begin{eqnarray}
ds^{2}=-\frac{1}{g_{0}^{2}\left(x\right)}\,dt^{2}+\frac{1}{g_{1}^{2}\left(x\right)}\left[dr^{2}+\eta^{2}r^{2}d\varphi^{2}+dz^{2}\right].
\label{1.a}
\end{eqnarray}
This modification of the cosmic string spacetime by gravity's rainbow has been analysed previously in Ref. \cite{Momeni:2017cvl}.
In the following subsections we shall consider the Dirac oscillator in the background described by the line element \eqref{1.a} in order to see how the resulting energy levels get modified by the rainbow functions considered. These should generalize the energy levels associated with the Dirac oscillator in the background given by the line element \eqref{1.1}. As is known \cite{Bakke:2013wla}, the energy levels of the Dirac oscillator in the cosmic string background are given by the expression
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.4\textwidth]{fig1}
\includegraphics[width=0.4\textwidth]{fig2}
\caption{\small{Plot of the energy levels \eqref{1.23} of the Dirac oscillator in the cosmic string spacetime, in units of the Planck energy, in terms of the ratio of the frequency $\omega$ to the Planck energy. For both plots we consider $\frac{m}{E_P}=0.8$ and $\eta = \frac{1}{q}$.}}
\label{f1}
\end{center}
\end{figure}
\begin{eqnarray}
E_{\sigma}=\pm\sqrt{m^{2}+4m\omega\left[n+\frac{\left|\nu\right|}{2\,\eta}-s\frac{\nu}{2\eta}\right]},
\label{1.23}
\end{eqnarray}
where $\omega$ is the frequency of the Dirac oscillator, $m$ is the mass and $\sigma = (n,l,s)$ is the set of quantum numbers which take the values: $n=0,1,2,...$, $l=0,\pm 1, \pm 2,...$ and $s=\pm 1$. The parameter $\nu$ is given by
\begin{equation}
\nu = l+\frac{1}{2}\left(1-s\right)+\frac{s}{2}\left(1-\eta\right).
\label{nu}
\end{equation}
Therefore, we can see that the energy levels above depend not only on the quantum numbers, the mass and the frequency, but also on the cosmic string spacetime parameter $\eta$. In the limit $\eta\rightarrow 1$, one recovers the Dirac oscillator in Minkowski spacetime \cite{Bakke:2013wla}.
The expression in Eq. \eqref{1.23} is plotted in Fig.~\ref{f1}, in units of the Planck energy, in terms of the ratio of the Dirac oscillator frequency to the Planck energy. For both plots we consider $\frac{m}{E_P} = 0.8$. The scale in which these energy levels are plotted is chosen for later comparison with the ones from the gravity's rainbow scenarios. On the left, the plot shows symmetric curves for the values of the parameters indicated in the figure. The two internal curves (solid-blue and dashed-red) are for $n=0$, while the two external ones (solid-blue and dashed-red) are for $n=40$. It is clear that if we decrease the cosmic string parameter $\eta=\frac{1}{q}$, the positive and negative energies increase and decrease, respectively. The plot on the right, in turn, shows symmetric curves for another set of parameter values, as indicated in the figure. The dashed-orange and solid-blue curves correspond to the two values of the spin, $s=\pm 1$. We can see that, for the same value of $n$, the energy becomes bigger or smaller depending on whether we take $s=1$ or $s=-1$, respectively. One could take different values for the quantum numbers $n$ and $l$, but this would only increase or decrease the energy, keeping the same shape of the graph. Note that although the plots may lead the reader to believe that the energy levels go to zero when the frequency also goes to zero, this is not the case: the energy attains a nonzero value at $\omega = 0$.
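For concreteness, Eqs. \eqref{1.23} and \eqref{nu} can be evaluated directly; the following Python sketch (our own illustrative code) reproduces, for instance, the nonzero value $E=\pm m$ at $\omega=0$ noted above:

```python
import math

def nu(l, s, eta):
    """Eq. (nu): effective angular parameter."""
    return l + 0.5 * (1 - s) + 0.5 * s * (1 - eta)

def energy(n, l, s, eta, m, omega, sign=+1):
    """Eq. (1.23): energy levels of the Dirac oscillator in the
    cosmic string spacetime (no rainbow corrections)."""
    v = nu(l, s, eta)
    return sign * math.sqrt(
        m**2 + 4 * m * omega * (n + abs(v) / (2 * eta) - s * v / (2 * eta)))

# At omega = 0 the energy is nonzero: E = +/- m.
# energy(0, 0, 1, 0.5, 0.8, 0.0) -> 0.8
```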
\subsection{First scenario of gravity's rainbow}
Let us now construct a scenario of the cosmic string gravity's rainbow by using the following rainbow functions \cite{Bezerra:2017zqq,Khodadi:2016bcx, Awad:2013nxa,Hendi:2017vgo}:
\begin{eqnarray}
g_{0}\left(x\right)=g_{1}\left(x\right)=\frac{1}{1-\epsilon x},
\label{1.3}
\end{eqnarray}
where we shall treat $\epsilon$ as a first-order parameter for our purposes. These rainbow functions satisfy the requirement of a constant velocity of light and solve the horizon problem \cite{Bezerra:2017zqq,Khodadi:2016bcx, Awad:2013nxa,Hendi:2017vgo}. In this first scenario of gravity's rainbow, described by the rainbow functions in Eq. \eqref{1.3}, the cosmic string line element \eqref{1.1} is modified according to \eqref{1.a} as
\begin{eqnarray}
ds^{2}=-\left(1-\epsilon x\right)^{2}\,dt^{2}+\left(1-\epsilon x\right)^{2}\left[dr^{2}+\eta^{2}\,r^{2}\,d\varphi^{2}+dz^{2}\right].
\label{1.4}
\end{eqnarray}
Thus, we can note that by taking $\epsilon\rightarrow0$ we recover the cosmic string line element (\ref{1.1}), as it should be.
We can go further by using the spinor theory in curved space \cite{birrell1984quantum}. In this way, spinors are defined locally by introducing a noncoordinate basis $\hat{\theta}^{a}=e^{a}_{\,\,\,\mu}\left(x\right)\,dx^{\mu}$, whose components $e^{a}_{\,\,\,\mu}\left(x\right)$ are called tetrads and give rise to the local reference frame of the observers. The tetrads satisfy the following relation:
\begin{eqnarray}
g_{\mu\nu}\left(x\right)=e^{a}_{\,\,\,\mu}\left(x\right)e^{b}_{\,\,\,\nu}\left(x\right)\,\eta_{ab},
\label{1.5}
\end{eqnarray}
where $\eta_{ab}=\mathrm{diag}\left(-\,+\,+\,+\right)$ is the Minkowski tensor. We also have the inverse of the tetrads, which is given by $dx^{\mu}=e^{\mu}_{\,\,\,a}\left(x\right)\,\hat{\theta}^{a}$, with $e^{a}_{\,\,\,\mu}\left(x\right)e^{\mu}_{\,\,\,b}\left(x\right)=\delta^{a}_{b}$. Thereby, from the modified line element of the cosmic string spacetime given in Eq. (\ref{1.4}), we can write the tetrads as
\begin{eqnarray}
\hat{\theta}^{0}=\left(1-\epsilon x\right)dt;\,\,\,\hat{\theta}^{1}=\left(1-\epsilon x\right)dr;\,\,\,\hat{\theta}^{2}=\eta\,r\left(1-\epsilon x\right)d\varphi;\,\,\,\hat{\theta}^{3}=\left(1-\epsilon x\right)dz.
\label{1.6}
\end{eqnarray}
Then, by solving the Maurer-Cartan structure equations in the absence of torsion \cite{nakahara2003geometry}, $d\hat{\theta}^{a}+\omega^{a}_{\,\,\,b}\wedge\hat{\theta}^{b}=0$ (where $\omega^{a}_{\,\,\,b}=\omega^{\,\,\,a}_{\mu\,\,\,b}\left(x\right)\,dx^{\mu}$), we obtain $\omega_{\varphi\,2\,1}\left(x\right)=-\omega_{\varphi\,1\,2}\left(x\right)=\eta$. Thus, the spinorial connection $\Gamma_{\mu}\left(x\right)=\frac{i}{4}\,\omega_{\mu ab}\left(x\right)\,\Sigma^{ab}$ ($\Sigma^{ab}=\frac{i}{2}\left[\gamma^{a},\gamma^{b}\right]$) has the following nonvanishing component
\begin{eqnarray}
\Gamma_{\varphi}\left(x\right)=-\frac{i}{2}\,\eta\,\Sigma^{3}.
\label{1.7}
\end{eqnarray}
Note that we have defined the $\gamma^{a}$ matrices in the local reference frame, where they correspond to the Dirac matrices in the Minkowski spacetime \cite{greiner1990relativistic,Bjorken}:
\begin{eqnarray}
\gamma^{0}=\left(
\begin{array}{cc}
1 & 0 \\
0 & -1 \\
\end{array}\right);\,\,\,\,\,\,
\gamma^{i}=\left(
\begin{array}{cc}
0 & \sigma^{i} \\
-\sigma^{i} & 0 \\
\end{array}\right);\,\,\,\,\,\,\Sigma^{i}=\left(
\begin{array}{cc}
\sigma^{i} & 0 \\
0 & \sigma^{i} \\
\end{array}\right),
\label{1.8}
\end{eqnarray}
with $\vec{\Sigma}$ as being the spin vector. The matrices $\sigma^{i}$ are the Pauli matrices and satisfy the relation $\frac{1}{2}\left(\sigma^{i}\,\sigma^{j}+\sigma^{j}\,\sigma^{i}\right)=\eta^{ij}$. The $\gamma^{\mu}$ matrices are related to the $\gamma^{a}$ matrices via $\gamma^{\mu}=e^{\mu}_{\,\,\,a}\left(x\right)\gamma^{a}$ \cite{birrell1984quantum}.
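As a side illustration (our own numerical sketch, not part of the original text), the Pauli-matrix relation quoted above, which for the spatial indices reduces to $\frac{1}{2}\left(\sigma^{i}\sigma^{j}+\sigma^{j}\sigma^{i}\right)=\delta^{ij}$, can be verified directly with plain $2\times2$ complex matrices:

```python
# Verify (sigma^i sigma^j + sigma^j sigma^i)/2 = delta^{ij} * I
# for the Pauli matrices, using standard-library Python only.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

I2 = [[1, 0], [0, 1]]
sigma = {
    1: [[0, 1], [1, 0]],      # sigma^1
    2: [[0, -1j], [1j, 0]],   # sigma^2
    3: [[1, 0], [0, -1]],     # sigma^3
}

def anticommutator_half(i, j):
    """Return (sigma^i sigma^j + sigma^j sigma^i) / 2."""
    return scale(0.5, add(matmul(sigma[i], sigma[j]),
                          matmul(sigma[j], sigma[i])))

for i in (1, 2, 3):
    for j in (1, 2, 3):
        expected = I2 if i == j else [[0, 0], [0, 0]]
        got = anticommutator_half(i, j)
        assert all(abs(got[r][c] - expected[r][c]) < 1e-12
                   for r in range(2) for c in range(2))
```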
The Dirac oscillator \cite{Moshinsky,Villalba:1993fq} is a relativistic model for the harmonic oscillator, which is introduced into the Dirac equation through the coupling $\vec{p}\rightarrow\vec{p}-im\omega\,r\,\gamma^{0}\,\hat{r}$, where $\omega$ is the oscillator frequency. With this coupling, the Dirac equation remains linear in both the spatial coordinates and the momenta. Therefore, the covariant form of the Dirac equation for this relativistic quantum oscillator is
\begin{eqnarray}
i\gamma^{\mu}\partial_{\mu}\psi+i\gamma^{\mu}\Gamma_{\mu}\left(x\right)\psi+i\,m\,\omega\,\gamma^{\mu}\,X_{\mu}\,\gamma^{0}\psi=m\psi,
\label{1.9}
\end{eqnarray}
where $X_{\mu}=\left(0,r,0,0\right)$. In this way, by using (\ref{1.6}), (\ref{1.7}) and (\ref{1.8}), the Dirac equation for the Dirac oscillator becomes ($\hbar=c=1$)
\begin{eqnarray}
i\frac{\partial\psi}{\partial t}=m\,\left(1-\epsilon x\right)\,\gamma^{0}\psi-i\gamma^{0}\gamma^{1}\left(\frac{\partial}{\partial r}+\frac{1}{2r}+m\omega r\,\gamma^{0}\right)\psi-i\frac{\gamma^{0}\gamma^{2}}{\eta\,r}\,\frac{\partial\psi}{\partial\varphi}-i\gamma^{0}\gamma^{3}\,\frac{\partial\psi}{\partial z}.
\label{1.10}
\end{eqnarray}
The solution to the Dirac equation (\ref{1.10}) is given in the form:
\begin{eqnarray}
\psi=e^{-i\,E\,t}\,\left(
\begin{array}{c}
\phi_{1}\\
\phi_{2}\\
\end{array}\right),
\label{1.11}
\end{eqnarray}
where $\phi_{1}$ and $\phi_{2}$ are two-component spinors \cite{Bakke:2013wla}. Then, we obtain two coupled equations for $\phi_{1}$ and $\phi_{2}$, the first of which is
\begin{eqnarray}
\left[E-m\,\left(1-\epsilon x\right)\right]\phi_{1}=-i\sigma^{1}\left[\frac{\partial}{\partial r}+\frac{1}{2r}-m\omega r\right]\phi_{2}-i\frac{\sigma^{2}}{\eta\,r}\,\frac{\partial\phi_{2}}{\partial\varphi}-i\sigma^{3}\,\frac{\partial\phi_{2}}{\partial z},
\label{1.12}
\end{eqnarray}
while the second coupled equation is
\begin{eqnarray}
\left[E+m\,\left(1-\epsilon x\right)\right]\phi_{2}=-i\sigma^{1}\left[\frac{\partial}{\partial r}+\frac{1}{2r}+m\omega r\right]\phi_{1}-i\frac{\sigma^{2}}{\eta\,r}\,\frac{\partial\phi_{1}}{\partial\varphi}-i\sigma^{3}\,\frac{\partial\phi_{1}}{\partial z}.
\label{1.13}
\end{eqnarray}
By eliminating $\phi_{2}$ from Eqs. (\ref{1.12}) and (\ref{1.13}), we obtain the following equation for $\phi_{1}$:
\begin{eqnarray}
\left[E^{2}-m^{2}\left(1-\epsilon x\right)^{2}\right]\phi_{1}&=&-\frac{\partial^{2}\phi_{1}}{\partial r^{2}}-\frac{1}{r}\frac{\partial\phi_{1}}{\partial r}-\frac{1}{\eta^{2}r^{2}}\frac{\partial^{2}\phi_{1}}{\partial\varphi^{2}}-\frac{\partial^{2}\phi_{1}}{\partial z^{2}}+\frac{i\sigma^{3}}{\eta\,r^{2}}\frac{\partial\phi_{1}}{\partial\varphi}+\frac{1}{4\,r^{2}}\phi_{1}\nonumber\\
&+&m^{2}\omega^{2}\,r^{2}\,\phi_{1}-m\omega\,\phi_{1}+\frac{2m\omega}{\eta}\,i\sigma^{3}\,\frac{\partial\phi_{1}}{\partial\varphi}-2m\omega\,r\,i\sigma^{2}\,\frac{\partial\phi_{1}}{\partial z}.
\label{1.14}
\end{eqnarray}
Observe that $\sigma^{3}\phi_{1}=s\phi_{1}$, with $s=\pm1$, where $\phi_{1}=\left(f_{+}\,\,\,f_{-}\right)^{T}$. Furthermore, due to the cylindrical symmetry of the system, we can write $\phi_{1}=e^{i\left(l+1/2\right)\varphi}\,e^{i\,p_{z}\,z}\,\left(f_{+}\left(r\right)\,\,\,f_{-}\left(r\right)\right)^{T}$, where $l=0,\pm1,\pm2,\ldots$ and $-\infty\,<\,p_{z}\,<\,\infty$. From now on, let us take $p_{z}=0$; hence, we obtain the following second-order differential equation for both $f_{+}\left(r\right)$ and $f_{-}\left(r\right)$:
\begin{eqnarray}
f_{s}''+\frac{1}{r}\,f_{s}'-\frac{\nu^{2}}{\eta^{2}r^{2}}\,f_{s}-m^{2}\omega^{2}r^{2}\,f_{s}+\beta\,f_{s}=0,
\label{1.15}
\end{eqnarray}
where we have defined the parameters
\begin{eqnarray}
\beta&=&E^{2}-m^{2}\left(1-\epsilon x\right)^{2}+2s\,m\,\omega\,\frac{\nu}{\eta}+2m\omega;\nonumber\\
[-2mm]\label{1.16}\\[-2mm]
\nu&=&l+\frac{1}{2}\left(1-s\right)+\frac{s}{2}\left(1-\eta\right).\nonumber
\end{eqnarray}
By defining $\xi=m\omega\,r^{2}$, we obtain the following equation
\begin{eqnarray}
\xi\,f_{s}''+f_{s}'-\frac{\nu^{2}}{4\eta^{2}\xi}\,f_{s}-\frac{\xi}{4}\,f_{s}+\frac{\beta}{4m\omega}\,f_{s}=0.
\label{1.17}
\end{eqnarray}
The solutions to this equation are given by
\begin{eqnarray}
f_{s}\left(\xi\right)=e^{-\frac{\xi}{2}}\,\,\,\xi^{\left|\nu\right|/2\eta}\,\,\,_{1}F_{1}\left(\frac{\left|\nu\right|}{2\eta}+\frac{1}{2}-\frac{\beta}{4m\omega},\,\,\frac{\left|\nu\right|}{\eta}+1;\,\,\xi\right),
\label{1.18}
\end{eqnarray}
where $\,_{1}F_{1}\left(\frac{\left|\nu\right|}{2\eta}+\frac{1}{2}-\frac{\beta}{4m\omega},\,\,\frac{\left|\nu\right|}{\eta}+1;\,\,\xi\right)$ is the confluent hypergeometric function \cite{Abramowitz,Arfken}. Observe that the asymptotic behaviour of the confluent hypergeometric function for large values of its argument is given by \cite{Abramowitz}
\begin{eqnarray}
\,_{1}F_{1}\left(a,\,b\,;x\right)\approx\frac{\Gamma\left(b\right)}{\Gamma\left(a\right)}\,e^{x}\,x^{a-b}\left[1+\mathcal{O}\left(\left|x\right|^{-1}\right)\right],
\label{1.19}
\end{eqnarray}
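The asymptotic formula above can be illustrated numerically. The sketch below (standard-library Python; our own illustration, not part of the paper) evaluates the Kummer series of $_{1}F_{1}$ directly and compares it with the leading asymptotic term:

```python
# Numerically check the large-argument asymptotics
# 1F1(a,b;x) ~ Gamma(b)/Gamma(a) * e^x * x^(a-b),
# evaluating the Kummer power series directly.
import math

def hyp1f1(a, b, x, max_terms=2000):
    """Kummer series: sum_k (a)_k / (b)_k * x^k / k!."""
    total, term = 1.0, 1.0
    for k in range(max_terms):
        term *= (a + k) / (b + k) * x / (k + 1)
        total += term
        if abs(term) < 1e-15 * abs(total):
            break
    return total

a, b, x = 1.3, 2.7, 200.0
exact = hyp1f1(a, b, x)
leading = math.gamma(b) / math.gamma(a) * math.exp(x) * x ** (a - b)
# The relative correction is O(1/x), i.e. well below a few percent here.
assert abs(exact / leading - 1.0) < 0.05
```

In particular, the exponential growth $e^{x}$ of the series for large argument is what forces the truncation condition $a=-n$ discussed in the text.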
therefore, it diverges when $x\rightarrow\infty$. With the purpose of obtaining bound-state solutions to the Dirac equation, we need to impose that $a=-n$ ($n=0,1,2,3,\ldots$), i.e., we need that $\frac{\left|\nu\right|}{2\eta}+\frac{1}{2}-\frac{\beta}{4m\omega}=-n$. With this condition, the confluent hypergeometric function reduces to a polynomial of degree $n$, and the solution is well-behaved as $x\rightarrow\infty$. Then, from the relation $\frac{\left|\nu\right|}{2\eta}+\frac{1}{2}-\frac{\beta}{4m\omega}=-n$, together with $x=E/E_{P}$, we obtain
\begin{eqnarray}
E^{2}+\frac{2\epsilon\,m^{2}}{E_{P}\left(1-\frac{\epsilon^{2}\,m^{2}}{E_{P}^{2}}\right)}\,E-\frac{\left(m^{2}+\lambda\right)}{\left(1-\frac{\epsilon^{2}\,m^{2}}{E_{P}^{2}}\right)}=0,
\label{1.20}
\end{eqnarray}
where $\lambda=4m\omega\left[n+\frac{\left|\nu\right|}{2\,\eta}-s\frac{\nu}{2\eta}\right]$. Observe that Eq. (\ref{1.20}) is a second-degree algebraic equation in $E$. Therefore, the allowed energies of the system are
\begin{eqnarray}
\label{el1}
E_{\sigma}&=&-\frac{\epsilon\,m^{2}}{E_{P}\left(1-\frac{\epsilon^{2}\,m^{2}}{E_{P}^{2}}\right)}\nonumber\\
[-2mm]\label{1.21}\\[-2mm]
&\pm&\frac{1}{\left(1-\frac{\epsilon^{2}\,m^{2}}{E_{P}^{2}}\right)}\sqrt{\left(m^{2}+4m\omega\left[n+\frac{\left|\nu\right|}{2\,\eta}-s\frac{\nu}{2\eta}\right]\right)\cdot\left(1-\frac{\epsilon^{2}\,m^{2}}{E_{P}^{2}}\right)+\frac{\epsilon^{2}m^{4}}{E_{P}^{2}}}.\nonumber
\end{eqnarray}
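As a quick numerical consistency check (an illustrative sketch with arbitrarily chosen parameter values, not part of the derivation), one can verify that the closed-form energies above are indeed the roots of the quadratic equation (\ref{1.20}):

```python
import math

# Illustrative parameters in units of the Planck energy (E_P = 1);
# the specific numbers are arbitrary choices for this check.
E_P, m, eps = 1.0, 0.8, 0.1
lam = 2.0  # stands for 4*m*omega*[n + |nu|/(2*eta) - s*nu/(2*eta)]

A = 1.0 - eps**2 * m**2 / E_P**2

def residual(E):
    """Left-hand side of the quadratic (1.20) evaluated at E."""
    return E**2 + (2.0 * eps * m**2 / (E_P * A)) * E - (m**2 + lam) / A

# Closed-form roots, Eq. (1.21):
shift = -eps * m**2 / (E_P * A)
disc = math.sqrt((m**2 + lam) * A + eps**2 * m**4 / E_P**2) / A
for E in (shift + disc, shift - disc):
    assert abs(residual(E)) < 1e-9
```

The constant first term `shift` is the one responsible for the vertical displacement of the positive-energy curves discussed below.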
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.45\textwidth]{fig3}
\caption{\small{Plot of the energy levels \eqref{el1} of the Dirac oscillator in the cosmic string gravity's rainbow, in units of the Planck energy, in terms of the ratio of the frequency $\omega$ to the Planck energy. For this plot we consider $\frac{m}{E_P}=0.8$ and $\eta = \frac{1}{q}$. Note that the black thin curves (for $q=1$) describe the energy levels in the absence of gravity's rainbow, while the colour curves describe the energy levels with the gravity's rainbow modifications.}}
\label{f2}
\end{center}
\end{figure}
By comparing with the spectrum of energy \eqref{1.23}, obtained in Ref. \cite{Bakke:2013wla}, we can see that the modified background of the cosmic string spacetime changes the spectrum of energy of the Dirac oscillator by yielding the allowed energies (\ref{1.21}). The effects that stem from the topology of the cosmic string are given by the presence of the parameter $\eta$ in the allowed energies (\ref{1.21}). By taking the limit $\eta\rightarrow1$ in Eq. (\ref{1.21}), we have
\begin{eqnarray}
\label{el2}
E_{\sigma}&=&-\frac{\epsilon\,m^{2}}{E_{P}\left(1-\frac{\epsilon^{2}\,m^{2}}{E_{P}^{2}}\right)}\nonumber\\
[-2mm]\label{1.22}\\[-2mm]
&\pm&\frac{1}{\left(1-\frac{\epsilon^{2}\,m^{2}}{E_{P}^{2}}\right)}\sqrt{\left(m^{2}+4m\omega\left[n+\frac{\left|\bar{\gamma}\right|}{2}-s\frac{\bar{\gamma}}{2}\right]\right)\cdot\left(1-\frac{\epsilon^{2}\,m^{2}}{E_{P}^{2}}\right)+\frac{\epsilon^{2}m^{4}}{E_{P}^{2}}}.\nonumber
\end{eqnarray}
where $\bar{\gamma}=l+\frac{1}{2}\left(1-s\right)$. In this case, we have a modified background of the Minkowski spacetime. Therefore, the effects of gravity's rainbow modify the energy levels of the Dirac oscillator, in contrast to those obtained in Refs. \cite{Moshinsky, Villalba:1993fq} in the Minkowski spacetime.
In Fig.\ref{f2} we have plotted the energy levels \eqref{el1} of the Dirac oscillator in the cosmic string gravity's rainbow, in units of the Planck energy, in terms of the ratio of the frequency $\omega$ to the Planck energy. The values of the parameters considered are indicated on the figure; in particular, we take $\frac{m}{E_P}=0.8$ and $\eta = \frac{1}{q}$. Note that the black thin curves, for $q=1$, describe the energy levels in the absence of gravity's rainbow, while the colour curves describe the energy levels with the gravity's rainbow modifications. It is clear that, compared with the black thin curves (see also Fig.\ref{f1}), the positive (negative) energy levels described by the solid-blue curves (also for $q=1$) increase (decrease) in the cosmic string gravity's rainbow context. The same is true for the dashed-red curve, which is for $q=2.5$. It is interesting to note that the curves for the positive energy levels are shifted in the vertical direction when we consider the gravity's rainbow scenario given by the rainbow functions \eqref{1.3}. Thereby, the symmetry shown in Fig.\ref{f1} between the curves for the positive and negative energy levels is lost. This happens because of the first term in Eqs. \eqref{el1} and \eqref{el2}. Note again that, for $\omega = 0$, the curves do not go to zero, as we can verify from Eqs. \eqref{el1} and \eqref{el2}.
\subsection{Second scenario of gravity's rainbow}
In this section, we analyse the behaviour of the Dirac oscillator in another scenario of gravity's rainbow, defined by the rainbow functions \cite{Bezerra:2017zqq,Khodadi:2016bcx, Awad:2013nxa,Hendi:2017vgo}:
\begin{eqnarray}
g_{0}\left(x\right)=1;\,\,\,\,g_{1}\left(x\right)=\sqrt{1-\epsilon\,x^{2}}.
\label{2.1}
\end{eqnarray}
Gravity's rainbow scenario constructed with the rainbow functions \eqref{2.1} can be achieved as a limiting case from theories based on non-commutative geometries and loop quantum gravity \cite{Bezerra:2017zqq,Khodadi:2016bcx, Awad:2013nxa,Hendi:2017vgo}. These rainbow functions have also been considered in many different contexts to study the effects of gravity's rainbow on the Friedmann-Robertson-Walker Universes \cite{Khodadi:2016bcx, Awad:2013nxa, Majumder:2013ypa, Hendi:2017vgo}.
By considering the rainbow functions \eqref{2.1}, the line element of the cosmic string spacetime in the context of gravity's rainbow (\ref{1.a}) is given by
\begin{eqnarray}
ds^{2}=-\,dt^{2}+\frac{1}{\left(1-\epsilon x^{2}\right)}\left[dr^{2}+\eta^{2}\,r^{2}\,d\varphi^{2}+dz^{2}\right].
\label{2.2}
\end{eqnarray}
With the line element (\ref{2.2}), let us write the tetrads as
\begin{eqnarray}
\hat{\theta}^{0}=dt;\,\,\,\hat{\theta}^{1}=\frac{1}{\left(1-\epsilon x^{2}\right)^{1/2}}\,dr;\,\,\,\hat{\theta}^{2}=\frac{\eta\,r}{\left(1-\epsilon x^{2}\right)^{1/2}}\,\,d\varphi;\,\,\,\hat{\theta}^{3}=\frac{1}{\left(1-\epsilon x^{2}\right)^{1/2}}\,dz.
\label{2.3}
\end{eqnarray}
Then, by solving again the Maurer-Cartan structure equations in the absence of torsion \cite{nakahara2003geometry}, we also obtain $\omega_{\varphi\,2\,1}\left(x\right)=-\omega_{\varphi\,1\,2}\left(x\right)=\eta$ and the spinorial connection (\ref{1.7}). Hence, the Dirac equation becomes
\begin{eqnarray}
i\frac{\partial\psi}{\partial t}&=&m\,\gamma^{0}\psi-i\left(1-\epsilon x^{2}\right)^{1/2}\,\gamma^{0}\gamma^{1}\left(\frac{\partial}{\partial r}+\frac{1}{2r}+m\omega r\,\gamma^{0}\right)\psi-i\left(1-\epsilon x^{2}\right)^{1/2}\,\frac{\gamma^{0}\gamma^{2}}{\eta\,r}\,\frac{\partial\psi}{\partial\varphi}\nonumber\\
[-2mm]\label{2.4}\\[-2mm]
&-&i\left(1-\epsilon x^{2}\right)^{1/2}\gamma^{0}\gamma^{3}\,\frac{\partial\psi}{\partial z}.\nonumber
\end{eqnarray}
Next, by taking the solution to the Dirac equation (\ref{1.11}), we obtain the following coupled equations for $\phi_{1}$ and $\phi_{2}$
\begin{eqnarray}
\left[\frac{E-m}{\sqrt{1-\epsilon x^{2}}}\right]\phi_{1}=-i\sigma^{1}\left[\frac{\partial}{\partial r}+\frac{1}{2r}-m\omega r\right]\phi_{2}-i\frac{\sigma^{2}}{\eta\,r}\,\frac{\partial\phi_{2}}{\partial\varphi}-i\sigma^{3}\,\frac{\partial\phi_{2}}{\partial z},
\label{2.5}
\end{eqnarray}
while the second coupled equation is
\begin{eqnarray}
\left[\frac{E+m}{\sqrt{1-\epsilon x^{2}}}\right]\phi_{2}=-i\sigma^{1}\left[\frac{\partial}{\partial r}+\frac{1}{2r}+m\omega r\right]\phi_{1}-i\frac{\sigma^{2}}{\eta\,r}\,\frac{\partial\phi_{1}}{\partial\varphi}-i\sigma^{3}\,\frac{\partial\phi_{1}}{\partial z}.
\label{2.6}
\end{eqnarray}
By eliminating $\phi_{2}$ from Eqs. (\ref{2.5}) and (\ref{2.6}), we obtain the following equation for $\phi_{1}$:
\begin{eqnarray}
\left[\frac{E^{2}-m^{2}}{\left(1-\epsilon\,x^{2}\right)}\right]\phi_{1}&=&-\frac{\partial^{2}\phi_{1}}{\partial r^{2}}-\frac{1}{r}\frac{\partial\phi_{1}}{\partial r}-\frac{1}{\eta^{2}r^{2}}\frac{\partial^{2}\phi_{1}}{\partial\varphi^{2}}-\frac{\partial^{2}\phi_{1}}{\partial z^{2}}+\frac{i\sigma^{3}}{\eta\,r^{2}}\frac{\partial\phi_{1}}{\partial\varphi}+\frac{1}{4\,r^{2}}\,\phi_{1}\nonumber\\
&+&m^{2}\omega^{2}\,r^{2}\,\phi_{1}-m\omega\,\phi_{1}+\frac{2m\omega}{\eta}\,i\sigma^{3}\,\frac{\partial\phi_{1}}{\partial\varphi}-2m\omega\,r\,i\sigma^{2}\,\frac{\partial\phi_{1}}{\partial z}
\label{2.7}
\end{eqnarray}
Hence, by following the steps from Eq. (\ref{1.14}) to Eq. (\ref{1.20}), we obtain
\begin{eqnarray}
E_{\sigma}=\pm\sqrt{\frac{m^{2}+4m\omega\left[n+\frac{\left|\nu\right|}{2\,\eta}-s\frac{\nu}{2\eta}\right]}{\left(1+\frac{4m\omega\epsilon}{E_{P}^{2}}\,\left[n+\frac{\left|\nu\right|}{2\,\eta}-s\frac{\nu}{2\eta}\right]\right)}},
\label{2.8}
\end{eqnarray}
which is the energy spectrum of the Dirac oscillator in the modified cosmic string spacetime (\ref{2.2}). In contrast to the energy levels of the Dirac oscillator in the cosmic string spacetime obtained in Ref. \cite{Bakke:2013wla}, we have a different spectrum, yielded by the effects of the gravity's rainbow scenario determined by the rainbow functions (\ref{2.1}). We can also see that this spectrum differs from that obtained in Eq. (\ref{1.21}). Hence, distinct scenarios of gravity's rainbow yield different energy spectra for the Dirac oscillator. Observe in Eq. (\ref{2.8}) that the effects of the topology of the cosmic string are again encoded in the parameter $\eta$. By taking the limit $\eta\rightarrow1$ in Eq. (\ref{2.8}), we have the modified background of the Minkowski spacetime, and thus we obtain
\begin{eqnarray}
E_{\sigma}=\pm\sqrt{\frac{m^{2}+4m\omega\left[n+\frac{\left|\bar{\gamma}\right|}{2}-s\frac{\bar{\gamma}}{2}\right]}{\left(1+\frac{4m\omega\epsilon}{E_{P}^{2}}\,\left[n+\frac{\left|\bar{\gamma}\right|}{2}-s\frac{\bar{\gamma}}{2}\right]\right)}},
\label{2.9}
\end{eqnarray}
where $\bar{\gamma}=l+\frac{1}{2}\left(1-s\right)$. Therefore, we can also observe the effects of gravity's rainbow in the energy levels \eqref{2.9} of the Dirac oscillator, in contrast to those obtained in Refs. \cite{Moshinsky,Villalba:1993fq} in the Minkowski spacetime.
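As before, a short numerical sketch (our own illustration, with arbitrary parameter values) confirms that the closed form (\ref{2.8}) satisfies the quantization relation from which it is derived, and that $\epsilon\rightarrow0$ recovers the unmodified spectrum:

```python
import math

# Arbitrary illustrative parameters, in units where E_P = 1.
E_P, m = 1.0, 0.8
lam = 2.0  # stands for 4*m*omega*[n + |nu|/(2*eta) - s*nu/(2*eta)]

def energy(eps):
    """Positive branch of Eq. (2.8), written in terms of lam."""
    return math.sqrt((m**2 + lam) / (1.0 + eps * lam / E_P**2))

# E must satisfy E^2 - m^2 = lam * (1 - eps * E^2 / E_P^2),
# the relation behind Eq. (2.8) (recall x = E/E_P).
eps = 0.1
E = energy(eps)
assert abs((E**2 - m**2) - lam * (1.0 - eps * E**2 / E_P**2)) < 1e-12

# For eps -> 0 we recover the unmodified spectrum sqrt(m^2 + lam).
assert abs(energy(0.0) - math.sqrt(m**2 + lam)) < 1e-12
```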
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.45\textwidth]{fig4}
\caption{\small{Plot of the energy levels \eqref{2.8} of the Dirac oscillator in the cosmic string gravity's rainbow, in units of the Planck energy, in terms of the ratio of the frequency $\omega$ to the Planck energy. For this plot we consider $\frac{m}{E_P}=0.8$ and $\eta = \frac{1}{q}$. Note that the black thin curves (for $q=1$) describe the energy levels in the absence of gravity's rainbow, while the colour curves (dashed-red for $q=2.5$ and solid-blue for $q=1$) describe the energy levels with gravity's rainbow modifications.}}
\label{f3}
\end{center}
\end{figure}
Finally, by taking $\epsilon\rightarrow0$ in Eq. (\ref{2.8}), we recover the energy levels of the Dirac oscillator in the cosmic string spacetime \cite{Bakke:2013wla}, i.e., we recover Eq. (\ref{1.23}).
In Fig.\ref{f3} we have plotted the energy levels \eqref{2.8} of the Dirac oscillator in the cosmic string gravity's rainbow, in units of the Planck energy, in terms of the ratio of the frequency $\omega$ to the Planck energy. The values of the parameters considered are indicated on the figure; in particular, we take $\frac{m}{E_P}=0.8$ and $\eta = \frac{1}{q}$. The conclusion here is similar to the one drawn from Fig.\ref{f2}: the black thin curves describe the energy levels, taking $q=1$, in the absence of gravity's rainbow, while the colour curves describe the energy levels with gravity's rainbow modifications. Compared with the black thin curves (see also Fig.\ref{f1}), the positive (negative) energy levels described by the solid-blue curves (also for $q=1$) increase (decrease) in the cosmic string gravity's rainbow context. The same is true for the dashed-red curve, which is for $q=2.5$. Note that increasing the cosmic string parameter $q$ barely changes the energies. Note also that, in this case, the curves for the positive energy levels are not shifted, unlike in Fig.\ref{f2}, since there is no term analogous to the first one in Eq. \eqref{el1}. Therefore, the curves shown in Fig.\ref{f3} preserve the symmetry between the positive and negative energy levels. Here it is even more evident that, for $\omega = 0$, the curves do not go to zero.
\section{Conclusions}
%
We have analysed the Dirac oscillator in the cosmic string spacetime modified by two scenarios of gravity's rainbow. The first scenario is characterized by the rainbow functions \eqref{1.3}, while the second is characterized by the rainbow functions \eqref{2.1}, both of which have been considered in many different contexts. The line elements \eqref{1.4} and \eqref{2.2}, describing the cosmic string gravity's rainbow, were then used to solve the Dirac equation, which made it possible to obtain the modified energy levels \eqref{el1}-\eqref{el2} in the first scenario and \eqref{2.8}-\eqref{2.9} in the second scenario. We also pointed out that in the limit where the gravity's rainbow parameter $\epsilon$ is taken to be zero, we recover the energy levels \eqref{1.23} obtained previously in \cite{Bakke:2013wla}.
We have then plotted the energy levels \eqref{el1}-\eqref{el2}, shown in Fig.\ref{f2}, and compared them with the curves obtained from the energy levels \eqref{1.23}, shown in Fig.\ref{f1}. The same is done with the plot of the energy levels \eqref{2.8}-\eqref{2.9} in Fig.\ref{f3}. In both cases we clearly saw that the positive (negative) energy levels are increased (decreased) for given values of the parameters involved in the problem. We emphasize that in the first case, shown in Fig.\ref{f2}, the curves for the positive part of the energy levels are shifted when compared with the standard case without gravity's rainbow. Thus, the symmetry between the curves described by the negative and positive energy levels, as seen in Fig.\ref{f1}, is lost, as indicated in Fig.\ref{f2}.
\section*{Acknowledgments}
The authors would like to thank Prof. H. Hassanabadi for interesting discussions, and the Brazilian agency CNPq (Conselho Nacional de Desenvolvimento Cient\'{i}fico e Tecnol\'{o}gico - Brazil) for financial support.
\section{Introduction}
In this paper we concern ourselves with two-player impartial combinatorial games under normal play. Thus the games we consider are perfect-information, both players are allowed the same set of moves given the same configuration of the game board, and the game eventually terminates. The player whose move terminates the game wins. From now on, we simply refer to these as games. For an overview of such games see \cite{winningways}.
\par Games can be modelled by a directed graph $(V, E)$ which we call the game tree. $V$ denotes the set of game states, whereas an edge $(v_1, v_2)$ denotes the existence of a move from state $v_1$ to state $v_2$. The leaves of the tree are then the terminal positions. It follows by an easy induction on the game tree that from every position, either $P1$ or $P2$ has a winning strategy. Given a game $G=(V,E)$, the Sprague-Grundy (SG) function $\mathcal{N}:V\rightarrow\mathbb{N}$ generalizes this partition: from $v\in V$, the player who is about to play has a winning strategy if and only if $\mathcal{N}(v)\neq 0$. We usually call the Sprague-Grundy value of a game-state $v$ its \textit{nimber}.
\par A lot of our results build on the following recursive definition of the Sprague-Grundy function:
\begin{definition}
Let $G=(V, E)$ be a game. If $v\in V$ is terminal, $\mathcal{N}(v)=0$. Otherwise, $\mathcal{N}(v)=mex\,\{\,\mathcal{N}(v')\,|\,(v,v')\in E\}$, where $mex$ denotes the minimum excluded value of a subset of $\mathbb{N}$.
\end{definition}
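This recursive definition translates directly into code. The following sketch (our own Python illustration, not part of the paper) computes Sprague-Grundy values for a single Nim pile, where a move removes any positive number of stones, and confirms that a pile of $k$ stones has nimber $k$:

```python
from functools import lru_cache

def mex(values):
    """Minimum excluded value of a finite subset of the naturals."""
    s, m = set(values), 0
    while m in s:
        m += 1
    return m

@lru_cache(maxsize=None)
def grundy_nim(k):
    # A pile of 0 stones is terminal; otherwise every j < k is reachable.
    return mex([grundy_nim(j) for j in range(k)])

assert all(grundy_nim(k) == k for k in range(20))
```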
\vspace{5mm}
In \textit{On Numbers and Games} \cite{numbersandgames} Conway suggests three potential rules for moving in compound games where games $G$ and $H$ are played simultaneously:
\begin{itemize}
\item The disjunctive compound, denoted $G\oplus{H}$. Here players make a legal move in either $G$ or $H$ on their turn.
\item The selective compound, denoted $G\boxplus{H}$. Here on a player's turn they select either $G$, $H$, or both and make legal moves in the ones selected.
\item The conjunctive compound, where players always make legal moves in both component games.
\end{itemize}
\par Given enough information about each of the component games the Sprague-Grundy theorem makes it easy to determine the $SG$-function $\mathcal{N}{}$ for the disjunctive sum of two games: $\mathcal{N}{(G\oplus{H})}=\mathcal{N}{(G)}\oplus \mathcal{N}{(H)}$, where the second $\oplus$ denotes the bitwise xor operation on $\mathcal{N}{(G)}$ and $\mathcal{N}{(H)}$. As an example, by $*k$ we denote the game of a Nim pile with $k$ stones, where a valid move is to remove an arbitrary positive number of stones from the pile. Then clearly, by Definition 1, $\mathcal{N}(*k)=k$. One-pile Nim is not a very interesting game; however, $(*k)\oplus(*l)\oplus(*m)$ can be easily navigated by computing nimbers, even though an intuitive winning strategy is not always apparent.
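The xor rule for disjunctive sums is easy to confirm by brute force. The sketch below (an illustrative Python script of our own) computes the nimber of $(*k)\oplus(*l)$ directly from Definition 1 and compares it with the bitwise xor:

```python
from functools import lru_cache

def mex(values):
    s, m = set(values), 0
    while m in s:
        m += 1
    return m

# Disjunctive sum of two Nim piles: a move decreases exactly one pile.
@lru_cache(maxsize=None)
def grundy_sum(k, l):
    succ = [grundy_sum(j, l) for j in range(k)] + \
           [grundy_sum(k, j) for j in range(l)]
    return mex(succ)

# Sprague-Grundy theorem: the nimber of the sum is the bitwise xor.
assert all(grundy_sum(k, l) == k ^ l for k in range(8) for l in range(8))
```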
\par The $SG$-function of selective compound games, however, is not characterized by the nimbers of its component games: for example $\mathcal{N}{(*1\boxplus{*0})}=1\neq \mathcal{N}{(*1\boxplus{(*1\oplus *1)})}=3$ even though the nimbers of the component games agree. In fact, even for games as simple as these, determining the $SG$-function can be rather complicated. In 2015 Boros et al. \cite{exconim} gave a partial analysis of $\mathcal{N}(*a\boxplus(*b\oplus *c))$ and noted that this function behaves rather chaotically. We continue this analysis by proving some of the conjectures presented in \cite{exconim}, as well as extending results to the game $\mathcal{N}(*x_1\boxplus(*x_2\oplus\cdots\oplus *x_n))$. We call this game \textit{Auxiliary Nim} and, more generally, for a given game $G$ we call the game $*k\boxplus G$ \textit{Auxiliary $G$}.
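The two selective-compound values quoted above can be reproduced by direct search. The following sketch (our own illustration; the state encoding is ours) implements the selective compound of an auxiliary pile $*a$ with a disjunctive sum of Nim piles, where a move acts on $*a$, on exactly one of the other piles, or on both:

```python
from functools import lru_cache

def mex(values):
    s, m = set(values), 0
    while m in s:
        m += 1
    return m

@lru_cache(maxsize=None)
def grundy_aux(a, piles):
    """Nimber of *a selectively compounded with the disjunctive sum of piles."""
    right = [piles[:i] + (j,) + piles[i + 1:]
             for i in range(len(piles)) for j in range(piles[i])]
    succ = set()
    succ.update(grundy_aux(a2, piles) for a2 in range(a))   # move in *a only
    succ.update(grundy_aux(a, p) for p in right)            # move on the right only
    succ.update(grundy_aux(a2, p)                           # move in both components
                for a2 in range(a) for p in right)
    return mex(succ)

assert grundy_aux(1, (0,)) == 1       # N(*1 boxplus *0) = 1
assert grundy_aux(1, (1, 1)) == 3     # N(*1 boxplus (*1 + *1)) = 3
```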
\par A lower bound and an upper bound can easily be derived for the nimber of an Auxiliary Nim game. We show the following bounds in Corollary \ref{thm:easyBounds}:
$$x_1+(x_2\oplus x_3 \oplus \cdots \oplus x_n)\leq\mathcal{N}(x_1,x_2,x_3, \cdots, x_n)\leq x_1+x_2+x_3+\cdots+x_n$$
\par Two of our main results characterize when these extreme points are realized.
\vspace{2mm}
\textbf{Question 1: }Under which circumstances is $\mathcal{N}(*x_1\boxplus(*x_2\oplus\cdots\oplus *x_n))=x_1$, the lowest value achievable by Corollary \ref{thm:easyBounds}?\vspace{1mm}
\par Theorem \ref{thm:firstTheorem} completely answers this question:
\begin{theorem}\label{thm:firstTheorem}
$\mathcal{N}(*x_1\boxplus (*x_2\oplus \cdots \oplus *x_n))=x_1 \Leftrightarrow$ $x_2\oplus \cdots \oplus x_n=0$ and $2^{\lfloor \log_2 x_1\rfloor + 1}$ divides all of $x_2,x_3, \cdots, x_n$.
\end{theorem}
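Theorem \ref{thm:firstTheorem} can also be verified by brute force on small instances. The sketch below (our own illustrative Python; the state encoding and names are ours) computes the Sprague-Grundy value of Auxiliary Nim directly from the move rules of the selective compound and checks the stated equivalence for $n=3$ and small pile sizes:

```python
from functools import lru_cache
from math import floor, log2

def mex(values):
    s, m = set(values), 0
    while m in s:
        m += 1
    return m

@lru_cache(maxsize=None)
def grundy(a, piles):
    right = [piles[:i] + (j,) + piles[i + 1:]
             for i in range(len(piles)) for j in range(piles[i])]
    succ = set()
    succ.update(grundy(a2, piles) for a2 in range(a))
    succ.update(grundy(a, p) for p in right)
    succ.update(grundy(a2, p) for a2 in range(a) for p in right)
    return mex(succ)

# Check: N(*a boxplus (*b (+) *c)) == a  iff  b xor c == 0 and
# 2^(floor(log2 a) + 1) divides both b and c.
for a in range(1, 4):
    M = 2 ** (floor(log2(a)) + 1)
    for b in range(9):
        for c in range(9):
            lhs = grundy(a, (b, c)) == a
            rhs = (b ^ c == 0) and b % M == 0 and c % M == 0
            assert lhs == rhs
```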
\textbf{Question 2: } Under which circumstances is the upper bound from Corollary \ref{thm:easyBounds} realized? \vspace{1mm}
\par The answer turns out to be that the upper bound is realized when $x_1$ is sufficiently large compared to the other $x_i$s. We first define $A(x_2,\ldots,x_n)$ to be the least value of $x_1$ such that $\forall a\geq x_1$, $\mathcal{N}(*a\boxplus(*x_2\oplus\cdots\oplus *x_n))=a+x_2+\cdots+x_n$.
\begin{theorem}\label{thm:bigA}
Let $(x_1,x_2, \cdots, x_n)$ be an Auxiliary Nim game with $n$ piles. Then, $A(x_2, \cdots, x_n)$ is well-defined. Furthermore, $A(x_2, \cdots, x_n)$ grows quadratically with respect to the sum $x_2+ \cdots + x_n$.
\end{theorem}
Further, in the special case of $n=3$, we prove a linear upper bound. In Lemma \ref{lemma:a0UpperBound}, we show that $$A(b,c) \leq \min(\sim b,\sim c)+1$$ where $\sim x$ denotes the bitwise complement. We also provide some sufficient conditions for this upper bound to be realized. The analysis of the $n=3$ case brings us to the next question.
\vspace{2mm}
\textbf{Question 3: } Can we come up with a closed-form, non-recursive way to describe the behaviour of $\mathcal{N}(*a\boxplus(*b\oplus *c))$, the Auxiliary Nim game with only $3$ piles?
\vspace{1mm}
\par Question 3 is still open. We have shown a linear upper bound on $A(b,c)$, and partially resolved the cases where $b$ and $c$ are sufficiently close to a power of $2$. In particular, we show the following:
\begin{theorem}\label{thm:same-char}
Suppose $b=2^i+k$ and $c=2^i+l$ with $k<l<2^i$. Then
\[ \mathcal{N}{(a,b,c)}= \begin{cases}
a+b+c & a \geq 2^i-l\\
2a+c+k+l & 2^i-k-l\leq a<2^i-l\,;\, l\leq2^{i-1}\\
\geq\mathcal{N}{(a,k,l)} & l>2^{i-1}\,;\,\mathcal{N}{(a,k,l)}\geq 2^i\\
\mathcal{N}{(a,k,l)} & \mathcal{N}{(a,k,l)}<2^i
\end{cases}
\]
\end{theorem}
This recursive structure causes the $SG$ function to become rather complicated, even in simple circumstances. For a qualitative view of this complexity, see Figure \ref{fig:fig}.
\vspace{1 mm}
We also get closer to a complete characterization of $\mathcal{N}(*1\boxplus (*b\oplus*c))$:
\begin{theorem}\label{thm:oddb}
For $b$ odd, if $c\geq 2^{2\lfloor \log_2 b \rfloor+1}-2^{\lfloor \log_2 b \rfloor+2}-1$ then $\mathcal{N}{(*1\boxplus (*b\oplus*c))}=1+b+c$.
\end{theorem}
Therefore, there are at least some cases where the $SG$-function of this game is well-behaved. But outside the domain of the assumptions of the previous theorems, even in the analysis of the simplest possible Auxiliary Game, the function $\mathcal{N}{(*1\boxplus (*b\oplus*c))}$ seems to result in combinatorial chaos.
\begin{figure}
\centering
\includegraphics{1ij_grid_bigger.png}
\caption{A heat-map for the Sprague-Grundy values (nimbers) for the game $(*1)\boxplus (*x\oplus *y)$. The behavior of the blocks of size $2^n$ along the diagonal is characterized by Theorem \ref{thm:same-char}. The structure of the fixed blocks ``decays'' as they are translated to the right/down. This is partially explained by Theorem \ref{thm:oddb}.}
\label{fig:fig}
\end{figure}
\begin{figure}
\centering
\includegraphics{8ij_grid_bigger_new.png}
\caption{A heat-map for the Sprague-Grundy values (nimbers) for the game $(*8)\boxplus (*x\oplus *y)$. Notice that the auxiliary pile size is larger compared to the game in Figure \ref{fig:fig}, and the heat-map looks more ``orderly''. This is partially explained by Theorem \ref{thm:bigA}, in particular, by the fact that $A(b,c)\leq \min(\sim b,\sim c)+1$ (Lemma \ref{lemma:a0UpperBound}).
Nimbers achieve the lower bound (in this case, $8$) only along the diagonal when $b=c$ is a multiple of $16$, as shown by Theorem \ref{thm:firstTheorem}.}
\label{fig:fig2}
\end{figure}
\section{Results}
From now on, we will refer to the game $(*a)\boxplus(*b\oplus *c)$ simply as $(a,b,c)$, and similarly $(*x_1)\boxplus(*x_2\oplus *x_3 \oplus \cdots \oplus *x_n)$ as $(x_1,x_2,x_3, \cdots, x_n)$. Also, $\mathcal{N}(a,b,c)$ denotes the Sprague-Grundy value of the game $(a,b,c)$. Finally, we use $(a,b,c)\rightarrow N$ to state that the game $(a,b,c)$ can reach a game with nimber $N$ through some legal move. Similarly, $(a,b,c)\nrightarrow N$ means that the game $(a,b,c)$ cannot reach a game with nimber $N$. Observe that $(a,b,c)\rightarrow N$ implies $\mathcal{N}(a,b,c)\neq N$.
We begin with some preliminary results:
\begin{lemma} \label{thm:NimberIncreasesWithA}
$\mathcal{N}(a,b,c)>\mathcal{N}(a-1,b,c)$ for all $a\in\mathds{N}$ with $a\geq 1$.
\end{lemma}
\begin{proof}
We see that if $(a-1,b,c)\rightarrow N$, then $(a,b,c)\rightarrow N$ as well: the extra decrease of the first pile from $a$ to $a-1$ can be absorbed into the same move. Hence every nimber reachable from $(a-1,b,c)$ (in particular, every value below $\mathcal{N}(a-1,b,c)$) is reachable from $(a,b,c)$. Moreover, $(a,b,c)\rightarrow \mathcal{N}(a-1,b,c)$ directly, thus $\mathcal{N}(a,b,c)>\mathcal{N}(a-1,b,c)$ as desired.
\end{proof}
\begin{corollary}
$\mathcal{N}(x_1,x_2,x_3, \cdots, x_n)>\mathcal{N}(x_1-1,x_2,x_3, \cdots, x_n)$ for all $x_1\in\mathds{N}$ with $x_1\geq 1$.
\end{corollary}
\begin{lemma} \label{thm:CrudeBounds}
$a+(b\oplus c)\leq\mathcal{N}(a,b,c)\leq a+b+c$
\end{lemma}
\begin{proof}
The upper bound is trivial, since $a+b+c$ is the depth of the game $(a,b,c)$. We prove the lower bound by induction on $a$. Let $b,c$ be arbitrary and fixed. For the base case, clearly $\mathcal{N}(0,b,c)=b\oplus c \geq 0 + (b\oplus c)$. Assuming that the bound holds for smaller values of $a$, we get $\mathcal{N}(a-1,b,c)\geq a-1 + (b\oplus c)$ by the inductive hypothesis. By Lemma \ref{thm:NimberIncreasesWithA}, we then have $\mathcal{N}(a,b,c)\geq a + (b\oplus c)$, as desired.
\end{proof}
\begin{corollary}\label{thm:easyBounds}
$x_1+(x_2\oplus x_3 \oplus \cdots \oplus x_n)\leq\mathcal{N}(x_1,x_2,x_3, \cdots, x_n)\leq x_1+x_2+x_3+\cdots+x_n$
\end{corollary}
\begin{proof}
We see that the lower bound in Lemma \ref{thm:CrudeBounds} immediately generalizes to the case of an arbitrary number of piles, as moves on the right-hand side, as well as in the auxiliary pile, can be replicated in a similar fashion. The upper bound also generalizes, as the depth of the game is still a trivial upper bound on the nimber of the game.
\end{proof}
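These bounds are also easy to check exhaustively on small instances. The sketch below (our own illustrative Python, encoding the selective-compound move rules directly) verifies Corollary \ref{thm:easyBounds} for four piles and small pile sizes:

```python
from functools import lru_cache

def mex(values):
    s, m = set(values), 0
    while m in s:
        m += 1
    return m

@lru_cache(maxsize=None)
def grundy(a, piles):
    right = [piles[:i] + (j,) + piles[i + 1:]
             for i in range(len(piles)) for j in range(piles[i])]
    succ = set()
    succ.update(grundy(a2, piles) for a2 in range(a))
    succ.update(grundy(a, p) for p in right)
    succ.update(grundy(a2, p) for a2 in range(a) for p in right)
    return mex(succ)

# x1 + (x2 xor x3 xor x4) <= N <= x1 + x2 + x3 + x4 on a small grid.
for a in range(3):
    for b in range(4):
        for c in range(4):
            for d in range(4):
                n = grundy(a, (b, c, d))
                assert a + (b ^ c ^ d) <= n <= a + b + c + d
```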
Now, we begin by providing a necessary and a sufficient condition for $\mathcal{N}(a,b,c)$ to simply evaluate to $a$, and then we generalize this to a complete proof of Theorem $\ref{thm:firstTheorem}$.
\begin{lemma}\label{thm:specialCase}
$\mathcal{N}(a,b,c)=a$ if and only if there exists $k\in\mathds{N}$ such that $b=c=k\cdot 2^{\lfloor \log_2 a\rfloor + 1}$.
\end{lemma}
The lemma claims that $\mathcal{N}(a,b,c)=a$ if and only if $b=c$ is a multiple of a power of $2$ strictly greater than $a$. Note that this is just a special case of Theorem \ref{thm:firstTheorem}. The proof of the special case is easier to formalize and generalizes painlessly, so we present it first.
\begin{proof}
We begin with the $(\Leftarrow)$ direction. If $k=0$, the statement is trivial. Therefore, let $b=c$ be a nonzero multiple of a power of $2$ strictly greater than $a$. Thus, in binary, $b$ ends in at least as many $0$ bits as $a$ has bits. It suffices to show $(a,b,b)\nrightarrow a$ to conclude $\mathcal{N}(a,b,b)=a$, since we have $\mathcal{N}(a,b,b)\geq a+(b\oplus b) = a$ by \Cref{thm:CrudeBounds}.
\begin{equation}
\frac{
\begin{array}[b]{r}
\left( 1 \cdots x \cdots y \right)\\
\left( 1\cdots 1\cdots 0\cdots0 \cdots0 \right) \\
\boxplus\oplus \left( 1\cdots1\cdots0\cdots0\cdots0 \right)
\end{array}
}{
\left( \mathcal{N}(a,b,c)\right)
}
\end{equation}
From the diagram above, we observe that any move that decreases $b$ to $b'$ ensures that $b\oplus b' > a$: a decrease in $b$ flips a $1$ bit to the left of the leftmost bit of $a$, so in the xor with the untouched copy of $b$, a $1$ bit survives to the left of $a$. So by the lower bound in \Cref{thm:CrudeBounds}, no such move can reach a nimber equal to $a$, since $\mathcal{N}(a,b',b)>a$. \\[0.04in]
It remains to show that $(a,b,b)\nrightarrow a$ via moves that only decrease the first pile. For this case, we induct on $a$. Since we assume we can decrease $a$, $a$ has to be non-zero. When $a=1$, decreasing $a$ is equivalent to removing the first pile, thereby resulting in the position $(0,b,b)$ with nimber $b\oplus b=0\neq a$. In the inductive step, we assume that we decrease the size of the first pile by $d$, yielding the game $(a-d,b,b)$. By assumption, we have $b=k\cdot 2^{\lfloor \log_2 a\rfloor + 1}$, and a decrease in $a$ cannot change the fact that $b$ is still a multiple of a power of two strictly greater than the first pile. Hence, $b=l\cdot 2^{\lfloor \log_2 (a-d)\rfloor + 1}$ for some $l\in\mathds{N}$, and the inductive hypothesis applies to show $\mathcal{N}(a-d, b, b)=a-d\neq a$. This concludes the induction, and the $(\Leftarrow)$ direction of the lemma. \\[0.06in]
We show the $(\Rightarrow)$ direction by contrapositive: when $b$ and $c$ are not both equal to such a multiple, we want to show $\mathcal{N}(a,b,c)\neq a$. Suppose first that $b\neq c$. Then $b\oplus c \neq 0$, and by the lower bound from \Cref{thm:CrudeBounds}, we see that $\mathcal{N}(a,b,c)>a$, so we are done. \\[0.04in]
Now, suppose $b=c$, but $b$ is not a multiple of a power of two strictly greater than $a$. We will show $\mathcal{N}(a,b,b)\neq a$ by induction on $a$. \\ [0.04in]
In the base case, $a=1$. Then,
\begin{align*}
2^{\lfloor \log_2 a \rfloor +1}
&= 2^{\lfloor \log_2 1 \rfloor +1} \\
&= 2^{1}
\end{align*}
Therefore, we deduce by assumption that $b$ is not a multiple of $2$, i.e.\ $b$ is odd. We observe that $b\oplus (b-1)=1$, as $b-1$ is simply $b$ with its rightmost bit flipped, since $b$ is odd. Thus the joint move that empties the auxiliary pile and decreases one copy of $b$ to $b-1$ yields the position $(0,b-1,b)$ with nimber $1$, so $(1, b, b)\rightarrow 1$ and hence $\mathcal{N}(1,b,b)\neq 1$, establishing the base case. \\[0.04in]
In the inductive step, we consider $(a,b,b)$. We assume $b$ is not a multiple of a power of 2 strictly greater than $a$.\\[0.03in]
\textbf{Case 1. } $b$ is also not a multiple of a power of $2$ strictly greater than $a-1$. In this case, the inductive hypothesis applies to the game $(a-1, b, b)$ to show $(a-1, b, b)\rightarrow a-1$, and therefore $\mathcal{N}(a-1,b,b)\neq a-1$. Combining this with the bounds in \Cref{thm:NimberIncreasesWithA} and \Cref{thm:CrudeBounds}, it is evident that:
\begin{align*}
\mathcal{N}(a,b,b)&>\mathcal{N}(a-1,b,b)\\
&\geq (a-1) + 1 \\
&= a
\end{align*}
and thus we are done. \\[0.03in]
\textbf{Case 2. } $b$ is a multiple of a power of 2 strictly greater than $a-1$, but not a multiple of a power of two strictly greater than $a$. We conclude that in this case, $a=2^k$ for some $k$, as that is the only way the power of $2$ strictly greater than $a-1$ would not also be strictly greater than $a$. \\[0.03in]
We also see that $b$ is a multiple of $a$ in this case, and thus, in the base-$2$ representation, $b$ has to have a $1$ bit at the $k^{th}$ index and thus contain a ``copy'' of $a$; otherwise, $b$ would be a multiple of $2^{k+1}$, contradicting our assumption. (This is equivalent to stating that $b$ is an odd multiple of $a$.) Thus we have $(a,b,b)\rightarrow b \oplus (b-a)=a$, as desired. We show this bit argument in the diagram below.
\begin{equation}
\frac{
\begin{array}[b]{r}
\left( 1 \cdots 0 \cdots 0 \right)\\
\left( 1\cdots 1\cdots 1\cdots0 \cdots0 \right) \\
\boxplus\oplus \left( 1\cdots1\cdots1\cdots0\cdots0 \right)
\end{array}
}{
\left( \mathcal{N}(a,b,b)\right)
}
\end{equation}
The diagram above is converted to the diagram below by the move that eliminates the first pile and subtracts $a$ from the second pile. Note that in the case when $a=b$, this procedure simply amounts to removing piles 1 and 2.
\begin{equation}
\frac{
\begin{array}[b]{r}
\left( 1\cdots 1\cdots 0\cdots0 \cdots0 \right) \\
\oplus \left( 1\cdots1\cdots1\cdots0\cdots0 \right)
\end{array}
}{
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\left(1 \cdots 0 \cdots 0 \right)
}
\end{equation}
\end{proof}
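For small parameters, \Cref{thm:specialCase} can also be verified exhaustively. The following sketch is an illustrative brute-force check of ours (under the same assumed move structure as before), using the fact that, for $a\geq 1$, $2^{\lfloor\log_2 a\rfloor+1}$ is $2$ raised to the bit length of $a$.

```python
from functools import lru_cache

def mex(values):
    m = 0
    while m in values:
        m += 1
    return m

@lru_cache(maxsize=None)
def sg(a, b, c):
    """SG value of the 3-pile auxiliary-Nim position (a, b, c)."""
    opts = set()
    for a2 in range(a):
        opts.add(sg(a2, b, c))
    for b2 in range(b):
        opts.add(sg(a, b2, c))
        for a2 in range(a):
            opts.add(sg(a2, b2, c))
    for c2 in range(c):
        opts.add(sg(a, b, c2))
        for a2 in range(a):
            opts.add(sg(a2, b, c2))
    return mex(opts)

# Lemma: for a >= 1, N(a,b,c) = a  iff  b = c and b is a multiple of
# 2^(floor(log2 a) + 1) = 2**a.bit_length().
for a in range(1, 3):
    period = 1 << a.bit_length()
    for b in range(6):
        for c in range(6):
            predicted = (b == c and b % period == 0)
            assert (sg(a, b, c) == a) == predicted
```

For instance, $\mathcal{N}(1,2,2)=1$ since $2$ is a multiple of $2^1$, while $\mathcal{N}(1,1,1)=3$ and $\mathcal{N}(2,2,2)=6$ hit the upper bound instead.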
We are now ready to prove Theorem \ref{thm:firstTheorem} in full generality. For convenience, we restate it below:\vspace{2mm}
\par \textbf{Theorem 1. } $\mathcal{N}(x_1,x_2,x_3, \cdots, x_n)=x_1 \Leftrightarrow$ $ (x_2\oplus x_3\oplus \cdots \oplus x_n)=0$ and $ 2^{\lfloor \log_2 x_1\rfloor + 1}$ divides all of $(x_2,x_3, \cdots, x_n)$.\vspace{2mm}
\par The theorem strengthens \Cref{thm:easyBounds} to characterize all the Auxiliary Nim games whose nimbers equal the size of the first pile. Note that, unlike in the statement of \Cref{thm:specialCase}, we do not and cannot mandate that all the values $(x_2,x_3, \cdots, x_n)$ be equal. We merely require that all the values xor to $0$ (in the $3$-pile game, this is equivalent to saying $b=c$).
\begin{proof}
For the $(\Leftarrow)$ direction, we have that all of $(x_2,x_3, \cdots, x_n)$ xor to $0$ and each end in at least as many $0$ bits as $x_1$ has bits. Thus, for any move that is not solely a decrease in the $(*x_1)$ pile, a decrease in a pile $(*x_i)$ to $(*x'_i)$ ensures that $(x_2\oplus \cdots \oplus x'_i \oplus \cdots \oplus x_n)>x_1$, as a $1$ bit appears to the left of the bits of $x_1$, the xor having been $0$ before the move by assumption. For moves that decrease only the $(*x_1)$ pile, we can induct on the value of $(*x_1)$ to show that any such decrease yields a nimber of $x_1-d$, where $d$ is the decrease; this step is identical to the corresponding step in the proof of \Cref{thm:specialCase}. \\[0.07in]
We now show the $(\Rightarrow)$ direction, again by contrapositive. By the lower bound in \Cref{thm:easyBounds}, it follows immediately that $(x_2\oplus x_3 \oplus \cdots \oplus x_n)=0$, as otherwise $\mathcal{N}(x_1,x_2,x_3, \cdots, x_n)>x_1$. So we assume that there exists $x_i$ among $(x_2,x_3,\cdots, x_n)$ such that $2^{\lfloor \log_2 x_1\rfloor + 1}$ does not divide $x_i$. We again induct on the value of $x_1$. In the base case when $x_1=1$, we conclude $x_i$ is odd, thus the move that sets $x'_i=x_i-1$ will yield nimber $1$, just as in the previous proof, showing $\mathcal{N}(x_1,x_2, \cdots, x_n)\neq x_1$. We again separate our inductive step into two cases. If our inductive hypothesis applies to the same game with $x'_1=x_1-1$, we are done. Otherwise, $x_1=2^m$ for some $m\in\mathds{N}$, and we have an $x_i$ such that $x_i$ is a multiple of $2^m$ but not of $2^{m+1}$, and thus $x_i$ contains a ``copy'' of the bits of $x_1$, i.e.\ $x_i$ has a $1$ bit at the $m^{th}$ index. We set $x'_i=x_i-x_1$ to yield a game with nimber $x_1$, thus showing that the nimber of the original game could not have been $x_1$, concluding the proof.
\end{proof}
We note that although the SG values of $(*a)\boxplus(*b\oplus*c)$ are complicated when $a$ is small, the SG value simply equals $a+b+c$, i.e.\ the upper bound, once $a$ is large enough. In the following section of the paper, we formalize this notion and give some characterizations of the cases when $\mathcal{N}(a,b,c)=a+b+c$.
\begin{definition}
For any $b,c \in \mathbb{N}$, we define $A(b,c)$ to be the minimum $a \in \mathbb{N}$ such that $\mathcal{N}(a,b,c) = a + b + c$.
\end{definition}
Note that it is not immediately clear from the definition that $A$ is even well defined. Soon, however, we will prove this by establishing an upper bound on $A(b,c)$.
\begin{lemma} \label{thm:NimberIsAlwaysSum}
If $A(b,c)$ is defined, then for every $a > A(b,c)$, $\mathcal{N}(a,b,c) = a + b + c$.
\end{lemma}
\begin{proof}
This follows from the strict monotonicity of \Cref{thm:NimberIncreasesWithA} together with the upper bound of \Cref{thm:CrudeBounds}: since $\mathcal{N}(A(b,c),b,c)=A(b,c)+b+c$, induction on $a$ gives $\mathcal{N}(a,b,c)\geq a+b+c$ for every $a>A(b,c)$, and the upper bound forces equality.
\end{proof}
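Because $A(b,c)$ is finite (as \Cref{lemma:a0UpperBound} will show), it can be computed by a naive upward search. The sketch below is our own brute-force helper, under the same assumed move structure as before.

```python
from functools import lru_cache

def mex(values):
    m = 0
    while m in values:
        m += 1
    return m

@lru_cache(maxsize=None)
def sg(a, b, c):
    """SG value of the 3-pile auxiliary-Nim position (a, b, c)."""
    opts = set()
    for a2 in range(a):
        opts.add(sg(a2, b, c))
    for b2 in range(b):
        opts.add(sg(a, b2, c))
        for a2 in range(a):
            opts.add(sg(a2, b2, c))
    for c2 in range(c):
        opts.add(sg(a, b, c2))
        for a2 in range(a):
            opts.add(sg(a2, b, c2))
    return mex(opts)

def A(b, c):
    """Least a with N(a, b, c) = a + b + c; termination is guaranteed by
    the upper bound on A(b, c) established later in the paper."""
    a = 0
    while sg(a, b, c) != a + b + c:
        a += 1
    return a

print(A(1, 1), A(2, 2), A(2, 3))  # 1 2 1
```

These small values already exercise the later results: $A(2,2)=2$ and $A(2,3)=1$ match the characterization of \Cref{lemma:a0bothPowerOfTwo}.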
\begin{definition}
Let $n_i$ be the value of the $i^{th}$ digit of $n$ in its binary representation, indexing from zero and the right.
We call $n$ a \emph{gap} in $a \oplus b$ if $n=a \oplus b$, or if at the leftmost index $i$ in the binary representation of $n$ where $n$ differs from $a \oplus b$ we have $n_i=1$ and $a_i=b_i=0$.
\end{definition}
Note that if $n \geq 2^{\lfloor \log_2(\max(a,b)) \rfloor + 1}$ then $n$ is always a gap, since such an $n$ has a $1$ bit to the left of every bit of $a$ and $b$.
\begin{lemma} \label{lemma:nongapsareattainable}
If $n$ is not a gap in $b \oplus c$, then $(0,b,c) \rightarrow n$.
\end{lemma}
\begin{proof}
Consider the leftmost bit $i$ at which $n$ differs from $b \oplus c$. Note that to get from $(0,b,c)$ to a position with nimber $n$ we will never have to alter bits to the left of index $i$. There are two cases.
\textbf{Case 1: }$n_i=1$. Then $b_i=c_i=1$, since $n$ is not a gap and $b_i\oplus c_i=0$. Let $b'=c \oplus n$. Clearly $b' \oplus c=n$. Further, $b'_j=b_j$ for all $j>i$, as $b_j \oplus c_j=n_j$ by the choice of $i$, and $b_i=1 > b'_i=0$, so $b>b'$. Thus the move from $(0,b,c)$ to $(0,b',c)$ is valid, so $(0,b,c) \rightarrow n$ as desired.
\textbf{Case 2: } $n_i=0$. Then $b_i\neq c_i$; WLOG let $b_i=1$ and $c_i=0$. Letting $b'=c \oplus n$ as before, the same logic shows that the move from $(0,b,c)$ to $(0,b',c)$ is valid and $\mathcal{N}(0,b',c)=n$ as desired.
\end{proof}
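The gap condition translates directly into code. The sketch below is an illustrative helper of ours implementing the definition verbatim, together with a check of \Cref{lemma:nongapsareattainable} on a small example by exhibiting a Nim move realizing each non-gap.

```python
def is_gap(n, b, c):
    """n is a gap in b ^ c if n == b ^ c, or if at the leftmost bit index
    where n differs from b ^ c, n has a 1 while b and c both have 0."""
    x = b ^ c
    if n == x:
        return True
    i = (n ^ x).bit_length() - 1        # leftmost differing index
    return ((n >> i) & 1, (b >> i) & 1, (c >> i) & 1) == (1, 0, 0)

def gaps_below(limit, b, c):
    return [n for n in range(limit) if is_gap(n, b, c)]

print(gaps_below(8, 2, 2))  # [0, 1, 4, 5, 6, 7]

# Every non-gap n is reachable from (0, b, c) by a single Nim move.
b, c = 2, 2
for n in range(8):
    if not is_gap(n, b, c):
        assert any(bp ^ c == n for bp in range(b)) or \
               any(b ^ cp == n for cp in range(c))
```

For $b=c=2$ the non-gaps below $8$ are exactly $2$ and $3$, realized by the moves $0\oplus 2=2$ and $1\oplus 2=3$; every $n\geq 4$ is a gap, in line with the remark above.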
We say $n$ is the $j^{th}$ gap in $a \oplus b$ if $n$ is a gap and there are precisely $j-1$ gaps $n'$ such that $n'<n$. Note that $a\oplus b$ will always be the first gap in $a\oplus b$.
\begin{lemma} \label{lemma:gapsarelowerbounds}
Let $n$ be the $j^{th}$ gap in $b \oplus c$. Then $\mathcal{N}(j-1,b,c) \geq n$.
\end{lemma}
\begin{proof}
We induct on $j$. If $j=1$, then there are no gaps in $b \oplus c$ less than $n$, so by \Cref{lemma:nongapsareattainable}, $\mathcal{N}(0,b,c) \geq n$. Now suppose $j>1$ and let $n'$ be the $(j-1)^{st}$ gap. Then by the inductive hypothesis, $\mathcal{N}(j-2,b,c) \geq n'$, so $(j-1,b,c) \rightarrow i$ for all $i \leq n'$, by reducing $j-1$ to $j-2$ and replicating the rest of the move. But as there are no gaps between $n'$ and $n$, \Cref{lemma:nongapsareattainable} gives $(0,b,c) \rightarrow i$, and hence $(j-1,b,c)\rightarrow i$, for all $i \in[n'+1,n-1]$. Thus $\mathcal{N}(j-1,b,c) \geq n$ as desired.
\end{proof}
\begin{lemma} \label{lemma:a0UpperBound}
For any $b,c \in \mathbb{N}$, $A(b,c)$ is defined, and $A(b,c) \leq \min(\sim b,\sim c)+1$, where $\sim x$ denotes the bitwise complement of $x$ taken over its $\lfloor \log_2 x\rfloor+1$ binary digits.
\end{lemma}
This lemma establishes a linear upper bound on $A(b,c)$ for any $b$ and $c$, thereby proving that $A(b,c)$ is well defined for arbitrary values. Further, it proves Conjecture 2 and a special case of Conjecture 3 posed in \cite{exconim}.
\begin{proof}
Let $a=\min(\sim b,\sim c)+1$. It suffices to show that $(a,b,c) \rightarrow n$ for all $n<a+b+c$: then, by the upper bound from \Cref{thm:CrudeBounds}, $\mathcal{N}(a,b,c)=a+b+c$, so $A(b,c) \leq \min(\sim b,\sim c)+1$ as desired.
Proceed by induction on $b+c$.
\\[0.04in]
For the base case, if $b+c=0$, then $b=c=0$ and the claim is trivially true. Now suppose $b+c>0$. We split into cases according to whether $\sim b$ or $\sim c$ is greater, and assume WLOG that $\sim b \leq \sim c$. So $a = \sim b + 1$.
\par We will first show that $(a,b,c) \rightarrow n$ for all $n \in[a+b, a+b+c-1]$. Let $i \in[c]$. Observe that $\min(\sim b, \sim c) \geq \min(\sim b, \sim (c-i))$, since $\sim b\leq \sim c$. So by the induction hypothesis $a \geq A(b,c-i)$, and thus $\mathcal{N}(a,b,c-i)=a+b+c-i$.
We now cover the rest of the range, so we want to show $(a,b,c) \rightarrow n$ for all $n \in[0, a+b-1]$. From \Cref{lemma:gapsarelowerbounds}, if there are $n$ gaps in $b \oplus c$ less than or equal to $b + \sim b=a+b-1$, then $\mathcal{N}(n,b,c) \geq n'>b+\sim b$, where $n'$ is the $(n+1)^{st}$ gap. So it suffices to show that there are at most $\sim b+1=a$ gaps less than $a+b$. But by the definition of gaps, the number of gaps less than $a+b$ is maximized if, whenever $b_i=0$, it is also the case that $c_i=0$ for $i \leq \log_2(b)$. If this is the case, there are precisely $2^i$ gaps for each $i \leq \log_2(b)$ such that $b_i=0$, and one gap to account for $b \oplus c$. Summing over all of these gaps, there are $a=\sim b + 1$ in total, and the proof is complete.
\end{proof}
There are indeed non-trivial instances where the upper bound provided by \Cref{lemma:a0UpperBound} is not attained, as we will show shortly. However, it is natural to suspect from the proof that the actual number of gaps less than $a+b$ is a suitable candidate for a better upper bound (in the proof, we assumed the number of gaps to be as large as it possibly can be). We now prove an extension of \Cref{lemma:a0UpperBound} for when this actually is the case.
\begin{lemma}\label{lemma:NumberOfGapsIsAnUpperBound}
Let $b=2^i+k$ and $c=2^j+l$, where $k<2^i$, $l<2^j$, and $j>i$. Also assume that whenever $b$ has a $1$ bit at the $n^{th}$ index of its binary representation, so does $c$. Then $A(b,c)$ is bounded above by the number of gaps in $b\oplus c$ less than $c\,|\,(b+\sim b)$, where $|$ is the bitwise \emph{or} operator.
\end{lemma}
\begin{proof}
The proof is by induction on $b$.\\
For the base case, note $b=1$, and the claim holds for any valid choice of $c$ by \Cref{lemma:a0UpperBound}.
Now let $b$ be given, $c$ satisfying the conditions of the claim, and $a$ the number of gaps in $b\oplus c$ less than $c\,|\,(b+\sim b)$. We will show $\mathcal{N}(a,b,c)=a+b+c$. \par We begin by noting that when we decrease $b$ to $b'$, the number of gaps less than $c\,|\,(b+\sim b)$ cannot increase. This is because, by the assumption, a decrease in $b$ cannot create a new index $n$ at which $b$ and $c$ both have $0$ bits where none existed originally. Therefore, $(a,b,c) \rightarrow n$ for all $n\in[a+c,a+b+c-1]$, by reducing $b$ and applying the induction hypothesis. For the rest of the range, note that $a+c=c\,|\,(b+\sim b)$. Values less than $a+c$ are either attainable by bit arguments via \Cref{lemma:nongapsareattainable}, or they are one of the $a$ gaps in $b\oplus c$, in which case \Cref{lemma:gapsarelowerbounds} applies. Either way, $(a,b,c) \rightarrow n$ for all $n\in[0,a+c)$. Thus $\mathcal{N}(a,b,c)=a+b+c$, and $A(b,c)\leq a$, as desired.
\end{proof}
\par Unfortunately, the upper bound shown in \Cref{lemma:a0UpperBound} does not generalize in the obvious sense to the game with an arbitrary number of piles. However, we can show that $A(x_2, \cdots, x_n)$ is well-defined and bounded above quadratically. This was the statement of Theorem \ref{thm:bigA}, which we reproduce below for convenience.\vspace{2mm}
\par \textbf{Theorem 2. } Let $(x_1,x_2, \cdots, x_n)$ be an Auxiliary-Nim game with $n$ piles. Then $A(x_2, \cdots, x_n)$ is well-defined. Furthermore, $A(x_2, \cdots, x_n)$ is bounded above by a quadratic function of the sum $x_2+ \cdots + x_n$. \vspace{2mm}
\begin{proof}
The proof is by induction on $x_2+\cdots+x_n$. When the sum is $0$, $A(x_2, \cdots, x_n)$ is trivially $0$ as well. Otherwise, let the sum be any positive integer. We know by induction that if we make a decrease in any of the piles $x_2$ through $x_n$, the resulting collection of piles has a well-defined $A$ value. We set: $$a^*=A(x_2 -1, x_3, \cdots, x_n) + x_2 + \cdots + x_n$$ Then $\mathcal{N}(a^*, x_2, \cdots, x_n)>a^*$ by the lower bound from \Cref{thm:easyBounds}. For the remaining nimbers, we can simply consider the moves that subtract $1$ from the first pile ($*x_2$), possibly jointly with a subtraction from the auxiliary pile; the nimber of the resulting game will hit the upper bound as long as we do not subtract more than $x_2 + \cdots + x_n$ from $a^*$. Luckily, we only need to remove up to this much to hit all the nimbers in the range $[a^*, a^* + x_2 + \cdots + x_n]$. This concludes the proof.
\end{proof}
We are now in a position to begin proving explicit characterizations of $A(b,c)$ in several cases. We will make use of the following lemma which lower bounds the size of $A(b,c)$. Afterwards, we will show that in some non-trivial instances, the lower bound matches the upper bound derived from \Cref{lemma:a0UpperBound}.
\begin{lemma} \label{lemma:recursiveLowerBound}
$A(b,c)\geq \min(A(b-1,c), A(b,c-1))$
\end{lemma}
\begin{proof}
Assume for the sake of contradiction that $a=A(b,c)<\min(A(b-1,c), A(b,c-1))$, and consider $(a,b,c)$. Since $\mathcal{N}(a,b,c)=a+b+c$, every smaller nimber is reachable; in particular, $(a,b,c)\rightarrow a+b+c-1$. But as $\mathcal{N}(a,b,c)\leq a+b+c$ by the upper bound from \Cref{thm:CrudeBounds}, we can only reach this value by reducing one of $a,b,c$ by exactly $1$ (a joint move removes at least two tokens, leaving a game of depth at most $a+b+c-2$). But none of $\mathcal{N}(a-1,b,c)$, $\mathcal{N}(a,b-1,c)$, or $\mathcal{N}(a,b,c-1)$ can equal $a+b+c-1$, by the definition of $A(b,c)$ and the assumption. This is a contradiction.
\end{proof}
Note that the proof for \Cref{lemma:recursiveLowerBound} generalizes similarly to give a lower bound for $A(x_1,x_2,...,x_n)$.
\begin{corollary}
$A(x_1, \cdots, x_n)\geq \min(A(x_1-1, \cdots, x_n), \cdots ,A(x_1, \cdots, x_n-1))$
\end{corollary}
\Cref{lemma:recursiveLowerBound} also allows us to characterize $A(b,c)$ when $b$ and $c$ are sufficiently close, as stated explicitly in \Cref{lemma:a0bothPowerOfTwo}.
\begin{lemma}\label{lemma:a0bothPowerOfTwo}
$A(2^i+x,2^i+y)=2^i-\max(x,y)$ for $0\leq x,y < 2^i$.
\end{lemma}
\begin{proof}
$2^i - \max(x,y)=\min(\sim(2^i+x),\sim(2^i+y))+1$ is precisely the upper bound given by \Cref{lemma:a0UpperBound}, so it suffices to show that the lower bound derived from \Cref{lemma:recursiveLowerBound} matches it as well. This is done by induction on $x+y$.
For the base cases, let $y=0$. Then by \Cref{thm:conjectureone}, $A(2^i + x, 2^i)=2^i-x$, as $x<2^i$.
Now suppose the claim holds for $x+y=n$ and consider the case where $x'+y'=n+1$. WLOG, we can consider the case where $x'=x+1$ and $y'=y$. We can also assume $x,y>0$ since the other cases are covered already, meaning we can safely assume $y-1\geq0$ and apply the inductive hypothesis. By \Cref{lemma:recursiveLowerBound}, we have that:
\begin{align*}
A(2^i+x+1, 2^i+y)&\geq \min(A(2^i+x, 2^i+y), A(2^i+x+1, 2^i+y-1)) \\
&=\min(2^i - \max(x,y), 2^i-\max(x+1, y-1)) &&\text{By IH} \\
&=2^i - \max(x+1,y)
\end{align*}
Thus the lower bound matches the upper bound by induction.
\end{proof}
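The characterization can be confirmed by direct computation in the smallest case $i=1$. The sketch below reuses our illustrative brute-force helpers (same assumed move structure as before) to check $A(2^i+x,2^i+y)=2^i-\max(x,y)$ for $i=1$ and $0\leq x,y<2$.

```python
from functools import lru_cache

def mex(values):
    m = 0
    while m in values:
        m += 1
    return m

@lru_cache(maxsize=None)
def sg(a, b, c):
    """SG value of the 3-pile auxiliary-Nim position (a, b, c)."""
    opts = set()
    for a2 in range(a):
        opts.add(sg(a2, b, c))
    for b2 in range(b):
        opts.add(sg(a, b2, c))
        for a2 in range(a):
            opts.add(sg(a2, b2, c))
    for c2 in range(c):
        opts.add(sg(a, b, c2))
        for a2 in range(a):
            opts.add(sg(a2, b, c2))
    return mex(opts)

def A(b, c):
    """Least a with N(a, b, c) = a + b + c."""
    a = 0
    while sg(a, b, c) != a + b + c:
        a += 1
    return a

# A(2^i + x, 2^i + y) = 2^i - max(x, y), checked for i = 1:
for x in range(2):
    for y in range(2):
        assert A(2 + x, 2 + y) == 2 - max(x, y)
```

In particular $A(2,2)=2$, $A(2,3)=A(3,2)=1$, and $A(3,3)=1$, matching the formula.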
\par With this, we can give a characterization of $\mathcal{N}(a,b,c)$ when $\lfloor\log_2(b)\rfloor=\lfloor\log_2(c)\rfloor$. In order to do this, however, we will need a result from Boros et al.\ \cite{exconim}, which we restate below for convenience:
\begin{lemma}\label{lemma:addingPowerOfTwo}
Suppose that $a,b,c,i \in \mathbb{N}$. If $\mathcal{N}(a,b,c) < 2^i$, then $\mathcal{N}(a,b+2^i,c+2^i) = \mathcal{N}(a,b,c)$. On the other hand, if $\mathcal{N}(a,b,c) \geq 2^i$, then $\mathcal{N}(a,b+2^i,c+2^i) \geq \mathcal{N}(a,b,c)$.
\end{lemma}
\begin{proof}
See the proof of Lemma 7 in \cite{exconim}.
\end{proof}
We are now ready to prove Theorem \ref{thm:same-char}, which we restate below.\vspace{2mm}\\
\textbf{Theorem 3. }
Suppose $b=2^i+k$ and $c=2^i+l$ with $k<l<2^i$. Then
\[ \mathcal{N}(a,b,c)= \begin{cases}
a+b+c & a \geq 2^i-l\\
2a+c+k+l & 2^i-k-l\leq a<2^i-l;\ l\leq2^{i-1}\\
\geq\mathcal{N}(a,k,l) & l>2^{i-1};\ \mathcal{N}(a,k,l)\geq 2^i\\
\mathcal{N}(a,k,l) & \mathcal{N}(a,k,l)<2^i
\end{cases}
\]
\begin{proof}
The first and last two cases are covered by \Cref{lemma:a0bothPowerOfTwo} and \Cref{lemma:addingPowerOfTwo} respectively, so it suffices to show $\mathcal{N}(a,b,c)=2a+b+c-(2^i-l)$ whenever $2^i-k-l\leq a<2^i-l$ and $l\leq 2^{i-1}$. This can be done via induction on $k+l$: for the base case when $k=0$, see \Cref{thm:conjectureone}. Otherwise, suppose $k+l>0$ and that the result holds for all previous cases. In general, we can cover all values in the range $[2^i-1,2^{i+1}-1]$ by bit arguments alone. For $a=2^i-k-l$, values in the range $[2^{i+1},3\cdot2^{i}-k-l-1]$ can be reached by moving to the positions $(a',2^i-l-1,2^i+l)$ for $0<a'\leq a$, as $(2^i-l-1) \oplus c=(2^i-l-1)+c$. Finally, values in the range $[2^{i+1}+l,3\cdot2^{i}-k-1]$ can be reached by moving to the positions $(a',2^i-1,2^i+l)$ for $1\leq a'\leq a$.
To complete this case, we need only show that there is no valid move to a position with nimber $3\cdot2^i-k$. But this is clear: as $k<l\leq 2^{i-1}$ and $a=2^i-k-l$, by \Cref{lemma:addingPowerOfTwo} we cannot reach this nimber by a reduction in $a$ only, and we cannot achieve this value by a reduction in $b$ or $c$ by induction. Therefore, the claim holds when $a=2^i-k-l$. To see that the claim holds in the other cases as well, note that from the induction hypothesis we have that incrementing $a$ while reducing $b$ by $1$ fills in the nimber. Similarly, while $a<2^i-l$, induction also gives us the necessary upper bound.
\end{proof}
While \Cref{thm:same-char} explicitly characterizes nimbers for larger values of $a$, if $\lfloor\log_2(k)\rfloor\neq \lfloor\log_2(l)\rfloor$, then for smaller values of $a$ the theorem provides little information. Therefore, we move on to analyzing $\mathcal{N}(a,b,c)$ in the cases where $\lfloor\log_2(b)\rfloor\neq \lfloor\log_2(c)\rfloor$.
We begin with an instance where we can explicitly determine the values of the Sprague-Grundy function:
\begin{theorem}\label{thm:conjectureone}
Suppose $b=2^i$, $c=(2k+1)2^i+r$ with $r<2^i$, and $a<2^i-r$. Then $\mathcal{N}(a,b,c)$ is the $(a+1)^{st}$ gap in $b\oplus c$.
\end{theorem}
\begin{proof}
This is done via a nested induction on $a,r,$ and $k$.
For the base, suppose $a=r=k=0$. Then $\mathcal{N}{(a,b,c)}=0$, the $1^{st}$ gap.
Now suppose $0<a<A(b,c)$, $r=k=0$, and assume the claim holds for all smaller values of $a$. Then $b=c=2^i$, and we can reach all values less than the $(a+1)^{st}$ gap by either bit arguments or \Cref{lemma:gapsarelowerbounds}. Therefore, it suffices to show that there is no move to a position with nimber $a$. But this is clear: this value cannot be obtained by a reduction in $a$ only (by induction), and any reduction in $b$ to $b'$ (or equivalently in $c$) results in a position with nimber at least $a+(b'\oplus c)\geq2^i>a$.
Next, suppose $0<r<2^i$, $0<a<2^i-r$, $k=0$, and that the claim holds for all previous values of $a$ and $r$. Similarly to the above, it suffices to show there is no move to a position with the nimber of the $(a+1)^{st}$ gap in $b\oplus c$ (in this case, the value is just $a+(b\oplus c)=a+r<2^i$). As above, reducing only $a$, reducing $c$ by more than $r$, or reducing $b$ at all cannot possibly result in this value (by induction in the first case and the lower bound in the latter two). Similarly, reducing $c$ by less than $r$, leaving some remainder $r'$, results in a position with nimber at most $a+r'<a+r$, so there is no valid move to the $(a+1)^{st}$ gap.
Finally, suppose that $a,k,r>0$. The only additional case to check in this instance is moves that reduce $c$ by more than $2^i$. However, as any move of this form can only reduce the value of the $(a+1)^{st}$ gap, we are done by induction.\end{proof}
Unfortunately, when neither $b$ nor $c$ is a power of two, the function's behavior is in general far worse. While we cannot explicitly characterize the SG function in any more general cases, we can show that when $c$ is sufficiently larger than $b$, order starts to reappear, even for small values of $a$. We prove this for odd $b$ in the next theorem, but first a lemma:
\begin{lemma} \label{lemma:trailingones}
Let $n>0$ and suppose $b=(2^{i}-1)+n2^i$ and $n2^i\oplus c=n2^i+c$ with $i>0$. Then $A(b,c)\leq 1$.
\end{lemma}
\begin{proof}
Suppose we have $b$ and $c$ of the desired form and express $c$ uniquely as $c=m2^i+k$, where $m\geq 0$ and $k<2^i$. The proof is an induction on $i$ and $k$.
If $i=1$ then $k\in\{0,1\}$. As $A(b,c)=0$ in the first case for all values of $i$ (taking care of the base cases for each value of $i$) suppose $k=1$. Then $\mathcal{N}{(0,b,c)}=b+c-2$, $\mathcal{N}{(0,b-1,c)}=b+c-1$, and $\mathcal{N}{(1,b-1,c)}=b+c$ and we are done.
Now suppose the claim holds for all previous values of $i$ and $k$. Similarly to the above, we have that $\mathcal{N}(0,b,c)=b+c-2k$, so it suffices to show that $(1,b,c)\rightarrow x$ for all $x\in [b+c-2k,b+c]$. If $x\in [b+c-2k,b+c-k]$, then $(0,b,c)\rightarrow x$ by reducing $c$ by some appropriate value less than $k$, and hence $(1,b,c)\rightarrow x$ as well. If $x\in [b+c-k,b+c]$, then $(1,b,c)\rightarrow x$ by the I.H., as $\mathcal{N}(1,b,c-k+r)=1+b+(c-k+r)$ for $0\leq r<k$.
\end{proof}
We are now ready to prove Theorem \ref{thm:oddb}.\vspace{2mm}\\
\textbf{Theorem 4. }For $b$ odd, if $c\geq 2^{2\lfloor \log_2 b \rfloor+1}-2^{\lfloor \log_2 b \rfloor+2}-1$ then $\mathcal{N}{(1,b,c)}=1+b+c$.
\begin{proof}
We begin by showing this in the case where all of the gaps less than $b$ in $b\oplus c$ are consecutive, and then show that the results carry over.
Let $b=2^i+2^j-1$ where $i>j>1$. From \Cref{lemma:trailingones}, we already have that if $c$ does not have a $1$ in its $i^{th}$ bit, then $\mathcal{N}(1,b,c)=1+b+c$. Now consider the sequence of $c$'s where $c$ does have a $1$ in its $i^{th}$ bit. The first such run of $c$'s is $c\in [2^i,2^{i+1}-1]$, and \Cref{thm:same-char} already characterizes these: $\mathcal{N}(1,b,2^i+k)=k+2^j$ for $k\in [0,2^i-2^j]$, and $b+2^i<\mathcal{N}(1,b,2^i+k)$ for $k\in [2^i-2^j+1,2^i-1]$. We use this as the base of an induction showing that for $n=2m+1$ with $m\geq 0$ and $c\in [n2^i,(n+1)2^{i}-1]$, there are at least $m+1$ values of $c$ for which $b+n2^i<\mathcal{N}(1,b,n2^i+k)$. In fact, we claim something slightly stronger: after the $n=1$ case, $\mathcal{N}(1,b,n2^i+k)$ ``counts up'' along values starting from $(n-2)2^i+1+b$, skipping over at least the $m$ values of $\mathcal{N}(1,b,c)$ found in the last stage of the induction.
To make this clearer, for each $n>1$ as defined before, as all values in $[(n-3)2^i+b,(n-2)2^i+b]$ can be covered via a reduction in $c$ (from \cref{lemma:trailingones}) it is the case for all appropriate values of $k$ that $b+(n-2)2^i<\mathcal{N}{(1,b,n2^i+k)}$. Now, if there were no reductions in $b$ that could result in a position with nimber $N$ such that $b+(n-2)2^i<N<n2^i$ then as $k$ increases $\mathcal{N}{(1,b,n2^i+k)}$ would count up by $1$ for each increase in $k$ but skipping over the $x\geq m$ values found in the last iteration of the induction. This would happen until the nimber counts up to $n2^i-1$, after which point there are no more gaps in $b\oplus c$ less than $(n+1)2^i$. Further, as all values in the range $[(n-1)2^i+b,n2^i+b]$ can be covered by \Cref{lemma:trailingones}, once the nimbers have counted up to $n2^i-1$ the remaining values will all be greater than $n2^i+b$. Therefore, as at most $(x-1)$ values were skipped in the last iteration of the induction, leading to $x$ values in the sequence such that $b+n2^i<\mathcal{N}{(1,b,n2^i+k)}$, skipping over $x$ values in the count produces at least $x+1$ of the desired values in this iteration.
Note that after at most $2^{i+1}\cdot(2^i-2)$ iterations (in which case $c\geq 2^{2\lfloor \log_2 b \rfloor+1}-2^{\lfloor \log_2 b \rfloor+2}-1$), it is the case that for all greater values of $c$ we have $b+n2^i<\mathcal{N}(1,b,c)$. We claim that at this point $\mathcal{N}(1,b,c)=1+b+c$. We already had that values in the range $[(n+1)2^i,(n+2)2^i-1]$ for some $n$ achieved the upper bound by \Cref{lemma:trailingones}. For $c\in[n2^i,(n+1)2^i-1]$ for large enough $n$, consider the first value: $c=n2^i$. In this case, the condition that $b+n2^i=b+c<\mathcal{N}(1,b,n2^i)$ already tells us that $\mathcal{N}(1,b,c)=1+b+c$. This in turn inductively tells us that all values of $c$ in this range reach the maximum.
Now, we must deal with the possibility of reductions in $b$ that lead to positions such that $b+(n-2)2^i<\mathcal{N}{(a',b,c)}<n2^i$. To show that such moves cannot lead to issues, consider the first position $c'$ in this iteration of the induction where the nimber differs from the count described in the previous paragraph. As all values in the range $[(n-1)2^i+b,n2^i+b]$ can still be covered by a reduction in $c$, there are two cases: either $b+(n-2)2^i<\mathcal{N}{(1,b,c')}<n2^i$ or $n2^i+b<\mathcal{N}{(1,b,c')}$. In the first case the count is potentially set back by at most $1$ temporarily, but skips the value of $\mathcal{N}{(1,b,c')}$ later in the count for no net change. Similarly, in the latter case although the count can potentially be set back by $1$ for its entire duration, $\mathcal{N}{(1,b,c')}$ becomes one of the $m$ values needed for the induction to work. As this is the case whenever a position differs from what is predicted by the count no problems arise.
Finally, suppose that not all gaps of $b \oplus c$ are consecutive. Then $b=2^j-1+(2n+1)2^{i}$ for some $i>j+1>0$, and note that applying the procedure from before to $2^i+2^j-1$ shows that for large enough $c$, $\mathcal{N}(1,2^i+2^j-1,c)=1+2^i+2^j-1+c$. As none of the arguments necessary to prove this are affected by the addition of leading $1$'s in $c$, this procedure can be applied inductively to each sub-component of $b$ (based on the number of leading ones in $b$) to show the result in general.
\end{proof}
For $b$ even, while a similar analysis can provide periodicity results in the $a=1$ case, doing so is far more dependent on the initial conditions of the induction. This is due to the following lemma, which ensures that $\mathcal{N}(1,b,c)\neq 1+b+c$ when $c$ is also even and $b \oplus c \neq b+c$, and thus complicates the recursive structure of $(1,b,c)$.
\begin{lemma} \label{lemma:evens}
If $b$ and $c$ are both even, then $A(b,c)\neq1$
\end{lemma}
\begin{proof}
Suppose $b=2^i+2m$ and $c=2^{i+r}+2n$. The proof is again via nested induction:
From Theorems 2 and 6, if any of $m$, $n$, or $r$ is $0$, then either $A(b,c)=0$ or $A(b,c)\geq 2$, as desired. This covers the base case for each part of the induction.
Now suppose $b=2^i+2m$ and $c=2^{i+r}+2n$ where $m,n,r>0$ and the claim holds for all previous values $m,n,r$. If $b\oplus c=b+c$ then we are done. If not, there are two cases: either the bit representation of $b$ and $c$ intersect only in their rightmost filled bit or not.
If we are in the first case, let $x$ be the index of the rightmost filled bit of $b$ and $c$. Then $b\oplus c=b+c-2^{x+1}$, and $(1,b,c)\nrightarrow b+c-2^x-1$. This is because the trivial upper and lower bounds give that this value can only possibly be achieved by a reduction in $b$ or $c$ by either $2^x-1$ or $2^x-2$. However, in the first case \Cref{lemma:trailingones} gives us that the resulting nimber will be too large, and in the latter case the IH gives that the resulting nimber will be too small. Therefore, in this case $\mathcal{N}(1,b,c)\leq b+c-2^x-1$.
Now suppose we are in the second case. Consider how $(1,b,c)$ could reach the nimbers $b+c$ and $b+c-1$. To reach $b+c$, it must be the case that either $A(b-1,c)=1$ or $A(b,c-1)=1$, so WLOG assume $A(b-1,c)=1$. Then, as $b$ and $c$ overlap somewhere other than their rightmost filled bit, it is the case that $A(b-2,c)\neq0$ and $A(b,c-2)\neq0$. Therefore, by the IH, $(1,b,c)$ cannot reach $b+c-1$ by a reduction of $b$ or $c$ by two. Therefore, unless $\mathcal{N}(1,b,c-1)=b+c-1$, the claim holds. However, under these circumstances, in order for $A(b-1,c)=1$ it must be the case that $A(b-1,c-1)=1$. But then it is impossible for $\mathcal{N}(1,b,c-1)=b+c-1$, and the proof is complete.
\end{proof}
Therefore, while we can prove periodicity results for $b$ even and $a=1$ in several cases, there are enough exceptions to the general rule that we cannot do so in general. However, for $a=2$, a similar analysis to Theorem \ref{thm:oddb} should show that $\mathcal{N}(2,b,c)=2+b+c$ for all large $c$.
\section{Discussion}
\subsection{Further Directions with Auxiliary Nim}
To recap, at this point we have characterized the Sprague-Grundy function of $(a,b,c)$ whenever: (1) $a$ is sufficiently large; (2) $\lfloor \log_2(b)\rfloor=\lfloor \log_2(c)\rfloor$; or (3) $c\gg b$. In some cases we have also extended these results to general auxiliary-nim games.
One potential line of further work is doing a more detailed analysis of the remaining cases: can we give a closed form expression for $\mathcal{N}{(a,b,c)}$?
\begin{question} Determine a non-recursive description of the behaviour of $\mathcal{N}(a,b,c)$.
\end{question}
Figure \ref{fig:fig} suggests that a closed-form solution, at least a simple one, is unlikely to emerge.\\[0.01in]
\par We have also not fully analyzed how the results regarding the $c\gg b$ cases might generalize to the general Auxiliary Nim.
\begin{question}Characterize $\mathcal{N}(x_1,x_2,\cdots,x_n)$ when $x_n$ is ``sufficiently large''.\end{question}
Perhaps more interestingly, however, more general ``auxiliary'' games could be analyzed. What can we say about the game $(*k)\boxplus{A}$, where $A$ is an arbitrary impartial combinatorial game?
\begin{question}Characterize the games $A$ for which $\exists k_0\in \mathbb{N}$ such that $\forall k>k_0$, $\mathcal{N}{((*k)\boxplus A)}=k+\mathrm{depth}(A)$.
\end{question}
We already know that Nim has this property. Do more exotic games?\\[0.05in]
\par Using the notation presented in \cite{tetrishypergraphs}, we note that $n$-heap auxiliary nim is the game $NIM_\mathcal{H}$ where $\mathcal{H}=\{\{1\},...,\{n\}, \{1,2\},\{1,3\},...,\{1,n\}\}$. Here, the game $NIM_\mathcal{H}$ is played on $|V(\mathcal{H})|$ heaps, where a valid move is selecting a hyperedge in $\mathcal{H}$ and making reductions in all non-empty heaps within that edge. Are there more general hypergraphs $\mathcal{H}$ for which $NIM_{\mathcal{H}}$ behaves similarly to Auxiliary Nim?
\begin{question} Do results presented here extend to more general hypergraph games?\end{question}
\subsection{Periodicity}
We do know that not all games $A$ satisfy the property mentioned in Question~3. For example, consider games of the following form:
\begin{definition}
A general subtraction game is a sequence of games $G_n$ such that the set of positions that $G_n$ can move to is $\{G_m \mid m \in g(n)\}$, where $g: \mathbb{N} \rightarrow 2^{\mathbb{N}}$ is such that $\forall \: n \in \mathbb{N}$, $g(n) \subseteq [n-1]$. We call $g$ the function associated with $G_n$.
\end{definition}
\begin{definition}
A finite fixed set subtraction game is a subtraction game $G_n$ such that there exists a set $S \subseteq [N]$ for some $N\in \mathbb{N}$ such that the function $g$ associated with $G_n$ satisfies $g(n) = \{n-x \mid x \leq n \wedge x \in S\}$. We call $S$ the set of $G_n$.
\end{definition}
It is not hard to prove that the Sprague-Grundy values of $*k\boxplus{G_n}$ are periodic with respect to $n$ if $G_n$ is a finite fixed set subtraction game, although the upper bound on the length of the period is exponential. Note that periodicity immediately tells us that the property mentioned in Question~3 cannot hold.
\begin{theorem}
If $G_n$ is a finite fixed set subtraction game, then the Sprague-Grundy function of $G_n \boxplus *k$ is periodic for any $k \in \mathbb{N}$.
\end{theorem}
\begin{proof}
Let $G_n$ be a finite fixed set subtraction game with set $S$, and let $m = \max(S)+1$. Since any position in $G_n \boxplus *k$ has at most $m$ choices for which move to make in the left game (note that $m$ accounts for the possibility of not moving in the left game), and at most $k+1$ choices for which move to make in the right game, the total number of moves possible from $G_n \boxplus *k$ is at most $(k+1)m$, and thus $\mathcal{N}{(G_n \boxplus *k)} \leq (k+1)m$ (so the nimbers are bounded).
Note also that the nimber of $G_n \boxplus *k$ is completely determined by the nimbers of $G_{n-x} \boxplus *(k-y)$, where $0 < x \leq m$ and $0 \leq y \leq k$. We need not consider $x = 0$, because the nimbers for the positions of this form with $x = 0$ are completely determined by the rest. That is, $\mathcal{N}{(G_n \boxplus *0)}$ is completely determined by $\{\mathcal{N}{(G_{n-x} \boxplus *0)}\}$, and thus $\mathcal{N}{(G_n \boxplus *1)}$ is completely determined by $\{\mathcal{N}{(G_{n-x} \boxplus *0)}\} \cup \{\mathcal{N}{(G_{n-x} \boxplus *1)}\}$, and so on.
Thus, if for some $a,b \in \mathbb{N}$ we have, for every $0 < x \leq m$ and $0 \leq y \leq k$, that $\mathcal{N}{(G_{a-x} \boxplus *(k-y))} = \mathcal{N}{(G_{b-x} \boxplus *(k-y))}$, then we must also have, for every $0 \leq y \leq k$, that $\mathcal{N}{(G_a \boxplus *(k-y))} = \mathcal{N}{(G_b \boxplus *(k-y))}$. Thus, if such $a$ and $b$ exist with $a \neq b$, we have, by induction, that $G_n \boxplus *k$ is periodic with period at most $|b-a|$.
To see that such $a$ and $b$ must exist, we simply note that since the nimbers are bounded by $(k+1)m$, and the number of choices for $x$ and $y$ is only $(k+1)m$, there are at most $((k+1)m+1)^{(k+1)m}$ possibilities for the nimbers of the positions of the form $G_{n-x} \boxplus *(k-y)$, so by the pigeonhole principle there must exist $0 \leq a < b < m + ((k+1)m+1)^{(k+1)m}$ such that for every $0 < x \leq m$ and $0 \leq y \leq k$, $\mathcal{N}{(G_{a-x} \boxplus *(k-y))} = \mathcal{N}{(G_{b-x} \boxplus *(k-y))}$, and thus, by the above observations, $G_n \boxplus *k$ is periodic.
\end{proof}
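The argument above is easy to check computationally. The Python sketch below is illustrative: the subtraction set $S=\{1,2\}$ is our own choice, and $\boxplus$ is read as in the proof, i.e. a move changes the subtraction-game component, the $*k$ component, or both at once.

```python
from functools import lru_cache

S = (1, 2)  # illustrative finite subtraction set

def mex(values):
    """Minimum excludant: smallest non-negative integer not in `values`."""
    g = 0
    while g in values:
        g += 1
    return g

@lru_cache(maxsize=None)
def nimber(n, k):
    """Sprague-Grundy value of G_n [+] *k, where a move changes the
    subtraction-game component, the Nim component, or both at once."""
    left = [n - x for x in S if x <= n]   # moves in G_n alone
    right = list(range(k))                # moves in *k alone
    options = {nimber(n2, k) for n2 in left}
    options |= {nimber(n, k2) for k2 in right}
    options |= {nimber(n2, k2) for n2 in left for k2 in right}
    return mex(options)

# The sequence n -> nimber(n, 1) becomes periodic, as the theorem predicts.
sequence = [nimber(n, 1) for n in range(30)]
```

For this particular $S$, the plain subtraction game has period $3$, and the sequence $n \mapsto \mathcal{N}(G_n \boxplus *1)$ also settles into a period-$3$ pattern from $n=3$ onward, far below the exponential bound used in the proof.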
It is not hard to construct artificial sequences of games $A_n$ such that $A_n$ is periodic but $*1\boxplus A_n$ is not. However, when the sequence is constructed with certain structural regularities, as in the case of finite subtraction games, periodicity appears to be preserved. Therefore, we have another interesting question at hand.
\begin{question} For which sequences of games $A_n$ is $\mathcal{N}{(*k\boxplus{A_n})}$ periodic with respect to $n$ for any $k\in \mathbb{N}$?
\end{question}
\par For instance, consider the game $GRAPH_G$ played on a simple graph $G$: on each turn, the players select a vertex and remove a positive integer number of edges incident on that vertex. Terminal positions are edgeless graphs. When this game is played on a path graph, it is isomorphic to a game of Kayles \cite{siegel}. $KAYLES_n$ (or $GRAPH_{P_n}$, where $P_n$ is a path of edge-length $n$) is known to be periodic with a period of $12$. The proof of this fact is data-driven: there exists a threshold value $N$ such that once $KAYLES_n$ is computationally verified to be periodic up to $N$, we can deduce that it will remain periodic forever. This threshold argument works for a large class of games.
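For reference, the nim-sequence of $KAYLES_n$ can be reproduced from the standard heap formulation of Kayles (remove one or two tokens from a heap, optionally splitting the remainder into two non-empty heaps); the Python sketch below is a minimal memoized implementation, with the range of the period-12 check an arbitrary choice of ours.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def kayles(n):
    """Grundy value of a Kayles row of n tokens: remove one or two
    adjacent tokens, optionally splitting the row into two parts."""
    options = set()
    for removed in (1, 2):
        rest = n - removed
        if rest < 0:
            continue
        # a and rest - a are the sizes of the two resulting rows
        for a in range(rest // 2 + 1):
            options.add(kayles(a) ^ kayles(rest - a))
    mex = 0
    while mex in options:
        mex += 1
    return mex

grundy = [kayles(n) for n in range(1, 9)]  # 1, 2, 3, 1, 4, 3, 2, 1
```

The data-driven proof mentioned above then amounts to verifying equalities of the form $\mathcal{N}(KAYLES_n)=\mathcal{N}(KAYLES_{n+12})$ up to the threshold of the theorem below.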
\begin{definition}
An octal game is a game played with tokens divided into heaps, where valid moves are one of the following:
\begin{itemize}
\item Remove some (possibly all) of the tokens in one heap
\item Remove some (not all) of the tokens in a heap, and divide the rest into two non-empty heaps.
\end{itemize}
\end{definition}
Observe that normal single-heap Nim is an octal game, but not a periodic one. The following theorem formalizes the threshold argument for most octal games. Call $G_n$ (where the starting configuration is a single heap of $n$ tokens) a bounded octal game if the number of tokens that can be removed from any single heap is bounded.
\begin{theorem}\label{periodicity}
Let $G_n$ be a bounded octal game with bound $k\in\mathbb{N}$. Suppose that $\exists\, n_0,p\geq 1$ such that $\mathcal{N}(G_n)=\mathcal{N}(G_{n+p})$ for all $n$ satisfying $n_0\leq n\leq 2n_0+p+k$. Then, $G_n$ is periodic.
\end{theorem}
The proof follows by a simple induction on $n$; for details and a more extensive survey, see \cite{siegel}. A prominent conjecture in combinatorial game theory, initially proposed by John Conway, is the following:
\begin{conjecture}
All bounded octal games are periodic.
\end{conjecture}
The conjecture is convincing, but it offers no upper bound on the period and computational verification on a large scale is mostly intractable.
\par Disappointingly, other than through Theorem \ref{periodicity} and computational search, we do not have a way to prove that a sequence of games is periodic, even given that a sequence with almost identical structure is periodic. We believe, however, that this is a promising direction. Consider the following game:
\begin{definition}
$STARKAYLES_{k,n}$ is the game $GRAPH_G$, where $G$ is obtained by starting with a star graph on $k$ vertices, and then extending one of the branches to be a path of edge-length $n$.
\end{definition}
\par Observe that $STARKAYLES_{1,n}$ is the same as $KAYLES_n$. We have computationally verified for small values of $k$ that $STARKAYLES_{k,n}$ is periodic, with period a multiple of $12$. We conjecture that this generalizes, since the fixed star should not, intuitively, have a structural effect on the asymptotic behaviour of the sequence.
\begin{figure}
\centering
\begin{tikzpicture}
\renewcommand*{\VertexInterMinSize}{12pt}
\SetVertexNoLabel
\Vertex{Z} \Vertices{circle}{A,B,C,D,E} \EA(A){F} {\GraphInit[vstyle=Empty]\SetVertexLabel \EA[L=$\cdots$](F){G}} \EA(G){H} \EA(H){I}
\Edge(Z)(A) \Edge(Z)(B) \Edge(Z)(C) \Edge(Z)(D) \Edge(Z)(E) \Edge(A)(F) \Edge(F)(G) \Edge(G)(H) \Edge(H)(I)
\end{tikzpicture}
\caption{The game $STARKAYLES_{5,n}$. A valid move is picking a vertex, and removing a positive number of edges from it. We conjecture that all games of this form will be periodic.}
\end{figure}
\begin{conjecture}
For all $k$, $STARKAYLES_{k,n}$ is periodic, with period a multiple of $12$.
\end{conjecture}
To move beyond computational verification, we suggest the following direction of research:
\begin{question}
Can we prove that $STARKAYLES_{2,n}$ is periodic, without relying on Theorem \ref{periodicity}, and only on the fact that $KAYLES_n$ is periodic?
\end{question}
Of course, there should not be anything special about starting with a star as opposed to any other fixed graph, and extending a path of length $n$ from a vertex. However, $STARKAYLES_{2,n}$ seems to be the simplest extension to $KAYLES$ that also preserves periodicity.
\par The operation $\boxplus(*k)$ cannot model attaching a fixed graph to a vertex in $KAYLES_n$; however, it's similar. We also conjecture the following:
\begin{conjecture}
$KAYLES_n\boxplus(*1)$ is periodic.
\end{conjecture}
This conjecture is virtually impossible to verify computationally, since computing the nimbers involves looking at roughly $P(n)$ (the partition number of $n$) many games, which grows superpolynomially in $n$: the $\boxplus(*1)$ prevents us from calculating the nimber of a disjoint union of $KAYLES$ games by simply XORing the nimbers. We hope that techniques that can address Question~6 will generalize to prove Conjecture~3.
\newpage
\section{Introduction}
\label{sec:intro}
In recent years, the challenge of modelling football outcomes has gained attention, in large part due to the potential for making substantial profits in betting markets. According to the current literature, this task may be achieved
by adopting two different modelling strategies: the \textit{direct} models, for the number of goals scored by two competing teams; and the \textit{indirect} models, for estimating the
probability of the categorical outcome of a win, a draw, or a loss, which will hereafter be referred to as a \textit{three-way} process.
The basic assumption of the direct models is that the number of goals scored by the two teams follow two Poisson distributions. Their dependence structure and the specification of their
parameters are the other most relevant assumptions, according to the literature.
The scores' dependence issue is, in fact, the subject of much debate, and the discussion is far from settled. As one of the first contributors to the modelling of football scores, \cite{maher1982modelling} used two conditionally independent Poisson distributions, one for the goals scored by the home team, and another for those scored by the away team.
\cite{dixon1997modelling} expanded upon Maher's work and extended his model, introducing a parametric dependence between the scores. This also motivates the bivariate Poisson model, introduced from a frequentist perspective in \citet{karlis2003analysis} and from a Bayesian perspective in \cite{ntzoufras2011bayesian}. On the other hand, \cite{baio2010bayesian} assume conditional independence within hierarchical Bayesian models, on the grounds that the correlation between the goals is already taken into account by the hierarchical structure. Similarly, \cite{groll2013spain} and \cite{groll2015prediction} show that, to some extent, the dependence between the scores of two competing teams may be explained by the inclusion of some team-specific covariates in the linear predictors. However, \cite{dixon1998birth} note that modelling the dependence within a single match is possible: in such a case, a temporal structure over the 90 minutes is required.
The second common assumption is the inclusion in the models of team-specific effects describing the attack and the defence strengths of the competing teams. Generally, these are used for modelling the scoring rate of a given team, and in much of the aforementioned literature they do not vary over time. Of course, this is a major limitation of these models. \cite{dixon1997modelling} tried to overcome this problem by downweighting the likelihood exponentially over time, in order to reduce the impact of matches far from the current time of evaluation. Over the last 10 years, however, the advent of dynamic models has allowed these team effects to vary over the season and to have a temporal structure. The independent Poisson model proposed by \cite{maher1982modelling} has been extended to a Bayesian dynamic independent model, where the evolution structure is based on continuous time \citep{rue2000prediction}, or is specified for discrete times, such as a random walk for both the attack and defence parameters \citep{owen2011dynamic}. The non-dynamic bivariate Poisson model, instead, is extended in \cite{koopman2015dynamic} and \cite{koopman2017forecasting}, and is expressed as a state space model where the team effects vary as a function of a state vector.
For our purposes, the scores' dependence assumption may be relaxed, and in this paper we adopt a conditional independence assumption. From a purely conceptual point of view, we have several reasons for adopting two independent Poisson distributions: (i) as discussed by \cite{baio2010bayesian}, assuming two conditionally independent Poisson hierarchical Bayesian models implicitly allows for correlation, since the observable variables are mixed at an upper level; (ii) as noted by \cite{mchale2011modelling}, there is empirical evidence that the goals of two teams in seasonal leagues display only slightly positive correlation, or no correlation at all, whereas goals are negatively correlated for national teams; (iii) bivariate Poisson models \citep{karlis2003analysis}, which represent the most typical choice for modelling correlation, only allow for non-negative correlation. Moreover, the independence assumption allows for a simpler formulation of the likelihood function and simplifies the inclusion of the bookmakers' odds in our model. Concerning the dynamic assumption for the team-specific effects, we use an autoregressive model, centring the effect for season $\tau$ at the lagged effect for season $\tau-1$, plus a fixed effect.
Whatever the choices for the two assumptions discussed above, the models proposed in this context were built with both descriptive and predictive goals, and their parameter estimates and model probabilities have often been used for building efficient betting strategies \citep{dixon1997modelling, londono2015sports}. In fact, the well-known expression `beating the bookmakers' is often considered a mantra for whoever tries to predict football---or, more generally, sports---results. As mentioned by \cite{dixon1997modelling}, winning money from the bookmakers requires a determination of probabilities that is sufficiently more accurate than the one obtained from the odds. On the other hand, it is empirically known that betting odds are the most accurate source of information for forecasting sports
performances \citep{vstrumbelj2014determining}. However, at least two issues deserve a deeper analysis: how to determine probability forecasts from the raw betting odds, and how to use this source of information within a forecasting model (e.g., to predict the number of goals).
Concerning the first point, it is well known that the betting odds do not correspond to probabilities; in fact, to make a profit, bookmakers set unfair odds, and they have a `take' of 5-10\%. In order to derive a set of coherent probabilities from these odds, many researchers have used the \textit{basic
normalization} procedure, by normalising the inverse odds up to their sum. Alternatively, \cite{forrest2005odds} and \cite{forrest2002outcome} propose a
regression model-based approach, modelling the betting probabilities through an historical set of betting odds and match outcomes.
However, \cite{vstrumbelj2014determining} shows that Shin's procedure \citep{shin1991optimal, shin1993measuring} gives the best results overall, being preferable to both the basic normalisation and the regression approaches.
Concerning the second issue, only a small body of literature has focused on using the existing betting odds as \textit{part} of a statistical model for improving the predictive accuracy and the model fit. \cite{londono2015sports} used the betting odds for eliciting the hyperparameters of a Dirichlet distribution, and then updated them based on observations of the categorical three-way process. No one, however, has tried to implement a similar strategy within the framework of the direct models.
\setlength{\parskip}{0pt}
In this paper we try to fill this gap, creating a bridge between the betting odds and betting probabilities on the one hand, and the statistical modelling of the scores on the other. Once we have transformed the inverse betting odds into probabilities, we develop a procedure to (i) infer from these the scoring intensities implicit in the bookmakers' odds, and (ii) use these implicit intensities directly in the conditionally independent Poisson model for the scores, within a Bayesian perspective. We are interested both in the estimation of the model parameters and in the prediction of a new set of matches. Intuitively, the latter task is much more difficult than the
former, since football is intrinsically noisy and hardly predictable. However, we believe that combining the
betting odds with an historical set of data on match results may give predictions that are more accurate than those obtained from a single source of information.
\setlength{\parskip}{0pt}
In Section~\ref{sec:elicited} we introduce two methods, proposed in the current literature, for transforming the three-way betting odds offered by bookmakers into probabilities. In Section~\ref{sec:model}, we introduce the full model, along with the implicit scoring rates. The results and predictive accuracy of
the model on the top four European leagues---Bundesliga, Premier League, La Liga and Serie A---are presented in Section~\ref{sec:data}, and are summarised through posterior probabilities and
graphical checks. Some profitable betting strategies are briefly presented in Section~\ref{sec:betting}. Section~\ref{sec:concl} concludes our analysis.
\section{Transforming the betting odds into probabilities}
\label{sec:elicited}
The connection between betting odds and probabilities has been broadly investigated over the last decades. Before proceeding, we introduce the formal definition of odds and the related notation that we are going to use throughout the rest of the paper.
The odds on any given event are usually specified as the amount of money we would win if we bet one unit on that event. Thus, odds of 2.5 correspond to 2.5 euros (or pounds) won for each euro bet. The inverse odds---usually denoted as 1:2.5---correspond to the unfair probability associated with that event. In fact, as is widely known, the betting odds do not correspond to probabilities: the sum of the inverse odds for a single match needs to be greater than one \citep{dixon1997modelling} in order to guarantee the bookmakers' profit. Here, $O_{m}=\{o_{Win}, o_{Draw}, o_{Loss}\}$, $\Pi_{m}=(\pi_{Win}, \pi_{Draw}, \pi_{Loss})$, and $\Delta_{m}=\{\mbox{`Win'}, \mbox{`Draw'}, \mbox{`Loss'}\}$ denote the vector of the inverse betting odds, the vector of the estimated betting probabilities, and the set of the three possible three-way results for the $m$-th game, respectively.
There is empirical evidence that the betting odds are the most accurate available source of probability forecasts for sports \citep{vstrumbelj2014determining}; in other words, forecasts based on odds-derived probabilities have been shown to be better than, or at least as good as, those from statistical models which use sport-specific predictors and/or expert tipsters.
However, some issues remain open. Among these is a strong debate over which method to
use for inferring a set of probabilities from the raw betting odds. We can transform them into
probabilities by using the two procedures proposed in the literature: the \textit{basic normalisation}---dividing the
inverse odds by the booksum, i.e. the sum of the inverse betting odds, as broadly explained in \cite{vstrumbelj2014determining}---and \textit{Shin's procedure}
described in \cite{shin1991optimal, shin1993measuring}. \cite{vstrumbelj2014determining}, \cite{cain2002one,
cain2003favourite}, and \cite{smith2009bookmakers} show that Shin's probabilities improve over the basic normalisation: in \cite{vstrumbelj2014determining} this result has been achieved by the application of the Ranked Probability Score (RPS) \citep{epstein1969scoring}, which may be defined as a discrepancy measure between the probability of a three-way process outcome and the actual outcome.
In this paper we will not focus on comparing these two procedures; rather, we are interested in using the probabilities derived from each for statistical and prediction purposes, as will become clearer in later sections.
\newpage
\begin{description}
\item[(A)]\textit{Basic normalisation}
\begin{equation}
\pi_{i}=\frac{o_{i}}{\beta}, \ i \in \Delta_{m},
\label{p:betting:dixon}
\end{equation}
where $\beta=\sum_{i}o_{i}$ is the so-called booksum \citep{vstrumbelj2014determining}. The method has gained great popularity due to its simplicity.
\item[(B)] \textit{Shin's procedure}
In the model proposed by \citet{shin1993measuring}, the bookmakers specify their odds in order to maximise their expected profit in a market with uninformed bettors and insider traders. The latter are those particular actors who, thanks to superior information, are assumed to \textit{already} know the outcome of a given event---e.g. a football match, a horse race, etc.---before the event takes place. Their contribution to the global betting volume is quantified by the percentage $z$. \cite{jullien1994measuring} used Shin's model to explicitly work out the expression for the betting probabilities:
\begin{equation}
\pi(z)_{i}= \frac{\sqrt{z^{2}+4(1-z)\frac{o_{i}^{2}}{\sum_{j} o_{j}}}-z}{2(1-z)} , \ i \in \Delta_{m},
\label{p:betting:shin}
\end{equation}
so that $\sum_{i=1}^{3}\pi(z)_{i}=1$.
The current literature refers to these as Shin's probabilities.
The formula above depends on the insider trading rate $z$, which \cite{jullien1994measuring} suggested should be estimated by nonlinear least squares as:
\[
\hat{z}=\mbox{arg}\,\underset{z}{\min}\, \Big(\sum_{i=1}^{3}\pi(z)_{i}-1 \Big)^{2}.
\]
The value thus obtained may be interpreted as the minimum rate of insider trading that yields probabilities consistent with the vector of inverse betting odds $O_{m}$.
\end{description}
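Both transformations are straightforward to implement. The Python sketch below is illustrative: the bisection search for $z$ (solving $\sum_i \pi(z)_i = 1$ directly, rather than via the least-squares formulation above) and the example odds are our own choices.

```python
import math

def basic_normalise(inv_odds):
    """Method (A): divide each inverse odd by the booksum."""
    beta = sum(inv_odds)
    return [o / beta for o in inv_odds]

def shin_probabilities(inv_odds, tol=1e-12):
    """Method (B): Shin's probabilities, with the insider rate z found
    numerically so that the probabilities sum to one."""
    beta = sum(inv_odds)

    def pi(z):
        return [(math.sqrt(z * z + 4 * (1 - z) * o * o / beta) - z)
                / (2 * (1 - z)) for o in inv_odds]

    # sum(pi(z)) starts at sqrt(beta) > 1 when z = 0 and falls below 1
    # as z -> 1, so bisection locates the root.
    lo, hi = 0.0, 1.0 - 1e-9
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(pi(mid)) > 1.0:
            lo = mid
        else:
            hi = mid
    z = (lo + hi) / 2
    return z, pi(z)

# Example: decimal odds (2.00, 3.50, 4.00) give the inverse odds below.
inverse_odds = [0.50, 1 / 3.5, 0.25]
```

On this example the booksum exceeds one by a few percent, so the estimated insider rate $z$ is small and positive, consistent with the interpretation above.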
\vspace{0.5cm}
Both of these methods yield probabilities, with the difference that Shin's procedure entails a function of the insider traders' rate which needs to be minimised for every match. Figure~\ref{fig01} displays the three-way betting probabilities obtained through the two procedures described above for La Liga (the Spanish championship), from the 2007-2008 season to the 2016-2017 season. As may be noted, the Draw probabilities obtained with the basic normalisation tend to be higher than those obtained with Shin's procedure. Conversely, as a home win or an away win becomes more likely, Shin's procedure tends to assign it a higher probability.
\begin{figure}
\centering
\subfloat[Home win]{
\includegraphics[height=5cm, width=5cm]{ComparingProbWin.pdf}}~
\subfloat[Draw]{
\includegraphics[height=5cm, width=5cm]{ComparingProbDraw.pdf}}~
\subfloat[Away win]{
\includegraphics[height=5cm, width=5cm]{ComparingProbAway.pdf}}
\caption{\label{fig01} Comparison between Shin probabilities ($x$-axis) and basic normalised probabilities ($y$-axis) for the Spanish La Liga championship (seasons from 2007/2008 to 2016/2017), according to seven different bookmakers.}
\end{figure}
As is intuitive, a higher probability of a home win should somehow be associated with a greater number of goals scored by the home team, and similarly for the away team.
\section{Model}
\label{sec:model}
\subsection{Model for the scores}
Here, $\bm{y}=(y_{m1}, y_{m2})$ denotes the vector of observed scores, where $y_{m1}$ and $y_{m2}$ are the number
of goals scored by the home team and by the away team respectively in the $m$-th match of the dataset.
According to the motivations provided by \cite{baio2010bayesian}, in this paper we adopt a conditional
independence assumption between the scores. This choice allows for a simpler formulation for the likelihood function
and, later on, for the direct inclusion of the bookmakers' odds into the model through the Skellam distribution
\citep{karlis2009bayesian}.
The model for the scores is then specified as:
\begin{align}
\begin{split}
y_{m1}| \theta_{m1} &\sim \mathsf{Poisson}(\theta_{m1})\\
y_{m2}| \theta_{m2} &\sim \mathsf{Poisson}(\theta_{m2}),\\
y_{m1} \perp &y_{m2} | \theta_{m1}, \theta_{m2},
\end{split}
\label{y}
\end{align}
where $\bm{y}$ is modelled through \textit{conditionally} independent Poisson distributions and the joint parameter $\bm{\theta}=(\theta_{m1}, \theta_{m2})$ represents the scoring intensities in the $m$-th game, for the home team and for the away team respectively. In what follows, we will refer to \eqref{y} as the \textit{basic} model, which is estimated
using the past scores. The main novelty of this paper consists of enriching this specification by including the extra information which stems from the bookmakers' betting odds. Thus, for each pair of match $m$ and bookmaker $s$, $s=1,\ldots,S$, the betting probabilities $\pi^{s}_{i,m}$, $i \in \Delta_{m}$, derived with one of the methods in Section~\ref{sec:elicited}, may be used to find the values $\hat{\bm{\theta}}^{s}=(\hat{\theta}^{s}_{m1}, \hat{\theta}^{s}_{m2})$ which solve the following nonlinear system of equations:
\vspace{0.3cm}
\begin{align}
\begin{split}
\pi^{s}_{Win, m}+\pi^{s}_{Draw, m}=& P( y_{m1} \ge y_{m2}| \theta^{s}_{m1}, \theta^{s}_{m2} ) \\
\pi^{s}_{Loss, m}=&P( y_{m1}< y_{m2}| \theta^{s}_{m1}, \theta^{s}_{m2} ).\\
\end{split}
\label{system}
\end{align}
\vspace{0.3cm}
The existence of these values is guaranteed by the fact that, under~\eqref{y}, $y_{m1}-y_{m2} \sim PD(\theta_{m1}, \theta_{m2})$, where $PD$ denotes the Poisson-Difference distribution, also known as the Skellam distribution, with parameters $\theta_{m1}, \theta_{m2}$ and mean $\theta_{m1}-\theta_{m2}$. In such a way, we obtain for each pair $(m,s)$ the \textit{implicit} scoring rates $\hat{\theta}^{s}_{m1}, \hat{\theta}^{s}_{m2}$, inferring the scoring intensities implicit in the three-way bookmakers' odds. We then augment our dataset by including the observed $\hat{\theta}^{s}_{m1}, \hat{\theta}^{s}_{m2}$ as auxiliary data. For every $m$, our new data vector is represented by:
$$ (\bm{y}, \bm{\hat{\theta}}^{s})=(y_{m1}, y_{m2}, \hat{\theta}^{s}_{m1}, \hat{\theta}^{s}_{m2},\ s =1,\ldots,S ).$$
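To illustrate the inversion, the following Python sketch computes the three-way probabilities implied by a pair of intensities under~\eqref{y}, and then recovers a pair of implicit rates from bookmaker probabilities by a crude grid search. The pairing of $\pi_{Win}$ with $P(y_{m1}>y_{m2})$ and $\pi_{Loss}$ with $P(y_{m1}<y_{m2})$, the truncation level, and the grid are all illustrative assumptions of ours, not the estimation procedure of the paper.

```python
import math

def pois_pmf(theta, n_max=25):
    """Poisson probabilities P(Y = k) for k = 0, ..., n_max."""
    return [math.exp(-theta) * theta ** k / math.factorial(k)
            for k in range(n_max + 1)]

def three_way_probs(t1, t2, n_max=25):
    """P(y1 > y2), P(y1 = y2), P(y1 < y2) for independent Poisson scores,
    i.e. the sign probabilities of the Skellam difference y1 - y2."""
    p1, p2 = pois_pmf(t1, n_max), pois_pmf(t2, n_max)
    c1, c2, s1, s2 = [], [], 0.0, 0.0
    for a, b in zip(p1, p2):       # running sums give the two cdfs
        s1 += a; c1.append(s1)
        s2 += b; c2.append(s2)
    win = sum(p1[i] * c2[i - 1] for i in range(1, n_max + 1))
    loss = sum(p2[j] * c1[j - 1] for j in range(1, n_max + 1))
    return win, 1.0 - win - loss, loss

def implicit_rates(pi_win, pi_loss, step=0.1, t_max=3.0):
    """Grid search for the intensities whose implied win/loss
    probabilities best match the bookmaker's probabilities."""
    grid = [step * k for k in range(1, int(round(t_max / step)) + 1)]
    best, best_err = (grid[0], grid[0]), float("inf")
    for t1 in grid:
        for t2 in grid:
            w, _, l = three_way_probs(t1, t2)
            err = (w - pi_win) ** 2 + (l - pi_loss) ** 2
            if err < best_err:
                best, best_err = (t1, t2), err
    return best
```

Feeding the probabilities implied by a known pair of intensities back into the grid search recovers that pair, which is the sanity check one would want before applying the inversion to real bookmaker probabilities.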
Now, from Equation~\eqref{y} we move to the following specification:
\begin{align}
\begin{split}
y_{m1}|\theta_{m1}, \lambda_{m1} &\sim \mathsf{Poisson}(p_{m1}\theta_{m1}+(1-p_{m1})\lambda_{m1})\\
y_{m2}|\theta_{m2}, \lambda_{m2} &\sim \mathsf{Poisson}(p_{m2}\theta_{m2}+(1-p_{m2})\lambda_{m2}),
\end{split}
\label{y:mixture}
\end{align}
where $\lambda_{m1}, \lambda_{m2}$ are bookmakers' parameters introduced for modelling the additional data $\hat{\theta}^{s}_{m1}, \hat{\theta}^{s}_{m2}$, $s=1,\ldots,S$, as explained in the next section. The parameters $p_{m1}, p_{m2}$ are assigned a non-informative prior distribution with hyperparameters $a$ and $b$, i.e. $p_{m\cdot} \sim \mathsf{Beta}(a,b)$.
\subsection{Model for the rates}
Equation~\eqref{y:mixture} introduced a convex combination for the Poisson parameters, accounting for both the scoring rates $\theta_{\cdot 1}, \theta_{\cdot 2}$ and the bookmakers' parameters $\lambda_{\cdot 1}, \lambda_{\cdot 2}$. Denoting by $T$ the number of teams, the common specification for the scoring intensities is a log-linear model in which, for each match $m$:
\begin{align}
\begin{split}
\log(\theta_{m1})&=\mu+att_{t[m]1}+def_{t[m]2}\\
\log(\theta_{m2})&=att_{t[m]2}+def_{t[m]1}
\end{split}
\label{theta}
\end{align}
with the nested index $t[m]$ denoting the team $t$ in the $m$-th game. The parameter $\mu$ represents the well-known advantage of playing at home, and is assumed to be constant for all the teams and over time, as in the current literature. The attack and defence strengths of the competing teams are modelled by the parameters $att$ and $def$
respectively. \cite{baio2010bayesian} and \cite{dixon1997modelling} assume that these team-specific effects do not vary over time, and this represents a major limitation of their models. In fact, \cite{dixon1998birth} show that the attack and defence effects are not static and may even vary during a single match; thus, a static assumption is often not reliable for making predictions and represents a crude approximation of reality.
\cite{rue2000prediction} propose a generalised linear Bayesian model in which the team-effects at match time $\tau$
are drawn from a Normal distribution centred at the team-effects at match time $\tau-1$, and with a variance term
depending on the time difference. We make a seasonal assumption, with the effects for season $\tau$ following a Normal distribution centred at the previous seasonal effect plus a fixed component. For each $t=1,\ldots,T, \ \tau=2,\ldots,\mathcal{T}$:
\begin{align}
\begin{split}
att_{t,\tau} &\sim \mathsf{N}(\mu_{att}+att_{t, \tau-1}, \sigma^{2}_{att}) \\
def_{t, \tau} &\sim \mathsf{N}(\mu_{def}+def_{t, \tau-1}, \sigma^{2}_{def}),
\end{split}
\label{att:def}
\end{align}
while, for the first season, we assume:
\begin{align}
\begin{split}
att_{t,1} &\sim \mathsf{N}(\mu_{att}, \sigma^{2}_{att}) \\
def_{t, 1} &\sim \mathsf{N}(\mu_{def}, \sigma^{2}_{def}).
\end{split}
\label{att:def:1}
\end{align}
As outlined in the literature, we need to impose a `zero-sum' identifiability constraint on these random effects within each season:
$$ \sum_{t=1}^{T} att_{t, \tau}=0, \ \ \ \ \sum_{t=1}^{T}def_{t, \tau}=0, \ \ \ \tau=1,\ldots, \mathcal{T}, $$
whereas $\mu$ and the hyperparameters of our model are assigned weakly informative priors:
\vspace{-0.5cm}
\begin{align*}
\mu, \mu_{att}, \mu_{def} \sim & \mathsf{N}(0,10)\\
\sigma_{att}, \sigma_{def} \sim & \mathsf{Cauchy}^{+}(0,2.5),\\
\end{align*}
where $\mathsf{Cauchy^+}$ denotes the half-Cauchy distribution, centred at 0 and with scale 2.5.\footnote{On the
choice of the half-Cauchy distribution for scale parameters, see
\citet{gelman2006prior}.} The team-specific effects modelled through Equations~\eqref{att:def} and~\eqref{att:def:1} are estimated from the past scores in the dataset. As expressed in \eqref{y:mixture}, we add a level to the hierarchy by including the implicit scoring rates as a separate data model. Since a further level of the hierarchy consists of $S$ bookmakers, it is natural to consider $\lambda_{m1}, \lambda_{m2}$ as the model parameters for the observed $\hat{\theta}^{s}_{m1}, \hat{\theta}^{s}_{m2}$. More precisely, these parameters represent the means
of two truncated Normal distributions in a further model for the implicit scoring rates:
\begin{align}
\begin{split}
\hat{\theta}^{1}_{m1},...,\hat{\theta}^{S}_{m1} & \sim \mbox{trunc}\mathsf{ N}( \lambda_{m1}, \tau^{2}_{1}, 0, \infty)\\
\hat{\theta}^{1}_{m2},...,\hat{\theta}^{S}_{m2} & \sim \mbox{trunc}\mathsf{ N}( \lambda_{m2}, \tau^{2}_{2}, 0, \infty),
\end{split}
\label{theta_bm}
\end{align}
where $\mbox{trunc}\mathsf{N}(\mu, \sigma^{2}, a, b)$ denotes a Normal
distribution with parameters $\mu \in \mathbb{R}, \sigma^{2} \in \mathbb{R}^{+}$,
truncated to the interval $[a,b]$. The parameters $\lambda_{m1}, \lambda_{m2}$
are in turn assigned truncated Normal distributions:
\begin{align}
\begin{split}
\lambda_{m1}& \sim \mbox{trunc} \mathsf{ N}( \alpha_{1}, 10, 0, \infty)\\
\lambda_{m2} & \sim \mbox{trunc}\mathsf{ N}( \alpha_{2} , 10, 0, \infty),
\end{split}
\label{lambda_bm}
\end{align}
with hyperparameters $\alpha_{1}, \alpha_{2}$.
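The seasonal evolution of the team effects can be sketched in a few lines. The following is a minimal numpy simulation of the attack (or defence) effects, with illustrative values for the hyperparameters $\mu_{att}$ and $\sigma_{att}$ (not estimates from the paper), re-centring each season so that the zero-sum identifiability constraint holds.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_effects(n_teams, n_seasons, mu=0.1, sigma=0.2):
    """Simulate seasonal team effects as in the model: season 1 draws
    around mu; each later season is centred at the previous seasonal
    effect plus the fixed component mu. Every season is re-centred to
    satisfy the zero-sum identifiability constraint."""
    eff = np.empty((n_teams, n_seasons))
    eff[:, 0] = rng.normal(mu, sigma, size=n_teams)
    eff[:, 0] -= eff[:, 0].mean()          # zero-sum within season 1
    for tau in range(1, n_seasons):
        eff[:, tau] = rng.normal(mu + eff[:, tau - 1], sigma)
        eff[:, tau] -= eff[:, tau].mean()  # zero-sum within season tau
    return eff

# Illustrative dimensions: 20 teams, 10 seasons
att = simulate_effects(n_teams=20, n_seasons=10)
```

In an MCMC implementation (e.g. in Stan or WinBUGS) the same constraint is typically imposed by centring the sampled effects within each season.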
\section[Applications and results]{Applications and results: top four European leagues }
\label{sec:data}
\subsection{Data}
We collected the exact scores for the top four European professional leagues---Italian Serie A, English Premier League,
German Bundesliga, and Spanish La Liga---from season 2007/2008 to 2016/2017. Moreover, we also collected all the three-way odds for
the following bookmakers: Bet365, Bet\&Win, Interwetten, Ladbrokes, Sportingbet, VC Bet, William Hill. All these data have
been downloaded from the publicly available page \url{http://www.football-data.co.uk/}. We are interested in both (a)
posterior predictive checks in terms of replicated data under our models, and (b) out-of-sample predictions for a new dataset.
With regard to point (b), which appears to be the more appealing for fans, bettors, and statisticians, let $\mathcal{T}_{r}$ denote
the \textit{training set}, and $\mathcal{T}_{s}$ the \textit{test set}. Our training set contains the results of nine seasons for each
professional league, and our test set contains the results of the tenth season.
The model coding has been implemented in {\ttfamily WinBUGS}
\citep{spiegelhalter2003winbugs} and in {\ttfamily Stan} \citep{rstan}. We ran our MCMC simulation for $H=5000$
iterations, with a burn-in period of $1000$, and we monitored convergence using the usual MCMC diagnostics \citep{gelman2014bayesian}.
\subsection{Parameter estimates}
As broadly explained in Section~\ref{sec:model}, the model in~\eqref{y:mixture} combines historical information about the scores with betting information from the odds. By construction, the scoring rate is a convex combination that \textit{borrows strength} from both sources of information.
Figure~\ref{fig02b} displays the posterior estimates for the attack and the defence parameters associated
with the teams belonging to the English Premier League during the test-set season 2016-2017. The larger the team-attack parameter, the greater the attacking quality of that team; conversely, the lower the team-defence parameter, the stronger the defence of that team. As a general comment, after reminding the
reader that these quantities are estimated using only the historical results, the pattern seems to reflect the actual
strength of the teams across the seasons. For example Chelsea and Manchester City register the highest effects for the attack and the
lowest for the defence across the nine seasons considered: consequently, the out-of-sample estimates for the tenth season
mirror previous performance. Conversely, weaker teams are associated with an inverse pattern: see for instance Hull City, Middlesbrough, and Sunderland, all relegated at the end of the season. It is worth noting
that some wide posterior bars are associated with those teams with fewer seasonal observations: in fact, for simplicity, we
do not account for a relegation system, and some teams are observed for fewer seasons over the period considered.
\begin{sidewaysfigure}
\centering
\makebox{\includegraphics[scale=1.3]{TeamEffects1617PL.pdf}}
\caption{\label{fig02b} Posterior 50\% confidence bars for the attack (red) and the defence (blue) effects along the 10 seasons for the teams belonging to the Premier League 2016/2017. Wider posterior bars are associated with teams reporting fewer observations.}
\end{sidewaysfigure}
Figure~\ref{fig03p} displays the ordered 50\% confidence bars for the marginal posteriors of the probability parameters $p_{m1}, p_{m2}, m=1,\ldots,M$, which appear in~\eqref{y:mixture}, computed for the German Bundesliga. Despite the high variability, these plots suggest that the amount of information stemming from the bookmakers is comparable with that arising from the historical results. The convex combination in~\eqref{y:mixture} therefore seems an adequate option for our purposes.
\begin{figure}
\centering
\subfloat[$p_{\cdot 1}$]
{\includegraphics[scale=0.45]{p1Ger.pdf}}~
\subfloat[$p_{\cdot 2}$]
{\includegraphics[scale=0.45]{p2Ger.pdf}}
\caption{\label{fig03p} Ordered posterior 50\% confidence bars for parameters $p_{\cdot 1}, \ p_{\cdot 2}$ for German Bundesliga (from 2007-2008 to 2015-2016), 2754 matches.}
\end{figure}
\subsection{Model fit}
As broadly explained in \cite{gelman2014bayesian}, once we obtain some estimates from a Bayesian model we should assess the fit of this
model to the data at hand and the plausibility of such a model, given the purposes for which it was built. The
principal tool for achieving this task is \textit{posterior predictive checking}. This
procedure consists of verifying whether replicated data generated under the model are similar to the observed
data. Thus, we draw simulated values $y^{rep}$ from the joint predictive distribution of replicated data:
\begin{equation*}
p(y^{rep}|y) = \int_{\Theta} p({y}^{rep}, \theta|y) d\theta= \int_{\Theta} p(\theta|y)p({y}^{rep}|\theta) d\theta.
\label{posterior_predicitve_1}
\end{equation*}
It is worth noting that the symbol $y^{rep}$ used here is different from the symbol $\tilde{y}$ used in the next section: the former is just a replication of $y$, while the latter is any future observable value.
Then, we define a test statistic $T(y)$ for assessing the discrepancy between the model and the data. A lack of fit of the model with respect to the posterior predictive distribution may be measured by tail-area posterior probabilities, or Bayesian $p$-values
\begin{equation}
p_{B}= P(T(y^{rep})>T(y)|y).
\label{eq:bayesian_p}
\end{equation}
In practice, we do not compute the integral in~\eqref{posterior_predicitve_1}, but approximate the posterior
predictive distribution through simulation. If we denote by $\theta^{(s)}, \ s=1,\ldots,S$, the $s$-th MCMC draw from the
posterior distribution of $\theta$, we just draw $y^{rep}$ from the predictive distribution $p(y^{rep}| \theta^{(s)})$.
Hence, an estimate for the Bayesian $p$-value is given by the proportion of the $S$ simulations for which the quantity
$T(y^{rep \ (s)})$ exceeds the observed quantity $T(y)$. From an interpretative point of view, an extreme $p$-value---too
close to 0 or 1---suggests a lack of fit of the model to the observed data.
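The simulation-based estimate of $p_{B}$ described above can be sketched as follows. This is a hedged toy example: the data, the conjugate posterior draws, and the variance test statistic are illustrative assumptions for a simple Poisson model, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 'observed' goal counts y and S posterior draws of a
# Poisson rate theta (conjugate Gamma posterior under a Gamma(1, 1) prior).
y = rng.poisson(1.4, size=380)
theta_draws = rng.gamma(y.sum() + 1, 1.0 / (len(y) + 1), size=2000)

def bayesian_p_value(y, theta_draws, T=np.var):
    """Estimate p_B = P(T(y_rep) > T(y) | y): for each posterior draw,
    simulate a replicated dataset and compare the test statistic with
    its observed value; return the proportion of exceedances."""
    t_obs = T(y)
    exceed = 0
    for theta in theta_draws:
        y_rep = rng.poisson(theta, size=len(y))
        exceed += T(y_rep) > t_obs
    return exceed / len(theta_draws)

p_b = bayesian_p_value(y, theta_draws)
```

A value of `p_b` far from both 0 and 1 indicates that the observed statistic is not extreme among the replications.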
Rather than comparing the posterior distribution of some statistics with their observed values
\citep{gelman2014bayesian}, we propose a slightly different approach, allowing for a broader comparison of the replicated
data under the model. Figure~\ref{fig03dens} displays the replicated distributions $y^{rep}_{1}-y^{rep}_{2}$ (grey
areas) and the observed goals' difference (red horizontal line) from the top four European
leagues. From these plots the fit of the model seems good: the replicated data under the model are plausible
and close to the data at hand. As may be noted, the variability of the replicated goals' differences at
-1, 0, and 1 is greater than that at -3 or 3. Moreover, the observed goals'
differences always fall within the replicated distributions. In correspondence with a draw---a goal difference of 0---the
observed goals' difference registers a high posterior probability compared with the corresponding replicated distribution.
\begin{figure}[ht]
\centering
\subfloat[Bundesliga]
{\includegraphics[scale=0.5]{discrete_ger.jpeg}}~
\subfloat[La Liga]
{\includegraphics[scale=0.5]{discrete_spa.jpeg}}\\
\subfloat[Premier League]
{\includegraphics[scale=0.5]{discrete_eng.jpeg}}~
\subfloat[Serie A]
{\includegraphics[scale=0.5]{discrete_ita.jpeg}}\\
\caption{\label{fig03dens} Posterior predictive check for the goals' difference $y_{1}-y_{2}$ against the replicated goals' difference $y^{rep}_{1}-y^{rep}_{2}$ for the top four European leagues. For each league, the graphical posterior predictive checks show that the model fits the data well.}
\end{figure}
\subsection{Prediction and posterior probabilities}
The main appeal of a statistical model lies in its predictive accuracy. As usual in a Bayesian framework, prediction for a new dataset may be performed directly via the
posterior predictive distribution for our unknown set of observable values. Following the notation of
\cite{gelman2014bayesian}, let us denote by $\tilde{y}$ a generic unknown observable. Its distribution is then conditional on the observed $y$,
\begin{equation*}
p(\tilde{y}|y) = \int_{\Theta} p(\tilde{y}, \theta|y) d\theta= \int_{\Theta} p(\theta|y)p(\tilde{y}|\theta) d\theta,
\label{posterior_predicitve_2}
\end{equation*}
where the conditional independence of $y$ and $\tilde{y}$ given $\theta$ is assumed. Figure~\ref{fig03} displays the posterior predictive
distributions for Real Madrid-Barcelona, Spanish La Liga 2016/2017, and for Sampdoria-Juventus, Italian Serie A 2016/2017.
The red square indicates the observed result, (2,3) for the first match and (0,1) for the second match respectively.
Darker regions are associated with higher posterior probabilities. According to the model, the most likely result for the first game is (2,1), with an associated
posterior probability slightly greater than 0.08, whereas for the second game the most likely result coincides with the actual result (0,1).
These plots are not meant to suggest simply backing the most likely result: would it be smart to bet on an event with an associated probability of about 0.09? Maybe not. Rather, these plots provide
a picture that acknowledges the large uncertainty of the prediction. We are not really interested in a model
that often indicates a rare observed result as the most likely outcome; in fact, a model that
favoured the outcome (2,3) as most (or quite) likely would probably not be a good model. Rather, being aware of the unpredictable nature
of football, we would like to capture the posterior uncertainty of a match outcome in such a way that the actual result is not extreme in the predictive distribution.
\begin{figure}
\centering
\makebox{
\includegraphics[scale=0.45]{HeatmapRealBarca.pdf}~
\includegraphics[scale=0.45]{HeatmapSampJuve.pdf}}
\caption{\label{fig03} Posterior predictive distribution of the possible results for the match Real Madrid-Barcelona, Spanish La Liga 2016/2017, and Sampdoria-Juventus, Italian Serie A 2016-2017. Both plots report the posterior uncertainty related to the exact predicted outcome. Darker regions are associated with higher posterior probabilities and the red square corresponds to the observed result.}
\end{figure}
Table~\ref{tab01} and Table~\ref{tab02} report the estimated posterior probabilities for each team being the first,
the second, and the third; the first relegated, the second relegated, and the third relegated for each of the top four
leagues, together with the observed rank and the achieved points, respectively. At the beginning of the 2016-2017 season,
Bayern Munich had an estimated probability 0.8168 of winning the German
league, which it actually did; in Italy, Juventus had a high probability of finishing first (0.592) as well. Conversely,
Chelsea had a low probability of winning the English Premier League at the beginning of the season, mainly due to its poor
results in the previous season. Of course, the model does not account for the players'/managers' transfer market occurring in the summer period. In July 2016,
Chelsea hired Antonio Conte, one of the best European managers, who won the English Premier League at his first attempt. Among the relegated teams, it is worth noting that Pescara has a high estimated probability of being the worst
team of the Italian league (0.46). Globally, the model appears able to identify the teams with a high posterior probability of relegation.
\begin{table}
\caption{\label{tab01} Estimated posterior probabilities for each team being the first, the second, and the third in the Bundesliga, Premier League, La Liga and Serie A 2016-2017, together with the observed rank and the number of points achieved.}
\centering
\bgroup
\def0.7{0.7}
\begin{tabular}{lcccccc}
\hline
&Team & P(1st) & P(2nd) & P(3rd) & Actual rank & Points \\
\hline
\multirow{3}{*}{\includegraphics[scale=0.05]{deutche.png}}&
Bayern Munich & 0.8168 & 0.1508 & 0.0248 & 1 & 82 \\
&RB Leipzig & 0.008 & 0.0284 & 0.0608 & 2 & 67 \\
&Dortmund & 0.1332 & 0.4712 & 0.1856 & 3 & 64 \\
\hline
\multirow{3}{*}{\includegraphics[scale=0.05]{inglese.png}}&
Chelsea & 0.1396 & 0.1592 & 0.1584 & 1 & 93 \\
&Tottenham & 0.1096 & 0.132 & 0.1424 & 2 & 86 \\
&Man City & 0.3904 & 0.2004 & 0.1388 & 3 & 78 \\
\hline
\multirow{3}{*}{\includegraphics[scale=0.05]{spagnola.png}}&
Real Madrid & 0.3868 & 0.4844 & 0.1076 & 1 & 93 \\
& Barcelona & 0.5652 & 0.3536 & 0.0728 & 2 & 90 \\
&Ath Madrid & 0.046 & 0.1348 & 0.5556 & 3 & 78 \\
\hline
\multirow{3}{*}{\includegraphics[scale=0.05]{italiana.png}}&
Juventus & 0.592 & 0.2335 & 0.107 & 1 & 91 \\
&Roma & 0.1535 & 0.263 & 0.2595 & 2 & 87 \\
&Napoli & 0.206 & 0.2965 & 0.213 & 3 & 86 \\
\hline
\end{tabular}
\egroup
\end{table}
\begin{table}
\caption{\label{tab02} Estimated posterior probabilities for each team being the first, the second, and the third relegated team in the Bundesliga, Premier League, La Liga, and Serie A 2016-2017, together with the observed rank and the number of points achieved.}
\centering
\bgroup
\def0.7{0.7}
\begin{tabular}{lcccccc}
\hline
&Team & P(1st rel) & P(2nd rel) & P(3rd rel) & Actual rank & Points \\
\hline
\multirow{3}{*}{\includegraphics[scale=0.05]{deutche.png}}& Wolfsburg & 0.0212 & 0.0236 & 0.0064 & 16 & 37 \\
&Ingolstadt & 0.0952 & 0.0904 & 0.0912 & 17 & 32 \\
&Darmstadt & 0.1192 & 0.1552 & 0.2528 & 18 & 25 \\
\hline
\multirow{3}{*}{\includegraphics[scale=0.05]{inglese.png}}& Hull & 0.1384 & 0.1512 & 0.1428 & 18 & 34 \\
& Middlesbrough & 0.118 & 0.1448 & 0.1812 & 19 & 28 \\
& Sunderland & 0.1272 & 0.1228 & 0.1144 & 20 & 24 \\
\hline
\multirow{3}{*}{\includegraphics[scale=0.05]{spagnola.png}}&
Sp Gijon & 0.1132 & 0.1112 & 0.1016 & 18 & 31 \\
&Osasuna & 0.1464 & 0.174 & 0.228 & 19 & 22 \\
&Granada & 0.138 & 0.1748 & 0.2476 & 20 & 20 \\
\hline
\multirow{3}{*}{\includegraphics[scale=0.05]{italiana.png}}&
Empoli & 0.0795 & 0.066 & 0.0415 & 18 & 32 \\
&Palermo & 0.132 & 0.1765 & 0.1205 & 19 & 26 \\
&Pescara & 0.1215 & 0.178 & 0.46 & 20 & 18 \\
\hline
\end{tabular}
\egroup
\end{table}
\begin{figure}
\centering
\subfloat[Bundesliga]
{\includegraphics[scale=0.5]{RankPlotGer.pdf}}~
\subfloat[Premier League]
{\includegraphics[scale=0.5]{RankPlotEng.pdf}}\\
\subfloat[La Liga]
{\includegraphics[scale=0.5]{RankPlotSpa.pdf}}~
\subfloat[Serie A]
{\includegraphics[scale=0.5]{RankPlotIta.pdf}}
\caption{\label{fig04} Posterior 50\% confidence bars (grey ribbons) for the achieved final points of the top-four European leagues 2016-2017. Black dots are the observed points. Black lines are the posterior medians. At a first glance, the pattern of the predicted ranks appears to match the pattern of the observed ones, and the model calibration appears satisfying.}
\end{figure}
Figure~\ref{fig04} provides posterior 50\% confidence bars (grey ribbons) for the predicted points achieved by each team in the top four European leagues 2016-2017 at the end of their
respective seasons, together with the observed final ranks. At first glance, the four predicted posterior ranks appear to
follow a pattern similar to the observed ones, with only a few exceptions. As may be noticed for the Bundesliga (Panel (a)),
Bayern Munich's prediction mirrors its actual strength in the 2016-2017 season, whereas RB Leipzig was definitely
underestimated by the model. However, the model does not incorporate budget information, and RB Leipzig was one of the
richest teams in the Bundesliga in 2016-2017. In the English Premier League (Panel (b)), Chelsea was definitely
underestimated by the model, whereas Manchester City actually gained the predicted number of points (78). The predicted
pattern for the Spanish La Liga (Panel (c)) is extremely close to the one we observed, apart from the winner (our model
favoured Barcelona, second in the observed rank). The worst teams (Sporting Gijon, Osasuna and Granada) are correctly
predicted to be relegated. Also for the Italian Serie A, the predicted ranks globally match the observed ranks. The outlier
is Atalanta, a team that performed incredibly well and qualified for the Europa League at the end of the season. As a general comment, these plots suggest good model calibration, since roughly
half of the observed points fall within the posterior 50\% confidence bars.
\section{A preliminary betting strategy}
\label{sec:betting}
In this section we provide a real betting experiment, assessing the performance of our model compared to the
existing betting odds. In a betting strategy, two main questions arise: is it worth betting on a given match?
If so, how much is it worth betting? In Section~\ref{sec:elicited}, we described two different procedures for
inferring a vector of betting probabilities $\Pi$ from the inverse odds vector $O$. The common expression `beating the
bookmakers' may be interpreted in two distinct ways: from a probabilistic point of view, and from a profitable point of
view. According to the first definition, which is more appealing for statisticians, a bookmaker is beaten whenever
our match probabilities are more accurate than the bookmakers'. As before, $\pi^{s}_{i,m}$ denotes the
betting probability provided by the $s$-th bookmaker for the $m$-th game, with $i \in \Delta_{m}= \{ \mbox{`Win'},
\mbox{`Draw'}, \mbox{`Loss'} \}$. Additionally, let $Y_{m1}$ and $Y_{m2}$ denote the random variables representing
the number of goals scored by two teams in the $m$-th match. From our model in~\eqref{y:mixture}, we can compute the
following three-way model's posterior probabilities: ${p}_{Win,m}=P(Y_{m1}>Y_{m2}), \ {p}_{Draw, m}=P(Y_{m1}=Y_{m2}),
\ {p}_{Loss, m}=P(Y_{m1}<Y_{m2})$ for each $m \in \mathcal{T}_{s}$, using the results of the Skellam distribution outlined
in Section~\ref{sec:model}. In fact, $Y_{m1}-Y_{m2} \sim PD( \hat{\gamma}_{m1}, \hat{\gamma}_{m2})$, where $\hat{\gamma}
_{m1}=\hat{p}_{m1}\hat{\theta}_{m1}+(1-\hat{p}_{m1})\hat{\lambda}_{m1}$ and $\hat{\gamma}_{m2}=\hat{p}_{m2}
\hat{\theta}_{m2}+(1-\hat{p}_{m2})\hat{\lambda}_{m2}$ are the convex combinations of the posterior estimates obtained
through the MCMC sampling. Thus, the global average probability of a correct prediction for our model may be defined as:
\begin{equation}
\bar{p}=\frac{1}{M}\sum_{m=1}^{M} \prod_{i \in \Delta_{m}} {{p}_{i,m}}^{\delta_{im }},
\label{eq:correct_prob}
\end{equation}
where $\delta_{im}$ denotes the Kronecker delta, with $\delta_{im}=1$ if the observed result of the $m$-th match is $i,
\ i \in \Delta_{m}$. This quantity serves as a global measure of performance for comparing the predictive accuracy between the
posterior match probabilities provided by the model and those obtained from the bookmakers' odds.
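The computation of the three-way probabilities and of $\bar{p}$ can be sketched as below. The Skellam (Poisson-difference) probabilities are obtained here by summing over two independent Poisson variables; the rates and observed outcomes in the usage line are illustrative, not values from the paper.

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def three_way_probs(gamma1, gamma2, max_goals=25):
    """P(Win), P(Draw), P(Loss) for independent Y1 ~ Poisson(gamma1),
    Y2 ~ Poisson(gamma2); their difference Y1 - Y2 follows the
    Poisson-difference (Skellam) distribution. The sum is truncated
    at max_goals, which is harmless for football-sized rates."""
    p_win = p_draw = p_loss = 0.0
    for y1 in range(max_goals + 1):
        p1 = poisson_pmf(y1, gamma1)
        for y2 in range(max_goals + 1):
            p = p1 * poisson_pmf(y2, gamma2)
            if y1 > y2:
                p_win += p
            elif y1 == y2:
                p_draw += p
            else:
                p_loss += p
    return {"Win": p_win, "Draw": p_draw, "Loss": p_loss}

def average_correct_prob(rates, observed):
    """p_bar: the mean, over matches, of the model probability assigned
    to the outcome actually observed (the Kronecker-delta product in
    the formula selects exactly that probability for each match)."""
    return sum(three_way_probs(g1, g2)[obs]
               for (g1, g2), obs in zip(rates, observed)) / len(rates)

# Two hypothetical matches with illustrative scoring rates and outcomes
p_bar = average_correct_prob([(1.6, 1.1), (1.2, 1.2)], ["Win", "Draw"])
```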
\begin{table}
\caption{\label{tab03} Average correct probabilities $\bar{p}$ of three-way bets, obtained through our model, Shin probabilities and basic probabilities (here we take the average of the seven bookmakers considered). Greater values indicate better predictive accuracy.}
\centering
\bgroup
\def0.7{0.7}
\begin{tabular}{llccc}
& & Model & Shin & Basic \\
\hline
\includegraphics[scale=0.05]{deutche.png} & Bundesliga & 0.4010 & 0.4100 &0.4072 \\
\includegraphics[scale=0.05]{inglese.png} & Premier League & 0.4349 & 0.4516 & 0.4480 \\
\includegraphics[scale=0.05]{spagnola.png}&La Liga & 0.4553 & 0.4584 & 0.4549 \\
\includegraphics[scale=0.05]{italiana.png}&Serie A & 0.4430 & 0.4554 &0.4507\\
\hline
\end{tabular}
\egroup
\end{table}
As reported in Table~\ref{tab03}, our model is very close to the bookmakers' probabilities (Shin's method and the basic procedure). At first
glance, one may be tempted to say that, according to this measure, our model does not improve on the bookmakers' probabilities. However, this index is only an average measure
of predictive power, which does not take into account the possible profits on single matches.
\begin{figure}
\subfloat[Bundesliga]
{\includegraphics[scale=0.45]{Profits_Bundesliga.pdf}}~
\subfloat[Liga]
{\includegraphics[scale=0.45]{Profits_Liga.pdf}}\\
\subfloat[Premier League]
{\includegraphics[scale=0.45]{Profits_PL.pdf}}~
\subfloat[Serie A]
{\includegraphics[scale=0.45]{Profits_SerieA.pdf}}
\caption{\label{fig_profits} Expected profits (\%/100) $\pm$ standard errors for the seven bookmakers considered, for each of the top four European leagues. }
\end{figure}
According to the second definition, `beating the bookmaker' means earning money by betting according to our model's probabilities. One
could bet one unit on the three-way match outcome with the highest expected return (Strategy A) or place different
amounts, basing each bet on the match's profit variability, as suggested in \cite{rue2000prediction} (Strategy B). The
expected profits (percentages divided by 100) are reported in Figure~\ref{fig_profits}, along with their standard errors. Although not explicitly
shown here, gambling according to the bookmakers' implied probabilities would always incur a sure loss. Conversely, betting with
our posterior model probabilities yields high positive returns for each league and each bookmaker.
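Strategy A can be sketched as a simple expected-value rule. This is a minimal illustration assuming decimal odds and made-up probabilities; the variability-based stake sizing of Strategy B is omitted.

```python
def best_value_bet(model_probs, odds):
    """Strategy A sketch: compute the expected unit-stake return
    p * odds - 1 for each three-way outcome under the model's posterior
    probabilities, and back the outcome with the largest expected return,
    betting only when that return is positive ('value' bet)."""
    returns = {k: model_probs[k] * odds[k] - 1.0 for k in model_probs}
    pick = max(returns, key=returns.get)
    return (pick, returns[pick]) if returns[pick] > 0 else (None, 0.0)

# Illustrative numbers (not taken from the paper):
pick, exp_ret = best_value_bet(
    {"Win": 0.50, "Draw": 0.25, "Loss": 0.25},   # model posterior probabilities
    {"Win": 2.10, "Draw": 3.40, "Loss": 3.60},   # decimal bookmaker odds
)
# Here the home win offers 0.50 * 2.10 - 1 = 0.05 expected return per unit.
```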
\section{Discussion and further work}
\label{sec:concl}
We have proposed a new hierarchical Bayesian Poisson model in which the rates are convex combinations of parameters
accounting for two different sources of data: the bookmakers' betting odds and the historical match results. We transformed
the inverse betting odds into probabilities and we worked out the bookmakers' scoring rates through the Skellam distribution. A wide graphical and numerical analysis for the top four European leagues
has shown good predictive accuracy for our model, and surprising results in terms of expected profits. These results
confirm, on one hand, that the information contained in the betting odds is relevant for football prediction; on
the other hand, that combining this information with historical data allows for a natural extension of the existing models for football scores.
\section{Introduction}
In recent years, age progression has received considerable interest from the computer vision community.
Starting from early approaches that required considerable time and professional skill, with support from forensic artists, several breakthroughs have been achieved. Numerous automatic age progression approaches, from anthropometric theories to deep learning models, have been proposed.
In general, the age progression methods can be technically classified into four categories, i.e. modeling, reconstruction, prototyping and deep learning based methods.
The methods in the first three categories usually simulate the aging process of facial features by (1) adopting prior knowledge from anthropometric studies; or (2) representing the face geometry and appearance by a set of parameters via conventional models, such as Active Appearance Models (AAMs) and 3D Morphable Models (3DMM), and manipulating these parameters via learned aging functions. Although they have achieved some inspiring synthesis results, these face representations are still linear and face many limitations in modeling the non-linear aging process.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=8.5cm]{Fig_Aging_Video.jpg}\end{center}
\caption{Given input images, Age Progression task is to predict the future faces of that subject many years later.}
\label{fig:Aging_Example_Trump}
\end{figure}
Meanwhile, the fourth category introduces modern approaches with the state-of-the-art \textbf{\textit{Deep Generative Models}} (DGM) for both face modeling and aging embedding process. Since deep learning structures have more capabilities of interpreting and transferring the highly non-linear features of the input signals, they are more suitable for modeling the human aging process. As a result, superior synthesized facial images \cite{Duong_2016_CVPR,Duong_2017_ICCV}, \cite{duong2017learning,Zhang_2017_CVPR,wang2016recurrent} can be generated.
Inspired by these state-of-the-art results, in this paper, we aim to provide a review of recent developments for face age progression.
Both \textit{structures and formulations} of several Deep Generative Models, i.e. Restricted Boltzmann Machines (RBM), Deep Boltzmann Machines (DBM), and Generative Adversarial Network (GAN), as well as \textit{the way they are adopted to age progression problem} will be presented.
Moreover, several common face aging databases are also reviewed.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=8.5cm]{Fig_database_all.png}\end{center}
\caption{Normalized Age Distribution of Face Aging Databases}
\label{fig:FG_DB_SUM}
\end{figure}
\section{Face Aging Databases} \label{sec:Database}
Database collection for face aging is also a challenging problem.
There are several requirements during the collecting process. Not only should each subject have images at different ages, but also the covered age range should be large.
As a result, existing face aging databases are still limited, both in the availability of age labels and in the number of databases.
The characteristics and age distributions of several current existing face aging databases are summarized in Table \ref{tab:AgingDatabaseProperties} and Fig. \ref{fig:FG_DB_SUM}.
\begin{table*}[!t] \label{tab:ConventionalAP}
\small
\centering
\caption{A summary of Conventional Approaches for Age Progression.}
\label{tb:ConventionalAPApproaches}
\begin{tabular}{|l|c|c|c|l|}
\hline
\textbf{Method}
& \textbf{Approach}
& \textbf{Representation}
& \textbf{Architecture}
& \textbf{Summary} \\
\hline \hline
\begin{tabular}[l]{@{}l@{}} Lanitis et al. \\2002 \cite{lanitis2002toward} \end{tabular}& Model based & AAMs&\xmark & \begin{tabular}[l]{@{}l@{}} Model generic and specific aging processes\\ four types of aging functions \end{tabular}\\
\hline
\begin{tabular}[l]{@{}l@{}}Patterson et al. \\ 2006\cite{patterson2006automatic} \end{tabular}& Model based &AAMs & \xmark & \begin{tabular}[l]{@{}l@{}} Learning effects of morphological changes \\ More effort on the adult aging stage\end{tabular}\\
\hline
\begin{tabular}[l]{@{}l@{}}Luu et al. \\2009\cite{luu2009Automatic} \end{tabular}&Model based &AAMs &\xmark & Incorporated familial facial cues\\
\hline
\begin{tabular}[l]{@{}l@{}}Geng et al. \\2007 \cite{geng2007automatic} \end{tabular}&Model based & Aging Patterns & Aging Pattern Subspace & Grammatical face model \\
\hline
\begin{tabular}[l]{@{}l@{}}Tsai et al. \\2014 \cite{tsai2014human} \end{tabular}&Model based &Aging Patterns & Aging Pattern Subspace & \begin{tabular}[l]{@{}l@{}}Guidance faces according to subject's feature \end{tabular}\\
\hline
\begin{tabular}[l]{@{}l@{}}Suo et al. \\2010 \cite{suo2010compositional} \end{tabular}&Model based & Part-based & And-Or-Graph & Markov Chain on Parse Graphs \\
\hline
\begin{tabular}[l]{@{}l@{}}Suo et al. \\2012 \cite{suo2012concatenational} \end{tabular}& Model based& Part-based & And-Or-Graph & Composition of short-term graph evolution\\
\hline
\begin{tabular}[l]{@{}l@{}} Kemelmacher-\\Shlizerman et al. \\2014 \cite{kemelmacher2014illumination} \end{tabular}& Prototype & Image pixel & \xmark & \begin{tabular}[l]{@{}l@{}} Illumination normalization and subspace \\alignment before transferring difference \\ between prototypes. \end{tabular}\\
\hline
\begin{tabular}[l]{@{}l@{}} Shu et al.\\2015 \cite{Shu_2015_ICCV} \end{tabular}& Reconstructing& Sparse Representation & Coupled dictionaries& \begin{tabular}[l]{@{}l@{}}Dictionary Learning with personality-aware \\coupled reconstruction loss\end{tabular}\\
\hline
\begin{tabular}[l]{@{}l@{}}Yang et al. \\2016 \cite{yang2016face} \end{tabular}& Reconstructing&Sparse Representation & Hidden Factor Analysis & \begin{tabular}[l]{@{}l@{}}Sparse reconstruction with age-specific \\feature \end{tabular}\\
\hline
\end{tabular}
\end{table*}
Beyond these databases, a large-scale in-the-wild dataset, named \textbf{AGing Face in-the-Wild (AGFW)}, was also introduced in our work \cite{Duong_2016_CVPR}, with 18,685 facial images of individuals with ages ranging from 10 to 64.
In this database, images are divided into 11 age groups spanning 5 years each; each group contains 1,700 images on average. This database was later extended to \textbf{AGFW-v2} at double the scale, i.e. 36,325 images with an average of 3,300 images per age group.
\section{Conventional Approaches} \label{sec:ConventionalApproaches}
In this section, we provide a brief review of conventional age progression approaches including modeling, prototyping, and reconstructing based approaches. Their properties are also summarized in Table \ref{tab:ConventionalAP}.
\subsection{Modeling-based approach}
Modeling-based approach is among the earliest categories presented for face age progression.
These methods usually exploit some kinds of appearance models, i.e. Active Appearance Models (AAM), 3D Morphable Models (3DMM), to represent the shapes and texture of the input face by a set of parameters. Then the aging process is simulated by learning some aging functions from the relationship of the parameter sets of different age groups.
In particular, Patterson et al. \cite{patterson2006automatic} and Lanitis et al. \cite{lanitis2002toward} employed a set of Active Appearance Model (AAM) parameters with four aging functions to model both the general and the subject-specific aging processes.
Four variations of aging functions were introduced: Global Aging Function, Appearance Specific Aging Function (ASA), Weighted Appearance Aging Function (WAA), and Weighted Person Specific Aging Function (WSA).
Also by employing AAMs during the modeling step, Luu et al. \cite{luu2009Automatic} later incorporated familial facial cues to the process of face age progression.
Another direction of modeling was proposed in \cite{geng2007automatic} with the definition of an AGing pattErn Subspace (AGES). In this approach, the authors construct a representative subspace for aging patterns as a chronological sequence of face images. Then, given an image, the proper aging pattern is determined by the projection in this subspace that produces the smallest reconstruction error. Finally, the synthesized result at a target age is obtained from the reconstructed face corresponding to that age position in the subspace.
Tsai et al. \cite{tsai2014human} then enhanced the AGES using guidance faces corresponding to the subject's characteristics to produce more stable results.
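The AGES selection-and-synthesis step can be illustrated with a minimal sketch, assuming a hypothetical PCA-style subspace of concatenated aging patterns (the actual method also handles missing ages with an EM-style procedure, omitted here): fit the subspace coefficients using only the observed age slot, then read the reconstruction off the target age slot.

```python
import numpy as np

def progress_face(x, age_now, age_target, mean, B, d):
    """Minimal AGES-style sketch (hypothetical PCA subspace). An aging
    pattern stacks one d-dimensional face vector per age group; the
    subspace coefficients are fitted by least squares using only the
    observed age slot, and the synthesized face is the reconstruction
    at the target age slot."""
    rows = slice(age_now * d, (age_now + 1) * d)
    coef, *_ = np.linalg.lstsq(B[rows], x - mean[rows], rcond=None)
    recon = mean + B @ coef            # full reconstructed aging pattern
    return recon[age_target * d:(age_target + 1) * d]

# Toy usage: a random 2-dimensional subspace over 3 age groups of size 5
rng = np.random.default_rng(0)
d, n_ages, k = 5, 3, 2
mean = rng.normal(size=d * n_ages)
B = rng.normal(size=(d * n_ages, k))
pattern = mean + B @ rng.normal(size=k)     # a pattern lying in the subspace
aged_face = progress_face(pattern[0:d], 0, 2, mean, B, d)
```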
Suo et al. \cite{suo2010compositional, suo2012concatenational} introduced the three-layer And-Or Graph (AOG) of smaller parts, i.e. eyes, nose, mouth, etc., to model a face. Then, the face aging process was learned for each part using a Markov chain.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=7.4cm]{Fig_IAAP_Results.png}\end{center}
\caption{Examples of age-progressed faces obtained by the Illumination-Aware Age Progression approach \cite{kemelmacher2014illumination}.}
\label{fig:IAAP_results}
\end{figure}
\subsection{Prototyping approach}
The main idea of the methods in this category is to predefine some types of \textit{aging prototypes} and transfer the difference between these prototypes to produce synthesized face images. Usually, the aging prototypes are defined as the average faces of all age groups \cite{rowland1995manipulating}. Then, an input face image can be progressed to the target age by incorporating the differences between the prototypes of the two age groups \cite{burt1995perception}. Notice that this approach requires a good alignment between faces in order to produce plausible results.
Kemelmacher-Shlizerman et al. \cite{kemelmacher2014illumination} then proposed to construct high-quality average prototypes from a large-scale set of images. Sharper average faces are obtained via the collection flow method introduced in \cite{liu2011sift}, which aligns and normalizes all the images in one age group. Then illumination normalization and a subspace alignment technique are employed to handle images with various lighting conditions.
Figure \ref{fig:IAAP_results} illustrates the results obtained in \cite{kemelmacher2014illumination}.
\subsection{Reconstructing-based approach}
Rather than constructing aging prototypes for each age group, the reconstructing-based methods focus on constructing the ``aging basis'' for each age group and model aging faces by the combination of these bases. Dictionary learning techniques are usually employed for this type of approach.
Shu et al. \cite{Shu_2015_ICCV} proposed to use the aging coupled dictionaries (CDL) to model personalized aging patterns by preserving personalized facial features. The dictionaries are learned using face pairs from neighboring age groups via a personality-aware coupled reconstruction loss.
Yang et al. \cite{yang2016face} represented person-specific and age-specific factors independently using sparse representation hidden factor analysis (HFA). Since only the age-specific factor gradually changes over time, the age factor is transformed to the target age group via sparse reconstruction and then combined with the identity factor to achieve the aged face.
\section{Deep Generative Models for Face Aging} \label{sec:DGMApproaches}
In this section, we first provide an overview of the structures and formulations of common Deep Generative Models before going through the age progression techniques developed from these structures.
\subsection{From Linear Models to Deep Structures}
Compared to linear models such as AAMs and 3DMM, deep structures have gained significant attention as one of the emerging research topics, both in representing higher-level data features and in learning the distribution of observations. For example, designed following concepts from Probabilistic Graphical Models (PGM), RBM-based models organize their non-linear latent variables in multiple connected layers with an energy function such that each layer can learn a different factor representing the data variations.
This section introduces the structures and formulations of several Deep Generative Models, including RBM, Deep Boltzmann Machines (DBM), and Generative Adversarial Networks (GAN).
\subsubsection{Restricted Boltzmann Machines (RBM)\cite{hinton2002training}} are undirected graphical models consisting of two layers of stochastic units, i.e. visible units
$\mathbf{v}$ and hidden units $\mathbf{h}$. This is a simplified version of Boltzmann Machines where no intra-layer connections between units are allowed. The RBM structure is a bipartite graph where visible and hidden units are pairwise conditionally independent.
Given binary states of $\mathbf{v}$ and $\mathbf{h}$,
the energy of the RBM and the joint distribution of the visible and hidden units can be computed as
\small
\begin{equation} \label{eqn:RBM}
\begin{split}
-E(\mathbf{v,h}) &= \mathbf{v}^T\mathbf{Wh}+\mathbf{b}^T\mathbf{v}+\mathbf{a}^T\mathbf{h}\\
P(\mathbf{v,h};\mathbf{\theta}) &= \frac{1}{Z(\mathbf{\theta})} \exp \{-E(\mathbf{v,h})\}
\end{split}
\normalsize
\end{equation}
where $\mathbf{\theta}=\{\mathbf{W,b,a}\}$ denotes the parameter set of RBM including the connection weights and the biases of visible and hidden units, respectively.
The conditional probabilities of the RBM can be computed as $p(h_j = 1 | \mathbf{v}) = \sigma (\sum_i v_i w_{ij} + a_j)$ and $p(v_i=1|\mathbf{h}) = \sigma(\sum_j h_j w_{ij} + b_i)$, where $\sigma(\cdot)$ is the logistic function.
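For illustration, the conditional probabilities and the energy above can be sketched with NumPy; the weights and biases below are random placeholders rather than learned parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 4
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))  # connection weights
b = np.zeros(n_visible)   # visible biases
a = np.zeros(n_hidden)    # hidden biases

v = rng.integers(0, 2, size=n_visible).astype(float)   # a binary visible state

# p(h_j = 1 | v) and p(v_i = 1 | h): the two conditionals of the bipartite graph
p_h = sigmoid(v @ W + a)
h = (rng.random(n_hidden) < p_h).astype(float)         # sample the hidden units
p_v = sigmoid(W @ h + b)                               # reconstruct the visibles

# -E(v,h) = v^T W h + b^T v + a^T h
neg_energy = v @ W @ h + b @ v + a @ h
```

A single such "up-down" pass is the building block of contrastive-divergence training.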
In the original RBM, both visible and hidden units are binary.
To make it more powerful and able to deal with real-valued data, an extension of RBM, named the \textit{Gaussian Restricted Boltzmann Machine}, is introduced in \cite{krizhevsky2009learning}.
In a Gaussian RBM, the visible units are assumed to take values in $(-\infty, \infty)$ and to be normally distributed with mean $b_i$ and variance $\sigma_i^2$. Another extension of RBM is the Temporal Restricted Boltzmann Machine (TRBM) \cite{sutskever2007learning}, which was designed to model complex time-series structure. The structure of TRBM is shown in Fig. \ref{fig:RBM_TRBM_DBM} (b). The major difference between the original RBM and TRBM is the directed connections from both visible and hidden units of previous states to the current states.
With these new connections, the short history of their activations can act as ``memory'' and is able to contribute to the inference step of current states of visible units.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=8.5cm]{Fig_RBM_TRBM.png}\end{center}
\caption{Structures of (a) RBM, (b) TRBM and (c) DBM.}
\label{fig:RBM_TRBM_DBM}
\end{figure}
\subsubsection{Deep Boltzmann Machines (DBM)}
As an extension of RBM with more than one hidden layer, the structure of DBM contains several RBMs organized in layers. Thanks to this structure, the hidden units in a higher layer can learn more complicated correlations of the features captured in the lower layer. Another interesting point of DBM is that these higher representations can be built from the training data in an unsupervised fashion. Unlike other models such as the Deep Belief Network \cite{hinton2006reducing} or Deep Autoencoders \cite{bengio2009learning}, all connections between units in two consecutive layers are undirected. As a result, each unit receives both bottom-up and top-down information and, therefore, can better propagate uncertainty during the inference process.
Let $\{\mathbf{h}^{(1)},\mathbf{h}^{(2)}\}$ be the sets of units in the two hidden layers; the energy of the state $\{\mathbf{v}, \mathbf{h}^{(1)},\mathbf{h}^{(2)}\}$ is given as follows.
\small
\begin{equation} \label{eqnEnergyDBM}
\begin{split}
-E(\mathbf{v}, \mathbf{h}^{(1)},\mathbf{h}^{(2)};\boldsymbol{\theta}) = &\mathbf{v}^\top\mathbf{W}^{(1)}\mathbf{h}^{(1)}
+ \mathbf{h}^{(1)\top}\mathbf{W}^{(2)}\mathbf{h}^{(2)}
\end{split}
\end{equation}
\normalsize
where $\boldsymbol{\theta} = \{\mathbf{W}^{(1)},\mathbf{W}^{(2)}\}$ are the weights of visible-to-hidden and hidden-to-hidden connections. Notice that the bias terms for visible and hidden units are omitted in Eqn. (\ref{eqnEnergyDBM}) to simplify the presentation.
Exploiting the advantages of DBM, Deep Appearance Models (DAM) \cite{duong2015beyond} and Robust Deep Appearance Models (RDAM) \cite{quach2016robust} have been introduced and proven to be superior to classical models such as AAMs in inferring a representation for new face images under various challenging conditions.
\subsubsection{Generative Adversarial Networks (GAN)} \label{sec:GANStructure}
In order to avoid the intractable Markov chain sampling during the training stage of RBM, Goodfellow et al. \cite{goodfellow2014generative} borrowed the idea of an adversarial system to design their Generative Adversarial Networks (GAN). The intuition behind this approach is to set up a game between a \textit{generator} and a \textit{discriminator}.
On one hand, the discriminator learns to determine whether given data come from the generator or are real samples. On the other hand, the generator learns how to fool the discriminator with its generated samples. This game continues as the learning process takes place, and stops at the point where the discriminator can no longer distinguish between real data and the samples produced by the generator. This is also an indication that the generator has learned the distribution of the input data.
Formally, let $\mathbf{x}$ be the input data, $p_g$ be the distribution learned from generator, and $p_z(\mathbf{z})$ be the prior distribution of variable $\mathbf{z}$. Then GAN is defined by two neural networks representing two differentiable functions for the generator $G(\mathbf{z}, \theta_g): \mathbf{z} \mapsto \mathbf{x}$ and discriminator $D(\mathbf{x}, \theta_d): \mathbf{x} \mapsto y$
where $y$ denotes the probability that $\mathbf{x}$ comes from the data distribution rather than $p_g$; $\theta_g$ and $\theta_d$ are the parameters of the CNNs representing $G$ and $D$, respectively. The training process is then formulated as maximizing the probability $D(\mathbf{x})$ while minimizing $\log \left(1 - D(G(\mathbf{z}))\right)$:
\small
\begin{equation} \label{eqn:GANformulation}
\begin{split}
\min_G \max_D V(D,G) = & \mathbb{E}_{\mathbf{x}\sim p_{data}(\mathbf{x})}\left[\log D(\mathbf{x})\right] \\
& + \mathbb{E}_{\mathbf{z}\sim p_{\mathbf{z}}(\mathbf{z})}\left[\log \left( 1 - D(G(\mathbf{z}))\right)\right]
\end{split}
\end{equation}
\normalsize
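The value function $V(D,G)$ above can be estimated by Monte-Carlo sampling. In the sketch below, the generator, discriminator, and data distribution are toy stand-ins chosen only to make the two expectation terms concrete; a real GAN would parameterize $G$ and $D$ with neural networks and alternate gradient updates.

```python
import numpy as np

rng = np.random.default_rng(1)

def D(x):
    # toy discriminator: probability that x comes from the data distribution
    return 1.0 / (1.0 + np.exp(-2.0 * x))

def G(z):
    # toy generator: maps prior samples z to the data space
    return 0.5 * z - 1.0

x_real = rng.normal(loc=1.0, size=10000)   # samples from p_data
z = rng.normal(size=10000)                 # samples from the prior p_z

# Monte-Carlo estimate of V(D,G) = E[log D(x)] + E[log(1 - D(G(z)))]
value = np.mean(np.log(D(x_real))) + np.mean(np.log(1.0 - D(G(z))))
```

The discriminator step ascends this value while the generator step descends it, which is the minimax game of the equation above.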
In the original GAN, the use of a fully connected neural network for the generator makes it very hard to generate high-resolution face images.
Numerous extensions of GAN focusing on different aspects of this structure have since been proposed in the literature, such as Laplacian Pyramid Generative Adversarial Networks (LAPGAN) \cite{denton2015deep}, Deep Convolutional Generative Adversarial Networks (DCGAN) \cite{radford2015unsupervised}, Info-GAN \cite{chen2016infogan}, and Wasserstein GAN \cite{arjovsky2017wasserstein}.
\begin{table*}[!t]
\small
\centering
\caption{Properties of Deep Generative Model Approaches for Age Progression. Deep Learning (DL), Log-Likelihood (LL), Inverse Reinforcement Learning (IRL), Probabilistic Graphical Models (PGM), Adversarial (ADV)}
\label{tb:ComputerAPApproaches}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\hline
\textbf{Method}
& \textbf{Approach}
& \textbf{Architecture} & \begin{tabular}[l]{@{}l@{}} \textbf{Loss}\\ \textbf{Function} \end{tabular}& \begin{tabular}[l]{@{}l@{}} \textbf{Non}\\\textbf{-Linearity} \end{tabular}& \begin{tabular}[l]{@{}l@{}} \textbf{Tractable} \\\textbf{Model} \end{tabular}
& \begin{tabular}[l]{@{}l@{}}\textbf{Subject}\\\textbf{Dependent}\end{tabular}
& \begin{tabular}[l]{@{}l@{}}\textbf{Multiple} \\\textbf{Input}\\\textbf{support}\end{tabular}
\\
\hline \hline
\begin{tabular}[l]{@{}l@{}}Wang et al. 2016 \cite{wang2016recurrent} \end{tabular}& DL
& RNN & $\ell_2$ & \cmark & \cmark & \xmark & \xmark\\
\hline
\begin{tabular}[l]{@{}l@{}}Zhang et al. 2017 \cite{Zhang_2017_CVPR} \end{tabular}& DL
& GAN & ADV + $\ell_2$ & \cmark & \cmark & \xmark & \xmark\\
\hline
\begin{tabular}[l]{@{}l@{}}Antipov et al. 2017 \cite{antipov2017face} \end{tabular}& DL
& GAN & ADV + $\ell_2$ & \cmark & \cmark & \xmark & \xmark\\
\hline
\begin{tabular}[l]{@{}l@{}}Li et al. 2018 \cite{li2018global} \end{tabular}& DL
& GAN & \begin{tabular}[l]{@{}l@{}}ADV + $\ell_2$ + ID + Age \end{tabular}& \cmark & \cmark & \xmark & \xmark\\
\hline \hline
\begin{tabular}[l]{@{}l@{}}\textbf{Ours, 2016 \cite{Duong_2016_CVPR}} \end{tabular}& \textbf{DL}
& \textbf{TRBM} & \textbf{LL} & \cmark & \xmark & \xmark & \xmark\\
\hline
\begin{tabular}[l]{@{}l@{}}\textbf{Ours, 2017 \cite{Duong_2017_ICCV}} \end{tabular}& \textbf{DL}
& \begin{tabular}[l]{@{}l@{}}\textbf{PGM + CNN }\end{tabular}& \begin{tabular}[l]{@{}l@{}}\textbf{LL} \end{tabular}& \cmark & \cmark & \xmark & \xmark\\
\hline
\begin{tabular}[l]{@{}l@{}}\textbf{Ours, 2017 \cite{duong2017learning}} \end{tabular}& \textbf{DL + IRL}
& \begin{tabular}[l]{@{}l@{}}\textbf{PGM + CNN }\end{tabular}& \begin{tabular}[l]{@{}l@{}}\textbf{LL} \end{tabular}& \cmark & \cmark & \cmark & \cmark\\
\hline
\end{tabular}
\end{table*}
\begin{figure*}[!t]
\begin{center}
\includegraphics[width=17cm]{Fig_TRBM_AP.png}\end{center}
\caption{Architectures of (A) the TRBM-based model \cite{Duong_2016_CVPR} with post-processing steps, and (B) GAN-based Age Progression model \cite{Zhang_2017_CVPR}.}
\label{fig:TRBM_GAN}
\end{figure*}
\subsection{Deep Aging Models for Age Progression}
Thanks to the power of Deep Learning models in modeling non-linear variations, many deep learning based age progression approaches have been developed recently and achieved promising results in face age progression.
Table \ref{tb:ComputerAPApproaches} summarizes the key features of these deep learning based approaches.
\noindent
\paragraph{\textbf{TRBM-based model}}
In addition to single face modeling, a TRBM-based age progression model is introduced in \cite{Duong_2016_CVPR} to embed the temporal relationship between images in a face sequence.
By taking advantage of the log-likelihood objective function and avoiding the $\ell_2$ reconstruction error during training, the model is able to efficiently capture the non-linear aging process and automatically synthesize a series of age-progressed faces in various age ranges with more aging details. This approach presented a carefully designed architecture combining both RBM and TRBM for age variation modeling and age transformation embedding.
Fig. \ref{fig:TRBM_GAN}(A) illustrates the aging architecture with TRBM proposed in \cite{Duong_2016_CVPR}.
In this approach, the long-term aging development is considered as a composition of short-term changes and can be represented as a sequence of faces of the subject in different age groups. After the decomposition, a set of RBMs is employed to model the age variation of each age group as well as the wrinkles present in the faces of older ages. Then the TRBM-based model is constructed to embed the aging transformation between faces of consecutive age groups.
In particular, keeping a similar form of the energy function as the original TRBM and RBM, the bias terms are defined as
\small
\begin{equation}
\begin{split}
b_i^t =& b_i + B_i \mathbf{v}^{t-1} + \sum_l P_{li}\mathbf{s}_l^{\leq t}\\
a_j^t =& a_j + A_j \mathbf{v}^{t-1} + \sum_l Q_{lj}\mathbf{s}_l^{\leq t}
\end{split}
\end{equation}
\normalsize
where $\{\mathbf{A,B,P,Q}\}$ are the model parameters, and $\mathbf{s}^{\leq t} = \{\mathbf{s}^t, \mathbf{s}^{t-1}\}$ denotes the reference faces produced by the set of learned RBMs. With this structure, both linear and non-linear interactions between faces are efficiently exploited.
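One possible reading of the dynamic biases above can be sketched as follows; the dimensions, the concatenation of the two reference faces into a single vector, and the random parameters are illustrative assumptions, not the learned TRBM parameters.

```python
import numpy as np

rng = np.random.default_rng(5)
nv, nh, ns = 6, 4, 8          # visible, hidden, and reference-feature dimensions
B = rng.normal(scale=0.1, size=(nv, nv))
A = rng.normal(scale=0.1, size=(nh, nv))
P = rng.normal(scale=0.1, size=(ns, nv))
Q = rng.normal(scale=0.1, size=(ns, nh))
b, a = np.zeros(nv), np.zeros(nh)

v_prev = rng.random(nv)       # visible state v^{t-1}
s = rng.random(ns)            # concatenated reference faces s^{<=t} = {s^t, s^{t-1}}

# dynamic biases b^t and a^t conditioned on the previous face and reference faces
b_t = b + B @ v_prev + P.T @ s
a_t = a + A @ v_prev + Q.T @ s
```

These time-dependent biases replace the static $b$ and $a$ of the RBM energy, which is how the history enters the inference of the current state.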
Finally, wrinkle enhancement together with geometry constraints is incorporated in post-processing steps for more consistent results, so that plausible synthesized faces can be achieved with this technique.
A comparison in terms of synthesis quality between this model and other conventional approaches is shown in Fig. \ref{fig:CompareTRBM_IAAP}.
\begin{figure*}[!t]
\begin{center}
\includegraphics[width=17cm]{Fig_RNN_TNVP_SDAP.png}\end{center}
\caption{Structures of (A) RNN-based model \cite{wang2016recurrent}, (B) Temporal Non-Volume Preserving (TNVP) approach \cite{Duong_2017_ICCV}, and (C) Subject-dependent Deep Aging Path (SDAP) \cite{duong2017learning}. While SDAP shares the Mapping function with TNVP, it aims at embedding the aging transformation of the whole aging sequence.}
\label{fig:RNN_TNVP_SDAP}
\end{figure*}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=7.5cm]{Fig_CompareTRBM_IAAP.jpg}\end{center}
\caption{A comparison between the age-progressed sequences generated by the TRBM-based model \cite{Duong_2016_CVPR} and the IAAP approach \cite{kemelmacher2014illumination}.}
\label{fig:CompareTRBM_IAAP}
\end{figure}
\paragraph{\textbf{Recurrent Neural Network-based model}}
Approaching age progression in a similar decomposition manner, but instead of using TRBM, Wang et al. \cite{wang2016recurrent} proposed to use a Recurrent Neural Network with two-layer gated recurrent units (GRU) to model the aging sequence. With the recurrent connections between the hidden units, the model can efficiently exploit the information from previous faces as \textit{``memory''} to produce smoother transitions between faces during the synthesis process.
Fig. \ref{fig:RNN_TNVP_SDAP}(A) illustrates the architecture of the proposed RNN for age progression. In particular,
let $\mathbf{x}_t$ be the input face at a young age; this network first encodes it into a latent representation $\mathbf{h}_t$ (hidden/memory units) by the bottom GRU and then decodes this representation into an older face $\mathbf{\hat{h}}_t$ of the subject using the top GRU. The relationship between $\mathbf{x}_t$ and $\mathbf{h}_t$ can be interpreted as follows.
\small
\begin{equation}
\begin{split}
\mathbf{z}_t &= \sigma (\mathbf{W}_{zh} \mathbf{h}_{t-1} + \mathbf{W}_{zx}\mathbf{x}_{t} + \mathbf{b}_z)\\
\mathbf{r}_t &= \sigma (\mathbf{W}_{rh} \mathbf{h}_{t-1} + \mathbf{W}_{rx}\mathbf{x}_{t} + \mathbf{b}_r)\\
\mathbf{c}_t &= \tanh (\mathbf{W}_{ch} \mathbf{r}_{t} \odot \mathbf{h}_{t-1} + \mathbf{W}_{cx}\mathbf{x}_{t} + \mathbf{b}_c)\\
\mathbf{h}_t &= (1-\mathbf{z}_t) \odot \mathbf{h}_{t-1} + \mathbf{z}_t \odot \mathbf{c}_t\\
\end{split}
\end{equation}
\normalsize
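The gate equations above correspond to a standard GRU cell and can be sketched directly in NumPy; the dimensions and random weights below are placeholders for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, p):
    # update gate z, reset gate r, candidate state c, and new hidden state h
    z = sigmoid(p["Wzh"] @ h_prev + p["Wzx"] @ x_t + p["bz"])
    r = sigmoid(p["Wrh"] @ h_prev + p["Wrx"] @ x_t + p["br"])
    c = np.tanh(p["Wch"] @ (r * h_prev) + p["Wcx"] @ x_t + p["bc"])
    return (1.0 - z) * h_prev + z * c

rng = np.random.default_rng(2)
dx, dh = 8, 5
p = {"Wzh": rng.normal(size=(dh, dh)), "Wzx": rng.normal(size=(dh, dx)), "bz": np.zeros(dh),
     "Wrh": rng.normal(size=(dh, dh)), "Wrx": rng.normal(size=(dh, dx)), "br": np.zeros(dh),
     "Wch": rng.normal(size=(dh, dh)), "Wcx": rng.normal(size=(dh, dx)), "bc": np.zeros(dh)}

h = np.zeros(dh)
for _ in range(3):                 # run a short face sequence through the cell
    h = gru_step(rng.normal(size=dx), h, p)
```

Because $\mathbf{h}_t$ is a convex combination of $\mathbf{h}_{t-1}$ and a $\tanh$ candidate, the hidden state stays bounded, which is what lets the cell carry a stable "memory" across the sequence.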
Similar formulations are also employed for the relationship between $\mathbf{h}_t$ and $\mathbf{\hat{h}}_t$. Then the difference between $\mathbf{\hat{h}}_t$ and the ground-truth aged face is computed in the form of an $\ell_2$ loss function, and the system is trained to obtain the synthesis capability. Finally, in order to generate the wrinkles for the aged faces, a prototyping-style approach is adopted for wrinkle transferring. Although this approach has produced some improvements compared to classical approaches, the use of a fixed reconstruction loss function has limited its synthesis ability and usually results in blurry faces.
\paragraph{\textbf{GAN-based model}}
Rather than \textit{step-by-step synthesis} as in previous approaches, Antipov et al. \cite{antipov2017face}, Zhang et al. \cite{Zhang_2017_CVPR}, and Li et al. \cite{li2018global} turned to another direction of age progression, i.e. the \textit{direct approach}, and adopted the structure of GAN in their architectures. Fig. \ref{fig:TRBM_GAN}(B) illustrates the structure of the Conditional Adversarial Autoencoder (CAAE) \cite{Zhang_2017_CVPR}. From this figure, one can easily see that the authors have adopted the GAN structure as presented in Section \ref{sec:GANStructure} with an additional age label feature in the representation of the latent variables. In this way, they can further encode the relationship between the identity-related high-level features of the input face and its age label. After training, by simply changing the age label according to the target age, the deep-neural-network generator is able to synthesize the aged face at that age. Compared to Eqn. \eqref{eqn:GANformulation}, the new objective function is adapted as
\small
\begin{equation} \nonumber
\begin{split}
\min_{E,G} & \max_{D_z,D_{img}} \lambda \mathcal{L}(\mathbf{x}, G(E(\mathbf{x}),\mathbf{l})) + \gamma TV(G(E(\mathbf{x}),\mathbf{l}))\\
& + \mathbb{E}_{\mathbf{z}^*\sim p(\mathbf{z})}\left[\log D_z(\mathbf{z}^*)\right] + \mathbb{E}_{\mathbf{x} \sim p_{data}(\mathbf{x})} \left[\log(1-D_z(E(\mathbf{x})))\right]\\
& + \mathbb{E}_{\mathbf{x,l} \sim p_{data}(\mathbf{x,l})}\left[\log D_{img}(\mathbf{x,l})\right] \\
& + \mathbb{E}_{\mathbf{x,l} \sim p_{data}(\mathbf{x,l})}\left[\log(1-D_{img}(G(E(\mathbf{x}),\mathbf{l})))\right]\\
\end{split}
\end{equation}
\normalsize
where $\mathbf{l}$ denotes the vector representing the age label; $\mathbf{z}$ is the latent feature vector; $E$ is the encoder function, i.e. $E(\mathbf{x}) = \mathbf{z}$; $\mathcal{L}(\cdot, \cdot)$ and $TV(\cdot)$ are the $\ell_2$ norm and total variation functions, respectively; and $p_{data}(\mathbf{x})$ denotes the distribution of the training data. As one can see, the conditional constraint on the age label is represented in the last two terms of the loss function.
Although this type of model avoids the requirement of a longitudinal age database during training, it is not easy to converge, since maintaining a good balance between the generator and discriminator is hard to achieve. Moreover, similar to the RNN-based approach, GAN-based models also incorporate the $\ell_2$-norm in their objective functions; therefore, their synthesized results are limited in terms of image sharpness.
\begin{figure*}[!t]
\begin{center}
\includegraphics[width=14cm]{Fig_Comparison_all.png}\end{center}
\caption{A comparison between Deep Learning Approaches, i.e. TNVP \cite{Duong_2017_ICCV}, TRBM \cite{Duong_2016_CVPR}, against conventional approaches including IAAP \cite{kemelmacher2014illumination}, Exemplar based (EAP) \cite{shen2011exemplar}, and Craniofacial Growth (CGAP) \cite{ramanathan2006modeling} models.}
\label{fig:CompareTNVP_IAAP}
\end{figure*}
\paragraph{\textbf{Temporal Non-Volume Preserving transformation}}
Recently, addressing the intractable learning process of the TRBM-based model as well as the image quality of the RNN-based and GAN-based approaches,
the Temporal Non-Volume Preserving (TNVP) approach was introduced in \cite{Duong_2017_ICCV} for embedding the feature transformations between faces in consecutive stages while keeping a tractable density function with exact inference and evaluation.
Unlike previous approaches which incorporate only PGM or CNN structures, this model enjoys the advantages of both architectures to improve its image synthesis quality and highly non-linear feature generation.
The idea of this model starts from a PGM with relationships between variables in the image and latent domains (see Fig. \ref{fig:RNN_TNVP_SDAP}(B)) given by
\small
\begin{equation}
\begin{split}\label{eqn:ModelFormulation}
\mathbf{z}^{t-1}& = \mathcal{F}_1 (\mathbf{x}^{t-1}; \theta_1)\\
\mathbf{z}^{t} &= \mathcal{H}(\mathbf{z}^{t-1},\mathbf{x}^t; \theta_2, \theta_3) \\& = \mathcal{G}(\mathbf{z}^{t-1};\theta_3) + \mathcal{F}_2(\mathbf{x}^t;\theta_2)
\end{split}
\end{equation}
\normalsize
where $\mathcal{F}_1, \mathcal{F}_2$ denote the bijection functions mapping $\mathbf{x}^{t-1}$ and $\mathbf{x}^{t}$ to their latent variables $\mathbf{z}^{t-1}, \mathbf{z}^{t}$, respectively. $\mathcal{G}$ is the function embedding the aging transformation between latent variables.
Then the probability density function is derived as
\small
\begin{equation}
\begin{split} \label{eqn:likelihood}
p_{X^t}(\mathbf{x}^t|\mathbf{x}^{t-1};\theta)&=p_{X^t}(\mathbf{x}^t|\mathbf{z}^{t-1};\theta)\\
&=p_{Z^t}(\mathbf{z}^t|\mathbf{z}^{t-1};\theta)\left|\frac{\partial \mathcal{H}(\mathbf{z}^{t-1}, \mathbf{x}^t;\theta)}{\partial \mathbf{x}^t}\right|\\
&=p_{Z^t}(\mathbf{z}^t|\mathbf{z}^{t-1};\theta)\left|\frac{\partial \mathcal{F}_2(\mathbf{x}^t;\theta)}{\partial \mathbf{x}^t}\right|
\end{split}
\end{equation}
\normalsize
where $p_{X^t}(\mathbf{x}^t|\mathbf{x}^{t-1};\theta)$ and $p_{Z^t}(\mathbf{z}^t|\mathbf{z}^{t-1};\theta)$ denote the conditional distribution of $\mathbf{x}^t$ and $\mathbf{z}^t$, respectively.
By a specific design of the mapping functions $\mathcal{F}_1, \mathcal{F}_2$, the two terms on the right-hand side of Eqn. \eqref{eqn:likelihood} can be computed exactly and effectively.
As a result, the authors can form a deep CNN network optimized under the concepts of PGM. While keeping the tractable log-likelihood density estimation in its objective function, the model turns age progression architectures into a new direction where the CNN network can avoid using a fixed reconstruction loss function and obtain high-quality synthesized faces. Fig. \ref{fig:CompareTNVP_IAAP} illustrates the synthesized results achieved by TNVP in comparison with other approaches.
\paragraph{\textbf{Subject-dependent Deep Aging Path (SDAP) model}}
Inspired by the advantages of TNVP, Inverse Reinforcement Learning (IRL) is also taken into account in the structure of the Subject-dependent Deep Aging Path (SDAP) model \cite{duong2017learning}.
Under the hypothesis that each subject should have his/her own facial development, Duong et al. \cite{duong2017learning} proposed to use an additional aging controller in the structure of TNVP. Rather than only embedding the pairwise aging transformation between consecutive age groups, the SDAP structure learns from the aging transformation of the whole face sequence for better long-term aging synthesis. This goal is achieved via a Subject-Dependent Aging Policy Network which provides an appropriate planned aging path for the age controller corresponding to the subject's features. The most interesting point of SDAP is that it is one of the pioneering works incorporating the IRL framework into the age progression task.
In this approach, let $\zeta_i = \{\mathbf{x}^1_i, \mathbf{a}^1_i, \ldots, \mathbf{x}^T_i\}$ be the age sequence of the $i$-th subject, where $\{\mathbf{x}^1_i, \ldots, \mathbf{x}^T_i\}$ is the face sequence representing the facial development of the $i$-th subject and $\mathbf{a}^j_i$ denotes the variable controlling the aging amount added to $\mathbf{x}^j_i$ to produce $\mathbf{x}^{j+1}_i$. The probability of $\zeta_i$ can be formulated via an energy function $E_\Gamma(\zeta_i)$ as
\small
\begin{equation} \label{eqn:ProbabilitySDAP}
P(\zeta_i) = \frac{1}{Z} \exp(-E_\Gamma(\zeta_i))
\end{equation}
\normalsize
where $Z$ is the partition function. Notice that the formulation of Eqn. \eqref{eqn:ProbabilitySDAP} is very similar to the joint distribution between the variables of an RBM as in Eqn. \eqref{eqn:RBM}.
Then the goal is to learn a Subject-Dependent Aging Policy Network that can predict $\mathbf{a}^j_i$ for each $\mathbf{x}^j_i$ during the synthesis process. The objective function is defined as
\small
\begin{equation}
\label{eqn::Log_likelihood_sequence}
\begin{split}
\Gamma^* &= \arg \max_\Gamma \mathcal{L} (\zeta;\Gamma)=\frac{1}{M} \log \prod_{\zeta_i \in \zeta} P(\zeta_i)
\end{split}
\end{equation}
\normalsize
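The trajectory distribution of Eqn. \eqref{eqn:ProbabilitySDAP} and the log-likelihood objective above can be made concrete on a toy discrete set of trajectories; the quadratic energy function below is a placeholder, not the learned $E_\Gamma$.

```python
import numpy as np

rng = np.random.default_rng(4)

def energy(traj):
    # placeholder energy: penalizes large aging-control steps in a trajectory
    return float(np.sum(traj ** 2))

# a small discrete set of candidate aging trajectories (one per row)
trajs = rng.normal(size=(5, 3))
E = np.array([energy(t) for t in trajs])

# P(zeta_i) = exp(-E(zeta_i)) / Z, with Z summed over the candidate set
Z = np.sum(np.exp(-E))
P = np.exp(-E) / Z

# average log-likelihood of the set, as in the maximum-likelihood objective
avg_loglik = np.mean(np.log(P))
```

In the real model the partition function $Z$ is intractable, which is exactly why an IRL procedure is needed to estimate the gradient of this objective.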
\begin{figure}[!t]
\begin{center}
\includegraphics[width=7.2cm]{Fig_Recognition_Acc.pdf}\end{center}
\caption{An example of age-invariant face recognition using age progression models, i.e. TNVP \cite{Duong_2017_ICCV} and SDAP \cite{duong2017learning}. By incorporating their synthesized results, the accuracy of the face recognition (FR) system can be improved significantly. Note that the results of the other FR methods are provided on the MegaFace website \cite{kemelmacher2016megaface}.}
\label{fig:CompareRecognition}
\end{figure}
Finally, a specific design of the IRL framework is proposed to learn the Policy Network.
From the experimental results, SDAP has shown its potential to outperform TNVP and other approaches in synthesis results and cross-age verification accuracy. As shown in Fig. \ref{fig:CompareRecognition}, SDAP can help to significantly improve the accuracy of a face recognition system.
\section{Conclusion}
In this paper, we have reviewed the main structures of Deep Generative Models for the age progression task. Compared to other classical approaches,
Deep Learning has shown its potential both in learning the highly non-linear age variations and in embedding the aging transformation. As a result, not only do the synthesized faces improve in image quality, but they also help to significantly boost the recognition accuracy of cross-age face verification systems.
Several common aging databases that support the facial modeling and aging embedding process are also discussed.
\bibliographystyle{ieee}
\section{Introduction}\label{Introduction}
Efficient preparation and manipulation of ultracold atomic gases in optical lattices (OL) have applications in many
fields, including quantum simulation of many-body systems, the
realization of quantum computation, quantum optics, and high-precision atomic clocks~\cite{BlochRev,Dervianko,Cronin,book}. There is a common concern
how to quickly transfer the Bose-Einstein condensates (BEC) from the initial harmonic
trap into a desired band of an OL with high fidelity and robustness. For example, to load atoms into the
ground band in an OL, one chooses to ramp up the lattice depth adiabatically, the time scale usually
lasts up to tens of milliseconds. To shorten the time of transfer, different techniques were proposed, sharing the concept of "shortcuts to adiabaticity"~\cite{reviewSTA13}. They promise to reach the same target state as the adiabatic process but within a very short time. One kind of shortcut is the continuous action method, including counter-diabatic driving, fast-forward protocols and inverse engineering. They are developed and exploited extensively in rapid manipulations of cold atoms, such as expansion/compression, rotation, transport and loading, etc.~\cite{Chen,Schaff-2,Oliver,Sofia,JorgeSciRep,Campo,Masuda,Strungari}. The other is optimal control~\cite{Rabitz,Jorge} or composite pulses like in nuclear magnetic resonance~\cite{Levitt}. These techniques have been used for atomic clocks, atomic interferometry and quantum computing~\cite{Schleier,Butts,Lee}. As for loading atoms into an OL, theoretical proposals such as adding a supplementary driving potential~\cite{Campo,Takahashi} are very attractive.
Recently ultracold gases in higher bands of OL attracted much attention. Many interesting many-body phenomena, e.g., supersolid quantum phases in cubic lattices~\cite{Scarola}, quantum stripe ordering in
triangular lattices~\cite{Wu}, orbital degeneracy~\cite{Lewenstein} can
appear with ultracold atoms in excited-band states. However, the most widely used adiabatic approaches cannot directly
transfer atoms into the excited bands. Several experimental techniques
have been developed, including: (i) coherent manipulation of vibrational bands by stimulated Raman transitions~\cite{Muller}, (ii) using a moving lattice to load a BEC into an excited band~\cite{Browaeys}%
, and (iii) swapping population to selectively excite the atoms
into the P-band~\cite{Wirth1} or F-band~\cite{M} of a bipartite
square OL. All these approaches require
transferring atoms into the S-band first. A fast and high-fidelity shortcut that directly loads into the desired band is still lacking.
In this paper, we demonstrate an effective method for transferring atoms from a harmonic trap into a desired band of an OL.
This shortcut stems from nonholonomic coherent control~\cite{Lloyd,Harel} and is composed of standing-wave pulse sequences imposed on the system before the lattice is switched on. The time duration and interval of each step are optimized in order to reach the target state with high fidelity and robustness. This process can be completed within several tens
of microseconds, reducing the loading time by up to three orders of magnitude compared to adiabatic loading. It can be applied to load different excited bands and opens up the possibility to study their dynamic behavior. Furthermore, we demonstrate the manipulation of superpositions of Bloch states and loading into two-dimensional (2D) and three-dimensional (3D) OL. Our experimental results are in good agreement with the theoretical model.
This manuscript is organized as follows. In Sec.~\ref{Method}, we
introduce our idea of shortcut loading and the optimization of pulse sequences for the S-band with zero quasi-momentum. The demonstration of loading atoms into even-parity excited bands such as the D- or G-band, and odd-parity excited bands such as the P-band of an OL, is given in Sec.~\ref{Higher}.
In Sec.~\ref{Manipulation}, shortcut loading of atoms into the S-band with non-zero quasi-momentum and into superpositions of band states is implemented. The case of 2D or 3D OL with degenerate energies of the excited states is shown in Sec.~\ref{3D}. Finally, the main results are summarized in Sec.~\ref{Conclusions}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\textwidth]{f1.pdf}
\end{center}
\caption{The system before and after the preparation process. (a) Before the process, the atoms are confined in a weak
harmonic trap. (b) After the preparation, the atoms are transferred into a desired band of a 3D OL. (c) The band structure of a 1D OL versus quasi-momentum $q$ in the first Brillouin zone for $V_0=10E_r$. (d) From bottom to top, the solid lines are the spatial density distributions of the different Bloch states (S,P,D,F,G) with $q=0$. The dashed line is the spatial distribution of the lattice potential.}\label{fig:f1}
\end{figure}
\section{The shortcut loading method}\label{Method}
\subsection{The idea of shortcut}\label{shortcut}
We consider the general situation for transferring atoms into an OL. Before the preparation, atoms are confined in a weak
harmonic trap $V_{harm}=\frac{1}{2} m (\omega_{x}^{2}x^{2}+\omega_{y}^{2}y^{2}+\omega_{z}^{2} z^{2})$ with the initial wavefunction $\left| {\psi_i} \right\rangle=|p=0\rangle$, as shown in Fig.~\ref{fig:f1}(a), where $m$ is the atomic mass, $\omega_{x,y,z}$ are the trap frequencies and $p$ is the atomic momentum. The loading process then transfers the atoms into a target state of an OL. The lattice is constructed by a set of laser beams with electric field amplitudes $\vec{E_i}$, and its potential can be written as
\begin{equation} \label{e1}
V(\vec{r})\propto-\sum_{i,j}\vec{E}_i\cdot\vec{E}_j\cos((\vec{k}_i-\vec{k}_j)\cdot\vec{r}+(\alpha_{i}-\alpha_{j})),
\end{equation}
where $k_{i}=2\pi/\lambda_{i}$ is the wave number, $\lambda_i$ the wavelength and $\alpha_{i}$ the initial phase of laser beam $i$. For a cubic lattice, we can assume $\vec{k}_j=-\vec{k}_i$ with $i=x,y,z$, as shown in Fig.~\ref{fig:f1}(b). Neglecting atom-atom interactions (the loading time is very short) and assuming the lattice laser to be sufficiently far detuned, the single-atom Hamiltonian in the OL is given by $(\hbar =1)$, %
\begin{equation}
\hat{H}=\frac{\hat{p}^{2}}{2m}+V(\vec{r}).
\label{e1a}
\end{equation}
According to Bloch's theorem, the eigenstates of the Hamiltonian $\hat{H}$ can be expressed as
$\left|{n,\vec{q}}\right\rangle=u_{n,\vec{q}}(\vec{r})e^{i\vec{q}\cdot \vec{r}}$, with the index of the energy band $n=1,2,3...$ and the quasi-momentum $\vec q$.
We first consider the 1D case for simplicity. The potential can be expressed as $V(x)=\frac{V_0}{2}(1+\cos{2kx})$, where $V_0$ is the lattice depth (here the harmonic trap is ignored during the preparation process because it is small compared with the OL potential). The Bloch states can be written as
\begin{equation}
\left|{n,q}\right\rangle =\sum_{\ell}c_{n,\ell}\left|{2\ell k+q}\right\rangle.
\label{e2}
\end{equation}
The target state $\left| {\psi_a} \right\rangle$ can thus be decomposed over a reduced basis of plane waves $| 2{\ell}k+q\rangle$.
In quasi-momentum space, the parity of a pure Bloch state at $q=0$ is characterized by $\Omega=\sum_{\ell}\left|c_{n,\ell}-c_{n,-\ell}\right|^2/4$, where $\Omega=1$ stands for a state with odd parity and $\Omega=0$ for even parity. As shown in Fig.~\ref{fig:f1}(c), the Bloch states with $n=1,3,5,...$ correspond to the S-, D-, G-bands with even parity, and those with $n=2,4,...$ to the P- and F-bands with odd parity (for $V_0=10E_r$, where $E_r=k^2/(2m)$ is the one-photon recoil energy). The corresponding wave functions of the different Bloch states (S,P,D,F,G) with $q=0$ are shown in Fig.~\ref{fig:f1}(d).
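The decomposition of Eq.~(\ref{e2}) and the parity indicator $\Omega$ are easy to evaluate numerically: truncating the plane-wave basis at $|\ell|\le L$ turns $\hat H$ into a tridiagonal matrix. The following sketch (illustrative only; the truncation $L=10$ is our own choice, not a parameter of the experiment) diagonalizes this matrix at $q=0$ and checks the parities of the lowest bands.

```python
import numpy as np

def bloch_hamiltonian(V0, q=0.0, L=10):
    """H for V(x) = (V0/2)(1 + cos 2kx) in the plane-wave basis |2*l*k + q>,
    l = -L..L; energies in units of E_r, quasi-momentum q in units of k."""
    ls = np.arange(-L, L + 1)
    H = np.diag((2.0 * ls + q) ** 2 + V0 / 2.0)   # kinetic term + constant offset
    off = np.full(2 * L, V0 / 4.0)                # cos(2kx) couples l <-> l + 1
    return H + np.diag(off, 1) + np.diag(off, -1)

def parity(c):
    """Omega = sum_l |c_l - c_{-l}|^2 / 4: 0 for even parity, 1 for odd."""
    return np.sum(np.abs(c - c[::-1]) ** 2) / 4.0

E, C = np.linalg.eigh(bloch_hamiltonian(10.0))    # columns C[:, n - 1] = c_{n, l}
for n, name in enumerate("SPDFG"):
    print(name, "band: E =", round(E[n], 3), " Omega =", round(parity(C[:, n]), 3))
```

For $V_0=10E_r$ this reproduces the even/odd alternation of the S-, P-, D-, F-, G-bands shown in Fig.~\ref{fig:f1}(c,d).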
To achieve fast loading, we apply an $m$-step preloading sequence to the initial state $\left|\psi_i\right\rangle$ before switching on the lattice with depth $V_0$. The state after the preloading sequence, $\left| {\psi_f} \right\rangle$, is given by:
\begin{equation}
|\psi_f\rangle= \prod_{j=m}^{1} \hat{U}_j|\psi_i\rangle,
\label{sequence}
\end{equation}
where $\hat{U}_j=e^{-i\hat{H}_j t_j}$ is the evolution operator of the $j^{th}$ process. For the target state $\left| {\psi_a} \right\rangle$, the parameters $\hat{H}_j$ and $t_j$ can be determined via maximizing the fidelity
\begin{equation}
\zeta=|\langle \psi_a|\psi_f\rangle|^2.
\label{fidelity}
\end{equation}
When $\zeta=1$, all the atoms are prepared in the state $\left| {\psi_a} \right\rangle$, while $1-\zeta$ quantifies the difference between the achieved atomic state $|\psi_f\rangle$
and the target state $\left| {\psi_a} \right\rangle$. In other words, the deviation rate $N_e$ is:
\begin{equation}
N_e=1-|\langle \psi_a|\psi_f\rangle|^2.
\end{equation}
Our goal is to properly choose $\hat{H}_j$ and $t_j$ so that $N_e$ is small enough
to be neglected in the experiment.
\subsection{Calculating the time sequences}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\textwidth]{f2.pdf}
\end{center}
\caption{(a) Four different time schemes for non-adiabatic loading into the ground (S) band. The OL turns on abruptly (a1); a four-step preloading sequence with both the potential depth and the duration of each step set as free parameters (a2); a one-pulse (a3) and a two-pulse (a4) preloading sequence with fixed potential depth and variable duration in each step. (b) Deviation rate $N_e$ for different lattice depths, where the blue diamonds, red circles, black stars and green points correspond to (a1)-(a4), respectively. A logarithmic scale is used for the vertical axis. (c) Typical loading time for the one-pulse (a3) (solid line) and two-pulse (a4) (dashed line) sequences.}\label{fig:f2}
\end{figure}
One obvious choice for each $\hat{H}_j$ is to take the Hamiltonian corresponding to the interaction of atoms with a standing wave with the same periodicity as the OL. For this purpose, the power of the same laser as the one used for the final lattice loading is simply adjusted and each Hamiltonian $\hat{H}_j$
is obtained after substitution of $V_0$ by the new lattice depth $V_j$. More precisely, as $\hat{H}_j$ has spatial
periodicity, we get its eigenstates by solving the equation
$\hat{H}_j|n,q,V_j\rangle=E_{n,q}|n,q,V_j\rangle$~\cite{JPB02}, where $E_{n,q}$ is the corresponding eigenenergy. We use the notation
$|n,q,V_j\rangle$ to denote the Bloch states at lattice depth $V_j$. Since only states with $q=0$ are initially populated, no other quasi-momenta can be populated during the sequence of pulses. The state of the system can then be written in the momentum eigenstate basis $| 2{\ell} k+q\rangle$, which is independent of the
potential depth $V_j$, and the
evolution operator can be written as the following matrix:
\begin{equation}
\hat{U}_j(V_j,t_j)=\hat{C}(V_j)\hat{E}(V_j,t_j)\hat{C}(V_j)^{\dagger},
\end{equation}
where $\hat{C}(V_j)$ is the unitary matrix of transition between the Bloch states basis and the momentum eigenstates basis with matrix elements
\begin{equation}
\hat{C}(V_j)_{{\ell}n}=\langle 2{\ell} k+q|n,
q,V_j\rangle,
\end{equation}
and $\hat{E}(V_j,t_j)$ is a diagonal matrix with elements
\begin{equation}
\hat{E}(V_j,t_j)_{nn}=\exp(-iE_{n,q}(V_j)t_j).
\end{equation}
Because of the simple form of the potential, these matrices are easy to obtain, and from them the evolution of the wave function can be calculated for any pulse sequence.
The values of $V_j$ and $t_j$ are then obtained by optimising for the specific target state, using for example a gradient descent algorithm.
We start with four steps, for which the depths $V_j$ and durations $t_j$ ($j=1,2,3,4$) are independently adjusted, as shown in Fig.~\ref{fig:f2}(a2). Optimising for maximum fidelity (minimal deviation rate $N_e$, with $t_j$ constrained between $0\,\mu$s and $50\,\mu$s and $V_j$ between $0E_r$ and $30E_r$), we find $N_e$ to be in the range of $10^{-5}$ or smaller for lattice depths of up to $30E_r$ (Fig.~\ref{fig:f2}(b)).
Next we turn to a simpler control: we keep the lattice strength $V_j=V_0$ for $j=1,3$ (fixed to the final lattice potential $V_0$) and $V_j=0$ for $j=2,4$, with the times $t_j$ as free parameters. This makes the sequence very easy to implement experimentally. A series of on and off pulses can be combined into a pulse sequence of length $m$, where the $j^{th}$ component is composed of a duration $t_{j1}$ with the OL on and an interval $t_{j2}$ with the OL off (Fig.~\ref{fig:f2}(a4)). To obtain an optimised shortcut scheme we have to find the proper time sequences such that the fidelity $\zeta = {\left| {\left\langle {{\psi_f}} \right.\left| {{\psi_a}} \right\rangle } \right|^2} \to 1$. From the green points in Fig.~\ref{fig:f2}(b), we can see that the deviation rate remains lower than $0.1\%$ for all lattice depths. If we use only one optimized pulse, as shown in Fig.~\ref{fig:f2}(a3), the fidelity is below $99\%$ for most of the considered OL depths, but still much better than simply switching on the OL (Fig.~\ref{fig:f2}(a1)). On the other hand, using more pulses, i.e. extending the sequence of Fig.~\ref{fig:f2}(a4), brings only a small further improvement. In addition, the typical durations of the loading process under a two-level model approximation are given in Fig.~\ref{fig:f2}(c) for one and two pulses. The improvement of fidelity comes at the expense of the loading time.
In the rest of our study we choose the simple scheme illustrated in Fig.~\ref{fig:f2}(a4).
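The construction of the propagators $\hat{U}_j=\hat{C}\hat{E}\hat{C}^{\dagger}$ and the on/off optimisation described above can be sketched numerically. The snippet below is a sketch under our own conventions ($\hbar=1$, energies in units of $E_r$, times in units of $\hbar/E_r\approx50\,\mu$s for $E_r=3.16$~kHz, plane-wave truncation $L=10$, which is our choice): a two-pulse sequence for the S-band at $V_0=10E_r$ is refined by a bounded quasi-Newton optimiser, starting from durations close to the experimentally used values.

```python
import numpy as np
from scipy.optimize import minimize

L = 10
ls = np.arange(-L, L + 1)

def hamiltonian(V0):
    """q = 0 plane-wave Hamiltonian of V(x) = (V0/2)(1 + cos 2kx), units of E_r."""
    H = np.diag((2.0 * ls) ** 2 + V0 / 2.0)
    off = np.full(2 * L, V0 / 4.0)                   # cos(2kx) couples l <-> l + 1
    return H + np.diag(off, 1) + np.diag(off, -1)

def propagator(V, t):
    """U(V, t) = C exp(-i E t) C^dagger, with t in units of hbar/E_r."""
    E, C = np.linalg.eigh(hamiltonian(V))
    return (C * np.exp(-1j * E * t)) @ C.conj().T

V0 = 10.0
psi_a = np.linalg.eigh(hamiltonian(V0))[1][:, 0]     # target: S band at q = 0
psi_i = np.zeros(2 * L + 1, dtype=complex)
psi_i[L] = 1.0                                       # initial plane wave |p = 0>

def deviation(ts):
    """N_e = 1 - |<psi_a|psi_f>|^2 for an on/off sequence (t11, t12, t21, t22)."""
    psi = psi_i
    for t_on, t_off in zip(ts[0::2], ts[1::2]):
        psi = propagator(V0, t_on) @ psi             # lattice on at depth V0
        psi = propagator(0.0, t_off) @ psi           # lattice off
    return 1.0 - np.abs(psi_a.conj() @ psi) ** 2

# start near the experimental two-pulse durations (5.5, 21.0, 13.0, 6.1) us,
# converted with hbar/E_r ~ 50.4 us, and refine
x0 = np.array([5.5, 21.0, 13.0, 6.1]) / 50.4
res = minimize(deviation, x0, bounds=[(0.0, 1.0)] * 4, method="L-BFGS-B")
print("optimised deviation N_e =", res.fun)
```

Replacing `psi_a` by a higher eigenvector column gives sequences analogous to those used for the excited bands below.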
\subsection{Experimental methods to probe the final state}
All our measurements are performed by absorption imaging after 31 ms of \textit{Time of Flight} (TOF). The image thereby reflects the momentum distribution of the atoms after their release from the optical lattice.
If we switch off the lattice abruptly (non-adiabatic switch-off, NAS), the wave function of the atoms in the lattice is projected onto its momentum components. If the trapped wave function is coherent, one observes diffraction peaks after time of flight.
If we switch off the lattice adiabatically, the atomic wave functions in the different bands are mapped to different momentum components. This so-called \textit{Band Mapping} (BM)~\cite{Muller, EsslingerMap1,SpreeuwMap,EsslingerMap2} allows us to investigate which bands the atoms reside in. In addition, the distribution of the atoms inside the mapped Brillouin zone allows the distribution of quasi-momenta to be measured. In our experiments the adiabatic switch-off is accomplished by exponentially ramping down the OL potential as $e^{-t/\eta}$, with a characteristic decay time $\eta=100\,\mu$s and a total ramp duration of $500\,\mu$s.
\subsection{Experimental measurement for loading atoms into $S$ band}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\textwidth]{f3.pdf}
\end{center}
\caption{The experimental demonstration of loading atoms into the S-band. The measured loading fidelity without (a) and with (b) the shortcut method. The time sequence used and the absorption image after band mapping (BM) are given in (a1) and (a2), respectively. We integrate the image along the $z$ direction and fit the atom distribution (blue points) by a bi-modal function (red line) (a3,b3). The atom numbers for the S- (blue area) and D- (green area) band are obtained in (a4,b4). (c) After the shortcut and a holding time in the OL, we use a reverted pulse sequence to transfer the atoms back to the original state (c1). (c2-c4) show absorption images with non-adiabatic switch-off (NAS) at the initial time, before and after using the reverted pulse sequence, respectively.}\label{fig:f3}
\end{figure}
To demonstrate our shortcut approach, we prepare a nearly pure BEC of about $1.5\times10^5$ $^{87}$Rb atoms in a hybrid trap formed by overlapping a single-beam optical dipole trap at a wavelength of 1064~nm with a quadrupole magnetic trap. The resulting potential has harmonic trapping frequencies $(\omega_x,\omega_y,\omega_z)=2\pi\times(28,55,65)$~Hz, and the temperature is about $60$~nK. The lattice is implemented by a standing wave created by two counter-propagating laser beams along the $x$-direction, with a lattice constant of $\lambda/2=426$~nm and a recoil energy $E_r$ corresponding to $3.16$~kHz.
We start with the sequence of Fig.~\ref{fig:f3}(a1), switching the OL with $V_{0}=10E_{r}$ on abruptly and holding the atoms in the lattice for $t=2$~ms. The absorption image after BM (Fig.~\ref{fig:f3}(a2)) shows a significant fraction of atoms at momenta $\pm 2k$, which means that they are in excited bands (here the D-band). We integrate this image along the direction perpendicular to the $\hat{x}$-axis and fit the experimental data points by three bi-modal functions, as shown in Fig.~\ref{fig:f3}(a3) and (a4). Each bi-modal function contains a Gaussian representing the thermal atoms and an inverted parabola representing the condensate in the S- and D-band. The blue and green areas equal the atom numbers in the S- and D-band, respectively. The fidelity measured from Fig.~\ref{fig:f3}(a3) and (a4) is $\zeta=72.6\%$.
We then realized an optimised two-pulse shortcut sequence $(t_{11}, t_{12}, t_{21}, t_{22}) = (\textbf{5.5}, 21.0, \textbf{13.0}, 6.1) \mu s$ with the fixed depth $V_{0}=10E_{r}$ (Fig.~\ref{fig:f3}(b)). Employing band mapping, we verify that the atoms are distributed within the first Brillouin zone, which means that they occupy the S-band. Similarly to above, we measure the fidelity of the final state prepared by the shortcut method and obtain $\zeta=99.2\%$.
To further show that the coherence is not destroyed in the transfer process, we first hold the atoms in the OL for 2 ms and then use two additional inverted pulses to transfer the atoms back to the original state $|\psi_i\rangle$, as shown in Fig.~\ref{fig:f3}(c). We probe the state of the atoms by a non-adiabatic switch-off (NAS). The images obtained with the initial condensate, with the atoms loaded in the OL, and after the two additional inverted pulses are shown in Fig.~\ref{fig:f3}(c2) to (c4), respectively. In Fig.~\ref{fig:f3}(c3), we can see interference peaks similar to the familiar pattern observed in adiabatic loading experiments, which indicates a successful loading without significant excitation or heating. Comparing Fig.~\ref{fig:f3}(c2) and (c4), we conclude that there is little heating or disturbance of the BEC, which proves the effectiveness of our preparation of the ground state of the OL~\cite{Liu}.
\subsection{Robustness analysis}
\begin{figure}[b]
\begin{center}
\includegraphics[width=0.95\textwidth]{f4.pdf}
\end{center}
\caption{(a): The track of the shortcut loading process on the Bloch sphere. The axes $\hat z$ and $\hat n$ are the rotation axes for the states corresponding to the OL being on and off, respectively. The dashed lines are for a one-pulse sequence, going from point A to the intermediate state C and then to the final state $| S \rangle $. The case of two pulses is shown with red and blue solid lines, which start from the initial state A, pass through the intermediate states B, E, and M, and reach the final state $| S \rangle $. The variation of the fidelity with $\delta V$ (b) and $\delta t_{12}$ (c) for the different pulse sequences ($V_{0}=10E_r$). Theory curves are shown as dash-dotted (one pulse), dotted (two-pulse sequence without phase matching) and solid (two-pulse sequence with phase matching) lines, while the corresponding experimental data are shown as square, dot and diamond points, respectively.
}\label{fig:f4}
\end{figure}
In order to analyse the robustness of the pulse sequences, we use a two-level approximation to draw the trajectory of the evolution on the Bloch sphere. Considering the parity of the bands, a two-level model ($\left| S \right\rangle$ and $\left| D \right\rangle$, with corresponding eigenvalues $E_S$ and $E_D$) is sufficient when the OL depth is low. As shown in Fig.~\ref{fig:f4}(a), we choose the target state as the S-band with zero quasi-momentum. The polar axis (${\hat z}$ axis) represents the Bloch state $\left| S \right\rangle$ ($\left| D \right\rangle$) in the positive (negative) direction. Consequently, the initial plane wave, represented by the axis $\hat n$, can be written as $\left| {\psi_i} \right\rangle = {\left( {\cos \frac{\beta }{2},\sin \frac{\beta }{2}} \right)^T}$, where $\beta$ is the angle between the axes ${\hat z}$ and ${\hat n}$. When a pulse is applied, its action on a vector on the Bloch sphere is a counterclockwise rotation by $\varphi_s$ around the axis ${\hat z}$ (the state $\left| S \right\rangle$). Likewise, a time interval with the pulse off is equivalent to a counterclockwise rotation by $\theta_s$ around the axis $\hat n$ (the plane wave with zero momentum).
As can be seen in Fig.~\ref{fig:f4}(a), the paths from A to B and from E to M, generated by ${\hat U}_{j1}$, mainly change the relative phase between the Bloch bands. On the other hand, the paths from B to E and from M to S, generated by ${\hat U}_{j2}$, mainly change the relative populations of the Bloch bands. Many different sequences yield the same $\left| {\psi_a} \right\rangle$ with $\zeta\to1$. From the analysis of the trajectories on the Bloch sphere, we find that symmetric tracks are the most robust: for the track in Fig.~\ref{fig:f4}(a), if
\begin{eqnarray}
{\varphi_1} = {\theta_2}\ \text{and}\ {\theta_1} = {\varphi_2} \label{phase}
\end{eqnarray}
then ${\rm d}^2\zeta/{\rm d}\gamma^2$ is minimal, where $\gamma$ stands for the experimental parameters such as $t_{ij}$ and $V_0$; this is reminiscent of a phase-matching condition.
In Fig.~\ref{fig:f4}(a), this matching condition corresponds to a mirror symmetry about the center of the whole loading path. Therefore, we should choose the rotation angles such that the fidelity is maximal and the robustness highest. When the influence of higher bands is taken into account, the phase-matched sequence obtained in the two-level approximation has to be corrected in order to satisfy the multi-level condition.
The variation of the fidelity $\zeta$ with respect to the pulse amplitude (mismatch with the lattice depth) is shown in Fig.~\ref{fig:f4}(b). The experimental results are in good agreement with the theoretical predictions. The diamond points, which represent the shortcut time sequence including phase matching and the multi-level correction, are the most robust: the variation is less than $0.2\%$ for $\delta V=0.5E_r$. In contrast, the variation is $2\%$ when phase matching is not taken into account (dotted points). Furthermore, although both the one-pulse and two-pulse sequences satisfy the phase-matching condition, Fig.~\ref{fig:f4}(c) indicates that the robustness and fidelity of the one-pulse sequence are lower than those of the two-pulse sequence with respect to timing variations.
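The two-level picture of the phase-matching condition can be tested directly with SU(2) rotations. The sketch below composes the four rotations of a phase-matched two-pulse sequence ($\varphi_1=\theta_2=a$, $\theta_1=\varphi_2=b$) and searches for angles that map the initial state along $\hat n$ onto $|S\rangle$; the opening angle $\beta$ is a hypothetical value, not extracted from the experiment.

```python
import numpy as np
from scipy.optimize import minimize

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rot(axis, angle):
    """SU(2) rotation exp(-i*(angle/2)*(axis . sigma)); axis = (n_x, n_z) in the x-z plane."""
    A = axis[0] * sx + axis[1] * sz
    return np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * A

beta = 0.6                                           # hypothetical angle between z and n
nax, zax = (np.sin(beta), np.cos(beta)), (0.0, 1.0)
psi_i = np.array([np.cos(beta / 2), np.sin(beta / 2)], dtype=complex)  # state along n

def infidelity(x):
    a, b = x                                         # phase matching: phi1 = theta2 = a, theta1 = phi2 = b
    U = rot(nax, a) @ rot(zax, b) @ rot(nax, b) @ rot(zax, a)
    return 1.0 - np.abs((U @ psi_i)[0]) ** 2         # 1 - |<S|psi_f>|^2 with |S> = (1, 0)

# coarse grid over the two angles, then local refinement of the best candidates
g = np.linspace(0.0, 2 * np.pi, 60)
cands = sorted(((a, b) for a in g for b in g), key=infidelity)[:3]
best = min(minimize(infidelity, c).fun for c in cands)
print("best phase-matched infidelity:", best)
```

Perturbing the optimal angles and recording $\zeta$ gives a qualitative analogue of the robustness curves in Fig.~\ref{fig:f4}(b,c).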
\section{Loading atoms into higher bands}\label{Higher}
The above shortcut method can be adapted to load atoms into the excited bands of an OL. Since the initial state has even parity and the parity of the wave function remains unchanged during the pulse sequence, we can directly load atoms into higher bands of even parity such as the D- and G-band. By adding a shift of the lattice phase, we can change the parity and transfer atoms into odd-parity bands such as the P- or F-band.
\subsection{Loading atoms into D-band}\label{D band}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\textwidth]{f5.pdf}
\end{center}
\caption{ The demonstration of loading atoms into the D-band. (a) Time sequence and the experimental image after BM. (b) The BM diagram for atoms in the P- and D-band with zero quasi-momentum. (c) Integration of the momentum distributions perpendicular to the OL for experimental images after NAS. The blue points are the experimental data, and the red lines are results from the bi-modal fit, from which we extract the numbers of atoms at momenta $0$ and $\pm2k$. The three images correspond to holding times of $15\mu s$, $25\mu s$ and $35\mu s$, respectively. (d) The oscillations of the relative populations $W_\ell(t)=N_\ell(t)/N$ $(\ell=0, \pm1)$ with the holding time $t$ for (1) $V_0=10E_r$ and (2) $V_0=20E_r$. We show the measured values of $N_0(t)/N$ (blue circles), $N_1(t)/N$ (red squares), and $N_{-1}(t)/N$ (green triangles), and the corresponding theoretical calculations as blue solid lines and red dashed lines, respectively.
}\label{fig:f5}
\end{figure}
Similarly to the loading into the S-band, we numerically maximize the fidelity $\zeta$
to obtain the time sequence for loading atoms into the D-band. The resulting sequence is $(t_{11},t_{12},t_{21},t_{22})=(\textbf{24.5},28.8,\textbf{8.1},2.2)\mu s$ for $V_{0}=10E_r$, and the experimental image obtained by BM is shown in Fig.~\ref{fig:f5}(a). However, the fidelity cannot be extracted from this image because both D- and P-band atoms are distributed at $\pm 2k$, as illustrated in Fig.~\ref{fig:f5}(b).
Instead, the loading fidelity can be measured via the oscillations of the relative populations of the different momentum components in images taken after NAS. After the preparation process, the atoms are in the state
\begin{eqnarray}
|\psi _{f}\rangle={\prod\limits_{j = m}^1 {{\hat U}_{j2} {\hat U}_{j1}} } \left| {p_x=0} \right\rangle \equiv \sum_{n}f_{n}|n,0\rangle. \label{psil}
\end{eqnarray}%
At holding time $t$ in the OL, the atomic state becomes $|\psi _{t } \rangle =\sum_{n}f_{n} e^{-iE_{n ,0}t }|n,0\rangle $, where $E_{n,0}$ and $|n,0\rangle$ are the eigen-energy and eigen-state of $H_{x}$, $H_{x}|n,0\rangle =E_{n,0}|n,0\rangle$. Therefore, the number of atoms $N_{\ell}(t)$ in the state $|p_{x}=2\ell k\rangle$ ($\ell=0, \pm1$) at time $t$ is given by $N_{\ell}(t )=N|\langle p_{x}=2\ell k|\psi _{t}\rangle |^{2}$, and satisfies
\begin{eqnarray}
W_\ell(t)\equiv\frac{N_{\ell}(t )}{N}=\left\vert \sum_{n}f_{n}c_{n,\ell}e^{-iE_{n ,0}t }\right\vert ^{2}, \label{g}
\end{eqnarray}%
where $N$ is the total atom number and $c_{n,\ell}$ is defined in Sec.~\ref{shortcut}. Eq.~(\ref{g}) shows that the atom numbers $N_{\ell}(t)$ oscillate with time $t$.
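Eq.~(\ref{g}) can be evaluated directly once the $c_{n,\ell}$ are known. The sketch below uses hypothetical band amplitudes $f_n$ (mostly D-band with a small S-band admixture; these are not the experimental values) to illustrate the population oscillations for $V_0=10E_r$; times are in units of $\hbar/E_r$ and the truncation $L=8$ is our own choice.

```python
import numpy as np

L = 8
ls = np.arange(-L, L + 1)
V0 = 10.0

# q = 0 lattice Hamiltonian in the plane-wave basis |2*l*k>, energies in E_r
H = np.diag((2.0 * ls) ** 2 + V0 / 2.0)
H += np.diag(np.full(2 * L, V0 / 4.0), 1) + np.diag(np.full(2 * L, V0 / 4.0), -1)
E, C = np.linalg.eigh(H)                      # columns C[:, n - 1] = c_{n, l}

# hypothetical prepared state: mostly D band (n = 3) with a small S-band admixture
f = np.zeros(2 * L + 1, dtype=complex)
f[2] = np.sqrt(0.98)                          # D band
f[0] = np.sqrt(0.02)                          # S band

def W(t, l):
    """Relative population of |p = 2*l*k> at holding time t (units of hbar/E_r)."""
    amp = np.sum(f * C[L + l, :] * np.exp(-1j * E * t))
    return np.abs(amp) ** 2

ts = np.linspace(0.0, 2.0, 200)
W0 = np.array([W(t, 0) for t in ts])
print("W_0 oscillates between", W0.min().round(3), "and", W0.max().round(3))
```

Fitting curves of this form to the measured populations is what yields the fidelities quoted below.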
Fig.~\ref{fig:f5}(c) shows the momentum distribution along the $\hat{x}$ direction extracted from experimental images obtained by NAS. By a bi-modal fit we obtain $N_{\ell}$ and $W_\ell$ for the different momentum states $|p_{x}$=$2\ell k\rangle$. The experimental values $W_\ell(t)$ oscillate with time, as shown in Fig.~\ref{fig:f5}(d), and are well described by the theoretical model of Eq.~(\ref{g}). From a fit to the experimental data (blue solid line for $W_0(t)$ and red dashed line for $W_{\pm1}(t)$; we set $W_1(t)=W_{-1}(t)$ in the theoretical calculations) we extract the fidelity of the preparation process: $98.2\%$ for $10E_r$ and $97.3\%$ for $20E_r$, using the loading time sequence $(\textbf{17.2},25,\textbf{12.5},1.1)\mu s$. After loading, the lifetime of the atoms in the D-band can be measured~\cite{Zhai}.
\subsection{Loading atoms into G-band}\label{G band}
\begin{figure}[b]
\begin{center}
\includegraphics[width=0.8\textwidth]{f6.pdf}
\end{center}
\caption{ (a) The shortcut time sequence for loading atoms into G-band. The experimental image with BM after holding 50$\mu s$ in an OL with $V_0=5E_r$ (b1) and its Bi-modal fit for integration along the vertical direction of OL (b2). (c) The dynamical oscillations of atoms after loading into G-band from the experiment by NAS method. (d) Schematic of extended Bloch bands (P, D, F and G).
}\label{fig:f6}
\end{figure}
For even higher bands, such as the G-band, theoretical calculations suggest a fidelity $\zeta=99.2\%$ for a $5E_r$ deep lattice using an eight-pulse sequence: $(\textbf{32},39,\textbf{40},14,\textbf{41},13,\textbf{13},14,\textbf{13},13,\textbf{13},14,\textbf{11},43,\textbf{11},12)\mu s$. The experimental image obtained by BM and its momentum distribution are given in Fig.~\ref{fig:f6}(b). The fidelity cannot be extracted from the BM image because both G- and F-band atoms are distributed at $\pm 4k$.
When we apply this shortcut method to load atoms into the G-band at $q=0$, a dynamical oscillation is clearly visible in the images with NAS, as shown in Fig.~\ref{fig:f6}(c). This is best understood by looking at the corresponding extended band structure drawn in Fig.~\ref{fig:f6}(d), where the energy gaps between different bands are marked with $A_s$ ($s=1,2,3,4,5,6$): after loading into the G-band, the atoms fall down into the F-band due to the small gap between the G- and F-bands~\cite{Wang}. During the G-band preparation process, we ignored the effect of the harmonic trap because the shortcut is very short. However, when we observe the atoms in the OL for a long time, the weak harmonic trap does affect the dynamics. Once the BEC is in the F-band, it continues to lose momentum while gaining potential energy from the harmonic confinement. This corresponds to the BEC traversing the F-band dynamically from $A_1$, $A_6$ to $A_2$, $A_5$ in Fig.~\ref{fig:f6}(d). Upon arriving at $A_2$ or $A_5$, the atoms face different dynamics depending on the lattice strength. If the lattice is weak and the Bragg reflections at $A_2$ and $A_5$ are correspondingly weak, the BEC continues into the D-band by a Landau-Zener transition. After evolving along the entire D-band, the BEC reaches the band gap between the D- and P-bands at $A_3$ and $A_4$. Due to the large gap between the D- and P-bands, all the atoms at $A_3$ ($A_4$) are Bragg reflected to $A_4$ ($A_3$) without tunneling into the P-band. Afterwards, the BEC reverses its dynamics by moving up in momentum from $A_4$, $A_3$ to $A_5$, $A_2$. It eventually arrives at $A_6$, $A_1$, completing half of an oscillation cycle. As illustrated in Fig.~\ref{fig:f6}(c), the oscillation period is about $24$~ms.
The above atomic oscillation depends on the optical depth. When the OL is strong such as $V_0=15E_r$, the oscillation only exists within the F-band with a period of $17$ms. When the lattice strength is intermediate, for example $V_0=7.5 E_r$, there is a superposition of two kinds of oscillations: across both the F and D bands, and within the F-band~\cite{Hu}.
\subsection{Loading atoms into P-band}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.6\textwidth]{f8.pdf}
\end{center}
\caption{Schematic to prepare atoms into the P-band. (a) Two 1D lattices with a $3\pi/2$ phase shift. (b) Loading time sequences with the first series of pulses from $0$ to $t_{1}$ for $V_{even}(x)=V_0\cos
^{2}\left( kx\right)$ and the second from $t_{1}$ to $t_{2}$ for $V_{odd}(x)=V_0\cos^{2}\left(
kx+3\pi/4\right)$. (c1-c3) Superposition
coefficients $c_{\ell}$ right before $t_1$, $c^\prime_{\ell}$ at the
moment right after $t_1$ and at $t_2$, respectively. (d1-d3) The corresponding population distribution in the Bloch band.
}\label{fig:f8}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\textwidth]{f9.pdf}
\end{center}
\caption{ (a) Designed time sequence for loading atoms into P-band with $V_0=5E_r$. (b) The momentum distribution of absorption image by NAS along $\hat{x}$ direction. Experimental data (blue points) fitted by a bi-modal function (red line). (c) Population oscillations around momenta $-2k$(red points) and $0k$(blue stars). The solid lines are the fitting curves.
}\label{fig:f9}
\end{figure}
Since the parity of the quantum states in the P-band with $q=0$ is odd, $\psi(-x)=-\psi(x)$~\cite{Bloch2,speed,NP,shaken,GW}, one has to change the parity of the atomic state in order to load the P-band. This can be done by a spatial shift of the OL. Our preparation process therefore consists of two series of pulses, as shown in Fig.~\ref{fig:f8}(a) and (b). In the first series of pulses, from $0$ to $t_{1}$, the atom experiences the spatial potential $V_{even}(x)=V_0\cos^{2}\left( kx\right)$. In the second series of pulses, from $t_{1}$ to $t_{2}$, the atom experiences the potential $V_{odd}(x)=V_0\cos^{2}\left(kx+3\pi/4\right)$. The coefficients $c_{\ell}$ ($c^\prime_{\ell}$) defined in Eq.~(\ref{e2}) and the corresponding distributions in the Bloch bands are shown in Fig.~\ref{fig:f8}(c) and (d), respectively. At time $t_{1}$, all the components $c_\ell$ satisfy $c_\ell-c_{-\ell}=0$, as shown in Fig.~\ref{fig:f8}(c1), so only the even-parity bands S, D, G,... are populated, as shown in Fig.~\ref{fig:f8}(d1).
From the point of view of the second series of pulses, the lattice shift multiplies each coefficient $c_\ell$ by an $\ell$-dependent phase, $c^\prime_\ell|\ell\rangle=c_\ell e^{i2\ell(3\pi/4)}|\ell\rangle$, and the relation between the coefficients becomes $c^\prime_\ell-(-1)^{\ell}c^\prime_{-\ell}=0$. In our loading process, the first series of pulses ensures that the coefficients $c^\prime_\ell$ with even $\ell$ are zero. At the beginning of the second series of pulses, the parity of the state is therefore completely changed, as shown in Fig.~\ref{fig:f8}(c2); the corresponding odd-parity bands P, F,... are shown in Fig.~\ref{fig:f8}(d2). From $t_1$ to $t_2$, the parity remains unchanged, so only the P-band state is
populated at time $t_{2}$, as shown in Figs.~\ref{fig:f8}(c3) and (d3).
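The parity flip produced by the lattice shift can be verified in a few lines: each plane-wave component $|\ell\rangle$ acquires the phase $e^{i2\ell(3\pi/4)}$, which, for a state containing only odd $\ell$, turns the even-parity relation $c_\ell=c_{-\ell}$ into the odd-parity relation $c^\prime_\ell=-c^\prime_{-\ell}$. A minimal check with illustrative (non-experimental) coefficients:

```python
import numpy as np

L = 8
ls = np.arange(-L, L + 1)

# illustrative even-parity coefficients with only odd l populated,
# as prepared by the first pulse series (values are not the experimental ones)
c = np.zeros(2 * L + 1, dtype=complex)
c[L + 1] = c[L - 1] = 1.0                     # l = +1, -1
c[L + 3] = c[L - 3] = 0.3                     # l = +3, -3
c /= np.linalg.norm(c)
assert np.allclose(c, c[::-1])                # even parity: c_l = c_{-l}

# lattice shift by 3*pi/4: each |l> acquires the phase exp(i*2*l*(3*pi/4))
cp = c * np.exp(1j * 2 * ls * (3 * np.pi / 4))

print(np.allclose(cp, -cp[::-1]))             # odd parity: c'_l = -c'_{-l}
```

The check prints \texttt{True}: after the shift the populated components satisfy the odd-parity relation.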
In the experiment, we use two acousto-optic modulators to form the designed pulse sequence, with a frequency difference $\delta\omega=182.5$~MHz corresponding to a phase shift of $3\pi/4$ between the two pulse series. Four pulses are used to transfer atoms into the P-band for $V_0=5E_r$: the first series is $(\textbf{31.2},39.4,\textbf{28.8},20.0)\mu s$ and the second is $(\textbf{24.0},20.0,\textbf{20.0},24.0)\mu s$, as shown in Fig.~\ref{fig:f9}(a). The momentum distribution of the absorption image along the $\hat{x}$ direction after NAS is shown in Fig.~\ref{fig:f9}(b). The distribution is nearly zero at $0k$ and has significant peaks at $\pm 2k$. To obtain the loading fidelity of the P-band, we measure the oscillations of $W_\ell(t)$ for the momentum components $p_x=0$ and $-2k$, as shown in Fig.~\ref{fig:f9}(c), similarly to the D-band case. By comparing the experimental data with the envelope of the beating signal, we find the initial quantum state $|\psi(t=0)\rangle=\sqrt{0.9}|P,q=0\rangle+\sqrt{0.05}|D,q=0\rangle+\sqrt{0.05}|S,q=0\rangle$~\cite{Hu}. The corresponding fidelity in the P-band is about 90\%.
After loading atoms into P-band and holding for a longer time we can observe the quantum equilibration in dilute Bose gases~\cite{Niu1}. In a similar way, we can also transfer atoms into the F-band with two sets of standing wave pulses ${V_0}{\cos ^2}\left( {{k}x} \right)$ and ${V_0}{\cos ^2}\left( {{k}x{\rm{ + }}3\pi /4} \right)$~\cite{Wang}.
\section{Preparation and manipulation of superposition of Bloch states }\label{Manipulation}
Above, we presented a shortcut loading method to transfer atoms into a single band at zero quasi-momentum. We now extend our scheme to load atoms into the S-band with non-zero quasi-momentum and into superpositions of band states. Furthermore, the scheme can be used to construct $\pi/2$ or $\pi$ pulses between the S and D bands, and to implement a Ramsey interferometer (RI) with motional states~\cite{Hu1}.
\subsection{Preparation of atoms in the S-band with non-zero quasi-momentum}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\textwidth]{f10.pdf}
\end{center}
\caption{Schematic diagrams for transferring atoms into the S-band with non-zero quasi-momentum (a): Step 1, the BEC in the ground state of a harmonic trap; Step 2, the atoms are accelerated to a momentum $p_0$; Step 3, using shortcut pulses, the atoms are transferred into the S-band of an OL with non-zero quasi-momentum. (b): The absorption images by NAS correspond to step 1 and step 2, respectively. (c) The images after step 3 by NAS (left) and BM (right) methods, respectively.}\label{fig:f10}
\end{figure}
To load atoms into the S-band with non-zero quasi-momentum $q_0$, the BEC, initially at $p=0$ (Fig.~\ref{fig:f10}(a)), must first be accelerated to a momentum $p_0$. We use a magnetic field gradient provided by coils to accelerate the BEC to the momentum $p=-0.8 k$ within $2$~ms. Immediately afterwards, the designed pulse sequence is used to transfer the atoms into the S-band of the OL at quasi-momentum $q_0$. The corresponding experimental absorption images obtained by NAS after steps 1 and 2 are shown in Fig.~\ref{fig:f10}(b). After step 3, the final state $\left|S,q=-0.8k\right\rangle$ is shown in Fig.~\ref{fig:f10}(c). The momentum distribution measured by NAS (left image) has significant peaks at $0.8k$ and $-1.2k$. The right image, obtained by BM, shows a significant peak at $0.8k$, which verifies the effectiveness of the loading.
\subsection{Preparation of atoms in superposition of Bloch states}
We can also choose the target state to be a superposition state, such as $|\psi _{a}\rangle =(|S\rangle +|D\rangle)/\sqrt{2}$, as illustrated in Fig.~\ref{fig:f11}~\cite{Zhai}. Using the shortcut time sequence $(\textbf{30},6.4,\textbf{8.7},4.5)\,\mu$s, we extract a fidelity of $\zeta=0.995$ from fits to the measured $W_\ell(t)$ ($\ell=0,\pm 1$) as a function of the holding time $t$.
\begin{figure}[b]
\begin{center}
\includegraphics[width=0.9\textwidth]{f11.pdf}
\end{center}
\caption{Loading atoms into superposition states. (a) Schematic illustration of the superposition state $(|S\rangle +|D\rangle)/\sqrt{2}$ in the OL with $q=0$ for $V_0=10E_r$. (b) The relative population as a function of holding time $t$ (similar to Fig.~\ref{fig:f5}(b)).}\label{fig:f11}
\end{figure}
\subsection{Manipulation of Bloch states}
Our shortcut method can also be employed to manipulate Bloch states~\cite{Deng,Xiong,Yue,Yang}, for instance to design $\pi/2$ or $\pi$ pulses between the S- and D-bands, which constitute a pseudo-spin system. Unlike in a conventional Ramsey interferometer, where selection rules can be used to prepare the population in two states, lattice band transitions, similar to transitions between vibrational states in molecules~\cite{mole}, have no selection rules. For an arbitrary initial superposition of S- and D-band states, $|\psi_i\rangle=\sin\frac{\theta}{2}|S\rangle+e^{i\phi}\cos\frac{\theta}{2}|D\rangle$, a $\pi/2$ pulse $\hat{U}_{\pi/2}$ should transfer the initial state to the final state by
\begin{eqnarray}
\hat{U}_{\pi/2}|\psi_i\rangle=\frac{1}{\sqrt{2}}(\sin\frac{\theta}{2}+\cos\frac{\theta}{2})|S\rangle+\frac{1}{\sqrt{2}}e^{i\alpha}e^{i\phi}(\cos\frac{\theta}{2}-\sin\frac{\theta}{2})|D\rangle,\label{pi2}
\end{eqnarray}%
where $\alpha$ is the additional phase from $\hat{U}_{\pi/2}$.
If we choose $\alpha=\pi-\phi$, it is easy to prove that if $\hat{U}_{\pi/2}$ satisfies
\begin{eqnarray}
\hat{U}_\mathrm{\pi/2} |S\rangle =(|S\rangle+|D\rangle)/\sqrt{2}\ {\rm and} \ \hat{U}_\mathrm{\pi/2} |D\rangle = (-|S\rangle+|D\rangle)/\sqrt{2},\label{picon}
\end{eqnarray}
then $\hat{U}_{\pi/2}$ satisfies Eq.~(\ref{pi2}). By tuning the parameters of the pulse sequence, one can maximize the two fidelities $\zeta_{1}=|\langle \psi_{a_1} | \psi_{f_1}\rangle|^2$ and $\zeta_{2}=|\langle \psi_{a_2} | \psi_{f_2} \rangle|^2$ simultaneously, where $|\psi_{f_1}\rangle\equiv\hat{U}_\mathrm{\pi/2} |S\rangle$, $|\psi_{f_2}\rangle\equiv\hat{U}_\mathrm{\pi/2} |D\rangle$, $|\psi_{a_1}\rangle \equiv (|S\rangle+|D\rangle)/\sqrt{2}$ and $|\psi_{a_2}\rangle \equiv (-|S\rangle+|D\rangle)/\sqrt{2}$; this differs from the sequences above, which are subject to only one constraint.
For $V_0=10E_\mathrm{r}$ and the $\pi/2$-pulse time sequence $\{\textbf{56.2}, 28, \textbf{22.6}, 23.1\}\,\mu$s, we find $\zeta_1=96.9\%$ and $\zeta_2=97.9\%$. The design of the $\pi$ pulse is similar to that of the $\pi/2$ pulse. Using the time sequence $\{\textbf{49.2}, 52.5, \textbf{22.1}, 26.4\}\,\mu$s, we transfer atoms from $|S\rangle$ or $|D\rangle$ into the target states $|D\rangle$ or $-|S\rangle$ with fidelities of $98.5\%$ and $98.0\%$, respectively. These time sequences have been employed to build a matter-wave Ramsey interferometer for motional quantum states exploiting the S- and D-bands of an OL~\cite{Hu1}.
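As an illustrative check (a minimal numpy sketch with our own variable names, not the optimisation code used for the experiment), the two fidelity constraints can be evaluated for any candidate $2\times 2$ operator on the $\{|S\rangle,|D\rangle\}$ subspace; for the ideal pulse of Eq.~(\ref{picon}) both fidelities equal one:

```python
import numpy as np

# Basis ordering: index 0 = |S>, index 1 = |D>.
# Ideal pi/2 pulse from Eq. (picon):
#   U|S> = (|S> + |D>)/sqrt(2),  U|D> = (-|S> + |D>)/sqrt(2)
U_ideal = np.array([[1.0, -1.0],
                    [1.0,  1.0]]) / np.sqrt(2)

def fidelities(U):
    """Return (zeta1, zeta2) = |<psi_a1|U|S>|^2, |<psi_a2|U|D>|^2."""
    S = np.array([1.0, 0.0], dtype=complex)
    D = np.array([0.0, 1.0], dtype=complex)
    psi_a1 = (S + D) / np.sqrt(2)    # target state for U|S>
    psi_a2 = (-S + D) / np.sqrt(2)   # target state for U|D>
    z1 = abs(np.vdot(psi_a1, U @ S)) ** 2
    z2 = abs(np.vdot(psi_a2, U @ D)) ** 2
    return z1, z2

z1, z2 = fidelities(U_ideal)
```

For the experimental sequences, $U$ would be the band-projected evolution operator generated by the four-step pulse, and the two numbers returned are exactly the quantities $\zeta_1,\zeta_2$ quoted above.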
\section{Loading atoms into 2D and 3D optical lattices}\label{3D}
The discussion above has focused on 1D optical lattices. We now show how the shortcut pulses can be extended to 2D and 3D OLs, such as square, triangular and hexagonal OLs~\cite{PRA50.5173,Becker,Struck,PRL108.045305}. Among these, the square lattice is the simplest: its total potential energy is the sum of the potential energies in the $\hat{k}_x$ and $\hat{k}_y$ directions, which is not the case for the triangular OL.
\subsection{2D square optical lattice}
The potential energy of a 2D square OL separates into potential energies in the $\hat x$ and $\hat y$ directions if $||\vec{k}_x|-|\vec{k}_y||\ne 0$ or if the electric fields satisfy $\vec{E}_1\perp\vec{E}_2$ in Eq.~(\ref{e1}). In this case, the Hamiltonian of the OL is given by
\begin{eqnarray} \label{e4-2d}
\hat{H} = - \frac{{{1}}}{{2m}}[\frac{{{\partial ^2}}}{{\partial {x^2}}}+ \frac{{{\partial ^2}}}{{\partial {y^2}}}] + \frac{1}{2}[V\left( x \right){\cos}\left( {{2k_x}x} \right)+ V\left( y \right){\cos}\left( {{2k_y}y} \right)],
\end{eqnarray}
and the wave functions can be separated in the form of $\psi (\vec{r}) = \psi_x (x)\psi_y (y)$.
\begin{figure}
\includegraphics[width=1\textwidth]{f1_2d.pdf}
\caption{Schematic of Bloch bands in 2D square lattice: (a), (b) and (c) represent S-, P- and D-bands, respectively. (d) S-, P- and D-band energies along the $\hat{x}$ direction at $q_y=0$.}\label{f1_2d}
\end{figure}
\begin{figure}[b]
\includegraphics[width=1\textwidth]{f2_2d.pdf}
\caption{(a) The correspondence between 1D lattice and 2D square lattice band energy spectra with zero quasi-momentum. Bloch states of square lattice in the form of $\psi_x \times \psi_y$ in first column represent products of two 1D Bloch states. The Bloch state $\psi_{2d}$ in square lattice and its band are shown in the second and third columns, respectively. (b) The schematic diagram of the time sequences in the $\hat x$ and $\hat y$ directions corresponding to $\psi_{2d}=\left| 6 \right\rangle$. (c) The loading processes for three different target states and their corresponding calculated fidelities for $\psi_{2d}=\left| 1 \right\rangle$, $\left| 6 \right\rangle$ and
$\frac{1}{\sqrt{2}}\left| 1 \right\rangle+\frac{1}{\sqrt{2}}\left| 4 \right\rangle$. }\label{f2_2d}
\end{figure}
In Fig.~\ref{f1_2d} we sketch the Bloch bands of the square lattice for $V_x=11E_r$ and $V_y=9E_r$; the lattice depths in the $\hat{x}$ and $\hat{y}$ directions differ in order to avoid energy degeneracies. The S-band is the ground band; there are two P-bands, P$_{x}$ and P$_{y}$, and three D-bands, D$_{x}$, D$_{y}$ and D$_{xy}$. Fig.~\ref{f1_2d}(d) shows the energy bands along the $\hat{k}_x$ direction at $q_y=0$. The Bloch states $\psi_{2d}$ of each band can be ordered by their eigenenergies, as shown in the second column of Fig.~\ref{f2_2d}(a). The first state belongs to the S-band, the second and third states are $\left| P_x \right\rangle$ and $\left| P_y \right\rangle$, and the 4th, 5th and 6th states are $\left| D_x \right\rangle$, $\left| D_y \right\rangle$ and $\left| D_{xy} \right\rangle$, respectively. The first column of Fig.~\ref{f2_2d}(a) displays the corresponding products of two one-dimensional Bloch states.
For loading atoms into the 2D square lattice, the evolution operator separates in the $\hat{x}$ and $\hat{y}$ directions, $\widehat{U}\psi=\widehat{U}_x\psi_x\widehat{U}_y\psi_y$. If the target state in the square lattice, $ |\psi_{a}\rangle =\sum\limits_{{i,j}}{\gamma_{ij}\left|n_x=i,n_y=j \right\rangle}$, can be written as a product of two 1D states, $|\psi_{a}\rangle=\sum\limits_{i}{\alpha_{i}\left|n_x=i\right\rangle}\otimes\sum\limits_{j}{\beta_{j}\left|n_y=j\right\rangle}$, then the coefficients of the two forms satisfy
\begin{equation}\label{coefficients}
\begin{bmatrix}
\alpha_{1}\\ \alpha_{2}\\ \vdots\\ \alpha_{m}
\end{bmatrix}
\begin{bmatrix}
\beta_{1}& \beta_{2}&\cdots& \beta_{m}
\end{bmatrix}
=
\begin{bmatrix}
\gamma_{11}&\gamma_{12}&\cdots&\gamma_{1m}\\ \gamma_{21}&\gamma_{22}&\cdots&\gamma_{2m}\\ \vdots&\vdots&\ddots&\vdots\\ \gamma_{m1}&\gamma_{m2}&\cdots&\gamma_{mm}
\end{bmatrix}.
\end{equation}
In this case, we can use two independent 1D pulse sequences in the $\hat{x}$ and $\hat{y}$ directions to reach the target state with close to 100\% fidelity. If Eq.~(\ref{coefficients}) does not hold, numerical optimisation can be used to obtain fidelities as high as possible. In Fig.~\ref{f2_2d}(b), we demonstrate the loading sequence for $\psi_{2d}=\left| 6 \right\rangle$, where the pulse sequences in the two directions are independent but end at the same time; the red part of the pulse sequence represents the laser with a phase shift that breaks parity conservation. Fig.~\ref{f2_2d}(c) displays the calculated time sequences for three different target states together with their very high fidelities.
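Whether a given coefficient matrix $\gamma$ admits the product form of Eq.~(\ref{coefficients}) is exactly the question of whether $\gamma$ has rank one. The short sketch below (assuming numpy; function and variable names are ours) tests this via the singular value decomposition and extracts $\alpha$ and $\beta$ when separation is possible:

```python
import numpy as np

def separate(gamma, tol=1e-10):
    """If gamma is (numerically) rank one, return (alpha, beta) such that
    gamma = outer(alpha, beta); otherwise return None."""
    U, s, Vh = np.linalg.svd(gamma)
    if len(s) > 1 and s[1] > tol * s[0]:
        return None                    # rank > 1: not a product of 1D states
    alpha = U[:, 0] * s[0]             # 1D coefficients along x
    beta = Vh[0, :]                    # 1D coefficients along y
    return alpha, beta

# Separable target: a single product state, i.e. a rank-one gamma.
gamma_prod = np.outer([0.0, 0.0, 1.0], [0.0, 1.0, 0.0])
# Non-separable target mixing two distinct products: rank two.
gamma_mix = (np.outer([1.0, 0.0, 0.0], [1.0, 0.0, 0.0])
             + np.outer([0.0, 0.0, 1.0], [0.0, 0.0, 1.0])) / np.sqrt(2)
```

When `separate` succeeds, the two returned vectors are precisely the inputs for the two independent 1D pulse-sequence designs; when it returns `None`, the joint numerical optimisation mentioned above is needed.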
\subsection{2D triangular optical lattice}
\begin{figure}[b]
\includegraphics[width=1\textwidth]{f3_2d.pdf}
\caption{Demonstration of loading atoms into a 2D triangular OL. (a) Sketch of the 2D triangular OL: three travelling-wave lasers, at an angle of $120^\circ$ to one another, intersect at one point in the $\hat{x}$-$\hat{y}$ plane. (b) The time sequence for loading atoms into the S-band of the triangular lattice, where the sequences for the three beams $k_1$, $k_2$ and $k_3$ are identical. The corresponding population distribution in momentum space at $\vec{q}=0$ for the state $\left| n=1 \right\rangle$ is shown as calculated (c) and experimental (d) results.
}\label{f3_2d}
\end{figure}
For other 2D configurations, such as the triangular OL constructed from three travelling-wave lasers with $|\vec{k}_1|=|\vec{k}_2|=|\vec{k}_3|=|\vec{k}|$ and $\arg(\vec{k}_i,\vec{k}_j)=\pi/3$ ($i\ne j$) (Fig.~\ref{f3_2d}(a)), the variables in the $\hat x$ and $\hat y$ directions cannot be separated, and the Bloch states are written as
\begin{eqnarray}\label{e13-2d}
\left|{n,\vec{q}}\right\rangle =\sum_{\ell_1,\ell_2}c_{\ell_1,\ell_2}\left|{\ell_1 \vec{b}_1+\ell_2 \vec{b}_2+\vec{q}}\right\rangle,
\end{eqnarray}
where $\vec{b}_1=\sqrt{3}k\hat{x}$ and $\vec{b}_2=\sqrt{3}k(-\frac{1}{2}\hat{x}-\frac{\sqrt{3}}{2}\hat{y})$. For the target state $\left| {\psi_a} \right\rangle =\sum_{n}\gamma_{n}\left|{n,\vec{q}}\right\rangle=\left|{1,0}\right\rangle$, we can apply the same time sequence to the three travelling beams, as shown in Fig.~\ref{f3_2d}(b). The calculated momentum distribution for $V_0=10E_r$ is shown in Fig.~\ref{f3_2d}(c). Using the time sequence $(t_{11},t_{12},t_{21},t_{22})=(\textbf{6},22,\textbf{7},10)\,\mu$s, we reach a theoretical fidelity of $\zeta=0.991$ for $V_0=10E_r$. The corresponding experimental NAS image, shown in Fig.~\ref{f3_2d}(d), agrees with the theoretical result.
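The geometry of the reciprocal vectors can be checked numerically. The sketch below (variable names are ours) uses the definitions of $\vec{b}_1$ and $\vec{b}_2$ above to build the plane-wave momentum grid $\ell_1\vec{b}_1+\ell_2\vec{b}_2$ entering Eq.~(\ref{e13-2d}), and confirms that both vectors have length $\sqrt{3}k$ and enclose an angle of $120^\circ$:

```python
import numpy as np

k = 1.0  # laser wave number, in units of k
b1 = np.sqrt(3) * k * np.array([1.0, 0.0])
b2 = np.sqrt(3) * k * np.array([-0.5, -np.sqrt(3) / 2])

def momentum_grid(lmax):
    """Plane-wave momenta l1*b1 + l2*b2 appearing in the Bloch expansion."""
    return {(l1, l2): l1 * b1 + l2 * b2
            for l1 in range(-lmax, lmax + 1)
            for l2 in range(-lmax, lmax + 1)}

norm1 = np.linalg.norm(b1)
norm2 = np.linalg.norm(b2)
angle = np.degrees(np.arccos(b1 @ b2 / (norm1 * norm2)))
```

The grid values are the momentum components at which the absorption peaks of Figs.~\ref{f3_2d}(c,d) and \ref{f4_2d} appear.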
The corresponding theoretical population distributions in momentum space for higher bands of the 2D triangular lattice are shown in Fig.~\ref{f4_2d}(a), where $\left| n=i \right\rangle$ ($i=2,3,\ldots$) represents the $i$th eigenstate with zero quasi-momentum, and $\left|n=3+4\right\rangle$ represents the superposition of the degenerate states $\left| n=3 \right\rangle$ and $\left| n=4 \right\rangle$. For instance, for the target state $\left| {\psi_a} \right\rangle =\left|{7,0}\right\rangle$, we obtain $\zeta=0.92$ using the time sequence $(t_{11},t_{12},t_{21},t_{22})=(\textbf{22.1},37.9,\textbf{79.9},35.6)\,\mu$s at the lattice depth $V_0=10E_r$. The experimental results are shown in Fig.~\ref{f4_2d}(b). Loading atoms into other excited states requires more complicated pulse sequences with different phases.
\begin{figure}
\includegraphics[width=0.95\textwidth]{f4_2d.pdf}
\caption{ (a) The calculated population distribution in momentum space for higher bands of the 2D triangular lattice, where $\left| n=i \right\rangle$ ($i=2,3,\ldots$) represents the $i$th eigenstate. (b) Experimental population image for $|n=7\rangle$.}\label{f4_2d}
\end{figure}
\subsection{3D optical lattice}
For the simplest 3D cubic OL, the wave function separates in the $\hat{x}$, $\hat{y}$ and $\hat{z}$ directions, so we can again load the BEC into arbitrary target states. A more complicated 3D lattice, composed of a 2D triangular lattice in the $\hat{x}$-$\hat{y}$ plane with $\lambda=1064\,$nm and a 1D lattice in the $\hat{z}$ direction with $\lambda=852\,$nm, is shown in Fig.~\ref{f5_2d}(a). For this OL we can combine the 1D and 2D sequences. Fig.~\ref{f5_2d}(b) shows the different time sequences in the $\hat{x}$-$\hat{y}$ plane and the $\hat{z}$ direction. The atoms can be transferred from the harmonic trap into the S-band of the OL in both the $\hat{x}$-$\hat{y}$ plane and the $\hat{z}$ direction, as shown in Fig.~\ref{f5_2d}(c), or into the S-band in the $\hat{x}$-$\hat{y}$ plane and the D-band in the $\hat{z}$ direction, as shown in Fig.~\ref{f5_2d}(d). The time sequences in Fig.~\ref{f5_2d}(c) are $(\textbf{6},22,\textbf{7},10)\,\mu$s in the $\hat{x}$-$\hat{y}$ plane and $(\textbf{24.5},28.8,\textbf{8.1},2.2)\,\mu$s in the $\hat{z}$ direction; those in Fig.~\ref{f5_2d}(d) are $(\textbf{6},22,\textbf{7},10)\,\mu$s in the $\hat{x}$-$\hat{y}$ plane and $(\textbf{5.5}, 21.0, \textbf{13.0}, 6.1)\,\mu$s in the $\hat{z}$ direction. The experimental results agree with the theoretical calculations.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\textwidth]{f5_2d.pdf}
\end{center}
\caption{ (a) Sketch of the 3D triangular OL: an optical standing wave with $\lambda=852\,$nm parallel to the $\hat{z}$-axis, and three travelling waves with $\lambda=1064\,$nm intersecting in the $\hat{x}$-$\hat{y}$ plane. (b) The time sequences in the $\hat{x}$-$\hat{y}$ plane and the $\hat{z}$ direction are different. (c) The calculated and experimental populations for the S-band of the 3D lattice: (c1) the calculated momentum distribution in the $\hat{x}$-$\hat{z}$ plane; (c2) and (c3) absorption images measured by NAS along the $\hat{y}$ and $\hat{z}$ directions, respectively. (d) The case of the S-band in the $\hat{x}$-$\hat{y}$ plane and the D-band in the $\hat z$ direction; (d1)-(d3) are analogous to (c1)-(c3).
}\label{f5_2d}
\end{figure}
\section{Conclusions}\label{Conclusions}
In summary, we have presented a method for the efficient preparation of a BEC in different bands of an optical lattice within a few tens of microseconds. This shortcut stems from nonholonomic coherent control: it is composed of pulse sequences that are imposed on the system before the OL is switched on and are fully optimised for high fidelity and robustness. With our approach, the BEC can be prepared in either pure Bloch states or superpositions of states of different bands.
Furthermore, we have shown that this shortcut can also be successfully applied to 2D and 3D OLs. The experimental results are well described by the theoretical calculations. Because the pulse durations are short, atom-atom interactions can be neglected during the design of the pulse sequences; numerical results show that the interaction changes the fidelity by less than $1\%$ for our designed time sequences. This efficient shortcut not only has applications in controllable quantum systems and quantum information processing, but is also helpful for the study of orbital optical lattices, the simulation of condensed-matter systems, and precision measurements.
\ack{ We thank L. Yin, P. Zhang, B. Wu, G. J. Dong, H. W. Xiong and X. Chen for useful discussions. This work is supported by the National Key Research and Development Program of China (Grant
No. 2016YFA0301501), and the NSFC (Grants No.11334001, No.61475007 and No. 61727819).
JS acknowledges support by the European Research Council, ERC-AdG, Quantum Relax. }
\newpage
\section*{References}
\section{Introduction}
\emph{Causality} is a fundamental concept in distributed computing \cite{Attiya:2004,Raynal:2013,Lynch:1996,Tel:2001}. In his influential paper \cite{Lamport78}, Lamport postulated that events in an execution of a distributed system are partially ordered by what is commonly referred to as the happens-before or causal-precedence relation. Two events that are related in the partial order can be considered \emph{causally dependent}. Tightly related is the notion of a snapshot, or global system state, which corresponds to a ``lateral cut'' through the partial order. Snapshot computations are at the heart of many distributed algorithms such as deadlock and termination detection, checkpointing, or monitoring. However, they are intricate due to the absence of a shared memory and unpredictable delay of message delivery, and they continue to constitute a fundamental research area \cite{Raynal:2013}.
\smallskip
A variety of techniques exist to obtain a consistent view of the global system state, ranging from time-stamping to ``gossiping''.
The aim of the latter is to keep track of the latest information that a process has about all other processes.
Interestingly, gossip protocols and related techniques such as asynchronous mappings have also been exploited in formal methods, in particular when it comes to establishing the expressive power of an automata model \cite{MukundS97,CMZ93,MukundKS03,DolevS97}. In particular, gossip protocols are the key to simulating high-level specifications, which include message sequence graphs and monadic second-order logic \cite{HenriksenJournal,GKM06,Kuske01,Zielonka87,tho90traces}. All these techniques and algorithms, however, require that communication be synchronous or accomplished through FIFO channels with \emph{limited} capacity.
Now, it is a standard assumption in distributed computing that channels are a priori unbounded (cf.\ \cite{Raynal:2013,Tel:2001}). In this paper, we consider the gossip problem in a message-passing environment where a finite number of processes communicate through \emph{unbounded} point-to-point FIFO channels. The problem can be stated as follows:
\begin{center}
\parbox{0.6\textwidth}{ \it Whenever process $q$ receives a message from process $r$, $q$ has to decide, for all processes $p$, whether it has more recent information on $p$ than $r$. }
\end{center}
Equivalently, $q$ has to output the most recent local state of $p$ that is still in its causal past.
The gossip protocol is superimposed on an existing system.
It is \emph{passive} (also reactive or observational) in the sense that it can add information to messages that \emph{are sent anyway}.
It is neither allowed to initiate extra communications nor to suspend the system activity.
This is fundamentally different from classical snapshot algorithms such as the one by Chandy and Lamport \cite{ChandyL85}, where the system is allowed to intersperse new send and receive events. In fact, like \cite{MukundS97,CMZ93,MukundKS03}, we will impose additional requirements: Both the set of messages and the set of local states must be finite. Besides being a natural assumption, this will allow us to exploit the gossip protocol to compare the expressive power of temporal logics and message-passing systems.
However, we will show that, unfortunately, there is no \emph{deterministic} gossip protocol.
This impossibility result is in contrast to the deterministic protocols for synchronous communication or
message-passing environments with bounded channels \cite{MukundS97,CMZ93,MukundKS03,DolevS97}.
On the positive side, and as our main contribution, we provide a non-deterministic gossip protocol: For every possible communication scenario,
\begin{itemize}
\item there is an accepting run that produces the correct output (i.e., the correct latest information);
\item there may be system runs that do not produce the correct output, but
these runs will be rejected by our gossip protocol.
\end{itemize}
The (non-deterministic) gossip protocol is an important step towards a better understanding of the expressive power of communicating finite-state machines (CFMs), which are a classical model of message-passing systems \cite{Brand1983}.
From a logical point of view, maintaining the latest information in a distributed system is a first-order property that requires \emph{three} variables: An event $e$ on process $p$ is the most recent one in the causal past of an event $f$ if all other events $g$ on $p$ that are in the causal past of $f$ are also in the past of~$e$.
Unfortunately, it is not known whether first-order formulas can always be translated into communicating finite-state machines.
However, using our gossip protocol, we show that we can deal with all formulas from classical temporal logics that have been studied for concurrent systems in the realm of partial orders \cite{Thiagarajan94,GK-fi07,DiekertG06}.
Since gossiping has been employed for implementing other high-level specifications (cf.\ \cite{Mukund12a}), we believe that our procedure can be of interest in other contexts, too, and be used to simplify or even generalize existing results.
\smallskip
To summarize, the motivation of this work comes from distributed algorithms and formal methods.
On the one hand, we tackle an important problem from distributed computing.
On the other hand, our results shed some light on the expressive power of message-passing systems.
In fact, previous logical studies of \CFMs with unbounded FIFO channels
in terms of existential MSO logic (without the happens-before relation and, respectively, restricted to two first-order variables) and propositional dynamic logic \cite{BolligJournal,BKM-lmcs10,BFG-stacs18} do not allow us to solve the gossip problem or to show that CFMs capture the above-mentioned linear-time temporal logics.
\subparagraph{Outline.}
The paper is structured as follows:
In Section~\ref{sec:prel}, we define communicating finite-state machines (CFMs), a fundamental model of message-passing systems. The gossip problem is introduced in Section~\ref{sec:gossip}. Our (non-deterministic) solution to the gossip problem is distributed over two parts, Sections~\ref{sec:paths} and \ref{sec:preorder-cfm}. In fact, it is obtained as an instance of a more general approach, in which we are able to compare the latest information transmitted along paths described by path expressions. This general solution finally allows us to translate formulas from linear-time temporal logic into CFMs (Section~\ref{sec:logic}). We conclude in Section~\ref{sec:conclusion}.
\section{Preliminaries}\label{sec:prel}
\subparagraph{Communicating Finite-State Machines.}
We consider a distributed system with a fixed finite set of processes $\Procs$.
Processes are connected in a communication network that contains a FIFO channel
from every process $p$ to any other process $q$ such that $p \neq q$.
We also assume a finite set $\Sigma$ of \emph{labels}, which provide information
about events in a system execution such as ``enter critical region'' or ``output some value''.
In a communicating finite-state machine, each process $p \in \Procs$ can
perform local actions, or send/receive messages from a finite set of messages $\Msg$.
Process $p$ is represented as a finite transition system
$\A_p = (S_p,\init_p,\Delta_p)$ where $S_p$ is the finite set of (local) states,
$\init_p \in S_p$ is the initial state, and $\Delta_p$ is the transition relation.
A transition in $\Delta_p$ is of the form $t=(s,\gamma,s')$ where $s,s' \in S_p$ are the source
state and the target state, referred to as $\source(t)$ and $\target(t)$,
respectively. Moreover, $\gamma$ determines the effect of $t$. First, $\gamma$ may
be of the form $\loc{a}$ with $a \in \Sigma$.
In that case, $t$ performs a local computation that does not involve any communication primitive. We let $\tlabel(t) = a$.
Second, $\gamma$ may be of the form $\send{a}{\msg}{q}$. Then, in addition to performing $a \in \Sigma$, process $p$ sends message $\msg \in \Msg$ to process $q \in \Procs \setminus \{p\}$.
More precisely, $\msg$ is placed in the FIFO channel from $p$ to $q$.
We let $\receiver(t) = q$, $\tmsg(t) = \msg$, and $\tlabel(t) = a$.
Finally, if $\gamma = \rec{a}{\msg}{q}$, then $p$ receives message $\msg$ from $q$,
and we let $\sender(t) = q$, $\tmsg(t) = \msg$, and $\tlabel(t) = a$.
In addition, our system is equipped with an acceptance condition. In order for
an execution to be accepting, all channels have to be empty and the collection
of local states in which processes terminate must belong to a set $\Acc
\subseteq \prod_{p \in \Procs} S_p$. We call the tuple $\mathcal{C} = ((\A_p)_{p \in
\Procs},\Msg,\Acc)$ a \emph{communicating finite-state machine (CFM)} over $\Procs$ and $\Sigma$.
\begin{example}\label{ex:cfm}
Consider the simple CFM depicted in Figure~\ref{fig:cfm}.
The set of processes is $\Procs = \{p,q,r\}$.
Moreover, we have $\Sigma = \{\bbullet,\abullet,\resizebox{!}{1.2ex}{$\diamond$}\}$ and $\Msg = \{\bbullet,\abullet\}$.
Process $p$ sends messages to $q$ and $r$. Each message can be either
$\bbullet$ or $\abullet$, and the message sent is made ``visible'' in terms of $\Sigma$.
Process $r$ simply forwards every message it receives to $q$.
In any case, the action is $\resizebox{!}{1.2ex}{$\diamond$}$, which means that we do not want to reason about the forwarding itself.
Finally, $q$ receives and ``outputs'' messages from $p$ and $r$ in any order. Note that, in this example, there are no local transitions,
i.e., every transition is either sending or receiving.
\end{example}
\usetikzlibrary{positioning,automata}
\begin{figure}[h]
\centering
\begin{tikzpicture}[shorten >=1pt,node distance=2cm,on grid, semithick,>=stealth]
\tikzstyle{every state}=[draw=black!50,very thick,fill=white, circle, minimum size=1pt]
\node[state,initial,accepting, initial text=$p$, initial above] (q_0) at (0,0) {$s_0^p$};
\path[->] (q_0) edge [loop right] node [right]
{$\!\!\!\!\begin{array}{l}\send{\bbullet}{\bbullet}{q}\\%
\send{\bbullet}{\bbullet}{r}\\%
\send{\abullet}{\abullet}{q}\\%
\send{\abullet}{\abullet}{r}\\%
\end{array}$} (q_0);
\node[state,initial,accepting, initial text=$r$, initial above] (q_0) at (5.8,0) {$s_0^r$};
\node[state] (q_1) [left=of q_0] {$s_1^r$};
\node[state] (q_2) [right=of q_0] {$s_2^r$};
\path[->] (q_0) edge [bend right=30] node [above] {$\rec{\resizebox{!}{1.2ex}{$\diamond$}}{\bbullet}{p}$} (q_1)
(q_1) edge [bend right=30] node [below] {$\send{\resizebox{!}{1.2ex}{$\diamond$}}{\bbullet}{q}$} (q_0)
(q_0) edge [bend left=30] node [above] {$\rec{\resizebox{!}{1.2ex}{$\diamond$}}{\abullet}{p}$} (q_2)
(q_2) edge [bend left=30] node [below] {$\send{\resizebox{!}{1.2ex}{$\diamond$}}{\abullet}{q}$} (q_0);
\node[state,initial,accepting, initial text=$q$, initial above] (q_0) at (10,0) {$s_0^q$};
\path[->] (q_0) edge [loop right] node [right]
{$\!\!\!\!\begin{array}{l}\rec{\bbullet}{\bbullet}{p}\\%
\rec{\bbullet}{\bbullet}{r}\\%
\rec{\abullet}{\abullet}{p}\\%
\rec{\abullet}{\abullet}{r}\\%
\end{array}$} (q_0);
\end{tikzpicture}
\caption{A communicating finite-state machine\label{fig:cfm}}
\end{figure}
\subparagraph{Message Sequence Charts.}
An execution of $\mathcal{C}$ can be described by a diagram as depicted in Figure~\ref{fig:msc}.
Process $p$ performs eight transitions, alternately sending a message to $q$ and $r$.
Note that the execution does not keep track of states and messages (unless made ``visible'' by means of $\Sigma$).
Let us describe a structure like in Figure~\ref{fig:msc} formally.
We have a nonempty finite set $E$ of \emph{events} (in the example, $E=\{e_0,\ldots,e_7,g_0,\ldots,g_7,f_0,\ldots,f_7\}$).
With each event, we associate its process and an action from $\Sigma$, i.e., we have mappings $\mathit{loc}: E \to \Procs$ and $\lambda: E \to \Sigma$.
We let $E_p := \{e \in E \mid \mathit{loc}(e) = p\}$ be the set of events executed by process $p$.
A binary relation ${\prel} \subseteq E \times E$ connects consecutive events of a process:
For all $(e,f) \in {\prel}$, there is $p \in \Procs$ such that both $e$ and $f$ are in $E_p$.
Moreover, for all $p \in \Procs$, ${\prel} \cap (E_p \times E_p)$ is the direct successor relation of some total order on $E_p$.
Finally, the message relation ${\mrel} \subseteq E \times E$ connects a pair of events that represent a message exchange.
We require that
\begin{itemize}
\item every event belongs to at most one pair from $\mrel$, and
\item for all $(e,f),(e',f') \in {\mrel}$ such that $e,e' \in E_p$ and $f,f' \in E_q$, we have
both $p \neq q$ and (FIFO) $e \prel^\ast e'$ iff $f \prel^\ast f'$.
\end{itemize}
Finally, ${\le} := ({\prel} \cup {\mrel})^\ast$ must be a partial order.
Its strict part is denoted ${<} = ({\prel} \cup {\mrel})^+$.
We call $M=(E,\prel,\mrel,\mathit{loc},\lambda)$ a \emph{message sequence chart (MSC)} over $\Procs$ and $\Sigma$. The set of message sequence charts is denoted by $\MSCs{\Procs}{\Sigma}$.
\begin{example}\label{ex:msc}
Let us come back to the MSC from Figure~\ref{fig:msc}.
We have $\mathit{loc}(e_2) = p$, $\lambda(e_2) = \abullet$, $\lambda(f_2) = \bbullet$,
and $\lambda(g_i) = \resizebox{!}{1.2ex}{$\diamond$}$ for all $i \in \{0,\ldots,7\}$.
The process relation restricted to $p$ is $e_0 \prel e_1 \prel \ldots \prel e_7$.
We also have $g_0 \prel g_1 \prel \ldots$ and $f_0 \prel f_1 \prel \ldots$.
Concerning the message relation, $e_4 \mrel f_5$ and $e_7 \mrel g_6$, among others.
\end{example}
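The partial order ${\le}$ can be computed mechanically as the reflexive-transitive closure of ${\prel} \cup {\mrel}$. The following self-contained sketch (hypothetical event names, not part of the formal development) does this for a small four-event MSC:

```python
def causal_order(events, prel, mrel):
    """Reflexive-transitive closure of prel-union-mrel, i.e. <= ."""
    reach = {(e, e) for e in events} | set(prel) | set(mrel)
    for m in events:                       # Warshall-style closure
        for a in events:
            if (a, m) in reach:
                for b in events:
                    if (m, b) in reach:
                        reach.add((a, b))
    return reach

# Two-process example: p runs e0 -> e1, q runs f0 -> f1,
# with messages e0 -> f0 and e1 -> f1 (FIFO order preserved).
events = ["e0", "e1", "f0", "f1"]
prel = [("e0", "e1"), ("f0", "f1")]
mrel = [("e0", "f0"), ("e1", "f1")]
leq = causal_order(events, prel, mrel)
```

In the result, `("e0", "f1")` is in `leq` (causally ordered), whereas `f0` and `e1` are concurrent and therefore unrelated.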
\tikzstyle{acirc} = [draw, fill, circle, inner sep=0, minimum size=0.25cm, orange!60, draw=black]
\tikzstyle{bcirc} = [draw, fill, rectangle, inner sep=0, minimum size=0.2cm, blue!50, draw=black]
\tikzstyle{ncirc} = [draw, fill=white, diamond, inner sep=0, minimum size=0.3cm]
\begin{figure}[h]
\centering
\begin{tikzpicture}[semithick,>=stealth]
\draw[->] (-0.25,0) -- (12.5,0);
\draw[->] (-0.25,1) -- (12.5,1);
\draw[->] (-0.25,2) -- (12.5,2);
\node[bcirc,label=above:$e_0$] (e0) at (0.25,2) {};
\node[bcirc,label=below:$f_0$] (f0) at (0.25,0) {};
\draw[->] (e0) -- (f0);
\node[bcirc,label=above:$e_1$] (e1) at (1.125,2) {};
\node[ncirc,label=below:$g_0$] (g0) at (1.125,1) {};
\draw[->] (e1) -- (g0);
\node[acirc,label=above:$e_2$] (e2) at (2,2) {};
\node[acirc,label=below:$f_1$] (f1) at (2,0) {};
\draw[->] (e2) -- (f1);
\node[ncirc,label=above:$g_1$] (g1) at (3,1) {};
\node[bcirc,label=below:$f_2$] (f2) at (3.75,0) {};
\draw[->] (g1) -- (f2);
\node[bcirc,label=above:$e_3$] (e3) at (4,2) {};
\node[ncirc,label=below:$g_2$] (g2) at (4,1) {};
\draw[->] (e3) -- (g2);
\node[ncirc,label=above:$g_3$] (g3) at (5,1) {};
\node[bcirc,label=below:$f_3$] (f3) at (5.5,0) {};
\draw[->] (g3) -- (f3);
\node[bcirc,label=above:$e_4$] (e4) at (5,2) {};
\node[bcirc,label=below:$f_5$] (f5) at (9,0) {};
\draw[->] (e4) -- (f5);
\node[acirc,label=above:$e_5$] (e5) at (6,2) {};
\node[ncirc,label=below:$g_4$] (g4) at (6,1) {};
\draw[->] (e5) -- (g4);
\node[ncirc,label=above:$g_5$] (g5) at (8,1) {};
\node[acirc,label=below:$f_4$] (f4) at (8,0) {};
\draw[->] (g5) -- (f4);
\node[bcirc,label=above:$e_6$] (e6) at (9,2) {};
\node[bcirc,label=below:$f_6$] (f6) at (10.5,0) {};
\draw[->] (e6) -- (f6);
\node[acirc,label=above:$e_7$] (e7) at (11,2) {};
\node[ncirc,label=below:$g_6$] (g6) at (11,1) {};
\draw[->] (e7) -- (g6);
\node[ncirc,label=above:$g_7$] (g7) at (12,1) {};
\node[acirc,label=below:$f_7$] (f7) at (12,0) {};
\draw[->] (g7) -- (f7);
\node at (-0.5,0) {$q$};
\node at (-0.5,1) {$r$};
\node at (-0.5,2) {$p$};
\end{tikzpicture}
\caption{A message sequence chart\label{fig:msc}}
\end{figure}
\subparagraph{Runs and the Language of a CFM.}
Let $\mathcal{C} = ((\A_p)_{p \in \Procs},\Msg,\Acc)$ be a CFM and $M=(E,\prel,\mrel,\mathit{loc},\lambda)$ be an MSC over $\Procs$ and $\Sigma$. A \emph{run} of $\mathcal{C}$ on $M$ associates with
every event $e \in E_p$ ($p\in\Procs$) the transition $\rho(e) \in\Delta_p$ that
is executed at $e$. We require that
\begin{enumerate}
\item for all events $e \in E$, we have $\tlabel(\rho(e)) = \lambda(e)$,
\item for all processes $p \in \Procs$ such that $E_p \neq \emptyset$, we have $\source(\rho(e)) = \init_p$ where $e$ is the first event of $p$ (i.e., $e$ does not have a $\prel$-predecessor),
\item for all process edges $(e,f) \in {\prel}$, we have $\target(\rho(e)) = \source(\rho(f))$,
\item for all local events $e \in E$ ($e$ is neither a send nor a receive), $\rho(e)$ is a local transition, and
\item for all message edges $(e,f) \in {\mrel}$, say, with $e \in E_p$ and $f \in
E_q$, $\rho(e)\in\Delta_p$ is a send transition and $\rho(f)\in\Delta_q$ is a
receive transition such that $\tmsg(\rho(e)) = \tmsg(\rho(f))$,
$\receiver(\rho(e)) = q$, and $\sender(\rho(f)) = p$.
\end{enumerate}
To determine whether $\rho$ is accepting, we collect the last state of every
process $p$. If $E_p \neq \emptyset$, then let $s_p$ be $\target(\rho(e))$
where $e$ is the last event of $E_p$. Otherwise, let $s_p = \init_p$. Now,
$\rho$ is said to be \emph{accepting} if $(s_p)_{p \in \Procs} \in \Acc$.
Finally, the \emph{language} of $\mathcal{C}$ is $L(\mathcal{C}) := \{M \in \MSCs{\Procs}{\Sigma} \mid$ there is an accepting run of $\mathcal{C}$ on $M\}$.
For example, the MSC from Figure~\ref{fig:msc} is in the language of the \CFM from Figure~\ref{fig:cfm}.
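The run conditions above can be checked mechanically. The following is a minimal, illustrative Python sketch; the encodings (tuples for events and transitions, explicit edge sets) are our own and not part of the paper's formalism:

```python
# Hypothetical encoding of an MSC and a candidate run.
# Events are (process, index) pairs; a transition is a 5-tuple
# (source state, label, kind, payload, target state), kind in {"send", "recv", "loc"};
# for sends the payload is (receiver, message), for receives (sender, message).

events = {("p", 0): "a", ("q", 0): "a"}            # lambda: event -> label
proc_edges = []                                     # process edges e -> f
msg_edges = [(("p", 0), ("q", 0))]                  # message edges e -> f
init = {"p": "s0", "q": "t0"}                       # initial state of each process
rho = {
    ("p", 0): ("s0", "a", "send", ("q", "m"), "s1"),
    ("q", 0): ("t0", "a", "recv", ("p", "m"), "t1"),
}

def run_ok(events, proc_edges, msg_edges, init, rho):
    # Condition 1: the transition at e carries the label of e.
    if any(rho[e][1] != lab for e, lab in events.items()):
        return False
    # Condition 2: the first event of each process starts in the initial state.
    firsts = {e for e in events if all(f != e for (_, f) in proc_edges)}
    if any(rho[e][0] != init[e[0]] for e in firsts):
        return False
    # Condition 3: along process edges, target and source states chain up.
    if any(rho[e][4] != rho[f][0] for (e, f) in proc_edges):
        return False
    # Condition 5: message edges pair a send with a matching receive.
    # (Condition 4, on local events, is omitted in this toy example.)
    for e, f in msg_edges:
        te, tf = rho[e], rho[f]
        if not (te[2] == "send" and tf[2] == "recv"
                and te[3] == (f[0], tf[3][1]) and tf[3] == (e[0], te[3][1])):
            return False
    return True
```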
\section{The Gossip Problem}\label{sec:gossip}
We are looking for a protocol (a CFM) that solves the gossip problem: When a
process $q$ receives a message at some event $f \in E_q$, it should be able to tell
the most recent information it has about another process, say $p$. More
precisely, it should determine the label $\lambda(e)$ of the last (i.e., most
recent) event $e$ of $E_p$ that is in the (strict) past of $f$. For example,
consider the MSC in Figure~\ref{fig:gossip-msc} (for the moment, we ignore the bottom
part of the figure). At the time of executing event
$f_5$, process $q$ is supposed to ``output'' $\abullet$, since the most recent
event on $p$ is $e_5$.
Let us formally define what it means to be the \emph{most recent} event.
For all $f \in E$ and $p \in \Procs$, we define
$\rPastp p f = \{e \in E_p \mid e < f\}$
to be the set of events on process $p$ that are in the \emph{past} of $f$.
We let
\[
\lastp p f = \begin{cases}
\max (\rPastp p f)& \text{if } \rPastp p f \neq \emptyset \\
\botp p & \text{otherwise} \, .
\end{cases}
\]
Thus, $\lastp p f$ is the most recent event of $p$ in the past of $f$.
\begin{example}
Consider the MSC from Figure~\ref{fig:gossip-msc}. We have
$\rPastp p {f_5} = \{e_0,\ldots,e_5\}$ and, therefore, $\lastp p {f_5} = e_5$.
Moreover, $\lastp p {f_2} = e_2$.
\end{example}
The CFM from Figure~\ref{fig:cfm} (cf.\ Example~\ref{ex:cfm}) can be seen as a first (na{\"i}ve) attempt to solve the gossip problem. When $q$ receives a message from $p$, it ``outputs'' the color of the sending event, and when $q$ receives a message from $r$, it outputs the color transmitted by $r$.
However, both rules are erroneous: Consider the MSC in Figure~\ref{fig:msc}.
At $f_2$ and $f_5$, process $q$ should have announced $\abullet$, but it outputs $\bbullet$. Actually, what we would like to have is the behavior depicted in Figure~\ref{fig:gossip-msc} where, for all $i \in \{0,\ldots,7\}$, we get $\lambda(f_i) = \lambda(\lastp p {f_i})$.
\begin{figure}[h]
\centering
\begin{tikzpicture}[semithick,>=stealth]
\draw[->] (-0.25,0) -- (12.5,0);
\draw[->] (-0.25,1) -- (12.5,1);
\draw[->] (-0.25,2) -- (12.5,2);
\node[bcirc,label=above:$e_0$] (e0) at (0.25,2) {};
\node[bcirc,label=below:$f_0$] (f0) at (0.25,0) {};
\draw[->] (e0) -- (f0);
\node[bcirc,label=above:$e_1$] (e1) at (1.125,2) {};
\node[ncirc,label=below:$g_0$] (g0) at (1.125,1) {};
\draw[->] (e1) -- (g0);
\node[acirc,label=above:$e_2$] (e2) at (2,2) {};
\node[acirc,label=below:$f_1$] (f1) at (2,0) {};
\draw[->] (e2) -- (f1);
\node[ncirc,label=above:$g_1$] (g1) at (3,1) {};
\node[acirc,label=below:$f_2$] (f2) at (3.75,0) {};
\draw[->] (g1) -- (f2);
\node[bcirc,label=above:$e_3$] (e3) at (4,2) {};
\node[ncirc,label=below:$g_2$] (g2) at (4,1) {};
\draw[->] (e3) -- (g2);
\node[ncirc,label=above:$g_3$] (g3) at (5,1) {};
\node[bcirc,label=below:$f_3$] (f3) at (5.5,0) {};
\draw[->] (g3) -- (f3);
\node[bcirc,label=above:$e_4$] (e4) at (5,2) {};
\node[acirc,label=below:$f_5$] (f5) at (8.75,0) {};
\draw[->] (e4) -- (f5);
\node[acirc,label=above:$e_5$] (e5) at (6,2) {};
\node[ncirc,label=below:$g_4$] (g4) at (6,1) {};
\draw[->] (e5) -- (g4);
\node[ncirc,label=above:$g_5$] (g5) at (7.5,1) {};
\node[acirc,label=below:$f_4$] (f4) at (7.5,0) {};
\draw[->] (g5) -- (f4);
\node[bcirc,label=above:$e_6$] (e6) at (9,2) {};
\node[bcirc,label=below:$f_6$] (f6) at (10.5,0) {};
\draw[->] (e6) -- (f6);
\node[acirc,label=above:$e_7$] (e7) at (11,2) {};
\node[ncirc,label=below:$g_6$] (g6) at (11,1) {};
\draw[->] (e7) -- (g6);
\node[ncirc,label=above:$g_7$] (g7) at (12,1) {};
\node[acirc,label=below:$f_7$] (f7) at (12,0) {};
\draw[->] (g7) -- (f7);
\node at (-0.5,0) {$q$};
\node at (-0.5,1) {$r$};
\node at (-0.5,2) {$p$};
\node at (2,-1.2) {\scalebox{0.75}{$\begin{array}{c}\fa{\p}{{\mathrel{\text{$\xrightarrow{\ast}$}}}\p'}(f_1)\\=f_3\end{array}$}};
\node at (3.75,-1.2) {\scalebox{0.75}{$\begin{array}{c}\fa{\p}{{\mathrel{\text{$\xrightarrow{\ast}$}}}\p'}(f_2)\\=f_3\end{array}$}};
\node at (5.5,-1.2) {\scalebox{0.75}{$\begin{array}{c}\fa{\p}{{\mathrel{\text{$\xrightarrow{\ast}$}}}\p'}(f_3)\\=f_3\end{array}$}};
\node at (7.2,-1.2) {\scalebox{0.75}{$\begin{array}{c}\fa{\p'}{{\mathrel{\text{$\xrightarrow{+}$}}}\p}(f_4)\\=f_6\end{array}$}};
\node at (8.8,-1.2) {\scalebox{0.75}{$\begin{array}{c}\fa{\p'}{{\mathrel{\text{$\xrightarrow{+}$}}}\p}(f_5)\\=f_6\end{array}$}};
\node at (10.4,-1.2) {\scalebox{0.75}{$\begin{array}{c}\fa{\p'}{{\mathrel{\text{$\xrightarrow{+}$}}}\p}(f_6)\\=f_6\end{array}$}};
\node at (12,-1.2) {\scalebox{0.75}{$\begin{array}{c}\fa{\p}{{\mathrel{\text{$\xrightarrow{\ast}$}}}\p'}(f_7)\\=f_7\end{array}$}};
\node at (0.25,-2) {\scalebox{0.9}{$\p' \prec_{f_0} \p$}};
\node at (2,-2) {\scalebox{0.9}{$\p' \prec_{f_1} \p$}};
\node at (3.75,-2) {\scalebox{0.9}{$\p' \prec_{f_2} \p$}};
\node at (5.5,-2) {\scalebox{0.9}{$\p \preceq_{f_3} \p'$}};
\node at (7.5,-2) {\scalebox{0.9}{$\p \preceq_{f_4} \p'$}};
\node at (9,-2) {\scalebox{0.9}{$\p \preceq_{f_5} \p'$}};
\node at (10.5,-2) {\scalebox{0.9}{$\p' \prec_{f_6} \p$}};
\node at (12,-2) {\scalebox{0.9}{$\p \preceq_{f_7} \p'$}};
\node at (0.25,-2.5) {$\underbrace{\hspace{3.7em}}_{\textup{Lemma}~\ref{ple-char}.\ref{lem:initle}}$};
\node at (3.75,-2.5) {$\underbrace{\hspace{13.7em}}_{\textup{Lemma}~\ref{ple-char}.\ref{lem:gtle}}$};
\node at (9,-2.5) {$\underbrace{\hspace{12.2em}}_{\textup{Lemma}~\ref{ple-char}.\ref{lem:legt}}$};
\node at (12.,-2.5) {$\underbrace{\hspace{3.7em}}_{\textup{Lemma}~\ref{ple-char}.\ref{lem:gtle}}$};
\end{tikzpicture}
\caption{Comparison of $\p = {\mmove{p}{q}}{\mathrel{\text{$\xrightarrow{\ast}$}}}$ and $\p' = {\mmove{p}{r}}{\mathrel{\text{$\xrightarrow{\ast}$}}}{\mmove{r}{q}}{\mathrel{\text{$\xrightarrow{\ast}$}}}$\label{fig:gossip-msc}}
\end{figure}
Formally, we will treat ``outputs'' in terms of additional labels from another finite alphabet $\Xi$.
To do so, we consider \CFMs and MSCs over $\Procs$ and $\Sigma \times \Xi$. An MSC over $\Procs$ and $\Sigma \times \Xi$ is called an \emph{extended MSC}. It can be interpreted, in the expected way, as a pair $(M,\xi)$ where $M=(E,\prel,\mrel,\mathit{loc},\lambda)$ is an MSC over $\Procs$ and $\Sigma$, and $\xi: E \to \Xi$. If $(M,\xi)$ is accepted by the gossip \CFM, $\xi(e)$ shall provide the latest information that $e$ has about any other process. That is, $\Xi$ is the finite set of functions from $\Procs$ to $\Sigma \cup \{\bot\}$. We assume $\bot \not\in \Sigma$ and $\lambda(\bot) = \bot$.
We are now looking for a \CFM $\mathcal{A}_{\mathsf{gossip}}$ over $\Procs$ and $\Sigma \times \Xi$ that has the following property:
\begin{center}
\fbox{\parbox{0.85\textwidth}{\it
The language $L(\mathcal{A}_{\mathsf{gossip}})$ is the set of extended MSCs
$((E,\prel,\mrel,\mathit{loc},\lambda),\xi)$ such that, for all events $e \in E$,
$\xi(e)$ is the function from $\Procs$ to $\Sigma \uplus \{\bot\}$ defined
by $\xi(e)(p) = \lambda(\lastp p e)$.
}\;}
\end{center}
Thus, the gossip CFM $\mathcal{A}_{\mathsf{gossip}}$ allows a process to infer, at any time,
the most recent information that it has about all other processes with respect to the \emph{causal past}.
In fact, we will pursue a more general approach based on \emph{path expressions}.
A path expression allows us to define what we actually mean by ``causal past''.
More precisely, it acts as a filter that considers only events in the past that are
(co-)reachable via certain paths (e.g., visiting only certain processes or at least one event with a given label).
Path expressions and their properties are studied in Section~\ref{sec:paths}.
In Section~\ref{sec:preorder-cfm}, we construct a CFM
that, at any event, is able to tell which of two path expressions provides more recent information.
We then obtain $\mathcal{A}_{\mathsf{gossip}}$ as a corollary.
\section{Comparing Path Expressions}\label{sec:comp}\label{sec:paths}
In this section, we introduce path expressions and establish some of their properties.
\subsection{Path Expressions}
Let us again look at our running example (cf.\ Figure~\ref{fig:gossip-msc}).
In the gossip problem, we need to know whether the most
recent information has been provided along a message from $p$ to $q$,
which will be represented by the path expression $\p = {\mmove{p}{q}}{\mathrel{\text{$\xrightarrow{\ast}$}}}$, or via the intermediate process $r$,
represented by the path expression $\p' = {\mmove{p}{r}}{\mathrel{\text{$\xrightarrow{\ast}$}}}{\mmove{r}{q}}{\mathrel{\text{$\xrightarrow{\ast}$}}}$.
We will write $\p \preceq_{f_5} \p'$
to describe the fact that $\last \p {f_5} \le \last {\p'} {f_5}$,
where $\last \p {f_5}=e_4$ and $\last {\p'} {f_5}=e_5$
denote the most recent events from which a $\p$-path and, respectively,
$\p'$-path to $f_5$ exist.
Let us be more formal. A path expression is simply a finite word over the
alphabet
$
\Gamma = \{ {\prel}, \mathrel{\text{$\xrightarrow{\ast}$}} \} \cup \{ \mmove p q \mid p,q \in \Procs$, $p \neq q \}
\cup \{\amove a \mid a \in \Sigma\}
$.
We let $\varepsilon$ be the empty word and introduce $\mathrel{\text{$\xrightarrow{+}$}}$ as a macro for the word ${{\prel}}{\mathrel{\text{$\xrightarrow{\ast}$}}}$.
Let $M = (E,\prel,\mrel,\mathit{loc},\lambda)$ be an MSC. For all path expressions $\p \in \Gamma^\ast$, we define a relation $\sem M \p \subseteq E \times E$ as follows:\\
{
\begin{minipage}[b]{0.3\textwidth}
\begin{align*}
\sem M {\varepsilon} & = \{ (e,e) \mid e \in E \} \\
\sem M {\amove a} & = \{ (e,e) \in E \times E \mid \lambda(e) = a \} \\
\sem M {\mmove p q} & = \{ (e,f) \in E_p \times E_q \mid e \mrel f \}
\end{align*}
\end{minipage}
\begin{minipage}[b]{0.3\textwidth}
\begin{align*}
\sem M {{\prel}} & = \{ (e,f) \in E \times E \mid e \prel f \} \\
\sem M {\mathrel{\text{$\xrightarrow{\ast}$}}} & = \{ (e,f) \in E \times E \mid e \mathrel{\text{$\xrightarrow{\ast}$}} f \}
\end{align*}
\end{minipage}\vspace{-0.6ex}
\begin{align*}
\sem M {\p\p'} & = \sem{M}{\p}\circ\sem{M}{\p'}
= \{ (e,g) \in E \times E \mid \exists f \in E:
(e,f)\in\sem{M}{\p} \land (f,g)\in\sem{M}{\p'} \} \, .
\end{align*}
}
\begin{example}
Consider the MSC $M$ from Figure~\ref{fig:gossip-msc}.
For $\p = {\mmove{p}{q}}{\mathrel{\text{$\xrightarrow{\ast}$}}}$ and $\p' = {\mmove{p}{r}}{\mathrel{\text{$\xrightarrow{\ast}$}}}{\mmove{r}{q}}{\mathrel{\text{$\xrightarrow{\ast}$}}}$,
we have $(e_4,f_5) \in \sem M {\p}$ and $(e_5,f_5) \in \sem M {\p'}$.
Moreover, $\sem M {\bbullet{\prel}\bbullet{\mmove{p}{q}}} = \{(e_3,f_5)\}$.
\end{example}
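The inductive semantics above is plain relation composition, which one can compute directly. A small Python sketch over a fragment of the example MSC (the encoding as sets of pairs, and the naive closure computation, are our own):

```python
# Hypothetical encoding of a fragment of the example MSC.
E = ["e4", "e5", "g4", "g5", "f4", "f5"]
proc_rel = {("g4", "g5"), ("f4", "f5")}             # direct process edges ->
# message edges, grouped by (sender, receiver):
msg = {("p", "q"): {("e4", "f5")},
       ("p", "r"): {("e5", "g4")},
       ("r", "q"): {("g5", "f4")}}

def star(rel):
    # [[->*]]: reflexive-transitive closure of the process edges.
    closure = {(e, e) for e in E} | set(rel)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

def compose(r1, r2):
    # [[pi pi']] = [[pi]] o [[pi']]
    return {(a, d) for (a, b) in r1 for (c, d) in r2 if b == c}

# pi = (p->q) ->*   and   pi' = (p->r) ->* (r->q) ->*
sem_pi = compose(msg[("p", "q")], star(proc_rel))
sem_pi2 = compose(compose(compose(msg[("p", "r")], star(proc_rel)),
                          msg[("r", "q")]), star(proc_rel))
```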
We say that a pair of processes $(p,q)$ is \emph{compatible} with $\p \in
\Gamma^\ast$ if $\p$ may describe a path from $p$ to $q$. Formally, we define
$\Comp{\p} \subseteq \Procs \times \Procs$ inductively as follows:
$\Comp{\varepsilon} = \Comp{\amove a} = \Comp{{\prel}} = \Comp{\mathrel{\text{$\xrightarrow{\ast}$}}} = \{(p,p) \mid p \in \Procs\}$, $\Comp{\mmove p q} = \{(p,q)\}$,
and $\Comp{\p \p'} = \Comp{\p} \circ \Comp{\p'}$, where $\circ$ denotes the usual product of binary relations.
Note that, for each $p$, there is at most one $q$ such that $(p,q) \in
\Comp{\p}$. Conversely, for each $q$, there is at most one $p$ such that $(p,q)
\in \Comp{\p}$. We denote by $\Paths_{p,q}$ the set of path expressions
$\p\in\Gamma^*$ such that $(p,q)\in\Comp{\p}$.
\begin{example}
We have $\Comp{{\mmove{p}{r}}{\mathrel{\text{$\xrightarrow{\ast}$}}}{\mmove{r}{q}}{\mathrel{\text{$\xrightarrow{\ast}$}}}} = \{(p,q)\}$,
$\Comp{{\mmove{p}{q}}{\mathrel{\text{$\xrightarrow{\ast}$}}}{\mmove{q}{p}}} = \{(p,p)\}$,
$\Comp{\bbullet{\prel}\bbullet{\mmove{p}{q}}} = \{(p,q)\}$, and
$\Comp{{\mmove{p}{q}}{\mathrel{\text{$\xrightarrow{\ast}$}}}{\mmove{r}{p}}} = \emptyset$.
\end{example}
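The inductive definition of $\Comp{\p}$ translates directly into code. A sketch, with path expressions encoded as lists of atomic letters (a hypothetical encoding of $\Gamma$):

```python
# Letters: ("msg", p, q) for a message edge, ("star",) for ->*,
# ("proc",) for ->, ("label", a) for <a>. Only "msg" changes the process.
PROCS = {"p", "q", "r"}
ID = {(x, x) for x in PROCS}

def comp(pi):
    # Comp(eps) = identity; each letter either keeps the process
    # (->, ->*, <a>) or jumps along a message (p -> q).
    rel = set(ID)
    for letter in pi:
        step = {(letter[1], letter[2])} if letter[0] == "msg" else ID
        rel = {(a, d) for (a, b) in rel for (c, d) in step if b == c}
    return rel

pi1 = [("msg", "p", "r"), ("star",), ("msg", "r", "q"), ("star",)]
pi2 = [("msg", "p", "q"), ("star",), ("msg", "r", "p")]
```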
Next, given $\p \in \Gamma^\ast$ and $e \in E$, we define $\last \p e$ and $\first{\p}{e}$, which denote the most recent (resp.\ very next) event from which there is a $\p$-path to $e$
(resp.\ to which there is a $\p$-path from $e$). We extend $\le$ with the new elements $\bot$ and $\top$ by setting $\bot < e < \top$ for all $e \in E$. As before, we will assume $\lambda(\bot) = \bot$. Moreover, $\lambda(\top) = \top$.
All events $f$ such that $\Pto M f \p
e$ (resp.\ $\Pto M e \p f$) are located on the same process. Hence, we can
define, with $\max\emptyset=\bot$ and $\min\emptyset = \top$:
\begin{align*}
\last{\p}{e} & =\max\,\sem{M}{\p}^{-1}(e)
= \max \{ f \in E \mid \Pto M f \p e \} \\
\first{\p}{e} & =\min\,\sem{M}{\p}(e)
= \min \{ f \in E \mid \Pto M e \p f \} \,.
\end{align*}
The next lemma states that $\lastf \p$ and $\firstf \p$ are monotone.
\begin{lemma}\label{lem:monotone}
Let $\p \in \Gamma^\ast$ and $e,f \in E$. The following hold:
\begin{enumerate}
\item If $\last \p e \neq \botp p$, $\last{\p}{f} \neq \botp p$, and $e\mathrel{\text{$\xrightarrow{\ast}$}} f$, then $\last \p e \le \last \p f$.
\item If $\first \p e \neq \topp q$, $\first{\p}{f} \neq \topp q$, and $e\mathrel{\text{$\xrightarrow{\ast}$}} f$, then $\first \p e \le \first \p f$.
\item If $\last{\p}{e}\neq\botp{p}$, then $\last{\p{\mathrel{\text{$\xrightarrow{\ast}$}}}}{e}=\last{\p}{e}$.
\item If $\first{\p}{e}\neq\topp{q}$, then $\first{{\mathrel{\text{$\xrightarrow{\ast}$}}}\p}{e}=\first{\p}{e}$.
\end{enumerate}
\end{lemma}
\begin{proof}
We show 1.\ and 3. The other two cases are analogous.
For 1., the proof is by induction on $\p$. We assume $\last \p e \neq \bot$
and $\last{\p}{f} \neq \bot$. The case $\p = \varepsilon$ is immediate.
Suppose $\p = \p' \mmove {r} q$. There exists
some $e' \in E_{r}$ such that $e' \mrel e$ and
$\last \p e = \last {\p'} {e'}$.
Similarly, there exists $f' \in E_{r}$ such that $f' \mrel f$ and
$\last \p f = \last {\p'} {f'}$.
Because of the FIFO ordering, we have $e' \mathrel{\text{$\xrightarrow{\ast}$}} f'$, and by induction
hypothesis, we get $\last \p e \le \last \p f$.
The cases $\p = \p' {{\prel}}$ and $\p = \p' \amove a$ are similar.
Suppose $\p=\p'{\mathrel{\text{$\xrightarrow{\ast}$}}}$. Due to $(\last{\p}{e},e)\in\sem{M}{\p}$ and $e\mathrel{\text{$\xrightarrow{\ast}$}} f$,
we have $(\last{\p}{e},f)\in\sem{M}{\p}$.
By definition of $\last \p f$, we then get $\last \p e \le \last \p f$.
For 3., we assume that $\last{\p}{e}\neq\botp{p}$. Since
$\sem{M}{\p}\subseteq\sem{M}{\p{\mathrel{\text{$\xrightarrow{\ast}$}}}}$, we get
$g=\last{\p}{e}\leq\last{\p{\mathrel{\text{$\xrightarrow{\ast}$}}}}{e}=g'$. Now, there is $e'$ such that
$g'=\last{\p}{e'}$ and $e'\mathrel{\text{$\xrightarrow{\ast}$}} e$. From 1., we deduce that $g'\leq g$, hence $g = g'$.
\end{proof}
Now, let us define formally when a path $\p'$ provides (strictly) more recent
information than a path $\p$. Fix $p,q \in \Procs$.
For all $e \in
E_q$ and $\p, \p' \in \Paths_{p,q}$, we let
\begin{align*}
\p \preceq_e \p'
&\qquad\text{if}\qquad
\last \p e \le \last {\p'} e
\\
\p \prec_e \p'
&\qquad\text{if}\qquad \last \p e < \last {\p'} e,\ \text{i.e., }
\p' \not \preceq_e \p
\, .
\end{align*}
The definition is illustrated in Figure~\ref{fig:gossip-msc}.
Recall that our goal is to construct a \CFM computing the label of $\lastp p e$
for all events $e \in E_q$. Later (in Section~\ref{sec:cfm-labels}), we show that,
for all $\p$, there exists a \CFM associating with each event $e$ the label
of $\last \p e$.
Thus, it will be enough to construct a \CFM that identifies, for each event
$e$, some $\p \in \Gamma^\ast$ such that $\last \p e = \lastp p e$.
Moreover, path expressions of bounded length will suffice: If $f < e$, then
there is a path from $f$ to $e$ that enters and leaves each process at most once.
To achieve our goal, we will build a \CFM $\Ale$ computing the total preorders
$\preceq_e$ (restricted to path expressions of bounded size) for all events $e$ on a given process $q$. In particular, $\Ale$ is
sufficient to determine, for all $e\in E_q$ and $p \in \Procs$, some
$\p\in\Paths_{p,q}$ such that $\lastp p e = \last \p e$. The idea is that
$\Ale$ first determines $\preceq_e$ for the minimal event $e$ in $E_q$. Then, for
all $\p,\p' \in \Paths_{p,q}$, it computes the set of events where the order
between $\p$ and $\p'$ is switched. In Figure~\ref{fig:gossip-msc}, these
switching events are $f_3$, $f_6$, and $f_7$. The next subsection provides a
characterization of the preorder that can then (in
Section~\ref{sec:gossip-cfm}) be implemented as a CFM.
\subsection{A Characterization of $\preceq_e$}
Given $p,q\in\Procs$ and $\p,\p' \in \Paths_{p,q}$, we define the function
$\fa \p {\p'}\colon E_q \to E_q \cup \{\bot,\top\}$ (omitting index $(p,q)$)
as follows: $\fa \p {\p'} (e) = \first {\p'} {\last \p e}$, with
$\first {\p'} {\bot} = \bot$.
So we have $\fa \p {\p'} (e) = f \in E_q$ if there is $g\in E_p$ such that
$\last \p e = g$ and $\first {\p'} g = f$,
$\fa \p {\p'} (e) = \bot$ if $\last \p e = \bot$,
and $\fa \p {\p'} (e) = \top$ if $\last \p e = g \in E_p$ but
$\first {\p'} g = \top$.
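Operationally, $\fa \p {\p'}$ composes a maximum over incoming $\p$-paths with a minimum over outgoing $\p'$-paths. A Python sketch over abstract relations (the explicit event order and the relation encoding are ours; the relations mirror the running example):

```python
BOT, TOP = "bot", "top"
order = ["e4", "e5", "f4", "f5"]        # a total order containing the events
pos = {e: i for i, e in enumerate(order)}

def last(sem, e):
    # pred_pi(e): the maximal g with (g, e) in [[pi]], or bottom.
    cands = [g for (g, h) in sem if h == e]
    return max(cands, key=pos.get) if cands else BOT

def first(sem, g):
    # succ_pi(g): the minimal f with (g, f) in [[pi]], or top.
    cands = [h for (g2, h) in sem if g2 == g]
    return min(cands, key=pos.get) if cands else TOP

def fa(sem_pi, sem_pi2, e):
    # fa_{pi,pi'}(e) = first_{pi'}(last_pi(e)), with first_{pi'}(bot) = bot.
    g = last(sem_pi, e)
    return BOT if g == BOT else first(sem_pi2, g)

# pi-paths into f5, and ->* pi'-paths, as in the example:
sem_pi = {("e4", "f5")}
sem_star_pi2 = {("e4", "f4"), ("e4", "f5"), ("e5", "f4"), ("e5", "f5")}
```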
From Lemma~\ref{lem:monotone}, we can deduce monotonicity of $\fa \p {\p'}$:
\begin{lemma}\label{lem:monotone2}
Suppose $e \mathrel{\text{$\xrightarrow{\ast}$}} f$ and $\fa \p {\p'} (e),\fa \p {\p'} (f) \in E_q$.
Then, $\fa \p {\p'} (e) \le \fa \p {\p'} (f)$.
\end{lemma}
\begin{example}
Consider, again, Figure~\ref{fig:gossip-msc} with $\p =
{\mmove{p}{q}}{\mathrel{\text{$\xrightarrow{\ast}$}}}$ and $\p' =
{\mmove{p}{r}}{\mathrel{\text{$\xrightarrow{\ast}$}}}{\mmove{r}{q}}{\mathrel{\text{$\xrightarrow{\ast}$}}}$. We get $\fa \p
{{\mathrel{\text{$\xrightarrow{\ast}$}}}\p'} (f_3) = f_3$ and $\fa \p {{\mathrel{\text{$\xrightarrow{\ast}$}}}\p'} (f_5) = f_4$.
Since $\last {\p'} {f_0} = \bot$ and $\last {\p} {f_0}=e_0 \neq \bot$,
we have $\p' \prec_{f_0} \p$.
\end{example}
\begin{figure}
\centering
\begin{minipage}[b]{0.18\textwidth}
\scalebox{0.85}{
\begin{tikzpicture}[scale=0.8,font=\footnotesize,inner sep=2pt,auto,>=stealth]
\draw (0,0) -- (2,0);
\draw (0,2) -- (2,2);
\node (q) at (-0.3,0) {$q$};
\node (p) at (-0.3,2) {$p$};
\node[dot,label=below:{$f$}] (e) at (0.5,0) {};
\node[dot] (g) at (0.5,2) {};
\node[dot] (h) at (1.7,2) {};
\node[anchor=west] at ($(g) + (-0.2,0.3)$) {$g ~~\le~~ g'$};
\path
(g) edge[decorate,decoration={snake,post length=1mm,amplitude=0.6mm},->] node[left] {$\p$} (e)
(h) edge[decorate,decoration={snake,post length=1mm,amplitude=0.6mm},->] node[right] {$\p'$} (e);
\path[purple,very thick,shorten > =5pt, shorten < = 5pt,->]
(e) edge[bend left=40] node[left] {$\mathsf{pred}_\p$} (g)
(g) edge[bend left=80,looseness=2] node[right] {$\mathsf{succ}_{{\mathrel{\text{$\xrightarrow{\ast}$}}}\p'}$} (e);
\end{tikzpicture}}
\end{minipage}
\begin{minipage}[b]{0.38\textwidth}
\scalebox{0.85}{
\begin{tikzpicture}[scale=0.8,font=\footnotesize,inner sep=2pt,auto,>=stealth]
\draw (0,0) -- (6,0);
\draw (0,2) -- (6,2);
\node (q) at (-0.3,0) {$q$};
\node (p) at (-0.3,2) {$p$};
\node[anchor=west] at (-0.3,2.4)
{$\last {\p'{\mathrel{\text{$\xrightarrow{\ast}$}}}} e ~<~ \last {\p{\mathrel{\text{$\xrightarrow{\ast}$}}}} e$};
\node[dot] (p) at (0.5,2) {};
\node[dot] (p') at ($(p)+(2.5,0)$) {};
\node[dot] (g') at (4.7,2) {};
\node[dot,right=0.8cm of g'] (g) {};
\node[above=0.1cm of g'] {$g\vphantom{g'}$};
\node[above right= 0.1cm and 0.2cm of g'] {$\vphantom{g'}{\le}$};
\node[above=0.1cm of g] {$g'$};
\node[dot,label={[name=el]below:{$e$\vphantom{$f$}}}] (e) at (4.3,0) {};
\node[dot,label={[name=fl]below:{$f$}}] (f) at (5.8,0) {};
\path
(p) edge[decorate,decoration={snake,post length=1mm,amplitude=0.6mm},->]
node[left,xshift=-5pt] {$\p'{\mathrel{\text{$\xrightarrow{\ast}$}}}$} (e)
(p') edge[decorate,decoration={snake,post length=1mm,amplitude=0.6mm},->]
node[left,pos=0.3] {$\p{\mathrel{\text{$\xrightarrow{\ast}$}}}$} (e)
(g') edge[decorate,decoration={snake,post length=1mm,amplitude=0.6mm},->]
node[right,pos=0.3] {$\p$} (f)
(g) edge[decorate,decoration={snake,post length=1mm,amplitude=0.6mm},->]
node[right] {$\p'$} (f);
\path[purple,very thick,shorten > =5pt, shorten < = 5pt,->]
(f) edge[bend left=40] node[left,pos=0.7] {$\lastf {\p}$} (g')
(g') edge[bend left=75,looseness=2] node[right,pos=0.9,xshift=0.1cm] {$\firstf {{\mathrel{\text{$\xrightarrow{\ast}$}}}\p'}$} (f);
\draw[->] (el) -- (fl);
\end{tikzpicture}}
\end{minipage}
\begin{minipage}[b]{0.40\textwidth}
\scalebox{0.85}{
\begin{tikzpicture}[scale=0.8,font=\footnotesize,inner sep=2pt,auto,>=stealth]
\draw (0,0) -- (6,0);
\draw (0,2) -- (6,2);
\node (q) at (-0.3,0) {$q$};
\node (p) at (-0.3,2) {$p$};
\node[anchor=west] at (-0.3,2.4)
{$\last {\p{\mathrel{\text{$\xrightarrow{\ast}$}}}} e ~\le~ \last {\p'{\mathrel{\text{$\xrightarrow{\ast}$}}}} e$};
\node[dot] (p) at (0.5,2) {};
\node[dot] (p') at ($(p)+(2.5,0)$) {};
\node[dot] (g') at (4.7,2) {};
\node[dot,right=0.8cm of g'] (g) {};
\node[above=0.1cm of g'] {$g'$};
\node[above right= 0.1cm and 0.2cm of g'] {$\vphantom{g'}{<}$};
\node[above=0.1cm of g] {$g\vphantom{g'}$};
\node[dot,label={[name=el]below:{$e$\vphantom{$f$}}}] (e) at (4.3,0) {};
\node[dot,label={[name=fl]below:{$f$}}] (f) at (5.8,0) {};
\path
(p) edge[decorate,decoration={snake,post length=1mm,amplitude=0.6mm},->]
node[left,xshift=-5pt] {$\p{\mathrel{\text{$\xrightarrow{\ast}$}}}$} (e)
(p') edge[decorate,decoration={snake,post length=1mm,amplitude=0.6mm},->]
node[left,pos=0.3] {$\p'{\mathrel{\text{$\xrightarrow{\ast}$}}}$} (e)
(g') edge[decorate,decoration={snake,post length=1mm,amplitude=0.6mm},->]
node[right,pos=0.3] {$\p'$} (f)
(g) edge[decorate,decoration={snake,post length=1mm,amplitude=0.6mm},->]
node[right] {$\p$} (f);
\path[purple,very thick,shorten > =5pt, shorten < = 5pt,->]
(f) edge[bend left=40] node[left,pos=0.8] {$\lastf {\p'}$} (g')
(g') edge[bend left=75,looseness=2] node[right,pos=0.9,xshift=0.1cm] {$\firstf {{\mathrel{\text{$\xrightarrow{+}$}}}\p}$} (f);
\draw[->] (el) -- (fl);
\end{tikzpicture}}
\end{minipage}
\caption{Lemma~\ref{ple-char}, cases 1., 2., and 3.\label{fig:ple-char}}
\end{figure}
In general, the relation $\preceq_e$
can be characterized as follows (cf.\ also Figure~\ref{fig:ple-char}):
\begin{lemma}\label{ple-char}
Let $\p,\p' \in \Paths_{p,q}$ with $p,q\in\Procs$, and $f \in E_q$.
\begin{enumerate}
\item\label{lem:initle} Assume that there exists no $e$ with $e \prel f$.
Then, $\p \preceq_f \p'$ iff $\last \p f = \botp p$ or
$\fa \p {\mathrel{\text{$\xrightarrow{\ast}$}} \p'} (f) = f$.
\item\label{lem:gtle} Assume that there exists $e \in E_q$ such that
$e \prel f$ and $\p'{\mathrel{\text{$\xrightarrow{\ast}$}}} \prec_e \p{\mathrel{\text{$\xrightarrow{\ast}$}}}$.
Then, $\p \preceq_f \p'$ iff $\last {\p} f = \botp p$ or
$\fa \p {\mathrel{\text{$\xrightarrow{\ast}$}} \p'} (f) = f$.
\item\label{lem:legt} Assume that there exists $e \in E_q$ such that
$e \prel f$ and $\p{\mathrel{\text{$\xrightarrow{\ast}$}}} \preceq_e \p'{\mathrel{\text{$\xrightarrow{\ast}$}}}$.
Then, $\p' \prec_f \p$ iff $\last {\p'} f = \botp p$ and
$\last {\p} f \neq \botp p$, or $\fa {\p'} {\mathrel{\text{$\xrightarrow{+}$}} \p} (f) = f$.
\end{enumerate}
\end{lemma}
\begin{proof}
If $\last \p f = \bot$ or $\last {\p'} f = \bot$, the proof of 1., 2.,
and 3.\ is immediate.
So we assume this is not the case, and we let $g = \last \p f \in E_p$ and
$g' = \last {\p'} f \in E_p$.
We first show that $\p \preceq_f \p'$ iff $\fa {\p} {{\mathrel{\text{$\xrightarrow{\ast}$}}}\p'} (f) \le f$.
Indeed, if $\p \preceq_f \p'$, then we have $g \mathrel{\text{$\xrightarrow{\ast}$}} g'$ and
$(g',f) \in \sem M {\p'}$, hence $(g,f) \in \sem M {{\mathrel{\text{$\xrightarrow{\ast}$}}}\p'}$.
Then, by definition,
$\fa \p {{\mathrel{\text{$\xrightarrow{\ast}$}}}\p'} (f) = \first {{\mathrel{\text{$\xrightarrow{\ast}$}}}\p'} g \le f$.
Conversely, if $\p' \prec_f \p$, i.e., $g' < g$, then by maximality of
$g' = \last {\p'} {f}$, we have $(g,f) \notin \sem M {{\mathrel{\text{$\xrightarrow{\ast}$}}}\p'}$,
hence $f < \first {{\mathrel{\text{$\xrightarrow{\ast}$}}}\p'} g = \fa \p {{\mathrel{\text{$\xrightarrow{\ast}$}}}\p'} (f)$
(either $\first {{\mathrel{\text{$\xrightarrow{\ast}$}}}\p'} g = \top$, or it is an event to the right
of $f$).
Similarly, we have $\p' \prec_f \p$ iff $\fa {\p'} {{\mathrel{\text{$\xrightarrow{+}$}}}\p} (f) \le f$.
So, in all three statements, all that remains to be proved is the equality
in the left-to-right implications:
\begin{enumerate}
\item Assume $f$ is $\prel$-minimal and $\p \preceq_f \p'$.
By the above, we have $\fa \p {{\mathrel{\text{$\xrightarrow{\ast}$}}}\p'} (f) \le f$, and since $f$ is
$\prel$-minimal, $\fa \p {{\mathrel{\text{$\xrightarrow{\ast}$}}}\p'} (f) = f$.
\item Assume $e \prel f$, $\p'{\mathrel{\text{$\xrightarrow{\ast}$}}} \prec_e \p{\mathrel{\text{$\xrightarrow{\ast}$}}}$, and
$\p \preceq_f \p'$. In particular, $f' := \fa \p {{\mathrel{\text{$\xrightarrow{\ast}$}}}\p'} (f)
= \first{{\mathrel{\text{$\xrightarrow{\ast}$}}}\p'}{g} \le f$.
Now, suppose $f' < f$ and, therefore, $f' \le e$.
Notice that $\last{\p'}{f'}\neq\botp{p}$ and $g\leq\last{\p'}{f'}$. Using
Lemma~\ref{lem:monotone} (monotonicity), we obtain the following
contradiction:
$$
g\leq\last{\p'}{f'}=\last{\p'{\mathrel{\text{$\xrightarrow{\ast}$}}}}{f'}\leq\last{\p'{\mathrel{\text{$\xrightarrow{\ast}$}}}}{e}
<\last{\p{\mathrel{\text{$\xrightarrow{\ast}$}}}}{e}\leq\last{\p{\mathrel{\text{$\xrightarrow{\ast}$}}}}{f}=\last{\p}{f}=g \,.
$$
\item Assume $e \prel f$, $\p{\mathrel{\text{$\xrightarrow{\ast}$}}} \preceq_e \p'{\mathrel{\text{$\xrightarrow{\ast}$}}}$, and
$\p' \prec_f \p$. In particular, $f' := \fa {\p'} {{\mathrel{\text{$\xrightarrow{+}$}}}\p} (f)\leq f$.
Now, suppose $f' \le e$.
Notice that $\last{\p}{f'}\neq\botp{p}$ and $g'<\last{\p}{f'}$. Using
Lemma~\ref{lem:monotone} (monotonicity), we obtain the following
contradiction:
$$
g'<\last{\p}{f'}=\last{\p{\mathrel{\text{$\xrightarrow{\ast}$}}}}{f'}\leq\last{\p{\mathrel{\text{$\xrightarrow{\ast}$}}}}{e}
\leq\last{\p'{\mathrel{\text{$\xrightarrow{\ast}$}}}}{e}\leq\last{\p'{\mathrel{\text{$\xrightarrow{\ast}$}}}}{f}=\last{\p'}{f}=g' \,.
$$
\end{enumerate}
This concludes the proof.
\end{proof}
\section{Constructing the Gossip CFM}\label{sec:preorder-cfm}
In this section, we construct $\Ale$ computing the total preorders $\preceq_e$
over a finite set of path expressions $\Pi$. We define the \emph{size} of
$\Pi$ as $\|\Pi\| = \sum_{\p \in \Pi} |\p|$, where $|\p|$ denotes the length
of $\p$.
\subsection{CFMs for $\fa \p {\p'}$}\label{sec:cfm-labels}
\begin{lemma}\label{lem:last-label}
Let $\Theta$ be a finite set such that $\bot \notin \Theta$, and
$\p \in \Gamma^\ast$ a path expression.
There exists a \CFM with $|\Theta|^{\mathcal{O}(|\p|)}$ states
recognizing the set of extended MSCs $(M,\xi)$ with
$\xi \colon E \to \Theta \times (\Theta \cup \{\bot\})$ such that,
for all events $e$, $\xi(e)$ is a pair $(\xia e,\xib e)$ such that
$\xib e = \xia {\last \p e}$, with $\xia \bot = \bot$.
\end{lemma}
\begin{proof}
Let $\Pi = \{\p' \in \Gamma^\ast \mid \exists \p'' \in \Gamma^\ast\text{: }
\p = \p'\p''\}$ be the set of prefixes of $\p$.
The state of the \CFM taken at event $e$ will consist of a function
$\theta(e) \colon \Pi \to \Theta \cup \{\bot\}$ such that, for all $e \in E$ and $\p' \in \Pi$,
$\tht e {\p'} = \xia {\last {\p'} e}$.
If $e$ is a send event, the function $\theta(e)$ is sent as a message.
In order to determine $\tht e {\p_1}$ for all events $e$ and $\p_1 \in \Pi$,
the \CFM only allows transitions ensuring the following:
\begin{itemize}
\item Suppose $\p_1 = \varepsilon$. Then, $\tht e {\p_1} = \xia e$.
\item Suppose $\p_1 = \p_2{\prel}$. If $e$ is $\prel$-minimal, then
$\tht e {\p_1} = \bot$.
If $f \prel e$ for some $f$, then $\tht e {\p_1} = \tht f {\p_2}$.
\item Suppose $\p_1 = \p_2 {\mathrel{\text{$\xrightarrow{\ast}$}}}$.
If $\tht e {\p_2} \neq \bot$, then $\tht e {\p_1} = \tht e {\p_2}$
(Lemma~\ref{lem:monotone}).
If $\tht e {\p_2} = \bot$ and $e$ is $\prel$-minimal, then
$\tht e {\p_1} = \bot$.
If $\tht e {\p_2} = \bot$ and $f \prel e$ for some $f$, then
$\tht e {\p_1} = \tht f {\p_1}$.
\item Suppose $\p_1 = \p_2 \mmove p q$. If $e \in E_q$ and there is an event
$f \in E_p$ such that $f \mrel e$, then $\tht e {\p_1} = \tht f {\p_2}$.
Otherwise, $\tht e {\p_1} =\bot$.
\item Suppose $\p_1 = \p_2 \amove a$. If $\lambda(e) = a$, then
$\tht e {\p_1} = \tht e {\p_2}$. Otherwise, $\tht e {\p_1} = \bot$.
\end{itemize}
Finally, the \CFM checks that, for all events $e$, $\xib e = \tht e \p$,
i.e., $\xib e = \xia {\last \p e}$.
\end{proof}
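The local update rules in the proof above can be prototyped directly: the state at an event is a function from prefixes of $\p$ to $\Theta \cup \{\bot\}$, updated from the $\prel$-predecessor and, at receives, from the sender's state. A Python sketch (the encoding of letters as strings/tuples is hypothetical):

```python
BOT = None  # bottom

def update_theta(prefixes, xi1_e, theta_pred, theta_msg, label_e, proc_e):
    """Compute theta(e) from theta at the ->-predecessor (theta_pred,
    None if e is ->-minimal) and, for a receive, theta at the matching
    send (theta_msg, None otherwise). Prefixes are tuples of letters,
    listed in increasing length."""
    theta = {}
    for pi1 in prefixes:
        if pi1 == ():                                # pi1 = epsilon
            theta[pi1] = xi1_e
            continue
        pi2, last = pi1[:-1], pi1[-1]
        if last == "proc":                           # pi1 = pi2 ->
            theta[pi1] = theta_pred[pi2] if theta_pred else BOT
        elif last == "star":                         # pi1 = pi2 ->*
            if theta[pi2] is not BOT:
                theta[pi1] = theta[pi2]
            else:
                theta[pi1] = theta_pred[pi1] if theta_pred else BOT
        elif last[0] == "msg":                       # pi1 = pi2 (p -> q)
            _, p, q = last
            recv_ok = proc_e == q and theta_msg is not None
            theta[pi1] = theta_msg[pi2] if recv_ok else BOT
        else:                                        # pi1 = pi2 <a>
            theta[pi1] = theta[pi2] if label_e == last[1] else BOT
    return theta

# For pi = (p->q) ->*, the theta carried by the message lets the receiver
# recover the xi1-value at last_pi(f):
pi = (("msg", "p", "q"), "star")
prefixes = [pi[:i] for i in range(len(pi) + 1)]
```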
We can prove a similar result for $\mathsf{succ}_\p$:
\begin{lemma}\label{lem:succ-label}
Let $\Theta$ be a finite set such that $\top \notin \Theta$, and
$\p \in \Gamma^\ast$ a path expression.
There exists a \CFM with $|\Theta|^{\mathcal{O}(|\p|)}$ states
recognizing the set of extended MSCs $(M,\xi)$ with
$\xi \colon E \to \Theta \times (\Theta \cup \{\top\})$ such that,
for all events $e$, $\xi(e)$ is a pair $(\xia e,\xib e)$ such that
$\xib e = \xia {\first \p e}$, with $\xia \top = \top$.
\end{lemma}
\begin{proof}
Let $\Pi = \{\p'' \in \Gamma^\ast \mid \exists \p' \in \Gamma^\ast\text{: }
\p = \p'\p''\}$ be the set of suffixes of $\p$.
The state of the \CFM taken at event $e$ will consist of a
function $\theta(e) \colon \Pi \to
\Theta \cup \{\top\}$ such that, for all $e \in E$ and $\p' \in \Pi$,
$\tht e {\p'} = \xia {\first {\p'} e}$.
If $e$ is a send event, the function $\theta(e)$ is sent as a message.
In order to determine $\tht e {\p_1}$ for all events $e$ and $\p_1 \in \Pi$,
the \CFM only allows transitions ensuring the following:
\begin{itemize}
\item Suppose $\p_1 = \varepsilon$. Then, $\tht e {\p_1} = \xia e$.
\item Suppose $\p_1 = {\prel}\p_2$. If $e$ is $\prel$-maximal, then
$\tht e {\p_1} = \top$.
If $e \prel f$ for some $f$, then $\tht e {\p_1} = \tht f {\p_2}$.
\item Suppose $\p_1 = {\mathrel{\text{$\xrightarrow{\ast}$}}} \p_2$.
If $\tht e {\p_2} \neq \top$, then $\tht e {\p_1} = \tht e {\p_2}$
(Lemma~\ref{lem:monotone}).
If $\tht e {\p_2} = \top$ and $e$ is $\prel$-maximal, then
$\tht e {\p_1} = \top$.
If $\tht e {\p_2} = \top$ and $e \prel f$ for some $f$, then
$\tht e {\p_1} = \tht f {\p_1}$.
\item Suppose $\p_1 = \mmove p q \p_2$. If $e \in E_p$ and there is an event
$f \in E_q$ such that $e \mrel f$, then $\tht e {\p_1} = \tht f {\p_2}$.
Otherwise, $\tht e {\p_1} =\top$.
\item Suppose $\p_1 = \amove a \p_2$. If $\lambda(e) = a$, then
$\tht e {\p_1} = \tht e {\p_2}$. Otherwise, $\tht e {\p_1} = \top$.
\end{itemize}
Finally, the \CFM checks that, for all events $e$, $\xib e = \tht e \p$,
i.e., $\xib e = \xia {\first \p e}$.
\end{proof}
As a corollary, we obtain a \CFM for $\fa \p {\p'}$:
\begin{lemma}
\label{lem:fa-label}
Let $\Theta$ be a finite set such that $\Theta \cap \{\bot,\top\} =
\emptyset$, $p,q \in \Procs$, and $\p,\p' \in \Paths_{p,q}$.
There exists a \CFM
with $|\Theta|^{\mathcal{O}(|\p|+|\p'|)}$ states
recognizing the set of extended MSCs $(M,\xi)$ with
$\xi\colon E \to \Theta \times (\Theta \cup \{\bot,\top\})$ such that, for all
events~$e \in E_q$, $\xi(e)$ is a pair $(\xia e, \xib e)$ such that
$\xib e = \xia {\fa \p {\p'} (e)}$.
\end{lemma}
We are now ready to prove that there exists a \CFM $\Afa \p {\p'}$
that determines, for each event $e$, whether $\fa \p {\p'} (e) = e$.
\begin{lemma}
\label{lem:Afa}
Let $\p,\p' \in \Paths_{p,q}$ with $p,q\in\Procs$.
There exists a \CFM $\Afa \p {\p'}$ over $P$ and $\Sigma\times\{0,1\}$
with $2^{\mathcal{O}(|\p|+|\p'|)}$ states
that recognizes the set of MSCs $(M,\gamma)$ such that, for all events $e$
on process $q$, we have $\gamma(e) = 1$ iff $\fa \p {\p'} (e) = e$.
\end{lemma}
\begin{proof}
We denote by $L$ the set of MSCs $(M,\gamma)$ such that, for all events
$e$ on process $q$, $\gamma(e) = 1$ iff $\fa \p {\p'} (e) = e$.
To ensure that the input MSC is in $L$, the \CFM
$\Afa \p {\p'}$ will use a coloring of the events of process $q$, constructed
in such a way that, for all events $e$ on process~$q$, the events $e$ and
$\fa {\p} {\p'} (e)$ have the same color iff they are equal.
Formally, we consider doubly extended MSCs $(M,\gamma,\zeta)$ with $\gamma
\colon E \to \{0,1\}$ and $\zeta \colon E \to \{\colone,\coltwo,\zcolone,
\zcoltwo\}$. As usual, we define $\zeta(\bot) = \bot$ and $\zeta(\top) = \top$.
Let $\tilde{L}$ be the set of MSCs $(M,\gamma,\zeta)$ such that the following hold:
\begin{enumerate}
\item Denoting by $e_1 < e_2 < \cdots < e_k$ the events on process $q$
with $\gamma(e_i) = 1$, we have $\zeta(e_i) = \colone$ if $i$ is
odd, $\zeta(e_i) = \coltwo$ if $i$ is even, and $\zeta(e) \in
\{\zcolone,\zcoltwo\}$ if $e \in E_q\setminus \{e_1, \ldots, e_k\}$.
Intuitively, $\zeta(e)$ will be a color computed (if $\gamma(e) = 1$)
or guessed (if $\gamma(e) = 0$) by $\Afa \p {\p'}$.
\item For all $e \in E_q$, $\gamma(e) = 1$ iff
$\zeta(e) = \zeta(\fa \p {\p'} (e))$.
\end{enumerate}
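To make Conditions 1 and 2 concrete (outside of the \CFM construction itself), the coloring of Condition 1 and the check of Condition 2 can be sketched by brute force. In the Python sketch below, the encoding of the $q$-events as list indices, of $\bot,\top$ as \texttt{None}, and the arbitrary choice of auxiliary color are hypothetical conventions used only for illustration.

```python
# Condition 1: the gamma = 1 events of process q (in process order) get
# alternating colors c1/c2; the remaining q-events get an auxiliary color
# (guessed by the CFM; here we fix c1' arbitrarily, which need not
# satisfy Condition 2 -- that is the point of the forest 2-coloring later).
def color_condition1(gamma):
    """gamma: list of 0/1 flags for the events of process q, in order."""
    zeta, ones = [], 0
    for g in gamma:
        if g == 1:
            ones += 1
            zeta.append("c1" if ones % 2 == 1 else "c2")
        else:
            zeta.append("c1'")
    return zeta

# Condition 2: gamma(e) = 1 iff zeta(e) = zeta(fa(e)); fa maps an event
# index to an event index, with None standing for bot/top (whose zeta
# value never equals a real color).
def satisfies_condition2(gamma, zeta, fa):
    for e, g in enumerate(gamma):
        same = fa[e] is not None and zeta[e] == zeta[fa[e]]
        if (g == 1) != same:
            return False
    return True
```

For instance, with $\gamma = (1,0,1,1,0)$ and the map $0\mapsto0$, $1\mapsto2$, $2\mapsto2$, $3\mapsto3$, $4\mapsto0$ in place of $\fa \p {\p'}$, the computed coloring satisfies both conditions.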
We first show that there exists a \CFM accepting $\tilde{L}$. First, applying
Lemma~\ref{lem:fa-label} with $\Theta = \{\colone,\coltwo,\zcolone,\zcoltwo\}$,
we know that there exists a \CFM accepting the set of MSCs $(M,\gamma,\xi)$
with $\xi \colon E \to \Theta \times (\Theta \cup \{\bot,\top\})$ such that,
for all events $e$, $\xi(e) = (\xia e, \xib e) =
(\xia e, \xia {\fa \p {\p'} (e)})$.
We then restrict the transitions of this \CFM so that it additionally
checks that, for all events $e$ on process $q$, $\gamma(e) = 1$ iff
$\xia e = \xib e$.
By projection onto the first component of $\xi$, we obtain a \CFM
accepting $\tilde{L}$.
We define $\Afa \p {\p'}$ as the \CFM recognizing the projection of $\tilde{L}$
on $\Sigma \times \{0,1\}$. We claim that $L(\Afa \p {\p'}) = L$.
We first prove the left-to-right inclusion.
Suppose $(M,\gamma,\zeta) \in \tilde{L}$, with $e_1,\ldots,e_k$ defined
as above. Towards a contradiction, assume $(M,\gamma) \not\in L$.
For all events $e\in E_q\setminus\{e_1,\ldots,e_k\}$, we have
$\zeta(e)\neq\zeta(\fa \p {\p'} (e))$, hence $\fa \p {\p'} (e) \neq e$.
So there exists $g_0 \in \{e_1,\ldots,e_k\}$ such that $g_0 \neq \fa \p {\p'} (g_0)$.
For all $i \in \mathbb{N}$, let $g_{i+1} = \fa \p {\p'} (g_i)$.
Note that $g_i\in\{e_1,\ldots,e_k\}$ implies that $\fa \p {\p'} (g_i)\in E_q$
and $\zeta(g_{i+1})=\zeta(g_{i})\in\{\colone,\coltwo\}$, hence
$g_{i+1}\in\{e_1,\ldots,e_k\}$. Suppose $g_0 < g_1$
(the case $g_1 < g_0$ is similar). Take $g_0 < h_0 < g_1$ such that $\zeta(h_0)
\in \{\colone,\coltwo\}$ and $\zeta(h_0) \neq \zeta(g_0)$.
Again, for all $i \in \mathbb{N}$, let $h_{i+1} = \fa \p {\p'} (h_i)$.
Note that all $g_0,g_1,\ldots$ have the same color, and all $h_0,h_1,\ldots$
carry the complementary color. Thus, $g_i \neq h_i$ for all $i \in \mathbb{N}$.
But, by Lemma~\ref{lem:monotone2}, this implies $g_0 < h_0 < g_1 < h_1 < \ldots$
which contradicts the fact that we deal with finite MSCs.
Next, we show that $L \subseteq L(\Afa \p {\p'})$.
Suppose $(M,\gamma) \in L$. Let $E_0 = \{e \in E_q \mid \gamma(e) = 0\}
= \{e \in E_q \mid \fa \p {\p'} (e) \neq e\}$ and $E_1 = \{e \in E_q \mid
\gamma(e) = 1\} = \{e \in E_q \mid \fa \p {\p'} (e) = e\}$.
Consider the graph $G = (E_q,\{(e,\fa \p {\p'} (e)) \mid e \in E_q
\wedge \fa \p {\p'} (e)\in E_q\})$.
Every vertex has outdegree at most 1, and, since $\fa \p {\p'}$ is
monotone, there are no cycles except for self-loops.
So the restriction of $G$ to $E_0$ is a forest,
and there exists a $2$-coloring $\chi\colon E_0 \to \{\zcolone,\zcoltwo\}$ such
that, for all $e\in E_0$ with $\fa \p {\p'} (e)\in E_0$, we have
$\chi(e) \neq \chi(\fa \p {\p'} (e))$.
Define $\zeta \colon E \to \{\colone,\coltwo,\zcolone,\zcoltwo\}$ by
$\zeta(e) = \chi(e)$ for $e \in E_0$ and as in Condition 1.\
for $e \in E_1$. Notice that Condition 2.\ is satisfied. Hence,
$(M,\gamma,\zeta) \in \tilde{L}$ and $(M,\gamma) \in L(\Afa \p {\p'})$.
\end{proof}
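The greedy 2-coloring of the forest used in the proof above can be sketched as follows. The Python helper is a hypothetical illustration: the map $\fa \p {\p'}$ restricted to $E_0$ is passed as a dictionary (with $\bot$, $\top$ and self-loops simply absent from $E_0$), and each vertex is colored by following edges to the root of its tree.

```python
# 2-coloring of the functional graph restricted to E0 = {e | fa(e) != e}:
# every vertex has outdegree at most 1 and the only cycles are self-loops,
# so the restriction to E0 is a forest and admits a 2-coloring in which
# adjacent vertices carry different colors.
def two_color_forest(E0, fa):
    """E0: set of vertices; fa: dict vertex -> vertex (outdegree <= 1).
    Returns chi: vertex -> 0/1 with chi[e] != chi[fa[e]] whenever
    both e and fa[e] lie in E0."""
    chi = {}
    def color(e):
        if e in chi:
            return chi[e]
        parent = fa.get(e)
        if parent is None or parent not in E0 or parent == e:
            chi[e] = 0          # root of its tree: color chosen arbitrarily
        else:
            chi[e] = 1 - color(parent)
        return chi[e]
    for e in E0:
        color(e)
    return chi
```

For example, with $E_0 = \{1,2,3,5\}$ and edges $1\mapsto2$, $2\mapsto4$, $3\mapsto2$, $5\mapsto3$ (so $4 \notin E_0$ is a fixed point), every edge inside $E_0$ joins differently colored vertices.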
\subsection{The Gossip CFM}\label{sec:gossip-cfm}
Let $p,q \in \Procs$ and $\Paths$ be a finite subset of $\Paths_{p,q}$.
We are now in a position to build a (non-deterministic) \CFM that outputs,
at every event $e \in E_q$,
the restriction of ${\preceq_e}$ to $\Paths \times \Paths$.
\begin{lemma}
\label{lem:ord}
Let $\Ord$ be the set of preorders over $\Paths$.
There exists a \CFM $\Ale$ over $P$ and $\Sigma \times \Ord$
with $2^{\mathcal{O}(\|\Pi\|^2)}$ states
that recognizes the set of MSCs $(M,\gamma)$ such that, for all events $e \in E_q$, $\gamma(e)$ is the restriction of ${\preceq_e}$ to $\Paths \times \Paths$.
\end{lemma}
\begin{proof}
Without loss of generality, we can assume that, for all $\p \in \Paths$,
we have $\p{\mathrel{\text{$\xrightarrow{\ast}$}}} \in \Paths$ or $\p = \p'{\mathrel{\text{$\xrightarrow{\ast}$}}}$ for some
$\p' \in \Gamma^\ast$.
In addition, we will identify path expressions $\p{\mathrel{\text{$\xrightarrow{\ast}$}}}{\mathrel{\text{$\xrightarrow{\ast}$}}}$
and $\p{\mathrel{\text{$\xrightarrow{\ast}$}}}$, observing that we have
$\sem M {\p{\mathrel{\text{$\xrightarrow{\ast}$}}}{\mathrel{\text{$\xrightarrow{\ast}$}}}} = \sem M {\p{\mathrel{\text{$\xrightarrow{\ast}$}}}}$.
With this convention, we can always assume that, if $\p \in \Paths$, then
$\p{\mathrel{\text{$\xrightarrow{\ast}$}}} \in \Paths$, while keeping $\Paths$ finite (and of linear size).
By Lemma~\ref{lem:Afa} (and since $\Paths$ is finite), $\Ale$ can determine,
for each event $e$ and all path expressions $\p,\p' \in \Paths \cup
\{{\mathrel{\text{$\xrightarrow{\ast}$}}}\p \mid \p \in \Paths\} \cup \{{\mathrel{\text{$\xrightarrow{+}$}}}{\p} \mid \p \in \Paths\}$,
whether $\fa \p {\p'} (e) = e$.
The \CFM then checks that, for all $f \in E_q$ and $\p,\p' \in \Paths$,
$(\p,\p') \in \gamma(f)$ iff one of the following holds
(cf.\ Lemma~\ref{ple-char}):
\begin{itemize}
\item $f$ is minimal on process $q$, and $\last \p f = \botp p$
or $\fa \p {{\mathrel{\text{$\xrightarrow{\ast}$}}}\p'} (f) = f$.
\item $e \prel f$, $(\p{\mathrel{\text{$\xrightarrow{\ast}$}}},\p'{\mathrel{\text{$\xrightarrow{\ast}$}}}) \notin \gamma(e)$,
and $\last \p f = \botp p$ or $\fa \p {{\mathrel{\text{$\xrightarrow{\ast}$}}}\p'} (f) = f$.
\item $e \prel f$, $(\p{\mathrel{\text{$\xrightarrow{\ast}$}}},\p'{\mathrel{\text{$\xrightarrow{\ast}$}}}) \in \gamma(e)$,
$\fa {\p'} {{\mathrel{\text{$\xrightarrow{+}$}}}\p} (f) \neq f$,
and $\last \p f = \botp p$ or $\last {\p'} f \neq \botp p$.
\qedhere
\end{itemize}
\end{proof}
In fact, for the gossip problem, one needs only a particular set of path
expressions. For a sequence $w = p_1 \ldots p_n \in \Procs^+$ of pairwise
distinct processes, we define the path expression $\pexpr{w}$ by
$\pexpr{w}={\mathrel{\text{$\xrightarrow{+}$}}}$ if $n=1$, and
$\pexpr{w}={\mathrel{\text{$\xrightarrow{\ast}$}}}{\mmove{p_1}{p_2}}{\mathrel{\text{$\xrightarrow{\ast}$}}}{\mmove{p_2}{p_3}} \ldots
{\mathrel{\text{$\xrightarrow{\ast}$}}}{\mmove{p_{n-1}}{p_n}}{\mathrel{\text{$\xrightarrow{\ast}$}}}$ if $n\geq2$. Let $\Pi^{\mathsf{gossip}}$ be
the set of all those path expressions (which is finite). Finally, given
processes $p,q \in \Procs$, we define $\Pi^{\mathsf{gossip}}_{p,q} =
\Paths_{p,q}\cap\Pi^{\mathsf{gossip}}$.
We have ${<} = \bigcup_{\p \in \Pi^{\mathsf{gossip}}} \sem M {\p}$.
Moreover, for all $e \in E_q$, $\lastp p e = \max \{ \last \p e \mid \p \in
\Pi^{\mathsf{gossip}}_{p,q} \}$.
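The decomposition ${<} = \bigcup_{\p \in \Pi^{\mathsf{gossip}}} \sem M {\p}$ can be checked by brute force on a small example. The Python sketch below evaluates the relational semantics of the gossip path expressions on a hypothetical two-process MSC (one message, two events per process); the representation of relations as sets of pairs and the event names are illustrative choices only.

```python
from itertools import permutations

def compose(r1, r2):
    return {(a, c) for (a, b) in r1 for (b2, c) in r2 if b == b2}

def closure(r, refl_elems=None):
    # transitive closure; adding identity pairs yields the reflexive version
    res = set(r)
    changed = True
    while changed:
        new = res | compose(res, res)
        changed = new != res
        res = new
    if refl_elems is not None:
        res |= {(e, e) for e in refl_elems}
    return res

# A tiny MSC: processes p, q; p1 -> p2 on p, q1 -> q2 on q; message p1 to q2.
events = ["p1", "p2", "q1", "q2"]
proc = {"p1": "p", "p2": "p", "q1": "q", "q2": "q"}
succ = {("p1", "p2"), ("q1", "q2")}   # immediate process successor
msg = {("p1", "q2")}                   # message edges

def mmove(a, b):
    return {(e, f) for (e, f) in msg if proc[e] == a and proc[f] == b}

plus = closure(succ)                   # strict process order
star = closure(succ, refl_elems=events)

def pexpr(w):
    # the gossip expression for a sequence w of pairwise distinct processes
    if len(w) == 1:
        return plus
    r = star
    for a, b in zip(w, w[1:]):
        r = compose(compose(r, mmove(a, b)), star)
    return r

gossip = set()
for n in (1, 2):
    for w in permutations("pq", n):
        gossip |= pexpr(w)

causal = closure(succ | msg)           # the strict order <
assert gossip == causal
```

Here the only non-trivial gossip contribution beyond the process orders is $(p_1,q_2)$, obtained from $\pexpr{pq}$ through the single message.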
We can now apply Lemma~\ref{lem:ord} to all sets $\Pi^{\mathsf{gossip}}_{p,q}$ to obtain the desired gossip \CFM $\mathcal{A}_{\mathsf{gossip}}$:
\begin{theorem}
\label{gossipcfm}
There exists a CFM $\mathcal{A}_{\mathsf{gossip}}$ with $|\Sigma|^{2^{\mathcal{O}(|P| \log |P|)}}$ states
that recognizes the set of extended MSCs
$((E,\prel,\mrel,\mathit{loc},\lambda),\xi)$ such that, for all events $e \in E$,
$\xi(e)$ is the function from $\Procs$ to $\Sigma \cup \{\bot\}$ defined by
$\xi(e)(p) = \lambda(\lastp p e)$.
\end{theorem}
\begin{proof}
The \CFM $\mathcal{A}_{\mathsf{gossip}}$ guesses, for all $e \in E_q$, some $\p \in \Pi^{\mathsf{gossip}}_{p,q}$.
Using Lemma~\ref{lem:ord}, it verifies $\lastp p e = \last \p e$.
Moreover, using Lemma~\ref{lem:last-label}, it checks that $\xi(e) = \lambda(\last \p e)$.
\end{proof}
Next, we show that $\mathcal{A}_{\mathsf{gossip}}$ is, unavoidably, non-deterministic.
Following \cite{HenriksenJournal,GKM07,Kuske01}, we call a \CFM $\mathcal{C} = ((\A_p)_{p \in
\Procs},\Msg,\Acc)$ \emph{deterministic} if, for all processes $p$ and
transitions $t_1=(s_1,\gamma_1,s_1')$ and $t_2=(s_2,\gamma_2,s_2')$ of $\A_p$ such that $s_1 = s_2$ and
$\tlabel(t_1) = \tlabel(t_2)$, the following hold:
\begin{itemize}
\item If $t_1$ and $t_2$ are internal transitions, then $s_1' = s_2'$.
\item If $t_1$ and $t_2$ are send transitions such that $\receiver(t_1) = \receiver(t_2)$, then $s_1' = s_2'$ and $\tmsg(t_1) = \tmsg(t_2)$.
\item If $t_1$ and $t_2$ are receive transitions such that $\sender(t_1) = \sender(t_2)$ and $\tmsg(t_1) = \tmsg(t_2)$, then $s_1' = s_2'$.
\end{itemize}
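These three conditions can be checked mechanically given the transition lists of the $\A_p$. The dictionary encoding in the Python sketch below (keys \texttt{kind}, \texttt{src}, \texttt{label}, \texttt{peer}, \texttt{msg}, \texttt{dst}) is a hypothetical format chosen only for this illustration.

```python
# Determinism check for the transitions of one process automaton A_p:
# group transitions by the data that must determine them and look for
# two transitions in the same group with different outcomes.
def is_deterministic(transitions):
    seen = {}
    for t in transitions:
        if t["kind"] == "int":
            key = (t["src"], t["label"], "int")
            val = t["dst"]
        elif t["kind"] == "send":
            # same source state, label, and receiver must fix dst and msg
            key = (t["src"], t["label"], "send", t["peer"])
            val = (t["dst"], t["msg"])
        else:  # receive
            # same source state, label, sender, and msg must fix dst
            key = (t["src"], t["label"], "recv", t["peer"], t["msg"])
            val = t["dst"]
        if key in seen and seen[key] != val:
            return False
        seen[key] = val
    return True
```

Note that receive transitions may still branch on the message content, which is why the message is part of the key in the third case but part of the value in the second.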
\begin{proposition}
There is no deterministic gossip \CFM for $|\Sigma| \ge 2$ and $|P| \ge 3$.
\end{proposition}
\begin{proof}
Let $P = \{p,q,r\}$ and $\Sigma = \{\bbullet,\abullet,\resizebox{!}{1.2ex}{$\diamond$}\}$.
The symbol $\resizebox{!}{1.2ex}{$\diamond$}$ will only be used for clarity, and could be replaced
arbitrarily with $\bbullet$ or $\abullet$.
We show that there exists no deterministic \CFM recognizing the set $L$ of
MSCs $M = (E,\prel,\mrel,\mathit{loc},\lambda)$ such that for all $e \in E_q$,
$\lambda(e) = \lambda(\lastp p e)$.
As a consequence, there is no deterministic gossip \CFM over $P$ and $\Sigma$.
Assume that there exists a deterministic \CFM
$\A = (\A_p,\A_q,\A_r,\Msg,\Acc)$ such that $L(\A) = L$.
Fix $n > |S_q|^2$, where $S_q$ is the set of states of $\A_q$.
For all $k \in \{0,\ldots,n-1\}$, we define
an MSC $M^k = (E,\prel,\mrel^k,\mathit{loc},\lambda^k)$,
as depicted in Figure~\ref{fig:det-counter-example}
(where $n = 5$ and $k = 2$):
\begin{itemize}
\item $E_p = \{e_i \mid 0 \le i < 2n\}$,
$E_q = \{f_i \mid 0 \le i < 2n\}$,
and $E_r = \{g_i \mid 0 \le i < 2n\}$,
with $e_0 \prel e_1 \prel \cdots \prel e_{2n-1}$,
$f_0 \prel f_1 \prel \cdots \prel f_{2n-1}$,
and $g_0 \prel g_1 \prel \cdots \prel g_{2n-1}$.
\item For all $0 \le i < k$, $e_{2i} \mrel^k f_{i}$, and for all
$k \le i < n$, $e_{2i} \mrel^k f_{n + i}$.
For all $0 \le i < n$, $e_{2i+1} \mrel^k g_{2i}$, and $g_{2i+1} \mrel^k f_{k+i}$.
\item For all $0 \le i < n$,
$\lambda^k(e_{2i}) = \bbullet$ and $\lambda^k(e_{2i+1}) = \abullet$.
For all $f \in E_q$, $\lambda^k(f) = \lambda^k(\lastp p f)$.
That is, for all $0 \le i < 2k-1$, $\lambda^k(f_i) = \bbullet$,
and for all $2k - 1 \le i < 2n$, $\lambda^k(f_{i}) = \abullet$.
For all $g \in E_r$, $\lambda^k(g) = \resizebox{!}{1.2ex}{$\diamond$}$.
\end{itemize}
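As a sanity check of this construction and of the labeling pattern just stated, one can build $M^k$ explicitly and recompute $\lastp p {f_i}$ by brute-force transitive closure. The string encoding of events in the Python sketch below is a hypothetical convention for illustration only.

```python
def build_Mk(n, k):
    # events e_i, f_i, g_i for 0 <= i < 2n, with process-successor edges
    # and the message edges of M^k
    E = [f"e{i}" for i in range(2 * n)] + [f"f{i}" for i in range(2 * n)] \
        + [f"g{i}" for i in range(2 * n)]
    edges = set()
    for pre in "efg":
        for i in range(2 * n - 1):
            edges.add((f"{pre}{i}", f"{pre}{i+1}"))
    for i in range(k):
        edges.add((f"e{2*i}", f"f{i}"))
    for i in range(k, n):
        edges.add((f"e{2*i}", f"f{n+i}"))
    for i in range(n):
        edges.add((f"e{2*i+1}", f"g{2*i}"))
        edges.add((f"g{2*i+1}", f"f{k+i}"))
    return E, edges

def transitive_closure(edges):
    res = set(edges)
    changed = True
    while changed:
        new = res | {(a, c) for (a, b) in res for (b2, c) in res if b == b2}
        changed = new != res
        res = new
    return res

def q_labels(n, k):
    # label of each f_i, read off the parity of the index of lastp_p(f_i)
    _, edges = build_Mk(n, k)
    lt = transitive_closure(edges)
    labels = []
    for i in range(2 * n):
        past_p = [j for j in range(2 * n) if (f"e{j}", f"f{i}") in lt]
        j = max(past_p)
        labels.append("black" if j % 2 == 0 else "white")
    return labels

# matches the claimed pattern: black for i < 2k-1, white afterwards
for n, k in [(2, 1), (3, 2)]:
    expect = ["black" if i < 2 * k - 1 else "white" for i in range(2 * n)]
    assert q_labels(n, k) == expect
```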
Clearly, $M^k \in L(\A)$.
Let $s_k$ and $t_k$ be the states associated respectively with $f_{k-1}$
(or the initial state of $\A_q$ if $k = 0$)
and $f_{k+n-1}$ in the unique run $\rho^k$ of $\A$ on $M^k$.
That is, if $k > 0$, $s_k = \target(\rho^k(f_{k-1}))$ and
$t_k = \target(\rho^k(f_{k+n-1}))$.
\begin{figure}[h]
\centering
\begin{tikzpicture}[semithick,>=stealth,xscale=1]
\draw (-0.5,0) -- (12,0);
\draw (-0.5,1.5) -- (12,1.5);
\draw (-0.5,3) -- (12,3);
\foreach \i
[evaluate=\i as \x using int(2*\i), evaluate=\i as \y using int(2*\i+1)]
in {0,...,4} {
\node[bcirc,label=above:$e_{\x}$] (e\x) at (1.5*\i,3) {};
\node[acirc,label=above:$e_{\y}$] (e\y) at (1.5*\i+0.6,3) {};
\node[ncirc,label=below:$g_{\x}$] (g\x) at (1.5*\i+0.6,1.5) {};
\node[ncirc,label=below:$g_{\y}$] (g\y) at (1.5*\i+1.2,1.5) {};
\draw[->] (e\y) -- (g\x);
}
\foreach \i in {0,1} {
\node[bcirc,label=below:$f_\i$] (f\i) at (1.5*\i,0) {};
}
\foreach \i in {2} {
\node[bcirc,label=below:$f_\i$] (f\i) at (1.3*\i,0) {};
}
\foreach \i in {3,...,9} {
\node[acirc,label=below:$f_\i$] (f\i) at (1.3*\i,0) {};
}
\foreach \i [evaluate=\i as \x using int(2*\i)] in {0,1} {
\draw[->] (e\x) -- (f\i);
}
\foreach \i [evaluate=\i as \x using int(2*\i),
evaluate=\i as \y using int(5+\i)] in {2,3,4} {
\draw[->] (e\x) -- (f\y);
}
\foreach \i [evaluate=\i as \x using int(2*\i+1),
evaluate=\i as \y using int(2+\i)] in {0,...,4} {
\draw[->] (g\x) -- (f\y);
}
\node at (-1,0) {$q$};
\node at (-1,1.5) {$r$};
\node at (-1,3) {$p$};
\draw[decorate,decoration={brace,amplitude=5pt,raise=0.7cm}]
(f1) -- (f0) node[midway,yshift=-1cm] {$k$};
\draw[decorate,decoration={brace,amplitude=5pt,raise=0.7cm}]
(f6) -- (f2) node[midway,yshift=-1cm] {$n$};
\draw[decorate,decoration={brace,amplitude=5pt,raise=0.7cm}]
(f9) -- (f7) node[midway,yshift=-1cm] {$n-k$};
\node[above left = 0cm of f1] {$s_k$};
\node[above left = 0cm of f6] {$t_k$};
\end{tikzpicture}
\caption{Definition of $M^k$\label{fig:det-counter-example}}
\end{figure}
Note that the sequences of send and receive actions performed by
processes $p$ and $r$ in $M^k$ are the same for all $k$, so the runs of
$\A$ on the MSCs $M^k$ differ only on process $q$.
In particular, the sequence of $n$ messages sent by process $r$ to process $q$
is the same for all $k$.
Moreover,
since $n > |S_q|^2$, there exist $0 \le k < k' < n$ such that $s_k = s_{k'}$
and $t_k = t_{k'}$.
We can then combine the runs of $\A$ on $M^k$ and $M^{k'}$
to define a run where process $q$ receives the messages from process $p$ and
$r$ in the same order as in $M^k$, but behaves as in $M^{k'}$ in the middle
part where it receives the $n$ messages from process~$r$. More precisely,
let $M = (E,\prel,\mrel^k,\mathit{loc},\lambda)$, where $(E,\prel,\mrel^k,\mathit{loc})$ is
as in $M^k$, and $\lambda$ is defined as follows: for all $0 \le i < k+k'-1$,
$\lambda(f_{i}) = \bbullet$, and for all $k+k'-1 \le i < 2n$,
$\lambda(f_{i}) = \abullet$.
Then $M \in L(\A)$, but $M \notin L$.
\end{proof}
\section{Linear-Time Temporal Logic}
\label{sec:logic}
The transformation of temporal-logic formulas into automata has many applications, ranging from synthesis to verification. Temporal logics are well understood in the realm of sequential systems where formulas can reason about linearly ordered sequences of events. As we have seen, executions of concurrent systems are actually partially ordered. Over partial orders, however, there is no longer a canonical temporal logic like LTL over words. There have been several attempts to define natural counterparts over Mazurkiewicz traces (see \cite{GK-fi07} for an overview). All of them are less expressive than asynchronous automata \cite{Zielonka87}, a standard model of shared-memory systems. We will show below that this is still true when formulas are interpreted over MSCs and the system model is given in terms of \CFMs.
\smallskip
Many temporal logics over partial orders are captured by the following generic language, which we call \TL.
The set of \TL formulas is defined as follows:
\[
\varphi ::= a \mid p \mid \varphi \lor \varphi \mid \lnot \varphi \mid
\Co \varphi \mid \varphi \mathbin{\tilde{\mathsf{U}}} \varphi \mid \varphi \mathbin{\tilde{\mathsf{S}}} \varphi
\qquad \text{where } a \in \Sigma\text{, } p \in P \, .
\]
A formula $\varphi \in \TL$ is interpreted over events of MSCs.
We say that $M,e \models a$ if $\lambda(e) = a$;
similarly, $M,e \models p$ if $\mathit{loc}(e) = p$.
The $\Co$ modality jumps to a parallel event:
$M,e \models \Co \varphi$ if there exists $f \in E$ such that $e \not\le f$,
$f \not\le e$, and $M,f \models \varphi$.
We use strict versions of until and since:
\[
\begin{array}{lcl}
M,e \models \varphi_1 \mathbin{\tilde{\mathsf{U}}} \varphi_2
& \quad\text{if}\quad & \text{there exists $f \in E$ such that }
e < f \text{ and } M,f \models \varphi_2 \\
& & \text{and, for all } e < g < f,\ M,g \models \varphi_1 \, \\
M,e \models \varphi_1 \mathbin{\tilde{\mathsf{S}}} \varphi_2
& \quad\text{if}\quad & \text{there exists $f \in E$ such that }
f < e \text{ and } M,f \models \varphi_2 \\
& & \text{and, for all } f < g < e,\ M,g \models \varphi_1 \, .
\end{array}
\]
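On finite MSCs, these semantics can be evaluated naively by quantifying over all events. The Python sketch below, with formulas as nested tuples and the strict order $<$ given extensionally as a set of pairs (a hypothetical encoding), is far less efficient than the \CFM translation but makes the definitions concrete.

```python
def holds(phi, M, e):
    """Naive evaluation of TL formulas over a finite MSC.
    M = (events, lt, loc, lab), with lt the strict partial order
    as a set of pairs; formulas are nested tuples."""
    events, lt, loc, lab = M
    op = phi[0]
    if op == "lab":
        return lab[e] == phi[1]
    if op == "proc":
        return loc[e] == phi[1]
    if op == "not":
        return not holds(phi[1], M, e)
    if op == "or":
        return holds(phi[1], M, e) or holds(phi[2], M, e)
    if op == "co":   # some event parallel to e satisfies phi[1]
        return any(f != e and (e, f) not in lt and (f, e) not in lt
                   and holds(phi[1], M, f) for f in events)
    if op == "U":    # strict until
        return any((e, f) in lt and holds(phi[2], M, f)
                   and all(holds(phi[1], M, g) for g in events
                           if (e, g) in lt and (g, f) in lt)
                   for f in events)
    if op == "S":    # strict since
        return any((f, e) in lt and holds(phi[2], M, f)
                   and all(holds(phi[1], M, g) for g in events
                           if (f, g) in lt and (g, e) in lt)
                   for f in events)
    raise ValueError(op)

# A tiny MSC: p1 < p2 on process p, q1 < q2 on process q, message p1 to q2.
events = ["p1", "p2", "q1", "q2"]
lt = {("p1", "p2"), ("q1", "q2"), ("p1", "q2")}
loc = {"p1": "p", "p2": "p", "q1": "q", "q2": "q"}
lab = {"p1": "a", "p2": "b", "q1": "c", "q2": "d"}
M = (events, lt, loc, lab)
```

For instance, $\Co$ holds of the $c$-labeled event at $p_1$ (the events are unordered), while at $q_2$ the since modality finds the $a$-labeled witness $p_1$ vacuously, as nothing lies strictly between them.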
This temporal logic and others have been studied in the context of Mazurkiewicz
traces \cite{GK-fi07,Thiagarajan94,DiekertG06}. The logic introduced by Thiagarajan in
\cite{Thiagarajan94} uses an until modality $\Up p$ corresponding to the usual LTL
(non-strict) until for
a single process $p$, together with a unary modality $\Op p$ interpreted as
follows: $\Op p \varphi$ holds at $e$ if the first event on process $p$ that is
not in the past of $e$ satisfies $\varphi$. Other interesting modalities are
$\X{p}$ and $\Y{p}$ with the following meaning: $\X{p}$ moves to the first event on process $p$ in the
strict future of the current event, while $\Y{p}$ moves to the last event on
process $p$ that is in the strict past of the current event. All these
modalities can be expressed in \TL:
\begin{align*}
\X{p}\varphi & := \neg p \mathbin{\tilde{\mathsf{U}}}(p\wedge\varphi) & \varphi_1 \Up p \varphi_2 & := (p \land \varphi_2) \lor\Bigl((\lnot p \lor \varphi_1) \land \Bigl((\lnot p \lor \varphi_1) \mathbin{\tilde{\mathsf{U}}} (p \land \varphi_2)\Bigr)\Bigr)
\\
\Y{p}\varphi & := \neg p \mathbin{\tilde{\mathsf{S}}}(p\wedge\varphi)
& \Op p \varphi & := \Y p \X p \varphi
\lor \Co \bigl(p \land \lnot \Y p \mathit{true} \land \varphi \bigr)
\lor \X p \bigl(\lnot \Y p \mathit{true} \land \varphi \bigr)
\end{align*}
It turns out that we can exploit our gossip protocol to translate every \TL formula into an equivalent \CFM:
\begin{theorem}
For all $\varphi \in \TL$, there exists a \CFM $\A_\varphi$ over $\Procs$ and $\Sigma \times \{0,1\}$
with $2^{|\varphi|^{\mathcal{O}(|P| \log |P|)}}$ states recognizing
the set of MSCs $(M,\gamma)$ such that, for all events $e$, $\gamma(e) = 1$
iff $M,e \models \varphi$.
\end{theorem}
\begin{proof}
We construct $\A_\varphi$ by induction on $\varphi$.
The cases $\varphi = a$, $\varphi = p$, $\varphi = \lnot \psi$, and
$\varphi_1\vee\varphi_2$ are straightforward.
For $\varphi = \Co \psi$, we compose $\A_\psi$ with a \CFM that
tests, for each event~$e$, whether it is parallel to some $1$-labeled event.
The existence of such a CFM (with $2^{2^{\mathcal{O}(|P| \log |P|)}}$ states)
has been shown in \cite[Lemma 14]{BFG-stacs18}.
Suppose that we have \CFMs $\A_{\varphi_1}$ and $\A_{\varphi_2}$ for
$\varphi_1$ and $\varphi_2$. The input MSCs of $\A_{\varphi_1 \mathbin{\tilde{\mathsf{S}}}
\varphi_2}$ will be ``pre-labeled'' using $\A_{\varphi_1}$ and
$\A_{\varphi_2}$, and by projection we can assume that we work with MSCs over
an alphabet $\{a,b,c,d\}$ where $a$ stands for $\varphi_1 \land \varphi_2$,
$b$ stands for $\varphi_1 \land \lnot \varphi_2$, $c$ stands for $\lnot
\varphi_1 \land \varphi_2$, and $d$ stands for $\lnot \varphi_1 \land \lnot
\varphi_2$. So the construction of $\A_{\varphi_1 \mathbin{\tilde{\mathsf{S}}} \varphi_2}$ comes
down to the construction of a \CFM over $\{a,b,c,d\}$ for the formula $(a \lor
b) \mathbin{\tilde{\mathsf{S}}} (a \lor c) \equiv \bigvee_{p,q \in P} \varphi_{p,q}$ where
$\varphi_{p,q}= q \land \Big((a \lor b) \mathbin{\tilde{\mathsf{S}}} (p \land (a \lor c))\Big)$.
Moreover, since ${<}=\bigcup_{\p\in\Pi^{\mathsf{gossip}}}\sem M {\p}$, it is not
difficult to check that, for all $e \in E_q$, we have: $M,e \models
\varphi_{p,q}$ iff
\begin{align*}
& \max\left\{\last \p e \mid \p \in a\cdot\Pi^{\mathsf{gossip}}_{p,q} \cup
c\cdot\Pi^{\mathsf{gossip}}_{p,q} \right\} \\
> {}
& \max\left\{\last \p e \mid \p \in \textstyle \bigcup_{r \in P}
\Pi^{\mathsf{gossip}}_{p,r} \cdot c \cdot \Pi^{\mathsf{gossip}}_{r,q}
\cup \Pi^{\mathsf{gossip}}_{p,r} \cdot d \cdot \Pi^{\mathsf{gossip}}_{r,q} \right\} \, .
\end{align*}
Indeed, this can be read as ``the last event $f \in E_p$ satisfying
$a \lor c$ in the past of $e$ happens after the last event $g \in E_p$
such that there exists $h$ with $g<h<e$ which is not labeled $a$ or~$b$''.
Moreover, by Lemma~\ref{lem:ord}, this property can be tested by
a \CFM.
As CFMs are closed under mirror languages, we can also construct a
CFM for $\varphi_1\mathbin{\tilde{\mathsf{U}}}\varphi_2$.
\end{proof}
Note that this result is orthogonal to all other known translations of logic
formulas into \emph{unbounded} \CFMs \cite{BolligJournal,BFG-stacs18,BKM-lmcs10}.
\section{Conclusion}\label{sec:conclusion}
We studied the gossip problem in a message-passing environment with unbounded FIFO channels.
Our non-deterministic protocol is of independent interest but also sheds light on the expressive power of communicating finite-state machines.
It allows us to embed well-known temporal logics into CFMs, i.e., to capture properties that typically require three first-order variables.
We believe that we can go further and exploit gossiping to capture even more expressive logics and other high-level specifications
based on the notion of message sequence graphs. We leave this to future work.
\section{Introduction}\label{s:intro}
Understanding the behaviour of rotating fluid flows is fundamental to many problems in geophysical fluid dynamics.
The simplest rotating fluid model is arguably the 2d Navier--Stokes equations, which are, however, unaffected by constant (rigid-body) rotation.
They are affected by differential rotation, such as that on a rotating sphere or its simplified model, the $\beta$-plane.
In this case one expects on physical grounds that the flow will become more zonal (i.e.\ less dependent on the ``longitude'' $x$) as the rotation rate increases.
To quantify this, we decompose the (scalar) vorticity as $\omega(x,y,t)=\bar{\w}(y,t)+\tilde{\w}(x,y,t)$, with the zonal part $\bar{\w}$ obtained by averaging $\omega$ over $x$.
In \cite{mah-dw:beta} and \cite{dw:nss2}, it was proved that the non-zonal part of the flow becomes small as $t\to\infty$, in the sense that $|\tilde{\w}(t)|_{L^2}^2\le \varepsilon M_0$ for sufficiently large $t$.
It was also proved that the global attractor $\mathcal{A}$ reduces to a point for $\varepsilon$ sufficiently small (but still finite).
Naturally, this raises the question of how the number of degrees of freedom in the flow scales with $\varepsilon$.
In the non-rotating case, the results on determining modes and attractor dimensions agreed (essentially, up to a logarithm) with those expected on physical grounds from the Kolmogorov theory, after two decades of effort \cite{foias-prodi:67,constantin-foias-temam:88,jones-titi:93}.
The present rotating case is more delicate, and there is as yet no physical consensus on the number of degrees of freedom as a function of $\varepsilon$: as discussed in \cite[\S9.1.1]{vallis:aofd}, there are several plausible estimates of the Rhines wavenumber $\kappa_\beta$, roughly the smallest wavenumber (largest scale) that supports turbulent flows \cite{rhines:75,vallis-maltrud:93}.
These physical estimates depend only on the energy $|{\boldsymbol{v}}|_{L^2}^2$ and enstrophy $|\omega|_{L^2}^2$, although arguably the arguments implicitly assume certain unspecified smoothness of the flows.
Extending the results from \cite{mah-dw:beta}, and using tools from \cite{jones-titi:92n,jones-titi:93}, in this paper we prove bounds on the number of determining modes and nodes related to the number of degrees of freedom in the rotating NSE.
Unlike the physical estimates in the previous paragraph, our rigorous results inevitably involve higher derivatives of the vorticity (and thus the forcing).
It is not clear at this point whether our bounds are optimal, particularly as one does not know what to expect on physical grounds.
A natural extension of our results is to bound the Hausdorff dimension of the global attractor $\mathcal{A}$.
This we have not been able to do, and it appears that current methods to estimate attractor dimensions (e.g., \cite{robinson:idds,doering-gibbon:aanse,ilyin-titi:08}) are not directly applicable to our problem.
Given a bound on the attractor dimension, an analogous bound on the number of determining nodes would follow from \cite{friz-robinson:01}: if $N>32\,\hbox{dim}_H\mathcal{A}$, then almost every set of $N$ nodes is determining.
We are not however aware of any result in the opposite direction (which is what is needed in our case).
We expect that our results could be extended to the more realistic case of the rotating sphere with minimal additional conceptual difficulty; cf.~\cite{dw:nss2}.
However, as the bounds obtained here may not be optimal, we do not do so in this paper.
\medskip\hbox to\hsize{\qquad\hrulefill\qquad}\medskip
We consider the two-dimensional rotating Navier--Stokes equations in the so-called $\beta$-plane approximation,
\begin{equation}\label{q:dvdt}\begin{aligned}
&\partial_t {\boldsymbol{v}} + {\boldsymbol{v}} \cdot \nabla {\boldsymbol{v}} + \beta y {\boldsymbol{v}}^{\perp} + \nabla p = \mu \Delta {\boldsymbol{v}} + f_{\vb}^{},\\
&\gb\!\cdot\!\vb = 0.
\end{aligned}\end{equation}
Here ${\boldsymbol{v}} = (v_1, v_2)$ is the velocity of the fluid, $p$ is the pressure, $\mu$ is the kinematic viscosity and $f_{\vb}^{}$ is the forcing on the velocity, assumed to be independent of time.
The term $\beta y{\boldsymbol{v}}^\perp$, where ${\boldsymbol{v}}^{\perp} := (-v_2, v_1)$, arises from the differentially rotating frame, which can be thought of as a linearised approximation of a region on a rotating sphere.
We take as our domain $\mathscr{M} = [0,L] \times [-L/2,L/2]$ with periodicity in both directions assumed.
We assume without loss of generality that
\begin{equation}\label{q:v0int}
\int_{\mathscr{M}} {\boldsymbol{v}} \>\mathrm{d}\boldsymbol{x}= 0.
\end{equation}
For consistency with the periodic domain, we also assume the following symmetries:
\begin{equation}\label{q:velsym}\begin{aligned}
&v_1(x,-y,t) = v_1(x,y,t),\\
&v_2(x,-y,t) = -v_2(x,y,t),
\end{aligned}\end{equation}
with analogous symmetries imposed on $f_{\vb}^{}$.
We drop all dimensions except length, so ${\boldsymbol{v}}$ and $f_{\vb}^{}$ have dimensions of length, $\nabla$ has dimension (length)$^{-1}$ and $\mu$ has dimension (length)$^2$; the $L^p$ norm $|\cdot|_{L^p(\mathscr{M})}$ has dimension (length)$^{2/p}$, with $|\cdot|_{L^\infty}^{}$ being naturally dimensionless.
Constants denoted by $c$ and numbered constants $c_i$ are dimensionless.
With this non-dimensionalisation, we take $\nabla^\perp\cdot{}$(\ref{q:dvdt}a) to get
\begin{equation}\label{q:dwdt}
\partial_t \omega + \partial(\psi,\omega) + \frac{\kappa_0}{\varepsilon}\partial_{x}\psi = \mu\Delta\omega + f,
\end{equation}
where the (scalar) vorticity is $\omega:=\nabla^\perp\cdot{\boldsymbol{v}}=\partial_x v_2-\partial_y v_1$, which conveniently is dimensionless.
Here $\partial(\cdot,\cdot)$ denotes the Jacobian, i.e.\ $\partial(f,g):=\partial_xf\,\partial_yg-\partial_xg\,\partial_yf$, which has the property that
\begin{equation}
(\partial(f,g),g)_{L^2(\mathscr{M})}^{} = 0
\end{equation}
for all $f$, $g$ such that the expression is defined.
The forcing (on vorticity) is $f:=\nabla^\perp\cdot f_{\vb}^{}$, $\varepsilon\sim1/\beta$ (both dimensionless) and $\kappa_0=2\pi/L$ is the Poincar\'e constant for $\mathscr{M}$.
For later use, we define the (dimensionless) parameter $\nu_0 := \mu\kappa_0^2$ and assume for convenience that $\nu_0\le1$ (we shall use the fact that $\mathrm{e}^{\nu_0}<3$ below).
The streamfunction $\psi$ is defined uniquely by
\begin{equation}
\psi := \Delta^{-1}\omega \qquad\text{with}\quad \int_\mathscr{M} \psi \>\mathrm{d}\boldsymbol{x} = 0.
\end{equation}
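Both the orthogonality property of the Jacobian stated above and the mode-wise inversion $\psi=\Delta^{-1}\omega$ can be spot-checked numerically. The Python sketch below uses hand-coded trigonometric test fields (an arbitrary choice made for this illustration) and relies on the fact that uniform-grid quadrature on the torus is exact for trigonometric polynomials of degree smaller than the grid size.

```python
import math

L = 2 * math.pi
N = 32

# Test fields: trigonometric polynomials with known analytic derivatives.
f  = lambda x, y: math.sin(x) * math.cos(y)
fx = lambda x, y: math.cos(x) * math.cos(y)
fy = lambda x, y: -math.sin(x) * math.sin(y)
g  = lambda x, y: math.cos(2 * x) * math.sin(y)
gx = lambda x, y: -2 * math.sin(2 * x) * math.sin(y)
gy = lambda x, y: math.cos(2 * x) * math.cos(y)

def jacobian(x, y):
    # d(f,g) = f_x g_y - g_x f_y
    return fx(x, y) * gy(x, y) - gx(x, y) * fy(x, y)

# The inner product (d(f,g), g)_{L^2} should vanish; the grid sum is an
# exact quadrature here since all frequencies involved are far below N.
h = L / N
inner = sum(jacobian(i * h, j * h) * g(i * h, j * h)
            for i in range(N) for j in range(N)) * h * h
assert abs(inner) < 1e-10

# Mode-wise inversion of the Laplacian: for omega = sin(x)cos(y), which has
# |k|^2 = 2, the zero-mean streamfunction is psi = -omega/2.
omega = lambda x, y: math.sin(x) * math.cos(y)
psi = lambda x, y: -0.5 * omega(x, y)
x0, y0 = 0.7, 1.3
lap_psi = -2 * psi(x0, y0)   # analytic Laplacian of sin(x)cos(y) times -1/2
assert abs(lap_psi - omega(x0, y0)) < 1e-12
```

The first check is the discrete counterpart of the identity $(\partial(f,g),g)_{L^2(\mathscr{M})}^{}=0$, which underlies the energy estimates used throughout.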
We note that, since $\omega$ is a curl,
\begin{equation}\label{q:0int}
\int_{\mathscr{M}} \omega \>\mathrm{d}\boldsymbol{x}= 0.
\end{equation}
Moreover, the symmetries \eqref{q:velsym} imply
\begin{equation}\label{q:wsym}
\omega(x,-y,t) = -\omega(x,y,t).
\end{equation}
It follows from the symmetries on $f_{{\boldsymbol{v}}}$ that $f(x,-y,t)=-f(x,y,t)$ for all ${\boldsymbol{x}}$ and $t$.
Thanks to \eqref{q:v0int} and \eqref{q:0int}, the $H^s$ norm is equivalent to
\begin{equation}
|\nabla^s\omega|_{L^2}^2 := |(-\Delta)^{s/2}\omega|_{L^2}^2.
\end{equation}
It is a classical result that, given $f_{\vb}^{}$ and ${\boldsymbol{v}}(0)\in L^2$, the NSE \eqref{q:dwdt} has a unique global solution which, for sufficiently large times, is bounded solely in terms of the forcing (i.e.\ independently of the initial data), through the Grashof number
\begin{equation}\label{q:gr}
\mathcal{G} = \frac{|f_{\vb}^{}|_{L^2}^{}}{\mu^2\kappa_0^2} =: \mathcal{G}_0.
\end{equation}
Defining ``higher Grashof numbers'' by
\begin{equation}
\mathcal{G}_m := \frac{|\nabla^mf_{\vb}^{}|_{L^2}^{}}{(\mu\kappa_0)^{2-m}},
\end{equation}
we can bound derivatives of the vorticity independently of the initial data,
\begin{equation}\label{q:wgm}
|\nabla^m\omega(t)|_{L^2}^2 + \mu\int_0^t|\nabla^{m+1}\omega|_{L^2}^2\,\mathrm{e}^{\nu_0(\tau-t)} \>\mathrm{d}\tau \le c(m)\,\frac{\mathcal{G}_m^2(1+c'(m)\,\nu_0^2\,\mathcal{G}_0^2)^m}{(\mu\kappa_0)^{2m-2}}
\end{equation}
for all $t\ge T_m(|{\boldsymbol{v}}(0)|_{L^2}^{},|\nabla^{m-1}f|_{L^2}^{}; \mu)$ as long as $\mathcal{G}_m$ is defined.
\section{Background and Main Results}\label{s:results}
It was discovered fifty years ago \cite{foias-prodi:67} that the solutions of 2d NSE are determined essentially by a finite number of degrees of freedom.
Following Foias and Prodi, we consider two solutions $\omega$ and $\omega^\sharp$ of \eqref{q:dwdt} with the same $f\in H^{-1}$ but potentially different initial data ${\boldsymbol{v}}(0)$ and ${\boldsymbol{v}}^\sharp(0) \in L^2$,
\begin{equation}\label{q:beta1}
\partial_t \omega + \partial(\psi,\omega) + \frac{\kappa_0}{\varepsilon}\partial_{x}\psi \quad \enskip = \mu\Delta\omega + f
\end{equation}
\begin{equation}\label{q:beta2}
\partial_t \w^{\sharp} + \partial(\psi^{\sharp},\w^{\sharp}) + \frac{\kappa_0}{\varepsilon}\partial_x \psi^{\sharp} = \mu\Delta\w^{\sharp} + f,
\end{equation}
and note that their difference $\delta\omega:=\omega-\omega^\sharp$ satisfies
\begin{equation}\label{q:dwbeta}
\partial_t\delta\omega + \partial(\psi^{\sharp},\delta\omega) + \partial(\delta\psi,\omega) + \frac{\kappa_0}{\varepsilon}\partial_x \delta\psi = \mu\Delta\delta\omega.
\end{equation}
We expand $\delta\omega$ in Fourier series,
\begin{equation}
\delta\omega({\boldsymbol{x}},t) = \sum_{{\boldsymbol{k}}\in\Zahl_L}\,\delta\omega_{\boldsymbol{k}}(t)\,\mathrm{e}^{\mathrm{i}{\boldsymbol{k}}\cdot{\boldsymbol{x}}}
\end{equation}
where $\Zahl_L:=\{(2\pi l_1/L,2\pi l_2/L):(l_1,l_2)\in\mathbb{Z}^2\}$.
All wavenumber sums, unless otherwise stated, are henceforth understood to be over $\Zahl_L$.
Introducing a threshold wavenumber $\kappa$, we define the $L^2$ projection ${\sf P}_\kappa$ and
\begin{align}
\dwm<({\boldsymbol{x}},t) &:= {\sf P}_\kappa\delta\omega({\boldsymbol{x}},t) & &:= \sum_{|{\boldsymbol{k}}|\leq\kappa}\,\delta\omega_{\boldsymbol{k}}(t) \ex^{\im{\boldsymbol{k}}\cdot{\boldsymbol{x}}},\hbox to 70pt{}\\
\dwm>({\boldsymbol{x}},t) &:= \delta\omega({\boldsymbol{x}},t) - \dwm<({\boldsymbol{x}},t) & &\,= \sum_{|{\boldsymbol{k}}|>\kappa}\,\delta\omega_{\boldsymbol{k}}(t) \ex^{\im{\boldsymbol{k}}\cdot{\boldsymbol{x}}}.
\end{align}
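This splitting is a plain Fourier cutoff, and the two parts are $L^2$-orthogonal, so that $|\delta\omega|_{L^2}^2 = |\dwm<|_{L^2}^2 + |\dwm>|_{L^2}^2$ by Parseval. The Python sketch below illustrates both points on a hypothetical finite set of Fourier coefficients (real, for simplicity).

```python
import math

# delta-omega given by a few Fourier coefficients, indexed by integer
# wavenumber pairs (L = 2*pi here); the sample data is hypothetical.
coeffs = {(0, 1): 1.0, (1, 0): 0.5, (2, 2): -0.3, (3, 1): 0.2, (5, 0): 0.1}

def split(coeffs, kappa):
    # low modes |k| <= kappa are the "determining" ones; the rest are high
    low = {k: c for k, c in coeffs.items() if math.hypot(*k) <= kappa}
    high = {k: c for k, c in coeffs.items() if math.hypot(*k) > kappa}
    return low, high

low, high = split(coeffs, kappa=2.0)

# Orthogonality of the splitting (Parseval): energies simply add up.
energy = lambda c: sum(v * v for v in c.values())
assert abs(energy(coeffs) - energy(low) - energy(high)) < 1e-12
```

With the cutoff $\kappa = 2$, only the modes $(0,1)$ and $(1,0)$ of this sample fall into the low-mode part.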
The central idea is that if one takes $\kappa$ sufficiently large, the behaviour of the NSE in the long-time limit is determined only by the low ``determining'' modes (i.e.\ ${\sf P}_{\kappa}\,\omega$), in the sense that if $|{\sf P}_\kappa\delta\omega(t)|_{L^2(\mathscr{M})}^{}\to0$ as $t\to\infty$, then also $|\delta\omega(t)|_{L^2(\mathscr{M})}^{}\to0$ as $t\to\infty$.
The bound on the number of determining modes was improved considerably in \cite{foias-manley-temam-treve:83}, approaching up to a logarithm what one expects on physical grounds \cite{manley-treve:82}.
Subsequently, Jones and Titi \cite{jones-titi:93} obtained a bound free of the ``spurious'' logarithmic term:
\begin{theorem}\label{t:jt93m}
Let $\delta\omega$ satisfy \eqref{q:dwbeta}.
There exists an absolute constant $c_1$ such that if
\begin{equation}\label{q:jt93m}
\kappa/\kappa_0 \ge c_1\,\mathcal{G}_0^{1/2},
\end{equation}
then
\begin{equation*}
\lim_{t \to \infty} |{\sf P}_\kappa\delta\omega(t)|_{L^2(\mathscr{M})}^{} = 0
\quad \text{implies} \quad
\lim_{t \to \infty} |\delta\omega(t)|_{L^2(\mathscr{M})}^{} = 0.
\end{equation*}
\end{theorem}
\noindent
We remark that this bound supports the physical intuition that turbulence is extensive, in the sense that if one were to merge two similar systems (having the same dimensions and Grashof numbers), the number of degrees of freedom (viz.\ determining modes), which scales as $(\kappa/\kappa_0)^2$, would double.
Similarly, following \cite{jones-titi:93} and \cite{foias-temam:84}, we call a set of points $\mathcal{E} = \{{\boldsymbol{x}}_1,\cdots,{\boldsymbol{x}}_N\} \subset \mathscr{M}$ determining nodes if
\begin{equation}
\lim_{t\to\infty} \delta\omega({\boldsymbol{x}}_i,t) = 0 \quad \forall i \in \{1, \cdots , N\} \quad \text{implies} \quad \lim_{t\to\infty}|\delta\omega(t)|_{L^2(\mathscr{M})}^{} = 0.
\end{equation}
Foias and Temam \cite{foias-temam:84} first proved the existence of such a set and gave a bound on the maximal distance between individual nodes, while Jones and Titi \cite{jones-titi:93} gave the following qualitatively optimal bound on the number of determining nodes.
\begin{theorem}\label{t:jt93n}
Let $\delta\omega$ satisfy \eqref{q:dwbeta}.
There exists an absolute constant $c_2$ and a set of determining nodes $\mathcal{E} = \{{\boldsymbol{x}}_1,\cdots,{\boldsymbol{x}}_N\}$, where
\begin{equation}\label{q:jt93n}
N \geq c_2 \,\mathcal{G}_0,
\end{equation}
i.e.\ $\lim_{t\to\infty}\delta\omega({\boldsymbol{x}}_i,t) = 0$ for $i \in \{1, \cdots , N\}$ implies that $\lim_{t\to\infty}|\delta\omega(t)|_{L^2(\mathscr{M})}^{} = 0$.
\end{theorem}
The bounds in \eqref{q:jt93m} and \eqref{q:jt93n} are qualitatively equivalent, i.e.\ they involve the same number of degrees of freedom (possibly up to a constant).
They are also independent of the rotation rate $\varepsilon^{-1}$, i.e.\ they hold with or without rotation.
On physical grounds, however, one expects that under a differential rotation, the number of determining modes and nodes would decrease as the rotation rate increases.
To this end, we begin by splitting the vorticity into its zonal (independent of $x$) and non-zonal components,
\begin{equation}
\bar{\w}(y,t) := \frac{1}{L}\int_0^L \omega(x,y,t) \>\mathrm{d}x
\quad\text{and}\quad
\tilde{\w}(x,y,t) := \omega(x,y,t) - \bar{\w}(y,t).
\end{equation}
For convenience, we also define projections to the zonal and non-zonal components,
\begin{equation}
\bar{\sf P}\omega := \bar{\w}
\quad\text{and}\quad
\tilde{\sf P}\omega := (1-\bar{\sf P})\,\omega = \tilde{\w}.
\end{equation}
These are orthogonal projections in $H^m$, commuting with ${\sf P}_\kappa$.
Moreover, they satisfy
\begin{equation}\label{q:bb0}
\partial(\rho,\gamma) = 0 \qquad \text{whenever} \qquad \partial_x\rho = \partial_x\gamma = 0.
\end{equation}
The key ingredient for the results in this paper is the bound on the non-zonal component $\tilde{\w}$ from \cite{mah-dw:beta}.
Here we state it in a form that shows the explicit dependence on $\mathcal{G}_m$:
\begin{theorem}\label{t:aw}
Assume that the initial data satisfy ${\boldsymbol{v}}(0) \in L^2(\mathscr{M})$ and that $|\Delta f|_{L^2} < \infty$.
Then there exist a $\mathcal{T}_0(|{\boldsymbol{v}}(0)|_{L^2}^{},|\Delta f|_{L^2}^{};\mu)$ and a constant $c_3(\nu_0)$ such that
\begin{align}
&|\tilde{\w}(t)|_{L^2}^2 + \mu\int_t^{t+1} |\nabla \tilde{\w}(\tau)|_{L^2}^2 \>\mathrm{d}\tau \le \varepsilon M_0/\kappa_0^2\label{q:aw}\\
&|\tilde{\w}(t)|_{L^2}^2 + \mu\int_0^{t} |\nabla \tilde{\w}(\tau)|_{L^2}^2 \mathrm{e}^{\nu_0(\tau-t)}\>\mathrm{d}\tau \le \varepsilon M_0/\kappa_0^2\label{q:aw'}
\end{align}
for all $t \ge \mathcal{T}_0$, where
\begin{equation}\label{q:m0}
M_0 = c_3\,\mathcal{G}_2\mathcal{G}_3(1+\mathcal{G}_0^2).
\end{equation}
\end{theorem}
\noindent
We note that our $M_0$ corresponds to $\kappa_0^2\,M_0$ in \cite{mah-dw:beta}; we have also tightened the bound slightly (as is evident from the proof), with $\mathcal{G}_2\,\mathcal{G}_3$ in place of the $\mathcal{G}_3^2$ of \cite{mah-dw:beta}.
\medskip
For our tighter $\varepsilon$-dependent bounds on the determining modes, it is interesting to consider several forms of zonal forcing often used in numerical simulations of 2d turbulence.
One case is where $\bar{f}$ is bandwidth-limited, in the sense that there is a (modest) $\kappa_f$ such that
\begin{equation}\label{q:fkpf}
\bar{f} ={\sf P}_{\kappa_f}\bar{f}.
\end{equation}
Another case is where $\bar{f}$ decays exponentially in Fourier space (analytic $\bar{f}$),
\begin{equation}\label{q:fexa}
|\bar{f}_{(0,k)}^{}| \le \frac{\nu_0^2\,\mathcal{G}_0^{}}{2\kappa_0}\Bigl(\frac{2\alpha}{1+2\alpha}\Bigr)^{1/2}\,\mathrm{e}^{\alpha(1-|k|/\kappa_0)},
\end{equation}
where $\alpha>0$.
Finally, we consider algebraically-decaying $\bar{f}$,
\begin{equation}\label{q:fhs}
|\bar{f}_{(0,k)}^{}| \le \frac{\nu_0^2\,\kappa_0^{s-1}\,\mathcal{G}_0}{\sqrt{2}\,\zeta(2s+2)^{1/2}}\,|k|^{-s}
\end{equation}
for $s>5/2$ in order that $\bar{f}\in H^2$.
In both \eqref{q:fexa} and \eqref{q:fhs}, the constants have been chosen so that $|\nabla^{-1}\bar{f}|/(\mu\kappa_0)^2\le\mathcal{G}_0$, consistent with \eqref{q:gr}.
We stress that no assumptions are made on $\tilde{f}$, beyond the $H^2$ regularity needed for Theorem~\ref{t:aw}.
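As a quick consistency check of the normalization in \eqref{q:fhs} (the case \eqref{q:fexa} is analogous): summing over $k=\pm\kappa_0 j$ with $j\ge1$,
\begin{equation*}
\sum_{k\neq0}\frac{|\bar{f}_{(0,k)}^{}|^2}{k^2}
\le 2\sum_{j\ge1}\frac{\nu_0^4\,\kappa_0^{2s-2}\,\mathcal{G}_0^2}{2\,\zeta(2s+2)}\,(\kappa_0 j)^{-2s-2}
= \frac{\nu_0^4\,\mathcal{G}_0^2}{\kappa_0^4},
\end{equation*}
since $\sum_{j\ge1}j^{-(2s+2)}=\zeta(2s+2)$; so, up to the Parseval normalization and the constant relating $\nu_0$ and $\mu\kappa_0^2$, this is precisely $|\nabla^{-1}\bar{f}|/(\mu\kappa_0)^2\le\mathcal{G}_0$.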
Our main result on determining modes follows.
\begin{theorem}\label{t:modes}
Let $\delta\omega$ be the solution of \eqref{q:dwbeta} with $f\in H^2(\mathscr{M})$.
Then the low modes ${\sf P}_\kappa\,\omega$ are determining, i.e.\
$\lim_{t\to\infty}|{\sf P}_\kappa\,\delta\omega(t)|_{L^2}^{}=0$ implies that $\lim_{t\to\infty}|\delta\omega(t)|_{L^2}^{}=0$, if any of the following conditions holds for some constants $c_4$, $c_5$, $c_6$ and all sufficiently small $\varepsilon$:\break
\noindent
(a)~if $\bar{f}$ satisfies \eqref{q:fkpf} and
\begin{equation}\label{q:bdkpf}
\kappa/\kappa_0 > c_4(\nu_0)\,\max\bigl\{ \varepsilon^{1/4} M_0^{1/4},
\,(\kappa_f/\kappa_0)^{3/8}\,\mathcal{G}_0^{1/4} \bigr\}; \text{ or}
\end{equation}
(b)~if $\bar{f}$ satisfies \eqref{q:fhs} and
\begin{equation}\label{q:bdhs}
\kappa/\kappa_0 > c_5(\nu_0,s)\,\max\bigl\{ \varepsilon^{1/4}M_0^{1/4},\,
\, \mathcal{G}_0^{\,(2s+5)/(8s+14)} \bigr\}; \text{ or}
\end{equation}
(c)~if $\bar{f}$ satisfies \eqref{q:fexa} and
\begin{equation}\label{q:bdexa}
\kappa/\kappa_0 > c_6(\nu_0)\,\max\bigl\{ \varepsilon^{1/4} M_0^{1/4},
\,F_\alpha\bigl(\nu_0^{-1/2}\mathcal{G}_0\bigr)^{3/8}\mathcal{G}_0^{1/4} \bigr\}
\end{equation}
where $F_\alpha$ is defined in \eqref{q:falp} below.
\end{theorem}
\noindent
We note that for large $u$, $F_\alpha(u)=\frac{\log u}{2\alpha}+\cdots$, so the last term scales essentially as $\mathcal{G}_0^{1/4}$.
The smallness requirement on $\varepsilon$, namely \eqref{q:eps-kf}, \eqref{q:eps-hs} and \eqref{q:eps-exa} below, is not essential and can be removed at the expense of messier expressions for the above bounds.
As is apparent from the proof below, heuristically one may regard the $\varepsilon^{1/4}M_0^{1/4}$ in \eqref{q:bdkpf} and \eqref{q:bdexa} as arising from the non-zonal forcing $\tilde{f}$ and the term involving $\mathcal{G}_0$ as arising from the zonal forcing $\bar{f}$.
That the latter bound scales essentially as $\mathcal{G}_0^{1/4}$ as opposed to $\mathcal{G}_0^{1/2}$ in the general (``non-rotating'') case suggests that, in the limit of small $\varepsilon$, the differentially rotating NSE \eqref{q:dwbeta} essentially consists of a one-dimensional ``mean'' plus a small amount of two-dimensional ``noise'', which agrees with what one would expect on physical grounds.
Barring the discovery of yet unforeseen cancellations, it is therefore unlikely that one could obtain a bound with a smaller power of $\mathcal{G}_0$ than $\frac14$.
Similar considerations apply to \eqref{q:bdhs}, where since $\bar{f}\in H^2$ by hypothesis, one must take $s>5/2$, giving a limiting worst-case dependence of $\mathcal{G}_0^{\,5/17}$ (the exponent $(2s+5)/(8s+14)$ evaluated at $s=5/2$).
\medskip
Analogous to Theorem \ref{t:modes}, we have the following bounds on determining nodes:
\begin{theorem}\label{t:nodes}
Let $\delta\omega$ be the solution of \eqref{q:dwbeta} with $f\in H^2(\mathscr{M})$.
Then there exists a set of determining nodes $\mathcal{E}=\{{\boldsymbol{x}}_1,\cdots,{\boldsymbol{x}}_N\}$ whenever
\begin{align}
&N > c_7(\nu_0)\,\max\bigl\{ \varepsilon^{1/2}M_0^{1/2}, (\kappa_f/\kappa_0)^{1/3}\,\mathcal{G}_0^{2/3} \bigr\}
\text{ when $\bar{f}$ satisfies \eqref{q:fkpf}; or}\label{q:nkpf}\\
&N > c_8(\nu_0,s)\,\max\bigl\{ \varepsilon^{1/2}M_0^{1/2}, \,\mathcal{G}_0^{\,(4s+5)/(6s+5)}\bigr\}
\text{ when $\bar{f}$ satisfies \eqref{q:fhs}; or}\label{q:nhs}\\
&N > c_9(\nu_0)\,\max\bigl\{ \varepsilon^{1/2}M_0^{1/2}, \,F_{\alpha}(\nu_0^{-1}\,\mathcal{G}_0^{2/3})^{1/3}\,\mathcal{G}_0^{2/3} \bigr\}
\text{ when $\bar{f}$ satisfies \eqref{q:fexa}},\label{q:nexa}
\end{align}
for some constants $c_7$, $c_8$ and $c_9$, with $F_{\alpha}$ defined in \eqref{q:nfalp} below, provided $\varepsilon\le c\,\nu_0/M_0$.
\end{theorem}
\noindent
These nodal results are weaker than their modal counterparts, with the ``zonal part'' scaling essentially as $\mathcal{G}_0^{2/3}$ rather than $\mathcal{G}_0^{1/2}$.
We believe that this is an artefact of our approach and not intrinsic to the problem.
As in the modal case, the smallness requirement for $\varepsilon$ is not essential and can be removed in exchange for messier expressions in the above bounds.
\section{Proof: Determining Modes}\label{s:pf-modes}
This section is devoted to proving Theorem~\ref{t:modes} using more refined estimates of the nonlinear terms and of the zonal vorticity $\bar{\w}$.
For conciseness, when no ambiguity may arise, we write $|\cdot|_p^{}:=|\cdot|_{L^p}^{}$, $|\cdot|:=|\cdot|_{L^2}^{}$ and $(\cdot,\cdot):=(\cdot,\cdot)_{L^2}^{}$.
As usual, $c$ denotes a dimensionless constant whose value may differ in each use.
We also assume for convenience that $\varepsilon\le1$.
First, we collect some basic inequalities.
From the Fourier expansion, we have the following ``improved'' and ``reverse'' Poincar\'e inequalities:
\begin{align}
&\kappa\,|\dwm>|_{2}^{} \,\le |\nabla\dwm>|_{2}^{}\label{q:arrPoi1}\\
&|\nabla\dwm<|_{2}^{} \le \kappa\,|\dwm<|_{2}^{}.\label{q:arrPoi2}
\end{align}
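Both are immediate from Parseval's identity; for instance,
\begin{equation*}
|\nabla\dwm>|_2^2 = \sum_{|{\boldsymbol{k}}|>\kappa}|{\boldsymbol{k}}|^2\,|\delta\omega_{\boldsymbol{k}}|^2
\ge \kappa^2\sum_{|{\boldsymbol{k}}|>\kappa}|\delta\omega_{\boldsymbol{k}}|^2
= \kappa^2\,|\dwm>|_2^2,
\end{equation*}
and \eqref{q:arrPoi2} follows in the same way since $|{\boldsymbol{k}}|\le\kappa$ on the low modes.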
Next, we recall Agmon's inequality in 2d,
\begin{equation}\label{q:ag2}
|u|_{\infty}^{} \le c\,|u|_2^{1/2}|\Delta u|_2^{1/2}
\end{equation}
for $u \in H^2(\mathscr{M})$.
For functions depending on $y$ and $t$ only, we have the improved version (with the $L^p$ norms always taken over $\mathscr{M}$),
\begin{equation}\label{q:ag1}
|\bar{v}|_{\infty}^{} \le c\,\kappa_0^{1/2}|\bar{v}|_2^{1/2}|\nabla\bar{v}|_2^{1/2}.
\end{equation}
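A sketch of where the extra $\kappa_0^{1/2}$ comes from: for mean-zero zonal $\bar{v}=\bar{v}(y,t)$ (as in all uses below), the one-dimensional Agmon inequality in $y$ gives $|\bar{v}|_\infty^{}\le c\,|\bar{v}|_{L^2(\mathrm{d}y)}^{1/2}|\partial_y\bar{v}|_{L^2(\mathrm{d}y)}^{1/2}$, and each $L^2(\mathrm{d}y)$ norm equals $L^{-1/2}$ times the corresponding norm over $\mathscr{M}$, so
\begin{equation*}
|\bar{v}|_{\infty}^{} \le \frac{c}{L^{1/2}}\,|\bar{v}|_{L^2(\mathscr{M})}^{1/2}|\nabla\bar{v}|_{L^2(\mathscr{M})}^{1/2}
= c\,\kappa_0^{1/2}|\bar{v}|_2^{1/2}|\nabla\bar{v}|_2^{1/2},
\end{equation*}
using $\kappa_0\sim1/L$ up to a constant absorbed in $c$.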
We note the following integral inequality:
let $\nu>0$ be fixed and let $u(t)\ge0$ be such that, for all $t\ge1$,
\begin{equation}
\int_0^t u(\tau)\,\mathrm{e}^{\nu(\tau-t)}\>\mathrm{d}\tau \le M.
\end{equation}
Then, for any $t>0$,
\begin{equation}\label{q:intineq1}
\int_t^{t+1} u(\tau) \>\mathrm{d}\tau
\le \int_t^{t+1} \mathrm{e}^{\nu(\tau-t)} u(\tau)\>\mathrm{d}\tau
\le \int_0^{t+1} \mathrm{e}^{\nu(\tau-t)} u(\tau)\>\mathrm{d}\tau
\le \mathrm{e}^\nu M,
\end{equation}
where the last inequality follows by writing $\mathrm{e}^{\nu(\tau-t)}=\mathrm{e}^{\nu}\,\mathrm{e}^{\nu(\tau-(t+1))}$ and applying the hypothesis at time $t+1\ge1$.
Next, we quote the following Gronwall-type lemma from \cite{foias-manley-temam-treve:83,jones-titi:93}.
\begin{lemma}\label{t:gronwall}
Let $\alpha$ and $\beta$ be locally integrable functions on $(0,\infty)$ satisfying
\begin{equation}\begin{aligned}
&\liminf_{t\to\infty}\int_t^{t+1} \alpha(\tau)\>\mathrm{d}\tau > 0,
\hbox to 30pt{} \limsup_{t\to\infty}\int_t^{t+1} \alpha^{-}(\tau)\>\mathrm{d}\tau < \infty,\\
&\lim_{t\to\infty}\int_t^{t+1} \beta^{+}(\tau)\>\mathrm{d}\tau = 0,
\end{aligned}\end{equation}
where $\alpha^{-} := \max\{-\alpha,0\}$ and $\beta^{+} := \max\{\beta,0\}$.
Suppose $\xi$ is an absolutely continuous non-negative function on $(0,\infty)$ such that
\begin{equation}
\ddt{\xi} + \alpha\xi \leq \beta
\end{equation}
almost everywhere.
Then $\xi(t) \to 0$ as $t \to \infty$.
\end{lemma}
We first use the bound \eqref{q:aw} on the non-zonal $\tilde{\w}$ to derive a useful control on the {\em zonal\/} vorticity $\bar{\w}$.
Fixing some $\kappa_f\ge\kappa_0$, let $\wfb{>f}=(1-{\sf P}_{\kappa_f})\bar{\w}$ and $\bar{f}^{\sst{>f}}=(1-{\sf P}_{\kappa_f})\bar{f}$.
We multiply \eqref{q:dwdt} by $\wfb{>f}$ in $L^2$ and compute
\begin{align*}
\frac12\ddt{}|\wfb{>f}|^2 &+ \mu|\nabla\wfb{>f}|^2
= - (\partial(\psi,\omega),\wfb{>f}) + (f,\wfb{>f})\\
&= - (\partial(\tilde{\psi},\tilde{\w}),\wfb{>f}) + (\bar{f}^{\sst{>f}},\wfb{>f}) \hspace{90pt} \text{by \eqref{q:bb0}}\\
&\le |\nabla\tilde{\psi}|_{\infty}^{} |\tilde{\w}|_2^{}|\nabla\wfb{>f}|_2^{} + \frac{2}{\mu}|\nabla^{-1}\ffb{>f}|^2 + \frac{\mu}{8}|\nabla\wfb{>f}|^2\\
&\le \frac{2}{\mu}|\nabla\tilde{\psi}|_\infty^2|\tilde{\w}|^2 + \frac{2}{\mu}|\nabla^{-1}\ffb{>f}|^2 + \frac{\mu}{4}|\nabla\wfb{>f}|^2\\
&\le \frac{c}{\nu_0}\,\varepsilon M_0 |\nabla\tilde{\psi}|\,|\nabla\tilde{\w}| + \frac{2}{\mu}|\nabla^{-1}\ffb{>f}|^2 + \frac{\mu}{4}|\nabla\wfb{>f}|^2\\
&\le \frac{c\,\varepsilon M_0}{\nu_0\,\kappa_0^2} |\nabla\tilde{\w}|^2 + \frac{2}{\mu}|\nabla^{-1}\ffb{>f}|^2 + \frac{\mu}{4}|\nabla\wfb{>f}|^2
\end{align*}
where we have used \eqref{q:aw} and \eqref{q:ag2} for the penultimate line.
This gives
\begin{equation}\label{q:genwfb}
\ddt{}|\wfb{>f}|^2 + \frac{3}{2}\,\mu|\nabla\wfb{>f}|^2 \le \frac{c\,\varepsilon M_0}{\nu_0\,\kappa_0^2}|\nabla\tilde{\w}|^2 + \frac{4}{\mu}|\nabla^{-1}\ffb{>f}|^2.
\end{equation}
Applying Poincar\'e on the lhs and multiplying by $\mathrm{e}^{\nu_0 t}$, we obtain
\begin{equation}
\ddt{}\bigl(\mathrm{e}^{\nu_0 t}|\wfb{>f}|^2\bigr) + \frac\mu2\mathrm{e}^{\nu_0 t}|\nabla\wfb{>f}|^2
\le \frac{c\,\varepsilon M_0}{\nu_0\,\kappa_0^2}|\nabla\tilde{\w}|^2\mathrm{e}^{\nu_0 t} + \frac{4\,\mathrm{e}^{\nu_0 t}}{\mu}|\nabla^{-1}\ffb{>f}|^2,
\end{equation}
and, upon integration in time,
\begin{equation}\label{q:wfbt}\begin{aligned}
|\wfb{>f}(t)|^2 &+ \frac\mu2 \int_0^t \mathrm{e}^{\nu_0(\tau-t)}|\nabla\wfb{>f}|^2 \>\mathrm{d}\tau\\
&\le \mathrm{e}^{-\nu_0t}|\wfb{>f}(0)|^2 + \frac{c\,\varepsilon M_0}{\nu_0\,\kappa_0^2}\int_0^t |\nabla\tilde{\w}|^2\mathrm{e}^{\nu_0(\tau-t)}\>\mathrm{d}\tau + \frac{4}{\mu\nu_0}|\nabla^{-1}\ffb{>f}|^2\\
&\le \frac{c_*\,\varepsilon^2 M_0^2}{2\nu_0^2\,\kappa_0^2} + \frac{4}{\mu\nu_0}|\nabla^{-1}\ffb{>f}|^2
\end{aligned}\end{equation}
where we have used \eqref{q:aw'} for the last line, taken $t$ sufficiently large and adjusted the constant.
We now consider the consequences of the hypotheses \eqref{q:fkpf}--\eqref{q:fhs}.
First, when $\bar{f}$ satisfies \eqref{q:fkpf}, we have $\bar{f}^{\sst{>f}}=0$, giving, using \eqref{q:intineq1} and the fact that $\mathrm{e}^{\nu_0}<3$,
\begin{equation}\label{q:bd-wbfk}
\int_t^{t+1} |\nabla\bar{\w}^{\sst{>f}}(\tau)|_{L^2}^2 \>\mathrm{d}\tau
\le 3c_*\varepsilon^2M_0^2/\nu_0^3.
\end{equation}
Next, for $\bar{f}$ satisfying \eqref{q:fhs}, we have the bound
\begin{equation}
|\nabla^{-1}\ffb{>f}|^2
\le \frac{\nu_0^4(\kappa_0/\kappa_f)^{2s+1}}{(2s+1)\zeta(2s+2)}\frac{\mathcal{G}_0^2}{\kappa_0^4}.
\end{equation}
Using this in \eqref{q:wfbt} and dropping $|\wfb{>f}(t)|^2$ on the lhs gives
\begin{equation}\label{q:bd-wbfs}
\int_0^t \mathrm{e}^{\nu_0(\tau-t)}|\nabla\wfb{>f}|^2 \>\mathrm{d}\tau
\le \frac{c_*\varepsilon^2M_0^2}{\nu_0^3} + \frac{8\,(\kappa_0/\kappa_f)^{2s+1}\nu_0}{(2s+1)\,\zeta(2s+2)}\,{\mathcal{G}_0^2}.
\end{equation}
Finally, when $\bar{f}$ satisfies \eqref{q:fexa}, we have
\begin{equation}
|\nabla^{-1}\ffb{>f}|^2
\le \nu_0^4\frac{2\alpha}{1+2\alpha}\frac{\mathrm{e}^{2\alpha(1-\kappa_f/\kappa_0)}}{1-\mathrm{e}^{-2\alpha}}\frac{\mathcal{G}_0^2}{\kappa_0^4}
\le \frac{\nu_0^4}{\kappa_0^4}\mathrm{e}^{2\alpha(1-\kappa_f/\kappa_0)}\mathcal{G}_0^2,
\end{equation}
where the second inequality uses $1-\mathrm{e}^{-2\alpha}\ge2\alpha/(1+2\alpha)$, a consequence of $\mathrm{e}^{-x}\le1/(1+x)$ for $x\ge0$.
Using this in \eqref{q:wfbt} and dropping $|\wfb{>f}(t)|^2$ on the lhs as before gives
\begin{equation}\begin{aligned}\label{q:bd-wbfx}
\int_0^t \mathrm{e}^{\nu_0(\tau-t)}|\nabla\wfb{>f}|^2 \>\mathrm{d}\tau
&\le \frac{c_*\varepsilon^2M_0^2}{\nu_0^3} + 8\nu_0\mathrm{e}^{2\alpha(1-\kappa_f/\kappa_0)}\mathcal{G}_0^2.
\end{aligned}\end{equation}
For both \eqref{q:bd-wbfs} and \eqref{q:bd-wbfx}, suitable $\kappa_f$ will be chosen when these inequalities are used below in the proof of Theorem \ref{t:modes}.
\begin{proof}[Proof of Theorem \ref{t:modes}]
We multiply \eqref{q:dwbeta} by $\dwm>$ in $L^2$ to obtain
\begin{equation*}\begin{aligned}
(\partial_t\delta\omega,\dwm>) + (\partial(\psi^{\sharp},\delta\omega),\dwm>) +(\partial(\delta\psi,\omega),\dwm>) &+ \frac{\kappa_0}{\varepsilon}(\partial_x\delta\psi,\dwm>)\\
&= (\mu\Delta\delta\omega, \dwm>).
\end{aligned}\end{equation*}
Integration by parts shows that the $\kappa_0/\varepsilon$ term vanishes, so
\begin{equation}\label{q:dwbeta2}\begin{aligned}
\frac{1}{2}\ddt{}|\dwm>|_2^2 &+ \mu|\nabla\dwm>|_2^2\\
&= -(\partial(\psi^{\sharp},\delta\omega),\dwm>) - (\partial(\dpsm<,\omega),\dwm>) - (\partial(\dpsm>,\omega),\dwm>).
\end{aligned}\end{equation}
For the first term on the right hand side, we use the fact that $(\partial(\psi^{\sharp},\dwm>),\dwm>) = 0$ to get
\begin{equation*}
(\partial(\psi^{\sharp},\delta\omega),\dwm>) = (\partial(\psi^{\sharp},\dwm<),\dwm>).
\end{equation*}
As for the third term on the right hand side of \eqref{q:dwbeta2}, we write $\omega = \bar{\w} + \tilde{\w}$ to get
\begin{equation}\label{q:psszonal}
(\partial(\dpsm>,\omega),\dwm>) = (\partial(\dpsm>, \tilde{\w}),\dwm>) + (\partial(\dpsm>,\bar{\w}), \dwm>),
\end{equation}
the last term of which becomes, by \eqref{q:bb0},
\begin{equation*}
(\partial(\dpsm>,\bar{\w}),\dwm>) = (\partial(\dpstm>,\bar{\w}),\dwtm>).
\end{equation*}
For some $\kappa_f \geq \kappa_0$ to be fixed later, we split $\bar{\w} = \wfb{<f} + \wfb{>f}$, where $\wfb{<f} = {\sf P}_{\kappa_f}\bar{\w}$ and $\wfb{>f} = \bar{\w}-\wfb{<f}$.
Then
\begin{equation}\label{q:midpt}
(\partial(\dpstm>,\bar{\w}),\dwtm>) = (\partial(\dpstm>,\wfb{<f}),\dwtm>) + (\partial(\dpstm>,\wfb{>f}),\dwtm>).
\end{equation}
Thus \eqref{q:dwbeta2} becomes
\begin{equation}\begin{aligned}\label{q:dwbeta3}
\frac{1}{2}\ddt{}|\dwm>|^2 &+ \mu|\nabla\dwm>|^2 \\
= &-(\partial(\psi^{\sharp},\dwm<),\dwm>) - (\partial(\dpsm<,\omega),\dwm>) - (\partial(\dpsm>,\tilde{\w}),\dwm>)\\
&- (\partial(\dpstm>,\wfb{<f}),\dwtm>) - (\partial(\dpstm>,\wfb{>f}),\dwtm>).
\end{aligned}\end{equation}
We bound the first two terms on the right hand side (recall $|\cdot|_p^{} := |\cdot|_{L^p}^{}$) by
\begin{align}
|(\partial(\psi^{\sharp},\dwm<),\dwm>)| &\leq |\nabla\psi^{\sharp}|_\infty^{} |\nabla\dwm>|_2^{} |\dwm<|_2^{}\notag\\
&\leq \frac{4}{\mu}|\nabla\psi^{\sharp}|_\infty^2|\dwm<|_2^2 + \frac{\mu}{16}|\nabla\dwm>|_2^2,\label{q:modes1}\\
|(\partial(\dpsm<,\omega),\dwm>)|
&\leq |\nabla\dwm>|_2^{}\,|\nabla\dpsm<|_\infty^{} |\omega|_2^{}\notag\\
&\leq \frac{4}{\mu}|\nabla\dpsm<|_\infty^2|\omega|_2^2 + \frac{\mu}{16}|\nabla\dwm>|_2^2.\label{q:modes2}
\end{align}
We then bound the third term by
\begin{align}
|(\partial(\dpsm>,\tilde{\w}),\dwm>)|
&\le |\nabla\dpsm>|_\infty^{} |\nabla\tilde{\w}|_2^{}|\dwm>|_2^{}\notag\\
&\le c\,|\nabla\dpsm>|_2^{1/2}|\nabla\dwm>|_2^{1/2}|\dwm>|_2^{}|\nabla\tilde{\w}|_2^{} &&\text{by \eqref{q:ag2}}\notag\\
&\le \frac{c}{\kappa}\, |\nabla\dwm>|_2^{}|\dwm>|_2^{}|\nabla\tilde{\w}|_2^{} &&\text{by \eqref{q:arrPoi1}}\notag\\
&\le \frac{\mu}{16}|\nabla\dwm>|^2 + \frac{c}{\mu\kappa^2}|\nabla\tilde{\w}|^2|\dwm>|^2,\label{q:modes3}
\end{align}
and the fourth term by
\begin{align}
|(\partial(\dpsm>,\wfb{<f}),\,&\dwm>)| = |(\partial(\dpsm>,\nabla\wfb{<f}),\nabla\dpsm>)|\notag\\
&\le c\,|\Delta\wfb{<f}|_\infty^{}|\nabla\dpsm>|_2^2\notag\\
&\le c\,\frac{\kappa_0^{1/2}}{\kappa^2}|\Delta\wfb{<f}|^{1/2}|\nabla^3\wfb{<f}|^{1/2}|\dwm>|^2 &&\text{by \eqref{q:arrPoi1} and \eqref{q:ag1}}\notag\\
&\le c\,\frac{(\kappa_0\kappa_f^3)^{1/2}}{\kappa^3}|\nabla\wfb{<f}|\,|\dwm>|\,|\nabla\dwm>| &&\text{by \eqref{q:arrPoi2}}\notag\\
&\le \frac{\mu}{16}|\nabla\dwm>|^2 + \frac{c\,\kappa_0\kappa_f^3}{\mu\kappa^6}|\nabla\omega|^2|\dwm>|^2.\label{q:modes4}
\end{align}
Finally, the last term on the rhs of \eqref{q:dwbeta3} can be bounded as
\begin{align}
|(\partial(\dpsm>,\wfb{>f}),\,&\dwm>)| \leq |\nabla\dpsm>|_2^{}|\wfb{>f}|_{\infty}^{}|\nabla\dwm>|_2^{}\notag\\
&\le c\,\frac{\kappa_0^{1/2}}{\kappa}|\wfb{>f}|^{1/2}|\nabla\wfb{>f}|^{1/2}|\nabla\dwm>|\,|\dwm>| &&\text{by \eqref{q:arrPoi1} and \eqref{q:ag1}}\notag\\
&\le c\,\frac{\kappa_0^{1/2}}{\kappa\kappa_f^{1/2}}|\nabla\wfb{>f}|\,|\nabla\dwm>|\,|\dwm>| &&\text{by \eqref{q:arrPoi1}}\notag\\
&\le \frac\mu{16}|\nabla\dwm>|^2 + \frac{c\kappa_0}{\mu\kappa^2\kappa_f}|\nabla\wfb{>f}|^2|\dwm>|^2.\label{q:modes5}
\end{align}
Putting all these together and applying \eqref{q:arrPoi1} on the lhs gives
\begin{align*}
\ddt{}|\dwm>|^2 &+ |\dwm>|^2\Bigl(\mu\kappa^2 - \frac{c}{\mu\kappa^2}\,|\nabla\tilde{\w}|^2 - \frac{c\kappa_0\kappa_f^3}{\mu\kappa^6}|\nabla\omega|^2 - \frac{c\kappa_0}{\mu\kappa^2\kappa_f}|\nabla\wfb{>f}|^2\Bigr)\\
&\leq \frac{8}{\mu}|\nabla\psi^{\sharp}|_\infty^2|\dwm<|^2 + \frac{8}{\mu}|\nabla\dpsm<|_\infty^2|\omega|^2.
\end{align*}
We aim to apply Lemma \ref{t:gronwall} to this, with $\xi = |\dwm>|^2$, $\alpha$ the large bracket on the lhs and $\beta$ the rhs.
Now the hypothesis of the lemma on $\beta$ is satisfied since $|\dwm<(t)|\to0$ as $t\to\infty$ (and the factors multiplying it are bounded when integrated over unit time intervals), while that on $\xi$ follows from the standard regularity of the NSE.
The hypothesis on $\alpha$ would follow from
\begin{equation}\label{q:col2}
\limsup_{t\to\infty}\int_t^{t+1} \Bigl(\frac1{\mu\kappa^2}|\nabla\tilde{\w}|^2 + \frac{\kappa_0\kappa_f^3}{\mu\kappa^6}|\nabla\omega|^2 + \frac{\kappa_0}{\mu\kappa^2\kappa_f}|\nabla\wfb{>f}|^2\Bigr) \>\mathrm{d}\tau < c\mu\kappa^2,
\end{equation}
which in turn is implied by the conditions
\begin{align}
&\limsup_{t\to\infty}\int_t^{t+1} |\nabla\tilde{\w}|^2 \>\mathrm{d}\tau < c\,\mu^2\kappa^4,\label{q:ci1}\\
&\limsup_{t\to\infty}\int_t^{t+1} |\nabla\omega|^2 \>\mathrm{d}\tau < \frac{c\,\mu^2\kappa^8}{\kappa_0\kappa_f^3},\label{q:ci2}\\
&\limsup_{t\to\infty}\int_t^{t+1} |\nabla\wfb{>f}|^2 \>\mathrm{d}\tau < \frac{c\,\mu^2\kappa^4\kappa_f}{\kappa_0}.\label{q:ci3}
\end{align}
For the first condition, we note that \eqref{q:aw} implies
\begin{align*}
\int_t^{t+1} |\nabla\tilde{\w}|^2 \>\mathrm{d}\tau
\le {\varepsilon M_0}/{\nu_0}
\end{align*}
so \eqref{q:ci1} would follow if
\begin{equation}\label{q:epbd}
\kappa/\kappa_0 > c\,(\varepsilon M_0/\nu_0^3)^{1/4}.
\end{equation}
By \eqref{q:wgm}, the second condition is implied by
\begin{equation}\label{q:bdk-pf1}
c\,\mathcal{G}_0^2\nu_0 < \mu^2\kappa^8/(\kappa_0\kappa_f^3)
\quad\Leftrightarrow\quad
\kappa/\kappa_0 > c\,\nu_0^{-1/8}(\kappa_f/\kappa_0)^{3/8}\,\mathcal{G}_0^{1/4}.
\end{equation}
\medskip
We first consider the case when $\bar{f}$ satisfies \eqref{q:fkpf}.
By \eqref{q:bd-wbfk}, we have
\begin{equation*}
\int_t^{t+1} |\nabla\wfb{>f}|^2 \>\mathrm{d}\tau
\le c\,\varepsilon^2M_0^2/\nu_0^3
\end{equation*}
so in this case \eqref{q:ci3} would hold if
\begin{equation}\label{q:bdk-pf2}
\kappa/\kappa_0 > c\,(\varepsilon M_0)^{1/2}\nu_0^{-5/4}(\kappa_0/\kappa_f)^{1/4}.
\end{equation}
This bound is dominated by \eqref{q:epbd} when
\begin{equation}\label{q:eps-kf}
\varepsilon M_0 \le c\,\nu_0^2(\kappa_f/\kappa_0),
\end{equation}
which we hereby assume.
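Explicitly, the comparison behind \eqref{q:eps-kf} reads
\begin{equation*}
(\varepsilon M_0)^{1/2}\,\nu_0^{-5/4}\,(\kappa_0/\kappa_f)^{1/4}
\le c\,\bigl(\varepsilon M_0/\nu_0^3\bigr)^{1/4}
\quad\Longleftrightarrow\quad
\varepsilon M_0 \le c\,\nu_0^{2}\,(\kappa_f/\kappa_0),
\end{equation*}
obtained by raising both sides to the fourth power and rearranging.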
Combining \eqref{q:epbd}, \eqref{q:bdk-pf1} and \eqref{q:bdk-pf2}, we recover \eqref{q:bdkpf}.
\medskip
Next, we consider the case when $\bar{f}$ satisfies \eqref{q:fhs}.
By \eqref{q:intineq1} and \eqref{q:bd-wbfs}, we have
\begin{equation}\label{q:pwk-pf1}
\int_t^{t+1} |\nabla\wfb{>f}|^2 \>\mathrm{d}\tau
\le c\,\varepsilon^2M_0^2/\nu_0^3 + c\,c_\zeta(s)\,\nu_0\,(\kappa_0/\kappa_f)^{2s+1}\,\mathcal{G}_0^2 =: I_1
\end{equation}
where $1/c_\zeta(s):=(2s+1)\,\zeta(2s+2)$.
So \eqref{q:ci3} would be satisfied if $I_1<c\mu^2\kappa^4(\kappa_f/\kappa_0)$;
analogously to what we did with \eqref{q:col2}, this is in turn implied by the inequalities
\begin{align}
(\kappa/\kappa_0)^4 &> c\,(\varepsilon M_0)^2\nu_0^{-5}(\kappa_0/\kappa_f)\label{q:ci3s1}\\
(\kappa/\kappa_0)^4 &> c\,c_\zeta(s)\,\nu_0^{-1}(\kappa_0/\kappa_f)^{2s+2}\,\mathcal{G}_0^2.\label{q:ci3s2}
\end{align}
Since both \eqref{q:bdk-pf1} and \eqref{q:ci3s2} must be satisfied, we equate these bounds and find
\begin{equation}\label{q:pwk-kpf}
(\kappa_f/\kappa_0)^{2s+7/2} = c\,c_\zeta(s)\nu_0^{-1/2}\mathcal{G}_0,
\end{equation}
which fixes $\kappa_f$ and turns both \eqref{q:bdk-pf1} and \eqref{q:ci3s2} into
\begin{equation}\label{q:pwk-pf4}
\kappa/\kappa_0 > c\,\bigl(c_{\zeta}(s)^{3/2}\,\nu_0^{-(s+5/2)}\,\mathcal{G}_0^{2s+5}\bigr)^{1/(8s+14)}.
\end{equation}
Using \eqref{q:pwk-kpf}, \eqref{q:ci3s1} becomes
\begin{equation}
\kappa/\kappa_0 > c_s\,(\varepsilon\,M_0)^{1/2} \nu_0^{-5/4+1/(16s+28)}\mathcal{G}_0^{-1/(8s+14)}
\end{equation}
with $c_s=c\,c_\zeta(s)^{-1/(8s+14)}$, noting that since $s>5/2$ the exponent for $\mathcal{G}_0$ lies between $-1/34$ and $0$, giving a weak dependence.
This term is dominated by \eqref{q:epbd} when
\begin{equation}\label{q:eps-hs}
\varepsilon\,M_0 \le c\,c_s^{-4}\,\nu_0^{2-1/(4s+7)}\mathcal{G}_0^{2/(4s+7)}.
\end{equation}
Assuming this, \eqref{q:bdhs} follows from \eqref{q:epbd} and \eqref{q:pwk-pf4}.
\medskip
Finally, we consider $\bar{f}$ satisfying \eqref{q:fexa}.
By \eqref{q:intineq1} and \eqref{q:bd-wbfx}, we have
\begin{equation}\label{q:exk-pf1}
\int_t^{t+1} |\nabla\wfb{>f}|^2\>\mathrm{d}\tau
\le c\,(\varepsilon M_0)^2/\nu_0^3 + c\,\nu_0 \,\mathrm{e}^{2\alpha(1-\kappa_f/\kappa_0)}\mathcal{G}_0^2.
\end{equation}
As before, \eqref{q:ci3} would be satisfied if both of the following hold:
\begin{align}
&(\kappa/\kappa_0)^4 > c\,(\varepsilon M_0)^2\nu_0^{-5}(\kappa_0/\kappa_f)\label{q:ci3a1}\\
&(\kappa/\kappa_0)^4 > c\,\nu_0^{-1}\mathrm{e}^{2\alpha(1-\kappa_f/\kappa_0)}(\kappa_0/\kappa_f)\,\mathcal{G}_0^2.\label{q:ci3a2}
\end{align}
Equating the bounds from \eqref{q:bdk-pf1} and \eqref{q:ci3a2}, we arrive at
\begin{equation}
(\kappa_f/\kappa_0)^{5/2}\mathrm{e}^{2\alpha(\kappa_f/\kappa_0-1)} = c_\alpha\,\nu_0^{-1/2}\,\mathcal{G}_0,
\end{equation}
which can be inverted to give
\begin{equation}\label{q:falp}
\kappa_f/\kappa_0 = F_\alpha\bigl(\mathcal{G}_0/\nu_0^{1/2}\bigr)
\quad\text{where}\quad
F_\alpha^{-1}(y)=y^{5/2}\mathrm{e}^{2\alpha (y-1)}/c_\alpha.
\end{equation}
This fixes $\kappa_f$.
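The large-$u$ behaviour of $F_\alpha$ quoted after Theorem~\ref{t:modes} can be read off by taking logarithms in \eqref{q:falp}: with $y=F_\alpha(u)$,
\begin{equation*}
2\alpha(y-1)+\tfrac{5}{2}\log y=\log(c_\alpha u),
\qquad\text{whence}\qquad
F_\alpha(u)=\frac{\log u}{2\alpha}+O(\log\log u)
\quad\text{as } u\to\infty.
\end{equation*}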
Now the bound from \eqref{q:epbd} would dominate that from \eqref{q:ci3a1} when
\begin{equation}\label{q:eps-exa}
\varepsilon M_0 \le c\,\nu_0^2\,(\kappa_f/\kappa_0).
\end{equation}
Assuming this, \eqref{q:bdexa} follows from \eqref{q:epbd} and \eqref{q:bdk-pf1}.
\end{proof}
\section{Proof: Determining Nodes}\label{s:pf-nodes}
In this section, we prove Theorem~\ref{t:nodes}, following the notation and conventions of \S\ref{s:pf-modes}.
We use several crucial inequalities proved in \cite{jones-titi:93}.
Following them, let the points ${\boldsymbol{x}}_1,\cdots,{\boldsymbol{x}}_N$ be placed at regular spacings within our periodic domain $\mathscr{M}=[0,L]\times[-L/2,L/2]$.
Defining
\begin{equation}\label{q:defeta}
\eta(u) := \max_{1\leq i \leq N}|u({\boldsymbol{x}}_i)|
\end{equation}
for all $u \in H^2(\mathscr{M})$ satisfying \eqref{q:v0int},
we have the following bounds:
\begin{align}
|u|_{L^2}^2 &\leq c_\eta \,L^2\eta(u)^2 + c_\eta \frac{L^4}{N^2}|\Delta u|_{L^2}^2,\label{q:jtnodes1}\\
|\nabla u|_{L^2}^2,\ |u|_{L^\infty}^2 &\leq c_\eta\, N \eta(u)^2 + c_\eta \frac{L^2}{N}|\Delta u|_{L^2}^2,\label{q:jtnodes2}
\end{align}
for an absolute constant $c_\eta$.
\begin{proof}[Proof of Theorem \ref{t:nodes}]
We multiply \eqref{q:dwbeta} by $\delta\omega$ in $L^2$ to obtain
\begin{equation*}
(\partial_t\delta\omega,\delta\omega) + (\partial(\psi^{\sharp},\delta\omega),\delta\omega) + (\partial(\delta\psi,\omega),\delta\omega) + \frac{\kappa_0}{\varepsilon}(\partial_x\delta\psi,\delta\omega) = \mu(\Delta\delta\omega,\delta\omega).
\end{equation*}
The second and fourth terms vanish upon integration by parts, giving
\begin{equation}\label{q:nodes0}
\frac{1}{2}\ddt{}|\delta\omega|^2 + \mu|\nabla\delta\omega|^2 = -(\partial(\delta\psi,\omega),\delta\omega).
\end{equation}
We use \eqref{q:bb0} and the splitting $\omega = \bar{\w} + \tilde{\w}$ to write the rhs as
\begin{equation}
-(\partial(\delta\psi,\omega),\delta\omega) = -(\partial(\delta\psi,\tilde{\w}),\delta\omega) - (\partial(\delta\psi,\bar{\w}),\delta\omega).
\end{equation}
As in \eqref{q:midpt}, we split $\bar{\w} = \wfb{<f} + \wfb{>f}$ where $\wfb{<f} = {\sf P}_{\kappa_f}\bar{\w}$ and $\wfb{>f} = \bar{\w}-\wfb{<f}$, for some $\kappa_f \geq \kappa_0$ to be fixed later.
Now \eqref{q:nodes0} becomes
\begin{equation}\label{q:dwnodes}
\frac{1}{2}\ddt{}|\delta\omega|^2 + \mu|\nabla\delta\omega|^2 = -(\partial(\delta\psi,\tilde{\w}),\delta\omega) - (\partial(\delta\psi,\wfb{<f}),\delta\omega) - (\partial(\delta\psi,\wfb{>f}),\delta\omega).
\end{equation}
For $\mathcal{E}$, we pick $N$ equally spaced points $\{{\boldsymbol{x}}_1,\cdots,{\boldsymbol{x}}_N\}$.
We bound the first term on the rhs using \eqref{q:jtnodes2},
\begin{equation}\label{q:nodes1}\begin{aligned}
\bigl|(\partial(\delta\psi,\tilde{\w}),\delta\omega)\bigr|
&\le |\nabla\delta\psi|_\infty^{}|\nabla\tilde{\w}|_2^{}|\delta\omega|_2^{}\\
&\le \frac{c\mu N}{L^2}|\nabla\delta\psi|_\infty^2 + \frac{cL^2}{\mu N}|\nabla\tilde{\w}|^2|\delta\omega|^2\\
&\le \frac{c\mu N}{L^2}\Bigl[N\eta(\nabla\delta\psi)^2 + \frac{L^2}{N}|\nabla\delta\omega|^2\Bigr] + \frac{cL^2}{\mu N}|\nabla\tilde{\w}|^2|\delta\omega|^2.
\end{aligned}\end{equation}
Similarly, we bound the second term on the rhs of \eqref{q:dwnodes} using Young and \eqref{q:jtnodes1} as
\begin{equation}\label{q:nodes2}\begin{aligned}
\bigl|(\partial(\delta\psi,\wfb{<f}),\,&\delta\omega)\bigr|
\le |\nabla\wfb{<f}|_\infty^{}|\nabla\delta\psi|_2^{}|\delta\omega|_2^{}\\
&\le c(\kappa_0\kappa_f)^{1/2}|\nabla\omega|\,|\nabla\delta\psi|\,|\delta\omega|\\
&\le \frac{c\mu N^2}{L^4}|\nabla\delta\psi|^2 + \frac{cL^4}{\mu N^2}\kappa_0\kappa_f|\nabla\omega|^2|\delta\omega|^2\\
&\le \frac{c\mu N^2}{L^4}\Bigl[L^2\eta(\nabla\delta\psi)^2 + \frac{L^4}{N^2}|\nabla\delta\omega|^2\Bigr] + \frac{cL^4}{\mu N^2}\kappa_0\kappa_f|\nabla\omega|^2|\delta\omega|^2.
\end{aligned}\end{equation}
The final term in \eqref{q:dwnodes} we bound as
\begin{equation}\label{q:nodes3}\begin{aligned}
\bigl|(\partial(\delta\psi,\wfb{>f}),\,&\delta\omega)\bigr|
\le |\nabla\delta\psi|_\infty^{}|\nabla\wfb{>f}|_2^{}|\delta\omega|_2^{}\\
&\le \frac{c\mu N}{L^2}|\nabla\delta\psi|_{\infty}^2 + \frac{cL^2}{\mu N}|\nabla\wfb{>f}|^2|\delta\omega|^2\\
&\le \frac{c\mu N}{L^2}\Bigl[N\eta(\nabla\delta\psi)^2 + \frac{L^2}{N}|\nabla\delta\omega|^2\Bigr] + \frac{cL^2}{\mu N}|\nabla\wfb{>f}|^2|\delta\omega|^2.
\end{aligned}\end{equation}
Applying \eqref{q:jtnodes2} to the lhs of \eqref{q:dwnodes} as
\begin{equation}
\frac12\ddt{\;}|\delta\omega|^2 + \frac{c\mu N}{L^2}|\delta\omega|^2 - \frac{c\mu N^2}{L^2}\eta(\nabla\delta\psi)^2
\le \frac12\ddt{\;}|\delta\omega|^2 + \mu|\nabla\delta\omega|^2,
\end{equation}
and putting together \eqref{q:nodes1}--\eqref{q:nodes3} gives, after some rearrangement,
\begin{equation}\label{q:ncol0}\begin{aligned}
\ddt{\;}|\delta\omega|^2 &+ |\delta\omega|^2\Bigl[\frac{c\mu N}{L^2} - \frac{cL^2}{\mu N}|\nabla\tilde{\w}|^2 - \frac{cL^4}{\mu N^2}\kappa_0\kappa_f|\nabla\omega|^2 - \frac{cL^2}{\mu N}|\nabla\wfb{>f}|^2\Bigr]\\
&\le \frac{c\mu N^2}{L^2}\eta(\nabla\delta\psi)^2.
\end{aligned}\end{equation}
As in the proof of Theorem \ref{t:modes}, we aim to apply Lemma \ref{t:gronwall} to $\xi = |\delta\omega|^2$, $\alpha$ being the large bracket on the lhs and $\beta$ the rhs of \eqref{q:ncol0}.
The hypothesis of the lemma on $\beta$ is met because $\nabla\delta\psi({\boldsymbol{x}}_i,t) \to 0$ as $t \to \infty$ for all $i$ and $|\nabla\omega|$ is bounded, while the hypothesis on $\xi$ follows from the regularity of the NSE.
Noting that $\nu_0=c\mu/L^2$, the hypothesis on $\alpha$ would follow from
\begin{equation}\label{q:ni}
\limsup_{t\to\infty}\int_t^{t+1} \Bigl( \frac{c}{\nu_0 N}|\nabla\tilde{\w}|^2 + \frac{c}{\nu_0 N^2}\frac{\kappa_f}{\kappa_0}|\nabla\omega|^2 + \frac{c}{\nu_0N}|\nabla\wfb{>f}|^2 \Bigr) \>\mathrm{d}\tau
< \nu_0 N.
\end{equation}
Without loss of generality, we may require that this inequality be satisfied by each term separately (adjusting the constant $c$ as usual).
For the first term, we note that \eqref{q:aw} implies
\begin{equation}
\int_t^{t+1} |\nabla\tilde{\w}|^2 \>\mathrm{d}\tau \leq \varepsilon M_0/\nu_0,
\end{equation}
so \eqref{q:ni} for $|\nabla\tilde{\w}|^2$ would be satisfied for
\begin{equation}\label{q:nepbd}
N^2 > c\,\varepsilon M_0/\nu_0^3.
\end{equation}
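Explicitly, substituting this bound into the first term of \eqref{q:ni} gives the sufficient condition
\begin{equation*}
\frac{c}{\nu_0 N}\,\frac{\varepsilon M_0}{\nu_0} < \nu_0 N
\qquad\Longleftrightarrow\qquad
N^2 > \frac{c\,\varepsilon M_0}{\nu_0^3},
\end{equation*}
which is precisely \eqref{q:nepbd}.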
For the second term, we have by \eqref{q:wgm}
\begin{equation}
\int_t^{t+1} |\nabla\omega|^2 \>\mathrm{d}\tau \le c\,\nu_0\mathcal{G}_0^2,
\end{equation}
so the $|\nabla\omega|$ part of \eqref{q:ni} is implied by
\begin{equation}\label{q:n<fbd}
N > \frac{c}{\nu_0^{1/3}} \Bigl(\frac{\kappa_f}{\kappa_0}\Bigr)^{1/3}\mathcal{G}_0^{2/3}.
\end{equation}
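Explicitly, the contribution of this term to the integral in \eqref{q:ni} is bounded by $c\,(\kappa_f/\kappa_0)\,\mathcal{G}_0^2/N^2$, which lies below $\nu_0 N$ provided
\begin{equation*}
N^3 > \frac{c}{\nu_0}\,\frac{\kappa_f}{\kappa_0}\,\mathcal{G}_0^2,
\end{equation*}
that is, provided \eqref{q:n<fbd} holds.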
For the inequality involving $|\nabla\wfb{>f}|^2$, we need to handle the cases separately.
\medskip
We consider first when $\bar{f}$ satisfies \eqref{q:fkpf}.
By \eqref{q:bd-wbfk},
\begin{equation}
\int_t^{t+1} |\nabla\wfb{>f}|^2 \>\mathrm{d}\tau
\le c\,\varepsilon^2 M_0^2/\nu_0^3,
\end{equation}
so the $|\nabla\wfb{>f}|$ part of \eqref{q:ni} holds if
\begin{equation}\label{q:nbdk-pf1}
N > c\,\varepsilon M_0/\nu_0^{5/2}.
\end{equation}
Since $\nu_0\le1$, this bound is dominated by \eqref{q:nepbd} when $\varepsilon M_0\le c\,\nu_0^2$.
Assuming this, \eqref{q:nkpf} follows from \eqref{q:nepbd} and \eqref{q:n<fbd}.
\medskip
For $\bar{f}$ instead satisfying \eqref{q:fhs}, we recall from \eqref{q:pwk-pf1} that
\begin{equation}\label{q:pwk-ni3}
\int_t^{t+1} |\nabla\wfb{>f}|^2 \>\mathrm{d}\tau
\le c\,\varepsilon^2M_0^2/\nu_0^3 + c\,c_{\zeta}(s)\,\nu_0\,(\kappa_0/\kappa_f)^{2s+1}\,\mathcal{G}_0^2 = I_1,
\end{equation}
where $1/c_{\zeta}(s) = (2s+1)\,\zeta(2s+2)$.
Therefore, the $|\nabla\wfb{>f}|^2$ part of \eqref{q:ni} would be satisfied if $I_1 \leq c\,\nu_0^2 N^2$;
analogously to \eqref{q:pwk-pf1}, this in turn is implied by
\begin{equation}\label{q:ni3s1}
N^2 > c\,(\varepsilon M_0)^2\nu_0^{-5},
\end{equation}
\begin{equation}\label{q:ni3s2}
N^2 > c\,c_\zeta(s)\,\nu_0^{-1}(\kappa_0/\kappa_f)^{2s+1}\,\mathcal{G}_0^2.
\end{equation}
Since \eqref{q:n<fbd} and \eqref{q:ni3s2} must both hold, we equate these bounds to fix $\kappa_f$:
\begin{equation}\label{q:pwk-nkpf}
(\kappa_f/\kappa_0)^{2s+5/3} = c\,c_\zeta(s)\,\nu_0^{-1}\mathcal{G}_0^{2/3},
\end{equation}
with which both \eqref{q:n<fbd} and \eqref{q:ni3s2} now read
\begin{equation}\label{q:pwk-ni2s2}
N > c\,(c_\zeta(s)\nu_0^{-1}\,\mathcal{G}_0^{4s+5})^{1/(6s+5)}.
\end{equation}
As with the case when $\bar{f}$ satisfies \eqref{q:fkpf}, \eqref{q:nhs} follows from \eqref{q:nepbd} and \eqref{q:pwk-ni2s2} when $\varepsilon M_0 \le c\,\nu_0^2$.
Finally we consider $\bar{f}$ satisfying \eqref{q:fexa}.
By \eqref{q:intineq1} and \eqref{q:bd-wbfx},
\begin{align*}
\int_t^{t+1} |\nabla\wfb{>f}|^2 \>\mathrm{d}\tau
&\leq c\,(\varepsilon M_0)^2/\nu_0^3 + c\,\nu_0\,\mathrm{e}^{2\alpha(1-\kappa_f/\kappa_0)}\,\mathcal{G}_0^2.
\end{align*}
As before, the $|\nabla\wfb{>f}|^2$ part of \eqref{q:ni} is satisfied when both of the following hold:
\begin{equation}\label{q:ni3a1}
N^2 > c\,(\varepsilon M_0)^2\nu_0^{-5},
\end{equation}
\begin{equation}\label{q:ni3a2}
N^2 > c\,\nu_0^{-1}\mathrm{e}^{2\alpha(1-\kappa_f/\kappa_0)}\,\mathcal{G}_0^2.
\end{equation}
We equate the rhs of \eqref{q:n<fbd} and \eqref{q:ni3a2} to obtain
\begin{equation*}
(\kappa_f/\kappa_0)^{2/3}\,\mathrm{e}^{2\alpha(\kappa_f/\kappa_0-1)} = c_\alpha\,\nu_0^{-1}\,\mathcal{G}_0^{2/3},
\end{equation*}
which we invert to find
\begin{equation}\label{q:nfalp}
\kappa_f/\kappa_0 = F_{\alpha}(\nu_0^{-1}\mathcal{G}_0^{2/3})
\end{equation}
where $F_{\alpha}^{-1}(y) := y^{2/3}\mathrm{e}^{2\alpha(y-1)}/c_\alpha$, slightly abusing the notation of \eqref{q:falp} (the constant $c_\alpha$ may differ).
As before, assuming $\varepsilon M_0 \le c\,\nu_0^2$, \eqref{q:nexa} follows from \eqref{q:nepbd} and \eqref{q:n<fbd}.
This concludes the proof.
\end{proof}
\nocite{temam:iddsmp}
\nocite{vallis:aofd}
\nocite{rhines:75}
\nocite{tgs:87}
\nocite{schochet:94}
\nocite{gallagher-straymond:07}
\section{Introduction}
It is known that roughly half of Sun-like stars exist in multiples and about a third in binaries \citep{heintz69,duquennoy91,raghavan10,tokovinin14}. It is also known that extra-solar planets are highly abundant, with most stars hosting at least one planet \citep{howard10,mayor11,petigura13}. The next step is to connect the two concepts and pose the question of planets in binaries. Such planets are often thought of as exotic examples of nature's diversity. However, considering the ubiquity of both planets and binaries throughout the Galaxy, the question of their coupled existence is in fact natural.
We first cover a few important aspects of stellar multiplicity and the configurations, stability and dynamics of planets in binaries. The rest of this chapter is devoted to analysing the observed populations of planets in binaries. Some of the implications for planet formation are also discussed.
\subsection{Stellar multiplicity}\label{subsec:stellar_multiplicity}
The seminal work of \citet{raghavan10} draws upon binary and higher-order multi-star systems discovered with a variety of techniques. Two of the most important results are the multiplicity rate of stars and the separation distribution for binaries. These two results are shown in Fig.~\ref{fig:stellar_multiplicity}. For the FGK stars typically targeted by exoplanet surveys, $\sim$40-50\% have additional companions. The multiplicity fraction is higher for more massive stars and lower for less massive ones. For binary stars the distribution of separations is reasonably well fitted by a log-normal function with a mean period of 293 years. In terms of semi-major axis, this corresponds to roughly 50 AU for a mass sum $M_{\rm A}+M_{\rm B}=1.5M_{\odot}$. This distribution of separations is calculated using primaries of all masses. When split into different primary spectral types, the semi-major axis distribution grows wider with increasing primary mass.
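As a quick consistency check (a sketch using only the values quoted above), Kepler's third law in solar units, $a^3 = (M_{\rm A}+M_{\rm B})\,T^2$, converts the mean of the log-normal period distribution into the quoted $\sim$50 AU:

```python
# Mean of the log-normal binary period distribution: log10(T/days) = 5.03.
T_days = 10**5.03
T_yr = T_days / 365.25            # ~293 yr, as quoted in the text
M_tot = 1.5                       # adopted binary mass sum [M_sun]
a_AU = (M_tot * T_yr**2)**(1/3)   # Kepler's third law (AU, yr, M_sun)
print(round(T_yr), round(a_AU, 1))   # 293 50.5
```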
\begin{figure}
\captionsetup[subfigure]{labelformat=empty}
\begin{center}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{binary_multiplicity.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{binary_separation_monotone.pdf}
\end{subfigure}
\caption{Left: stellar companion percentage as a function of spectral type. Right: period distribution of observed binaries, with a semi-major axis distribution calculated assuming a mass sum of $1.5M_{\odot}$, which is the average observed value. The dashed line is a fitted log-normal distribution with a mean of $\log T_{\rm bin}=5.03$ and a standard deviation of $\sigma_{\log T_{\rm bin}}=2.28$. Both figures are adapted from \citet{raghavan10}, with the data taken from sources listed in that paper.}
\label{fig:stellar_multiplicity}
\end{center}
\end{figure}
\subsection{Orbital configurations}\label{subsec:configurations}
There are two types of orbits in which planets have been discovered in binary star systems. First, the planet may have a wider orbit than the binary ($a_{\rm p}>a_{\rm bin}$) and orbit the barycentre of the inner binary. This is known as a circumbinary or ``p-type'' planet. Alternatively, the planet may have a smaller orbit than the binary ($a_{\rm p} < a_{\rm bin}$) and orbit only one component. This is known generally as a circumstellar or ``s-type'' planet, or more specifically as a circumprimary or circumsecondary planet, depending on which star is orbited\footnote{These terms were first coined in \citet{dvorak86} and stand for ``planet-type'' and ``satellite-type''.}. These configurations are illustrated in Fig.~\ref{fig:binary_orbits}. Other, more exotic orbits in binaries have been considered, such as trojan planets near L4 and L5 \citep{dvorak86,schwarz15} and halo orbits near L1, L2 and L3 \citep{howell83}, but no such planets have yet been discovered.
\begin{figure}
\includegraphics[width=0.9\textwidth]{Binary_Orbits.pdf}
\caption{Left: circumstellar ``s-type'' planets in binaries, individually around both primary and secondary stars. Right: circumbinary ``p-type'' orbits in binaries collectively around both stars.}
\label{fig:binary_orbits}
\end{figure}
\subsection{Orbital stability}\label{subsec:stability}
There is a limit to where a planet may have a stable orbit in a binary star system. This has a profound effect on the observed populations, by carving away unstable regions of the parameter space. Much of the work to derive three-body stability limits was undertaken even before planets were discovered in binaries \citep{ziglin75,black82,dvorak86,eggleton95,holman99,mardling01,lohinger03,mudryk06,doolin11}. The classic method has been to run numerical $N$-body simulations over a parameter space and determine regular and chaotic domains. The often-quoted work of \citet{holman99} used this method to derive empirical stability limits for both circumbinary and circumstellar planets.
Circumbinary planets have stable orbits beyond $a_{\rm crit}$,
\begin{align}
\begin{split}
\label{eq:circumbinary_stability_HW}
\frac{a_{\rm crit}}{a_{\rm bin}} &= 1.60 + 5.10e_{\rm bin} - 2.22e_{\rm bin}^2 + 4.12\mu_{\rm bin} - 4.27 e_{\rm bin}\mu_{\rm bin} - 5.09 \mu_{\rm bin}^2 + 4.61e_{\rm bin}^2\mu_{\rm bin}^2,
\end{split}
\end{align}
where $a_{\rm bin}$ is the semi-major axis of the binary, $e_{\rm bin}$ is the eccentricity of the binary and $\mu_{\rm bin} = M_{\rm B}/(M_{\rm A}+M_{\rm B})$ is the mass ratio of the binary. This does not account for eccentric or misaligned planets or for resonances, which can create islands of both stability and instability \citep{doolin11}. For circumstellar orbits the widest stable planetary orbit $a_{\rm crit}$ is
\begin{align}
\begin{split}
\label{eq:circumstellar_stability_HW}
\frac{a_{\rm crit}}{a_{\rm bin}} &= 0.464 - 0.380\mu_{\rm bin} -0.631e_{\rm bin} + 0.586 e_{\rm bin}\mu_{\rm bin} + 0.150 e_{\rm bin}^2 -0.198\mu_{\rm bin}e_{\rm bin}^2.
\end{split}
\end{align}
For details, including error bars on the coefficients of Eqs.~\ref{eq:circumbinary_stability_HW} and~\ref{eq:circumstellar_stability_HW}, see \citet{holman99}.
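As a minimal numerical illustration (a sketch; the coefficients are those of \citet{holman99}, and the binary parameters are the representative values adopted in Fig.~\ref{fig:all_planets}, i.e. a $1+0.5M_{\odot}$ pair):

```python
# Empirical three-body stability fits of Holman & Wiegert (1999).
def a_crit_circumbinary(e_bin, mu_bin):
    """Innermost stable circumbinary orbit, in units of a_bin."""
    return (1.60 + 5.10*e_bin - 2.22*e_bin**2 + 4.12*mu_bin
            - 4.27*e_bin*mu_bin - 5.09*mu_bin**2
            + 4.61*e_bin**2*mu_bin**2)

def a_crit_circumstellar(e_bin, mu_bin):
    """Outermost stable circumstellar orbit, in units of a_bin."""
    return (0.464 - 0.380*mu_bin - 0.631*e_bin
            + 0.586*e_bin*mu_bin + 0.150*e_bin**2
            - 0.198*mu_bin*e_bin**2)

mu = 0.5 / (1.0 + 0.5)   # M_B/(M_A+M_B) for a 1 + 0.5 M_sun binary
print(round(a_crit_circumbinary(0.15, mu), 2))   # ~2.9 a_bin
print(round(a_crit_circumstellar(0.5, mu), 2))   # ~0.14 a_bin
```

The two numbers show how little circumstellar room remains in an eccentric binary, while circumbinary orbits must stay roughly three binary separations out.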
\subsection{Kozai-Lidov cycles}\label{subsec:kozai_lidov}
For circumstellar planets in binaries one must consider the Kozai-Lidov effect, which is named after the pioneering work of \citet{lidov61,lidov62,kozai62}. If the planet and binary orbits are misaligned between $39^{\circ}$ and $141^{\circ}$ then there is a secular oscillation of both the planet's eccentricity, $e_{\rm p}$, and its inclination with respect to the binary, $I_{\rm p}$. An example is shown in Fig.~\ref{fig:kozai_lidov_example}. An initially circular circumstellar planet obtains a maximum eccentricity of
\begin{equation}
e_{\rm p,max} = \sqrt{1-\frac{5}{3}\cos^2I_{\rm p,0}},
\end{equation}
where $I_{\rm p,0}$ corresponds to the planet's inclination at $e_{\rm p}=0$. This is derived to quadrupole order, under the assumption that the outer orbit (here the binary) carries the vast majority of the angular momentum. The outer eccentricity and inclination do not change. More general equations that can be applied to any inner and outer angular momenta were derived in \citet{lidov76,naoz13,liu15}. For {\it circumbinary} planets, where the outer angular momentum is typically negligible, the Kozai-Lidov effect practically disappears \citep{migaszewski11,martin16}.
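As an illustration of the quadrupole-order formula (the $60^{\circ}$ value matches the initial inclination used in Fig.~\ref{fig:kozai_lidov_example}):

```python
from math import sqrt, cos, radians

def e_p_max(I_p0_deg):
    """Maximum eccentricity of an initially circular circumstellar planet.

    Only meaningful for mutual inclinations between ~39 and ~141 degrees,
    where Kozai-Lidov cycles operate (outside this range the argument of
    the square root is negative)."""
    return sqrt(1.0 - (5.0/3.0)*cos(radians(I_p0_deg))**2)

print(round(e_p_max(60.0), 3))   # ~0.764
```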
\begin{figure}
\captionsetup[subfigure]{labelformat=empty}
\begin{center}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{kozai_lidov_ecc.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{kozai_lidov_incl.pdf}
\end{subfigure}
\caption{Example of Kozai-Lidov cycles for a 0.5 AU circumprimary planet in a 5 AU binary, showing the variation of $e_{\rm p}$ (left) and $I_{\rm p}$ (right). The planet is initially inclined by $60^{\circ}$ with respect to the binary's orbital plane.}
\label{fig:kozai_lidov_example}
\end{center}
\end{figure}
\section{Discoveries and analysis}\label{sec:observations}
\begin{figure}
\includegraphics[width=0.99\textwidth]{all_planets_in_binaries_revision.pdf}
\caption{Planets in multi-star systems. Circumbinary planets are denoted by diamonds and circumstellar planets by squares. The different colours indicate the discovery technique for the planet, not the binary. The circumbinary and circumstellar stability limits are calculated using Eqs.~\ref{eq:circumbinary_stability_HW} and~\ref{eq:circumstellar_stability_HW}, respectively, with $M_{\rm A}=1M_{\odot}$, $M_{\rm B}=0.5M_{\odot}$. For circumbinaries $e_{\rm bin}=0.15$ (mean for transiting discoveries) and for circumstellar planets $e_{\rm bin}=0.5$ (representative of wider binaries, \citealt{tokovinin16}).}
\label{fig:all_planets}
\end{figure}
Despite thousands of exoplanet discoveries to date, only a small fraction are known to exist in multi-star systems. This may seem surprising given the frequency of binary stars, but there have been historical biases and observing strategies against finding planets in such systems \citep{eggenbergerudry07,wright12}.
A catalog of planets in binaries and multi-star systems is maintained by Richard Schwarz (\citealt{schwarz16}, \url{http://www.univie.ac.at/adg/schwarz/multiple.html}). As of May 2017 it lists 113 planets in 80 binaries and an additional 33 planets in 24 triple and higher order stellar systems. A comparison between binary and higher-order stellar systems is beyond the scope of this chapter, although we note that the first planet found in a multi-star system was found in a triple (16 Cyg, \citealt{cochran97}). The closest exoplanet known also exists in a triple (Proxima Cen, \citealt{angladaescude16}). We only know of two planets which exist in a circumbinary configuration but also have outer stellar companions, and hence possess both p-type and s-type orbits (PH-1/Kepler-64, \citealt{schwamb13,kostov13} and HW Virginis, \citealt{lee09}).
In Fig.~\ref{fig:all_planets} is a plot of the planet and binary semi-major axes for all systems in the Schwarz catalog with these values recorded. For planets in multi-star systems $a_{\rm bin}$ is the separation to the closest stellar companion to the host star. In triple and higher-order systems the closest stellar companion may itself be a binary. HW Virginis and PH-1/Kepler-64 are plotted as circumbinary systems.
This figure demonstrates that the circumbinary and circumstellar planets are naturally separated by the two stability limits, with roughly eight of each type near the respective stability boundary. According to the plot, two circumstellar planets are seemingly outside of the stable parameter space: OGLE-2008-BLG-092L (black square, \citealt{poleski14}) and HD 131399 (purple square, \citealt{wagner16}). However, in both cases the orbit may be stable for binary eccentricities less than the value of 0.5 used to demarcate the stability limit in Fig.~\ref{fig:all_planets}. Furthermore, \citet{nielsen17} present evidence that the planet in HD 131399 may in fact be a false-positive background star.
The circumstellar discoveries are so far more numerous than the circumbinary ones, at a ratio of roughly 5:1. However, since circumbinary discoveries are in their infancy, this ratio is not yet meaningful. Because the two populations are seemingly distinct, we treat them in separate sections.
\subsection{Circumbinary planets}\label{subsec:circumbinary}
There have been many attempts with different techniques to find circumbinary planets. A general review of circumbinary detection methods is provided in the chapter by Doyle \& Deeg. In Fig.~\ref{fig:all_planets} we see that two techniques have dominated the circumbinary landscape: transits and eclipse timing variations (ETVs). Welsh \& Orosz review the {\it Kepler} mission's search for transiting systems. The chapter by Marsh covers the proposed discoveries of planets around post-common envelope binaries uncovered by ETVs, although this technique is also applicable to main sequence binaries \citep{schwarz11}. The few remaining circumbinary discoveries have come from pulsar timing, microlensing and imaging. Three of the imaged circumbinary planets (SR 12 AB c, Ross 458 c and ROXs 42b) are not displayed in Fig.~\ref{fig:all_planets} because they lack a value for $a_{\rm bin}$ in the Schwarz catalog (see \citealt{kraus14,bowler14} for more details on their characterisation).
The method of radial velocities (RVs), which has been highly productive for planets around single stars, is yet to yield a bona fide circumbinary planet. This is despite concerted efforts over the years (e.g. TATOOINE, \citealt{konacki05,konacki10}). A potential circumbinary planet in HD 202206 was proposed by \citet{correia05}, but later astrometry characterised it as a circumbinary brown dwarf \citep{fritz17}. Astrometry with {\it GAIA} has the potential to find massive new circumbinary planets at moderate separations (a few AU) and also to confirm or refute some of the ETV candidates \citep{sahlmann15}.
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{Radius_vs_period-eps-converted-to.pdf}
\caption{Period (in days) vs radius (in Earth radii) for the {\it Kepler} objects of interest around all stars (green circles), transiting circumbinary planets (CBPs, blue squares) and the five innermost Solar System planets (SS, red diamonds).}
\label{fig:circumbinary_radius_vs_period}
\end{center}
\end{figure*}
\subsubsection{Observed trends}\label{subsubsec:circumbinary_trends}
When analysing the trends of circumbinary planets we largely stick to results of the {\it Kepler} transit survey. This is because it is the only sample that is both large enough for preliminary population studies and contains reliable discoveries, in contrast to the proposed ETV planets, which carry many caveats. Also, by limiting ourselves to a single observing technique, only a single set of observing biases needs to be accounted for.
The smallest circumbinary planet discovered to date is $3R_{\oplus}$; the rest are all larger than Neptune. They also have periods between 49 and 1108 days, which span those in the inner Solar System and are considered long for transit surveys. This is evident in Fig.~\ref{fig:circumbinary_radius_vs_period} where the circumbinary planets populate the top right of the parameter space. Finding circumbinary planets at long periods is aided by a transit probability which, compared with that around single stars, is both higher and has a shallower dependence on orbital period \citep{schneider90,schneider94,martin15,li16,martin17}.
There are evidently two stark holes in the circumbinary population: small planets and short-period planets. The shallow transit depths of small planets lower the detection efficiency; however, Fig.~\ref{fig:circumbinary_radius_vs_period} demonstrates that discoveries of small planets around single stars have been plentiful. Furthermore, studies of single stars such as \citet{petigura13} have demonstrated that small super-Earth and Earth-sized exoplanets are much more frequent than larger planets. The discovery of small circumbinary planets must however overcome an additional challenge: a unique transit timing signature.
For planets around single stars one may phase-fold the data on a certain period to stack transits and build statistical significance. For circumbinary planets, the barycentric motion of the binary and the variation of the planetary orbit result in transit timing variations on the order of $(T_{\rm p}T_{\rm bin}^2)^{1/3}/(2\pi)$ \citep{agol05,armstrong13}. This may be on the order of days, and hence significantly longer than the transit duration, which inhibits the effectiveness of phase-folding for circumbinaries. All of the discoveries to date were made by eye, which is only effective when each individual transit is highly significant, as is the case for giant planets.
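For a rough sense of scale (a sketch with hypothetical periods, not a specific system), the timing amplitude above can be evaluated directly:

```python
from math import pi

def ttv_amplitude_days(T_p_days, T_bin_days):
    """Order-of-magnitude circumbinary transit-timing amplitude,
    (T_p * T_bin^2)^(1/3) / (2*pi), in days."""
    return (T_p_days * T_bin_days**2)**(1.0/3.0) / (2.0*pi)

print(round(ttv_amplitude_days(300.0, 10.0), 1))   # ~4.9 days
```

A shift of several days dwarfs a typical transit duration of hours, which is why phase-folding fails for circumbinary transits.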
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\textwidth]{EB_histogram-eps-converted-to.pdf}
\caption{Histogram of the 2862 {\it Kepler} eclipsing binaries in black (\url{http://keplerebs.villanova.edu/} and first outlined in \citealt{prsa11}). The median at 2.8 days is denoted by a vertical blue solid line. The periods of binaries known to host transiting circumbinary planets are shown as vertical red dashed lines.}
\label{fig:circumbinary_EB_histogram}
\end{center}
\end{figure*}
Two factors contribute to the lack of planets with periods shorter than $\sim50$ days. First, the stability limit (Eq.~\ref{eq:circumbinary_stability_HW}) prevents planets from orbiting with $a_{\rm p} \lesssim 2.5 a_{\rm bin}$, and hence $T_{\rm p}\lesssim 4 T_{\rm bin}$. Second, there is an apparent paucity of circumbinary planets orbiting the tightest eclipsing binaries ($T_{\rm bin} < 7$ days). This is shown in Fig.~\ref{fig:circumbinary_EB_histogram}, where the histogram of the {\it Kepler} eclipsing binaries has a median of 2.8 days. If planets were distributed irrespective of binary period, at least twice as many should have been discovered \citep{martin14,armstrong14}. Such tight binaries are not believed to form in situ, but rather at wider separations, subsequently shrinking through high-eccentricity Kozai-Lidov cycles induced by a misaligned third star, combined with tidal friction \citep{harrington68,mazeh79,eggleton01,tokovinin06,fabrycky07,naoz14,moe18}. This formation pathway for very tight binaries has been used to explain the dearth of observed planets around them \citep{munoz15,martinetal15,hamers16,xu16}. Most planets sandwiched in such an evolving, misaligned triple system either fail to form, become unstable during the shrinking process, or actually inhibit the binary shrinkage. Furthermore, the rare surviving planets are expected to have small masses and long-period, misaligned orbits, and are hence harder to discover.
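Quantitatively, since a circumbinary planet and the binary orbit essentially the same total mass, Kepler's third law turns the spatial stability bound into the quoted period bound:
\begin{equation*}
\frac{T_{\rm p}}{T_{\rm bin}} = \Bigl(\frac{a_{\rm p}}{a_{\rm bin}}\Bigr)^{3/2} \gtrsim 2.5^{3/2} \approx 4.
\end{equation*}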
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{stability_limit-eps-converted-to.pdf}
\caption{Circumbinary periapse distance, $a_{\rm p}(1-e_{\rm p})$, as a function of the critical distance ($a_{\rm crit}$ in Eq.~\ref{eq:circumbinary_stability_HW}) for the {\it Kepler} transiting circumbinary planets. Planetary orbits in the red zone are unstable. The multi-planet Kepler-47 system is drawn with grey circles, and the blue squares correspond to the other, single planet systems. The right plot is zoomed near the stability limit, which excludes Kepler-1647. Kepler numbers are labelled.}
\label{fig:circumbinary_stability_limit}
\end{center}
\end{figure*}
With respect to the stability limit imposed by the dynamical influence of the binary, the circumbinary planets have generally been found as close as possible. This is demonstrated in Fig.~\ref{fig:circumbinary_stability_limit}. The planet periapse distance is plotted as an ad hoc means of including the planet eccentricity, which Eq.~\ref{eq:circumbinary_stability_HW} does not account for, although most of the known circumbinary planets have small eccentricities $e_{\rm p}<0.1$. See \citet{mardling01} for further details on the effect of the outer eccentricity. Kepler-47 is the only multi-planet system, with the innermost planet right next to the stability limit, following the trend. It would be impossible for the outer two planets in Kepler-47 to also be close to the stability limit.
\citet{welsh14} attributed this observed pile-up to either a true preference for circumbinary planets to exist as close as possible to the stability limit, or an observing bias. \citet{martin14} simulated the {\it Kepler} circumbinary population and could not reproduce the observed pile-up of planets with observing biases alone. The most recent Bayesian analysis of \citet{li16}, including the recently-discovered Kepler-1647 (top blue square in Fig.~\ref{fig:circumbinary_stability_limit} left), showed that there was evidence for a pile-up if this very long-period planet was an outlier of the planet period distribution. If it was instead drawn from the same distribution as all of the others then the statistical significance of the pile-up was reduced. We note that the single planet discovered by microlensing (OGLE-2007-BLG-349L, \citealt{bennett16}) has $a_{\rm p}/a_{\rm bin}\sim 40$, far from the stability limit. The borderline RV discovery of HD 202206 \citep{correia05,fritz17} however has $a_{\rm p}/a_{\rm bin}\sim 2.3$, near the stability limit and the 5:1 resonance. Overall, more discoveries are needed, using different observing techniques with different biases.
It is seemingly difficult to form circumbinary planets in situ so close to the binary, owing to a hostile disc environment \citep{paardekooper12,lines14}. The favoured theory is an inwards migration of the planets followed by a parking near the stability limit \citep{pierens13,kley14}, where the disc is expected to have been truncated \citep{artymowicz94}.
\subsubsection{The occurrence rate of circumbinary planets}\label{subsubsec:abundance}
\begin{figure}
\captionsetup[subfigure]{labelformat=empty}
\begin{center}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{armstrong_abundance_size.pdf}
\label{fig:armstrong_abundance_size}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{armstrong_abundance_incl.pdf}
\label{fig:armstrong_abundance_incl}
\end{subfigure}
\caption{Occurrence rate of circumbinary planets orbiting within 10.2 times the binary orbital period. Left: planets drawn from a Gaussian distribution of mutual inclinations with a $5^{\circ}$ standard deviation, as a function of the planet radius. Right: planets between 4 and 10 $R_{\oplus}$ as a function of the standard deviation of the Gaussian mutual inclination distribution. In all cases the Gaussian distribution is convolved with an isotropic distribution (i.e. uniform in $\cos I_{\rm p}$). See \citet{armstrong14} for more details.}
\label{fig:armstrong_abundance}
\end{center}
\end{figure}
The first estimate of the circumbinary occurrence rate was made in the discovery paper of Kepler-34 and -35 by \citet{welsh12}. They used a simple geometric approach with static orbits to calculate that, for each of the observed transiting planets Kepler-16, -34 and -35, another 5, 9 and 7 similar planets, respectively, should exist that did not transit. Based on the 750 eclipsing binaries being analysed, they estimated the circumbinary frequency as $(5+9+7)/750 = 2.8\%$. This was expected to be an underestimate given that the search was not exhaustive at that point.
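The arithmetic of this geometric de-biasing is simple (numbers as quoted above):

```python
# For each transiting detection, the estimated number of similar
# non-transiting planets (Kepler-16, -34, -35 respectively).
implied_nontransiting = [5, 9, 7]
n_binaries_searched = 750
frequency = sum(implied_nontransiting) / n_binaries_searched
print(f"{frequency:.1%}")   # 2.8%
```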
The studies of \citet{martin14} and \citet{armstrong14} calculated the frequency of circumbinary planets as a function of the underlying distribution of the alignment between binary and planetary orbits. All of the systems discovered so far are flat to within $\sim 4^{\circ}$. This is similar to the Solar System and to multi-exoplanet systems around single stars \citep{fabrycky14}. However, the detection efficiency of misaligned circumbinary planets is reduced; whilst such planets may still cross the binary orbit, they will often miss transits, creating a sparse transit signature that is hard to identify.
\citet{martin14,armstrong14} noted that any abundance deduced from the coplanar sample would therefore only be a minimum abundance, as a highly misaligned population of planets could not be ruled out. \citet{martin14} simulated the {\it Kepler} detection yield for a suite of hypothetical circumbinary distributions, which were then compared with the actual {\it Kepler} findings. The tested distribution which best matched the {\it Kepler} discoveries had a 10\% minimum frequency of gas giants. The more comprehensive study by \citet{armstrong14} used an automated algorithm to search the {\it Kepler} eclipsing binary light curves for transit signals of circumbinary planets. Its sensitivity was limited to gas giants ($\gtrsim4R_{\oplus}$). The algorithm was tested on all detached {\it Kepler} eclipsing binary light curves, searching for both real planets and injected fake transit signals. By quantifying the detectability of planets in each eclipsing binary light curve, \citet{armstrong14} derived a minimum occurrence rate that matched the $\sim 10\%$ calculation of \citet{martin14}. Both estimates are higher than the initial $\sim 3\%$ calculation of \citet{welsh12}, but the present sample size is too small to rule out this lower value.
In Fig.~\ref{fig:armstrong_abundance} (left) the \citet{armstrong14} occurrence rate is broken down into different radius intervals. The frequency decreases for larger planets, in line with what is known for single stars. Note that Kepler-1647, with a radius of $11.9R_{\oplus}$, had not been confirmed at the time of their analysis. Figure~\ref{fig:armstrong_abundance} (right) demonstrates how the true frequency of circumbinary planets depends on the underlying distribution of the alignment between the binary and planet orbital planes.
A giant circumbinary planet frequency of 10\% would be compatible with what is seen around single stars at similar periods \citep{howard10,mayor11,petigura13}. This hints that the formation of gas giants might be similar around one and two stars. Furthermore, the existence of a highly misaligned population of circumbinary planets would be indicative of an even higher abundance when compared with single stars, posing curious questions to planet formation theories.
Most recently, \citet{li16} suggested that the existing discoveries can actually be used to deduce a true mutual inclination distribution of just a few degrees. However transit discovery methods that are sensitive to highly misaligned planets (e.g. $\gtrsim20^{\circ}$) are yet to be demonstrated. Overall, more circumbinary discoveries are required to draw any firm conclusions.
\citet{klagyivik17} searched for circumbinary planets using data from the {\it CoRoT} mission, which preceded {\it Kepler}. The shorter {\it CoRoT} observing timespans of between 30 and 180 days limited the search sensitivity to $P_{\rm p}< 50$ days and $P_{\rm bin} < 10$ days. No discoveries were made, but within this period range the Jupiter- and Saturn-sized circumbinary frequencies were constrained to $<0.25\%$ and $<0.56\%$, respectively. This is much smaller than seen for comparable planets around single stars, but consistent with the dearth of circumbinary planets around tight binaries found in the {\it Kepler} mission.
Efforts have also been made to quantify the circumbinary frequency at wider separations. The SPOTS survey conducts direct imaging on young spectroscopic binaries to search for outer companions \citep{thalmann14}. The initial sample of 26 binaries has a wide spread of periods ranging from 1 day up to 40 years. The latest work in \citet{bonavita16} has been to combine observations taken in SPOTS with those already existing in the literature. No confirmed detections were made, but the frequency of planets between 2 and 15 $M_{\rm Jup}$ between 10 and 1000 AU was confined to $<9$\% with 95\% confidence. For comparison, \citet{bowler16} analysed single stars and made a much more precise occurrence rate calculation of $0.8^{+1.0}_{-0.6}\%$ for $5-13M_{\rm Jup}$ planets in wide $10-1000$ AU orbits. Surveys of massive, long-period circumbinary planets are therefore comparatively in their infancy.
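As a crude cross-check (a naive binomial limit that ignores the per-target detection probabilities folded in by \citet{bonavita16}), zero detections among $N$ targets constrain the occurrence rate $p$ via $(1-p)^N \ge 0.05$:

```python
# Naive 95% upper limit on the occurrence rate after 0 detections in N targets.
def p_upper_95(n_targets):
    return 1.0 - 0.05**(1.0/n_targets)

print(round(p_upper_95(26), 2))   # ~0.11 for the initial 26-binary sample
```

This is comparable to, though slightly weaker than, the quoted $<9\%$, which makes use of the full per-star sensitivity information.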
The imaging surveys have focused on young systems and the {\it Kepler} results have been for main sequence binaries. Contrastingly, the method of ETVs has typically focused on evolved, post-common envelope binaries with $P_{\rm bin}<1$ day. \citet{zorotovic13} find that roughly 90\% of such binaries have observed ETVs, which could be interpreted as planets. This is roughly 10 times larger than seen in {\it Kepler} or the SPOTS survey, which indicates that the observed ETVs are unlikely to all be of planetary origin, and likely include false positives such as the Applegate mechanism \citep{applegate92}. Alternatively, there would need to be a highly effective means of second-generation planet formation after the evolution of the inner binary \citep{perets10,bear14}.
\subsection{Circumprimary and circumsecondary planets}\label{subsec:circumprimary}
Methodologically, there are two approaches to finding circumstellar planets in binaries. First, a binary may already be known and then a search is made for interior planets, for example the \citet{eggenberger06,toyota09} surveys. Alternatively, a planet may already be known and then there is a search for outer stellar companions. The latter approach is favoured in the literature, because finding an additional star is simply an easier task than finding an additional planet.
In Fig.~\ref{fig:all_planets} we see that most of the circumstellar planets in binaries have been discovered by transits and RVs. The binaries themselves are generally discovered with RVs, imaging and astrometry, sometimes in combination. In this figure only part of the stable parameter space is well-populated. There is a lack of wide-orbit planets ($a_{\rm p}\gtrsim10$ AU). This can be explained by the difficulty in finding planets so far from their host star, particularly with the RV and transit techniques. This may change in the near future as direct imaging continues to improve.
There is also a reduced number of planets around binaries with $a_{\rm bin}<50$ AU (mean of the log-normal binary separation distribution, \citealt{raghavan10}). We know of 17 circumstellar planets in tighter binaries, compared to 101 planets in wider systems. The tightest binary known to host a circumstellar planet is 5.3 AU (KOI-1257, \citealt{santerne14}), although RV follow-up is ongoing to better characterise the outer orbit. There also exist some borderline binary cases with brown dwarf secondary ``stars'' in even tighter orbits (WASP-53 and WASP-81, \citealt{triaud17}, but not included in Fig.~\ref{fig:all_planets}). Tight binaries may not be resolvable by imaging surveys, but they are the easiest to find by the RV technique. Additionally, the {\it Kepler} survey has provided almost 3,000 eclipsing binaries, with periods ranging from less than a day to several hundred days (Fig.~\ref{fig:circumbinary_EB_histogram}), but none are known to host circumstellar planets.
We first review the multiplicity of planet-hosting stars, particularly as a function of binary separation, such as the aforementioned dearth of planets in $a_{\rm bin}<50$ AU binaries. Comparisons are also made with the multiplicity of stars in general. The special class of hot Jupiters is then treated separately, before finally summarising some of the difficulties and caveats in the studies of circumstellar planets in binaries.
\subsubsection{The stellar multiplicity of planet hosts }\label{subsubsec:multiplicity}
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\textwidth]{wang2.pdf}
\caption{Ratio of the planet frequency in single star systems to that in multi-star systems as a function of the separation to the stellar companion, taken from \citet{wang2} based on imaging of KOIs. Error bars come from Poisson statistics, but are not calculated for the first three data points due to a lack of detected stellar companions. Error bars for the last three data points are invisibly small. The dashed line is a ratio of 1.}
\label{fig:wang2}
\end{center}
\end{figure*}
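The Poisson error bars in the caption of Fig.~\ref{fig:wang2} follow from standard counting statistics: for a ratio of two occurrence frequencies, the relative Poisson errors of the detection counts add in quadrature. A minimal sketch of such a propagation (the function name and simplifications are ours, not those of \citet{wang2}):

```python
import math

def ratio_with_poisson_error(n1, total1, n2, total2):
    """Ratio of two occurrence frequencies with Poisson error propagation.

    Assumes the dominant uncertainty is the counting noise on the
    detected companions (n1, n2); relative errors then add in quadrature.
    """
    if n1 == 0 or n2 == 0:
        # mirrors the figure caption: no error bar without detections
        raise ValueError("Poisson error undefined for zero detections")
    f1, f2 = n1 / total1, n2 / total2
    ratio = f1 / f2
    sigma = ratio * math.sqrt(1.0 / n1 + 1.0 / n2)
    return ratio, sigma
```

This also illustrates why no error bars were calculated for the first three data points of Fig.~\ref{fig:wang2}: with zero detected stellar companions the Poisson estimate is undefined.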
There have been two main sources of planet-hosting stars around which stellar companions have been searched for. Earlier studies used planets discovered by RVs. More recent work has used Kepler Objects of Interest (KOIs), i.e. transiting planet candidates. The two samples typically have vastly different planet properties, sample biases and observational sensitivities to outer companions. Consistency in the stellar multiplicity rates is therefore not necessarily expected. However, some of the same trends have been seen in both samples.
One of the first large studies was conducted by \citet{eggenberger07}. A sample was constructed of 130 RV target stars, half of which were known to host a gas giant planet and the other half used as a control sample. Direct imaging was used to uncover outer stars. The control sample multiplicity was 18\%, almost double that of the planet-host sample, which had a multiplicity rate of 10\%. \citet{eggenberger11} showed that whilst the planet hosts have a lower rate of stellar companions than field stars within 100 AU, there was no discernible difference for companions between 100 and 200 AU. The independent \citet{desidera07} imaging survey also recovers the detrimental impact of binaries tighter than 100 AU. \citet{ginski12,ginski16} surveyed 125 RV planet hosts and calculated an overall smaller multiplicity of $5.6\%$ based on confirmed stellar companions, but this percentage rises to $9-10\%$ if unconfirmed companions are included.
\citet{ngo17} compared the distribution of mass, period and eccentricity of RV planets around stars with and without stellar companions within 6 arcsec. They found no discernible difference. The complementary survey of \citet{moutou17} observed multi-stellar systems with wider separations. It was found that eccentric RV planets are more likely to exist in a binary than circular RV planets, potentially as a consequence of dynamical perturbations (see simulations by \citealt{kaib13}).
\citet{wang1} combined imaging and spectroscopic measurements of KOIs in the search for stellar companions. They demonstrated a paucity of planets in tight binaries ($\lesssim 20$ AU), for which the multiplicity of planet hosts was roughly three times less than for field stars. The follow-up study of \citet{wang2} was extended to wider binary separations. They found a small depletion of planets in binaries as wide as 1500 AU, but only at 1-2$\sigma$ significance. In Fig.~\ref{fig:wang2} we plot their calculated ratio of the planet frequency in single star systems to that in multi-star systems. This matched the later work of \citet{kraus16}, who also directly imaged KOIs and calculated a suppression of planets in tight binaries ($< 50^{+49}_{-23}$ AU) by a factor of 3 compared to the frequency around single stars or wider binaries. Accounting for both the paucity and the rate of stellar multiplicity in field stars, it was deduced that one fifth of all solar-type stars are unable to host exoplanets, owing to the detrimental effect of a binary companion.
The study of \citet{horch14} similarly targeted KOIs with direct imaging, but at a lower spatial resolution. Consequently, they were typically sensitive to wider binaries than the previously-mentioned KOI surveys. They calculated multiplicity rates of $37\% \pm 7\%$ and $47\% \pm 19\%$, based on work done using the WIYN 3.5 m and Gemini North 8.1 m telescopes, respectively. These numbers are similar to the multiplicity of field stars ($\sim 50\%$, \citealt{duquennoy91,raghavan10}). The \citet{horch14} results are consistent with those from \citet{wang1,wang2,kraus16} for wide binaries, i.e. there is minimal or no impact of wide stellar companions ($\gtrsim 100$ AU) on planet occurrence.
A follow-up imaging survey of \citet{wang3} focused solely on giant planet KOIs, and hence may be more easily compared with RV-discovered planets. They discerned that the multiplicity of planet hosts was depleted to $0^{+5}_{-0}\%$ for binaries within 20 AU, compared to $18\pm2\%$ for field stars. Contrastingly, \citet{wang3} discovered a surprising increase in the multiplicity of planet hosts to $34\pm8\%$ for binaries between 20 and 200 AU, which is significantly higher than the field star multiplicity of $12\pm2\%$. This is a result not seen in studies of RV planet hosts and warrants further investigation, particularly given the potential consequences on planet formation. For binaries wider than $\sim 200$ AU the multiplicity rate of field stars and planet hosts was comparable, as found by other authors.
\begin{figure*}
\begin{center}
\includegraphics[width=0.7\textwidth]{ziegler_2017.pdf}
\caption{Binary percentage of giant planets ($>3.9R_{\oplus}$) and small planets based on imaging of {\it Kepler} candidates by \citet{ziegler17a}. Error bars are $1\sigma$.}
\label{fig:ziegler_2017}
\end{center}
\end{figure*}
Since 2012 the Robo-AO survey has conducted adaptive optics follow-up of hosts of {\it Kepler} planet candidates, with a sensitivity out to $4''$. This work has been published in a series of four papers \citep{law14,baranec16,ziegler17a,ziegler17b}. In Fig.~\ref{fig:ziegler_2017} we show their comparative stellar binary rates for hosts of giant ($>3.9R_{\oplus}$) and smaller planets. At short periods less than $\sim 10$ days there is a marginal increase in stellar multiplicity for giant planets. No statistically significant differences are seen at longer planet periods.
The presence of a close binary companion has strong implications for planet formation theories. It is predicted that the protoplanetary disc will be truncated \citep{artymowicz94} and that its conditions will be less favourable for planet formation by both gravitational collapse and core accretion \citep{nelson00,mayer05}. There may also be an ejection of formed planets \citep{zuckerman14}. Observations of protoplanetary discs also show evidence for decreased lifetimes in $<100$ AU binaries \citep{kraus12,daemgen13,daemgen15,cheetham15}.
\subsubsection{Hot Jupiters in stellar binaries}\label{subsubsec:hot_jupiters}
The existence and properties of ``hot Jupiters'' - giant planets on orbits of just a few days - have confounded us ever since the first discovery of 51 Peg \citep{mayor95} (see chapter by Santerne). The environment at such close proximity to the star has classically been thought to be a hindrance to planet formation (\citealt{pollack96,rafikov06}, but see also \citealt{boley16,batygin16}). Alternatively, the giant planet forms farther out in the disc before migrating inwards. Several different migration mechanisms have been proposed, such as disc migration \citep{goldreich79,lin79,ward97,masset03}, planet-planet scattering \citep{weidenschilling96,rasio96,chatterjee08,beauge12} and {Kozai-Lidov cycles plus tidal friction} \citep{innanen97,wu03,fabrycky07,naoz12}.
It has been observed that $\sim 30\%$ of hot Jupiters exist on orbits that are misaligned or even retrograde with respect to the spin of the host star (\citealt{hebrard08,winn09,triaud10} and the chapter by Triaud). This may be a fingerprint of Kozai-Lidov cycles acting on the inner orbit. Alternatively, the misalignment distribution may be a reflection of the tilting of the protoplanetary disc \citep{lai14,spalding14,spalding15,matsakos17} or planetary engulfment \citep{matsakos15}. A massive outer body is often implicated in these theories, and hence stellar binaries have been targeted as an explanation for hot Jupiters.
The ``friends of hot Jupiters'' survey has searched for outer companions to hot Jupiters drawn predominantly from the {\it WASP} and {\it HAT} photometric surveys. The results have been presented in a series of papers \citep{knutson14,ngo15,piskorz15,ngo16}. Radial velocities are used to search for close companions ($a_{\rm bin}<50$ AU), whereas direct imaging probes farther bodies. Two contrasting results were discovered, as shown in Fig.~\ref{fig:friends_of_hot_jupiters}. For $a_{\rm bin}$ between 1 and 50 AU the multiplicity of hot Jupiter hosts is $3.9^{+4.5}_{-2.0}\%$, which is roughly four times less than what is seen for field stars. The presence of a close stellar companion is seemingly detrimental to the existence of a hot Jupiter. On the other hand, hot Jupiters are seen to have wider stellar companions (50 - 2000 AU) at a rate of $47\pm7\%$, which is three times larger than what is seen for field stars. Note that in Fig.~\ref{fig:friends_of_hot_jupiters} this high multiplicity is split into five separation bins.
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\textwidth]{friends_of_hot_jupiters.pdf}
\caption{ Completeness-corrected stellar multiplicity rate of hot Jupiter hosts compared with field stars from \citet{ngo16}. The multiplicity is calculated in five uniform bins in log space between 100 and 1778 AU, corresponding to the sensitivity of imaging, and an additional bin between 1 and 50 AU for the closer companions detectable by RVs.}
\label{fig:friends_of_hot_jupiters}
\end{center}
\end{figure*}
The \citet{evans16} imaging survey of predominantly {\it WASP} and {\it CoRoT} hot Jupiters calculated a multiplicity rate of $38^{+17}_{-13}\%$. Since their survey was typically only sensitive to companions farther than 200 AU, this rate is, as expected, slightly lower than that calculated by the \citet{ngo16} survey, whose imaging was sensitive to companions as close as 50 AU. The works of \citet{daemgen09,faedi13,adams13,bergfors13} have also reported multiplicity rates that are slightly smaller than \citet{ngo16}, but these were calculated using only small samples of $\sim 15-20$ hot Jupiters. The Robo-AO survey results from \citet{ziegler17a} (Fig.~\ref{fig:ziegler_2017}) demonstrate a heightened stellar multiplicity of stars hosting hot Jupiters compared with those hosting hot small planets.
The inherent rarity of hot Jupiters does mean that the current statistics have significant room for improvement. This problem will hopefully be overcome by {\it TESS} (see chapter by Ricker) and {\it PLATO} (see chapter by Rauer \& Heras). Improvements in imaging and results from the {\it GAIA} astrometric survey (see chapter by Sozzetti \& Bruijne) will aid the detection of stellar companions.
On the surface, a heightened stellar multiplicity of hot Jupiter hosts gives credence to the idea of Kozai-Lidov migration. However, two problems have been uncovered. First, a stellar companion is not always sufficient to induce Kozai-Lidov cycles, as a faster secular effect will quench them, for example apsidal precession induced by tides and general relativity. The timescale of Kozai-Lidov cycles increases for farther companions, like the ones generally found around hot Jupiters. \citet{ngo16} calculated that only $\sim20\%$ of their surveyed hot Jupiters could have formed by Kozai-Lidov migration. This result is compatible with earlier simulations of \citet{wu07,naoz12,petrovich15}. A second problem with the Kozai-Lidov migration scenario is that \citet{ngo15} found that the stellar multiplicity rate was not correlated with the misalignment of hot Jupiters.
It is possible, however, that Kozai-Lidov migration is typically caused not by stellar companions to hot Jupiters but rather planetary companions. This has been investigated theoretically \citep{naoz11} and observationally \citep{bryan16}, but is beyond the scope of this review.
\subsubsection{Caveats and difficulties}\label{subsubsec:difficulties}
Overall, it is hard to quantitatively compare the derived stellar multiplicity rates between different surveys. Difficulties arise from inconsistent sample selection, e.g. planets detected by RVs or transits. Radial velocity exoplanet surveys have historically avoided binary systems, owing to the threat of spectral contamination. This is expected to be the main reason why multiplicity rates of RV-discovered planets are lower than that for transiting planets. Planets found using RVs are also typically larger and on longer periods, and both properties are seemingly connected to the influence of stellar companions.
Even for a single detection method such as transits, a difference in precision and observational timespan (e.g. {\it Kepler} versus {\it WASP}/{\it HAT}) biases the size and period distribution of the planets and hence potentially the deduced rate of stellar multiples. An additional effect, which has not been explored here, is the effect of the host star mass. This has been known to affect both the stellar multiplicity rate and semi-major axis distribution of binaries \citep{raghavan10}, as well as the planet occurrence rate around single stars \citep{johnson10}.
Another difficulty is the presence of the Malmquist bias, which is a preferential selection of brighter targets within astronomical surveys. Since multi-star systems have more flux contributions than single stars, they may be overrepresented in some surveys, skewing statistics. See \citet{kraus16} for further discussion on the Malmquist bias in multiplicity studies, and also \citet{wang2,wang_other,ginski16} for a more in-depth discussion on other challenges.
\section{Summary of observed trends}\label{sec:summary}
Listed here are the most compelling observational trends so far.
\vspace{0.4cm}
Circumbinary (p-type) planets:
\begin{itemize}
\item Giant circumbinary planets around moderately wide binaries ($P_{\rm bin}\sim 7-41$ days) are found at a similar frequency to similarly sized planets around single stars, hinting at a similar formation efficiency around one and two stars.
\item There is a dearth of circumbinary planets around tighter binaries, which is seen as evidence for the existing theory of tight binary formation via Kozai-Lidov cycles under the influence of a third star, plus tidal friction.
\item There is an over-abundance of circumbinary planets near the orbital stability limit. This may be indicative of inwards migration within the protoplanetary disc before a parking mechanism stops the planets near the inner hole in the disc which has been carved out by the binary.
\end{itemize}
Circumstellar (s-type) planets in binaries:
\begin{itemize}
\item When marginalized over all planet sizes, tight stellar companions ($\lesssim 50$ AU) are $\sim 3$ times less likely to be found around exoplanet hosts than field stars. This suggests a ruinous influence of a tight binary on planet formation and/or survival.
\item For wider binaries the planet host and field star multiplicity rates are similar, so additional stars at these separations are seemingly too distant to influence the planets. This is again for planets of all sizes.
\item Hot Jupiters have a $\sim 3$ times heightened stellar multiplicity rate compared to field stars, but only for wide ($ \gtrsim 50$ AU) binaries. This may be indicative of a nurturing influence of a wide stellar companion on hot Jupiters.
\end{itemize}
\section{Cross-References}
\begin{itemize}
\item{Two Suns in the Sky: The Kepler Circumbinary Planets $-$ Welsh, W. \& Orosz, J.}
\item{Circumbinary Planets Around Evolved Stars $-$ Marsh, T.}
\item{The Way to Circumbinary Planets $-$ Doyle, L. \& Deeg, H.}
\item{The Rossiter/McLaughlin Effect in Exoplanet Research $-$ Triaud, A.}
\item{Hot Jupiter Populations from Transit and RV Surveys $-$ Santerne, A.}
\item{Space Astrometry Missions for Exoplanet Science: Gaia and the Legacy of Hipparcos $-$ Sozzetti, A. \& Bruijne, J.}
\item{Space Missions for Exoplanet Science: TESS $-$ Ricker, G.}
\item{Space Missions for Exoplanet Science: PLATO $-$ Rauer, H. \& Heras, A.}
\end{itemize}
\begin{acknowledgement}
Thank you to Dave Armstrong, Sebastian Daemgen, Dan Fabrycky, Elliott Horch, Adam Kraus, Henry Ngo, Richard Schwarz and Amaury Triaud for expert insights on earlier versions of the manuscript. I also thank section editor Natalie Batalha for her thorough review, and book editors Hans Deeg and Juan Antonio Belmonte for giving me the opportunity to write this chapter. Finally, I acknowledge funding and support from the Swiss National Science Foundation, The University of Chicago and the Universit{\'e} de Gen{\`e}ve. \end{acknowledgement}
\section{CONCLUSIONS}
\label{sec:Conclusions}
In this paper we introduced a new approach to vehicle trajectory prediction, which utilizes static environment information about drivable maneuvers contained in a map. Our method was evaluated on a set of over 14500 trajectories recorded at an intersection and proved to provide significantly better mid-term prediction results than motion model-based prediction. The second contribution of this paper is a method to automatically create lane-accurate maps from accumulated trajectories and extract a topological graph describing the traffic framework in the considered area.
\section{Evaluation}
\label{sec:Evaluation}
First, we briefly describe the evaluation data set in Section~\ref{subsec:evaluation_data_set} which provides suitably long trajectories.
Second, we introduce a similarity measure in Section~\ref{subsec:evaluation_similarity_measure} that serves as error measure for our method.
After presenting a baseline method in Section~\ref{subsec:evaluation_baseline_method}, we discuss the results in Section~\ref{subsec:evaluation_discussion}.
\subsection{Data Set}
\label{subsec:evaluation_data_set}
The data set was recorded at a busy intersection in Karlsruhe,~Germany with multiple maneuver options for each lane, on two different days and from different viewpoints to minimize blind spots.
We used a Velodyne HDL64E-S2 \cite{velodyneHDL} range sensor mounted on an experimental vehicle to measure highly accurate surface positions of the environment represented as point sets.
Then, we remove points close to the ground by fitting a RANSAC \cite{fischler1981random} plane model to the data and removing all points below a minimum signed distance.
Afterwards, we perform cluster segmentation based on a generalized Connected Components Labeling on nearest neighbors as presented in \cite{Rusu_ICRA2011_PCL}.
Given the point clusters corresponding to objects we compute their convex hull and assume the object position to be the geometric center of that convex hull.
The tracking is realized by assuming a constant yaw rate and acceleration (CYRA) motion model and using a simple nearest neighbor association approach for as long as each object was visible.
In the next step each proposed object is verified and labeled by hand in order to eliminate objects that are neither cars nor trucks.
Finally, all trajectories are split and/or trimmed so that their segments each are long enough to provide data for different evaluation horizons.
This results in 14531 vehicle trajectories of at least 4 m length and 5766 trajectories being at least 20 m long.
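The RANSAC ground-removal step above can be sketched as follows. This is an illustrative pure-Python version, not the PCL-based implementation used in the pipeline; function names, parameter values and the simple plane sampling are ours:

```python
import random

def fit_plane(p1, p2, p3):
    """Plane through three 3D points: unit normal (a, b, c) and offset d
    such that a*x + b*y + c*z + d = 0; returns None if collinear."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    norm = (nx * nx + ny * ny + nz * nz) ** 0.5
    if norm == 0:
        return None
    nx, ny, nz = nx / norm, ny / norm, nz / norm
    return nx, ny, nz, -(nx * p1[0] + ny * p1[1] + nz * p1[2])

def ransac_ground_removal(points, iters=200, inlier_tol=0.1,
                          min_height=0.2, seed=0):
    """Fit the dominant plane by RANSAC; keep points well above it."""
    rng = random.Random(seed)
    best_plane, best_inliers = None, -1
    for _ in range(iters):
        plane = fit_plane(*rng.sample(points, 3))
        if plane is None:
            continue
        a, b, c, d = plane
        inliers = sum(1 for (x, y, z) in points
                      if abs(a * x + b * y + c * z + d) < inlier_tol)
        if inliers > best_inliers:
            best_plane, best_inliers = plane, inliers
    a, b, c, d = best_plane
    if c < 0:  # orient the normal upwards so "above" is well defined
        a, b, c, d = -a, -b, -c, -d
    # remove all points below a minimum signed distance to the plane
    return [(x, y, z) for (x, y, z) in points
            if a * x + b * y + c * z + d > min_height]
```

The subsequent clustering and convex-hull steps of the pipeline would then operate on the returned non-ground points.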
\subsection{Similarity Measure}
\label{subsec:evaluation_similarity_measure}
Choosing an evaluation metric for trajectory prediction while avoiding bias is a challenging task.
In general, a suitable metric should take the predicted trajectory (or trajectories) and compare it with the actual driven trajectory, resulting in a value representing either the distance or the similarity to the ground truth.
As detailed in \cite{zheng2015trajectory}, there are several different metrics comparing different aspects of trajectory similarity with each metric providing different results.
Most notably there is a fundamental difference in doing a path or a trajectory comparison.
As proposed in \cite{quehl2017how} it is possible to combine different metrics, each concentrating on different aspects of trajectory similarity, to a weighted sum of several metrics.
In the context of this paper, the similarity measure was constructed based on that proposed method, using global orientation difference $m_{\mathrm{GOD}}$, average velocity difference $m_{\mathrm{AVD}}$ and the mean Euclidean distance for trajectories $m_{\mathrm{MEDT}}$ and paths $m_{\mathrm{MEDP}}$ as a basis.
The results showed that $m_{\mathrm{MEDT}}$ and $m_{\mathrm{MEDP}}$ had similar and far larger weights than the other two metrics.
Therefore we chose $m(T_1,T_2) = 0.5 \cdot m_{\mathrm{MEDT}} + 0.5 \cdot m_{\mathrm{MEDP}}$.
This implies that the measure can be seen as the average distance (in meters) from the points closest in time and space.
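A simplified sketch of the chosen measure, assuming trajectories are given as lists of $(t, x, y)$ tuples with matching timestamps; the exact definitions of $m_{\mathrm{MEDT}}$ and $m_{\mathrm{MEDP}}$ in \cite{quehl2017how} may differ in detail (e.g. our path term is one-sided):

```python
import math

def medt(traj_a, traj_b):
    """Mean Euclidean distance between points paired by timestamp
    (trajectory comparison); assumes shared timestamps."""
    pairs = [(a, b) for a in traj_a for b in traj_b if a[0] == b[0]]
    return sum(math.dist(a[1:], b[1:]) for a, b in pairs) / len(pairs)

def medp(traj_a, traj_b):
    """Mean Euclidean distance from each point of A to the closest
    point of B (path comparison: timestamps are ignored)."""
    return sum(min(math.dist(a[1:], b[1:]) for b in traj_b)
               for a in traj_a) / len(traj_a)

def similarity_measure(traj_a, traj_b):
    """m = 0.5 * m_MEDT + 0.5 * m_MEDP, as chosen in the text."""
    return 0.5 * medt(traj_a, traj_b) + 0.5 * medp(traj_a, traj_b)
```

Because both terms are mean distances in meters, the combined measure keeps the interpretation given above: the average distance to the points closest in time and in space.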
\subsection{Baseline Method}
\label{subsec:evaluation_baseline_method}
As a baseline for our evaluation we chose to implement a simple CYRA-based prediction approach.
For this model, we assume constant yaw rate and constant acceleration during prediction.
The resulting trajectory is then compared to the ground truth using the metric defined in Section~\ref{subsec:evaluation_similarity_measure}.
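The CYRA rollout can be sketched with simple forward-Euler integration; the step size, horizon and function name below are illustrative, not those of the actual implementation:

```python
import math

def cyra_predict(x, y, heading, v, yaw_rate, accel, dt=0.1, horizon=3.0):
    """Roll out a constant yaw rate and acceleration (CYRA) state,
    returning predicted (t, x, y) points via forward-Euler integration."""
    traj = [(0.0, x, y)]
    steps = int(round(horizon / dt))
    for k in range(1, steps + 1):
        v = max(0.0, v + accel * dt)   # clamp: no driving backwards
        heading += yaw_rate * dt       # constant yaw rate
        x += v * math.cos(heading) * dt
        y += v * math.sin(heading) * dt
        traj.append((k * dt, x, y))
    return traj
```

With a nonzero yaw rate this produces the circle or spiral behaviour discussed in Section~\ref{subsec:evaluation_discussion}, which is exactly where the baseline diverges from real intersection maneuvers.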
The same procedure was conducted with our graph based prediction approach.
Since our approach also allows predicting more than one trajectory and assigning each trajectory an estimated probability, we can define this as a third prediction approach for the comparison.
Since this third approach provides several trajectories while the other two provide only one, the comparison method has to be modified.
For the third approach, the evaluation was done by calculating the mathematical expectation of the similarity measure based on the estimated probabilities.
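The expectation over the predicted hypotheses can be sketched as follows, assuming each hypothesis is a (probability, trajectory) pair and `measure` is the similarity measure of Section~\ref{subsec:evaluation_similarity_measure}; the normalisation guard is our addition:

```python
def expected_similarity(hypotheses, ground_truth, measure):
    """Probability-weighted expectation of the similarity measure over
    a set of (probability, trajectory) prediction hypotheses."""
    total = sum(p for p, _ in hypotheses)
    return sum(p * measure(traj, ground_truth)
               for p, traj in hypotheses) / total
```

This rewards the multi-hypothesis predictor whenever a wrong but most-probable trajectory is accompanied by a more correct alternative carrying significant probability, which is the effect observed in the evaluation.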
\begin{figure*}
\centering
\includegraphics[width=0.87\textwidth]{tex/Graphics/Evaluation/Paper2_no_heading_labeled.jpg}
\caption{Evaluation of our prediction approach (blue and red) compared to prediction with a CYRA motion model (green).
The upper and lower boundaries represent the 75th and 25th percentiles, respectively.}
\label{fig:comparison}
\end{figure*}
\subsection{Discussion}
\label{subsec:evaluation_discussion}
The results of our trajectory prediction evaluation are depicted in Fig.~\ref{fig:comparison}.
As previously explained, the evaluation was performed by calculating the predicted position for each timestamp that occurs in the ground truth until a given distance was traveled.
For each point we calculated the mean between the euclidean distance to the closest point on the interpolated ground truth trajectory and the euclidean distance to the point at the same timestamp.
This form of evaluation was chosen in order to mitigate differences in velocity between different trajectories of the data set.
Choosing the prediction horizon based on time difference would instead result in less meaningful data, since one second may correspond to large spatial differences depending on the velocity.
The evaluation shows that both variants of the newly proposed prediction method perform better than a CYRA model based prediction.
While the CYRA assumption seems to hold for small distances, after a few meters it diverges increasingly fast from the ground truth.
The reason for this is that most driving maneuvers at an intersection are completed after a few meters driven causing a change in yaw rate and acceleration.
Our graph based prediction on the other hand incorporates future changes in these parameters.
For example, at the beginning of a curve a CYRA approach predicts that the vehicle will continue driving in a circle or spiral along the current curvature, depending on the acceleration.
A graph based approach on the other hand predicts that the vehicle will stop taking the turn after a few meters and continue straight ahead afterwards.
What is noteworthy is that the average distance to the ground truth of both variants of the graph prediction approach seems to increase only linearly unlike with the CYRA approach.
This implies that in more than $75\%$ of all cases the predicted path was chosen and a small error accumulates linearly with distance traveled.
The fact that the evaluation shows slightly better results for the prediction with more than one trajectory implies that, in cases where the most probable estimated trajectory is wrong, a more correct trajectory is still found and assigned a significant probability.
\section{Graph Creation}
\label{sec:Graph Creation}
When predicting vehicle trajectories it is reasonable to assume that all observed vehicles will behave according to traffic rules.
Their movement is confined to the boundaries of the road and their trajectories can be segmented into shorter trajectories with a well defined starting and endpoint.
For example, consider a group of vehicles driving on the same lane towards an intersection: some of these vehicles will go straight while others will take a turn.
The accumulated trajectories of these different driving behaviors form a traffic network that describes the geometrical structure and drivable areas of the considered intersection.
This traffic network can be described in the form of a directed topological graph $G = (V,E)$ with transition probabilities.
These transition probabilities represent the general tendencies to choose the respective edge as basis for the next partial trajectory and can depend on a number of factors like velocity or even the current time of day (rush hour).
In order to create such a graph, the first step is to determine the main structure of $G$, which consists of a set of nodes $V$ (or vertices) and a set of edges $E$.
This can be realized in several ways: The first method would be to use existing maps readily available online to extract information about the surrounding road structure.
Open Street Map \cite{OpenStreetMap} for example allows the automatic retrieval of such information that can be parsed to a suitable graph.
However, this method has the drawback that such maps are usually not geo-referenced precisely enough everywhere to constitute a lane-accurate depiction of the course of the road.
Furthermore, they are often missing relevant information such as the number of lanes in some areas.
Therefore we decided to create the graph in a different way.
Assuming that more and more cars are being equipped with sensors that allow for accurate tracking of vehicles, the accumulated trajectory data at the given intersection can be used in order to create the aforementioned graph.
Using this data we can infer where possible paths exist and how probable it is that vehicles follow each path.
This method has the advantage that the result will contain not only the correct number of lanes but also all maneuvers that were observed at some point at this intersection.
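One simple way to estimate the transition probabilities described above is by normalised transition counts over observed node sequences. This is a hedged sketch: the conditioning factors mentioned earlier, such as velocity or time of day, are ignored here, and the function name is ours:

```python
from collections import defaultdict

def transition_probabilities(observed_paths):
    """Estimate per-node transition probabilities from observed paths.

    Each path is a list of node ids; (u -> v) transitions are counted
    and normalised per source node u."""
    counts = defaultdict(lambda: defaultdict(int))
    for path in observed_paths:
        for u, v in zip(path, path[1:]):
            counts[u][v] += 1
    probs = {}
    for u, outgoing in counts.items():
        total = sum(outgoing.values())
        probs[u] = {v: n / total for v, n in outgoing.items()}
    return probs
```

A richer model could additionally condition these counts on features such as approach velocity or time of day, as suggested in the text.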
\captionsetup[sub]{font=footnotesize}
\begin{figure*}
\centering
\begin{subfigure}[h]{0.32\linewidth}
\includegraphics[width=\linewidth]{tex/Graphics/Graph_Creation/original_pseudo.png}
\caption{Vehicle paths observed at an intersection in Karlsruhe,~Germany.
The color encodes the trajectory density from low (blue) to high (red).}
\label{fig:original picture}
\end{subfigure}
\begin{subfigure}[h]{0.32\linewidth}
\includegraphics[width=\linewidth]{tex/Graphics/Graph_Creation/binary.png}
\caption{Homogenization by binarization of vehicle paths in order to compensate for differing amount of trajectory observations.}
\label{fig:binary picture}
\end{subfigure}
\begin{subfigure}[h]{0.32\linewidth}
\includegraphics[width=\linewidth]{tex/Graphics/Graph_Creation/thining.png}
\caption{Reduction to simple lines through thinning. This depicts the basis for graph extraction.}
\label{fig:picture after thining}
\end{subfigure}
\begin{subfigure}[h]{0.49\linewidth}
\includegraphics[width=\linewidth]{tex/Graphics/Graph_Creation/classification.png}
\caption{Classification of nodes.
Start and end nodes depicted in black, decision and crossover nodes in cyan.}
\label{fig:classification of nodes}
\end{subfigure}
\begin{subfigure}[h]{0.49\linewidth}
\includegraphics[width=\linewidth]{tex/Graphics/Graph_Creation/clusters.png}
\caption{Final result containing trajectory prototypes for each velocity cluster which represent different maneuvers.}
\label{fig:trajectories clusters}
\end{subfigure}
\caption{Graph extraction steps. Starting from an aerial trajectory image nodes and edges are extracted by using binarization, thinning and line following algorithms. Afterwards trajectory prototypes are matched to each edge.}
\label{fig:qualitative_results}
\end{figure*}
In order to apply this method, we first transform all observed car trajectories into the map frame, as depicted for a sample intersection in Figure~\ref{fig:original picture}.
The task of extracting typical trajectories is similar to the problem of line detection in a top view representation of these trajectories.
This inspired the idea to use image processing techniques in order to extract the basic graph structure and afterwards using the association of trajectory data to the graph structure to infer all further information.
The process of the topological graph's generation from trajectory data can be summarized in two steps: (a) Extraction of the topology of $G$ using image processing; (b) Classification of each node $v \in V$ according to possible maneuvers.
The following sections will explain these steps in detail.
\subsection{Topology extraction}
\label{subsec:graph_generation}
All recorded trajectories in the observed traffic area are first converted to a top-view gray-scale image, in which the brightness of each pixel depends on the number of trajectories that cross the area represented by this pixel.
Figure~\ref{fig:original picture} depicts this image using a color map to better illustrate brightness differences.
The goal of the first processing step is to eliminate outliers that do not represent the general structure of the observed area.
These outliers are, for example, lane changes, which are generally possible at any point between neighboring lanes but do not change the underlying structure of separate lanes.
For this step, the gray-scale image is processed multiple times with morphological operations (opening and closing) with varying parameters.
Since most irrelevant outlier trajectories were eliminated in the previous step, we assume that the remaining trajectories are all relevant for the graph.
Therefore the next step is the binarization of the image which represents a homogenization of trajectory density as seen in Figure~\ref{fig:binary picture}.
In order to extract possible edges and nodes of the graph the next step is line thinning.
We apply the Zhang-Suen thinning algorithm \cite{Zhang:1984} to the binary image and receive a new image in which all lines have a thickness of only one pixel.
This method however might produce artifacts in areas with many diverging trajectories.
These artifacts are very short lines orthogonal to a longer line and can easily be identified and removed by searching for very short branches.
The result of these steps is depicted in Figure~\ref{fig:picture after thining}.
Finally, the nodes are found by searching for black pixels with either only one or more than two black pixels in their direct neighborhood.
Pixels with exactly two black pixels in their neighborhood, on the other hand, are part of an edge that connects one node with another.
By following all black neighboring pixels of a node to the next node the edges are found.
The set of all nodes and edges found this way generates the structure of $G$. At this point, edges are added bidirectionally.
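The per-pixel classification rule above can be sketched as follows. This is a minimal Python illustration operating on a binary skeleton stored as a NumPy array; the function name and the array-based representation are our own and not part of the described implementation. A foreground pixel with degree one (an endpoint) or degree greater than two (a branch point) becomes a node, while degree-two pixels belong to edges:

```python
import numpy as np

def classify_skeleton_pixels(skel):
    """Label each foreground pixel of a one-pixel-wide binary skeleton as a
    node (one or more than two foreground neighbours) or an edge pixel
    (exactly two foreground neighbours), following the rule in the text."""
    nodes, edge_pixels = [], []
    h, w = skel.shape
    for y in range(h):
        for x in range(w):
            if not skel[y, x]:
                continue
            # count foreground pixels in the 8-neighbourhood (clip at borders,
            # then subtract 1 for the centre pixel itself)
            y0, y1 = max(y - 1, 0), min(y + 2, h)
            x0, x1 = max(x - 1, 0), min(x + 2, w)
            deg = int(skel[y0:y1, x0:x1].sum()) - 1
            if deg == 1 or deg > 2:
                nodes.append((y, x))
            elif deg == 2:
                edge_pixels.append((y, x))
    return nodes, edge_pixels
```

Following the chains of degree-two pixels between two node pixels then yields the edges of $G$.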
\subsection{Maneuver Classification}
\label{subsec:maneuver_classification}
The determined main structure of topological graph $G$ only contains geometric information about the observed traffic area.
However, the possible movement directions as well as transition probabilities are still missing.
In order to add that information to $G$, map matching is used.
Each recorded trajectory $T$ is matched to an ordered set $E_T = \{e_1, e_2, \dots, e_n\}$, $e_i \in E$, $n \in \mathbb{N}_0$, of connected edges in the topological graph.
The matching is done by iterating over all timestamps of each trajectory $T$.
For each timestamp $t \in T$, the current position is transformed into image coordinates, and the edge belonging to the closest black pixels is a new candidate to be appended at the end of $E_T$.
Each edge $e$ can only be added to $E_T$ if it is not already the last edge in $E_T$ ($e \neq e_n$) and if it shares a node with the last element ($\exists v \in V : (e,v) \in E \land (v,e_n) \in E$), or if $E_T$ is empty ($\vert E_T \vert = 0$).
Since $G$ is created bidirectionally at this point, this always yields two possible directed edges $(n_1,n_2)$ and $(n_2,n_1)$.
We only add the edge that is aligned with the trajectory's movement direction.
The result of this map matching is a sequence that best describes the trajectory in the context of $G$.
After repeating this for all trajectories, every edge that was never part of any $E_T$ is removed, resulting in a directed graph $G$.
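The matching loop above can be sketched as follows. This is a hedged Python illustration: the callbacks `nearest_edge` (standing in for the image-based closest-pixel lookup) and `connected` (standing in for the shared-node test in $G$) are assumptions of this sketch, not functions from the described system:

```python
def match_trajectory(trajectory, nearest_edge, connected):
    """Match a trajectory (sequence of positions) to an ordered edge
    sequence E_T.  `nearest_edge(pos)` returns the edge whose pixels are
    closest to `pos`; `connected(e, f)` tells whether two edges share a
    node.  An edge is appended only if it differs from the last edge and
    is connected to it, or if E_T is still empty."""
    E_T = []
    for pos in trajectory:
        e = nearest_edge(pos)
        if not E_T:
            E_T.append(e)
        elif e != E_T[-1] and connected(e, E_T[-1]):
            E_T.append(e)
    return E_T
```

Candidates that are not connected to the current sequence are simply skipped, which filters out spurious nearest-edge matches.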
Each node can then be classified into one of four types: (a) Start nodes only have outgoing edges.
They depict the border of our mapped region. If they occur in $E_T$, they are always contained in the first element $e_1$.
They are end nodes in neighboring map sections. (b) End nodes have only ingoing edges and only occur at the end of $E_T$.
They constitute the exits of the mapped area and are start nodes in neighboring sections of the map.
(c) Crossover nodes have multiple input nodes and multiple output nodes.
However, the incoming edge always determines which outgoing edge is chosen.
These nodes are the result of crossing paths in different directions.
(d) Decision nodes have one input node and multiple output nodes.
The corresponding transition probabilities to different output nodes are between 0 and 1.
The sum of the probabilities to all output nodes is 1.
These are nodes where different maneuvers diverge.
An example classification can be seen in Figure~\ref{fig:classification of nodes}.
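The four node types can be distinguished purely by in/out degree, as the following minimal sketch shows. The function name is ours; the sketch assumes that, as in the extracted graph, every interior node actually branches, so a node with one input and one output does not occur:

```python
def classify_node(in_edges, out_edges):
    """Classify a node of the directed graph by its in/out degree,
    following the four categories in the text (crossover vs. decision is
    distinguished by the number of incoming edges)."""
    if not in_edges:
        return "start"      # only outgoing edges: border of the mapped region
    if not out_edges:
        return "end"        # only ingoing edges: exit of the mapped area
    if len(in_edges) > 1 and len(out_edges) > 1:
        return "crossover"  # incoming edge determines the outgoing edge
    return "decision"       # one input, several outputs: maneuvers diverge
```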
\section{Motivation and Related Work}
In automated driving, planning and understanding the ego trajectory is one of the most important and fundamental tasks.
It is necessary in order to plan emergency maneuvers, enable automatic lane keeping, dynamically adapt the velocity or in order to drive autonomously.
To solve this task it is necessary to know how the car's surroundings will change during the planning horizon.
Since these changes cannot be measured or known in advance it is necessary to perform some form of prediction.
For static surroundings or obstacles this task is trivial.
However, for moving objects, and other traffic participants in particular, the estimation of future behavior becomes a very challenging problem.
In commonly used motion planning tasks, the behavior of traffic participants is fully expressed by their trajectories.
Therefore, we focus on trajectory prediction in this work.
Whereas pedestrians and in many cases also cyclists move arbitrarily on free areas, roadways restrict vehicle motion to certain trajectory patterns.
Thus, we target the identification and extraction of vehicle trajectory patterns in this work and present a framework to predict vehicle trajectories.
Vehicle trajectory prediction is not an exact and deterministic problem since an observer does usually not have all the relevant information such as a driver's intention or driving style.
Many prediction approaches make the assumption that these factors cannot be reliably estimated and therefore only use previously known information about vehicle dynamics and the current movement state of the observed vehicle for their trajectory prediction.
For example, it is known that, given a current velocity, the other vehicle cannot accelerate or decelerate faster than the engine or brakes physically allow, and that the driver further aims to keep some degree of comfort for all passengers.
Therefore the future vehicle velocity and acceleration can be estimated within certain bounds.
A similar assumption can be made about the curvature the vehicle can drive based on the maximum steering angle and vehicle stability.
However, these bounds still do not limit the number of possible trajectories to a size for which an accurate prediction is possible.
Previous work by Schubert et al. \cite{schubert2008comparison} showed that assuming a Constant Yaw Rate and Acceleration (CYRA) model provides good results for vehicle tracking tasks.
This suggests that for short-term prediction, e.g. the time between two vehicle detections, such a simple model based on vehicle dynamics provides good predictions.
This was applied for example in \cite{tamke2011flexible} and \cite{berthelot2011handling} for short-term vehicle trajectory prediction.
For long-term predictions, on the other hand, such predictions can become quite inaccurate since they cannot anticipate changes in the yaw rate that any human driver would be able to predict, e.g., when leaving or entering a bend.
A different approach was pursued in \cite{sung2012trajectory} and \cite{laugier2011probabilistic}, which incorporated a Maneuver Recognition Module (MRM) in order to identify actions from a certain set of maneuvers and used that for trajectory prediction.
\begin{figure}[t]
\centering
\includegraphics[width = 0.90\columnwidth]{tex/Graphics/Introduction/prediction_comparison_2}
\caption{Comparison of different prediction approaches at two points in time.
CYRA in orange, MRM-based prediction in red (solid and dashed) and our proposed method in green.}
\label{fig:pred_comp}
\end{figure}
Houenou et al. \cite{houenou2013vehicle} combined motion-model and maneuver-based predictions, trying to benefit from both approaches.
A disadvantage of maneuver-based approaches, however, is that the maneuvers recognized by the MRM may be predictable a lot sooner.
An MRM, for example, needs a short time to recognize that a car starts entering a bend, even though any person who sees that a bend is coming could make a better prediction far sooner.
This paper proposes a trajectory prediction method that uses a special graph based map representation generated through trajectory observations to predict future changes in maneuver and trajectory.
By using information contained in a map this approach is able to predict driving maneuvers along the pathway of lanes and intersections and assign probabilities to different driving behavior.
Our prediction method is facilitated by statistical information about natural driving behavior in the surrounding area.
Figure~\ref{fig:pred_comp} illustrates the different approaches with a simple example: Before the car starts turning neither an MRM nor a motion model based approach can make a correct prediction.
In the next timestep an MRM-based method might yield a correct prediction, however if a wrong maneuver is estimated (lane change instead of turning), it might still make a wrong prediction.
CYRA on the other hand will provide a good short-term but a bad long-term prediction either way.
First, we provide an overview on our automatic map creation in Section~\ref{sec:Graph Creation}.
Based on this map information, we present our novel prediction method in Section~\ref{sec:Prediction}.
By evaluating our method in Section~\ref{sec:Evaluation}, we show that it is capable of making accurate mid-term predictions.
Finally, we conclude our work in Section~\ref{sec:Conclusions}.
\section{Prediction}
\label{sec:Prediction}
The result of the previous section is a graph which describes the possible maneuvers for vehicles in certain areas of the road structure. This section describes how to add information to the graph that can be used to predict trajectories and how to perform this prediction.
\subsection{Accumulate Prediction Information in Data Structure}
\label{subsec:Accumulation}
The information extraction process can again be split into two steps: (1) calculating transition probabilities based on observed trajectories; (2) selecting prototype trajectories describing typical behavior.
\subsubsection{Transition Probabilities}
\label{subsubsec:transition_probabilities}
From the last step in Section~\ref{sec:Graph Creation}, it can be seen that the transition probabilities at end nodes and crossover nodes are easily determinable.
The corresponding transition probabilities at decision nodes and start nodes with more than one adjacent edge, on the other hand, are still unknown.
In order to determine the transition probabilities in these cases, we assume a relationship between movement velocity and maneuver probability.
The reasoning here is that a change in maneuvers is often accompanied with a change in velocity.
For example vehicles approaching an intersection with high velocity are unlikely to take a turn while vehicles slowing down in front of a fork in the road will more likely take a turn.
In order to obtain the relationship between velocity and decisions at a decision node, the different speeds of vehicles are first determined at a given distance before reaching the considered decision node, based on the trajectory data.
Subsequently, all velocities are clustered based on a processing chain containing the TRACLUS algorithm \cite{Lee:2007} and Agglomerative Hierarchical Clustering \cite{Kumari_performanceevaluation}.
As a first step, all trajectories are clustered with the TRACLUS algorithm based on their velocity.
Because of the varying data density of trajectories in the observed traffic area, it is nearly impossible to find parameters that work equally well in all areas of the graph.
In order to improve the clustering performance, all clusters detected by the TRACLUS algorithm are checked and if necessary improved by further applying Agglomerative Hierarchical Clustering.
After clustering all trajectories, the transition probability $tp_i^j$ at a decision node can be calculated for each cluster $i$ and output edge $j$ by using the equation $tp_i^j=n_{i,j}/n_i$, where $n_{i,j}$ is the number of vehicles that have a speed in cluster $i$ and drive from the considered decision node to output node $j$.
$n_i$ is the number of vehicles which have a speed in cluster $i$ and cross the considered decision node.
These determined transition probabilities $tp_i^j$ can then be used for trajectory prediction at the corresponding decision node.
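The computation of $tp_i^j = n_{i,j}/n_i$ is a simple frequency count over the observed (cluster, output edge) pairs at a decision node. A minimal Python sketch (function and variable names are ours):

```python
from collections import Counter

def transition_probabilities(observations):
    """Compute tp_i^j = n_ij / n_i from a list of (cluster, out_edge)
    pairs observed at one decision node."""
    n_ij = Counter(observations)                # vehicles in cluster i taking edge j
    n_i = Counter(i for i, _ in observations)   # vehicles in cluster i overall
    return {(i, j): n_ij[(i, j)] / n_i[i] for (i, j) in n_ij}
```

By construction, the probabilities over all output edges sum to 1 for each cluster.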
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{tex/Graphics/Prediction/GMTS.png}
\caption{Overview of the data used for prediction: Depicted in black are the edges and non-decision nodes, in cyan decision nodes.
Each trajectory color represents a merged prototype trajectory for a given maneuver edge.}
\label{fig:graph and pattern trajectories}
\end{figure}
\subsubsection{Prototype creation}
\label{subsubsec:Prototype_creation}
After all clusters have been determined for each decision node, an attempt is made to extract a prototype trajectory $PT$ from each cluster.
This extraction is realized using a trajectory density based clustering algorithm proposed by Lee et al. \cite{Lee:2007}. The result of this algorithm is a set of representative trajectories each describing a typical trajectory given a velocity and location within the considered area.
The result is depicted in Figure~\ref{fig:trajectories clusters}.
For each cluster the associated prototype trajectory is stored in the edge of the topological graph $G$ and can be used as a basis for prediction.
In Figure~\ref{fig:graph and pattern trajectories} we illustrate the final resulting data structure with trajectories describing the same maneuver in different velocities merged together for comprehensibility.
\subsection{Prediction Process}
\label{subsec:Prediction_Process}
Our proposed prediction method uses this generated data structure by (1) associating the current movement state to at least one cluster and retrieving the respective prototype trajectories and (2) transforming the prototype trajectories in order to match the vehicle's movement.
\subsubsection{Retrieving trajectory prototypes}
\label{subsubsec:prototype_retrieval}
In order to find prototype trajectories, the current movement state of the vehicle is compared to the edges which represent areas closest to the location of the vehicle. By comparing the direction of edge and movement as well as spatial proximity the best matching edge is found.
If the desired prediction horizon exceeds the length of the respective edge, continuation edges are concatenated based on possible movements through the graph.
If one of the movements traverses over a decision node this results in several possible edge sequences which can then be processed to several different predictions.
Using these sequences and their corresponding prototype trajectories $\{PT\}$ we can calculate multiple motion predictions with probabilities:
First, we determine the velocity $v_{m}$ of the observed vehicle and compare it to the centers of the velocity clusters for the respective decision node. The result is the two neighboring clusters with their respective velocities $v_{slow}$ and $v_{fast}$.
The differences $\delta_{slow}$ and $\delta_{fast}$ between the two neighboring cluster centers and $v_{m}$ are calculated with the equation \eqref{eq:c3 1:a}.
With the difference $\delta$, defined by equation \eqref{eq:c3 1:b}, and the corresponding transition probabilities $p_{slow, h}$ and $p_{fast, h}$, the transition probability $p_h$ to the following node $h$ can be calculated using equation \eqref{eq:c3 1:c}. This results in one or more edge sequences, each of which represents one prediction with its respective probability.
\begin{align}
\label{eq:c3 1:a}
\delta_{\text{slow/fast}} &= \vert v_{\text{m}} - v_{\text{slow/fast}} \vert \\
\label{eq:c3 1:b}
\delta &= \vert v_{\text{fast}} - v_{\text{slow}} \vert\\
\label{eq:c3 1:c}
p_{\text{h}} &= p_{\text{slow}, h} \cdot \frac{\delta_{\text{fast}}}{\delta} + p_{\text{fast}, h} \cdot \frac{\delta_{\text{slow}}}{\delta}
\end{align}
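Equations \eqref{eq:c3 1:a}--\eqref{eq:c3 1:c} amount to a linear interpolation of the transition probability between the two neighboring velocity clusters. A minimal Python sketch (function name is ours):

```python
def interpolate_transition(v_m, v_slow, v_fast, p_slow_h, p_fast_h):
    """Interpolate the transition probability p_h between the two
    neighbouring velocity clusters: the cluster whose centre is closer to
    the measured velocity v_m receives the larger weight."""
    d_slow = abs(v_m - v_slow)   # Eq. (a)
    d_fast = abs(v_m - v_fast)   # Eq. (a)
    d = abs(v_fast - v_slow)     # Eq. (b)
    return p_slow_h * d_fast / d + p_fast_h * d_slow / d  # Eq. (c)
```

When $v_m$ coincides with a cluster centre, the formula reduces to that cluster's transition probability, as expected.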
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{tex/Graphics/Prediction/trajectory_transformation.png}
\caption{The prototype trajectory is transformed in order to continue the basis trajectory in a sensible way.}
\label{fig:transformation of trajectories}
\end{figure}
\subsubsection{Transformation of the prototype trajectories}
\label{subsubsec:trajectory_transformation}
In order to calculate the predicted trajectory from a sequence of edges, their respective prototype trajectories and the observed vehicle motion have to be combined to one consistent trajectory.
We are going to explain this process using a simplified example depicted in Figure~\ref{fig:transformation of trajectories}.
The segment $T_\text{seg}$ beginning at the closest point to the observed trajectory is cut out of the respective prototype trajectory $T_{\text{best}}$ with the length depending on the length and distance of each trajectory. This segment $T_{\text{seg}}$ is then transformed to $T_{\text{pre}}$ which connects the observed trajectory with the remainder of the prototype.
In order to obtain a realistic prediction, a spatial transformation of $T_{\text{seg}}$ is required. This transformation can be realized by moving the first point to the end of the observed trajectory and each subsequent point in $T_{\text{seg}}$ in the same direction but by a smaller amount. If we define $v_n$ as the vector between the last point in $T_{\text{best}}$ before the beginning of $T_{\text{seg}}$ and the end of the observed trajectory, and the length of $T_{\text{seg}}$ as $n$, then we can use the function $f(i)=1-\frac{i}{n}$ to transform each point $p^{\text{seg}}_i \in T_{\text{seg}}$ to the respective point $p^{\text{pre}}_i \in T_{\text{pre}}$ via equation \eqref{eq:c4}.
\begin{equation}
\label{eq:c4}
p^{\text{pre}}_i = p^{\text{seg}}_i + f(i) \cdot v_n , \quad 1 \leq i \leq n
\end{equation}
This transformation is repeated to connect all prototype trajectories in each sequence, resulting in one final trajectory per sequence, for which we can calculate the probability using equation \eqref{eq:c3 1:c}.
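The point-wise shift of equation \eqref{eq:c4} can be sketched in a few lines of Python. The 1-based indexing (so that the first point is shifted by $(1-\frac{1}{n})\,v_n$, nearly the full offset, and the last point stays fixed) is our reading of the equation; the function name is ours:

```python
import numpy as np

def transform_segment(T_seg, v_n):
    """Shift the segment points towards the observed trajectory with a
    linearly decaying weight f(i) = 1 - i/n, i = 1..n (Eq. (eq:c4)):
    the first point moves by almost the full offset v_n, the last point
    is left in place so the prediction rejoins the prototype smoothly."""
    T_seg = np.asarray(T_seg, dtype=float)
    n = len(T_seg)
    weights = 1.0 - np.arange(1, n + 1) / n
    return T_seg + weights[:, None] * np.asarray(v_n, dtype=float)
```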
\section{Introduction}
\label{section:intro}
\textsc{Homomorphism Extension} asks whether a group homomorphism from a subgroup can be extended to a homomorphism from the entire group. We consider the case that the groups are represented as permutation groups. The complexity of this natural problem within NP is unresolved.
\subsection{Connection to list-decoding homomorphism codes}
Our study is partly motivated by our recent work on local list-decoding homomorphism codes from alternating groups~\cite{homcodes}. For groups $G$ and $H$, the set
of {$G\to H$} (affine) homomorphisms can be viewed as a code. The study of list-decoding such codes originates with the celebrated paper by Goldreich and Levin~\cite{GL89} and has more recently been championed by Madhu Sudan and his coauthors~\cite{GKS06, DGKS08, GS14}. While this body of work pertains to groups that are ``close to abelian'' (abelian, nilpotent, some classes of solvable groups), in~\cite{homcodes} we began the study of the case when the group $G$ is not solvable. As a test case, we have studied the alternating groups and plan to study other classes of simple groups.
For homomorphism codes, the ``code distance'' corresponds to the maximum agreement $\Lambda$ between two homomorphisms. The list-decoding efforts described in Babai, Black, and Wuu \cite{homcodes} only guarantee returning $M\rightharpoonup H$ partial homomorphisms, defined on subgroups $M\le G$ of order $\abs{M} >\Lambda\abs{G}$. In the case of solvable groups (all previously studied cases fall in this category), maximum agreement sets are subgroups of smallest index\footnote{ Strictly speaking, this statement requires the ``irrelevant kernel'' to be trivial. The irrelevant kernel is the intersection of the kernels of all $G \to H$ homomorphisms, cf.~\cite[Section 4]{homcodes}. The $\{$solvable$\to$nilpotent$\}$ case appears in~\cite{G15}.}, so $G$ is the only subgroup of $G$ of order greater than $\Lambda$. This is not the case, however, for groups in general; in particular, it fails for the alternating groups $A_n$ where a maximum agreement set can be a subgroup of index $\binom{n}{2}$ (but not smaller). To solve the list-decoding problem, we need to extend these partial homomorphisms to full homomorphisms, i.e., we need to solve the Homomorphism Extension Search Problem for subgroups $M$ of order $\abs{M} > \Lambda\abs{G}$ (and therefore, of small index). Indeed, a special case of the main result here (Theorem~\ref{thm:main}) is used, and is credited to this paper, in Babai, Black, and Wuu~\cite{homcodes} to complete the proof of one of the main results of that paper. For a more detailed explanation, see part (b) of Section~\ref{section:appendix-motivation}, especially Remark~\ref{rmk:roadblock-to-LD}.
\subsection{Definition and results}
\label{section:intro-results}
We define the \textsc{Homomorphism Extension} problem. Denote by $\Hom(G,H)$ the set of homomorphisms from group $G$ to group $H$.
\begin{definition}
\label{def:homext-problem}
\textsc{Homomorphism Extension} \\
\indent \textbf{Instance:} Groups $G$ and $H$ and a partial map $\gamma: G \rightharpoonup H$. \\
\indent \textbf{Solution:} A homomorphism $\varphi \in \Hom(G,H)$ that extends $\gamma$, i.e., $\varphi|_M = \gamma$.
\end{definition}
The \textsc{Homomorphism Extension} Decision Problem ($\textsc{HomExt}$) asks whether a solution exists.
\begin{remark}
Our algorithmic results for $\textsc{HomExt}$ solve the \textsc{Homomorphism Extension} Search Problem as well, which asks whether a solution exists and, if so, to find one.
\end{remark}
The problems as stated above are not fully specified. Representation choices of the groups $G$ and $H$ affect the complexity of the problem. For example, $G$ may be given as a permutation group, a black-box group, or a group given by a generator-relator presentation.
For the rest of this paper we restrict the problem to permutation groups.
\begin{definition}
$\textsc{HomExtPerm}$ is the version of {\textsc{HomExt}} where the groups are permutation groups \emph{given by a list of generators.} $\textsc{HomExtSym}$ is the subcase of $\textsc{HomExtPerm}$ where the codomain $H$ is a symmetric group.
\end{definition}
Membership in permutation groups is polynomial-time testable. Our standard reference for permutation group algorithms is~\cite{SeressPGA}. Section~\ref{section:pga} summarizes the results we need, including material not easily found in the literature. Our standard reference for permutation group theory is~\cite{DM}.
Partial maps are represented by listing their domain and values on the domain.
Homomorphisms in $\Hom(G,H)$ are represented by their values on a set of generators of $G$.
For a partial map $\gamma: G \rightharpoonup H$, we denote by $M_\gamma := \langle \dom \gamma \rangle$ the subgroup of $G$ generated by the domain $\dom \gamma$ of $\gamma$.
\begin{remark}
\label{rmk:promise}
Whether the input map $\gamma: G \rightharpoonup H$ extends as a homomorphism in $\Hom(M_\gamma, H)$ is a polynomial-time testable condition in permutation groups. See Section~\ref{section:promise}.
\end{remark}
Since extending to $M_\gamma \leq G$ is easy, this paper is primarily concerned with extending a homomorphism from a subgroup to a homomorphism from the whole group.
\begin{assumption}[Given partial map defines a homomorphism on subgroup]
Unless otherwise stated, in our analysis we assume without loss of generality that the input partial map $\gamma: G \rightharpoonup H$ extends to a homomorphism in $\Hom(M_\gamma, H)$. This is possible due to Remark~\ref{rmk:promise}. In this case, the homomorphism $\psi$ is represented by $\gamma$, as a partial map on generators of $M_\gamma$. We will think of $\psi$ as the input to $\textsc{HomExt}$. We often drop the subscript on $M_\gamma$.
\end{assumption}
Since a minimal set of generators of a permutation group of degree $n$ has no more than $2n$ elements~\cite{Bab_subgroupchain} and any set of generators can be reduced to a minimal set in polynomial time, we shall assume our permutation groups are always given by at most $2n$ generators.
We note that the decision problem {\textsc{HomExtPerm}} is in NP.
\begin{open}
Is $\textsc{HomExtPerm}$ NP-complete?
\end{open}
This paper considers the important subcase of the problem when $H=S_m$, the symmetric group of degree $m$. A homomorphism $G\to S_m$ is called a \textbf{group action} (more specifically, a \textbf{$G$-action}) on the set $[m]=\{1,\dots,m\}$.
The {\textsc{HomExtSym}} problem seems nontrivial even
for bounded $G$ (and variable $m$).
\begin{theorem}[Bounded domain] \label{thm:bounded}
If $G$ has bounded order, then $\textsc{HomExtSym}$ can be solved in polynomial time.
\end{theorem}
The degree of the polynomial in the polynomial running time is exponential in $\log^2\abs{G}$.
\begin{open}
Can $\textsc{HomExtSym}$ be replaced by $\textsc{HomExtPerm}$ in Theorem~\ref{thm:bounded}, i.e., can $H = S_m$ be replaced by $H \leq S_m$?
\end{open}
Our main result, the one used in our work on homomorphism codes, concerns variable $n$ and is stated next.
In the results below, ``polynomial time'' refers to $\poly(n,m)$ time.
\begin{theorem}[Main] \label{thm:main}
If $G=A_n$ (alternating group of degree $n$), $\textsc{HomExtSym}$ can be solved in polynomial time under the following assumptions.
\begin{itemize}
\item[(i)] The index of $M$ in $A_n$ is bounded by $\poly(n)$, and
\item[(ii)] $m < 2^{n-1}/\sqrt{n}$, where $H = S_m$.
\end{itemize}
\end{theorem}
Under the assumptions above, counting the number of extensions can also be done in polynomial time.
\begin{theorem}[Main, counting]
Under the assumption of Theorem~\ref{thm:main}, the number of solutions to the instance of $\textsc{HomExtSym}$ can be found in polynomial time.
\end{theorem}
Note the rather generous upper bound on $m$ in item (ii). Whether an instance of $\textsc{HomExtSym}$ satisfies the conditions of Theorem~\ref{thm:main} can be verified in $\poly(n)$ time (see Section~\ref{section:promise}).
We state a polynomial-time result for very large $m$ (Theorem~\ref{thm:large-m}, of which Theorem~\ref{thm:bounded} is a special case).
\begin{theorem}[Large range] \label{thm:large-m}
If $G \leq S_n$ and $m > 2^{1.7^{n^2}}$, then $\textsc{HomExtSym}$ can be solved in polynomial time.
\end{theorem}
\subsection{Methods}
We prove the results stated above by reducing $\textsc{HomExtSym}$ to a polynomial-time solvable case of a multi-dimensional oracle version of Subset Sum with Repetition (SSR).
SSR asks to represent a target number as a non-negative \textit{integral linear combination} of given numbers, whereas the classical Subset Sum problem asks for a \textit{0-1 combination}. SSR is NP-complete by an easy reduction from Subset Sum.
We call the multi-dimensional version of the SSR problem $\textsc{MultiSSR}$.
The reduction from homomorphism extension to $\textsc{MultiSSR}$ is the main technical contribution of the paper (Theorem~\ref{thm:main-reduction} below).
The reduction is polynomial time; therefore, the complexity of our solutions to $\textsc{HomExtSym}$ will be the complexity of the special cases of $\textsc{MultiSSR}$ that arise. The principal case of $\textsc{MultiSSR}$ is one we call ``triangular''; this case can be solved in polynomial time. The difficulty is aggravated by the exponentially large input to $\textsc{MultiSSR}$, to which we assume oracle access (the $\textsc{OrMultiSSR}$ Problem). Implementing oracle calls will amount to solving certain problems in computational group theory, addressed in Section 8 of the Appendix.
The $\textsc{MultiSSR}$ problem takes as input a multiset $\mathsf{K}$ in a universe $\mathcal{U}$ (viewed as a non-negative integral function $\mathsf{K}: \mathcal{U} \to \mathbb{Z}^{\geq 0}$) and a set $\mathfrak{F}$ of multisets in $\mathcal{U}$. $\textsc{MultiSSR}$ asks if $\mathsf{K}$ is a nonnegative integral linear combination of multisets in $\mathfrak{F}$ (see Section~\ref{section:reduction}). The set $\mathfrak{F}$ will be too large to be explicitly given (it will contain one member per conjugacy class of subgroups of $G$). Instead, we contend with oracle access to the set $\mathfrak{F}$. For a more formal presentation of $\textsc{MultiSSR}$ and $\textsc{OrMultiSSR}$, see Section~\ref{sec:subset-sum}.
From every instance $\psi$ of $\textsc{HomExtSym}$ describing a group action, we will construct an $\textsc{OrMultiSSR}$ instance $\OMS_\psi$ (see Section~\ref{section:reduction}). In the next result, we describe the merits of this translation.
Two permutation actions $\varphi_1, \varphi_2: G \to S_m$ are \defn{permutation equivalent} if there exists $h \in S_m$ such that $\varphi_1(g) = h^{-1} \varphi_2(g) h$ for all $g \in G$.
\begin{theorem}[Translation]
\label{thm:main-reduction}
For every instance $\psi \in \Hom(M, S_m)$, the instance $\OMS_\psi$ of $\textsc{OrMultiSSR}$ satisfies the following.
\begin{enumerate}[(a)]
\item $\OMS_\psi$ can be efficiently computed from $\psi$. For what this means, see Section~\ref{section:reduction}.
\item There exists a bijection between the set of non-empty classes of equivalent (under permutation equivalence) extensions $\varphi : G \to S_m$ and the set of solutions to $\OMS_\psi$.
\item Given a solution to $\OMS_\psi$, a representative $\varphi$ of the equivalence class of extensions can be computed efficiently.
\end{enumerate}
\end{theorem}
Here, ``efficiently'' means in $\poly(n,m)$-time. The universe $\mathcal{U}$ of $\OMS_\psi$ will be the conjugacy classes of subgroups of $M$. The set $\mathfrak{F}$ will be indexed by the conjugacy classes of subgroups of $G$. These sets can be exponentially large. For $G= S_n$, $\abs{\mathfrak{F}} = \exp(\widetilde{\Theta}(n^2))$ by \cite{PyberSubgroupsSn}.\\
Now, it suffices to efficiently find solutions to instances $\OMS_\psi$ of $\textsc{OrMultiSSR}$ arising under this reduction.
Theorem~\ref{thm:large-m} (large $m$) follows from Theorem~\ref{thm:main-reduction} and a result of Lenstra \cite{Lenstra1983} (cf.\ Kannan~\cite{KannanIP}) which shows that $\textsc{Integer Linear Programming}$ is fixed-parameter tractable. As $\textsc{MultiSSR}$ can naturally be formulated as an $\abs{\mathcal{U}} \times \abs{\mathfrak{F}}$ integer linear program, we conclude polynomial-time solvability due to the assumed magnitude of $m$ (see Appendix, Section 7).
For Theorem~\ref{thm:main}, we will show that $\OMS_\psi$ instances satisfy the conditions of $\textsc{TriOrMultiSSR}$, a ``triangular'' version of $\textsc{OrMultiSSR}$ (see Section~\ref{section:uniqueness}).
\begin{theorem}[Reduction to $\textsc{TriOrMultiSSR}$]
\label{thm:main-methods-trireduction}
If an instance $\psi$ of $\textsc{HomExtSym}$ satisfies the conditions of Theorem~\ref{thm:main}, the instance $\OMS_\psi$ of $\textsc{OrMultiSSR}$ is also an instance of $\textsc{TriOrMultiSSR}$. The oracle queries can be answered in polynomial time.
\end{theorem}
Despite only being given oracle access, $\textsc{TriOrMultiSSR}$ turns out to be polynomial-time solvable (see Section~\ref{section:triormultiss}, or the Appendix, Section 5).
\begin{proposition}
\label{prop:main-methods-trisearch}
$\textsc{TriOrMultiSSR}$ can be solved in polynomial time.
\end{proposition}
\begin{proposition}
\label{prop:main-methods-triunique}
If a solution to $\textsc{TriOrMultiSSR}$ exists, then it is unique.
\end{proposition}
Polynomial time for an $\textsc{OrMultiSSR}$ problem means polynomial in the length of $\mathsf{K}$ and the length of the representation of elements of $\mathfrak{F}$. For details on representing multisets, see Section~\ref{section:prelim-multisets}.
\subsection{Efficient enumeration}
The methods discussed give a more general result than claimed. Instead of solving the Search Problem, we can in fact efficiently solve the Threshold-$k$ Enumeration Problem for $\textsc{HomExtSym}$. This problem asks to find the set of extensions, unless there are more than $k$, in which case output $k$ of them.
This question is also motivated by the list-decoding problem; specifically, Threshold-2 Enumeration can be used to prune the output list. See Section~\ref{section:appendix-motivation} for details.
We remark that solving Threshold-2 Enumeration already requires all relevant ideas in solving Threshold-$k$ Enumeration.
\begin{definition}[Threshold-$k$]
\label{def:threshold-k-problem}
For a set $\mathcal{S}$ and an integer $k\ge 0$, the
\textbf{Threshold-$k$ Enumeration Problem} asks
to return the following pair $(\val,\out)$ of outputs.\\
\indent \indent If $\abs{\mathcal{S}} \le k$, return $\val=\abs{\mathcal{S}}$
and $\out =\mathcal{S}$.\\
\indent \indent Else,
return $\val =$ ``more'' and $\out =$
a list of $k$ distinct elements of $\mathcal{S}$.
\end{definition}
Note that the Threshold-$0$ Enumeration Problem
is simply the \textbf{decision problem} ``is $\mathcal{S}$ non-empty?''
while the Threshold-$1$ Enumeration Problem
includes the \textbf{search problem} (if not empty, find an
element of $\mathcal{S}$).
We say that an algorithm \textbf{efficiently} solves the
Threshold-$k$ Enumeration Problem if the cost divided by $k$
is considered ``modest'' (in our case, polynomial in
the input length).
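The reduction of Threshold-$k$ Enumeration to plain enumeration is mechanical: pull at most $k+1$ elements from an enumerator and compare against $k$. The following Python sketch (our own illustration, not part of the formal development) makes this precise.

```python
from itertools import islice

def threshold_k(elements, k):
    """Threshold-k Enumeration for the set produced by the iterable
    `elements` (assumed duplicate-free): report the exact count if it
    is at most k, else report "more" together with k witnesses."""
    first = list(islice(iter(elements), k + 1))  # pull at most k+1 elements
    if len(first) <= k:
        return len(first), first       # val = |S|, out = S
    return "more", first[:k]           # val = "more", out = k distinct elements
```

Note that only $k+1$ elements are ever requested, so the total cost is at most $k+1$ times the marginal cost of the enumerator.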
Our work on list-decoding homomorphism codes uses solutions to the \emph{Threshold-$2$ Enumeration Problem}
for the set of extensions of a given homomorphism.
With potential future applications in mind, we discuss the
Threshold-$k$ Enumeration Problem for variable~$k$.
\begin{definition}
\textsc{Homomorphism Extension Threshold-$k$ Enumeration} ({\textsc{HomExtThreshold}}) is the Threshold-$k$ Enumeration Problem for the set of solutions to $\textsc{Homomorphism Extension}$ ($\HExt_G$ defined below).
\end{definition}
\begin{notation}[$\HExt_G(\psi)$]
We will denote by $\HExt_G(\psi)$ the set of solutions to an instance $\psi$ of $\textsc{HomExt}$.
$$\HExt_G(\psi) := \{ \varphi \in \Hom(G,H) : \varphi|_M = \psi \}.$$
\end{notation}
The following condition strengthens the notion of efficient solutions to threshold enumeration.
\begin{definition}[Efficient enumeration]
We say that a set $\mathcal{S}$ can be \defn{efficiently enumerated}
if an algorithm lists the elements of $\mathcal{S}$ at modest marginal cost.
\end{definition}
The marginal cost of the $i$-th element is the time spent between producing the $(i-1)$-st and the $i$-th elements. In this paper, ``modest marginal cost'' will mean $\poly(n,m)$ marginal cost, where $n$ and $m$ denote the degrees of the permutation groups $G$ and $H$, respectively.
\begin{observation}
If a set $\mathcal{S}$ can be efficiently enumerated then the
Threshold Enumeration Problem can be solved efficiently.
\end{observation}
In particular, the decision and search problems
can be solved efficiently. The following theorems are the strengthened versions of the ones stated in Section~\ref{section:intro-results}.
\begin{theorem}[Bounded domain, enumeration] \label{thm:bounded-enum}
If $G$ has bounded order, then the set $\HExt_G(\psi)$ can be efficiently enumerated.
\end{theorem}
\begin{theorem}[Main, enumeration] \label{thm:main-enum}
If $G=A_n$ (alternating group of degree $n$), then the set $\HExt_G(\psi)$ can be efficiently enumerated
under the following assumptions:
\begin{itemize}
\item[(i)] the index of $M$ in $A_n$ is bounded by poly$(n)$, and
\item[(ii)] $m < 2^{n-1}/\sqrt{n}$, where $H = S_m$.
\end{itemize}
\end{theorem}
\begin{theorem}[Large range, enumeration] \label{thm:large-enum}
If $G \leq S_n$ and $m > 2^{1.7^{n^2}}$, then the $\textsc{HomExtSym}$ Threshold-$k$ Enumeration Problem can be solved in $\poly(n, m, k)$ time.
\end{theorem}
\subsection{Enumeration methods}
\label{section:intro-enum-methods}
Recall that Theorem~\ref{thm:main-reduction} gave a bijection between classes of equivalent extensions and solutions to the $\textsc{OrMultiSSR}$ instance. It remains to solve the Threshold-$k$ Enumeration Problem for $\textsc{OrMultiSSR}$, then to efficiently enumerate extensions within one equivalence class, given a representative of that class.
\paragraph{Solutions of Threshold-$k$ for $\textsc{OrMultiSSR}$\\ }
Under the assumptions of Theorem~\ref{thm:main}, the instance $\OMS_\psi$ of $\textsc{OrMultiSSR}$ (obtained by reduction from the $\textsc{HomExt}$ instance $\psi$) will also be an instance of $\textsc{TriOrMultiSSR}$. Since solutions are unique if they exist (Proposition~\ref{prop:main-methods-triunique}), solving the Search Problem also solves the Threshold-$k$ Enumeration Problem for $\textsc{TriOrMultiSSR}$. But the Search Problem can be solved in polynomial time by Proposition~\ref{prop:main-methods-trisearch}.
In the case of Theorem~\ref{thm:bounded}, $\OMS_\psi$ is an integer linear program with a bounded number of variables and constraints (corresponding to classes of subgroups of $G$) and the solutions can therefore be efficiently enumerated.
For Theorem~\ref{thm:large-enum} (thus also implying Theorem~\ref{thm:bounded-enum}), the Threshold-$k$ Enumeration Problem for $\OMS_\psi$ can be solved in polynomial time by viewing $\OMS_\psi$ as an integer linear program. See Section~\ref{section:large}.
\paragraph{Efficient enumeration within one equivalence class\\}
We now wish to efficiently enumerate extensions within each class of equivalent extensions, given a representative.
Two permutation actions $\varphi_1, \varphi_2: G \rightarrow S_m$ are \defn{equivalent (permutation) actions} if there exists $\lambda \in S_m$ such that $\varphi_1(g) = \lambda^{-1} \varphi_2(g) \lambda$ for all $g \in G$. We say that two homomorphisms $\varphi_1, \varphi_2 : G \rightarrow S_m$ are \defn{equivalent extensions} of the homomorphism $\varphi: M \rightarrow S_m$ if they (1) both extend $\varphi$ and (2) are equivalent permutation actions.
Enumerating extensions within one equivalence class reduces to the following:
Given subgroups $K\le L\le S_m$, efficiently enumerate
coset representatives for $K$ in $L$.
This problem was solved by Blaha and Luks in the 1980s (unpublished, cf.~\cite{BlahaLuks}). For completeness
we include the solution based on communication by Gene
Luks~\cite{luks} (see Section~\ref{section:luks}).
We explain the connection between finding coset representatives and the classes of equivalent extensions of $\psi$. Consider an extension $\varphi_0 \in \Hom(G,S_m)$ of $\psi \in \Hom(M,S_m)$. For any $\lambda \in S_m$, the homomorphism $\varphi_\lambda$, defined by $\varphi_\lambda(g) = \lambda^{-1} \varphi_0(g) \lambda$ for all $g \in G$, is an equivalent permutation action. First, $\varphi_\lambda = \varphi_0$ if and only if $\lambda \in C_{S_m}(\varphi_0(G))$ (the centralizer in $S_m$ of the $\varphi_0$-image of $G$, i.e., the set of elements of $S_m$ that commute with all elements of $\varphi_0(G)$). The centralizer of a group in the symmetric group can be found in polynomial time (see Section~\ref{section:appendix-centralizer}). Also, $\varphi_\lambda$ extends $\psi$ (and thus is an extension equivalent to $\varphi_0$) if and only if $\lambda \in C_{S_m}(\psi(M))$.
So, finding coset representatives of $K=C_{S_m}(\varphi_0(G))$ in $L=C_{S_m}(\psi(M))$ suffices for finding all equivalent extensions. Applying the Blaha--Luks result yields the following corollary (see Section~\ref{section:within eq class}).
\begin{corollary}
Let $M \leq G \leq S_n$ and $\psi: M \rightarrow S_m$. Suppose that $\varphi_0: G \rightarrow S_m$ extends $\psi$. Then, the class of extensions equivalent to $\varphi_0$ can be efficiently enumerated.
\end{corollary}
\subsection{Acknowledgments}
I would like to thank Madhu Sudan for introducing me to the subject of list-decoding homomorphism codes. I would also like to thank Gene Luks for communicating the content of Section~\ref{section:luks}. Last but not least, I would like to thank my adviser Laci Babai for his generous support, ideas, and endless advice.
\section{Preliminaries}
\label{section:prelim}
We write $\mathbb{N}$ for $\mathbb{N} = \{0, 1, 2, \ldots \}$.
\subsection{Multiset notation}
\label{section:prelim-multisets}
In this paper, we will consider both sets and multisets. All sets and multisets are finite.
We typographically distinguish multisets using ``mathsf'' font, e.g., $\mathsf{F}$, $\mathsf{K}$ and $\mathsf{L}$ denote multisets. A multiset within a universe $\mathcal{U}$ is formally a function $\mathsf{L}: \mathcal{U} \rightarrow \mathbb{N}$. For a member $u\in \mathcal{U}$ of the universe, the \defn{multiplicity} of $u$ in $\mathsf{L}$ is $\mathsf{L}(u)$. We say that $u$ is an \defn{element} of $\mathsf{L}$ ($u \in \mathsf{L}$) if $\mathsf{L}(u) > 0$, i.e., if $u$ has non-zero multiplicity in $\mathsf{L}$. The set of elements of $\mathsf{L}$ is called the \defn{support} of $\mathsf{L}$, denoted by $\supp(\mathsf{L}) \subseteq \mathcal{U}$.
We algorithmically represent a multiset $\mathsf{L}: \mathcal{U} \rightarrow \mathbb{N}$ by listing its support $\supp(\mathsf{L}) \subseteq \mathcal{U}$ and the values on the support, so the description is of length $\abs{\supp(\mathsf{L})} \cdot \log (\norm{\mathsf{L}}_\infty) \cdot \ell$, where $\ell$ is the description length for elements of $\mathsf{L}$. The \defn{size} of $\mathsf{L}$ is $\norm{\mathsf{L}}_1$, the 1-norm of the function $\mathsf{L}: \mathcal{U} \to \mathbb{N}$.
Let $\mathsf{L}_1, \mathsf{L}_2: \mathcal{U} \to \mathbb{N}$ be two multisets in the same universe. Their \defn{sum} $\mathsf{L}_1 + \mathsf{L}_2$ is the multiset obtained by adding the multiplicities. We say that $\mathsf{L}_1$ is a \defn{submultiset} of $\mathsf{L}_2$ if $\mathsf{L}_1(u) \leq \mathsf{L}_2(u)$ for all $u$.
Sets will continue to be denoted by standard font and defined via one set of braces $\{ \, \}$. Often it is convenient to list the elements of a multiset $\mathsf{L}$ as $\{\{L_1, \ldots, L_r\} \} = \{\{ L_i : i = 1 \ldots r\} \}$ using double braces, where $L_i \in \mathcal{U}$ and each $u \in \mathcal{U}$ occurs $\mathsf{L}(u)$ times in this list. The length $r$ of this list is the size of $\mathsf{L}$. In our notation, $\{A, A\} = \{A\}$ but $\{\{A,A\}\} \neq \{\{A\}\}$.
A disjoint union of two sets is denoted by $\Omega = \Omega_1 \dotcup \Omega_2$.
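For concreteness, this representation matches Python's \texttt{collections.Counter} (our illustration, not part of the formal setup): keys form the support, values are the multiplicities, and sum and submultiset tests are immediate.

```python
from collections import Counter

# A multiset L: U -> N, stored by its support and multiplicities.
L1 = Counter({"A": 2, "B": 1})    # the multiset {{A, A, B}}
L2 = Counter({"A": 1, "C": 3})

size = sum(L1.values())           # ||L1||_1, the size of L1
total = L1 + L2                   # multiset sum: multiplicities add

def is_submultiset(S, T):
    """S is a submultiset of T iff S(u) <= T(u) for every u."""
    return all(T[u] >= c for u, c in S.items())
```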
\subsection{Group theory notation}
\label{section:prelim-grouptheory}
Let $G$ be a group. We write $M \leq G$ to express that $M$ is a subgroup; we write $N\trianglelefteq G$ to denote that $N$ is a normal subgroup.
For $M \leq G$ and $a \in G$, we call the coset $Ma$ of $M$ a \defn{subcoset} of $G$. We define the \defn{index} of a subcoset $Ma$ in $G$ by $\ind{G}{Ma} := \ind{G}{M}$. For a subset $S$ of a group $G$, we denote by $\langle S \rangle$ the subgroup generated by $S$.
We introduce nonstandard notation that will be used in the rest of the paper.
\begin{notation}[$\Sub(G)$]
We denote the set of subgroups of $G$ by $\Sub(G) := \{ L: L \leq G \}$.
\end{notation}
For $L \leq G$, denote by $L \backslash G : = \{ Lg : g \in G \}$ the (right) \defn{coset space} (set of right cosets). For $L, M \leq G$, denote by $L \backslash G / M := \{ L g M : g\in G\}$ the set of \defn{double cosets}. Double cosets partition $G$, generally into cells of unequal size. They are important in defining the $\textsc{MultiSSR}$ instance from an instance of {\textsc{HomExtSym}} (see Section~\ref{section:comb}).
Two subgroups $L_1, L_2 \leq G$ are \defn{conjugate in $G$} if there exists $g \in G$ such that $L_1= g^{-1} L_2 g$. The equivalence relation of conjugacy in $G$ is denoted by $L_1 \sim_G L_2$, or $L_1 \sim L_2$ if $G$ is understood.
\begin{notation}
\label{notation:conj-class}
For a subgroup $L \leq G$, the \defn{conjugacy class of $L$ in $G$} is denoted by $[L]_G$ (or $[L]$ if $G$ is understood), so $[L]_G := \{ L_1 \leq G :L_1 \sim_G L\}$.
\end{notation}
\begin{notation}[$\Conj(G)$]
We denote the set of conjugacy classes of $G$ by $\Conj(G) := \{ [L] : L \leq G\}$.
\end{notation}
Using the introduced notation, if $L\leq G$, then $L \in \Sub(G)$, $L \in [L] \in \Conj(G)$ and $[L] \subset \Sub(G)$.
\subsection{Permutation groups}
\label{section:prelim perm groups}
In this section we fix terminology for groups and, in particular, permutation groups. A useful structure theorem for large subgroups of the alternating groups is presented as well. For reference see \cite{DM}.
For a set $\Omega$, $\Sym(\Omega)$ denotes the symmetric group on $\Omega$ and $\Alt(\Omega)$ denotes the alternating group on $\Omega$. Often, we write $S_n$ (or $A_n$) for the symmetric (or alternating) group on $[n] = \{1, \ldots, n\}$.
\begin{definition}[Group actions]
A \defn{(permutation) action} of a group $G$ on a set $\Omega$ is given by a homomorphism $\psi: G \rightarrow \Sym(\Omega)$, often denoted by $G \overset{\psi}{\actson} \Omega$ or $G \actson \Omega$.
\end{definition}
Let $G \leq \Sym(\Omega)$, $g \in G$, $\omega \in \Omega$, and $\Delta \subset \Omega$.
The image of $\omega$ under $g$ is denoted by $\omega^g$. This notation extends to sets. So, $\Delta^g := \{ \omega^g: \omega \in \Delta \}$ and $\Delta^G := \{\omega^g : \omega \in \Delta, g\in G\}$. The subset $\Delta \subset \Omega$ is \defn{$G$-invariant} if $\Delta^G = \Delta$. The \defn{orbit $\omega^G$ of $\omega$ under action by $G$} is given by $\omega^G := \{ \omega^g : g \in G\}$. The orbits of $G$ are $G$-invariant and they partition $\Omega$. All $G$-invariant sets are formed by unions of orbits.
The \defn{point stabilizer $G_\omega$ of $\omega$} is the subgroup of $G$ fixing $\omega$, given by $G_\omega = \{ g \in G \mid \omega^g = \omega \}$. The \defn{pointwise stabilizer $G_{(\Delta)}$ of $\Delta$} is the subgroup fixing every point in $\Delta$, given by $G_{(\Delta)} = \bigcap_{\omega \in \Delta} G_\omega$. The \defn{setwise stabilizer $G_\Delta$ of $\Delta$} is given by $G_\Delta = \{ g \in G \mid \Delta^g = \Delta \}$.
Let $\Delta \subseteq \Omega$ be $G$-invariant. For $g \in G$, denote by $g^\Delta$ the restriction of the action of $g$ to $\Delta$.
The group $G^\Delta = \{ g^\Delta : g \in G \} \leq \Sym(\Delta) $ is the image of the permutation representation of $G$ in its action on $\Delta$. We see that $G^\Delta \cong G/G_{(\Delta)}$.
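As a small illustration of these notions, the orbit $\omega^G$ can be computed from a generating set by a standard closure computation. A Python sketch (our own; permutations are encoded as tuples of images on $\{0,\ldots,n-1\}$):

```python
def orbit(gens, w):
    """Orbit w^G of the point w under the group generated by `gens`;
    each generator is a permutation of {0, ..., n-1} given as a tuple
    of images.  Standard closure computation, O(|orbit| * |gens|)."""
    orb, frontier = {w}, [w]
    while frontier:
        x = frontier.pop()
        for g in gens:
            y = g[x]              # the image x^g
            if y not in orb:
                orb.add(y)
                frontier.append(y)
    return orb
```

For example, the $4$-cycle $(0\,1\,2\,3)$ generates a group with the single orbit $\{0,1,2,3\}$, while a transposition fixes every point it does not move.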
We state a result that goes back to Jordan. Its modern formulation by Liebeck (see \cite[Theorem 5.2A]{DM}) describes the small index subgroups of $A_n$. This theorem is used to categorize group actions by $A_n$ in Theorem \ref{thm:main}.
\begin{theorem}[Jordan--Liebeck]
\label{thm:JordanLiebeck}
Let $n \geq 10$ and let $r$ be an integer with $1 \leq r < n/2$.
Suppose that $K \leq A_n$ has index $\ind{A_n}{K} < \binom{n}{r}$. Then, for some $\Delta \subseteq [n]$ with $\lvert \Delta \rvert < r$, we have $(A_n)_{(\Delta)} \leq K \leq (A_n)_{\Delta}$.
\end{theorem}
\subsection{Equivalent extensions}
\label{section:prelim equivalence}
In this section we characterize equivalence of two group actions and, in particular, fix notation to describe equivalence.
\begin{definition}[Equivalent permutation actions]
Two permutation actions $G \curvearrowright \Omega$ and $G \curvearrowright \Gamma$ are \defn{equivalent} if there exists a bijection $\zeta: \Omega \rightarrow \Gamma$ such that $\zeta(\omega^g) = (\zeta(\omega))^g$ for all $g \in G$ and $\omega \in \Omega$.
\end{definition}
Note that two permutation actions $\psi_1, \psi_2:G \rightarrow S_m$ of $G$ on the same domain are equivalent if there exists $\zeta \in S_m$ such that $\psi_1(g) = \zeta^{-1}\psi_2(g) \zeta$ for all $g \in G$.
The Introduction defined two homomorphisms $\varphi_1, \varphi_2: G \rightarrow S_m$ as ``equivalent extensions'' of $\varphi:M \rightarrow S_m$ if they both extend $\varphi$ and if they are equivalent as actions. The following definition is equivalent to the one provided in the Introduction.
For groups $M \leq G$, the \defn{centralizer} of $M$ in $G$ is given by $C_G(M) = \{ g \in G: (\forall x \in M)(g x = x g) \}$.
\begin{definition}[Equivalent extensions]
Let $M \leq G$ and $\psi: M \rightarrow S_m$.
We say that $\varphi_1$ and $\varphi_2$ are \defn{equivalent extensions of $\psi$} if there exists $\zeta \in C_{S_m}(\psi(M))$ such that $\zeta^{-1} \varphi_2(g) \zeta = \varphi_1(g)$ for all $g \in G$.
\end{definition}
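To make the definition concrete, equivalence of two actions given on generators can be tested (very naively, by scanning all of $S_m$) as in the following Python sketch; the helper names are ours and the brute force is for illustration only.

```python
from itertools import permutations

def conj(lam, g):
    """The permutation lam^{-1} g lam, with permutations of
    {0, ..., m-1} encoded as tuples of images."""
    m = len(g)
    inv = [0] * m
    for i, x in enumerate(lam):
        inv[x] = i                     # inv = lam^{-1}
    return tuple(lam[g[inv[i]]] for i in range(m))

def equivalent_actions(phi1, phi2, m):
    """Do phi1, phi2 (dicts: generators of G -> permutations in S_m)
    satisfy phi1(g) = lam^{-1} phi2(g) lam for a single lam in S_m?
    Checking on generators suffices, since conjugation by a fixed lam
    is a homomorphism.  Brute force over S_m; illustration only."""
    for lam in permutations(range(m)):
        if all(phi1[g] == conj(lam, phi2[g]) for g in phi1):
            return True
    return False
```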
Next we consider the equivalence of transitive group actions, through their point stabilizers. A $G$-action on $\Omega$ is \defn{transitive} if $\omega^G = \Omega$ for all $\omega \in \Omega$, i.e., for every pair $\omega_1, \omega_2 \in \Omega$, there is a group element $g \in G$ satisfying $\omega_1^g = \omega_2$.
Lemma~\ref{lem_transgroupactions_equivalence} is Lemma 1.6A in \cite{DM}.
\begin{lemma}
\label{lem_transgroupactions_equivalence}
Suppose $G$ acts transitively on the sets $\Omega$ and $\Gamma$. Let $L$ be the stabilizer of a point in the first action. Then, the actions are equivalent if and only if $L$ is the stabilizer of some point in the second action.
\end{lemma}
Recall that we denote the conjugacy class of a subgroup $L\leq G$ by $[L]$, so $L$ is conjugate to $L_1$ if and only if $[L] = [L_1]$. We find that all point stabilizers are conjugate and, conversely, that every subgroup conjugate to a point stabilizer is itself a point stabilizer.
\begin{fact}
Let $L$ be a point stabilizer of a transitive $G$-action on $\Omega$. A subgroup $L_1$ is conjugate to $L$ ($ [L_1]=[L]$) if and only if $L_1$ is also the stabilizer of a point in $\Omega$.
\end{fact}
Every transitive $G$-action is equivalent to one of the natural actions of $G$ on coset spaces, the actions $\rho_L$ defined below.
\begin{example}[Natural actions on cosets]
\label{ex:coset-action}
For $L \leq G$, we denote by $\rho_L$ the natural action of $G$ on $L \backslash G$. More specifically, an element $g \in G$ acts on a coset $Lh \in L \backslash G$ as $(Lh)^g := L(hg)$.
\end{example}
We see that the equivalence class of a transitive action is determined by the conjugacy class of its point stabilizers.
\begin{corollary}
\label{cor:prelim-equiv-vs-conj}
Consider a transitive $G$-action $\varphi:G \rightarrow \Sym(\Omega)$. Let $L \leq G$.
The following are equivalent.
\begin{enumerate}[(1)]
\item $\varphi$ is equivalent to $\rho_L$.
\item $L$ is a point stabilizer of the $G$-action.
\item Some $L_1 \leq G$ satisfying $L_1 \sim L$ is a point stabilizer of the $G$-action.
\item $\varphi$ is equivalent to $\rho_{L_1}$ for some $L_1 \sim L$.
\end{enumerate}
\end{corollary}
Motivated by Corollary~\ref{cor:prelim-equiv-vs-conj}, we will define the notion of ``$(G,L)$-actions,'' which describe transitive $G$-actions up to equivalence. This definition will be generalized to intransitive actions (see Section~\ref{section:GL-actions}).
\subsection{Computation in permutation groups}
A permutation group $G \leq S_n$ is \defn{given} by a list of generators. We say that $G$ is \defn{known} if a list of generators of $G$ is known. Based on this representation, membership testing can be performed in polynomial time. In Appendices~\ref{section:pga} and~\ref{section:luks} we list the algorithmic facts about permutation groups used in this paper.
\section{Multi-dimensional subset sum with repetition}
\label{sec:subset-sum}
We consider the \textsc{Subset Sum Problem with Repetitions} (SSR). An instance is given by a set of positive integers and a ``target'' positive integer $s$. The question is ``can $s$ be represented as a non-negative integer combination\footnote{Notice that a non-negative integer combination of a set of integers is exactly the sum of a multiset in that set of integers. This question is asking for the existence of a multiset.} of the integers in the set?'' This problem is NP-complete by an easy reduction from the standard \textsc{Subset Sum} problem, which asks instead for a 0-1 combination.
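Despite NP-completeness, SSR is solvable in pseudo-polynomial time by the standard coin-change dynamic program; a Python sketch of the decision version (our illustration):

```python
def ssr_decision(values, s):
    """Subset Sum with Repetitions: can the target s be written as a
    non-negative integer combination of the given positive integers?
    Standard dynamic program, O(len(values) * s) time."""
    reachable = [False] * (s + 1)
    reachable[0] = True               # the empty combination sums to 0
    for t in range(1, s + 1):
        reachable[t] = any(v <= t and reachable[t - v] for v in values)
    return reachable[s]
```

For example, with values $\{4,6\}$ the target $10 = 4+6$ is reachable but $7$ is not.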
We define a multi-dimensional version (\textsc{MultiSSR}) below. It has its own associated Decision, Search, and Threshold-$k$ Enumeration (Definition~\ref{def:threshold-k-problem}) Problems.
\begin{definition}
\label{def:subsum-problem}
\textsc{Multi-dimensional Subset Sum with Repetition} (\textsc{MultiSSR}) \\
\indent \textbf{Instance:} Multiset $\mathsf{K}: \mathcal{U} \rightarrow \mathbb{N}$ and set $\mathfrak{F}$ of multisets in $\mathcal{U}$.\footnote{$\mathcal{U}$ is the underlying universe. Its entirety is not required in the input, but its size is the dimensionality of this problem. An element $\mathsf{F} \in \mathfrak{F}$ is a multiset $\mathsf{F}: \mathcal{U} \to \mathbb{N}$ in $\mathcal{U}$.} \\
\indent \textbf{Solution:} A multiset of $\mathfrak{F}$ summing to $\mathsf{K}$, i.e., a multiset $\mathsf{L}: \mathfrak{F} \rightarrow \mathbb{N}$ satisfying $\sum\limits_{\mathsf{F} \in \mathfrak{F}} \mathsf{L}(\mathsf{F}) \cdot \mathsf{F} = \mathsf{K}$.
\begin{notation}[$\SubSum(\mathsf{K}, \mathfrak{F})$]
\label{notation:SubSum}
We write $\SubSum$ for the set of solutions to an instance of $\textsc{MultiSSR}$, i.e.,
$$ \SubSum(\mathsf{K}, \mathfrak{F}) := \left\{ \mathsf{L}: \mathfrak{F} \rightarrow \mathbb{N} \; \bigg\vert \; \sum\limits_{\mathsf{F} \in \mathfrak{F}} \mathsf{L}(\mathsf{F}) \cdot \mathsf{F} = \mathsf{K} \right\}.$$
\end{notation}
The $\textsc{MultiSSR}$ Decision Problem asks whether a solution exists ($\SubSum$ is nonempty).
The $\textsc{MultiSSR}$ Search Problem asks whether a solution exists and, if so, find one.
The $\textsc{MultiSSR}$ Threshold-$k$ Enumeration Problem asks for the solution to the Threshold-$k$ Enumeration Problem for the set $\SubSum$.
\end{definition}
\begin{remark}[$\textsc{MultiSSR}$ as \textsc{Integer Program}]
Every instance of $\textsc{MultiSSR}$ can naturally be viewed as an instance of \textsc{Integer Linear Programming}, with $\abs{\mathcal{U}}$ constraints and $\abs{\mathfrak{F}}$ variables. The variables $\mathsf{L}(\mathsf{F})$ are the number of copies of each $\mathsf{F} \in \mathfrak{F}$ in the subset sum. The constraints correspond to checking that every element in $\mathcal{U}$ has the same multiplicities in $\mathsf{K}$ and $\sum \mathsf{L}(\mathsf{F})\cdot \mathsf{F}$.
\end{remark}
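For tiny instances, this integer program can be solved by exhaustive search over the (bounded) multiplicities; the following Python sketch (our own illustration, using \texttt{Counter} for multisets) enumerates all solutions $\mathsf{L}$.

```python
from collections import Counter
from itertools import product

def multissr_solutions(K, F):
    """All solutions (c_1, ..., c_r) with sum_i c_i * F_i = K, by
    exhaustive search.  K is a Counter over the universe; F is a list
    of nonempty Counters.  The multiplicity of F_i is bounded above by
    min over u in supp(F_i) of K(u) // F_i(u)."""
    bounds = [min(K[u] // f[u] for u in f) for f in F]
    sols = []
    for counts in product(*(range(b + 1) for b in bounds)):
        total = Counter()
        for c, f in zip(counts, F):
            for u, mult in f.items():
                total[u] += c * mult
        if +total == +Counter(K):     # compare ignoring zero entries
            sols.append(counts)
    return sols
```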
\subsection{Oracle MultiSSR}
\label{section:ormultiss}
In our application, the set $\mathfrak{F}$ and universe $\mathcal{U}$ will be prohibitively large to input explicitly. To address this, we define an oracle version of $\textsc{MultiSSR}$ called \textsc{Oracle Multi-dimensional Subset Sum with Repetitions} (\textsc{OrMultiSSR}). We will reduce a $\textsc{HomExtSym}$ instance $\psi$ to an {\textsc{OrMultiSSR}} instance denoted by $\OMS_\psi$, then show that the oracles can be answered efficiently.
We will find it convenient to introduce a bijection between $\mathfrak{F}$ and another set $\mathcal{V}$ of simpler objects, used to index $\mathfrak{F}$.\footnote{The index set $\mathcal{V}$ will be the conjugacy classes of subgroups of $G$, whereas $\mathfrak{F}$ will be a set of multisets of conjugacy classes of subgroups of $M$.} Access to $\mathfrak{F}$ is given by the oracle ``$\Foracle$,'' which on input $v \in \mathcal{V}$ returns the element $\mathsf{F}_v$ of $\mathfrak{F}$ indexed by $v$. Elements of the universes $\mathcal{U}$ and $\mathcal{V}$ are encoded by strings in $\Sigma_1^{n_1}$ and $\Sigma_2^{n_2}$, respectively, and the alphabets $\Sigma_i$ and encoding lengths $n_i$ constitute the input.
We allow non-unique\footnote{In our application, $\Sigma_1 = S_n$ and $\Sigma_2 = S_m$. The universes $\mathcal{U}$ and $\mathcal{V}$ will be conjugacy classes of large subgroups of $S_n$ and $S_m$, respectively. Each conjugacy class is non-uniquely encoded by generators of a subgroup in the class.} encodings of $\mathcal{U}$ and $\mathcal{V}$, but provide ``equality'' oracles.\footnote{We will not need to test membership of a string from $\Sigma^n$ in the universe.} To handle non-unique encodings of $\mathcal{V}$ in $\Sigma_2^{n_2}$, we assume that $\Foracle$ returns the same multiset on $\mathcal{U}$ (though possibly via different encodings) when handed different encodings of the same $v \in \mathcal{V}$. Writing $\mathsf{K}: \mathcal{U} \rightarrow \mathbb{N}$ implies that $\mathsf{K}$ is represented as a multiset on $\Sigma_1^{n_1}$ but with the promise that all strings in its support are encodings of elements of $\mathcal{U}$.
\begin{definition}
\textsc{Oracle Multi-dimensional Subset Sum with Repetition} ({\textsc{OrMultiSSR}}) \\
\indent \textbf{Instance:} \\
\indent \indent \underline{Explicit input} \\
\indent \indent \indent Alphabets $\Sigma_1$ and $\Sigma_2$; \\
\indent \indent \indent Numbers $n_1, n_2 \in \mathbb{N}$, in unary; and \\
\indent \indent \indent Multiset $\mathsf{K}: \mathcal{U} \rightarrow \mathbb{N}$, by listing the elements in its support and their multiplicities.\\
\indent \indent \underline{Oracles}\\
\indent \indent \indent $\equiv$ oracle for equality in $\mathcal{U}$ or $\mathcal{V}$, and \\
\indent \indent \indent $\Foracle$ oracle for the set $\mathfrak{F} = \{\mathsf{F}_v: \mathcal{U} \rightarrow \mathbb{N} \}_{v \in \mathcal{V}}$, indexed by $\mathcal{V}$.\\
\indent \textbf{Solution:} A sub-multiset of $\mathcal{V}$ that defines a sub-multiset of $\mathfrak{F}$ summing to $\mathsf{K}$, i.e., \\
\indent \indent a multiset $\mathsf{L} : \mathcal{V} \to \mathbb{N}$ satisfying $\sum\limits_{v \in \mathcal{V}} \mathsf{L}(v) \cdot \mathsf{F}_v = \mathsf{K} $.
\begin{notation}[$\SubSum(\mathsf{K}, \mathfrak{F})$]
Again, we write $\SubSum$ for the set of solutions to an instance of $\textsc{OrMultiSSR}$, though the indexing is slightly different.
$$ \SubSum(\mathsf{K}, \mathfrak{F}) := \left\{ \mathsf{L}: \mathcal{V} \rightarrow \mathbb{N} \; \bigg\vert \; \sum\limits_{v \in \mathcal{V}} \mathsf{L}(v) \cdot \mathsf{F}_v = \mathsf{K} \right\}. $$
\end{notation}
The length of the input is $\log \abs{\Sigma_1} + \log \abs{\Sigma_2} + n_1 + n_2 + \norm{\mathsf{K}}_0 \cdot \log \norm{\mathsf{K}}_\infty \cdot n_1 \log \abs{\Sigma_1}$.
\end{definition}
Due to non-unique encodings, checking whether a multiset $\mathsf{L}$ satisfies $\sum_{v \in \mathcal{V}} \mathsf{L}(v) \cdot \mathsf{F}_v = \mathsf{K} $ will actually require calling the $\equiv$ oracle, as the multisets on the left and right sides of the equation may be encoded differently.
\subsection{Triangular \textsc{MultiSSR}}
\label{section:triormultiss}
The Search Problem for $\textsc{OrMultiSSR}$ with an additional ``Triangular Condition'' (and oracles corresponding to this condition) can be solved in polynomial time. We call this problem $\textsc{TriOrMultiSSR}$.
This section defines $\textsc{TriOrMultiSSR}$. The next section will provide an algorithm that solves the $\textsc{TriOrMultiSSR}$ Search Problem in polynomial time, proving Proposition~\ref{prop:main-methods-trisearch}.
Under the conditions of Theorem~\ref{thm:main} ($G= A_n$, $M\leq G$ has polynomial index, and the codomain $S_m$ has exponentially bounded permutation domain size $m < 2^{n-1}/\sqrt{n}$), a $\textsc{HomExtSym}$ instance $\psi$ reduces to an instance $\OMS_\psi$ of $\textsc{OrMultiSSR}$ that satisfies the additional assumptions of $\textsc{TriOrMultiSSR}$. The additional oracles of $\textsc{TriOrMultiSSR}$ can be efficiently answered (see Section~\ref{section:uniqueness}).
\paragraph{Definition of $\textsc{TriOrMultiSSR}$\\}
The triangular condition roughly says that the matrix for the corresponding (prohibitively large) integer linear program is upper triangular.
Below we say that a relation $\preccurlyeq$ is a \defn{total preorder} if it is reflexive and transitive with no incomparable elements.\footnote{A total order also imposes antisymmetry, i.e., if $x \preccurlyeq y$ and $y \preccurlyeq x$ then $x = y$. That is the assumption we omit.}
\begin{definition}
\textsc{Triangular Oracle Multi-dimensional Subset Sum with Repetition} (\textsc{TriOrMultiSSR}) \\
\indent \textbf{Input, Set, Oracles, Output:} Same as $\textsc{OrMultiSSR}$.\\
\indent \textbf{Triangular Condition:} $\mathcal{U}$ has a total preorder $\preccurlyeq$.\\
\indent \indent For every $v \in \mathcal{V}$, the multiset $\mathsf{F}_v$ contains a unique $\preccurlyeq$-minimal element $\tau(v) \in \mathcal{U}$. \\
\indent \indent The map $\tau: \mathcal{V} \to \mathcal{U}$ is injective. \\
\indent \textbf{Additional Oracles:} \\
\indent \indent $\preccurlyeq$: compares two elements of $\mathcal{U}$, and \\
\indent \indent $\Trioracle : \mathcal{U} \to \mathcal{V} \cup \{\Error\}$ inverts $\tau$, i.e., on input $u \in \mathcal{U}$ it returns
\begin{equation}
\triangle(u) =
\begin{cases}
\text{the unique $v \in \mathcal{V}$ such that $\tau(v) = u$} & \text{ if such a $v$ exists} \\
\Error & \text{ if no such $v$ exists}.
\end{cases}
\end{equation}
\end{definition}
\paragraph{Integer program and uniqueness of solutions\\}
Uniqueness of solutions for $\textsc{TriOrMultiSSR}$ can be seen by looking at the integer linear program formulation, where variables correspond to $\mathcal{V}$ and constraints correspond to $\mathcal{U}$. The Triangular Condition implies that, for every variable ($v \in \mathcal{V}$), there exists a unique minimal constraint ($\tau(v) \in \mathcal{U}$) containing this variable. The ordering $\preccurlyeq$ on $\mathcal{U}$ gives an ordering $\preccurlyeq_\mathcal{V}$ on $\mathcal{V}$ by setting $v_1 \preccurlyeq_\mathcal{V} v_2$ when $\tau(v_1) \preccurlyeq \tau(v_2)$. Order the variables and constraints by $\preccurlyeq_\mathcal{V}$ and $\preccurlyeq$, respectively (break ties in $\preccurlyeq$ arbitrarily and have $\preccurlyeq_\mathcal{V}$ respect the tie-breaking of $\preccurlyeq$). The matrix for the corresponding linear program is upper triangular.
Hence, if the integer program has a solution, it is unique.
It trivially follows that solving the $\textsc{TriOrMultiSSR}$ Search Problem also solves the corresponding Threshold-$k$ Enumeration Problem.
\subsection{$\textsc{TriOrMultiSSR}$ Search Problem}
Algorithm~\ref{alg:MultiSS} (\textsc{TriOrMultiSSR}) below solves the $\textsc{TriOrMultiSSR}$ Search Problem in polynomial time (Proposition~\ref{prop:main-methods-trisearch}). Viewing the problem as a linear program, the algorithm essentially solves the upper triangular system of equations by row reduction, except that the dimensions are too big and only oracle access is provided.
In each iteration, {\textsc{TriOrMultiSSR}} finds one minimal element $u$ in $\supp(\mathsf{K})$. It removes the correct number $m$ of copies of $\mathsf{F}_{\triangle(u)}$ from $\mathsf{K}$, in order to remove all copies of $u$ from $\mathsf{K}$. If this operation fails, the algorithm returns `no solution.' Meanwhile, $\mathsf{L}(\triangle(u))$ is updated in each iteration to record the number of copies of $\mathsf{F}_{\triangle(u)}$ removed.
There are three reasons the operation may fail. (1) Removing all copies of $u$ from $\mathsf{K}$ may not be possible through removal of $\mathsf{F}_{\triangle(u)}$ (the number $m = \mathsf{K}(u) / \mathsf{F}_{\triangle(u)}(u)$ of copies is not an integer). (2) $\mathsf{K}$ may not contain $m$ copies of $\mathsf{F}_{\triangle(u)}$ (the operation $\mathsf{K} - m \cdot \mathsf{F}_{\triangle(u)}$ results in negative values). (3) $\triangle(u)$ returns $\Error$ ($u$ is not in the range of $\tau$).
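The elimination loop just described can be sketched in Python over explicit dictionaries, with the oracles modeled as ordinary functions; all names here are our own illustration, and the non-unique encodings handled by \textsc{Consolidate} are ignored.

```python
from collections import Counter

def tri_or_multissr(K, F, tau_inv, order):
    """Triangular elimination sketch.  K: Counter over U;  F: dict
    mapping an index v to the Counter F_v;  tau_inv(u) returns the v
    with minimal element tau(v) = u, or None (the triangle oracle);
    order(u) realizes the total preorder on U.  Returns the unique
    solution L, or None if no solution exists."""
    K = +Counter(K)                       # copy; keep positive entries only
    L = Counter()
    while K:
        u = min(K, key=order)             # a minimal element of supp(K)
        v = tau_inv(u)
        if v is None:                     # failure (3): u not in range of tau
            return None
        Fv = F[v]
        if K[u] % Fv[u] != 0:             # failure (1): m is not an integer
            return None
        m = K[u] // Fv[u]
        for x, mult in Fv.items():        # remove m copies of F_v from K
            K[x] -= m * mult
        if any(c < 0 for c in K.values()):
            return None                   # failure (2): negative multiplicities
        K = +K                            # drop entries that reached zero
        L[v] += m
    return L
```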
\paragraph{Subroutines\\}
$\minoracle(S)$: $\minoracle$ takes as input a subset $S \subset \Sigma_1^{n_1}$ and outputs one minimal element under $\preccurlyeq$. Using the $\preccurlyeq$ oracle, a $\minoracle$ call can be executed in $\poly(\abs{S})$-time.
$\textsc{Remove}(\mathsf{K}, \mathsf{F}, m)$: $\textsc{Remove}$ takes as input multisets $\mathsf{F}, \mathsf{K}: \Sigma_1^{n_1} \to \mathbb{N}$ and a nonnegative integer $m$. It returns $\mathsf{K}$ after removing $m$ copies of $\mathsf{F}$, if possible, while accounting for non-unique encodings. Otherwise, it returns `no solution.' Pseudocode for \textsc{Remove} is provided below.
$\textsc{Consolidate}(\mathsf{K}_1, \ldots, \mathsf{K}_n)$: $\textsc{Consolidate}$ adjusts for non-unique encodings of $\mathcal{U} \rightarrow \mathbb{N}$ multisets as $\Sigma_1^{n_1} \rightarrow \mathbb{N}$ multisets. Given as input encoded multisets $\mathsf{K}_1, \ldots, \mathsf{K}_n: \Sigma_1^{n_1} \to \mathbb{N}$, $\textsc{Consolidate}$ outputs multisets $\widetilde{\mathsf{K}}_1, \ldots, \widetilde{\mathsf{K}}_n: \Sigma_1^{n_1} \to \mathbb{N}$ that encode the same multisets of $\mathcal{U}$, but uniquely. In other words, the $\widetilde{\mathsf{K}}_i$ satisfy $\widetilde{\mathsf{K}}_i = \mathsf{K}_i$ as multisets of $\mathcal{U}$, with their combined support $\dot{\bigcup}_i\supp( \widetilde{\mathsf{K}}_i) \subset \Sigma_1^{n_1}$ containing at most one encoding per element of $\mathcal{U}$.
\paragraph{Algorithm\\}
Recall that we denote the empty multiset by $\emptymultiset$. We give pseudocode for the $\textsc{Remove}$ subroutine, followed by the main algorithm.
\vspace{.2in}
\begin{algorithmic}
\Procedure{Remove}{$\mathsf{K}, \mathsf{F}, m$}
\State $\textsc{Consolidate}(\mathsf{K}, \mathsf{F})$ \pcom{Remove duplicate encodings within $\supp(\mathsf{K}) \cup \supp(\mathsf{F})$.}
\State $\mathsf{K} \gets \mathsf{K} - m \cdot \mathsf{F}$ \pcom{Execute as $\mathsf{K}, \mathsf{F}: \Sigma_1^{n_1} \rightarrow \mathbb{Z}$, assuming integer range}
\If{ $\mathsf{K}$ has negative values}
\State \Return `no solution'
\Else\, \Return $\mathsf{K}$
\EndIf
\EndProcedure
\end{algorithmic}
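For concreteness, here is a dict-based sketch of the \textsc{Remove} subroutine. It assumes the multisets have already been consolidated to one encoding per element, so plain key equality stands in for the $\equiv$ oracle.

```python
def remove(K, F, m):
    """Return K - m*F on dict-multisets, or None for 'no solution'.

    Assumes K and F are consolidated: one encoding per element,
    so dictionary keys can be compared directly."""
    K = dict(K)                        # do not mutate the caller's multiset
    for u, c in F.items():
        r = K.get(u, 0) - m * c
        if r < 0:
            return None                # K does not contain m copies of F
        if r == 0:
            K.pop(u, None)             # keep the support tight
        else:
            K[u] = r
    return K
```

Elements whose multiplicity drops to zero are deleted, so the support of the returned multiset stays accurate for the main loop.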
\begin{algorithm}[H]
\caption{Triangular Oracle MultiSS}
\label{alg:MultiSS}
\begin{algorithmic}[1]
\Procedure{TriOrMultiSS}{$\Sigma_1$, $n_1$, $\Sigma_2$, $n_2$, $\mathsf{K}$, $\equiv$, $\preccurlyeq$, $\Foracle$, $\Trioracle$}
\State Initialize $\mathsf{L} = \emptymultiset$ \pcom{$\mathsf{L}$ is the empty multiset of $\Sigma_2^{n_2}$}
\State $\textsc{Consolidate}(\mathsf{K})$. \label{line:MultiSS preprocess K} \pcom{Remove duplicate encodings within $\supp(\mathsf{K})$}
\While{$\mathsf{K} \neq \emptymultiset$ }\label{line:MultiSS big while}
\State $u \gets \minoracle(\supp(\mathsf{K}))$
\pcom{$u$ is a minimal element of $\mathsf{K}$}
\If{$\triangle(u) = \Error$}
\State \Return `no solution'
\Else
\State $\mathsf{F} \gets \Foracle_{\Trioracle(u)}$
\pcom{$\mathsf{F}$ is $\mathsf{F}_v$, where $\tau(v) = u$ by Triangular Condition}
\State $m \gets\frac{\mathsf{K}(u)}{\mathsf{F}(u)}$
\pcom{$m$ is number of copies of $\mathsf{F}$ to remove from $\mathsf{K}$.}
\If{ ($m \notin \mathbb{N}$) or ($\textsc{Remove}(\mathsf{K}, \mathsf{F}, m) = $ `no solution') }
\State \Return `no solution'
\Else
\State $\mathsf{L}(\Trioracle(u)) \gets \mathsf{L}(\Trioracle(u)) + m$ \label{line:MultiSS update L}
\label{line:MultiSS update K}
\State $\mathsf{K} \gets \textsc{Remove}(\mathsf{K}, \mathsf{F}, m)$
\EndIf
\EndIf
\EndWhile
\State \Return $\mathsf{L}$
\EndProcedure
\end{algorithmic}
\end{algorithm}
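As a sanity check on Algorithm~\ref{alg:MultiSS}, the following self-contained sketch runs its main loop on dict-multisets, with the oracles supplied as plain Python functions (the names \texttt{rank}, \texttt{tri}, \texttt{F\_of} are ours, for a toy instance; encodings are assumed unique, so no consolidation is needed).

```python
def tri_or_multi_ss(K, rank, tri, F_of):
    """Main loop of the triangular search: K, L are dict-multisets.
    rank orders elements (minimal = smallest rank), tri is the triangle
    oracle (None plays the role of Error), F_of returns the multiset F_v."""
    K = {u: c for u, c in K.items() if c}
    L = {}
    while K:
        u = min(K, key=rank)            # a minimal element of supp(K)
        v = tri(u)                      # unique v with tau(v) = u, if any
        if v is None:
            return None                 # 'no solution'
        F = F_of(v)
        if K[u] % F[u] != 0:
            return None                 # m is not an integer
        m = K[u] // F[u]
        for w, c in F.items():          # K <- K - m*F, failing on negatives
            r = K.get(w, 0) - m * c
            if r < 0:
                return None
            if r == 0:
                K.pop(w, None)
            else:
                K[w] = r
        L[v] = L.get(v, 0) + m          # record copies of F_v removed
    return L
```

On the toy instance below, $\mathsf{F}_{v_2}$ has minimal element $u_2$ and $\mathsf{F}_{v_1}$ has minimal element $u_1$, matching the Triangular Condition.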
\paragraph{Analysis\\}
The pre-processing step of Line~\ref{line:MultiSS preprocess K} can be computed in time $\abs{\supp(\mathsf{K})}^2$, by pairwise comparisons. The \textbf{while} loop of Line~\ref{line:MultiSS big while} is executed at most $\abs{\supp(\mathsf{K})}$ times, since each iteration removes at least one element $u$ from $\supp(\mathsf{K})$.
The $\textsc{Consolidate}$ call in $\textsc{TriOrMultiSSR}$ returns $\widetilde{\mathsf{K}}:\Sigma_1^{n_1} \rightarrow \mathbb{N}$, a different encoding of the multiset $\mathsf{K}$ of $\mathcal{U}$, such that all elements of $\supp(\widetilde{\mathsf{K}})$ are uniquely encoded. This requires ${\abs{\supp(\mathsf{K})} \choose 2}$ pairwise comparisons, that is, fewer than $\abs{\supp(\mathsf{K})}^2$ calls to the $\equiv$ oracle. Similarly, the $\textsc{Consolidate}$ call in $\textsc{Remove}$ can be achieved in fewer than $\abs{\supp(\mathsf{K}) \cup \supp(\mathsf{F})}^2$ calls to the $\equiv$ oracle.
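A dict-based sketch of $\textsc{Consolidate}$ for one multiset, given an assumed pairwise equivalence oracle (the toy oracle below treats strings as equivalent up to case):

```python
def consolidate(K, equiv):
    """Merge duplicate encodings in the dict-multiset K, using at most
    |supp(K)|^2 calls to the pairwise equivalence oracle `equiv`."""
    reps, out = [], {}                 # chosen representative encodings
    for u, c in K.items():
        for r in reps:
            if equiv(u, r):            # u re-encodes an element seen before
                out[r] += c
                break
        else:
            reps.append(u)             # u is a new representative
            out[u] = c
    return out
```

Each new encoding is compared against the representatives found so far, giving the quadratic bound on oracle calls stated above.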
\section{Reduction of $\textsc{HomExtSym}$ to $\textsc{OrMultiSSR}$}
\label{section:comb}
We define the reduction from {\textsc{HomExtSym}} to $\textsc{OrMultiSSR}$, then prove the three parts of Theorem~\ref{thm:main-reduction}: the polynomial-time efficiency of the reduction, the bijection between classes of equivalent extensions in $\HExt(\psi)$ and the set $\SubSum(\OMS_\psi)$ of solutions to $\OMS_\psi$, and the efficiency of defining an extension homomorphism $\varphi \in \HExt(\psi)$ from a solution $\mathsf{L} \in \SubSum(\OMS_\psi)$. \\
For notational convenience, Section~\ref{section:GL-actions} defines ``$(G, \mathsf{L})$-actions'' which describe permutation actions up to equivalence.
Towards proving Theorem~\ref{thm:main-reduction} (a), Section~\ref{section:reduction} presents the reduction from a $\textsc{HomExtSym}$ instance $\psi$ to the $\textsc{OrMultiSSR}$ instance $\OMS_\psi$. We define the instance $\OMS_\psi$ and show that its oracles can be answered in $\poly(n,m)$-time.
Section~\ref{section:comb intransitive} proves the bijection claimed in Theorem~\ref{thm:main-reduction} (b), assuming the transitive case. The transitive case is proved in Sections~\ref{section:comb decomposing} and~\ref{section:comb extending}.
Section~\ref{section:def-ext} proves Theorem~\ref{thm:main-reduction} (c) by providing the algorithmic details of defining $\varphi\in \HExt(\psi)$ given a solution in $\SubSum(\OMS_\psi)$.
\subsection{$(G,\mathsf{L})$-actions, equivalence classes of $G$-actions}
\label{section:GL-actions}
We introduce the terminology ``$(G,\mathsf{L})$-actions'' (or ``$(G, L)$-actions'' for transitive actions), which describes group actions up to permutation equivalence. Here $\mathsf{L}: \Sub(G) \to \mathbb{N}$ denotes a multiset of subgroups of $G$, describing the point stabilizers of the action. We make this more precise.
Recall that we write $[L]_G = [L]$ to denote the conjugacy class of the subgroup $L$ in $G$.
\begin{definition}[$(G,L)$-action]
Let $\varphi:G \rightarrow \Sym(\Omega)$ be a transitive action. Let $L \leq G$. We say that $\varphi$ is a \defn{$(G,L)$-action} if $\varphi$ is equivalent to $\rho_L$, the natural action on right cosets of $L$ (Example~\ref{ex:coset-action}). We say that $\varphi$ is a \defn{$(G, [L])$-action} if $\varphi$ is a $(G,L)$-action.
\end{definition}
By Corollary~\ref{cor:prelim-equiv-vs-conj}, a $G$-action is a $(G,L)$-action if and only if $L$ is a point stabilizer of the action. Moreover, a $(G,L)$-action is a $(G,L_1)$-action if and only if $[L] = [L_1]$. So, we can speak of $(G,[L])$-actions and make no distinction between $(G,[L])$-actions and $(G,L)$-actions.
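A toy illustration of the natural action $\rho_L$ (our own small example, not part of the reduction): $G = S_3$ acting on the three right cosets of $L = \langle(0\,1)\rangle$, with permutations written as tuples.

```python
from itertools import permutations

def compose(p, q):                 # apply p, then q
    return tuple(q[i] for i in p)

G = [p for p in permutations(range(3))]   # S_3 on {0, 1, 2}
L = [(0, 1, 2), (1, 0, 2)]                # the subgroup generated by (0 1)

def coset(g):                      # the right coset Lg, as a frozenset
    return frozenset(compose(l, g) for l in L)

points = {coset(g) for g in G}     # the domain L\G; |L\G| = |G : L| = 3

def act(c, g):                     # the natural action rho_L: (Lh)^g = L(hg)
    return frozenset(compose(h, g) for h in c)
```

Each $g \in G$ permutes the three cosets, and the stabilizer of the point $L \cdot e$ is $L$ itself, so this is a transitive $(G,L)$-action.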
We now introduce notation to describe equivalence between intransitive actions.
\begin{definition}[$(G,\mathsf{L})$-action]
Let $\varphi: G \to \Sym(\Omega)$ be a group action.
Let $\mathsf{L}: \Sub(G) \to \mathbb{N}$ be a multiset listed as $\mathsf{L} = \{\{ L_i \leq G \}\}_{i=1}^d$.
We say the action of $G$ on $\Omega$ is a \defn{$(G, \mathsf{L})$-action} if the orbits in $\Omega$ of the action can be labeled $\Omega = \Omega_1 \dotcup \cdots \dotcup \Omega_d$ so that $G$ acts on $\Omega_i$ as a $(G,L_i)$-action for all $1 \leq i \leq d$.\footnote{The multiset $\mathsf{L}:\Sub(G) \rightarrow \mathbb{N}$ contains one point stabilizer per orbit of the $G$-action. Viewing $\mathsf{L}$ as a multiset is essential. For example, $\mathsf{L} = \{\{G\}\}$ describes the trivial action of $G$ on one point, whereas $ \mathsf{L} = \{\{G,G\}\}$ describes the trivial action of $G$ on two points.}
\end{definition}
Again, the equivalence class of the $G$-action is determined by the multiset $\mathsf{L}$ up to conjugation of its elements. We introduce notation describing conjugate multisets.
\begin{notation}
\label{notation:conj-multiset}
Let $\mathsf{L} = \{ \{ L_1, \ldots, L_k\}\}$ be a multiset of subgroups of $G$. We denote by $[\mathsf{L}]_G = \{ \{ [L_1]_G, \ldots, [L_k]_G\}\}$ the multiset of conjugacy classes for the subgroups of $\mathsf{L}$.
\end{notation}
In other words, for a multiset $\mathsf{L}: \Sub(G) \to \mathbb{N}$, denote by $[\mathsf{L}]_G : \Conj(G) \to \mathbb{N}$ the multiset found by replacing every element $L \in \mathsf{L}$ by $[L]_G$. Multiplicities of subgroup conjugacy classes $[L]$ in the multiset $[\mathsf{L}]$ satisfy $[\mathsf{L}]([L]) = \sum_{L \in [L]} \mathsf{L}(L)$. We may write $[L]$ for $[L]_G$ if $G$ is understood.
\begin{definition}[Conjugate multisets]
We say that two multisets $\mathsf{L}_1, \mathsf{L}_2 : \Sub(G) \rightarrow \mathbb{N}$ are \defn{conjugate} if $[\mathsf{L}_1] = [\mathsf{L}_2]$. In other words, there exists a bijection $\pi: \mathsf{L}_1 \rightarrow \mathsf{L}_2$ such that $\pi(L) \sim_G L$ for all $L \in \mathsf{L}_1$.\footnote{This definition does not require conjugacy of all pairs simultaneously via the one element of $G$.}
\end{definition}
Conjugacy of multisets describes group actions up to equivalence, as we see in the next statement, which follows from the definitions and Corollary~\ref{cor:prelim-equiv-vs-conj}.
\begin{corollary}
\label{cor:similar-multiset-same-action}
Let $\mathsf{L}_1, \mathsf{L}_2: \Sub(G) \rightarrow \mathbb{N}$. The following are equivalent.
\begin{itemize}
\item $\mathsf{L}_1$ and $\mathsf{L}_2$ are conjugate, or $[\mathsf{L}_1] = [\mathsf{L}_2]$.
\item A $(G, \mathsf{L}_1)$-action is permutation equivalent to a $(G, \mathsf{L}_2)$-action.
\item A $(G, \mathsf{L}_1)$-action is also a $(G, \mathsf{L}_2)$-action.
\end{itemize}
\end{corollary}
So, we can speak of $(G,[\mathsf{L}])$-actions and make no distinction between $(G,[\mathsf{L}])$-action and $(G,\mathsf{L})$-actions.
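A brute-force sketch (toy-sized, over $S_3$; the paper works with generator encodings and oracle calls instead) of checking whether two multisets of subgroups are conjugate, by comparing canonical conjugacy-class objects with multiplicity:

```python
from collections import Counter
from itertools import permutations

def compose(p, q):                 # apply p, then q
    return tuple(q[i] for i in p)

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

G = [p for p in permutations(range(3))]   # S_3, enumerated in full

def conj_class(L):                 # [L]_G as a canonical, hashable object
    return frozenset(frozenset(compose(compose(inv(g), l), g) for l in L)
                     for g in G)

def conjugate_multisets(L1, L2):   # [L1]_G == [L2]_G as multisets of classes
    return Counter(map(conj_class, L1)) == Counter(map(conj_class, L2))
```

Comparing `Counter`s makes multiplicities matter, exactly as in the definition: $\{\{G, G\}\}$ is not conjugate to $\{\{G\}\}$.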
\subsection{Reduction}
\label{section:reduction}
In this section, we present the $\poly(n,m)$-time reduction from $\textsc{HomExtSym}$ to $\textsc{OrMultiSSR}$.
\begin{remark}[Meaning of ``reduction'']
As usual, our reduction will compute the explicit inputs to $\textsc{OrMultiSSR}$ from a $\textsc{HomExtSym}$ instance in $\poly(n,m)$ time. However, to account for the oracles in $\textsc{OrMultiSSR}$, we also provide answers to its oracles in $\poly(n,m)$-time.
\end{remark}
Recall that $\Sub(G)$ denotes the set of subgroups of $G$ and $\Conj(G)$ denotes the set of conjugacy classes of subgroups of $G$. Denote by $\SubLeq(G)$ the set of subgroups of $G$ with index bounded by $m$. Denote by $\ConjLeq(G)$ the set of conjugacy classes of subgroups of $G$ with index bounded by $m$.
\paragraph{Construction of $\OMS_\psi$}
We define $\mathcal{U}, \mathcal{V}, [\mathsf{K}]$ and encodings $\Sigma_1^{n_1}, \Sigma_2^{n_2}$ of the $\textsc{OrMultiSSR}$ instance $\OMS_\psi$.\\
$\mathcal{U}$: $\ConjLeq(M)$.
$\mathcal{V}$: $\ConjLeq(G)$.
Encoding of $\mathcal{U}$: words of length $n_1 = 2n$ over alphabet $\Sigma_1 = M$. A conjugacy class in $\mathcal{U}$ of subgroups is encoded by a representative subgroup in $\SubLeq(M)$, which is then encoded by a list of at most $2 n$ generators.
Encoding of $\mathcal{V}$: Likewise, with $\Sigma_2 = G$ and $n_2 = 2n$.
$[\mathsf{K}]$: Let $\mathsf{K} : \SubLeq(M) \rightarrow \mathbb{N}$ be a multiset containing one point stabilizer per orbit of the action $\psi : M \rightarrow S_m$. So, $[\mathsf{K}]: \ConjLeq(M) \rightarrow \mathbb{N}$ is a multiset of conjugacy classes, as in Notation~\ref{notation:conj-multiset}.
\paragraph{Notational issues.}
Using $[\mathsf{K}]$ versus $\mathsf{K}$ reflects the non-unique encoding of $\mathcal{U} = \ConjLeq(M)$ by $\SubLeq(M)$ and of $\mathcal{V} = \ConjLeq(G)$ by $\SubLeq(G)$, adhering to Notations~\ref{notation:conj-class} and~\ref{notation:conj-multiset}. A conjugacy class $[K] \in \mathcal{U}$ will be encoded by a representative $K \in \SubLeq(M)$. A multiset $[\mathsf{K}]: \mathcal{U} \to \mathbb{N}$ will be encoded by $\mathsf{K}: \SubLeq(M) \to \mathbb{N}$.
\paragraph{Calculating $[\mathsf{K}]$\\}
Calculating $[\mathsf{K}]: \mathcal{U} \to \mathbb{N}$ from $\psi: M \to S_m$: Decompose $[m] = \Omega_1 \dotcup \ldots \dotcup \Omega_s$ into its $M$-orbits under the action described by $\psi$. Choose one element $x_i \in \Omega_i$ per orbit.\footnote{The choice of $x_i$ will not affect the correctness of the reduction.} Then, calculate the multiset $\mathsf{K} := \{ \{ M_{x_i} : i = 1 \ldots s \} \}$ by finding the point stabilizer of each chosen element. So, calculating $\mathsf{K}$ can be accomplished in $\poly(n)$-time by Proposition~\ref{prop:pga-basic}.
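The orbit-and-stabilizer computation can be sketched by brute force on a toy action (enumerating the whole group, which is fine at this size; the $\poly(n)$ bound instead uses polynomial-time permutation-group algorithms).

```python
def close(gens, n):
    """Generate the permutation group <gens> on n points by closure."""
    G = {tuple(range(n))}
    frontier = list(G)
    while frontier:
        new = []
        for g in frontier:
            for s in gens:
                h = tuple(s[i] for i in g)     # apply g, then s
                if h not in G:
                    G.add(h)
                    new.append(h)
        frontier = new
    return G

# Toy M-action: M = <(0 1), (2 3)> acting on 4 points.
gens = [(1, 0, 2, 3), (0, 1, 3, 2)]
M = close(gens, 4)

orbits, seen = [], set()
for x in range(4):
    if x not in seen:
        orb = {g[x] for g in M}                # the M-orbit of x
        orbits.append(orb)
        seen |= orb

# K: one point stabilizer per orbit (brute force over all of M)
K = [frozenset(g for g in M if g[x] == x) for x in (min(o) for o in orbits)]
```

Here the orbits are $\{0,1\}$ and $\{2,3\}$, each with a point stabilizer of index $2$ in $M$.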
\paragraph{Answering $\equiv$ oracle\\}
The $\equiv$ oracle: given two subgroups in $\SubLeq(M)$, check their conjugacy. This can be accomplished in $\poly(n,m)$-time by Proposition~\ref{prop:pga-subgroup}.
\paragraph{Answering $\Foracle$ oracle. \\}
The set $\mathfrak{F}$ is indexed by $\mathcal{V} = \ConjLeq(G)$. $\Foracle$ takes as input $[L] \in \ConjLeq(G)$ (represented by a subgroup $L \in \SubLeq(G)$) and returns $[\mathsf{F}_L]:\ConjLeq(M) \to \mathbb{N}$ (represented by $\mathsf{F}_L: \SubLeq(M) \to \mathbb{N}$), defined below. The multiset $\mathsf{F}_L : \SubLeq(M) \rightarrow \mathbb{N}$ is defined so that $(G,L)$-actions induce $(M, \mathsf{F}_L)$-actions.
\begin{definition}[$\mathsf{F}_L(\bm{\sigma})$]
\label{def:FsubL}
Let $\bm{\sigma} = (\sigma_1, \ldots, \sigma_d)$ be a list of double coset representatives for $L \backslash G /M$. We define the multiset $\mathsf{F}^M_L(\bm{\sigma}) : \Sub(M) \rightarrow \mathbb{N}$ by
\begin{equation*}
\mathsf{F}^M_L(\bm{\sigma}) = \mathsf{F}_L := \{\{ \sigma_i^{-1} L \sigma_i \cap M: i = 1 \ldots d \}\}.
\end{equation*}
\end{definition}
In the context of extending an $M$-action $\psi:M \to S_m$ to a $G$-action, $M$ is understood, so we drop the superscript and write $\mathsf{F}_L$.
$\Foracle$ is well-defined. First, the choice $\bm{\sigma}$ of double coset representatives does not affect the conjugacy class of $\mathsf{F}_L^M(\bm{\sigma})$
(see Remark~\ref{rmk:Foracle-well-defined}).
Moreover, if $[L]_G = [L_1]_G$, then $[\mathsf{F}_L]_M = [\mathsf{F}_{L_1}]_M$.
Section~\ref{section:comb decomposing} further discusses and proves these claims about the properties of $\mathsf{F}_L$.
$\Foracle$ can be answered in $\poly(n,m)$-time by Proposition~\ref{prop:pga-double-cosets}.
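Definition~\ref{def:FsubL} is easy to compute by brute force on a toy example (our own: $G = S_3$, $L = \langle(0\,1)\rangle$, $M = A_3$; there is a single double coset, so $\mathsf{F}_L$ consists of one trivial subgroup and $M$ acts regularly on the three cosets).

```python
from itertools import permutations

def compose(p, q):                 # apply p, then q
    return tuple(q[i] for i in p)

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

G = [p for p in permutations(range(3))]        # S_3
L = [(0, 1, 2), (1, 0, 2)]                     # <(0 1)>
M = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]          # A_3

# Greedy sweep for double coset representatives of L\G/M.
reps, covered = [], set()
for g in G:
    if g not in covered:
        reps.append(g)
        covered |= {compose(compose(l, g), m) for l in L for m in M}

# F_L = {{ sigma^{-1} L sigma ∩ M : sigma a double coset representative }}
F_L = [frozenset(compose(compose(inv(s), l), s) for l in L) & frozenset(M)
       for s in reps]
```

One double coset and a trivial intersection $\sigma^{-1} L \sigma \cap M$ match the picture of the next subsections: the induced $M$-action on $L \backslash G$ has one orbit with trivial point stabilizer.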
\subsection{Combinatorial condition for extensions}
\label{section:comb intransitive}
We are now equipped to state the central technical result. It relates $M$-actions to extension $G$-actions by describing how $M$-orbits may be grouped to form $G$-orbits.
First, we address the case of transitive extensions.
As in Definition~\ref{def:FsubL}, $\mathsf{F}_L:\Sub(M) \to \mathbb{N}$ denotes the multiset returned by the oracle $\Foracle$ on input $L \in \Sub(G)$. Since we assume the extension $G$-action is transitive, the multiset $\mathsf{F}_L$ describes exactly the $M$-orbits that must be collected to form one $(G,L)$-orbit.
\begin{lemma}[Characterization of transitive extensions]
\label{lemma:comb-trans}
Let $M, L \leq G$ and $m \in \mathbb{N}$. Let $\psi : M \to S_m$ be an $M$-action. Under these circumstances, $\psi$ extends to a $(G,L)$-action if and only if $\psi$ is an $(M,\mathsf{F}_L)$-action.
\end{lemma}
The forward and backward directions are proved as Corollary~\ref{cor:comb-forward-direction} and Proposition~\ref{prop:gluing} in the next two sections.
\begin{remark}
To rephrase Lemma~\ref{lemma:comb-trans}, an $(M, \mathsf{K})$-action extends to a transitive $(G,L)$-action if and only if $[\mathsf{K}] = [\mathsf{F}_L]$ (see Corollary~\ref{cor:similar-multiset-same-action}).
\end{remark}
The following result on intransitive actions is a corollary to Lemma~\ref{lemma:comb-trans}.
\begin{theorem}[Key technical lemma: characterization of $\textsc{HomExtSym}$ with codomain $S_m$]
\label{thm:HE comb main}
Let $M \leq G$ and $m \in \mathbb{N}$. Let $\psi: M \to S_m$ be an $M$-action. Let $[\mathsf{L}]:\Conj(G) \to \mathbb{N}$. Let $[\mathsf{K}] : \Conj(M) \to \mathbb{N}$ describe the equivalence class of $\psi$, so $\psi$ is an $(M, \mathsf{K})$-action. Under these circumstances, $\psi$ extends to a $(G, [\mathsf{L}])$-action if and only if $[\mathsf{K}]$ is an $[\mathsf{L}]$-linear combination of elements in $\mathfrak{F}$, i.e.,
\begin{equation}
\label{eqn:HE comb char}
[\mathsf{K}] = \sum_{L \in \mathsf{L}} [\mathsf{F}_L]= \sum_{[L] \in \ConjLeq(G)} \mathsf{L}([L]) [\mathsf{F}_L].
\end{equation}
\end{theorem}
We have found that an $(M, \mathsf{K})$-action extends exactly when $[\mathsf{K}]$ is a Subset Sum with Repetition of $\mathfrak{F} = \{[\mathsf{F}_L]\}$. Compare Equation~\eqref{eqn:HE comb char} to the definition of $\SubSum(\OMS_\psi)$ (see Notation~\ref{notation:SubSum} and the reduction of Section~\ref{section:reduction}). We have found the following.
\begin{corollary}
Let $M \leq G$ and $m \in \mathbb{N}$. Let $\psi: M \to S_m$ be an $(M, [\mathsf{K}])$-action, where $[\mathsf{K}]: \Conj(M) \to \mathbb{N}$. Under these circumstances, $\psi$ extends to a $G$-action if and only if $\SubSum(\OMS_\psi)$ is nonempty.
\end{corollary}
So, $\HExt(\psi)$ is nonempty if and only if $\SubSum(\OMS_\psi)$ is nonempty.
\begin{remark}
We have found something even stronger. The multisets $[\mathsf{L}]$ satisfying Equation~\eqref{eqn:HE comb char} are exactly the elements in $\SubSum(\OMS_\psi)$. A multiset $[\mathsf{L}]: \Conj(G) \rightarrow \mathbb{N}$ satisfies Equation~\eqref{eqn:HE comb char} if and only if $\HExt(\psi)$ contains a $(G,\mathsf{L})$-action extending $\psi$. This notation identifies all equivalent extensions, so we have found a bijection between the solutions in $\SubSum(\OMS_\psi)$ and classes of equivalent extensions in $\HExt(\psi)$, as promised by Theorem~\ref{thm:main-reduction} (b).
\end{remark}
\subsection{$(G,L)$-actions induce $(M, \mathsf{F}_L)$-actions}
\label{section:comb decomposing}
Let $M \leq G$. This section describes the $M$-action found by restricting a (transitive) $G$-action. If $\psi: G \rightarrow \Sym(\Omega)$ describes a $G$-action on $\Omega$, we will call the $M$-action on $\Omega$ found by restriction of $\psi$ to $M$ the \defn{$M$-action induced by $\psi$}, denoted by $\psi|_M$.
First, we identify the permutation domain $\Omega$ of a $(G,L)$-action with the right cosets $L \backslash G$. By definition of ``$(G,L)$-action,'' there exists a permutation equivalence of this action with $\rho_L$ (the natural action on cosets of $L$), i.e., there exists a bijection $\pi: \Omega \to L\backslash G$ respecting the $G$-action. This bijection $\pi$ identifies $\Omega$ with $L \backslash G$.
We now describe the behavior of the induced $M$-action on $L \backslash G$.
\begin{remark}
\label{remark:FsubL-doublecosets-vs-Morbits}
Let $M,L \leq G$. Consider the natural $M$-action on $L \backslash G$ (the $M$-action induced by the $G$-action $\rho_L$). The cosets $(Lg_1)$ and $(Lg_2)$ belong to the same $M$-orbit if and only if $Lg_1M = Lg_2 M$, i.e., if $g_1$ and $g_2$ belong to the same double coset of $L \backslash G /M$.
\end{remark}
\begin{lemma}
\label{lemma:FsubL-Mactions}
Let $g_0 \in G$. Let $M,L \leq G$. The action of $M$ on the orbit $(Lg_0)^M$ of $Lg_0$ in $L \backslash G$ is equivalent to the action of $M$ on $K \backslash M$, where $K := g_0^{-1}Lg_0 \cap M$. The bijection is given by $La \leftrightarrow Kg_0^{-1} a$.
\end{lemma}
\begin{proof}
Both actions are transitive. Let $\zeta: (Lg_0)^M \rightarrow K \backslash M$ be defined by $\zeta(Lg) = Kg_0^{-1}g$ for all $g \in Lg_0 M$. For all $a \in M$,
\begin{equation*}
\zeta ( (Lg)^a) = \zeta (L(ga)) = K g_0^{-1} (ga) = (Kg_0^{-1} g)^a = \zeta(Lg)^a.
\end{equation*}
\end{proof}
From Remark~\ref{remark:FsubL-doublecosets-vs-Morbits} and Lemma~\ref{lemma:FsubL-Mactions}, we find that the (possibly intransitive) natural action of $M$ on $L \backslash G$ satisfies the following.
\begin{enumerate}[(1)]
\item The number of orbits is $\abs{L \backslash G / M}$, the number of double cosets of $L$ and $M$ in $G$.
\item The point stabilizer of $Lg \in L \backslash G$ under the $M$-action is $M_{Lg} = g^{-1} L g \cap M$.
\end{enumerate}
We restate the definition of $\mathsf{F}_L$, which we now see describes the $M$-action on $L \backslash G$.
\begin{definition}[$\mathsf{F}_L(\bm{\sigma})$]
Let $\bm{\sigma} = (\sigma_1, \ldots, \sigma_d)$ be a list of double coset representatives for $L \backslash G /M$. We define the multiset $\mathsf{F}^M_L(\bm{\sigma}) : \Sub(M) \rightarrow \mathbb{N}$ by
\begin{equation*}
\mathsf{F}^M_L(\bm{\sigma}) = \mathsf{F}_L := \{\{ \sigma_i^{-1} L \sigma_i \cap M: i = 1 \ldots d \}\}.
\end{equation*}
\end{definition}
If the subgroup $M$ is understood, we drop the superscript $M$.
From Remark~\ref{remark:FsubL-doublecosets-vs-Morbits} and Lemma~\ref{lemma:FsubL-Mactions}, we find that $(G,L)$-actions restrict to $(M, \mathsf{F}_L)$-actions.
\begin{corollary}
\label{cor:comb-forward-direction}
Let $M, L \leq G$. Let $\bm{\sigma} = (\sigma_1, \ldots, \sigma_d)$ be a set of double coset representatives of $L \backslash G /M$. If $G$ acts on $\Omega$ as a $(G,L)$-action, then the induced action of $M$ on $\Omega$ is an $(M, \mathsf{F}_L(\bm{\sigma}))$-action. In fact, the $M$-action induced by a $(G,[L])$-action is an $(M, [\mathsf{F}_L])$-action.
\end{corollary}
The last sentence of Corollary~\ref{cor:comb-forward-direction} follows from Corollary~\ref{cor:similar-multiset-same-action} and Lemma~\ref{lemma:Foracle-well-defined} below, which say that the choice $\bm{\sigma}$ of double coset representatives and the choice $L$ of conjugacy class representative make no difference to the conjugacy class $[\mathsf{F}_L(\bm{\sigma})]$. \\
We now show that $\Foracle$ is well defined.
\begin{remark}
\label{rmk:Foracle-well-defined}
For any two choices $\bm{\sigma}$ or $\bm{\sigma}'$ of double coset representatives of $L \backslash G /M$, we have that $[\mathsf{F}_L(\bm{\sigma})]_M = [\mathsf{F}_L(\bm{\sigma}')]_M$. So, we may reference $(M, \mathsf{F}_L)$-actions without specifying $\bm{\sigma}$.
\end{remark}
This holds because, if $\sigma_1$ and $\sigma_2$ are representatives of the same double coset, then $\sigma_1^{-1} L \sigma_1 \cap M$ and $\sigma_2^{-1} L \sigma_2 \cap M$ are conjugate in $M$.
In fact, only the conjugacy class of $L$ matters in determining the conjugacy class of $\mathsf{F}_L$. In particular, the $\Foracle$ oracle is well-defined.
\begin{lemma}
\label{lemma:Foracle-well-defined}
Let $M, L, L_1 \leq G$. If $[L]_G = [L_1]_G$, then $[\mathsf{F}^M_L]_M = [\mathsf{F}^M_{L_1}]_M$. In other words, if $L$ and $L_1$ are conjugate in $G$, then $\mathsf{F}^M_L$ and $\mathsf{F}^M_{L_1}$ are conjugate in $M$.
\end{lemma}
\begin{proof}
The natural $G$-actions on $L \backslash G$ and $L_1\backslash G$ are equivalent by Corollary~\ref{cor:prelim-equiv-vs-conj}. Thus, the induced $M$-action on $L \backslash G$ and the induced $M$-action on $L_1\backslash G$ are equivalent, using the same bijection on the domain. But, the $M$-action on $L \backslash G$ is an $(M, \mathsf{F}_L)$-action and the $M$-action on $L_1 \backslash G$ is an $(M, \mathsf{F}_{L_1})$-action.
By Corollary~\ref{cor:similar-multiset-same-action}, we find $[\mathsf{F}_L]_M = [\mathsf{F}_{L_1}]_M$.
\end{proof}
\subsection{Gluing $M$-orbits to find extensions to $G$-actions }
\label{section:comb extending}
In this section we show that any $(M, \mathsf{F}_L)$-action extends to a $(G,L)$-action.
We proved in the last section that the $M$-action induced by every $(G,L)$-action is an $(M, \mathsf{F}_L)$-action. Since all $(M, \mathsf{F}_L)$-actions are permutation equivalent (Corollary~\ref{cor:similar-multiset-same-action}), the given $(M,\mathsf{F}_L)$-action and the $(M, \mathsf{F}_L)$-action induced by the $(G,L)$-action $\rho_L$ are permutation equivalent. This gives a bijection between permutation domains which respects the $M$-actions. Thus, the given $M$-action extends to a $(G,L)$-action.
In what follows we construct the bijection explicitly.
Let $M, L \leq G$. Let $\psi: M \to \Sym(\Omega)$ be an $(M, \mathsf{F}_L)$-action.
By definition, we may label the orbits in $\Omega$ by the sets of cosets $K \backslash M$ for $K \in \mathsf{F}_L$ (each orbit is labeled by one set of cosets $K \backslash M$), so that $M$ acts on each such orbit by the natural action $\rho_K$.
Consider the natural $G$-action $\rho_L$ on right cosets $L \backslash G$. It will suffice to label $\Omega$ by the right cosets $L \backslash G$, so that the natural action of $G$ extends the $M$-action $\psi$.
Let $\sigma \in G$. Lemma~\ref{lemma:FsubL-Mactions} gave a permutation equivalence between the $M$-action on the orbit $(L\sigma)^M$ of $(L\sigma)$ in $L \backslash G$ and the natural $M$-action on $F_i \backslash M$, where $F_i = \sigma^{-1} L \sigma \cap M$. We extend this equivalence here.
\begin{construction}[Equivalence $\zeta$]
\label{def:extension map}
Fix a choice $\bm{\sigma} = (\sigma_1, \ldots, \sigma_d)$ of double coset representatives for $L \backslash G /M$. Recall the definition $\mathsf{F}_L(\bm{\sigma}) = \{ \{ F_i : i = 1 \ldots d \} \}$, where $F_i = \sigma_i^{-1} L \sigma_i \cap M$. Define the map $\zeta$ by
\begin{equation*}
\zeta: \left(\dot{\bigcup}_i F_i\backslash M \right) \to L \backslash G, \; \; \; \zeta: F_i \tau \mapsto L \sigma_i \tau.
\end{equation*}
\end{construction}
That $\zeta$ is a permutation equivalence of the $M$-actions on the two sets follows immediately from Lemma~\ref{lemma:FsubL-Mactions}.
\begin{corollary}
The map $\zeta$ given in Construction~\ref{def:extension map} is a permutation equivalence of the $M$-action.
\end{corollary}
The next result is almost immediate from our discussion above.
\begin{proposition}[Gluing]
\label{prop:gluing}
Let $L, M \leq G$. Suppose that $\psi: M \rightarrow \Sym(\Omega)$ describes an $(M, \mathsf{F}_L)$-action. Then, there exists an extension $\varphi: G \rightarrow \Sym(\Omega)$ of $\psi$ that is a $(G,L)$-action.
\end{proposition}
\begin{proof}
We label the $M$-orbits of $\Omega$ by the cosets $F_i\backslash M$, use $\zeta$ to label $\Omega$ by $L \backslash G$, then let $G$ act on $\Omega$ in its natural action on $L \backslash G$. The output is the evaluation of $\varphi$ on the generators of $G$ as given by $\varphi(g_j): La \mapsto L a g_j$.
\end{proof}
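A toy instance of the gluing (our own small example: $G = S_3$, $L = \langle(0\,1)\rangle$, $M = A_3$; there is one double coset with representative $\sigma = e$ and trivial $F = \sigma^{-1} L \sigma \cap M$, so $\zeta$ is simply $\tau \mapsto L\tau$):

```python
from itertools import permutations

def compose(p, q):                 # apply p, then q
    return tuple(q[i] for i in p)

G = [p for p in permutations(range(3))]        # S_3
L = [(0, 1, 2), (1, 0, 2)]                     # <(0 1)>
M = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]          # A_3

def coset(g):                      # the right coset Lg
    return frozenset(compose(l, g) for l in L)

# With F trivial, F\M is identified with M itself, and
# zeta : F tau |-> L sigma tau = L tau relabels the M-orbit by L\G.
zeta = {m: coset(m) for m in M}

def act(c, g):                     # the glued G-action: (Lh)^g = L(hg)
    return frozenset(compose(h, g) for h in c)
```

Restricting the glued $G$-action to $M$ recovers, via $\zeta$, the regular $M$-action we started from, which is the content of Proposition~\ref{prop:gluing} in this tiny case.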
\subsection{Defining one extension from $\SubSum$ solution}
\label{section:def-ext}
We prove Theorem~\ref{thm:main-reduction} (c) by defining an extension $\varphi \in \HExt(\psi)$ given a solution $[L] \in \SubSum(\OMS_\psi)$.
First of all, Construction~\ref{def:extension map} addresses the transitive case. It gives an explicit bijection $\zeta$ that, given an $(M, \mathsf{F}_L)$-action for $L \leq G$, defines an extension $(G,L)$-action. This bijection $\zeta$ can be computed in $\poly(n,m)$ time.
The issue remains of finding the $\mathsf{F}_L$ ``grouping'' of the $M$-orbits that respect the orbits of the $(G,\mathsf{L})$-action. \\
Fix a $\textsc{HomExtSym}$ instance $\psi$. Fix $\mathsf{L}: \Sub(G) \to \mathbb{N}$ in $\SubSum(\OMS_\psi)$, so $\mathsf{L}$ satisfies Equation~\eqref{eqn:HE comb char}.
Recall that $\mathsf{L}$ is represented by listing the subgroups in its support and their multiplicities. Since $\abs{\supp(\mathsf{L})} \leq \norm{\mathsf{L}}_1$, which equals the number of orbits of the $G$-action, we find that $\abs{\supp(\mathsf{L})} \leq m$.
It takes $\poly(n,m)$ time to compute the multiset $\mathsf{K}$ of point stabilizers (one point stabilizer per orbit), and label $[m]$ by $\dot{\bigcup}_{K \in \mathsf{K}} K \backslash M$, the right cosets in $M$ of the subgroups in $\mathsf{K}$. Compute the multiset $\sum_{L \in \mathsf{L}} [ \mathsf{F}_L ]$ in $\poly(n,m, \norm{\mathsf{K}}_1)$-time, by calling the $\mathfrak{F}$ oracle.
By Theorem~\ref{thm:HE comb main}, $[\mathsf{K}] = \sum_{L \in \mathsf{L}} [ \mathsf{F}_L ]$. Via at most $m^2$ $\poly(n,m)$-time conjugacy checks between subgroups in $M$, compute the map $\pi: \mathsf{K} \leftrightarrow \sum_{L \in \mathsf{L}} \mathsf{F}_L $ that identifies conjugate subgroups. Compute the conjugating element for each pair.
For each $L \in \mathsf{L}$, use the map $\zeta$ of Construction~\ref{def:extension map} to label $\Omega$ by right cosets of elements in $\mathsf{L}$. Define $\varphi$ by its natural action on cosets.
\section{Reducing to \textsc{TriOrMultiSSR}}
\label{section:uniqueness}
In this section we prove Theorem~\ref{thm:main-methods-trireduction}, i.e., an instance $\psi$ of $\textsc{HomExtSym}$ satisfying the conditions of Theorem~\ref{thm:main} will reduce to an instance $\OMS_\psi$ of $\textsc{TriOrMultiSSR}$.
Fix an instance $\psi: M \to S_m$ of $\textsc{HomExtSym}$ that satisfies the conditions of Theorem~\ref{thm:main}, i.e., $G = A_n$, $\ind{G}{M} = \poly(n)$ and $m < 2^{n-1}/\sqrt{n}$. Consider the instance $\OMS_\psi$ of $\textsc{OrMultiSSR}$ found via the reduction of Section~\ref{section:comb}. We will show that $\OMS_\psi$ satisfies the additional assumptions of $\textsc{TriOrMultiSSR}$ and provide answers for the additional oracles.
\paragraph{Ordering, the $\preccurlyeq$ oracle\\}
The ordering $\preccurlyeq$ on conjugacy classes in $\mathcal{U} = \ConjLeq(M)$ is given by comparing the indices of representative subgroups of the conjugacy classes. In other words, $[K_1] \preccurlyeq [K_2]$ if $\ind{M}{K_1} \leq \ind{M}{K_2}$. This relation is well-defined, as conjugate subgroups have the same index. The relation $\preccurlyeq$ is clearly a total preorder.
$\preccurlyeq$ oracle: The index of a subgroup $K \leq M$ can be computed in $\poly(n)$-time by Proposition~\ref{prop:pga-basic}. The $\preccurlyeq$ oracle compares two conjugacy classes in $\ConjLeq(M)$ by comparing the indices of two representatives.
\paragraph{Triangular condition, the $\triangle$ oracle\\ }
Here we define the $\triangle$ oracle on $\mathcal{U} = \ConjLeq(M)$ (Construction~\ref{def:triangle-oracle}), analyze its efficiency (Remark~\ref{rmk:triangle-oracle-efficiency}), then prove its correctness (Lemma~\ref{lemma:HE L is unique if K is large}). The assumptions of Theorem~\ref{thm:main} are essential.
First we set up some notation. By the assumptions of Theorem~\ref{thm:main}, $G = A_n$ and $M \leq G$ satisfies $\ind{G}{M} = \poly(n)$. Assume more specifically that $\ind{G}{M} < {n \choose r}$, for a constant $r$. By Jordan-Liebeck (Theorem~\ref{thm:JordanLiebeck}) we find that $(A_n)_{(\Sigma)} \leq M \leq (A_n)_{\Sigma}$ for some $\Sigma \subseteq [n]$ with $\lvert \Sigma \rvert < r$. Fix this subset $\Sigma \subseteq [n]$.
Recall that, for a subset $\Sigma \subseteq [n]$ that is invariant under action by the permutation group $M \leq S_n$, we denote by $M^\Sigma \leq \Sym(\Sigma)$ the induced permutation group of the $M$-action on $\Sigma$. \\
\begin{construction}[$\triangle$ oracle]
\label{def:triangle-oracle}
We define a map $\triangle: \SubLeq(M) \to \SubLeq(G)$.\footnote{Though the $\triangle$ oracle returns an element of $\ConjLeq(G)$ on an input from $\ConjLeq(M)$, these conjugacy classes are represented by subgroups. So, the $\triangle$ oracle should return an element of $\SubLeq(G)$ on an input from $\SubLeq(M)$, while respecting conjugacy.}
Let $K \in \SubLeq(M)$. By Jordan-Liebeck, we find that $(A_n)_{(\Gamma)} \leq K \leq (A_n)_{\Gamma}$ for $\Gamma \subseteq [n]$ with $\abs{\Gamma} < n/2$. There are two cases. If there is a subset $\Sigma_0 \subseteq \Gamma$ such that $K^{\Sigma_0} = M^\Sigma$, then let $\bar{\Gamma} = \Gamma \setminus \Sigma_0$ and
\begin{equation}
\triangle(K) = \begin{cases}
\Alt([n] \setminus \bar{\Gamma}) \times K^{\bar{\Gamma}} & \text{if $K^{\bar{\Gamma}}$ is even} \\
\text{the subgroup of index $2$ in }
\Sym([n] \setminus \bar{\Gamma}) \times K^{\bar{\Gamma}} & \text{if $K^{\bar{\Gamma}}$ contains an odd permutation}
\end{cases}.
\end{equation}
If such a $\Sigma_0$ does not exist, then let $\triangle(K) = \textsf{Error}$.
\end{construction}
\begin{remark}[Efficiency of $\triangle$ oracle]
\label{rmk:triangle-oracle-efficiency}
Answering the $\triangle$ oracle of Construction~\ref{def:triangle-oracle} requires finding orbits, finding the induced action on orbits, and checking permutation equivalence (or conjugacy of point stabilizers, per Corollary~\ref{cor:prelim-equiv-vs-conj}). These can be accomplished in $\poly(n, m)$ time (Propositions~\ref{prop:pga-basic} and~\ref{prop:pga-subgroup}).
\end{remark}
\begin{remark}
The $\triangle$ oracle is well-defined as a $\ConjLeq(M) \to \ConjLeq(G)$ map.
\end{remark}
Now, we prove that the oracle $\triangle$ (Definition~\ref{def:triangle-oracle}) satisfies the conditions of $\textsc{TriOrMultiSSR}$.
In other words, the equivalence class of the $M$-action on its longest orbit uniquely determines the equivalence class of the transitive $G$-action and this correspondence is injective. Lemma~\ref{lemma:HE L is unique if K is large} makes this more precise.
\begin{lemma}
\label{lemma:HE L is unique if K is large}
Let $M \leq G = A_n$ have index $\ind{G}{M} \leq {n \choose u}$. Let $G$ act on $\Omega$ transitively, with degree $\abs{\Omega} < {n \choose v}$. Assume $u + v < n/2$.
If $K_0$ is a point stabilizer of the induced $M$ action on its longest orbit, then $\triangle(K_0)$ is a point stabilizer of the $G$-action on $\Omega$.\footnote{If $M$ and $K_0$ are known, then $\triangle(K_0)$ is uniquely determined.}
\end{lemma}
To rephrase, if $M$ acts on its longest orbit as an $(M,K_0)$-action, then $G$ acts as a $(G,\triangle(K_0))$-action.
We defer the proof of Lemma~\ref{lemma:HE L is unique if K is large} to present a few useful claims.
\begin{claim}
\label{claim:HE (Delta,LDelta) determines large L}
If $(A_n)_{(\Sigma)} \leq L \leq (A_n)_{\Sigma}$, then the pair $(\Sigma, L^\Sigma)$ determines $L$.
\end{claim}
\begin{proof}
We have two cases. Either $L = (A_n)_{(\Sigma)} \times L^\Sigma = A_{n-\abs{\Sigma}} \times L^\Sigma$, or $L$ is an index $2$ subgroup of $(S_n)_{(\Sigma)} \times L^\Sigma = S_{n-\abs{\Sigma}} \times L^\Sigma$. In the first case, all permutations in $L^\Sigma$ must be even. In the second case, $L^\Sigma$ must contain an odd permutation.
\end{proof}
\begin{claim}
\label{claim:HE LDelta = McapL Delta}
Suppose that $(A_n)_{(\Sigma)} \leq L \leq (A_n)_{\Sigma}$ and $(A_n)_{(\Gamma)} \leq M \leq (A_n)_{\Gamma}$ for $\Gamma \cap \Sigma = \emptyset$. Then, $L^\Sigma = (L \cap M)^\Sigma$. (Equivalently, $M^\Gamma = (L \cap M)^\Gamma$.)
\end{claim}
\begin{proof}
The inclusion $\supseteq$ is obvious. We show $\subseteq$.
Let $\sigma \in L^\Sigma$. View $\sigma$ as a permutation in $S_n$ supported on $\Sigma$. Let $\Delta = [n] \setminus (\Gamma \cup \Sigma)$, so that $[n] = \Gamma \dotcup \Sigma \dotcup \Delta$. Consider the set $T = \{ \tau \in S_n : \supp(\tau) \subseteq \Delta \text{ and } \sgn \tau = \sgn \sigma \}$ (if $\sigma$ is odd, $T \neq \emptyset$ requires $\abs{\Delta} \geq 2$, which holds in our applications).
We see that for all $\tau \in T$, $\sigma \tau \in M \cap L$ and $(\sigma\tau)|_\Sigma = \sigma$. Thus, $\sigma \in (M \cap L)^\Sigma$.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lemma:HE L is unique if K is large}]
Let $L$ be a point stabilizer of $G$ acting on $\Omega$. Since $\abs{\Omega} < {n \choose v}$, by Jordan-Liebeck Theorem~\ref{thm:JordanLiebeck}, there exists a subset $\bar{\Gamma} \subset [n]$ such that $(A_n)_{(\bar{\Gamma})} \leq L \leq (A_n)_{\bar{\Gamma}}$ and $\abs{\bar{\Gamma}} < v$. Similarly, there exists $\Sigma \subset [n]$ such that $(A_n)_{(\Sigma)} \leq M \leq (A_n)_{\Sigma}$ and $\abs{\Sigma} < u$. Fix $\bar{\Gamma}$ and $\Sigma$.
By Theorem~\ref{thm:HE comb main}, we find that the point stabilizers of the $M$-action on $\Omega$ are described by $\mathsf{F}_L$. By Definition~\ref{def:FsubL} and Corollary~\ref{cor:similar-multiset-same-action}, we find that
\begin{equation*}
K_0 = \argmax \{\ind{M}{K} : K \in \mathsf{F}_L \} \sim_M \argmin \{ \abs{K}: K = g^{-1} L g \cap M \text{ for } g \in G \}.
\end{equation*}
But, $\abs{g^{-1} L g \cap M}$ is minimized when $g \in G = A_n$ satisfies $\bar{\Gamma}^g \cap \Sigma = \emptyset$. Fix such a $g$ and write $K = g^{-1} L g \cap M$, so that $K \sim_M K_0$. By Claims~\ref{claim:HE (Delta,LDelta) determines large L} and~\ref{claim:HE LDelta = McapL Delta} applied to $g^{-1} L g$ and $M$, we find that
\begin{equation}
g^{-1} L g = \begin{cases}
\Alt([n] \setminus \bar{\Gamma}^g) \times K^{\bar{\Gamma}^g} & \text{if $K^{\bar{\Gamma}^g}$ is even} \\
\text{the subgroup of index $2$ in }
\Sym([n] \setminus \bar{\Gamma}^g) \times K^{\bar{\Gamma}^g} & \text{if $K^{\bar{\Gamma}^g}$ contains an odd permutation}
\end{cases}.
\end{equation}
In other words, we have found that $g^{-1} L g = \triangle(K) \sim_G \triangle(K_0)$, i.e., $L \sim_G \triangle(K_0)$.
It follows that the $G$-action on $\Omega$ is a $(G, \triangle(K_0))$-action.
\end{proof}
\section{Generating extensions within one equivalence class}
\label{section:within eq class}
We now consider how to, given one extension $\varphi \in \Hom(G, S_m)$ of $\psi \in \Hom(M, S_m)$, generate all extensions of $\psi$ equivalent to $\varphi$.
\begin{theorem}
\label{thm:within eq class}
Let $M \leq G$ and $\psi \in \Hom(M, S_m)$. Suppose that $\varphi \in \Hom(G,S_m)$ extends $\psi$. Then the class of extensions equivalent to $\varphi$ can be efficiently enumerated.
\end{theorem}
We will see that proving this result reduces to finding coset representatives for subgroups of permutation groups. First, some notation for describing group actions equivalent to $\varphi$.
\begin{notation}
Let $\lambda \in S_m$. Let $\varphi \in \Hom(G,S_m)$. Define $\varphi^\lambda \in\Hom(G,S_m)$ by $\varphi^\lambda(g) = \lambda^{-1} \varphi(g) \lambda$ for all $g \in G$.
\end{notation}
While $\varphi^\lambda$ is equivalent to $\varphi$ regardless of the choice of $\lambda \in S_m$, we remark on the distinction between $\varphi^\lambda$ being an equivalent action, an equivalent extension of $\psi$, or equal to $\varphi$.
\begin{remark}
Let $\varphi_1, \varphi_2 \in \Hom(G,S_m)$.
\begin{itemize}
\item $\varphi_1$ and $\varphi_2$ are equivalent (as permutation actions) $\iff$ $\varphi_1 = \varphi_2^\lambda$ for some $\lambda \in S_m$.
\item $\varphi_1$ and $\varphi_2$ are equivalent extensions of $\psi$ $\iff$ $\varphi_1 = \varphi_2^\lambda$ and $\varphi_1|_M = \psi$ $\iff$ $\varphi_1 = \varphi_2^\lambda$ for some $\lambda \in C_{S_m}(\varphi_1(M)) = C_{S_m}(\psi(M))$.
\item $\varphi_1$ and $\varphi_2$ are equal $\iff$ $\varphi_1 = \varphi_2^\lambda$ for some $\lambda \in C_{S_m}(\varphi_1(G))$.
\end{itemize}
\end{remark}
We conclude that a set of coset representatives of $C_{S_m}(\varphi(G))$ in $C_{S_m}(\psi(M))$ generates the distinct equivalent extensions of $\psi$.
\begin{remark}
\label{rmk:eqclass-vs-cosetreps}
Let $R$ be a set of coset representatives of $C_{S_m}(\varphi(G))$ in $C_{S_m}(\psi(M))$. The set of equivalent extensions to $\varphi$ can be described (completely and without repetitions) by
$$
\{ \varphi^\lambda: \lambda \in R \} .$$
\end{remark}
These centralizers can be found in $\poly(n, m)$-time: the centralizer of a set $T$ of permutations in $S_m$ can be found in $\poly(\abs{T}, m)$ time (see Section~\ref{section:appendix-centralizer}), and we apply this to the images under $\psi$ and $\varphi$ of the generators of $M$ and $G$. We can now apply the cited unpublished result by Blaha and Luks, stated below and proved in Section~\ref{section:luks}.
\begin{theorem}[Blaha--Luks] \label{thm:luks}
Given subgroups $K\le L\le S_m$, one can efficiently enumerate a representative of each coset of $K$ in $L$.
\end{theorem}
Since coset representatives of $K = C_{S_m}(\varphi(G))$ in $L = C_{S_m}(\psi(M))$ can be efficiently enumerated, so can all equivalent extensions of $\varphi$, by Remark~\ref{rmk:eqclass-vs-cosetreps}.
As a corollary, we find that the number of equivalent extensions can be computed in $\poly(n,m)$ time.
\begin{corollary}
Suppose $\varphi \in \Hom(G,S_m)$ extends $\psi \in \Hom(M,S_m)$. The number of extensions of $\psi$ equivalent to $\varphi$ is $\ind{C_{S_m}(\psi(M))}{C_{S_m}(\varphi(G))}$. This can be computed in $\poly(n,m)$-time.
\end{corollary}
\section{Integer linear programming for large $m$}
\label{section:large}
There is an interesting phenomenon for very large $m$, namely $m > 2^{1.7^{n^2}}$: the instances $\OMS_\psi$ of $\textsc{OrMultiSSR}$ can then be solved in polynomial time.
$\textsc{MultiSSR}$ can naturally be formulated as an \textsc{Integer Linear Program}, with dimensions $\abs{\mathcal{U}} \times \abs{\mathcal{V}}$, the size of the universe $\mathcal{U}$ and length of the list $\mathfrak{F}$ (indexed by $\mathcal{V}$). The variables correspond to multiplicities of the elements of $\mathfrak{F}$. The constraints correspond to elements of $\mathcal{U}$, by checking whether their multiplicities in the multiset and subset sum are equal.
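To make the constraint structure concrete, here is a brute-force Python sketch of this formulation (the dict encoding of multisets, the function name \texttt{multi\_ssr}, and the explicit multiplicity bound are illustrative assumptions of ours; the Lenstra--Kannan machinery discussed below is what makes the actual instances tractable).

```python
from itertools import product

def multi_ssr(target, parts, bound):
    """Brute-force search for nonnegative integer multiplicities x with
    sum_i x[i] * parts[i] == target, i.e., the ILP formulation of MultiSSR.
    target: dict element -> multiplicity; parts: list of such dicts.
    Each element of the universe contributes one equality constraint."""
    goal = {e: c for e, c in target.items() if c > 0}
    for x in product(range(bound + 1), repeat=len(parts)):
        total = {}
        for xi, p in zip(x, parts):
            for e, c in p.items():
                if xi * c:
                    total[e] = total.get(e, 0) + xi * c
        if total == goal:
            return x
    return None
```

The variables `x` play the role of the multiplicities of elements of $\mathfrak{F}$, and the equality check per universe element mirrors the ILP constraints.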
In $\OMS_\psi$, these are $\Conj(M)$ and $\Conj(G)$. A result of Pyber~\cite{PyberSubgroupsSn} says that for $G \leq S_n$, the number of subgroups is bounded by $\abs{\Sub(S_n)} \leq 1.69^{n^2}$. This bound is tight, so we cannot hope for the number of variables, $\abs{\Conj(M)}$, to be smaller than exponential in $n^2$.
The ``low-dimensional'' algorithms of Lenstra and Kannan solve \textsc{Integer Linear Programming} in time polynomial in the input length for any fixed number of variables \cite{Lenstra1983, KannanIP}, which suffices for this purpose. We state their results more precisely below.
\begin{theorem}
\label{thm:prelim KannanIP}
The \textsc{Integer Linear Programming}--Search and Decision Problems can be solved in time $N^{O(N)} \cdot s$, where $N$ refers to the number of variables and $s$ refers to the length of the input.\footnote{This result shows that {\textsc{ILP}} is fixed-parameter tractable, but we will not use that terminology here.}
\end{theorem}
\begin{lemma}
Suppose that the \textsc{Integer Linear Programming} Search Problem can be solved in time $f(N, M, a)$. Then, the \textsc{Integer Linear Programming} Threshold-$k$ Enumeration Problem can be solved in time $f(N, M, a) \cdot O(k^2)$.
\end{lemma}
We have found that, for instances $\psi$ of $\textsc{HomExtSym}$ with $m > 2^{1.7^{n^2}}$, the Threshold-$k$ Enumeration Problem for $\OMS_\psi$ can be solved in $\poly(n,m,k)$-time. Consequently, for these instances $\psi$, the $\textsc{HomExtSym}$ Threshold-$k$ Enumeration Problem can also be solved in $\poly(n,m,k)$-time.
\section{Background: permutation group algorithms}
\label{section:pga}
\subsection{Basic results}
We present results we use from the literature on permutation group algorithms.
Our main reference is the monograph~\cite{SeressPGA}.
Recall that a group $G$ is \defn{given} or \defn{known} when a set of generators for $G$ is given/known. A coset $Ga$ is \defn{given} or \defn{known} if the group $G$ and a coset representative $a' \in Ga$ are given/known. A group (or a coset) is \defn{recognizable} if we have an oracle for membership and \defn{recognizable in time $t$} if the membership oracle can be implemented in time $t$.
\begin{proposition}
\label{prop:pga-membership-test}
Membership in a given group $G \leq S_n$ (or coset $Ga$) can be tested in $\poly(n)$ time. In other words, a known group (or coset) is polynomial-time recognizable.
\end{proposition}
\begin{proof}
This is accomplished by the Schreier-Sims algorithm, see \cite[Section 3.1 item (b)]{SeressPGA}.
\end{proof}
\begin{corollary}
\label{cor:pga-intersection-recognizable}
If $G_1, \ldots, G_k \leq S_n$ and $a_1, \ldots, a_k \in S_n$ are given, then the intersection $\bigcap_{i} G_i a_i$ is polynomial-time recognizable.
\end{corollary}
\begin{proposition}
\label{prop:pga-basic}
Given $G \leq S_n$, the following can be computed in $\poly(n)$-time.
\begin{enumerate}[(a)]
\item A set of $\leq 2n$ generators of $G$. \label{item:pga-nonredundant-generators}
\item The order of $G$.
\label{item:pga-order}
\item The index $\ind{G}{M}$, for a given subgroup $M \leq G$.
\label{item:pga-index}
\item The orbits of $G$. \label{item:pga-orbits}
\item The point stabilizers of $G$.
\end{enumerate}
\end{proposition}
\begin{proof}
Most items below are addressed in \cite[Section 3.1]{SeressPGA}.
\begin{enumerate}[(a)]
\item Denote by $T$ the set of given generators of $G$. Use membership testing to prune $T$ down to a non-redundant set of generators. By~\cite{Bab_subgroupchain}, the length of subgroup chains in $S_n$ is bounded by $2n$, so $\abs{T} \leq 2 n$ after pruning.
\item See \cite[Section 3.1 item (c)]{SeressPGA}.
\item Compute $\abs{M}$ and $\abs{G}$.
\item See \cite[Section 3.1 item (a)]{SeressPGA}.
\item See \cite[Section 3.1 item (e)]{SeressPGA}.
\end{enumerate}
\end{proof}
\begin{proposition}
\label{prop:pga-recognizable-to-given}
Let $M \leq G$ be a recognizable subgroup of $G$ of index $\ind{G}{M} = s$. A set of generators for $M$ and a set of coset representatives for $M \backslash G$ can be found in $\poly(n,s)$ time (including calls to the membership oracle).
\end{proposition}
\begin{proof}
Consider the subgroup chain $G \geq M \geq M_1 \geq M_{(12)} \geq M_{(123)} \geq \cdots \geq M_{(12\cdots n)} = 1$ ($M$ is followed by its stabilizer chain). Apply Schreier-Sims to this chain. (This is the ``tower of groups'' method introduced in~\cite{BabaiLV} and derandomized in~\cite{FurstHopcroftLuks}. Note that this method only requires the subgroups in this chain to be recognizable.)
\end{proof}
\begin{proposition}
\label{prop:pga-subgroup}
Let $G \leq S_n$ be a given permutation group. Let $M, L \leq G$ be given subgroups. Denote their indices by $s = \ind{G}{M}$ and $t = \ind{G}{L}$.
\begin{enumerate}[(a)]
\item The normalizer $N_G(M)$ can be found in $\poly(n,s)$-time.
\item The number of conjugates of $M$ in $G$ can be computed in $\poly(n,s)$-time.
\item The conjugacy of $L$ and $M$ in $G$ can be decided and a conjugating element $g \in G$ such that $g^{-1} L g = M$ can be found if it exists, in $\poly(n, s)$ time.
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}[(a)]
\item Let $S$ be the given set of generators of $M$. Take a set of coset representatives for $M \backslash G$, found by Proposition~\ref{prop:pga-recognizable-to-given}. Remove the coset representatives $g$ that do not satisfy $g^{-1} S g \subseteq M$. This is accomplished through membership testing. The remaining coset representatives, along with $S$, generate $N_G(M)$.
\item The number of conjugates of $M$ in $G$ is the index $\ind{G}{N_G(M)}$.
\item Check if $\abs{L} = \abs{M}$ by Proposition~\ref{prop:pga-basic} \ref{item:pga-order}. If not, they are not conjugate. Otherwise, let $S$ be the set of given generators of $M$. Now, $L$ and $M$ are conjugate if and only if there exists a coset representative $g$ for $N_G(M) \backslash G$ that satisfies $g^{-1} S g \subseteq L$.
\end{enumerate}\end{proof}
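As a toy illustration of the normalizer computation in part (a), the following Python sketch works with explicit element sets (an assumption made for readability; an actual implementation would use the Schreier-Sims machinery cited above rather than full enumeration).

```python
def compose(p, q):
    # Product p*q on permutations-as-tuples: apply p first, then q.
    return tuple(q[p[i]] for i in range(len(p)))

def inverse(p):
    r = [0] * len(p)
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

def generate(gens, n):
    # Closure of the generators under composition (fine for finite groups).
    ident = tuple(range(n))
    group, frontier = {ident}, [ident]
    while frontier:
        new = []
        for g in frontier:
            for s in gens:
                h = compose(g, s)
                if h not in group:
                    group.add(h)
                    new.append(h)
        frontier = new
    return group

def coset_reps(G, M):
    # One representative per right coset Mg of M in G.
    reps, covered = [], set()
    for g in sorted(G):
        if g not in covered:
            reps.append(g)
            covered |= {compose(m, g) for m in M}
    return reps

def normalizer(G, M, gens_M, n):
    # Keep the coset representatives g with g^{-1} S g contained in M,
    # then generate N_G(M) from them together with the generators of M.
    keep = []
    for g in coset_reps(G, M):
        gi = inverse(g)
        if all(compose(compose(gi, s), g) in M for s in gens_M):
            keep.append(g)
    return generate(list(gens_M) + keep, n)
```

For example, the normalizer of $\langle (0\,1) \rangle$ in $S_4$ is $\langle (0\,1) \rangle \times \langle (2\,3) \rangle$, of order $4$.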
\begin{proposition}
\label{prop:pga-double-cosets}
Let $G \leq S_n$ be a given permutation group. Let $M, L \leq G$ be given subgroups. Denote their indices by $s = \ind{G}{M}$ and $t = \ind{G}{L}$.
\begin{enumerate}[(a)]
\item Given $g, h \in G$, membership of $h$ in the double coset $LgM$ can be decided in $\poly(n, \min\{s,t\})$-time.
\item A set of double coset representatives for $L \backslash G /M$ can be found in $\poly(n, \min\{s,t\})$-time.
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}[(a)]
\item Without loss of generality assume that $s \leq t$. Notice that
\begin{align*}
h \in LgM & \iff Lh \cap g M \neq \emptyset
\iff g^{-1} L h \cap M \neq \emptyset
\iff (g^{-1}Lg) \cap Mh^{-1} g \neq \emptyset.
\end{align*}
So, deciding whether $h \in LgM$ is equivalent to deciding whether the subgroup $L^* = g^{-1} L g$ and the coset $M g^*$ have non-empty intersection, where $g^* = h^{-1} g$. This intersection, $L^* \cap M g^*$, is either empty or a right coset of $L^* \cap M$ in $L^*$. In what remains, we check whether some right coset of $L^* \cap M$ in $L^*$ meets $M g^*$.
Notice that $\ind{L^*}{L^* \cap M} \leq \ind{G}{M} = s$. Find a set $R$ of coset representatives of $L^* \cap M$ in $L^*$ using Proposition~\ref{prop:pga-recognizable-to-given}, noting that $L^*\cap M$ is recognizable (Corollary~\ref{cor:pga-intersection-recognizable}).
For each representative $r \in R$, check whether $r \in L^* \cap Mg^*$ (Corollary~\ref{cor:pga-intersection-recognizable}).
\item Assume without loss of generality that $s \leq t$ (otherwise swap the roles of $L$ and $M$). Every double coset in $L \backslash G / M$ is a union of left cosets of $M$, so a list of $s$ representatives of the cosets $G/M$ is a redundant set of double coset representatives for $L \backslash G /M$. This can be pared down to a set of non-redundant double coset representatives by at most ${s \choose 2}$ membership tests using part (a).
\end{enumerate}
\end{proof}
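The chain of equivalences in part (a) can be sanity-checked by brute force on tiny groups. The following Python sketch (explicit element sets, our own illustrative encoding of permutations as tuples) compares direct membership $h \in LgM$ with non-emptiness of $(g^{-1}Lg) \cap M h^{-1} g$.

```python
def in_double_coset(h, g, L, M):
    # Permutations are tuples; mul(p, q) applies p first, then q.
    n = len(h)
    def mul(p, q):
        return tuple(q[p[i]] for i in range(n))
    def inv(p):
        r = [0] * n
        for i, v in enumerate(p):
            r[v] = i
        return tuple(r)
    gi, hi = inv(g), inv(h)
    direct = any(h == mul(mul(l, g), m) for l in L for m in M)  # h in LgM
    left = {mul(mul(gi, l), g) for l in L}                      # g^{-1} L g
    right = {mul(m, mul(hi, g)) for m in M}                     # M h^{-1} g
    return direct, bool(left & right)
```

The two booleans agree for every $h$, matching the displayed equivalence.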
\subsection{Generators and relations}
\label{section:pga-generators-relations}
Let $x_1, \ldots, x_s$ be free generators of the free group $F_s$. Let $R_1, \ldots, R_t \in F_s$. The notation $G = \langle x_1, \ldots, x_s \mid R_1, \ldots, R_t\rangle$ refers to the group $F_s/N$ where $N$ is the normal closure of $\{R_1, \ldots, R_t\}$ in $F_s$. This notation is referred to as a generator-relator presentation of $G$; the $R_i$ are called the relators.
\begin{definition}[Straight-line program]
Let $X$ be a set of generators of a group $H$. A \defn{straight line program} in $H$ starting from $X$ reaching a subset $Y \subseteq H$ is a sequence $h_1, \ldots, h_m$ of elements of $H$ such that, for each $i$, either $h_i \in X$, or $h_i^{-1} \in X$, or $(\exists j, k < i)(h_i = h_jh_k)$, and $Y \subseteq \{h_1, \ldots, h_m\}$.
\end{definition}
We shall say that a straight line program is \defn{short} if its length is $\poly(n)$, where $n$ is a given input parameter.
\begin{theorem}
\label{thm:pga-straight-line}
Let $G \leq S_n$ be given by a set $S=\{a_1, \ldots, a_s\}$ of generators. Then, there exists a presentation $G \cong \langle x_1, \ldots, x_s \mid R_1, \ldots, R_t\rangle$ such that the set $\{R_1, \ldots, R_t\}$ is described by a short straight-line program, and the free generator $x_i$ corresponds to $a_i$ under the $F_s \to G$ epimorphism. Moreover, this straight-line program can be constructed in polynomial time.
\end{theorem}
The proof of this well-known fact follows from the Schreier-Sims algorithm.
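For concreteness, a straight-line program over permutation generators can be represented and evaluated as in the following Python sketch (the instruction encoding is our own illustrative choice, not the one used in the literature).

```python
def evaluate_slp(program, gens, n):
    """Evaluate a straight-line program over permutation generators.
    Instructions: ("gen", i) loads gens[i]; ("inv", j) inverts the j-th
    computed element; ("mul", j, k) composes the j-th and k-th computed
    elements (apply the j-th first, then the k-th)."""
    vals = []
    for instr in program:
        if instr[0] == "gen":
            vals.append(gens[instr[1]])
        elif instr[0] == "inv":
            p = vals[instr[1]]
            r = [0] * n
            for i, v in enumerate(p):
                r[v] = i
            vals.append(tuple(r))
        else:  # ("mul", j, k)
            p, q = vals[instr[1]], vals[instr[2]]
            vals.append(tuple(q[p[i]] for i in range(n)))
    return vals
```

Evaluating a relator through such a program costs one group operation per instruction, which is the mechanism behind the $\poly(n,m)$ verification in the next subsection.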
\subsection{Extending a homomorphism from generators}
\label{section:promise}
We address Remark~\ref{rmk:promise} that $\textsc{HomExtSym}$ is not a promise problem. The input homomorphism $\psi: M \to H$ is represented by its values on generators of $M$. Whether this input does indeed represent a homomorphism, i.e., whether the values on the generators extend to a homomorphism on $M$, can be verified in $\poly(n)$ time.
\begin{proposition}
Let $G\leq S_n$ and $H \leq S_m$ be permutation groups. Let $S=\{a_1, \ldots, a_s\}$ be a set of generators of $G$ and $f: S \to H$ a function. Whether $f$ extends to a $G \to H$ homomorphism is testable in $\poly(n,m)$ time.
\end{proposition}
\begin{proof}
By Theorem~\ref{thm:pga-straight-line}, a generator-relator presentation of $G$ can be found in $\poly(n)$ time, in the sense that the relators are described by straight-line programs constructed in $\poly(n)$ time. If $R_i(a_1, \ldots, a_s)$ is one of the relators, then we can verify $R_i(f(a_1), \ldots, f(a_s)) =1$ in time $\poly(n,m)$ by evaluating the straight-line program. The validity of these equations is necessary and sufficient for the extendability of $f$.
\end{proof}
In particular, whether inputs to $\textsc{HomExtSym}$ satisfy the conditions of Theorems~\ref{thm:bounded}--\ref{thm:large-m} (and Theorems~\ref{thm:bounded-enum}--\ref{thm:large-enum}) can be verified in $\poly(n)$ time.
\subsection{Centralizers in $S_n$}
\label{section:appendix-centralizer}
\begin{proposition}
Given $G \leq S_n$, its centralizer $C_{S_n}(G)$ in the full symmetric group can be found in polynomial time.
\end{proposition}
\begin{proof}
Let $T = \{t_i\}_i$ denote the given set of generators for $G$. Without loss of generality, we may assume $\abs{T} \leq 2 n$ by Proposition~\ref{prop:pga-basic}~\ref{item:pga-nonredundant-generators}.
Construct the permutation graph $X = (V,E)$ of $G$, a colored graph on vertex set $V = [n]$ and edge set $E = \bigcup_{t \in T}E_t$, where $E_t= \{ (i, i^t): i \in [n] \}$ for each color $t \in T$. The edge set colored by $t \in T$ describes the permutation action of $t$ on $[n]$. We see that $C_{S_n}(G) = \Aut(X)$, where automorphisms preserve color by definition.
If $G$ is transitive ($X$ is connected), then $C_{S_n}(G)$ is semiregular (all point stabilizers are trivial). For $i, j \in [n]$, it is possible in $\poly(n)$ time to decide whether there exists a permutation $\sigma \in \Aut(X) = C_{S_n}(G)$ satisfying $i^\sigma = j$, and to find the unique such $\sigma$ if it exists. To see this, build the permutation $\sigma$ by setting $i^\sigma = j$, then following all colored edges from $i$ and $j$ in pairs to extend the assignment. The assignment is well-defined if and only if the permutation $\sigma \in \Aut(X)$ satisfying $i^\sigma = j$ exists.
In fact, if $X_1 = (V_1, E_1)$ and $X_2 = (V_2, E_2)$ are connected colored graphs, then a color-preserving isomorphism taking $i \in V_1$ to $j \in V_2$ can be found in $\poly(\abs{V_1})$ time, if one exists.
If $X$ is disconnected, collect the connected components of $X$ by isomorphism type, so that there are $m_i$ copies of the connected graph $X_i$ in $X$, where $i = 1, \ldots, \ell$ numbers the isomorphism types. The components and multiplicities can be found in $\poly(n)$ time by finding the components of $X$ (or, orbits of $G$, by Proposition~\ref{prop:pga-basic}~\ref{item:pga-orbits}) and pairwise checking for isomorphism. The automorphism group of $X$ is
\begin{equation*}
\Aut(X) = \Aut(X_1) \wr S_{m_1} \times \cdots \times \Aut(X_\ell) \wr S_{m_\ell}.
\end{equation*}
Each $X_i$ is connected, so $\Aut(X_i)$ can be found as above.
\end{proof}
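A minimal Python sketch of the transitive case, assuming the group is given by generator permutations stored as tuples: for each candidate image of the point $0$, propagate the partial assignment along the colored edges and keep it only if it is consistent.

```python
def centralizer_transitive(gens, n):
    # gens generate a transitive G <= S_n. A permutation sigma commutes
    # with g iff sigma(i^g) = sigma(i)^g for all i, so knowing sigma at i
    # forces sigma at i^g. Try each image of 0 and propagate.
    cent = []
    for j in range(n):
        sigma = {0: j}
        queue, ok = [0], True
        while queue and ok:
            i = queue.pop()
            for g in gens:
                src, img = g[i], g[sigma[i]]
                if src in sigma:
                    if sigma[src] != img:
                        ok = False
                        break
                else:
                    sigma[src] = img
                    queue.append(src)
        if ok and len(sigma) == n and len(set(sigma.values())) == n:
            cent.append(tuple(sigma[k] for k in range(n)))
    return cent
```

Since $G$ is transitive, the propagation reaches every point, so each candidate image of $0$ yields at most one centralizing permutation, matching semiregularity.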
\section{Blaha--Luks: enumerating coset representatives} \label{section:luks}
We sketch the proof of the unpublished result by Blaha and Luks (Theorem~\ref{thm:luks}), restated here for convenience. Below, by ``coset'' we mean ``right coset.''
\begin{theorem}[Blaha--Luks]
\label{thm:luks-local}
Given subgroups $K\le L\le S_n$, one can efficiently enumerate (at $\poly(n)$ cost per item) a representative of each coset of $K$ in $L$.
\end{theorem}
Let $\textsc{MoveCoset}(M\sigma, i, j)$ be a routine that decides whether there exists a permutation $\pi \in M\sigma$ satisfying $i^\pi = j$, and if so, finds one.
\begin{proposition}
$\textsc{MoveCoset}$ can be implemented in polynomial time.
\end{proposition}
\begin{proof}
Answering $\textsc{MoveCoset}$ is equivalent to finding $\pi \in M$ satisfying $i^\pi = j^{\sigma^{-1}}$, if one exists.
This amounts to an orbit computation for $M$ (Proposition~\ref{prop:pga-basic}~\ref{item:pga-orbits}): check whether $j^{\sigma^{-1}}$ lies in the $M$-orbit of $i$ and, if so, extract a witness permutation.
\end{proof}
\begin{definition}[Lexicographic ordering of $S_n$]
Let us encode the permutation $\pi \in S_n$ by the string $\pi(1) \pi(2) \cdots \pi(n)$ of length $n$ over the alphabet $[n]$.
Order permutations lexicographically by this code.
\end{definition}
Note that the identity is the lex-first permutation in $S_n$.
\begin{lemma}
Let $\sigma \in S_n$ and $K \leq S_n$. The algorithm \textsc{LexFirst} (below) finds the
lex-first element of the subcoset $K \sigma\subseteq S_n$ in polynomial time.
\end{lemma}
\begin{algorithm}[H]
\caption{LexFirst within Subcoset}
\label{algorithm:LexFirst}
\begin{algorithmic}[1]
\Procedure{LexFirst}{subcoset $K\sigma$}
\State \textbf{for} $ i \in [n]$ \textbf{do} \; $i^\pi \gets \Null$\pcom{ Initialize $\pi:[n] \rightarrow [n]\cup\{\Null\}$}
\For{$s \in [n]$} \pcom{Find smallest possible image of $1$ under a permutation in $K\sigma$, then iterate.}
\For{$t \in [n]$}\pcom{Find smallest $s^\pi$ possible by checking $[n]$ in order}
\State \textbf{if} $\textsc{MoveCoset}(K\sigma,s,t) = \True$ \textbf{break}
\EndFor
\State $s^\pi \gets t$
\State $\tau \gets \textsc{MoveCoset}(K\sigma, s, t)$
\State $K\sigma \gets K_s \tau$ \pcom{Restrict subcoset to elements moving $s$ to $t$; $K_s$ is the stabilizer of $s$ in $K$}
\EndFor
\State \Return $\pi$
\EndProcedure
\end{algorithmic}
\end{algorithm}
It is straightforward to verify the correctness and efficiency of \textsc{LexFirst}. \qed
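On explicit (small) cosets, the greedy strategy of \textsc{LexFirst} can be exercised in a few lines of Python, with a brute-force filter standing in for \textsc{MoveCoset} (for implicitly given cosets, the point of the algorithm is precisely that this filter is replaced by the polynomial-time \textsc{MoveCoset} routine).

```python
def lex_first(coset):
    # coset: explicit nonempty set of permutations (tuples of length n).
    # Greedily fix the images of 0, 1, 2, ..., choosing the smallest image
    # available in the current candidate set (brute-force MoveCoset).
    n = len(next(iter(coset)))
    cands = set(coset)
    for s in range(n):
        t = min(p[s] for p in cands)          # smallest image of s
        cands = {p for p in cands if p[s] == t}  # restrict the subcoset
    (pi,) = cands
    return pi
```

On an explicit set this of course coincides with the lexicographic minimum; the greedy structure is what carries over to the implicit setting.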
\begin{proof}[Proof of Theorem~\ref{thm:luks-local}]
Let $K \leq L \leq S_n$. Let $S$ be a set of generators of $L$.
The \defn{Schreier graph} $\Gamma =\Gamma(K \backslash L, S)$ is the permutation graph of the $L$-action on the coset space $K \backslash L$, with respect to the set $S$ of generators. $\Gamma$ is a directed graph with vertex set $V = K \backslash L$ and edge set $E = \{ (v, v^\pi): v \in V, \pi \in S \}$.
To prove Theorem~\ref{thm:luks-local}, we may assume $\abs{S} \leq 2n$, by Proposition~\ref{prop:pga-basic}\ref{item:pga-nonredundant-generators}. Use breadth-first search on $\Gamma$, constructing $\Gamma$ along the way. Represent each vertex (a coset) by its lexicographic leader. Then, store the discovered vertices, ordered lexicographically, in a balanced dynamic search tree such as a red-black tree. Note that the tree will have $O(\log(n!)) = O(n \log n)$ depth and every vertex of $\Gamma$ has at most $2n$ out-neighbors. Hence, the incremental cost is $\poly(n)$.
\end{proof}
\section{List-decoding motivation for $\textsc{HomExt}$ Search and Threshold-$k$ Enumeration}
\label{section:appendix-motivation}
In this appendix we shall (a) indicate that Homomorphism Extension is a natural component of list-decoding homomorphism codes, (b) discuss the role of Theorem~\ref{thm:main} in list-decoding, and (c) motivate the special role of Threshold-$2$ Enumeration in this process. We note that all essential ideas in $\textsc{HomExt}$ Threshold-$k$ Enumeration already occur in the Threshold-$2$ case.
A function $\varphi: G \to H$ is an \defn{affine homomorphism} if $\varphi(a b^{-1} c) = \varphi(a) \varphi(b)^{-1} \varphi(c)$ for all $a, b, c \in G$, or, equivalently, if $\varphi = h_0 \cdot \varphi_0$ for some element $h_0 \in H$ and homomorphism $\varphi_0: G \to H$. For groups $G$ and $H$, let $\aHom(G,H)$ denote the set of affine $G \to H$ homomorphisms.
Let $H^G$ denote the set of all functions $f: G \to H$. We view $\aHom(G,H)$ as a (nonlinear) code within the code space $H^G$ (the space of possible ``received words'') and refer to this class of codes as \defn{homomorphism codes}. ($H$ is the alphabet.) These codes are candidates for \defn{local} list-decoding up to minimum distance. For more detailed motivation see~\cite{ GKS06,DGKS08, homcodes}.
In~\cite{homcodes}, the \textsc{Homomorphism Extension} Search Problem arises as a natural roadblock to list-decoding homomorphism codes, if the minimum distance does not behave nicely.
To elaborate, the minimum distance of $\aHom(G,H)$ is the minimum normalized Hamming distance between two $G \to H$ affine homomorphisms. The complementary quantity is the \defn{maximum agreement}, which for the code $\aHom(G,H)$ we denote by
\begin{equation}
\label{eqn:Lambda}
\Lambda = \Lambda_{G,H} = \max_{\substack{\varphi_1, \varphi_2 \in \aHom(G,H) \\ \varphi_1 \neq \varphi_2}} \agr(\varphi_1, \varphi_2),
\end{equation}
where $\agr(\varphi_1, \varphi_2) = \frac{1}{\abs{G}} \abs{\{ g \in G: \varphi_1(g) = \varphi_2(g) \}}$ is the fraction of inputs on which two homomorphisms agree.
\paragraph{(a) $\textsc{HomExt}$ as a component of list-decoding\\}
When list-decoding a function $f:G \to H$, i.e., finding all $\varphi \in \aHom(G,H)$ satisfying $\agr(f, \varphi) \geq \Lambda + \epsilon$ for fixed $\epsilon >0$, we run into difficulty if there is
a subgroup $M \lneq G$ satisfying $\abs{M} > (\Lambda + \epsilon)\abs{G}$. In this case, it is possible for the agreement between $f$ and $\varphi$ to lie entirely within $M$. As a consequence, $f$ may only provide information on the restriction $\varphi|_M: M \to H$ of $\varphi$ to $M$, but not on its behavior outside $M$. The natural objects returned by our list-decoding efforts are such partial homomorphisms, defined only on the subgroup $M$. We see from this that solving $\textsc{Homomorphism Extension}$ from subgroups of density greater than $\Lambda$ is a natural component of full list-decoding.
Works prior to~\cite{homcodes} considered cases for which $\Lambda$ was known, so it could be guaranteed that affine homomorphisms $\varphi$ in the output satisfied $\agr(f, \varphi) > \Lambda + \epsilon/2$.\footnote{In $\poly(1/\delta,\log \abs{G})$ time, we can estimate $\agr(f, \varphi)$ for $\varphi$ in the output list to within $\delta$ with high confidence. With this, we can prune the small-agreement homomorphisms satisfying $\agr(f, \varphi) \leq \Lambda+\epsilon/2$ with high probability.\label{footnote:pruning}}
Additionally, they considered classes of groups for which defining an affine homomorphism on a set of density greater than $\Lambda$ immediately defined the affine homomorphism on the whole domain, so $\textsc{HomExt}$ was not an issue.
\paragraph{(b) The case $G$ is alternating, $M$ has polynomial index\\}
One of the main results stated in~\cite{homcodes} is the following.
\begin{theorem}
\label{thm:homcodes-alttoalt}
Let $G =A_n$, $H = S_m$ and $m < 2^{n-1}/\sqrt{n}$. Then, $\aHom(G,H)$ is algorithmically list-decodable, i.e., there exists a list-decoder that decodes $\aHom(G,H)$ up to distance $(1-\Lambda-\epsilon)$ in time $\poly(n, m, 1/\epsilon)$ for all $\epsilon>0$.
\end{theorem}
The proof of this result depends on the main result of the present paper, Theorem~\ref{thm:main}, in the following way.
For $A_n$, the theory of permutation groups tells us that $\Lambda \geq 1/{n\choose 2}$. It depends on $H$ whether this lower bound is tight. What the algorithm in~\cite{homcodes} actually finds is an intermediate output list consisting of $M \to S_m$ homomorphisms, where $M\leq A_n$ has order greater than $\Lambda\abs{A_n}$, i.e., $\ind{A_n}{M} < {n \choose 2}$. Our Theorem~\ref{thm:main} solves $\textsc{HomExt}$ for the case $\ind{A_n}{M} = \poly(n)$ and $m<2^{n-1}/\sqrt{n}$, completing the proof of Theorem~\ref{thm:homcodes-alttoalt}.
\begin{remark}
\label{rmk:roadblock-to-LD}
The restrictions on $H$ in Theorem~\ref{thm:homcodes-alttoalt} arise from the limitations of the $\textsc{HomExt}$ results in this paper. Any $\textsc{HomExt}$ result relaxing the conditions on $H$ would automatically yield the same relaxation on $H$ for list-decoding, potentially extending the validity to all permutation groups $H$. In this sense, the \textit{limitations of our understanding of the Homomorphism Extension Problem constitute one of the main roadblocks to list-decoding homomorphism codes for broader classes of groups.}
\end{remark}
\paragraph{(c) Role of Threshold-$2$ Enumeration\\}
Our discussion above shows that $\Lambda$ is the lower threshold for densities of subgroups from which $\textsc{HomExt}$ must extend. Also, the algorithm of~\cite{homcodes} guarantees that only partial homomorphisms with domain density greater than $\Lambda$ are of interest.
However, the actual value of $\Lambda$ is not obvious to compute, nor is it automatically given as part of the input to a list-decoding problem. Lower bounds on $\Lambda$ are necessary to make $\textsc{HomExt}$ tractable; they also improve the algorithmic efficiency and output quality in list-decoding. Solving $\textsc{HomExt}$ Threshold-2 Enumeration instead of $\textsc{HomExt}$ Search, when extending lists of partial homomorphisms, can provide (or improve) lower bounds on $\Lambda$.
It is easy to see how Threshold-$2$ helps improve our lower bound on $\Lambda$. If a partial homomorphism $\psi$ extends non-uniquely, $\textsc{HomExt}$ Threshold-$2$ returns a pair of distinct homomorphisms whose agreement set contains the domain of $\psi$. So their agreement (at least the density of the domain of $\psi$) witnesses an updated lower bound on $\Lambda$.
Better lower bounds for $\Lambda$ have three main consequences.
\begin{itemize}
\item As discussed, better lower bounds for $\Lambda$ relax the requirements for the $\textsc{HomExt}$ algorithm called by the list-decoder. It suffices to extend from subgroups with densities above the lower bound.
\item Since the algorithm of~\cite{homcodes} guarantees that only partial homomorphisms with domain density greater than $\Lambda$ are of interest, the intermediate list of partial homomorphisms may be pruned.
\item Once a list of full homomorphisms is generated, a better lower bound allows better pruning of the output list of a list-decoder
(discussed in footnote~\ref{footnote:pruning}).
\end{itemize}
\bibliographystyle{alpha}
\section{Introduction}
The recent wide field surveys performed in the optical and radio
bands (e.g. SDSS\footnote{Sloan Digital Sky Survey (York et al. 2000)} and FIRST\footnote{Faint Images of the Radio Sky at Twenty centimetres survey (Becker et al. 1995; Helfand et al. 2015)})
showed that the population of radio sources
associated with active galactic nuclei (AGN) is dominated by objects in
which the radio emission is unresolved or barely resolved in the
FIRST images (e.g. Best et al. 2005; Baldi \& Capetti 2010; Baldi et al. 2018a): this implies that they have
typical sizes of less than $\sim$10~kpc. In contrast, radio
galaxies selected by high-flux limited low-frequency surveys such as the 3C (Edge et al. 1959),
the 3CR (Bennett 1962), the 4C (Pilkington et al. 1965) and the B2 (Colla et al. 1975; Fanti et al. 1978) often extend to
hundreds of kpc (FRI/FRII) and appear resolved at the angular
resolution provided by FIRST.
Baldi et al. (2010, 2018a) showed that the radio
galaxies selected in the local Universe at 1.4~GHz, with similar
bolometric luminosities, span a broad distribution of radio
luminosities and sizes, from compact to resolved sources with clear extended radio emission.
It is important to note that no clear dichotomy is present between
sources selected from classical low-frequency radio catalogues (B2, 3C, 4C) and
radio galaxies selected in the local Universe by surveys such as FIRST.\\
The lack of a clear difference in the luminosity and size distributions
requires the adoption of an arbitrary angular size threshold,
which furthermore corresponds to a different physical scale depending on distance.
Therefore, a precise definition of the population of compact radio sources suffers from several
observational difficulties, mainly because they are selected from
surveys (e.g. FIRST, NVSS\footnote{National Radio Astronomy Observatory (NRAO) Very Large Array (VLA) Sky Survey (Condon et al. 1998)} and AT20G\footnote{The Australia Telescope 20~GHz survey (Murphy et al. 2010)}) limited in flux, resolution and sensitivity.
Ghisellini (2011) first described the compact sources studied by
Baldi \& Capetti (2009, 2010) as FR0s. The FR0 nomenclature was then followed by
Sadler et al. (2014) ``as a convenient way
of linking the compact radio sources seen in nearby galaxies into the
canonical Fanaroff-Riley classification scheme''.
However, Sadler et al. found a more diversified population than Baldi \& Capetti (2010), e.g. with a significant
contribution of high-excitation galaxies (HEG). These differences
are likely related to a substantial distinction in the luminosity functions of the two samples
considered: the Sadler et al. sources extend to a radio power $\sim$100 times higher at a
radio frequency 10 times higher than the sample selected by Baldi \& Capetti. \\
It is also clear that compact radio sources are a very
heterogeneous population and they can be produced by AGN with widely
different multi-wavelength properties. For example, although most of
them are radio-loud AGN, radio-quiet galaxies often show compact
radio cores sometimes associated with pc/kpc scale emission (Ulvestad \& Ho 2001;
Nagar et al. 2005; Baldi et al. 2018b). Furthermore, the properties of their hosts and nuclei
differ depending on the frequency and flux threshold at which they
are selected.
Considering the difficulties in unambiguously defining the class described above,
Baldi \& Capetti suggested restricting the FR0 definition to a sub-population of
compact radio sources whose compactness is not due to relativistic effects
and which do not follow the correlation between total and
core radio power of classical FRI and FRII sources (Giovannini et al. 1988).
Indeed, a source property useful for comparing and selecting
populations with different properties is the core dominance. Giovannini
et al. (1988) discussed the core dominance properties of all sources from the
3CR and B2 catalogues, with selection effects only on declination and
galactic latitude. A clear correlation between the core and total radio power
was found, useful to constrain the source orientation and jet velocity. The
best-fit linear regression of LogP$_{\rm c}$ versus LogP$_{\rm t}$ gives (see Giovannini et al. 2001):
\begin{equation}
\rm LogP_{c}=(7.6\pm1.1)+(0.62\pm0.04)~LogP_{t}
\end{equation}
where P$_{\rm c}$ is the core radio power at 5~GHz and P$_{\rm t}$ is the total radio power at 408~MHz.
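As a worked illustration of this relation (our sketch; the function name is ours, and radio powers are assumed in the usual logarithmic units, e.g. W~Hz$^{-1}$):

```python
def predicted_log_core_power(log_p_total):
    """Best-fit regression of Giovannini et al. (2001):
    LogP_c = (7.6 +/- 1.1) + (0.62 +/- 0.04) * LogP_t,
    with P_c the 5 GHz core power and P_t the 408 MHz total power."""
    return 7.6 + 0.62 * log_p_total

# A source with LogP_t = 25 is expected to have LogP_c = 7.6 + 0.62*25 = 23.1.
```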
In a pilot program of high resolution ($\sim 0\farcs2$) radio imaging of a small sample of compact
sources, Baldi et al. (2015, hereafter B15) defined as genuine FR0
those sources that appear unresolved, or slightly resolved, on a scale of 1-3~kpc in the radio maps, that are
located in red massive ($\sim$10$^{11}$~M$_{\odot}$) early-type galaxies with high black hole masses
(M$_{\rm BH}$$\geq$10$^{8}$M$_{\odot}$) and that are spectroscopically classified in the optical as low-excitation
galaxies (LEG)\footnote{LEG have generally weaker [OIII]-line
emission with respect to high-excitation galaxies (HEG) that show
[OIII]/H$\alpha$>0.2 and equivalent width of [OIII]>3~\AA. (Laing
et al. 1994; Jackson \& Rawlings 1997). More recent definitions have been provided by Kewley et al. (2006) on the basis of the
L$_{\rm [OIII]}$/$\sigma$$^{4}$ quantity and Buttiglione et al. (2010)
on the basis of the Excitation Index (EI) defined as EI=log[OIII]/H$\beta$-1/3(log[NII]/H$\alpha$+log[SII]/H$\alpha$+log[OI]/H$\alpha$).
In particular, LEG sources are characterized by EI$\leq$0.95.}.
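For concreteness, the Excitation Index of Buttiglione et al. (2010) and the quoted LEG criterion can be computed from the line fluxes as follows (an illustrative sketch with our own function names):

```python
import math

def excitation_index(oiii, hbeta, nii, sii, oi, halpha):
    """EI = log([OIII]/Hbeta)
           - 1/3*(log([NII]/Halpha) + log([SII]/Halpha) + log([OI]/Halpha))."""
    return (math.log10(oiii / hbeta)
            - (math.log10(nii / halpha)
               + math.log10(sii / halpha)
               + math.log10(oi / halpha)) / 3.0)

def is_leg(oiii, hbeta, nii, sii, oi, halpha):
    """LEG sources are characterized by EI <= 0.95."""
    return excitation_index(oiii, hbeta, nii, sii, oi, halpha) <= 0.95
```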
The sources of the B15 sample are highly core-dominated, since most of the emission
detected at 5$\arcsec$ (FIRST resolution) is included within a compact region unresolved at 45$\arcsec$ (NVSS resolution): this results in a core dominance higher by a factor of $\sim$30 for FR0s with respect to FRIs of the 3CR catalog.
\noindent
Line luminosity is a robust proxy of the radiative power of the AGN
and, at least for the sources with similar multi-wavelength properties,
of the accretion rate. At a given line luminosity, FR0s are $\sim$100 times
less luminous than FRIs in total radio power.
Therefore, the compact radio galaxies studied by B15 are not simply unresolved sources, but they show a genuine lack of extended radio emission at large scales.
Possible explanations
have been proposed, such as: (i) FR0s could be short-lived and/or recurrent
episodes of AGN activity, not long enough for radio jets to develop at large
scales (Sadler et al. 2014; Sadler 2016), or (ii) FR0s produce slow jets
experiencing instabilities and entrainment in the dense interstellar medium of
the host galaxy corona that causes their premature disruption (Bodo et
al. 2013; B15; Baldi et al. 2018a).
In this paper we present the first systematic X-ray study of a sample of
FR0 radio galaxies.
Since the radio selection of compact radio galaxies carried out
by Baldi \& Capetti (2010) and B15 turns out to correspond to an optical selection,
we adopt these radio and spectro-photometric characteristics to define our
FR0 class of low-excitation radio galaxies.
This classification differs from the other FR classes
not only in the radio morphology but also in specific
spectro-photometric characteristics.
The key aim of our work is to investigate the central regions of FR0s
through X-rays in an effort to shed light on the nature of their central
engine. A comparison with the radio, optical and X-ray properties of the FRI
radio galaxies is also pursued to further explore differences/similarities
between these two classes of sources.
The FR0/FRI comparison is one of the main drivers of the present study,
and this motivates our selection of only LEG spectroscopic types. Furthermore, this stricter
definition of FR0 enables us to restrict to a more homogeneous population of compact sources,
avoiding confusion with, e.g., Seyfert-like objects or GPS/CSS sources (see Section 4.3).
Data were taken from the public archives of the X-ray satellites
currently in operation (e.g. \emph{XMM-Newton}, \emph{Chandra},
\emph{Swift}). Most of the X-ray data of our sample are unpublished.
Incidentally, we note that very recently a FR0 radio galaxy, i.e. Tol1326-379,
has been associated for the first time with a $\gamma$-ray source (Grandi,
Capetti \& Baldi 2016). Tol1326-379 shows a GeV luminosity typical of FRIs but
with a steeper $\gamma$-ray spectrum that can be related to intrinsic jet
properties or to a different viewing angle. For this source, a \emph{Swift} Target of Opportunity
(ToO) observation was performed during
the writing of the paper.
The paper is organized as follows: in Section 2 we define the sample. In
Section 3 we describe the observations, data reduction and spectral analysis,
while the results are discussed in Section 4. Notes on single sources and details of the X-ray analysis are reported in Appendix~A.
The multi-wavelength properties of the FRI comparative sample are listed in Appendix~B.
Throughout the paper we use the following
cosmological parameters: H$_{0}$=70~km~s$^{-1}$~Mpc$^{-1}$,
$\Omega_{m}$=0.3, $\Omega_{\Lambda}$=0.7 (Spergel et al. 2007).
\begin{table*}
\caption{Log of the observations of the FR0 sample.}
\label{tab1}
\centering
\begin{tabular}{l l l l l l}
\hline\hline
SDSS name &Telescope &ObsID & Exposure [ks] &Offset ['] \\
\hline
J004150.47$-$091811.2 &Chandra &15173 &42.5 &3.4\\
J010101.12$-$002444.4$^{*}$ &Chandra &8259 &16.8 &0.0 \\
J011515.78+001248.4$^{*}$ &XMM &0404410201 &54.0 &0.095\\
J015127.10$-$083019.3 &Swift &00036976004 &5.6 &0.894\\
J080624.94+172503.7 &Swift &00085577001 &1.3 &0.337\\
J092405.30+141021.5 &Chandra &11734 &30.1 &0.0\\
J093346.08+100909.0 &Swift &00036989002 &12.2 &1.867\\
J094319.15+361452.1 &Swift &00036997001 &5.4 &3.437\\
J104028.37+091057.1 &XMM &0038540401 &24.0 &0.003\\
J114232.84+262919.9 &XMM &0556560101 &32.9 &13.7\\
J115954.66+302726.9 &Swift &00090129001 &3.4 &2.803\\
J122206.54+134455.9 &Swift &00083911002 &1.3 &7.641\\
J125431.43+262040.6 &Chandra &3074 &5.8 &2.0\\
Tol1326$-$379 &Swift &00034308001 &4.3 &0.0\\
J135908.74+280121.3 &Chandra &12283 &10.1 &3.1\\
J153901.66+353046.0 &Swift &00090113002 &2.8 &0.96\\
J160426.51+174431.1 &Chandra &4996 &22.1 &2.5\\
J171522.97+572440.2 &Chandra &4194 &47.9 &0.0\\
J235744.10$-$001029.9$^{*,a}$ &XMM &-- &-- &-- \\
\hline\\
\multicolumn{6}{l}{$^{*}$ The source is already present in the FR0 sample of B15.} \\
\multicolumn{6}{l}{$^{a}$This source is part of the Third XMM-Newton Serendipitous Source Catalog,}\\
\multicolumn{6}{l}{Sixth Data Release (3XMM-DR6; Rosen et al. 2016).}
\end{tabular}
\end{table*}
\begin{figure}
\includegraphics[width=\columnwidth]{fig1.pdf}
\caption{FIRST versus [OIII] luminosities (both in erg~s$^{-1}$) adapted from B15.
{\it Red points} are the FR0s presented in this paper while {\it empty
cyan circles} are the FR0s of B15. The {\it black dots} correspond to the SDSS/NVSS
sample analyzed by Baldi \& Capetti (2010), the {\it blue crosses} are the
low-luminosity BL Lacs studied by Capetti \& Raiteri (2015), the {\it empty
pink triangles} are the CoreG galaxies (Balmaverde \& Capetti 2006) and
the {\it green stars} are the FRIs of the 3CR sample. The dashed line marks
the boundary of the location of Seyfert galaxies. The solid line represents
the line-radio correlation followed by the 3CR/FRIs.}
\label{fig1}
\end{figure}
\section{Sample selection}
In order to build our sample of FR0 sources we took at first the SDSS/NVSS/FIRST
sample of radio galaxies by Best \& Heckman (2012) \footnote{Best \& Heckman (2012) built a sample of 18286 AGN by cross-correlating the seventh data release of the Sloan Digital Sky Survey (SDSS) with the NRAO VLA Sky Survey (NVSS) and the Faint Images of the Radio Sky at Twenty Centimeters survey (FIRST). The sample is selected at a flux density level of 5~mJy.} and we applied the criteria listed below, following the approach of B15. This guarantees that we are considering LEG compact sources:
\begin{itemize}
\item [-] redshift z $\leq$0.15;
\item [-] compact in the FIRST images, corresponding to a radio size $\lesssim$10~kpc;
\item [-] FIRST flux $>$30~mJy (to ensure a higher fraction of X-ray detected objects);
\item [-] LEG optical classification.
\end{itemize}
We obtained a list of 73 objects from which we excluded the four sources
classified as low-luminosity BL Lacs (Capetti \& Raiteri 2015).
We performed a search for X-ray observations of the remaining 69 sources
available in the public archives of the X-ray satellites currently
in operation\footnote{\ttfamily{http://heasarc.gsfc.nasa.gov/cgi-bin/W3Browse/w3browse.pl}}
and found 15 objects. Some sources are the targets of the X-ray pointings, while
others are serendipitous sources in the field of other targets. In order to
enlarge the sample, we also included two FR0s already presented in B15
having public X-ray observations. These sources were not included in our starting sample
since they have FIRST fluxes $\sim$10~mJy.
Finally, during the
writing of the paper Tol1326-379, the first FR0 detected in $\gamma$-rays by
the \emph{Fermi} satellite (Grandi et al. 2016), was observed by \emph{Swift}
as a ToO and is therefore considered in this work.
The entire sample of FR0s studied here is reported in Table~1.
Figure~1 shows the location of our 19 FR0s in the FIRST versus [OIII] diagram
adapted from B15. FR0s and 3CR/FRIs share the same range in L$_{\rm [OIII]}$, but FR0s
have lower radio luminosities: this strong deficit in total radio emission places FR0s to the left of 3CR/FRIs (B15),
confirming that our selection criteria are valid.
Even considering low-luminosity radio galaxies such as FRICAT sources (Capetti et al. 2017), FR0s still occupy the left side
of the plot (see Figure~6 of Baldi et al. 2018a) forming a continuous distribution from FR0 to sFRICAT, FRICAT and 3CR/FRI sources. The low-luminosity BL Lacs of Capetti \& Raiteri (2015; see also Baldi et al. 2018b) are also shown in Figure~1 and have generally 1.4~GHz
radio luminosities higher than FR0s (see Section 4.2). In the same plot also CoreG galaxies\footnote{CoreG galaxies are low-luminosity radio sources hosted by early-type galaxies and defined ``core'' on the basis of the presence of a shallow core in their host surface brightness profile.} are reported (for more details see Balmaverde \& Capetti 2006; Baldi \& Capetti 2009).
The radio properties of our sample meet the FR0 criteria discussed by B15.
The sources generally show flat radio spectra and are compact.
Indeed, as shown in Table~2, the ratios between the FIRST and NVSS
fluxes at 1.4~GHz are around 1, indicating that the extended component in these sources is
negligible. The core dominance {\it R}\footnote{{\it R} is defined
as the ratio between the 8.5~GHz (CLASSSCAT: Myers et al. 2003; Browne et al. 2003) and 1.4~GHz (NVSS) flux
densities.} is on average $\sim$30 times higher than that of 3CR/FRIs and overlaps with the FR0 values of B15 (Figure~2).
The paucity of information at radio frequencies higher than 1.4~GHz for the FRICAT sources
prevents us from comparing their core dominance with that of our X-ray sample of FR0s.
Finally, the radio spectral indices measured between 8.5~GHz (4.9~GHz) and
1.4~GHz are generally flat with a median value $\alpha_{r}$=-0.04 (see Table~2 for more details).
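Both quantities in Table~2 follow directly from the two flux densities; a minimal sketch, assuming the power-law convention S$_{\nu} \propto \nu^{\alpha}$ used here (function names ours):

```python
import math

def spectral_index(s_high, s_low, nu_high=8.5, nu_low=1.4):
    """alpha such that S_nu ~ nu^alpha, from flux densities
    at two frequencies (in GHz)."""
    return math.log10(s_high / s_low) / math.log10(nu_high / nu_low)

def log_core_dominance(s_85, s_14):
    """Log R, with R the ratio of the 8.5 GHz to the 1.4 GHz flux density."""
    return math.log10(s_85 / s_14)
```

With this convention, Log{\it R}$\,=\alpha_{r}\log(8.5/1.4)\simeq 0.78\,\alpha_{r}$ whenever both quantities are computed from the same pair of flux densities, consistent with the paired values of $\alpha_{r}$ and Log{\it R} in Table~2.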
\begin{center}
\begin{table}
\caption{Radio parameters: (1) source name, (2) flux ratio between FIRST and NVSS fluxes, (3) radio spectral index (S$_{\nu} \propto \nu^{\alpha}$) between 8.5~GHz (The Cosmic Lens All Sky Survey, CLASSSCAT, Myers et al. 2003, Browne et al. 2003) and 1.4~GHz (NVSS) unless otherwise specified in the notes, (4) core dominance, {\it R}, defined as the ratio between the 8.5~GHz (CLASSSCAT) and 1.4~GHz (NVSS) flux densities.}
\label{tab2}
\begin{tabular}{l c r r}
\hline\hline
Source name &F$_{\rm FIRST}$/F$_{\rm NVSS}$ &$\alpha_{r}$ &Log\it{R}\\
\hline
J004150.47-09 &0.74 &-0.13$^{a}$ &-- \\
J010101.12-00 &0.70 &-0.45$^{b}$ &-0.47$^{b}$ \\
J011515.78+00 &1.05 &-0.04$^{b}$ &-0.06$^{b}$ \\
J015127.10-08 &0.88 &-- &-- \\
J080624.94+17 &0.96 &-- &-- \\
J092405.30+14 &0.95 &-- &-- \\
J093346.08+10 &0.80 &-0.47 &-0.37 \\
J094319.15+36 &0.99 &0.82 &0.65 \\
J104028.37+09 &0.94 &-0.63$^{a}$ &-- \\
J114232.84+26 &0.94 &0.11 &0.09 \\
J115954.66+30 &1.06 &-0.01 &-0.01 \\
J122206.54+13 &0.99 &0.28 &0.22 \\
J125431.43+26 &0.98 &-0.47 &-0.37 \\
Tol1326-379 & -- &0.37$^{c}$ &-0.42 \\
J135908.74+28 &1.12 &-0.12 &-0.09 \\
J153901.66+35 &0.92 &-0.10 &-0.08 \\
J160426.51+17 &0.75 &0.15 &0.12 \\
J171522.97+57 &0.86 &-0.43 &-0.34 \\
J235744.10-00 &0.65 &-0.67$^{b}$ &-0.58$^{b}$ \\
\hline\\
\multicolumn{4}{l}{$^{a}$ $\alpha_{r}$ between 4.9~GHz (JVASPOL or PMN) and 1.4~GHz.}\\
\multicolumn{4}{l}{$^{b}$ Values of $\alpha_{r}$ and {\it R} are from B15.}\\
\multicolumn{4}{l}{$^{c}$ The value of $\alpha_{r}$ is from Grandi et al. (2016).}\\
\end{tabular}
\end{table}
\end{center}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig2.pdf}
\caption{Histograms of the core dominance (adapted from B15) for: {\it upper panel}- 3CR/FRIs
({\it black solid line}), CoreG ({\it blue dashed line}), low-luminosity BL
Lacs ({\it magenta dot-dashed line}). {\it Lower panel}- FR0s of B15 ({\it
red ticked line}) and our sample of FR0s ({\it black solid line}). The
{\it red filled} histograms are the sources in common with B15.}
\label{fig1b}
\end{figure}
\section{X-ray observations and analysis}
\subsection{Data reduction}
Data were collected from different X-ray satellites.
In particular, 7 sources
were observed with \emph{Chandra}, 4 with \emph{XMM-Newton}
and 8 with \emph{Swift}/XRT. The observation log is in Table~1.
Several FR0 sources are not the primary target of the observation but are in
the field of view of the pointing. The offset, i.e. the angular distance of the
source from the pointing centre\footnote{See also
\ttfamily{https://heasarc.gsfc.nasa.gov/W3Browse/w3browse-help.html$\#$distance$\_$from$\_$center}},
is also reported in Table~1. When more than one observation was available, we
chose the one with the smallest offset or the longest exposure. We reduced data
for all the sources but one, i.e. J235744.10-00, since it is part of the 3XMM-DR6
catalog (Rosen et al. 2016).
All \emph{Chandra} observations were performed using CCD, both ACIS-S and ACIS-I. Data were reprocessed using CIAO version 4.7 with calibration database CALDB version 4.6.9 and applying standard procedures. Table~A1 reports the extraction regions chosen for the nuclear and background spectra of each source.
Data were then grouped to a minimum of 15 counts per bin over the energy range 0.5-7~keV. None of the seven sources pointed by \emph{Chandra} is affected by pile-up.
For the \emph{XMM-Newton} observations we reduced and analyzed data from the EPIC-pn camera using SAS version 14.0 and the latest calibration files. Periods of high particle background were screened by computing light curves above 10~keV. Extraction regions for the source and background spectra are reported in Table~A1.
Data were then grouped to a minimum of 15 counts per bin over the energy range 0.5-10~keV for two out of the three sources. For the source J235744.10-00 the 0.2-12~keV flux was directly taken from the 3XMM-DR6 catalog, and extrapolated to 2-10~keV
assuming the same spectral slope adopted in the catalog.
\emph{Swift}/XRT data were reduced using the online data analysis tool provided by the ASI Space Science Data Center (SSDC) \footnote{\ttfamily http://swift.asdc.asi.it/}. The only exception is Tol1326-379 that was observed as a ToO during the writing of the paper and the data were processed and analyzed using standard XRT tools (xrtpipeline v.0.13.2 and caldb v.1.0.2). Source spectra for each observation were extracted from a circular region of 20$''$ radius, while the background was taken from an annulus with an inner radius of 40$''$ and outer radius of 80$''$. Spectra were grouped to a minimum of 5 (or 3) counts per bin in the energy range 0.5-10~keV. In the case of the source J153901.66+35 no grouping was applied.
\subsection{Imaging analysis}
The inspection of the X-ray images indicates that 6 sources
are in a dense environment (Figure~3). Four lie at the outskirts of a cluster
of galaxies (i.e. J160426.51+17, J092405.30+14, J135908.74+28, J011515.78+00), J004150.47-09 is located at the
centre of the cluster Abell85 and J171522.97+57 is the brightest member in a compact
group of more than 13 galaxies (Pandge et al. 2012).
For the sources for which a clear extension cannot be confirmed by the X-ray
images, information on the environment was checked in the literature (see
Table~3). Four other sources were found in clusters or compact groups (CG), i.e.,
J093346.08+10, J122206.54+13, J080624.94+17, J115954.66+30 (Diaz-Gimenez et al. 2012; Owen et al. 1995;
Koester et al. 2007). In summary, we found that at least 50$\%$ of the FR0s of our sample
are in a dense environment. This value should be considered as a lower limit to the fraction
of FR0s in dense environments in our sample since the analysis suffers from a bias
due to the nature of the sample mainly consisting of X-ray serendipitous
sources. However, we are aware that strong conclusions on the environment of FR0 as a class
cannot be drawn with the available data.
\begin{figure*}
\centering
\includegraphics[width=6cm,height=6cm]{fig3a.pdf}
\hspace{0.1cm}
\includegraphics[width=6cm,height=6cm]{fig3b.pdf}
\hspace{0.1cm}
\includegraphics[width=6cm,height=6cm]{fig3c.pdf}
\hspace{0.1cm}
\includegraphics[width=6cm,height=6cm]{fig3d.pdf}
\hspace{0.1cm}
\includegraphics[width=6cm,height=6cm]{fig3e.pdf}
\hspace{0.1cm}
\includegraphics[width=6cm,height=6cm]{fig3f.pdf}
\caption{Images of the clusters found in the X-ray band. \emph{Panels
(a)-(d)}: \emph{Chandra} 0.3-7~keV images of the sources J160426.51+17, J092405.30+14,
J171522.97+57, J135908.74+28. The FR0 is labelled in magenta while the name of the cluster is in black. \emph{Panel (e)}: \emph{Chandra} 3-7~keV image of the source J004150.47-09. \emph{Panel (f)}: \emph{XMM-Newton}/pn image of the
source J011515.78+00. All images have been smoothed with a Gaussian with kernel
radius 3.}
\label{fig}
\end{figure*}
\subsection{Spectral analysis}
The spectral analysis was performed using the {\small XSPEC} version 12.9.0
package. We applied $\chi^{2}$ statistics to spectra binned to at least
15 counts per bin. When the grouping was smaller, the C-statistic was
adopted. Errors are quoted at 90$\%$ confidence for one interesting parameter
($\Delta\chi^{2}$=2.71). The summary
of the best-fit spectral results is reported in Table~3. Notes on single
sources can be found in Appendix~A.
Spectral fitting was performed in the energy range 0.5-7~keV (\emph{Chandra})
and 0.5-10~keV (\emph{XMM-Newton} and \emph{Swift}). The X-ray luminosities
presented throughout the paper are calculated in the 2-10~keV range in order to make
a direct comparison with the literature.
As a baseline model, we considered a power-law convolved with the Galactic
column density (Kalberla et al. 2005). In 4 out of 6 sources for which we
could directly observe the cluster in the X-ray images (Figure~3), residuals
showed evidence for the presence of a soft component. Therefore we included a thermal
model ({\small APEC}).
A thermal component was also required in three other FR0s. The presence of a compact group was confirmed for J093346.08+10 and J115954.66+30 by checking the literature. For the third source (i.e. J015127.10-08) no information on the environment was found.
The nature of the soft X-ray emission is however uncertain. It could be due to an extended intergalactic medium (which cannot be revealed because of the poor X-ray spatial resolution and/or short exposure time) or
related to the hot corona typical of early-type galaxies (Fabbiano, Kim \& Trinchieri 1992).
We could measure the power-law photon indices $\Gamma$ for 7 out of 18
objects. The spectral slopes are generally steep, with a mean value
$<\Gamma>$=1.9 and a standard deviation of 0.3.
When it was not possible to leave the photon index free, it was fixed to a value of 2. We
checked whether different values of the photon index lead to significant
changes in the estimate of the fluxes. We found that for $\Gamma$ ranging
between 1.5 and 2.5 the fluxes are consistent within the errors.
In four cases the low statistics did not allow us to constrain the
power-law component and to exclude the presence of thermal emission, therefore we assumed
a simple power-law ($\Gamma$=2 fixed) as the best-fit model and we adopted the
resulting 2-10~keV flux as an upper limit for the nuclear component.
Generally, the X-ray spectra of our sample do not show evidence for intrinsic
absorption. Indeed, the addition of an intrinsic absorber component does not significantly improve
the fit. An upper limit to this component can be estimated only for three out of 18 sources (see Appendix~A).
Therefore, we tend to favor the scenario in which the circum-nuclear environment of FR0
is depleted of cold matter, similarly to FRIs (Balmaverde et al. 2006; Baldi \& Capetti 2008, 2010; Hardcastle et al. 2009).
The analyzed FR0s have X-ray nuclear luminosities covering three orders of magnitude,
10$^{40}$--10$^{43}$~erg~s$^{-1}$. The average value including the upper limits is $<$LogL$_{\rm X}>$=41.30 (see Figure~4, {\it upper panel}).
\section{Discussion}
\subsection{Compact versus extended low-excitation radio galaxies}
FR0/LEGs and FRI/LEGs reside in similar galaxies and share similar nuclear optical properties
(B15 and references therein). Given that low-ionization optical spectra can be also produced
by shocks or old stellar population emission (Binette et al. 1994; Sarzi et al. 2010; Capetti \& Baldi 2011; Balmaverde \& Capetti 2015; Mingo et al. 2016),
this X-ray study is a key tool to compare FR0 and FRI properties taking advantage
of an energy band directly
related to the nuclear emission processes.
We compared the X-ray luminosities of our sample to those of 35 FRI radio
galaxies belonging to the 3CR/3CRR catalogs and having X-ray data available.
The 2-10~keV luminosities of FRIs are from literature or obtained by a direct
analysis of the data stored in the public archives (see Appendix~B for
details).
Various samples of FRIs can be considered for the comparison with
compact radio sources. For example the 3C sample includes $\sim$30 FRIs,
the B2 sample is formed by $\sim$100 radio galaxies, about half of them being
FRIs. The recent FRICAT catalog is formed by 219 sources, selected from the SDSS/NVSS surveys,
with a flux limit of 5~mJy at 1.4~GHz.
However, the multi-wavelength information is rather limited and, in
particular, the coverage of X-ray observations is very small for all samples
except for the 3C for which {\it Chandra} data are available for all sources up to
z=1 (Massaro et al. 2010, 2012, 2013). Being selected with a rather high flux threshold (9~Jy at
178~MHz), they represent the tip of the iceberg of the FRI population. While
this means that the view of FRIs offered by the 3C sample is limited, they
certainly provide us with a benchmark against which to compare the properties
of the compact radio sources.
The two distributions in Figure~4 ({\it upper panel}) clearly overlap: the two-sample univariate test program in the ASURV package
(TWOST, Feigelson \& Nelson 1985; Isobe, Feigelson \&
Nelson 1986) applied to the data (including upper limits) confirms their similarity (P$_{\rm ASURV}$=0.76)\footnote{Probability value according
to the Gehan's generalized Wilcoxon test.}. We assume that
P=0.05 is the probability threshold to rule out the hypothesis that the two samples are
drawn from the same parent population.
This result indicates a strong correspondence between the X-ray cores
of low-excitation FR0 and FRI radio galaxies\footnote{A similar result is obtained even excluding upper limits and applying a
Kolmogorov-Smirnov test to the data (P$_{\rm KS}$=0.45).}.
This point is strengthened by Figure~4 ({\it lower panel}) where the X-ray
(2-10~keV) and radio core (5~GHz) luminosities of FR0s and FRIs are plotted
together. Apart from the two objects having measurements at 4.9~GHz (see Table~2), the luminosities at 5~GHz of the compact sources were extrapolated from
1.4~GHz (FIRST) data considering the radio spectral slopes reported in
Table~2. The core luminosities of the extended radio galaxies are from
literature (Buttiglione et al. 2010, 2011; Hardcastle et al. 2009). The two
samples occupy the same area in the plot: the generalized Kendall's $\tau$
test (ASURV package: Isobe et al. 1986) gives a probability of correlation
greater than $99.99\%$ and $99.95\%$ for FRIs and FR0s (including upper
limits), respectively.
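The extrapolation of the 1.4~GHz (FIRST) luminosities to 5~GHz described above amounts to the following (an illustrative sketch; the function name is ours):

```python
def extrapolate_flux(s_in, alpha, nu_out=5.0, nu_in=1.4):
    """Extrapolate a flux density (or luminosity density) from nu_in
    to nu_out (in GHz), assuming a power law S_nu ~ nu^alpha."""
    return s_in * (nu_out / nu_in) ** alpha
```

A perfectly flat spectrum ($\alpha=0$) leaves the value unchanged, while steep spectra give lower extrapolated 5~GHz values.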
We also tested the possible influence of redshift in driving this correlation
estimating a partial rank coefficient \footnote{The partial rank coefficient
estimates the correlation coefficient between two variables after removing
the effect of a third. If A and B are both related to the variable z, the
partial Kendall's $\tau$ correlation coefficient is:
$\tau_{AB,z}=\frac{\tau_{AB}-\tau_{Az}\tau_{Bz}}{\sqrt{(1-\tau_{Az}^{2})(1-\tau_{Bz}^{2})}}$.}. The
effect is negligible and the value of the correlation coefficient does not
change significantly.
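The partial Kendall's $\tau$ formula in the footnote translates directly into code (our sketch):

```python
import math

def partial_kendall_tau(tau_ab, tau_az, tau_bz):
    """Correlation of A and B after removing the effect of z:
    tau_AB.z = (tau_AB - tau_Az*tau_Bz)
               / sqrt((1 - tau_Az^2)*(1 - tau_Bz^2))."""
    return ((tau_ab - tau_az * tau_bz)
            / math.sqrt((1.0 - tau_az ** 2) * (1.0 - tau_bz ** 2)))
```

If neither variable correlates with z ($\tau_{Az}=\tau_{Bz}=0$), the partial coefficient reduces to $\tau_{AB}$, as expected.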
The L$_{\rm X}$--L$_{\rm 5GHz}$ correlation already found for 3CR/FRIs (Balmaverde
et al. 2006) is now established for FR0s as well, pointing towards a jet origin of
both the radio and X-ray photons. As reported in Section 3.3, an intrinsic absorber is not
required by the fit, at least for those sources having good quality X-ray spectra. Therefore,
it is unlikely that the jet-related X-ray emission observed in FR0s is the unabsorbed component
of a HEG-like spectrum. A similar result was obtained by Mingo et al. (2014), who concluded that the LEGs of their 2~Jy sample
cannot be interpreted as simple heavily obscured HEGs (see also Baldi et al. 2010). This point is further strengthened below by
the estimate of the Eddington-scaled luminosities for our FR0 sample.
From the stellar velocity dispersion relation of Tremaine et
al. (2002)\footnote{log(M$_{\rm BH}$/M$_{\odot}$)=(8.13$\pm$0.06)+(4.02$\pm$0.32)log($\sigma$/200~km~s$^{-1}$)} we estimated the black hole masses (M$_{\rm BH}$) of our sample of FR0s. The values of M$_{\rm BH}$ range between $\sim 10^{8}$ and $\sim
10^{9}$~M$_{\odot}$ (see Table~4). From the relation
L$_{\rm bol}$=3500~L$_{\rm [OIII]}$ (Heckman et al. 2004) we derived the bolometric
luminosities and successively the Eddington-scaled luminosities
($\dot{L}$=L$_{\rm bol}$/L$_{\rm Edd}$) given in Table~4. These estimates for
the FR0s of our sample correspond to low values of $\dot{L}$ ($\sim
10^{-3}-10^{-5}$) typical of inefficient accretion modes (ADAF-like, Narayan
\& Yi 1994, 1995) and similar to those found for FRIs (see also Table~B1) and for 2~Jy LEGs (Mingo et al. 2014).
This result strengthens
the interpretation of a non-thermal origin of the high-energy nuclear emission
in low-excitation compact sources, already suggested by the correlation between radio and X-ray
emissions.
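These estimates chain together simple scaling relations; a sketch (the Eddington normalization $L_{\rm Edd}\approx1.26\times10^{38}\,(M_{\rm BH}/M_\odot)$~erg~s$^{-1}$ is a standard value we assume, not one quoted in the text):

```python
import math

def log_mbh_tremaine(sigma_kms):
    """Black hole mass from the Tremaine et al. (2002) M-sigma relation
    quoted in the footnote (central values of the coefficients)."""
    return 8.13 + 4.02 * math.log10(sigma_kms / 200.0)

def eddington_ratio(log_l_oiii, log_mbh):
    """L_bol/L_Edd with L_bol = 3500 L_[OIII] (Heckman et al. 2004)."""
    log_lbol = log_l_oiii + math.log10(3500.0)
    log_ledd = math.log10(1.26e38) + log_mbh   # assumed L_Edd normalization
    return 10.0 ** (log_lbol - log_ledd)

# e.g. J004150.47-09 (Table 4): log L_[OIII]=39.27, log M_BH=8.96
print(eddington_ratio(39.27, 8.96))   # ~5.7e-5, close to the 5.5e-5 in Table 4
```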
\subsection{Compact radio galaxies versus BL Lac objects}
The radio compactness of FR0s is due to the lack of extended emission and is
not related to Doppler boosting of the jet radiation. This is evident in the
compact radio galaxies that show emission lines with large equivalent widths in
their optical spectra (such as Tol1326-379, Grandi et al. 2016) but it is less
straightforward in objects overwhelmed by the galaxy emission. Indeed, in this
case, the stellar population dominates the emission, hiding the nature of the
underlying AGN, which could be either a low-luminosity BL Lac with an extended jet
pointed towards the observer, or a genuine compact radio galaxy.
In Figure~1 it is evident that FR0s and low-luminosity BL Lacs occupy different
regions of the L$_{\rm [OIII]}$-L$_{\rm 1.4~GHz}$ plane. The two classes have similar
emission-line luminosities but BL Lacs are more powerful in the radio band.
This is expected since L$_{\rm [OIII]}$ is an isotropic indicator of the AGN
luminosity, while the radio emission suffers from relativistic effects.
The ratio between the [OIII]$\lambda$5007 line and the 2-10~keV luminosities
(R$_{\rm [OIII]}$=L$_{\rm [OIII]}$/L$_{\rm (2-10~keV)}$) can be considered a useful tool to
distinguish misaligned and aligned jets. We then collected [OIII]
luminosities of FRIs and FR0s from literature (Buttiglione et al. 2010, 2011;
Hardcastle et al. 2009; Leipski et al. 2009) and from SDSS/DR7
survey \footnote{\ttfamily{http://classic.sdss.org/dr7/}}, respectively (see
Appendix~B and Table~4). For the [OIII]-line luminosity of BL Lacs we refer to
the work of Capetti \& Raiteri (2015), while the X-ray 2-10~keV luminosity was
directly obtained from the Swift/XRT instrument using the SSDC online data
analysis tool.
The R$_{\rm [OIII]}$ average values (including upper limits \footnote{We used the
Kaplan-Meier (KM) estimator in ASURV to derive the average values of
R$_{\rm [OIII]}$ in the presence of censored data.}) for FR0s and FRIs are
consistent ($<$R$_{\rm [OIII], FR0}$$>$=-1.6$\pm$0.2 and $<$R$_{\rm [OIII],
FRI}$$>$=-1.7$\pm$0.2). This is statistically attested by the TWOST test in
ASURV (see Section 4.1 for details) that turns out with a probability
P$_{\rm ASURV}$=0.9. On the contrary, the Doppler boosting of the X-ray emission
shifts the BL Lacs to lower values $<$R$_{\rm [OIII], BL Lacs}$$>$=-3.3$\pm$0.2.
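The diagnostic reduces to a difference of log-luminosities; a sketch (the dividing value of $-2.5$ is a hypothetical midpoint between the two class averages, not a threshold from the text):

```python
def r_oiii(log_l_oiii, log_l_x):
    """log L_[OIII] - log L_(2-10 keV): the R_[OIII] diagnostic."""
    return log_l_oiii - log_l_x

def looks_beamed(log_l_oiii, log_l_x, cut=-2.5):
    """True if the ratio falls near the BL Lac locus (<R> ~ -3.3)
    rather than the misaligned FR0/FRI locus (<R> ~ -1.7)."""
    return r_oiii(log_l_oiii, log_l_x) < cut

# e.g. Tol1326-379 (Table 4): R = 40.60 - 42.29 = -1.69 -> misaligned
print(looks_beamed(40.60, 42.29))   # False
```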
\subsection{Compact radio galaxies versus young sources}
The comparison between FR0s and young radio sources in the X-ray band suffers from the paucity of dedicated studies in this field (Kunert-Bajraszewska et al. 2014 and references therein). The samples of young sources for which high-energy information is available mainly include powerful GPS and CSS sources. These are generally different from ours, being characterized by high X-ray (2-10~keV), radio (5~GHz) and [OIII]-line luminosities typical of AGN with efficient accretion rates (Guainazzi et al. 2006; Vink et al. 2006; Labiano 2008; Siemiginowska et al. 2008; Tengstrand et al. 2009).
Moreover, the 16 CSO sources recently studied in X-rays by Siemiginowska et al. (2016) showed generally flat spectra, absorbed by intrinsic column densities (see also Ostorero et al. 2017).
Our sources seem more similar to three low-luminosity compact sources (LLC;
Kunert-Bajraszewska \& Thomasson 2009) discussed by Kunert-Bajraszewska et
al. (2014) and classified as LEG. Their radio and X-ray luminosities (see
Table~2 of their work) locate these sources in our correlation strip shown in
Figure~4 ({\it lower panel}). The authors suggest that such LLC are
intermittent radio sources rather than young objects evolving into FRIs, in line
with the recent demographic study of Baldi et al. (2018a). The latter showed that the
space density of FR0s in the local Universe (z$<$0.05) is larger by a factor
of $\sim$5 than FRIs, definitively rejecting the hypothesis that FR0s
are young radio galaxies that will all eventually evolve
into extended FRI radio sources.
\begin{figure}
\centering
\includegraphics[width=8cm, height=8cm]{fig4a.pdf}
\includegraphics[width=8.5cm, height=8.5cm]{fig4b.pdf}
\caption{{\bf Upper panel}: histogram of the X-ray luminosity for the FR0s of our sample ({\it black solid line}) and the comparative sample of FRIs ({\it red solid line}). No significant difference is observed between the two samples. The {\it dashed histograms} represent upper limits in each sample. In the plot the probabilities from the Kolmogorov-Smirnov test (P$_{\rm KS}$) and the TWOST test in ASURV (P$_{\rm ASURV}$) are also reported. {\bf Lower panel}: 2-10~keV X-ray luminosity versus 5~GHz radio core luminosity for FR0s ({\it black circles}) and FRIs ({\it red squares}). Arrows indicate upper limits.
The {\it black solid line} is the linear regression for the overall sample of FR0s and FRIs (excluding the upper limits): LogL$_{\rm X}$=(7.8$\pm$0.6)+(0.8$\pm$0.1)LogL$_{\rm 5GHz}$ (see also Hardcastle et al. 2009). The {\it black dashed lines} are the uncertainties on the slope.}
\label{fig4}
\end{figure}
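As a cross-check of the correlation shown in the lower panel, one can refit the X-ray-detected FR0s alone, using the (LogL$_{\rm 5~GHz}$, LogL$_{\rm X}$) pairs of Table~4 with upper limits excluded (a sketch; the published fit also includes the FRIs, so the coefficients differ):

```python
import numpy as np

# (log L_5GHz, log L_X) for the X-ray-detected FR0s of Table 4
# (upper limits excluded).
data = np.array([
    (40.21, 41.10), (39.97, 41.15), (40.69, 43.20), (41.38, 42.16),
    (38.46, 40.17), (40.25, 40.94), (39.10, 39.99), (40.91, 42.14),
    (40.62, 42.19), (39.93, 42.02), (39.79, 42.29), (40.51, 41.69),
    (40.82, 42.20), (40.37, 41.64), (39.30, 40.93),
])
slope, intercept = np.polyfit(data[:, 0], data[:, 1], 1)
print(f"log L_X = ({intercept:.1f}) + ({slope:.2f}) log L_5GHz")
```

The slope obtained from the compact sources alone is broadly consistent with the joint FR0+FRI fit quoted in the caption.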
\section{Summary and conclusions}
We analyzed 19 FR0s selected according to the criteria discussed in Section~2 and having public X-ray observations. Most of
the sources have short exposures and/or large offsets. In spite of the limited
quality of the data, our analysis allowed us to characterize for the first
time FR0 sources at high energies.
FR0s have X-ray luminosities (2-10~keV) between 10$^{40}$-10$^{43}$
~erg~s$^{-1}$, comparable to those of FRIs. The clear correlation between radio
and X-ray luminosities observed in both compact and extended objects favours
the interpretation of a non-thermal origin of the 2-10~keV photons. As
in FRIs, the high-energy emission in FR0s is produced by the jet.
Moreover, the high black
hole masses (10$^{8}$-10$^{9}$~M$_{\odot}$) and the small values of the
bolometric luminosities, as deduced from the [OIII]-emission line, suggest an
inefficient accretion process (ADAF-like) at work also in the compact sources.
These results confirm that the nuclear properties of FRIs and FR0s are similar
and that the main difference between the two classes remains the lack of extended emission in FR0s.
We exclude important beaming effects in the X-ray spectra of FR0s on the basis
of the ratio between the [OIII]$\lambda$5007 line and the 2-10~keV luminosity
(R$_{\rm [OIII]}$). While the [OIII]-line luminosity is expected to be emitted isotropically,
the X-ray radiation could be amplified by Doppler boosting effects. While FR0s
and FRIs have similar R$_{\rm [OIII]}$ ($\sim$-1.7), low-luminosity BL Lacs, whose X-ray radiation
is beamed, have smaller values (R$_{\rm [OIII]}$ $\sim$-3.3).
A comparison with the X-ray properties of young sources is limited by the
paucity of X-ray studies of GPS, CSS and CSO. Considering the available data,
we do not find spectral similarities between these sources and
FR0s. Generally, the studied young sources have higher X-ray luminosities
(they are probably associated with an efficient accretion disc) and often show
signatures of intrinsic absorption. Therefore, FR0s could be different sources
characterized by intermittent activity, as in the case of J004150.47-09 and/or by
jets with intrinsic properties that prevent the formation and evolution of extended
structures.
\begin{landscape}
\begin{table}
\caption{Main spectral parameters of the FR0 sample: (1) Name of the source, (2) redshift, (3) best-fit spectrum, (4) Galactic hydrogen column density, (5) power-law photon index, (6) temperature of the thermal component ({\small APEC}), (7) flux of the thermal component in the soft X-ray band (0.5-5~keV) corrected for Galactic absorption, (8) flux of the non-thermal component in the 2-10~keV band corrected for Galactic absorption, (9) $\chi^{2}$/degrees of freedom reported when the grouping was $\geq$15. When the grouping was $<$15 the C-statistic was applied, (10) type of environment, (11) references for the environment.}
\label{tab3}
\footnotesize
\begin{tabular}{l c c c c c c c c cc}
\hline\hline
Source name &z &Best spectrum &N$_{H,Gal}$ &$\Gamma$ &kT &F$_{th,0.5-5~keV}$ &F$_{nuc,2-10~keV}$ &$\chi^{2}$/d.o.f. &Environment &Reference$^{d}$\\ \\
& & &(atoms~cm$^{-2}$) & &(keV) &(erg~cm$^{-2}$~s$^{-1}$) &(erg~cm$^{-2}$~s$^{-1}$) & & \\
\hline
J004150.47-09 &0.055 &PL &2.8$\times$10$^{20}$ &2.0 (fix) &- &- &(4.5$^{+2.9}_{-2.8}$)$\times$10$^{-14}$ &1.4$/$3 &Cluster &1 \\
J010101.12-00 &0.097 &PL &3.2$\times$10$^{20}$ &2.0 (fix) &- &- &<6.6$\times$10$^{-15}$ &- &? &-\\
J011515.78+00 &0.045 &APEC+PL &3.0$\times$10$^{20}$ &2.0$\pm$0.2 &0.8$\pm$0.1 &(1.5$\pm$0.4)$\times$10$^{-14}$ &(2.9$^{+0.5}_{-0.4}$)$\times$10$^{-14}$ & 66/63 &Cluster &1\\
J015127.10-08 &0.018 &APEC+PL &2.3$\times$10$^{20}$ &2.0 (fix) &0.2$^{+0.3}_{-0.1}$ &(1.4$^{+113}_{-1.3}$)$\times$10$^{-13}$ &$<$4.2$\times$10$^{-14}$ &- &? &- \\
J080624.94+17 &0.104 &PL &3.3$\times$10$^{20}$ &2.0 (fix) &- &- &(3.2$^{+1.7}_{-1.4}$)$\times$10$^{-13}$ &- &Cluster &2 \\
J092405.30+14 &0.135 &APEC+PL &3.3$\times$10$^{20}$ &1.8$\pm$0.3 &1.3$^{+1.4}_{-0.6}$ &(5.9$^{+19}_{-5}$)$\times$10$^{-15}$ &(3.2$^{+1.1}_{-2.8}$)$\times$10$^{-14}$ & 3/9&Cluster &1 \\
J093346.08+10 &0.011 &APEC+PL &3.1$\times$10$^{20}$ &2.0 (fix) &$<$0.35 &$<$1.2$\times$10$^{-13}$ &(6.2$^{+2.7}_{-2.1}$)$\times$10$^{-14}$ &- &CG$^{c}$ &3\\
J094319.15+36 &0.022 &PL &1.1$\times$10$^{20}$ &2.3$\pm$0.4 &- &- &(7.9$^{+2.3}_{-2.0}$)$\times$10$^{-14}$ &- &? &- \\
J104028.37+09 &0.019 &PL &2.6$\times$10$^{20}$ &2.2$\pm$0.4 &- &- &(1.2$\pm$0.2)$\times$10$^{-14}$ & 3/7 &Isolated &4 \\
J114232.84+26 &0.03 &APEC+PL &2.0$\times$10$^{20}$ &2.0 (fix) &0.67$^{+0.07}_{-0.06}$&(4.5$\pm$0.1)$\times$10$^{-14}$ &$<$1.3$\times$10$^{-14}$ &-&CG$^{c}$ &3\\
J115954.66+30 &0.106 &PL &1.5$\times$10$^{20}$ &2.0 (fix) &- &- &(4.9$^{+4.9}_{-3.1}$)$\times$10$^{-14}$ &- &? &- \\
J122206.54+13 &0.081 &PL &3.5$\times$10$^{20}$ &2.0 (fix) &- &- &(1.0$^{+1.2}_{-0.7}$)$\times$10$^{-13}$ &- &Cluster &5 \\
J125431.43+26 &0.069 &PL &7.5$\times$10$^{19}$ &1.9$\pm$1.4 &- &- &(9.2$^{+1.8}_{-1.6}$)$\times$10$^{-14}$ &4/5 &? &- \\
Tol1326-379 &0.028 &PL &5.5$\times$10$^{20}$ &1.3$\pm$0.4 &- &- &(9.4$^{+1.9}_{-3.1}$)$\times$10$^{-13}$ &- &? &- \\
J135908.74+28 &0.073 &APEC+PL &1.3$\times$10$^{20}$ &2.0 (fix) &0.24 (fix)$^{a}$ &(1.9$^{+0.8}_{-0.7}$)$\times$10$^{-13}$ &(5.0$^{+2.2}_{-2.3}$)$\times$10$^{-14}$ &0.5/2 &Cluster &1 \\
J153901.66+35 &0.078 &PL &1.8$\times$10$^{20}$ &2.0 (fix) &- &- &(1.1$^{+0.7}_{-1.0}$)$\times$10$^{-13}$ &- &? &- \\
J160426.51+17 &0.041 &PL &3.4$\times$10$^{20}$ &1.1$\pm$0.3&- &- &(1.5$^{+0.5}_{-0.3}$)$\times$10$^{-13}$ &2.5$/$8 & Cluster &1 \\
J171522.97+57 &0.027 &APEC+PL &2.2$\times$10$^{20}$ &2.0 (fix) &1.1$\pm$0.1$^{b}$ &(1.2$^{+0.4}_{-0.5}$)$\times$10$^{-13}$ &$<$1.9$\times$10$^{-14}$ &70/53 & CG$^{c}$ &1 \\
J235744.10-00 &0.076 & PL &3.3$\times$10$^{20}$ & 2.0 (fix) &- &- &(9.7$^{+7.9}_{-5.1}$)$\times$10$^{-14}$ &- &Isolated &6 \\
\hline\\
\multicolumn{9}{l}{$^{a}$ Due to the poor statistics of the fit this parameter is fixed to the best fit value.}\\
\multicolumn{9}{l}{$^{b}$ The Abundance parameter is left free and reaches a value of Ab=0.3$^{+0.3}_{-0.1}$.}\\
\multicolumn{9}{l}{$^{c}$ CG=compact group of galaxies.}\\
\multicolumn{9}{l}{$^{d}$ (1)-this work, (2)-Koester et al. (2007), (3)-Diaz-Gimenez et al. (2012), (4)-Colbert et al. (2001), (5)-Owen et al. (1995), (6)-Prada et al. (2003).}\\
\end{tabular}
\end{table}
\end{landscape}
\begin{landscape}
\begin{table}
\caption{Radio, optical and X-ray properties of the FR0 sample. (1) Name of the source, (2) nuclear radio luminosity at 5~GHz, (3) X-ray nuclear luminosity (2-10~keV) corrected for absorption, (4) [OIII] emission line luminosity. For all luminosities the proper k-correction was considered. (5) Estimated black hole masses for the sources of the sample, (6) Eddington-scaled luminosities ($\dot{L}$=L$_{\rm bol}$/L$_{\rm Edd}$).}
\label{tab4}
\centering
\begin{tabular}{l c c c c c }
\hline\hline
Source name &LogL$_{\rm 5~GHz}$ &LogL$_{\rm X,2-10~keV}$ &LogL$_{\rm [OIII]}$$^{a}$ &LogM$_{BH}$ &$\dot{L}$$^{b}$ \\
&(erg~s$^{-1}$) &(erg~s$^{-1}$) &(erg~s$^{-1}$) &(M$_{\odot}$) \\
\hline
J004150.47-09 &40.21 &41.10 &39.27 &8.96 &5.5$\times$10$^{-5}$ \\
J010101.12-00 &39.52 &$<$41.20 &40.39 &8.43 &2.4$\times$10$^{-3}$ \\
J011515.78+00 &39.97 &41.15 &39.51 &8.57 &2.3$\times$10$^{-4}$ \\
J015127.10-08 &39.10 &$<$40.49 &39.29 &7.97 &5.6$\times$10$^{-4}$ \\
J080624.94+17 &40.69 &43.20 &39.30 &8.39 &2.2$\times$10$^{-4}$ \\
J092405.30+14 &41.38 &42.16 &40.68 &9.06 &1.1$\times$10$^{-3}$ \\
J093346.08+10 &38.46 &40.17 &39.14 &8.16 &2.6$\times$10$^{-4}$ \\
J094319.15+36 &40.25 &40.94 &39.81 &7.89 &2.2$\times$10$^{-3}$ \\
J104028.37+09 &39.10 &39.99 &39.48 &8.29 &4.2$\times$10$^{-4}$ \\
J115954.66+30 &39.71 &$<$40.43 &38.48 &8.97 &8.7$\times$10$^{-6}$ \\
J114232.84+26 &40.91 &42.14 &40.24 &8.51 &1.4$\times$10$^{-3}$ \\
J122206.54+13 &40.62 &42.19 &40.04 &8.36 &1.3$\times$10$^{-3}$ \\
J125431.43+26 &39.93 &42.02 &39.61 &8.59 &2.8$\times$10$^{-4}$ \\
Tol1326-379 &39.79 &42.29 &40.60 &8.30 &5.0$\times$10$^{-3}$ \\
J135908.74+28 &40.51 &41.69 &39.48 &8.46 &2.8$\times$10$^{-4}$ \\
J153901.66+35 &40.82 &42.20 &40.07 &8.31 &1.5$\times$10$^{-3}$ \\
J160426.51+17 &40.37 &41.64 &40.02 &8.34 &1.3$\times$10$^{-3}$ \\
J171522.97+57 &39.46 &$<$40.16 &39.46 &8.79 &1.2$\times$10$^{-4}$ \\
J235744.10-00 & 39.30 &40.93 &40.26 &8.76 &8.5$\times$10$^{-4}$ \\
\hline\\
\multicolumn{6}{l}{$^{a}$ Data are provided by the SDSS Data Release 7 (\ttfamily{http://www.sdss.org/})}\\
\multicolumn{6}{l}{$^{b}$ $\dot{L}$=L$_{\rm bol}$/L$_{\rm Edd}$. The bolometric luminosity is derived using the relation} \\
\multicolumn{6}{l}{L$_{\rm bol}$=3500~L$_{\rm [OIII]}$ measured by Heckman et al. (2004).} \\
\end{tabular}
\end{table}
\end{landscape}
\section*{Acknowledgements}
The authors thank the anonymous referee for his/her thoughtful comments that helped to improve the paper.
ET acknowledges financial support from ASI-INAF grant 2015-023-R.O.
This work is based on data from the {\it Chandra}, {\it XMM-Newton} and {\it Swift} Data Archive. Part of this work is based on archival data, software or online services provided by the ASI Science Data Center (ASDC).
We thank the {\it Swift} team for making the ToO observation of Tol1326-379 possible.
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/.
\textwidth 16truecm \textheight 8.4in\oddsidemargin0.2truecm\evensidemargin0.7truecm\voffset0truecm
\def\nnewpage{}
\newcommand{\ip}[2]{\left<#1,#2\right>}
\usepackage{amsaddr}
\begin{document}
\newtheorem{Theorem}{Theorem}[section]
\newtheorem{Corollary}[Theorem]{Corollary}
\newtheorem{Definition}[Theorem]{Definition}
\newtheorem{Proposition}[Theorem]{Proposition}
\newtheorem{Lemma}[Theorem]{Lemma}
\newtheorem{Remark}[Theorem]{Remark}
\newtheorem{definition}{Definition}[section]
\def\theequation{\thesection.\arabic{equation}}
\def\hfill$\Box$\\{\hfill$\Box$\\}
\def\comma{ {\rm ,\qquad{}} }
\def\commaone{ {\rm ,\qquad{}} }
\def\dist{\mathop{\rm dist}\nolimits}
\def\sgn{\mathop{\rm sgn\,}\nolimits}
\def\Tr{\mathop{\rm Tr}\nolimits}
\def\div{\mathop{\rm div}\nolimits}
\def\supp{\mathop{\rm supp}\nolimits}
\def\divtwo{\mathop{{\rm div}_2\,}\nolimits}
\def\re{\mathop{\rm {\mathbb R}e}\nolimits}
\def\indeq{\qquad{}\!\!\!\!}
\def\semicolon{\,;}
\newcommand{\mathcal{D}}{\mathcal{D}}
\title[Velocity-vorticity-{V}oigt model]{Global well-posedness of the velocity-vorticity-{V}oigt model of the 3{D} {N}avier-{S}tokes equations}
\date{February 23, 2018}
\author{Adam~Larios}
\address{Department of Mathematics,
University of Nebraska--Lincoln,
Lincoln, NE 68588, USA}
\email{alarios@unl.edu}
\author{Yuan~Pei}
\address{Department of Mathematics,
University of Nebraska--Lincoln,
Lincoln, NE 68588, USA}
\email{ypei4@unl.edu}
\author{Leo~Rebholz}
\address{Department of Mathematical Sciences,
Clemson University,
Clemson, SC 29634, USA}
\email{rebholz@clemson.edu}
\begin{abstract}
The velocity-vorticity formulation of the 3D Navier-Stokes equations was recently found to give excellent numerical results for flows with strong rotation. In this work, we propose a new regularization of the 3D Navier-Stokes equations, which we call the 3D velocity-vorticity-Voigt (VVV) model, with a Voigt regularization term added to the momentum equation in velocity-vorticity form, but with no regularizing term in the vorticity equation. We prove global well-posedness and regularity of this model under periodic boundary conditions. We prove convergence of the model's velocity and vorticity to their counterparts in the 3D Navier-Stokes equations as the Voigt modeling parameter tends to zero. We prove that the curl of the model's velocity converges to the model vorticity (which is solved for directly) as the Voigt modeling parameter tends to zero. Finally, we provide a criterion for finite-time blow-up of the 3D Navier-Stokes equations based on this inviscid regularization.
\end{abstract}
\maketitle
\noindent\thanks{\em Keywords:\/}
Vorticity-Velocity formulation,
Euler-Voigt,
Navier-Stokes-Voigt,
Global existence,
Inviscid-regularization,
Turbulence models,
Blow-up criteria,
Voigt-regularization,
$\alpha$-models
\noindent\thanks{\em Mathematics Subject Classification\/}:
35A01,
35B44,
35B65,
35Q30,
35Q35,
76D03,
76D05,
76D17,
76N10
\section{Introduction}
\label{sec1}
In recent years, the Voigt-regularization and the velocity-vorticity formulation have seen much study as promising approaches to alleviating some of the analytical and computational difficulty inherent in the 3D Navier-Stokes equations of incompressible fluid flow. However, as one might expect, neither of these approaches overcomes every difficulty in the equations. For instance, the Voigt-regularization has a strong regularizing effect, so much so that it destroys certain fundamental qualities of the equations, such as parabolicity and viscosity-driven energy decay. On the other hand, the velocity-vorticity formulation is merely a reformulation of the equations, and therefore it has no regularizing effect at all, although it is the basis of many well-behaved numerical algorithms. In this paper, we combine these two approaches, with the intent that the resulting system will retain the best qualities of both systems. Namely, the intent is that the new system will have solutions that are closer to the actual physics of fluids, while still having enough regularization that the equations are better behaved from the standpoints of mathematical analysis, numerical stability, and computational efficiency. In this work, we only address the global well-posedness and convergence properties of the system, but a follow-up work will study the numerical and computational properties of the system.
The incompressible, constant density, 3D Navier-Stokes equations are given by
\begin{subequations}\label{NSE}
\begin{empheq}[left=\empheqlbrace]{align}
\label{NSE_u}
&\frac{\partial \widetilde{u}}{\partial t}
-
\nu\Delta\widetilde{u}
+
(\widetilde{u}\cdot\nabla)\widetilde{u}
+
\nabla \widetilde{p}
=
f,
\\&
\nabla \cdot \widetilde{u} = 0,
\\&
\widetilde{u}(\cdot, 0) = \widetilde{u}_0,
\end{empheq}
\end{subequations}
See \cite{Constantin_Foias_1988, Temam_2001_Th_Num} for more details on the 3D Navier-Stokes equations.
Note that for smooth solutions, the momentum equation \eqref{NSE_u} can also be written as
\begin{equation}
\label{NSE_vor}
\frac{\partial \widetilde{u}}{\partial t}
-
\nu\Delta\widetilde{u}
+
(\nabla\times\widetilde{u})\times\widetilde{u}
+
\nabla\Big(\widetilde{p}+\frac{1}{2}|\widetilde{u}|^2\Big)
=
f,
\end{equation}
where $\widetilde{u}$ represents the velocity of the fluid, $\widetilde{p}$ represents the (density normalized) pressure, and $f$ represents a body force.
We now propose the following system, which we refer to as the velocity-vorticity-Voigt (or ``VVV'') equations over the three-dimensional periodic box $\mathbb{T}^3 = \mathbb{R}^3/\mathbb{Z}^3=[0, 1]^3$,
\begin{subequations}\label{Sys1}
\begin{empheq}[left=\empheqlbrace]{align}
\label{Sys1_u}
&(I - \alpha^2\Delta)\frac{\partial u}{\partial t}
-
\nu\Delta u
+
w \times u
+
\nabla p
=
f,
\\\label{Sys1_w}&
\frac{\partial w}{\partial t}
-
\nu\Delta w
+
(u\cdot\nabla) w
-
(w\cdot\nabla) u
=
\nabla\times f,
\\&
\nabla \cdot u = 0,
\\&
u(\cdot, 0) = u_0
\quad\text{ and }\quad
w(\cdot, 0) = w_0.
\end{empheq}
\end{subequations}
Here $u = (u_1, u_2, u_3)$ represents an averaged velocity, $w = (w_1, w_2, w_3)$ plays the role of the vorticity, although we do not assume $w=\nabla\times u$, and $f$ is an external forcing term. Without loss of generality, we assume for our analysis in later sections that the viscosity $\nu=1$. Note that in the case $\alpha=0$, the system formally reduces to the velocity-vorticity formulation, while for $\alpha>0$, if one imposes $w = \nabla\times u$, the system formally reduces to the Navier-Stokes-Voigt equations.
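Under the periodic boundary conditions used below, the Voigt operator $(I-\alpha^2\Delta)$ acts diagonally in Fourier space with symbol $1+\alpha^2|k|^2$, which makes it inexpensive to apply and invert in spectral computations. A minimal one-dimensional numerical illustration (our own discretization sketch, not from the text):

```python
import numpy as np

# On a periodic domain, (I - alpha^2 Lap) acts diagonally in Fourier
# space with symbol 1 + alpha^2 |k|^2 (1D with k = 2*pi*m here for
# illustration; the model itself is posed on the 3D torus).
n, alpha = 64, 0.1
x = np.linspace(0.0, 1.0, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
u = np.sin(2.0 * np.pi * x)                      # a single Fourier mode
voigt_u = np.fft.ifft((1.0 + alpha**2 * k**2) * np.fft.fft(u)).real

# For a pure mode, (I - alpha^2 Lap) u = (1 + alpha^2 (2 pi)^2) u.
expected = (1.0 + alpha**2 * (2.0 * np.pi)**2) * u
print(np.max(np.abs(voigt_u - expected)))        # machine-precision agreement
```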
The term $-\alpha^2\Delta\partial_t u$ in \eqref{Sys1_u} is often referred to as the ``Voigt term'', due to its use by A.P. Oskolkov in modeling Kelvin-Voigt fluids \cite{Oskolkov_1973, Oskolkov_1982} (see also \cite{Kalantarov_1986}). In the context of the velocity formulation, use of the Voigt term was first proposed as a regularization for either the Navier-Stokes (for $\nu>0$) or Euler (for $\nu=0$) equations in \cite{Cao_Lunasin_Titi_2006}, for small values of the regularization parameter $\alpha$. That paper also proved global well-posedness of the Voigt-regularized versions of the 3D Euler and 3D Navier-Stokes equations. These equations have been studied analytically and extended in a wide variety of contexts (see, e.g., \cite{Bohm_1992,Catania_2009,Catania_Secchi_2009,Cao_Lunasin_Titi_2006,Larios_Titi_2009,Larios_Lunasin_Titi_2015,Ebrahimi_Holst_Lunasin_2012,Levant_Ramos_Titi_2009,Khouider_Titi_2008,Olson_Titi_2007,Oskolkov_1973,Oskolkov_1982,Ramos_Titi_2010,Kalantarov_Levant_Titi_2009,Kalantarov_Titi_2009,Kuberry_Larios_Rebholz_Wilson_2012,DiMolfetta_Krstlulovic_Brachet_2015,Layton_Rebholz_2013_Voigt,Larios_Petersen_Titi_Wingate_2015}, and the references therein). Voigt-regularizations of parabolic equations are a special case of pseudoparabolic equations, that is, equations of the form $Mu_t+Nu=f$, where $M$ and $N$ are (possibly non-linear, or even non-local) operators. For more about pseudoparabolic equations, see, e.g., \cite{DiBenedetto_Showalter_1981,Peszynska_Showalter_Yi_2009,Showalter_1975_nonlin,Showalter_1975_Sobolev2,Showalter_1972_rep,Carroll_Showalter_1976,Showalter_1970_SG,Showalter_1970_odd,Bohm_1992}.
Directly computing the vorticity variable has recently become popular in simulations of the incompressible Navier-Stokes equations, as it can be the primary variable of interest in vortex-dominated and rotating flows \cite{WB02,LYM06,MF00,G91,GHH90,WWW95,HOR17}. Such formulations use the equation of vorticity dynamics,
\begin{equation}\label{vort}
\frac{\partial {\widetilde w}}{\partial t} - \nu \Delta {\widetilde w} +
(\widetilde u \cdot \nabla) {\widetilde w} - (\widetilde w \cdot\nabla) \widetilde u =
{\nabla \times}\,f,
\end{equation}
and close the system with some relation of $\widetilde u$ and $\widetilde w$, the most common being $-\Delta \widetilde u=\nabla \times \widetilde w$, and boundary conditions such as $\widetilde w\left.\right|_{\partial\Omega}=\nabla \times \widetilde u$. While such a boundary condition is easily and accurately implementable in finite difference methods on uniform grids, it is not generally appropriate for finite element methods on unstructured meshes. For this reason, in the recent works \cite{OR10,LOR11,GHOR15,HOR17}, the system is instead closed by coupling to a momentum equation using the $\widetilde w$ variable, such as
\[
\widetilde u_t + \widetilde w\times \widetilde u + \nabla \widetilde P - \Delta \widetilde u = f,
\]
where $\widetilde P$ represents the Bernoulli pressure.
This formulation is able to produce efficient and accurate numerical methods in such settings, in particular due to its use of a natural vorticity boundary condition corresponding to no-slip velocity derived in \cite{GHOR15}.
Because of the numerical successes of such velocity-vorticity systems, it is both natural and important to consider their analysis at the PDE level, as fundamental questions such as well-posedness should be addressed. It is easy to show that determining the global well-posedness of such a system (i.e. \eqref{Sys1} with $\alpha=0$) would solve the Millennium Prize Problem for the 3D Navier-Stokes equations.
We choose in this work to consider the velocity-vorticity system with a Voigt modeling term in \eqref{Sys1} for several reasons: first, it allows for analysis of the system to be performed; second, the limiting behavior of the VVV system as $\alpha\rightarrow 0$ can give insight into the behavior of the $\alpha=0$ case; third, the discretized Voigt term corresponds to a commonly used numerical stabilization for second (and lower) order methods \cite{P06,WL97,DP09,ALP04,LLMNR09}, and thus the VVV system is in this sense the PDE generalization of stabilized numerical discretizations of the Navier-Stokes equations in velocity-vorticity form. We note that a steady velocity-vorticity system without regularization terms was analyzed in \cite{ORS17} in the case of no-slip velocity boundary conditions, no-penetration vorticity boundary conditions, and a natural tangential condition for vorticity that is weakly implemented in a boundary functional involving the pressure; well-posedness of this system was proven, although the proof (seemingly) required some new analytic techniques.
We note that we do not add a Voigt modeling term $-\alpha^2\Delta \frac{\partial w}{\partial t}$ to the vorticity equation of the system. This could be done, and could make sense as a continuous-level generalization of a vorticity equation stabilization. However, from an analysis point of view, it is more challenging to consider the system \eqref{Sys1} where the Voigt modeling is applied only to the momentum equation, and the extension to the case when Voigt modeling is also applied to the VVV vorticity equation is straightforward. Moreover, it may lead to more accurate capturing of the vorticity at the numerical level. The reason for this is rooted in a computational study \cite{Kuberry_Larios_Rebholz_Wilson_2012} of the magnetohydrodynamic (MHD) equations with Voigt-regularization. Voigt-regularization for MHD was first proposed and studied in \cite{Larios_Titi_2009}, with further study in \cite{Catania_2009,Catania_Secchi_2009,Larios_Titi_2010_MHD}. The MHD system, with Voigt-regularization added only to the momentum equation, is strikingly similar to system \eqref{Sys1}. Indeed, if one were to add the term $(w\cdot\nabla)w$ to the right-hand side of equation \eqref{Sys1_u}, the systems would be identical in the $f=0$ case. In \cite{Kuberry_Larios_Rebholz_Wilson_2012}, it was found in computational tests of the 2D case on a coarse mesh that putting a Voigt term only on (the MHD analogue of) equation \eqref{Sys1_w} resulted in a better match to the level curves of the current density (the analogue of $\nabla\times w$) from fine-mesh simulations than putting a Voigt regularization on both equations, or on neither equation. This is why we apply the Voigt-regularization only to equation \eqref{Sys1_u}.
\begin{Remark}
We note that the analysis of the 3D VVV system is somewhat distinct from the analysis of the 3D MHD system with Voigt-regularization added only to the momentum equation. This is because the cancellation of the nonlinear terms that occurs in energy estimates for the MHD-Voigt equations does not occur in the VVV system, and therefore one must deal directly with the analogue of the vortex-stretching term $(w\cdot\nabla) u$. The key is to notice that one may first obtain an energy estimate purely in terms of $u$, and then use this bound to obtain a bound on $w$. Higher-order estimates on $u$ are not $w$-independent, but can be obtained using a bootstrapping technique, going back and forth between the two equations.
\end{Remark}
\begin{Remark}
\label{remark_L2V_bdd}
One might ask whether, in the inviscid (i.e., $\nu=0$) case, results analogous to those in this paper still hold. This question is especially natural in light of the global well-posedness results for the so-called Euler-Voigt equations \cite{Cao_Lunasin_Titi_2006,Larios_Titi_2009}, which are formally the inviscid version of the Navier-Stokes-Voigt equations. Two fundamental differences arise. Firstly, the vorticity stretching term $(w\cdot\nabla) u$ can no longer be controlled in the same way, as higher-order derivatives cannot be absorbed into the viscosity. Thus, one must resort to higher-order estimates, but as in the case of the 3D Navier-Stokes equations and related $\alpha$-models \cite{Foias_Holm_Titi_2002,Ilyin_Lunasin_Titi_2006, Chen_Foias_Holm_Olson_Titi_Wynne_1998_PF,Chen_Foias_Holm_Olson_Titi_Wynne_1999,Holm_Titi_2005, Chen_Foias_Holm_Olson_Titi_Wynne_1998_PRL,Cheskidov_Holm_Olson_Titi_2005}, it is far from clear how to close these estimates. Secondly, in the proof of convergence as $\alpha\rightarrow 0$ (Theorem \ref{T4} below), the estimates depend crucially on the fact that $\int_0^T\|\nabla u(t)\|_{L^2}^2\,dt$ is bounded independently of $\alpha\in(0,1]$, which is a property that one does not have for the Euler-Voigt equations. Thus, it is not clear how to extend the results of this paper to the inviscid version of the VVV system.
\end{Remark}
The paper is organized as follows. In Section~\ref{sec2}, we provide the preliminaries needed in the subsequent sections, define weak and strong solutions to system \eqref{Sys1}, and state our main theorems. In Section~\ref{sec3}, we prove the existence and uniqueness of global weak solutions to \eqref{Sys1} by Galerkin approximation, following the ideas from \cite{Constantin_Foias_1988, Temam_2001_Th_Num} (see also \cite{Larios_Pei_triple}, where a similar approach is carried out in full detail). In view of the similar behavior of $w$ and the vorticity $\omega = \nabla\times u$, we prove in Section~\ref{sec4} that $w$ indeed tends to $\omega$ in the $L^2$ norm as $\alpha\rightarrow 0$, as well as the convergence of the velocity in \eqref{Sys1} to that of the Navier-Stokes equations. We point out that numerical and computational studies of the VVV system will be the subject of a forthcoming work.
\section{Preliminaries and Main Results}
\label{sec2}
\subsection{Preliminaries}
\label{subsec2-1}
Throughout this paper, $C$ denotes an absolute constant which may vary from line to line,
and similarly $C_{\alpha}$ indicates the dependence of the constant on $\alpha$.
We denote $\phi_{j} = \partial \phi/ \partial x_{j}$
and
$\phi_{t} = \partial \phi/\partial t$.
Also, we denote the mean-free versions of the usual Lebesgue and Sobolev spaces on $\mathbb{T}^3$ by $L^{p}$ for $1\leq p\leq \infty$ and $H^{s}\equiv W^{s, 2}$ for $s > 0$, respectively; we denote by $C_{w}(I;X)$ the space of weakly continuous functions from an interval $I$ to a Banach space $X$. Let $\mathcal{F}$ be the set of all trigonometric polynomials over $\mathbb{T}^3$
and define the subset of $\mathcal{F}$ with divergence-free and zero-average trigonometric polynomials
$$\mathcal{V} := \left\{ \phi\in\mathcal{F}: \nabla\cdot\phi = 0, \text{ and }\int_{\mathbb{T}^3}\phi\,dx= 0\right\}.$$
We follow the standard convention of denoting by $H$ and $V$ the closures of $\mathcal{V}$ in $L^2$ and $H^1$, respectively,
with inner products
$$(v, \widetilde{v}) = \sum_{i=1}^3\int_{\mathbb{T}^3}v_{i}\widetilde{v}_{i}\,dx \text{ \,\,and\,\, } ((v, \widetilde{v})) = \sum_{i, j=1}^3\int_{\mathbb{T}^3}\partial_{j}v_{i}\partial_{j}\widetilde{v}_{i}\,dx,$$
respectively, associated with the norms $\Vert v\Vert_{H}=(v, v)^{1/2}$ and $\Vert v \Vert_{V}=((v, v))^{1/2}$. For the sake of convenience, we use $\Vert v\Vert_{L^2}$ and $\Vert v\Vert_{H^1}$ to denote the above norms in $H$ and $V$, respectively.
The latter is a norm due to the Poincar\'e inequality
\begin{equation}
\label{Poincare}
\sqrt{\lambda_1}\Vert\phi\Vert_{L^2} \leq \Vert\nabla\phi\Vert_{L^2},
\end{equation}
holding for all $\phi\in V$, where $\lambda_1$ is the first eigenvalue of the Stokes operator $A$ discussed below.
Note that we also have the following compact embeddings (see, e.g., \cite{Constantin_Foias_1988, Temam_2001_Th_Num})
$$V \hookrightarrow H \hookrightarrow V',$$
where $V'$ denotes the dual space of $V$.
We denote by $H_{\text{curl}}$ a subspace of $H$ whose elements are in $H$ and their curl (taken in the distributional sense) is in $L^2$, i.e.,
$$H_{\text{curl}} := \{ f\in H \left.\right| \nabla\times f \in L^2\},\quad\text{with norm}\quad \Vert f\Vert_{H_{\text{curl}}}:=(\Vert f\Vert_{L^2}^2 + \Vert \nabla\times f\Vert_{L^2}^2)^{1/2},$$
and $H_{\text{curl}}^s$ the subspace of $H^{s}\cap V$ whose elements have curl in $H^{s}$, i.e.,
$$H_{\text{curl}}^{s} := \{ f\in H^{s}\cap V \left.\right| \nabla\times f \in H^s\}\quad\text{with norm}\quad \Vert f\Vert_{H_{\text{curl}}^{s}}:=(\Vert f\Vert_{H^{s}}^2 + \Vert \nabla\times f\Vert_{H^{s}}^2)^{1/2}.$$
For more discussion on the curl-spaces, we refer the readers to \cite{ORS17,GR86} and the references therein.
The following interpolation result is frequently used in this paper (see, e.g., \cite{Nirenberg} for a detailed proof).
Assume $1 \leq q, r \leq \infty$ and $0<\gamma<1$.
If $v\in L^q(\mathbb{T}^{n})$ is such that $\partial^\alpha v\in L^{r} (\mathbb{T}^{n})$ for all $|\alpha|=m$, then
\begin{align}\label{PT1}
\Vert\partial_{s}v\Vert_{L^{p}} \leq C\Vert\partial^{\alpha}v\Vert_{L^{r}}^{\gamma}\Vert v\Vert_{L^{q}}^{1-\gamma},
\quad\text{where}\quad
\frac{1}{p} - \frac{s}{n} = \left(\frac{1}{r} - \frac{m}{n}\right) \gamma+ \frac{1}{q}(1-\gamma).
\end{align}
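For concreteness, two special cases of \eqref{PT1} used repeatedly below (with $n=3$, $s=0$, $m=1$, $q=r=2$; the exponent balance is a routine check) are the Ladyzhenskaya-type inequalities
\begin{align*}
\Vert v\Vert_{L^{3}} &\leq C\Vert\nabla v\Vert_{L^2}^{1/2}\Vert v\Vert_{L^2}^{1/2},
&&\text{($p=3$, $\gamma=1/2$)},
\\
\Vert v\Vert_{L^{4}} &\leq C\Vert\nabla v\Vert_{L^2}^{3/4}\Vert v\Vert_{L^2}^{1/4},
&&\text{($p=4$, $\gamma=3/4$)},
\end{align*}
valid for mean-free $v$; combined with H\"older's inequality, these yield estimates of the type appearing in Lemma~\ref{L1} below.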
The following results are standard in the study of fluid dynamics,
in particular for the Navier-Stokes equations and related PDEs,
and we refer the reader to \cite{Constantin_Foias_1988, Temam_2001_Th_Num} for more details.
We define the Stokes operator $A:= -P_{\sigma}\Delta$
with domain $\mathcal{D}(A):= H^2\cap V$,
where $P_{\sigma}$ is the Leray-Helmholtz projection.
Notice that due to the periodic boundary conditions,
it holds that $A = -\Delta P_{\sigma}$.
Moreover, the Stokes operator can be extended
as a linear operator from $V$ to $V'$ such that
$$\left<Av, \widetilde{v}\right> = ((v, \widetilde{v})) \text{ for all } v,\,\,\widetilde{v}\in V.$$
It is well-known that $A^{-1} : H \to \mathcal{D}(A)$
is a positive-definite, self-adjoint, compact operator from $H$ into itself,
and $H$ possesses an orthonormal basis of eigenfunctions $\{ w_{k}\}_{k=1}^{\infty}$ of $A^{-1}$, corresponding to a non-increasing sequence of positive eigenvalues.
Therefore, $A$ has a non-decreasing sequence of eigenvalues $\lambda_{k}$,
i.e., $0 < \lambda_1 \leq \lambda_2 \leq \cdots$,
since $\{w_{k}\}_{k=1}^{\infty}$ are also eigenfunctions of $A$.
Furthermore, for any integer $M > 0$, we define $H_{M}:= \text{span}\{w_1, w_2, \ldots, w_{M}\}$
and let $P_{M} : H \to H_{M}$ be the $L^2$-orthogonal projection onto $H_{M}$.
Next, for any $v, \widetilde{v}, w \in \mathcal{V}$,
we introduce the convenient notation for the bilinear term
\begin{align*}
B(v, \widetilde{v}) := P_{\sigma}((v\cdot\nabla)\widetilde{v}),
\end{align*}
which can be extended to a continuous map
$B : V \times V \to V'$, such that, for smooth functions $v,\widetilde{v},w\in\mathcal{V}$,
\begin{align*}
\left<B(v, \widetilde{v}), w\right> = \int_{\mathbb{T}^3}(v\cdot\nabla) \widetilde{v}\cdot w\,dx.
\end{align*}
Moreover ${B}$ has certain symmetry properties, and can be extended as a continuous map ${B}$ on various spaces, as in the following lemma, which is proved in, e.g., \cite{Constantin_Foias_1988, Foias_Manley_Rosa_Temam_2001}.
\begin{Lemma}
\label{L1}
The operator $B$ can be extended to an operator, still denoted by $B$, over spaces indicated below, with the following properties.
\begin{subequations}
\begin{align}
\label{embd1}
\ip{B(u,v)}{w}_{V'} &= -\ip{B(u,w)}{v}_{V'},
&\quad\forall\;u\in V, v\in V, w\in V,\\
\label{embd2}
\ip{B(u,v)}{v}_{V'} &= 0,
&\quad\forall\;u\in V, v\in V,\\
\label{B:326}
|\ip{B(u,v)}{w}_{V'}|
&\leq C\Vert u\Vert_{L^2}^{1/2} \Vert\nabla u\Vert_{L^2}^{1/2} \Vert\nabla v\Vert_{L^2} \Vert\nabla w\Vert_{L^2},
&\quad\forall\;u\in V, v\in V, w\in V,\\
\label{B:623}
|\ip{B(u,v)}{w}_{V'}|
&\leq C\Vert\nabla u\Vert_{L^2} \Vert\nabla v\Vert_{L^2} \Vert w\Vert_{L^2}^{1/2} \Vert\nabla w\Vert_{L^2}^{1/2},
&\quad\forall\;u\in V, v\in V, w\in V,\\
\label{B:236}
|\ip{B(u,v)}{w}_{V'}|
&\leq C\Vert u\Vert_{L^2} \Vert\nabla v\Vert_{L^2}^{1/2} \Vert Av\Vert_{L^2}^{1/2} \Vert\nabla w\Vert_{L^2},
&\quad\forall\;u\in H, v\in \mathcal{D}(A), w\in V,\\
\label{B:632}
|\ip{B(u,v)}{w}_{V'}|
&\leq C\Vert\nabla u\Vert_{L^2} \Vert\nabla v\Vert_{L^2}^{1/2} \Vert Av\Vert_{L^2}^{1/2} \Vert w\Vert_{L^2},
&\quad\forall\;u\in V, v\in \mathcal{D}(A), w\in H,\\
\label{B:i22}
|\ip{B(u,v)}{w}_{V'}|
&\leq C\Vert\nabla u\Vert_{L^2}^{1/2} \Vert Au\Vert_{L^2}^{1/2} \Vert\nabla v\Vert_{L^2} \Vert w\Vert_{L^2},
&\quad\forall\;u\in \mathcal{D}(A), v\in V, w\in H,\\
\label{B:623s}
|\ip{B(u,v)}{w}_{V'}|
&\leq C\Vert\nabla u\Vert_{L^2} \Vert \nabla w\Vert_{L^2}
\Vert v\Vert_{L^2}^{1/2}\Vert \nabla v\Vert_{L^2}^{1/2},
&\quad \forall\;u\in V, v\in V, w\in V,\\
\label{B:263}
|\ip{B(u,v)}{w}_{V'}|
&\leq C\Vert u\Vert_{L^2} \Vert Av\Vert_{L^2} \Vert w\Vert_{L^2}^{1/2} \Vert\nabla w\Vert_{L^2}^{1/2},
&\quad\forall\;u\in H, v\in \mathcal{D}(A), w\in V.
\end{align}
\end{subequations}
\end{Lemma}
If we formally apply $P_\sigma$ to equation \eqref{Sys1_u} and \eqref{Sys1_w}, we obtain the following functional formulation of system \eqref{Sys1}
\begin{subequations}\label{Sys1_fun}
\begin{empheq}[left=\empheqlbrace]{align}
\label{Sys1_fun_u}
&\frac{d}{dt}(u + \alpha^2 Au)
+
Au
+
P_\sigma(w\times u)
=
P_\sigma f,
\\
\label{Sys1_fun_w}
&\frac{d w}{dt}
+
A w
+
{B}(u, w)
-
{B}(w, u)
=
\nabla\times f.
\end{empheq}
\end{subequations}
Formulation \eqref{Sys1_fun}, taken to hold in the sense of $L^2(0,T;V')$, can be shown to be equivalent to formulation \eqref{Sys1}. In particular, the pressure gradient can be recovered using a corollary of a deep result of G. de Rham. The corollary states that, for any distribution $g$, the equality $g=\nabla p$ holds for some distribution $p$ if and only if $\left<g, w\right> = 0$ for all $w \in \mathcal{V}$. See \cite{Wang_1993} for an elementary proof of the corollary.
We recall the Agmon inequalities in 3D (see, e.g., \cite{Constantin_Foias_1988,Agmon}). Namely, for any $\phi \in D(A)$,
\begin{equation}
\label{Agmon}
\Vert\phi\Vert_{L^{\infty}} \leq C\Vert\nabla\phi\Vert_{L^2}^{1/2} \Vert A\phi\Vert_{L^2}^{1/2}
\qquad\text{and}\qquad
\Vert\phi\Vert_{L^{\infty}} \leq C\Vert\phi\Vert_{L^2}^{1/4} \Vert A\phi\Vert_{L^2}^{3/4}.
\end{equation}
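For the reader's convenience, we sketch the standard Fourier-side proof of the first inequality in \eqref{Agmon} (with the torus normalized so that the wavenumbers $k$ are integer vectors). Splitting the Fourier series of $\phi$ at a cutoff $\kappa\geq1$ and applying the Cauchy--Schwarz inequality together with Parseval's identity,
\begin{align*}
\Vert\phi\Vert_{L^{\infty}}
&\leq \sum_{0<|k|\leq\kappa}|\hat{\phi}_{k}| + \sum_{|k|>\kappa}|\hat{\phi}_{k}|
\leq \Big(\sum_{0<|k|\leq\kappa}|k|^{-2}\Big)^{1/2}\Vert\nabla\phi\Vert_{L^2}
+ \Big(\sum_{|k|>\kappa}|k|^{-4}\Big)^{1/2}\Vert A\phi\Vert_{L^2}
\\&\leq C\kappa^{1/2}\Vert\nabla\phi\Vert_{L^2} + C\kappa^{-1/2}\Vert A\phi\Vert_{L^2},
\end{align*}
and optimizing over $\kappa\sim\Vert A\phi\Vert_{L^2}/\Vert\nabla\phi\Vert_{L^2}$ gives the claim.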
The following Aubin-Lions Compactness Lemma is needed in order to construct solutions for \eqref{Sys1}.
\begin{Lemma}
\label{L2} Let $X$, $Y$, and $Z$ be separable, reflexive Banach spaces, where $X$ is compactly embedded in $Y$, and $Y$ is continuously embedded in $Z$.
Let $T>0$, $p \in (1, \infty)$ and let $\{f_{n}(t, \cdot)\}_{n=1}^{\infty}$
be a bounded sequence in $L^{p}([0, T]; X)$
such that $\{\partial f_{n}/\partial t\}_{n=1}^{\infty}$ is bounded in $L^{p}([0, T]; Z)$. Then $\{f_{n}\}_{n=1}^{\infty}$ has a strongly convergent subsequence in $C([0, T]; Y)$.
\end{Lemma}
Typically in the theory of the Navier-Stokes equations, to write $\frac12\frac{d}{dt}\Vert u\Vert_{L^2}^2 = (u_t,u)$, one needs $u_t\in L^2(0,T;V')$ and $u\in L^2(0,T;V)$, using the Lions-Magenes Lemma (cf., \cite[p. 176]{Temam_2001_Th_Num} or \cite[Corollary 7.3]{Robinson_2001}). However, in our context, we have $u,\,u_t\in L^2(0,T;V)$. Therefore, the following lemma is useful. The proof is a straightforward exercise via mollification in time, and follows closely the proofs of the Lions-Magenes Lemma in the aforementioned references.
\begin{Lemma}\label{Lemma_Lions_Magenes_type}
Let $V$ be a Hilbert space with norm $\|\cdot\|$ and inner product $((\cdot,\cdot))$. Suppose that for some $T>0$, $u\in L^2(0,T;V)$ and $u_t\in L^2(0,T;V)$. Then the following equality holds in the scalar distribution sense on $(0,T)$.
\begin{align*}
\frac12\frac{d}{dt}\|u\|^2 = ((u_t,u)).
\end{align*}
\end{Lemma}
For the sake of completeness, we state the following uniform Gr\"onwall's inequality, proved in \cite{Jones_Titi_1992} (see also \cite{Farhat_Lunasin_Titi_2017_Horizontal} and the references therein), which will be used frequently throughout the paper.
\begin{Lemma}
\label{L3}
Suppose that $Y(t)$ is an absolutely continuous function and that $\alpha(t)$ and $\beta(t)$ are locally integrable functions satisfying the following:
$$\frac{d Y}{d t} + \alpha(t) Y \leq \beta(t), \quad\text{ a.e. on } (0, \infty), $$
such that
$$\liminf_{t \to \infty} \int_{t}^{t+\tau} \alpha(s)\,ds \geq \gamma, \quad\quad\quad \limsup_{t \to \infty} \int_{t}^{t+\tau} \alpha^{-}(s)\,ds < \infty, $$
and
$$\lim_{t \to \infty} \int_{t}^{t+\tau} \beta^{+}(s)\,ds = 0, $$
for fixed $\tau > 0$, and $\gamma > 0$,
where
$\alpha^{-} = \max\{-\alpha, 0\}$
and
$\beta^{+} = \max\{\beta, 0\}$.
Then, $Y(t) \to 0$ at an exponential rate as $t \to \infty$.
\end{Lemma}
We record the following local well-posedness result for strong solutions to equations \eqref{NSE} (see, e.g., \cite{Temam_1995_Fun_Anal}, Theorem 4.2).
\begin{Theorem}\label{thm_NSE_local_reg}
Let $\widetilde u_0\in H^s\cap V$, $f\in L^2(0,T;H^{s-1}\cap H)$ for some $s\geq0$. Then there exist a $T>0$ and a unique solution $(\widetilde u,\widetilde p)$ to \eqref{NSE} such that $\widetilde u\in C([0,T];H^s\cap V)\cap L^2(0,T;H^{s+1}\cap V)$.
\end{Theorem}
\subsection{Main results of the paper}
\label{subsec2-2}
We first define the weak and strong solutions to system \eqref{Sys1}, respectively.
\begin{Definition}
\label{Def1}
Let $T>0$ be arbitrary. Suppose $u_0 \in V$, $w_0 \in H$, and $f \in L^2(0, T; H)$. We call the pair $(u, w)$ a \textit{weak solution} on the time interval $[0, T]$ to system \eqref{Sys1}, if $u \in C(0, T; V)$, $u_t\in L^{2}(0, T; V)$, $w \in C_{w}(0, T; H) \cap L^2(0, T; V)$, $w_t\in L^{2}(0, T; H^{-1})$, and moreover, $(u, w)$ satisfies system \eqref{Sys1} in the weak sense, i.e.,
\begin{equation}
\left\{
\begin{aligned}
&\alpha^2 ((u_{t}, \psi))
+
(u_{t}, \psi)
+
((u, \psi))
+
\left<w \times u, \psi\right>
=
\left(f, \psi\right),
\\&
\left<w_{t}, \psi\right>
+
((w, \psi))
-
\left<B(u, \psi), w\right>
-
\left<{B}(w, u), \psi\right>
=
\left(f, \nabla\times\psi\right),
\end{aligned}
\right.
\label{Weak_def}
\end{equation}
holds for any $\psi \in L^2(0, T; V)$.
\end{Definition}
Note that by taking $\psi = v\phi$ for $v\in V$ and $\phi\in C^1_c((0,T))$, it follows that formulation \eqref{Sys1_fun} is equivalent to formulation \eqref{Weak_def}, interpreted as an operator equation holding in an appropriate distributional sense.
\begin{Definition}
\label{Def2}
Let $T>0$ be an arbitrarily given time. Suppose $u_0 \in V$, $w_0 \in V$, and $f \in L^2(0, T; H_{\text{curl}})$. We call the pair $(u, w)$ a \textit{strong solution} on the time interval $[0, T]$ to system \eqref{Sys1}, if it is a weak solution as in Definition~\ref{Def1} and satisfies additionally $w \in C([0, T]; V) \cap L^{2}(0, T; D(A))$, and $w_t\in L^{2}(0, T; H)$.
\end{Definition}
The following theorem provides the global existence and uniqueness of weak solution to system \eqref{Sys1}.
\begin{Theorem}
\label{T1}
Suppose $u_0 \in V$, $w_0 \in H$, and $f \in L^2(0, T; H)$. Then, the velocity-vorticity-Voigt system \eqref{Sys1} possesses a unique global weak solution $(u, w)$ in the sense of Definition~\ref{Def1} that satisfies $\nabla\cdot w = 0$. Moreover, the following energy equality holds.
\begin{align}\label{energy_equality_u}
\alpha^2\|\nabla u(t)\|_{L^2}^2
+ \|u(t)\|_{L^2}^2
+2\int_0^t\|\nabla u(s)\|_{L^2}^2\,ds
=
\alpha^2\|\nabla u_0\|_{L^2}^2
+ \|u_0\|_{L^2}^2
+2\int_0^t(u(s),f(s))\,ds
\end{align}
\end{Theorem}
The next theorem is about the global existence and uniqueness of strong solution to system \eqref{Sys1},
as well as the higher-order regularity of the solution.
\begin{Theorem}
\label{T2}
For the initial data $u_0 \in V$, $w_0 \in V$, and $f \in L^2(0, T; H_{\text{curl}})$,
there exists a unique strong solution $(u, w)$ in the sense of Definition~\ref{Def2}.
Moreover, if we further assume that the initial data $u_0 \in H^s\cap V$, $w_0 \in H^s\cap V$, and $f \in L^2(0, T; H_{\text{curl}}^{s-1})$ for $s\geq2$, $s\in\mathbb{N}$,
then the solution satisfies $u \in C_{w}(0, T; H^s\cap V)$ and $w \in C_{w}(0, T; H^s\cap V) \cap L^2(0, T; H^{s+1}\cap V)$.
\end{Theorem}
The following theorem relates the quantity $w$ in \eqref{Sys1_w} to the vorticity $\omega = \nabla\times u$.
\begin{Theorem}
\label{T3}
Denote by $\omega:=\nabla\times u$ the vorticity of the flow and let $u_0 \in H^4\cap V$, $f\in H^{2}$.
Then, we have
$$\Vert\omega(t) - w(t)\Vert_{L^2}^2
+\alpha^2\Vert\nabla\omega(t) - \nabla w(t)\Vert_{L^2}^2
+\int_0^t\Vert\nabla\omega(s) - \nabla w(s)\Vert_{L^2}^2\,ds
\leq
C_0e^{Ct} + \frac{\widetilde{K}\alpha^2}{C}(e^{Ct} - 1),$$
where $C_0$ depends on the initial data and $\widetilde{K}$ is explained in the proof.
If we further assume $w_0 = \nabla\times u_0$,
then, $$\Vert \omega(t) - w(t)\Vert_{L^2}^2
+ \alpha^2\Vert\nabla w(t) - \nabla\omega(t)\Vert_{L^2}^2
+\int_0^t\Vert\nabla\omega(s) - \nabla w(s)\Vert_{L^2}^2\,ds
\leq
K\alpha^2(e^{Ct} - 1),$$
for a.e. $t>0$, i.e., $\Vert w - \omega\Vert_{L^\infty(0,T;L^2)}\sim\mathcal{O}(\alpha)$ and $\Vert w - \omega\Vert_{L^2(0,T;V)}\sim\mathcal{O}(\alpha)$.
In particular, we have $\Vert w - \omega\Vert_{L^\infty(0,T;L^2)} \to 0$ and
$\Vert w - \omega\Vert_{L^2(0,T;V)} \to 0$ as $\alpha \to 0$.
\end{Theorem}
The next theorem describes the relation between the velocity-vorticity-Voigt equations \eqref{Sys1} and the 3D Navier-Stokes equations \eqref{NSE}.
\begin{Theorem}
\label{T4}
Denote by $\widetilde{\omega}:=\nabla\times\widetilde{u}$ the vorticity of $\widetilde{u}$ in \eqref{NSE_u} and let $u_0$, $f$, and $T>0$ be the same as in Theorem~\ref{T3}, and set $w_0 = \nabla\times u_0$ and $\widetilde{u}_0=u_0$. Then, for any $\alpha\in(0,1]$,
$$\Vert\omega(t) - \widetilde{\omega}(t)\Vert_{L^2}^2 + \Vert u(t) - \widetilde{u}(t)\Vert_{L^2}^2 + \alpha^2\Vert\nabla u(t) - \nabla\widetilde{u}(t)\Vert_{L^2}^2
+\int_0^t\Vert\nabla u(s) - \nabla\widetilde{u}(s)\Vert_{L^2}^2\,ds
\leq
C\alpha^2$$
for a.e. $t>0$ in the interval of existence of the solution to \eqref{NSE}, say, up to time $T>0$, where the constant $C$ depends on $\Vert\widetilde{u}\Vert_{H^3}$, $\Vert u\Vert_{H^3}$, as well as $\Vert f\Vert_{H_{\text{curl}}^2}$.
In particular, we have $\Vert \omega - \widetilde{\omega}\Vert_{L^\infty(0,T;H)} \to 0$, $\Vert u - \widetilde{u}\Vert_{L^\infty(0,T;H)} \to 0$, and $\Vert u - \widetilde{u}\Vert_{L^2(0,T;V)} \to 0$ as $\alpha \to 0$.
\end{Theorem}
\begin{Remark}
\label{R1}
We point out that the global well-posedness of \eqref{Sys1} still holds if we remove the divergence-free condition on the initial data $w_0$, i.e., if we only assume $w_0\in L^2$ for the weak solution, and $w_0\in H^1$ for the strong solution. Consequently, we obtain weak solutions $w\in C_{w}(0, T; L^2)\cap L^2(0, T; H^1)$ and strong solutions $w\in C_{w}(0, T; H^1)\cap L^2(0, T; H^2)$ by modifying the proofs in Section~\ref{sec3} and Section~\ref{sec4} accordingly. However, Theorem~\ref{T3} is no longer valid. Also, the assumptions that $\widetilde{u}_0=u_0$ and $w_0 = \nabla\times u_0$ can be removed at the cost of obtaining ``convergence up to an error'' as $\alpha\rightarrow 0$. For the sake of clarity, we do not include these details.
\end{Remark}
The above result yields the following blow-up criterion, which looks identical to the blow-up criterion for the 3D Euler-Voigt and 3D Navier-Stokes equations \cite{Larios_Titi_2009,Larios_Petersen_Titi_Wingate_2015} (see also \cite{Khouider_Titi_2008}).
\begin{Corollary}\label{blow_up}
Assume the hypotheses and the notation of Theorem \ref{T4}. Suppose that there is a $T>0$ and an $\epsilon>0$ such that
\begin{align}\label{blowup_criterion}
\sup_{t\in[0,T]}\limsup_{\alpha\to0^+}\alpha\|\nabla u\|_{L^2}\geq\epsilon>0.
\end{align}
Then solutions to the 3D Navier-Stokes equations with initial data $u_0$ develop a singularity on the time interval $[0,T]$, in the sense that there does not exist a strong solution $\widetilde u\in L^2(0,T;V\cap H^2)\cap C([0,T],V)$.
\end{Corollary}
\section{Proof of Theorem~\ref{T1}}
\label{sec3}
In this section, we provide the construction of a weak solution to system \eqref{Sys1} via Galerkin approximation. We also show that the obtained global weak solution is unique.
\subsection{\em Proof of global existence of weak solutions}
\label{subsec3-1}
Consider the following finite-dimensional Galerkin ODE system for \eqref{Sys1}.
\begin{subequations}\label{ODE}
\begin{empheq}[left=\empheqlbrace]{align}
\label{ODEu}
&\frac{d}{dt}(u_{M} + \alpha^2 Au_{M})
+
Au_{M}
+
P_{M}P_\sigma(w_{M}\times u_{M})
=
f_{M},
\\
\label{ODEw}
&\frac{d w_{M}}{dt}
+
A w_{M}
+
P_{M}B(u_{M}, w_{M})
-
P_{M}{B}(w_{M}, u_{M})
=
\nabla\times f_{M},
\end{empheq}
\end{subequations}
with initial data $u_{M}(0) = P_{M}u_0$, $w_{M}(0) = P_{M}w_0$ and forcing $f_{M} = P_{M}f$.
Notice that all the terms in \eqref{ODEu} and \eqref{ODEw} except the time-derivatives are at most quadratic, and thus, they are locally Lipschitz continuous.
Therefore, by the Picard-Lindel\"of Theorem, there exists a unique solution up to some time $T_{M} > 0$.
Next, we take the $L^2$ inner product of the above two equations with $u_{M}$ and $w_{M}$, respectively, integrate by parts, and obtain
\begin{subequations}\label{E}
\begin{empheq}[left=\empheqlbrace]{align}
\label{Eu}
&
\frac{1}{2}\frac{d}{dt}
\left( \Vert u_{M}\Vert_{L^2}^2 + \alpha^2\Vert\nabla u_{M}\Vert_{L^2}^2 \right)
+
\Vert\nabla u_{M}\Vert_{L^2}^2
=
\int_{\mathbb{T}^3}u_{M}\cdot f_{M}\,dx,
\\
\label{Ew}
&\frac{1}{2}\frac{d}{dt}\Vert w_{M}\Vert_{L^2}^2
+
\Vert\nabla w_{M}\Vert_{L^2}^2
=
\int_{\mathbb{T}^3}(w_{M}\cdot\nabla) u_{M}\cdot w_{M}\,dx
+
\int_{\mathbb{T}^3}(\nabla\times f_{M})\cdot w_{M}\,dx,
\end{empheq}
\end{subequations}
where we used $\nabla\cdot u_{M} = 0$ in \eqref{Ew}.
Then, by applying the Cauchy--Schwarz, Poincar\'e, and Young inequalities to the right-hand side of \eqref{Eu}, we obtain
\begin{align}
&
\frac{d}{dt}
\left( \Vert u_{M}\Vert_{L^2}^2 + \alpha^2\Vert\nabla u_{M}\Vert_{L^2}^2\right)
+
2\Vert\nabla u_{M}\Vert_{L^2}^2
\leq
2\Vert f_{M}\Vert_{L^2}\Vert u_{M}\Vert_{L^2}
\leq
\lambda_1^{-1}\Vert f_{M}\Vert_{L^2}^2
+
\Vert\nabla u_{M}\Vert_{L^2}^2.
\label{Ineq1_u}
\end{align}
Thus, for any $T\in(0,T_M]$, integrating in time, it holds that for a.e. $t\in(0,T)$,
\begin{align*}
\Vert u_{M}(t)\Vert_{L^2}^2 + \alpha^2\Vert\nabla u_{M}(t)\Vert_{L^2}^2
&\leq
C\Vert f_M\Vert_{L^2(0,T;H)}^2
+
\Vert u_{M}(0)\Vert_{L^2}^2 + \alpha^2\Vert\nabla u_{M}(0)\Vert_{L^2}^2
\\&\leq
C\Vert f\Vert_{L^2(0,T;H)}^2
+
\Vert u_0\Vert_{L^2}^2 + \alpha^2\Vert\nabla u_0\Vert_{L^2}^2
=:K_{T}.
\end{align*}
Since the right-hand side is finite, $u_M$ can be extended beyond $T_M$, so that the above inequality holds for arbitrary $T>0$ and a.e. $t\in(0,T)$.
In particular, the interval of existence is independent of $M$. Moreover, $\left\{u_M\right\}_{M=1}^\infty$ is uniformly bounded in $L^\infty(0,T;V)$. Using the Banach-Alaoglu Theorem and extracting a subsequence if necessary (which we relabel and still denote by $u_M$), we obtain a $u\in L^{\infty}(0, T; V)$ such that $u_M$ converges to $u$ in the weak-$*$ sense of $L^{\infty}(0, T; V)$.
Next, we estimate the right side of \eqref{Ew} and obtain
\begin{align}
\label{wM_L2_bound}
\frac12\frac{d}{dt}\Vert w_{M}\Vert_{L^2}^2
+
\Vert\nabla w_{M}\Vert_{L^2}^2
&\leq
C\Vert\nabla u_{M}\Vert_{L^2} \Vert w_{M}\Vert_{L^2}^{1/2} \Vert\nabla w_{M}\Vert_{L^2}^{3/2}
+
\tfrac12\Vert\nabla\times f_{M}\Vert_{L^2}^2
+
\tfrac12\Vert w_{M}\Vert_{L^2}^2
\notag
\\&
\leq
C\Vert\nabla u_{M}\Vert_{L^2}^4 \Vert w_{M}\Vert_{L^2}^{2}
+
\tfrac12\Vert\nabla w_{M}\Vert_{L^2}^2
+
\tfrac12\Vert\nabla\times f_{M}\Vert_{L^2}^2
+
\tfrac12\Vert w_{M}\Vert_{L^2}^2,
\end{align}
where we used Lemma~\ref{L1}.
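The second inequality in \eqref{wM_L2_bound} follows from the first by Young's inequality with conjugate exponents $4$ and $4/3$, applied with $b=\Vert\nabla w_{M}\Vert_{L^2}^{3/2}$ so that $b^{4/3}=\Vert\nabla w_{M}\Vert_{L^2}^{2}$; explicitly,
\begin{align*}
C\Vert\nabla u_{M}\Vert_{L^2}\Vert w_{M}\Vert_{L^2}^{1/2}\Vert\nabla w_{M}\Vert_{L^2}^{3/2}
\leq
C\Vert\nabla u_{M}\Vert_{L^2}^{4}\Vert w_{M}\Vert_{L^2}^{2}
+
\tfrac12\Vert\nabla w_{M}\Vert_{L^2}^{2}.
\end{align*}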
Then, after rearranging and using Gr\"onwall's inequality, it follows that
\begin{align*}
\Vert w_{M}(t)\Vert_{L^2}^2
&
\leq
\bar{K}_{T}
:=
e^{TC_{K_{T}}}\Vert w_{M}(0)\Vert_{L^2}^2 + \int_0^Te^{C_{K_{T}}(t-s)}\Vert\nabla\times f(s)\Vert_{L^2}^2\,ds<\infty,
\end{align*}
from which we conclude that $w_{M}$ can be extended beyond $T_{M}$ up to any $t < T$ and is uniformly bounded in $L^{\infty}(0, T; L^2)$. Using this fact and integrating \eqref{wM_L2_bound} on $[0,T]$, we also find that $w_{M}$ is uniformly bounded in $L^{2}(0, T; H^1)$. By similar arguments as above for $u_{M}$, we extract a weak-$*$ convergent subsequence, still denoted by $w_{M}$, with limit $w \in L^{\infty}(0, T; L^2)\cap L^{2}(0, T; H^1)$.
Also, by taking the divergence of \eqref{ODEw} and denoting $v_{M} := \nabla\cdot w_{M}$, we obtain
\begin{align}
&
\frac{d v_{M}}{d t}
+
A v_{M}
+
P_{M}B(u_{M}, v_{M})
=
0.
\label{Div-w}
\end{align}
Multiplying the above equation by $v_{M}$ and using \eqref{embd2} and \eqref{Poincare}, we obtain
\begin{align*}
&
\frac{1}{2}\frac{d}{d t} \Vert v_{M}\Vert_{L^2}^2
+
\lambda_1\Vert v_{M}\Vert_{L^2}^2
\leq
0.
\end{align*}
Then, Gr\"onwall's inequality the implies for a.e. $t\in(0,T)$,
$$\Vert v_{M}(t)\Vert_{L^2} \leq \Vert v_{M}(0)\Vert_{L^2}e^{-\lambda_1t}. $$
If the initial data $w_0$ is in $H$, we have $v_{M}(0) = 0$,
which implies $\nabla\cdot w_{M} = v_{M} = 0$ in $L^2([0,T];L^2)$. Since $L^2([0,T];H)$ is closed in $L^2([0,T];L^2)$, this also implies $\nabla\cdot w=0$, so long as $\nabla\cdot w_0=0$.
Namely, we have $w\in L^{\infty}(0, T; H)\cap L^{2}(0, T; V)$.
Note that \eqref{Div-w} also implies
\begin{align*}
&
\Vert v_{M}(T)\Vert_{L^2}^2
+
2\int_{0}^{T}\Vert\nabla v_{M}\Vert_{L^2}^2\,dt
\leq
\Vert v_{M}(0)\Vert_{L^2}^2,
\end{align*}
from which we obtain that $v_{M}$ is uniformly bounded in $L^{2}(0, T; H^1)$.
We consider the pair $(u, w)$ as our candidate solution.
Next, we obtain bounds on
$d u_{M}/dt$ in $L^{2}(0, T; V)$ and bounds on $d w_{M}/dt$ in $L^{2}(0, T; V')$, uniformly with respect to $M$.
Note that
\begin{subequations}\label{Time}
\begin{empheq}[left=\empheqlbrace]{align}
\label{Time_u}
&
(I + \alpha^2A)\frac{d u_{M}}{d t}
=
-
Au_{M}
-
P_{M}P_\sigma(w_{M}\times u_{M})
+
f_{M},
\\&
\label{Time_w}
\frac{d w_{M}}{dt}
=
-A w_{M}
-
P_{M}B(u_{M}, w_{M})
+
P_{M}{B}(w_{M}, u_{M})
+
\nabla\times f_{M}.
\end{empheq}
\end{subequations}
Note in \eqref{Time_u} that $-Au_{M}$ is uniformly bounded in $L^{2}(0, T; V')$
due to the fact that $u_{M}$ is also uniformly bounded in $L^{2}(0, T; V)$.
On the other hand, by Lemma~\ref{L1}, we have
\begin{align*}
\left|\int_{\mathbb{T}^3}P_M(u_{M}\times w_{M})\cdot \phi\,dx\right|
&=
\left|\int_{\mathbb{T}^3}(u_{M}\times w_{M})\cdot P_M\phi\,dx\right|
\leq
C\Vert u_{M}\Vert_{L^2}^{1/2} \Vert\nabla u_{M}\Vert_{L^2}^{1/2} \Vert w_{M}\Vert_{L^2} \Vert P_M\phi\Vert_{H^1}
\\&\leq
C\Vert u_{M}\Vert_{L^2}^{1/2} \Vert\nabla u_{M}\Vert_{L^2}^{1/2} \Vert w_{M}\Vert_{L^2} \Vert \phi\Vert_{H^1},
\end{align*}
for all test functions $\phi\in V$.
Thus, $P_{M}(u_{M}\times w_{M})$ is also uniformly bounded in $L^{2}(0, T; V')$.
It is easily seen that,
uniformly in $M$, $f_{M}$ and $\nabla\times f_{M}$ are bounded in $L^{2}(0, T; V')$,
as well as $Au_{M}$ and $A w_{M}$.
Therefore,
$$(I + \alpha^2 A)\frac{d u_{M}}{d t} \text{\quad is uniformly bounded in\quad} L^2(0, T; V').$$
By inverting the Helmholtz operator $(I+\alpha^2 A)$ with respect to zero-mean, periodic boundary conditions, we obtain
$$\frac{d u_{M}}{d t} \text{\quad is uniformly bounded in\quad} L^2(0, T; V).
$$
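The gain of two derivatives from inverting $I+\alpha^{2}A$ is transparent in Fourier variables: for mean-free, divergence-free $\phi=\sum_{k\neq0}\hat{\phi}_{k}e^{ik\cdot x}$ (with the torus normalized so that $k$ ranges over integer vectors),
\begin{align*}
(I+\alpha^{2}A)^{-1}\phi
= \sum_{k\neq0}\frac{\hat{\phi}_{k}}{1+\alpha^{2}|k|^{2}}\,e^{ik\cdot x},
\qquad\text{so that}\qquad
\Vert(I+\alpha^{2}A)^{-1}\phi\Vert_{V}
\leq \alpha^{-2}\Vert\phi\Vert_{V'},
\end{align*}
since $|k|^{2}/(1+\alpha^{2}|k|^{2})\leq\alpha^{-2}$ for every $k\neq0$.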
Next, notice in \eqref{Time_w} that $A w_{M}$ is bounded in $L^{2}(0, T; H^{-1}) \subset L^{2}(0, T; V')$,
and for the nonlinear terms,
we integrate by parts and use Lemma~\ref{L1} in order to get
\begin{align*}
\left|\int_{\mathbb{T}^3}P_M((u_{M}\cdot\nabla) w_{M})\cdot \phi\,dx\right|
&=
\left|\int_{\mathbb{T}^3}(u_{M}\cdot\nabla) w_{M}\cdot P_M\phi\,dx\right|
\leq
C\Vert u_{M}\Vert_{L^2}^{1/2} \Vert\nabla u_{M}\Vert_{L^2}^{1/2} \Vert w_{M}\Vert_{H^1} \Vert\phi\Vert_{H^1}
\end{align*}
for all test functions $\phi\in V$.
Therefore, $-P_{M}B(u_{M}, w_{M})$ is uniformly bounded in $L^{2}(0, T; V')$.
Similar estimates show that $P_{M}{B}(w_{M}, u_{M})$ is also bounded in $L^{2}(0, T; V')$, uniformly in $M$.
Thus, we get
$$\frac{d w_{M}}{d t} \text{\quad is uniformly bounded in\quad} L^2(0, T; V').$$
Also, by the bounds we obtained above and Lemma~\ref{L2},
there is a subsequence, still labeled as $(u_{M}, w_{M})$ that satisfies
\begin{subequations}
\begin{align}
\label{Convergence1}
u_{M} \rightarrow u \text{\quad strongly in \quad} L^2(0, T; V) \text{\quad and\quad} &w_{M} \rightarrow w \text{\quad strongly in \quad} L^2(0, T; H),
\\
\label{Convergence2}
&w_{M} \rightharpoonup w \text{\quad weakly in\quad} L^2(0, T; V),
\\
\label{Convergence3}
u_{M} \rightharpoonup u \text{\quad weak-$\ast$ in\quad} L^{\infty}([0, T]; V) \text{\quad and\quad} &w_{M} \rightharpoonup w \text{\quad weak-$\ast$ in\quad} L^{\infty}(0, T; H),
\end{align}
\end{subequations}
for all $T>0$.
Hence, for $0<\bar{T}\leq\widetilde{T}<T$, by taking inner products of \eqref{ODEu} and \eqref{ODEw} with test function $\psi \in C_{c}^1([0, \widetilde{T}); V)$, integrating in time over $[0, \bar{T}]$, and integrating by parts, we obtain
\begin{subequations}\label{Weak}
\begin{empheq}[left=\empheqlbrace]{align}
\label{Weak_u}
&-\int_{0}^{\bar{T}}(u_{M}, \psi_{t})\,dt
+
(u_{M}(\cdot, \bar{T}), \psi(\cdot, \bar{T}))
-
(u_{M}(\cdot, 0), \psi(\cdot, 0)) \nonumber
\\&\quad
-
\alpha^2\int_{0}^{\bar{T}}((u_{M}, \psi_{t}))\,dt
-
\alpha^2((u_{M}(\cdot, \bar{T}), \psi(\cdot, \bar{T})))
+
\alpha^2((u_{M}(\cdot, 0), \psi(\cdot, 0))) \nonumber
\\&\quad\quad
+
\int_{0}^{\bar{T}}((u_{M}, \psi))\,dt
+
\int_{0}^{\bar{T}}\left<(w_{M}\times u_{M}), P_{M}\psi\right>\,dt
=
\int_{0}^{\bar{T}}(f_{M}, \psi)\,dt,
\\
\label{Weak_w}
&-\int_{0}^{\bar{T}}(w_{M}, \psi_{t})\,dt
+
(w_{M}(\cdot, \bar{T}), \psi(\cdot, \bar{T}))
-
(w_{M}(\cdot, 0), \psi(\cdot, 0))
+
\int_{0}^{\bar{T}}((w_{M}, \psi))\,dt \nonumber
\\&\quad
+
\int_{0}^{\bar{T}}\left<B(u_{M}, w_{M}), P_{M}\psi\right>\,dt
-
\int_{0}^{\bar{T}}\left<{B}(w_{M}, u_{M}), P_{M}\psi\right>\,dt
=
\int_{0}^{\bar{T}}(f_{M}, \nabla\times\psi)\,dt.
\end{empheq}
\end{subequations}
Using the standard arguments from the theory of the Navier-Stokes equations (see, e.g., \cite{Constantin_Foias_1988, Temam_2001_Th_Num}),
we have that each of the integrals in \eqref{Weak_u} and \eqref{Weak_w} converges to the time integral of the corresponding term in \eqref{Weak_def}.
For the sake of completeness, we provide the details below.
First, convergence of the integrals of the linear terms follows from \eqref{Convergence1}, together with $f\in L^2(0,T;H)$.
The choice of $\psi$ and \eqref{Convergence1} imply the convergence of the boundary terms at $t=0$ and $t=\bar{T}$ in both \eqref{Weak_u} and \eqref{Weak_w}.
Regarding the nonlinear term in \eqref{Weak_u}, we have
\begin{align*}
&\left| \int_{0}^{\bar{T}}\left<(w_{M}\times u_{M}), P_{M}\psi\right>\,dt - \int_{0}^{\bar{T}}\left<(w\times u), \psi\right>\,dt \right|
\\&
\leq
\int_{0}^{\bar{T}}\left|\left<(w_{M}\times (u_{M}-u)), P_{M}\psi\right>\right|\,dt
+
\int_{0}^{\bar{T}}\left|\left<(w_{M}-w)\times u, P_{M}\psi\right>\right|\,dt
\\&\quad
+
\int_{0}^{\bar{T}}\left|\left<w\times u, P_{M}\psi - \psi\right>\right|\,dt
\\&
\leq
\Vert w_{M}\Vert_{L^2(L^2)} \Vert u-u_{M}\Vert_{L^2(H^1)} \Vert\psi\Vert_{L^{\infty}(L^2)}^{1/2} \Vert\psi\Vert_{L^{\infty}(H^1)}^{1/2}
\\&\quad
+
\Vert w_{M}-w\Vert_{L^2(L^2)} \Vert u\Vert_{L^2(H^1)} \Vert\psi\Vert_{L^{\infty}(L^2)}^{1/2} \Vert\psi\Vert_{L^{\infty}(H^1)}^{1/2}
\\&\quad
+
\Vert w\Vert_{L^2(H^1)} \Vert u\Vert_{L^{\infty}(L^2)}^{1/2}\Vert u\Vert_{L^{\infty}(H^1)}^{1/2} \Vert P_{M}\psi - \psi\Vert_{L^2(L^2)}
\end{align*}
which converges to $0$ in view of \eqref{Convergence1} and the uniform bounds on $w_{M}$ and $u$.
As for the first nonlinear term in \eqref{Weak_w}, we use \eqref{embd1} and estimate it as
\begin{align*}
&\left| \int_{0}^{\bar{T}}\left<B(u_{M}, P_{M}\psi), w_{M}\right>\,dt - \int_{0}^{\bar{T}}\left<B(u, \psi), w\right>\,dt\right|
\\&
\leq
\int_{0}^{\bar{T}}\left|\left<B(u_{M}, P_{M}\psi - \psi), w_{M}\right>\right|\,dt
+
\int_{0}^{\bar{T}}\left|\left<B(u_{M}-u, \psi), w_{M}\right>\right|\,dt
\\&\quad
+
\left|\int_{0}^{\bar{T}}\left<B(u_{M}, \psi), w_{M}-w\right>\,dt\right|
\\&
\leq
\Vert u_{M}\Vert_{L^2(H^1)} \Vert P_{M}\psi-\psi\Vert_{L^{\infty}(H^1)} \Vert w_{M}\Vert_{L^2(L^2)}^{1/2} \Vert w_{M}\Vert_{L^2(H^1)}^{1/2}
\\&\quad
+
\Vert u_{M}-u\Vert_{L^2(H^1)} \Vert\psi\Vert_{L^{\infty}(H^1)} \Vert w_{M}\Vert_{L^2(L^2)}^{1/2} \Vert w_{M}\Vert_{L^2(H^1)}^{1/2}
\\&\quad
+
\Vert u_{M}\Vert_{L^{2}(H^1)} \Vert \psi\Vert_{L^{\infty}(H^1)} \Vert w_{M}-w\Vert_{L^2(L^2)}^{1/2} \Vert w_{M}-w\Vert_{L^2(H^1)}^{1/2}
\\&
\rightarrow 0 \text{\quad as\quad } M\rightarrow\infty,
\end{align*}
where we used \eqref{Convergence2} and the uniform boundedness of $w_{M}$ in $H$, and integrated by parts in the last integral.
Finally, convergence of the second nonlinear term in \eqref{Weak_w} is obtained as
\begin{align*}
&\left| \int_{0}^{\bar{T}}\left<{B}(w_{M}, u_{M}), P_{M}\psi\right>\,dt - \int_{0}^{\bar{T}}\left<{B}(w, u), \psi\right>\,dt\right|
\\&
\leq
\int_{0}^{\bar{T}}\left|\left<{B}(w_{M}, u_{M}), P_{M}\psi-\psi\right>\right|\,dt
+
\int_{0}^{\bar{T}}\left|\left<{B}(w_{M}, \psi), u_{M}-u\right>\right|\,dt
\\&\quad
+
\int_{0}^{\bar{T}}\left|\left<{B}(w_{M}-w, u), \psi\right>\right|\,dt
\\&
\leq
\Vert w_{M}\Vert_{L^2(L^{2})}^{1/2}\Vert w_{M}\Vert_{L^2(H^{1})}^{1/2} \Vert u_{M}\Vert_{L^{2}(H^2)} \Vert P_{M}\psi-\psi\Vert_{L^{\infty}(L^2)}
\\&\quad
+
\Vert w_{M}\Vert_{L^2(L^2)}^{1/2}\Vert w_{M}\Vert_{L^2(H^1)}^{1/2} \Vert\psi\Vert_{L^{\infty}(H^1)} \Vert u_{M}-u\Vert_{L^{2}(H^1)}
\\&\quad
+
\Vert w_{M}-w\Vert_{L^2(L^{2})} \Vert u\Vert_{L^{2}(H^2)} \Vert\psi\Vert_{L^{\infty}(L^2)}^{1/2} \Vert\psi\Vert_{L^{\infty}(H^1)}^{1/2}
\\&
\rightarrow 0 \text{\quad as\quad } M\rightarrow\infty,
\end{align*}
due to \eqref{Convergence2} and the uniform boundedness of $w_{M}$.
Similar arguments also apply to equation \eqref{Div-w} for $v_{M}:=\nabla\cdot w_{M}$.
Namely, each term in
\begin{align*}
&-\int_{0}^{\bar{T}}(v_{M}, \psi_{t})\,dt
+
(v_{M}(\cdot, \bar{T}), \psi(\cdot, \bar{T}))
-
(v_{M}(\cdot, 0), \psi(\cdot, 0))
\\&\quad
+
\int_{0}^{\bar{T}}((v_{M}, \psi))\,dt
+
\int_{0}^{\bar{T}}\left<B(u_{M}, v_{M}), P_{M}\psi\right>\,dt
=
0
\end{align*}
converges, in view of \eqref{Div-w}, to the time integral of the corresponding term in the following weak formulation for $v$,
\begin{align*}
&
\left<v_{t}, \psi\right>
+
((v, \psi))
+
\left<B(u, v), \psi\right>
=
0.
\end{align*}
Specifically, the linear terms converge due to the fact that $v_{M}$ is bounded in $L^{2}(0, T; H^1)$ from \eqref{Div-w},
while the convergence of the nonlinear term follows similarly to that in \eqref{Weak_w}, i.e.,
after integration by parts, we have
\begin{align*}
&\left| \int_{0}^{\bar{T}}\left<B(u_{M}, P_{M}\psi), v_{M}\right>\,dt - \int_{0}^{\bar{T}}\left<B(u, \psi), v\right>\,dt\right|
\\&
\leq
\int_{0}^{\bar{T}}\left|\left<B(u_{M}, P_{M}\psi - \psi), v_{M}\right>\right|\,dt
+
\int_{0}^{\bar{T}}\left|\left<B(u_{M}-u, \psi), v_{M}\right>\right|\,dt
\\&\quad
+
\left|\int_{0}^{\bar{T}}\left<B(u_{M}, \psi), v_{M}-v\right>\,dt\right|
\\&
\leq
\Vert u_{M}\Vert_{L^2(H^1)} \Vert P_{M}\psi-\psi\Vert_{L^{\infty}(H^1)} \Vert v_{M}\Vert_{L^2(L^2)}^{1/2} \Vert v_{M}\Vert_{L^2(H^1)}^{1/2}
\\&\quad
+
\Vert u_{M}-u\Vert_{L^2(H^1)} \Vert\psi\Vert_{L^{\infty}(H^1)} \Vert v_{M}\Vert_{L^2(L^2)}^{1/2} \Vert v_{M}\Vert_{L^2(H^1)}^{1/2}
\\&\quad
+
\Vert u_{M}\Vert_{L^{2}(H^1)} \Vert \psi\Vert_{L^{\infty}(H^1)} \Vert v_{M}-v\Vert_{L^2(L^2)}^{1/2} \Vert v_{M}-v\Vert_{L^2(H^1)}^{1/2}
\\&
\rightarrow 0 \text{\quad as\quad } M\rightarrow\infty,
\end{align*}
where we also used $v_M\rightarrow v$ in $L^2(0, T; L^2)$ as $M\rightarrow \infty$, as well as \eqref{Convergence1}.
Note that since $v_{M}=0$ in $L^2([0,T];L^2)$,
we have $v(t) = \nabla\cdot w(t) = 0$ for a.e. $t<T$.
Now, all of the above convergence results remain valid if we take $\psi=\chi\,\phi(t)$, where $\chi\in C^{\infty}$ and $\phi\in C^{1}([0, \widetilde{T}])$. In particular, the convergence holds for all $\phi\in \mathcal{D}([0,\widetilde{T}])$; thus, \eqref{Weak_def} holds in the sense of distributions, which in turn implies that \eqref{Sys1_fun} is valid as an equation of operators.
Namely, in view of the embedding $V \hookrightarrow H \hookrightarrow V'$, we conclude that equations \eqref{Sys1} hold in the weak sense by Lemma~\ref{L2},
while the pressure term $p$ is recovered by the approach mentioned in Section~\ref{sec2}.
Finally, by integrating in time over $[\bar{t}, \bar{T}]$ for $0\leq \bar{t} < \bar{T}$ and sending $\bar{T} \to \bar{t}$, one can use similar convergence arguments as above (cf. \cite{Constantin_Foias_1988, Temam_2001_Th_Num}) to show that $(u, w)$ is in fact weakly continuous in time with values in $V\times H$, so that the initial condition is satisfied in the weak sense.
Since $T>0$ is arbitrary, the existence in Theorem~\ref{T1} is thus proven.
\subsection{\em Proof of uniqueness}
\label{subsec3-2}
Suppose there exist two solutions $(u, w, p)$ and $(\bar{u}, \bar{w}, \bar{p})$ to system \eqref{Sys1} with the same initial data $u_0\in V$, $w_0\in H$, and the same forcing $f$, on their common time interval of existence $(0, T)$.
Subtracting the equations satisfied by the two solutions
and denoting $\widetilde{u} = u - \bar{u}$, $\widetilde{w} = w - \bar{w}$, and $\widetilde{p} = p- \bar{p}$,
we obtain
\begin{equation}
\left\{
\begin{aligned}
&(I - \alpha^2\Delta)\frac{\partial \widetilde{u}}{\partial t}
-
\Delta \widetilde{u}
+
\widetilde{w} \times u
+
\bar{w} \times \widetilde{u}
+
\nabla \widetilde{p}
=
0,
\\&
\frac{\partial \widetilde{w}}{\partial t}
-
\Delta \widetilde{w}
+
(u\cdot\nabla) \widetilde{w}
+
(\widetilde{u}\cdot\nabla) \bar{w}
-
(w\cdot\nabla) \widetilde{u}
-
(\widetilde{w}\cdot\nabla) \bar{u}
=
0,
\\&
\nabla \cdot \widetilde{u} = 0,
\\&
\widetilde{u}(\cdot, 0) = 0
\quad\text{ and }\quad
\widetilde{w}(\cdot, 0) = 0.
\end{aligned}
\right.
\label{Diff}
\end{equation}
Multiplying the two equations by $\widetilde{u}$ and $\widetilde{w}$, respectively,
integrating by parts over $\mathbb{T}^3$, and adding, we obtain
\begin{align}
&
\frac{1}{2}\frac{d}{d t}\left(\Vert\widetilde{u}\Vert_{L^2}^2 + \alpha^2\Vert\nabla\widetilde{u}\Vert_{L^2}^2 + \Vert\widetilde{w}\Vert_{L^2}^2\right)
+
\Vert\nabla\widetilde{u}\Vert_{L^2}^2
+
\Vert\nabla\widetilde{w}\Vert_{L^2}^2 \nonumber
\\&
=
-
\int_{\mathbb{T}^3} (\widetilde{w}\times u)\cdot \widetilde{u}\,dx
-
\int_{\mathbb{T}^3} (\widetilde{u}\cdot\nabla)\bar{w}\cdot \widetilde{w}\,dx
-
\int_{\mathbb{T}^3} (w\cdot\nabla)\widetilde{u}\cdot \widetilde{w}\,dx
+
\int_{\mathbb{T}^3} (\widetilde{w}\cdot\nabla)\bar{u}\cdot \widetilde{w}\,dx,
\label{Diff_eq}
\end{align}
where we used $\nabla\cdot u = \nabla\cdot \bar{u} = \nabla\cdot \widetilde{u} = 0$.
Next, we estimate the four terms on the right side of \eqref{Diff_eq}.
By Lemma~\ref{L1}, H\"older's and Young's inequalities,
the first integral is bounded by
\begin{align*}
\int_{\mathbb{T}^3} |u| |\widetilde{u}| |\widetilde{w}|\,dx
&
\leq
C\Vert u\Vert_{L^2} \Vert\widetilde{u}\Vert_{L^2}^{1/2} \Vert\nabla\widetilde{u}\Vert_{L^2}^{1/2} \Vert\nabla\widetilde{w}\Vert_{L^2}
\leq
C\Vert u\Vert_{L^2}^2 \Vert\widetilde{u}\Vert_{L^2} \Vert\nabla\widetilde{u}\Vert_{L^2}
+
\frac{1}{8}\Vert\nabla\widetilde{w}\Vert_{L^2}^2
\\&
\leq
C_{\alpha}\Vert u\Vert_{L^2}^2\left( \Vert\widetilde{u}\Vert_{L^2}^2 + \alpha^2\Vert\nabla\widetilde{u}\Vert_{L^2}^2 \right)
+
\frac{1}{8}\Vert\nabla\widetilde{w}\Vert_{L^2}^2.
\end{align*}
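For the reader's convenience, the last step above (which produces the constant $C_{\alpha}$) uses the elementary bound, valid for any $\alpha>0$,
\begin{align*}
\Vert\widetilde{u}\Vert_{L^2} \Vert\nabla\widetilde{u}\Vert_{L^2}
\leq
\frac{1}{2\alpha}\left(\Vert\widetilde{u}\Vert_{L^2}^2 + \alpha^2\Vert\nabla\widetilde{u}\Vert_{L^2}^2\right),
\end{align*}
which follows from $ab\leq\frac{1}{2}(a^2+b^2)$ with $a = \alpha^{-1/2}\Vert\widetilde{u}\Vert_{L^2}$ and $b = \alpha^{1/2}\Vert\nabla\widetilde{u}\Vert_{L^2}$.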
By similar estimates, the second integral is bounded by
\begin{align*}
\int_{\mathbb{T}^3} |\bar{w}| |\widetilde{u}| |\nabla\widetilde{w}|\,dx
&
\leq
C\Vert\bar{w}\Vert_{H^1} \Vert\nabla\widetilde{u}\Vert_{L^2} \Vert\nabla\widetilde{w}\Vert_{L^2}
\leq
C_{\alpha} \Vert\bar{w}\Vert_{H^1}^2 \left(\alpha^2 \Vert\nabla\widetilde{u}\Vert_{L^2}^2\right)
+
\frac{1}{8} \Vert\nabla\widetilde{w}\Vert_{L^2}^2,
\end{align*}
where we integrated by parts and used $\nabla\cdot\widetilde{u} = 0$.
By Lemma~\ref{L1}, the third integral is bounded by
\begin{align*}
\int_{\mathbb{T}^3} |w| |\nabla\widetilde{u}| |\widetilde{w}|\,dx
&
\leq
C\Vert w\Vert_{H^1} \Vert\nabla\widetilde{u}\Vert_{L^2} \Vert\widetilde{w}\Vert_{L^2}^{1/2} \Vert\nabla\widetilde{w}\Vert_{L^2}^{1/2}
\leq
C\Vert w\Vert_{H^1}^{4/3} \Vert\nabla\widetilde{u}\Vert_{L^2}^{4/3} \Vert\widetilde{w}\Vert_{L^2}^{2/3}
+
\frac{1}{8} \Vert\nabla\widetilde{w}\Vert_{L^2}^2
\\&
\leq
C_{\alpha} \Vert w\Vert_{H^1}^2 \left(\alpha^2 \Vert\nabla\widetilde{u}\Vert_{L^2}^2\right)
+
C_{\alpha} \Vert\widetilde{w}\Vert_{L^2}^2
+
\frac{1}{8} \Vert\nabla\widetilde{w}\Vert_{L^2}^2,
\end{align*}
where we applied H\"older's and Young's inequalities.
As for the last integral, we obtain the upper bound analogously as
\begin{align*}
\int_{\mathbb{T}^3} |\nabla\bar{u}| |\widetilde{w}|^2\,dx
&
\leq
C\Vert\bar{u}\Vert_{H^1} \Vert\widetilde{w}\Vert_{L^2}^{1/2} \Vert\nabla\widetilde{w}\Vert_{L^2}^{3/2}
\leq
C\Vert\bar{u}\Vert_{H^1}^4 \Vert\widetilde{w}\Vert_{L^2}^{2}
+
\frac{1}{8} \Vert\nabla\widetilde{w}\Vert_{L^2}^2.
\end{align*}
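The last bound, as well as the analogous step in the previous estimate, uses Young's inequality with conjugate exponents $4$ and $4/3$: for $a, b\geq 0$ and any $\varepsilon>0$,
\begin{align*}
ab^{3/2} \leq C_{\varepsilon}\,a^{4} + \varepsilon\, b^{2},
\end{align*}
applied here with $a = \Vert\bar{u}\Vert_{H^1}\Vert\widetilde{w}\Vert_{L^2}^{1/2}$, $b = \Vert\nabla\widetilde{w}\Vert_{L^2}$, and $\varepsilon = 1/8$.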
Summing up all the above estimates,
we obtain
\begin{align*}
&
\frac{d}{d t}\left(\Vert\widetilde{u}\Vert_{L^2}^2 + \alpha^2\Vert\nabla\widetilde{u}\Vert_{L^2}^2 + \Vert\widetilde{w}\Vert_{L^2}^2\right)
+
\Vert\nabla\widetilde{u}\Vert_{L^2}^2
+
\Vert\nabla\widetilde{w}\Vert_{L^2}^2
\leq
M_{\alpha}\left(\Vert\widetilde{u}\Vert_{L^2}^2 + \alpha^2\Vert\nabla\widetilde{u}\Vert_{L^2}^2 + \Vert\widetilde{w}\Vert_{L^2}^2\right).
\end{align*}
Here $M_{\alpha}:=M_{1} + M_{2}\left(\Vert w\Vert_{H^1}^2+\Vert\bar{w}\Vert_{H^1}^2\right)$, where $M_1$ depends on $\Vert\bar{u}\Vert_{H^1}^4$ and $\Vert u\Vert_{L^2}^2$, which are bounded, while $M_2$ is an absolute constant. Therefore, by Gr\"onwall's inequality and $w, \bar{w}\in L^{2}(0, T; V)$, we conclude that
$$\Vert\widetilde{u}(t)\Vert_{L^2}^2 + \Vert\widetilde{w}(t)\Vert_{L^2}^2 = 0,$$
since $\widetilde{u}_0 = \widetilde{w}_0 = 0$.
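Explicitly, setting $Z(t) := \Vert\widetilde{u}(t)\Vert_{L^2}^2 + \alpha^2\Vert\nabla\widetilde{u}(t)\Vert_{L^2}^2 + \Vert\widetilde{w}(t)\Vert_{L^2}^2$, the differential inequality above gives
\begin{align*}
Z(t) \leq Z(0)\exp\left(\int_0^t M_{\alpha}(s)\,ds\right) = 0, \qquad 0\leq t < T,
\end{align*}
where the exponent is finite precisely because $w, \bar{w}\in L^{2}(0, T; V)$ makes $M_{\alpha}$ integrable in time.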
Namely, we have $u(t) = \bar{u}(t)$ and $w(t) = \bar{w}(t)$. Finally, setting $\psi = u\in L^2(0,T;V)$ in Definition \ref{Def1}, using Lemma~\ref{Lemma_Lions_Magenes_type}, and integrating in time, we obtain \eqref{energy_equality_u}.
The proof of Theorem~\ref{T1} is thus complete.
\section{Proof of Theorem~\ref{T2}}
\label{sec4}
In this section, we show that system \eqref{Sys1} has a unique global strong solution and provide the {\it{a priori}} estimates for the higher-order regularity of this solution $(u, w)$ to \eqref{Sys1}, with $u_0, w_0 \in V$. In view of Definition~\ref{Def2}, it suffices to prove the uniform boundedness of $w_{M}$ in $V$.
\subsection{\em Proof of global existence of strong solutions}
\label{subsec4-1}
To begin, we multiply \eqref{ODEw} by $Aw_{M}$, integrate by parts over $\mathbb{T}^3$, and obtain
\begin{align}
\frac{1}{2}\frac{d}{d t}\Vert\nabla w_{M}\Vert_{L^2}^2
+
\Vert Aw_{M}\Vert_{L^2}^2 \nonumber
&=
\int_{\mathbb{T}^3} (u_{M}\cdot\nabla)w_{M}\cdot Aw_{M}\,dx
-
\int_{\mathbb{T}^3} (w_{M}\cdot\nabla)u_{M}\cdot Aw_{M}\,dx \nonumber
\\&\quad
-
\int_{\mathbb{T}^3} (\nabla\times f_{M})\cdot Aw_{M}\,dx.
\label{Eq3}
\end{align}
Then, we estimate the three terms on the right side of \eqref{Eq3}.
For the first term, we integrate by parts and apply Lemma~\ref{L1} as
\begin{align*}
\int_{\mathbb{T}^3} (u_{M}\cdot\nabla)w_{M}\cdot Aw_{M}\,dx
&\leq
C\Vert\nabla u_{M}\Vert_{L^2} \Vert\nabla w_{M}\Vert_{L^2}^{1/2} \Vert Aw_{M}\Vert_{L^2}^{3/2}
\\&
\leq
C\Vert\nabla u_{M}\Vert_{L^2}^{4} \Vert\nabla w_{M}\Vert_{L^2}^{2}
+
\frac{1}{8}\Vert Aw_{M}\Vert_{L^2}^2.
\end{align*}
By \eqref{Agmon}, the second term is bounded by
\begin{align*}
\int_{\mathbb{T}^3}|w_{M}| |\nabla u_{M}| |Aw_{M}|\,dx
&\leq
C\Vert w_{M}\Vert_{L^{\infty}} \Vert\nabla u_{M}\Vert_{L^2} \Vert Aw_{M}\Vert_{L^2}
\leq
C \Vert\nabla u_{M}\Vert_{L^2} \Vert\nabla w_{M}\Vert_{L^2}^{1/2} \Vert Aw_{M}\Vert_{L^2}^{3/2}
\\&
\leq
C\Vert\nabla u_{M}\Vert_{L^2}^4\Vert\nabla w_{M}\Vert_{L^2}^2
+
\frac{1}{8}\Vert Aw_{M}\Vert_{L^2}^2.
\end{align*}
By H\"older's inequality, the last term is bounded by
\begin{align*}
\int_{\mathbb{T}^3}|\nabla\times f_{M}| |Aw_{M}|\,dx
&\leq
\Vert\nabla\times f_{M}\Vert_{L^2} \Vert Aw_{M}\Vert_{L^2}
\leq
C\Vert\nabla\times f\Vert_{L^2}^2
+
\frac{1}{8}\Vert Aw_{M}\Vert_{L^2}^2.
\end{align*}
Combining all the above estimates and denoting
$$X_{M}(t) = \Vert\nabla w_{M}(t)\Vert_{L^2}^2 \text{\quad and \quad} Y_{M}(t) = \Vert Aw_{M}(t)\Vert_{L^2}^2,$$
for $0\leq t\leq T$,
we get
\begin{align*}
\frac{d}{d t}X_{M}(t) + Y_{M}(t)
\leq
CX_{M}(t) + C\Vert f\Vert_{H_{\text{curl}}}^2.
\end{align*}
Thus, the uniform bound on $u_{M}$ in $V$ and Gr\"onwall's inequality imply that $w_{M}$ is uniformly bounded in $L^{\infty}(0, T; V)$.
Integrating in time over $[0, T]$, we also have that $w_{M}$ is uniformly bounded in $L^2(0, T; D(A))$.
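In quantitative form, and assuming, as is standard, that the Galerkin scheme is initialized with $w_{M}(0)=P_{M}w_0$, Gr\"onwall's inequality yields, for $0\leq t\leq T$,
\begin{align*}
X_{M}(t) + \int_0^{t} Y_{M}(s)\,ds
\leq
\left( X_{M}(0) + Ct\Vert f\Vert_{H_{\text{curl}}}^2 \right)e^{Ct}
\leq
\left( \Vert w_0\Vert_{V}^2 + CT\Vert f\Vert_{H_{\text{curl}}}^2 \right)e^{CT},
\end{align*}
where we used $X_{M}(0)=\Vert\nabla P_{M}w_0\Vert_{L^2}^2 \leq \Vert w_0\Vert_{V}^2$; the right side is independent of $M$.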
As for the time derivative of $w_{M}$ in \eqref{Time_w},
we use the fact that $w_{M}$ are bounded in $L^{2}(0, T; D(A))$
and all the nonlinear terms are bounded in $L^{2}(0, T; L^2)$, and conclude that
$$\frac{d w_{M}}{d t} \text{\quad is uniformly bounded in\quad} L^2(0, T; H).$$
A standard argument (see, e.g., \cite{Temam_2001_Th_Num}) then shows that $dw/dt\in L^2(0, T; H)$, thanks to the convergence properties of $w_M$ established above.
Therefore, we obtain the global existence of strong solutions.
\subsection{\em Proof of higher regularity}
\label{subsec4-3}
In this subsection, we first provide the $H^2$ and $H^3$ {\it{a priori}} estimates for $u$ and $w$,
then we obtain the $H^s$ bounds on $u$ and $w$ for $s\geq 4$.
\subsubsection{\em $H^2$ bounds}
\label{subsubsec4-3-1}
First, we provide the $H^2$ {\it a priori} estimates for $u$ with initial data $u_0\in D(A)$. We work formally for the sake of clarity; the following estimates can be justified at the Galerkin level by arguments similar to those in Section~\ref{subsec3-1} and Section~\ref{subsec4-1}.
We begin by multiplying \eqref{Sys1_u} by $-\Delta u$ and integrating by parts over $\mathbb{T}^3$, obtaining
\begin{align}
&
\frac{1}{2}\frac{d}{d t}\left(\Vert\nabla u\Vert_{L^2}^2 + \alpha^2\Vert \Delta u\Vert_{L^2}^2\right)
+
\Vert \Delta u\Vert_{L^2}^2
=
\int_{\mathbb{T}^3} w\times u\cdot\Delta u\,dx
+
\int_{\mathbb{T}^3} f\cdot\Delta u\,dx,
\label{Eq2}
\end{align}
where we used $\nabla\cdot u=0$.
By Lemma~\ref{L1}, the first term on the right side of \eqref{Eq2} is bounded by
\begin{align*}
\int_{\mathbb{T}^3}|w| |u| |\Delta u|\,dx
&\leq
C\Vert w\Vert_{L^2}^{1/2} \Vert\nabla w\Vert_{L^2}^{1/2} \Vert\nabla u\Vert_{L^2} \Vert \Delta u\Vert_{L^2}
\leq
C\Vert w\Vert_{L^2} \Vert\nabla w\Vert_{L^2} \Vert\nabla u\Vert_{L^2}^2
+
\frac{1}{4} \Vert\Delta u\Vert_{L^2}^2,
\end{align*}
where we used Young's inequality.
By H\"older's inequality, we bound the second term by
\begin{align*}
\int_{\mathbb{T}^3}|f| |\Delta u|\,dx
&\leq
\Vert f\Vert_{L^2} \Vert \Delta u\Vert_{L^2}
\leq
C\Vert f\Vert_{L^2}^2
+
\frac{1}{4} \Vert \Delta u\Vert_{L^2}^2.
\end{align*}
Combining all the above estimates and denoting
$$\bar{X}(t) = \Vert\nabla u(t)\Vert_{L^2}^2 + \alpha^2\Vert \Delta u(t)\Vert_{L^2}^2 \text{\quad and \quad} \bar{Y}(t) = \Vert \Delta u(t)\Vert_{L^2}^2,$$
for $0\leq t\leq T$,
we arrive at
\begin{align*}
\frac{d}{d t}\bar{X}(t) + \bar{Y}(t)
\leq
K_1\bar{X}(t) + C\Vert f\Vert_{L^2}^2,
\end{align*}
where the constant $K_1$ depends on the $H^1$ norms of $u$ and $w$.
Thus, Gr\"onwall's inequality implies that $u\in L^\infty(0,T;D(A))$.
Next, we show the $H^2$ boundedness of $w$.
We multiply the $w$ equation in system \eqref{Sys1} by $\Delta^2 w$, integrate by parts over $\mathbb{T}^3$, and obtain
\begin{align}
\frac{1}{2}\frac{d}{d t}\left(\Vert\Delta w\Vert_{L^2}^2\right)
+
\Vert\nabla\Delta w\Vert_{L^2}^2
&=
-
\int_{\mathbb{T}^3} (u\cdot\nabla)w\cdot \Delta^2 w\,dx
+
\int_{\mathbb{T}^3} (w\cdot\nabla)u\cdot \Delta^2 w\,dx \nonumber
\\&\quad
+
\int_{\mathbb{T}^3} (\nabla\times f)\cdot \Delta^2 w\,dx.
\label{Eq4}
\end{align}
We then estimate the three terms on the right side of \eqref{Eq4}.
After integration by parts,
we use Lemma~\ref{L1} and H\"older's inequality in order to bound the first integral by
\begin{align*}
&
\int_{\mathbb{T}^3} |\nabla u| |\nabla w| |\nabla\Delta w|\,dx
+
\int_{\mathbb{T}^3} |u| |\nabla\nabla w| |\nabla\Delta w|\,dx
\\&\quad
\leq
C\Vert\Delta u\Vert_{L^2} \Vert\nabla w\Vert_{L^2}^{1/2}\Vert\Delta w\Vert_{L^2}^{1/2} \Vert\nabla\Delta w\Vert_{L^2}
+
C\Vert\nabla u\Vert_{L^2}^{1/2} \Vert\Delta u\Vert_{L^2}^{1/2} \Vert\nabla\nabla w\Vert_{L^2} \Vert\nabla\Delta w\Vert_{L^2}
\\&\quad
\leq
\frac{C}{\sqrt{\lambda_1}}\Vert\Delta u\Vert_{L^2}^2 \Vert\Delta w\Vert_{L^2}^2
+
C\Vert\nabla u\Vert_{L^2} \Vert\Delta u\Vert_{L^2} \Vert\Delta w\Vert_{L^2}^2
+
\frac{1}{8} \Vert\nabla\Delta w\Vert_{L^2}^2,
\end{align*}
where we used \eqref{Agmon} and Young's inequality.
Similarly, by Lemma~\ref{L1}, the second term is bounded by
\begin{align*}
&
\int_{\mathbb{T}^3} |\nabla w| |\nabla u| |\nabla\Delta w|\,dx
+
\int_{\mathbb{T}^3} |w| |\nabla\nabla u| |\nabla\Delta w|\,dx
\\&\quad
\leq
C\Vert\Delta u\Vert_{L^2} \Vert\nabla w\Vert_{L^2}^{1/2} \Vert\Delta w\Vert_{L^2}^{1/2} \Vert\nabla\Delta w\Vert_{L^2}
+
C\Vert\nabla\nabla u\Vert_{L^2} \Vert\nabla w\Vert_{L^2}^{1/2} \Vert\Delta w\Vert_{L^2}^{1/2} \Vert\nabla\Delta w\Vert_{L^2}
\\&\quad
\leq
\frac{C}{\sqrt{\lambda_1}}\Vert\Delta u\Vert_{L^2}^2 \Vert\Delta w\Vert_{L^2}^2
+
\frac{1}{8} \Vert\nabla\Delta w\Vert_{L^2}^2,
\end{align*}
where we also used \eqref{Poincare} and \eqref{Agmon}.
The last integral is bounded by
\begin{align*}
\int_{\mathbb{T}^3}|\nabla(\nabla\times f)| |\nabla\Delta w|\,dx
&\leq
\Vert\nabla(\nabla\times f)\Vert_{L^2} \Vert\nabla\Delta w\Vert_{L^2}
\leq
C\Vert f\Vert_{H_{\text{curl}}^1}^2
+
\frac{1}{8}\Vert\nabla\Delta w\Vert_{L^2}^2,
\end{align*}
where we integrated by parts and used H\"older's inequality.
Summing up all the above estimates and denoting
$$\widetilde{X}(t) = \Vert\Delta w(t)\Vert_{L^2}^2 \text{\quad and \quad} \widetilde{Y}(t) = \Vert\nabla\Delta w(t)\Vert_{L^2}^2,$$
for $0\leq t\leq T$, we obtain
\begin{align*}
\frac{d}{d t}\widetilde{X}(t) + \widetilde{Y}(t)
\leq
K_2\widetilde{X}(t) + C\Vert f\Vert_{H_{\text{curl}}^1}^2,
\end{align*}
where the constant $K_2$ depends on $\Vert u\Vert_{H^2}$ and $\lambda_1$.
Thus, Gr\"onwall's inequality and the $H^2$ bound of $u$ imply that $w$ is also bounded in $H^2$.
\subsubsection{\em $H^3$ bounds}
\label{subsubsec4-3-2}
We start by testing \eqref{Sys1_u} with $\Delta^{2}u$, integrating by parts, and obtain
\begin{align}
&
\frac{1}{2}\frac{d}{d t}\left(\Vert \Delta u\Vert_{L^2}^2 + \alpha^2\Vert\nabla \Delta u\Vert_{L^2}^2\right)
+
\Vert\nabla \Delta u\Vert_{L^2}^2
=
\int_{\mathbb{T}^3} w\times u\cdot \Delta^{2}u\,dx
+
\int_{\mathbb{T}^3} f\cdot \Delta^{2}u\,dx.
\label{Eq5}
\end{align}
After integration by parts, the first term on the right side of \eqref{Eq5} is bounded by
\begin{align*}
\int_{\mathbb{T}^3}|\nabla w| |u| |\nabla \Delta u|\,dx
+
\int_{\mathbb{T}^3}|w| |\nabla u| |\nabla \Delta u|\,dx
&\leq
C\Vert u\Vert_{H^1}^{1/2} \Vert u\Vert_{H^2}^{1/2} \Vert\nabla w\Vert_{L^2} \Vert\nabla \Delta u\Vert_{L^2}
\\&\quad
+
C\Vert w\Vert_{H^1}^{1/2} \Vert w\Vert_{H^2}^{1/2} \Vert\nabla u\Vert_{L^2} \Vert\nabla \Delta u\Vert_{L^2}
\\&
\leq
C\Vert u\Vert_{H^1} \Vert u\Vert_{H^2} \Vert\nabla w\Vert_{L^2}^2
+
C\Vert w\Vert_{H^1} \Vert w\Vert_{H^2} \Vert\nabla u\Vert_{L^2}^2
\\&\quad
+
\frac{1}{4}\Vert\nabla \Delta u\Vert_{L^2}^2,
\end{align*}
where we applied \eqref{Agmon}.
By H\"older's inequality and integration by parts, we bound the second term by
\begin{align*}
\int_{\mathbb{T}^3}|\nabla f| |\nabla \Delta u|\,dx
&\leq
\Vert\nabla f\Vert_{L^2} \Vert\nabla \Delta u\Vert_{L^2}
\leq
C\Vert f\Vert_{H^1}^2
+
\frac{1}{4} \Vert\nabla \Delta u\Vert_{L^2}^2.
\end{align*}
Combining the above estimates, we have
\begin{equation*}
\frac{d}{dt}\left(\Vert \Delta u\Vert_{L^2}^2 + \alpha^2\Vert\nabla \Delta u\Vert_{L^2}^2\right)
\leq
K_3 + C\Vert f\Vert_{H^1}^2,
\end{equation*}
where the constant $K_3$ depends on the $H^2$ norms of $u$ and $w$.
Therefore, Gr\"onwall's inequality implies that $u\in L^{\infty}(0, T; H^3\cap V)$.
Next, we multiply \eqref{Sys1_w} by $\partial^{\beta}w$ after applying the operator $\partial^{\beta}$,
where $\beta$ is a multi-index with $|\beta|=3$, and obtain
\begin{align}
\frac{1}{2}\frac{d}{d t}\left(\Vert\partial^{\beta}w\Vert_{L^2}^2\right)
+
\Vert\nabla\partial^{\beta}w\Vert_{L^2}^2
&=
-
\sum_{0<\gamma\leq\beta}{\binom{\beta}{\gamma}}\int_{\mathbb{T}^3} (\partial^{\gamma}u\cdot\nabla)\partial^{\beta-\gamma}w\cdot\partial^{\beta}w\,dx \nonumber
\\&\quad
+
\sum_{0\leq\gamma\leq\beta}{\binom{\beta}{\gamma}}\int_{\mathbb{T}^3} (\partial^{\gamma}w\cdot\nabla)\partial^{\beta-\gamma}u\cdot\partial^{\beta}w\,dx \nonumber
\\&\quad
+
\int_{\mathbb{T}^3} \partial^{\beta}(\nabla\times f)\cdot \partial^{\beta}w\,dx,
\label{Eq6}
\end{align}
where we used $\nabla\cdot u = 0$; here $\gamma$ is also a multi-index, and $\gamma \leq \beta$ means that $\gamma_{i}\leq\beta_{i}$ for $i=1,2,3$ (in particular $|\gamma| \leq |\beta|$).
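For the reader's convenience, we record that the right side of \eqref{Eq6} results from the Leibniz rule
\begin{align*}
\partial^{\beta}\left((u\cdot\nabla)w\right)
=
\sum_{0\leq\gamma\leq\beta}{\binom{\beta}{\gamma}}(\partial^{\gamma}u\cdot\nabla)\partial^{\beta-\gamma}w,
\end{align*}
in which the $\gamma = 0$ contribution vanishes after testing against $\partial^{\beta}w$, since
\begin{align*}
\int_{\mathbb{T}^3} (u\cdot\nabla)\partial^{\beta}w\cdot\partial^{\beta}w\,dx
=
\frac{1}{2}\int_{\mathbb{T}^3} (u\cdot\nabla)|\partial^{\beta}w|^2\,dx
=
0
\end{align*}
by $\nabla\cdot u = 0$; this is why the first sum in \eqref{Eq6} runs over $0<\gamma\leq\beta$ only.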
Then, we estimate the first term on the right side of \eqref{Eq6} in the following three cases.
For $\gamma \leq \beta$ and $|\gamma|=1$, it is bounded by
\begin{align*}
\sum_{|\gamma|=1}{\binom{\beta}{\gamma}}\int_{\mathbb{T}^3} |\partial^{\gamma}u| |\nabla\partial^{\beta-\gamma}w| |\partial^{\beta}w|\,dx
&
\leq
C\Vert\partial^{\gamma}u\Vert_{L^{\infty}} \Vert\nabla\partial^{\beta-\gamma}w\Vert_{L^2} \Vert\partial^{\beta}w\Vert_{L^2}
\\&
\leq
C\Vert u\Vert_{H^3} \Vert w\Vert_{H^{|\beta|}}^2,
\end{align*}
where we used \eqref{Agmon}.
For $\gamma \leq \beta$ and $|\gamma|=2$, it is bounded by
\begin{align*}
\sum_{|\gamma|=2}{\binom{\beta}{\gamma}}\int_{\mathbb{T}^3} |\partial^{\gamma}u| |\nabla\partial^{\beta-\gamma}w| |\partial^{\beta}w|\,dx
&
\leq
C\Vert\partial^{\gamma}u\Vert_{L^{3}} \Vert\nabla\partial^{\beta-\gamma}w\Vert_{L^6} \Vert\partial^{\beta}w\Vert_{L^2}
\\&
\leq
C\Vert u\Vert_{H^2}^{1/2} \Vert u\Vert_{H^3}^{1/2} \Vert w\Vert_{H^{|\beta|}}^2.
\end{align*}
For $\gamma \leq \beta$ and $|\gamma|=3$,
it is bounded from above by
\begin{align*}
\sum_{|\gamma|=3}{\binom{\beta}{\gamma}}\int_{\mathbb{T}^3} |\partial^{\gamma}u| |\nabla\partial^{\beta-\gamma}w| |\partial^{\beta}w|\,dx
&
\leq
C\Vert\partial^{\gamma}u\Vert_{L^{2}} \Vert\nabla\partial^{\beta-\gamma}w\Vert_{L^3} \Vert\partial^{\beta}w\Vert_{L^6}
\\&
\leq
C\Vert u\Vert_{H^3} \Vert w\Vert_{H^1}^{1/2} \Vert w\Vert_{H^2}^{1/2} \Vert\nabla\partial^{\beta}w\Vert_{L^2}
\\&
\leq
C\Vert u\Vert_{H^3}^2 \Vert w\Vert_{H^1} \Vert w\Vert_{H^2}
+
\frac{1}{8}\Vert\nabla\partial^{\beta}w\Vert_{L^2}^2.
\end{align*}
The estimates for the second term on the right side of \eqref{Eq6} follow similarly in the following four cases.
For $\gamma \leq \beta$ and $|\gamma|=0$, we integrate by parts and bound it by
\begin{align*}
\sum_{|\gamma|=0}{\binom{\beta}{\gamma}}\int_{\mathbb{T}^3} |w| |\partial^{\beta-\gamma}u| |\nabla\partial^{\beta}w|\,dx
&
\leq
C\Vert w\Vert_{L^{\infty}} \Vert\partial^{\beta-\gamma}u\Vert_{L^2} \Vert\nabla\partial^{\beta}w\Vert_{L^2}
\\&
\leq
C\Vert w\Vert_{H^2}^2 \Vert u\Vert_{H^3}^2
+
\frac{1}{8}\Vert\nabla\partial^{\beta}w\Vert_{L^2}^2,
\end{align*}
where we used $\nabla\cdot w=0$.
For $\gamma \leq \beta$ and $|\gamma|=1$, it is bounded by
\begin{align*}
\sum_{|\gamma|=1}{\binom{\beta}{\gamma}}\int_{\mathbb{T}^3} |\partial^{\gamma}w| |\nabla\partial^{\beta-\gamma}u| |\partial^{\beta}w|\,dx
&
\leq
C\Vert\partial^{\gamma}w\Vert_{L^{\infty}} \Vert\nabla\partial^{\beta-\gamma}u\Vert_{L^2} \Vert\partial^{\beta}w\Vert_{L^2}
\\&
\leq
C\Vert u\Vert_{H^3} \Vert w\Vert_{H^{|\beta|}}^2,
\end{align*}
where we applied \eqref{Agmon}.
For $\gamma \leq \beta$ and $|\gamma|=2$,
it is estimated in the same way for the first integral with $|\gamma|=2$, i.e., we have
\begin{align*}
\sum_{|\gamma|=2}{\binom{\beta}{\gamma}}\int_{\mathbb{T}^3} |\partial^{\gamma}w| |\nabla\partial^{\beta-\gamma}u| |\partial^{\beta}w|\,dx
&
\leq
C\Vert u\Vert_{H^2}^{1/2} \Vert u\Vert_{H^3}^{1/2} \Vert w\Vert_{H^{|\beta|}}^2.
\end{align*}
For $\gamma \leq \beta$ and $|\gamma|=3$, it is bounded by
\begin{align*}
\sum_{|\gamma|=3}{\binom{\beta}{\gamma}}\int_{\mathbb{T}^3} |\partial^{\gamma}w| |\nabla\partial^{\beta-\gamma}u| |\partial^{\beta}w|\,dx
&
\leq
C\Vert\nabla\partial^{\beta-\gamma} u\Vert_{L^{\infty}} \Vert\partial^{\gamma}w\Vert_{L^2} \Vert\partial^{\beta}w\Vert_{L^2}
\leq
C\Vert u\Vert_{H^3} \Vert w\Vert_{H^{|\beta|}}^2.
\end{align*}
As for the last term on the right side of \eqref{Eq6}, we integrate by parts once, moving one derivative onto $\partial^{\beta}w$, and estimate it, for a multi-index $\gamma$ with $|\gamma|=2$, as
\begin{align*}
\int_{\mathbb{T}^3} \partial^{\beta}(\nabla\times f)\cdot \partial^{\beta}w\,dx
&
\leq
\int_{\mathbb{T}^3}|\partial^{\gamma}(\nabla\times f)| |\nabla\partial^{\beta}w|\,dx
\leq
C\Vert\nabla\times f\Vert_{H^2} \Vert\nabla\partial^{\beta}w\Vert_{L^2}
\\&
\leq
C\Vert f\Vert_{H_{\text{curl}}^2}^2
+
\frac{1}{8}\Vert\nabla\partial^{\beta}w\Vert_{L^2}^2.
\end{align*}
Summing up all the above estimates, we have
\begin{equation*}
\frac{d}{dt}\Vert\partial^{\beta}w\Vert_{L^2}^2 + \Vert\nabla\partial^{\beta}w\Vert_{L^2}^2
\leq
K_4\Vert w\Vert_{H^3}^2 + K_5,
\end{equation*}
where the constants $K_4$ and $K_5$ depend on the $H^3$ norm of $u$ and the $H^2$ norm of $w$,
while $K_5$ also depends on the $H_{\text{curl}}^2$ norm of $f$.
Summing over all multi-indices $\beta$ with $|\beta|=3$, using the $H^2$ bound on $w$, and applying Gr\"onwall's inequality, we obtain $w\in L^{\infty}(0, T; H^3\cap V)\cap L^{2}(0, T; H^4\cap V)$.
Repeating similar arguments inductively, we obtain uniform $H^s$ bounds on $u$ and $w$ for all integers $s\geq 4$.
The proof of Theorem~\ref{T2} is thus complete.
\section{Proof of Convergence Results}
\label{sec5}
In this section, we prove our convergence results Theorem~\ref{T3} and Theorem~\ref{T4}.
Notice that from the proof of Theorem~\ref{T1}, we have $\nabla\cdot w(t) = 0$ for a.e. $t>0$ as long as $\nabla\cdot w_0= 0$.
\subsection{Convergence of $\nabla\times u$ to $w$ in $L^2$}
\label{subsec5-1}
{\smallskip\noindent {\em Proof of Theorem~\ref{T3}.}}
We start by applying the {\it{curl}} operator ``$\nabla\times$" to \eqref{Sys1_u} and after denoting by $\omega$ the vorticity $\nabla\times u$,
we obtain
\begin{align}
&
(I - \alpha^2\Delta)\omega_{t}
-
\Delta \omega
+
(u\cdot\nabla)w
-
(\nabla\cdot w)u
-
(w\cdot\nabla)u
=
\nabla\times f,
\label{Curl}
\end{align}
where we used $\nabla\cdot u = 0$ and the identity
$$\nabla\times({\bf{F}}\times{\bf{G}}) = \left( (\nabla\cdot{\bf{G}}) + {\bf{G}}\cdot\nabla \right){\bf{F}} - \left( (\nabla\cdot{\bf{F}}) + {\bf{F}}\cdot\nabla \right){\bf{G}}$$
for arbitrary smooth vector fields ${\bf{F}}$ and ${\bf{G}}$ in $\mathbb{R}^3$.
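For completeness, this identity can be verified componentwise using the Levi-Civita symbol: writing $[\nabla\times({\bf{F}}\times{\bf{G}})]_i = \varepsilon_{ijk}\partial_j(\varepsilon_{klm}F_lG_m)$ and using $\varepsilon_{ijk}\varepsilon_{klm} = \delta_{il}\delta_{jm}-\delta_{im}\delta_{jl}$, one finds
\begin{align*}
[\nabla\times({\bf{F}}\times{\bf{G}})]_i
=
\partial_j(F_iG_j) - \partial_j(F_jG_i)
=
F_i(\nabla\cdot{\bf{G}}) + ({\bf{G}}\cdot\nabla)F_i - G_i(\nabla\cdot{\bf{F}}) - ({\bf{F}}\cdot\nabla)G_i,
\end{align*}
which is the stated identity.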
Denoting by $\xi$ the difference $\omega - w$ and subtracting the $w$ equation of system \eqref{Sys1} from \eqref{Curl} leads to
\begin{align}
&
\xi_{t}
-
\alpha^2\Delta\omega_{t}
-
\Delta\xi
-
(\nabla\cdot w)u
=
0.
\label{Curl_diff}
\end{align}
Then, we use Theorem~\ref{T1} and rewrite \eqref{Curl_diff} as
\begin{align*}
&
\xi_{t}
-
\alpha^2\Delta\xi_{t}
-
\Delta\xi
=
\alpha^2\Delta w_{t}
+
(\nabla\cdot w)u,
\end{align*}
which we multiply by $\xi$ and integrate by parts over $\mathbb{T}^3$ to obtain
\begin{align}
&
\frac{1}{2}\frac{d}{d t}\left(\Vert\xi\Vert_{L^2}^2 + \alpha^2\Vert\nabla\xi\Vert_{L^2}^2\right)
+
\Vert\nabla\xi\Vert_{L^2}^2
=
\alpha^2\int_{\mathbb{T}^3} \Delta w_{t}\cdot\xi\,dx
+
\int_{\mathbb{T}^3} (\nabla\cdot w)u\cdot\xi\,dx.
\label{Eq7}
\end{align}
Note that the second term on the right side of \eqref{Eq7} vanishes due to Theorem \ref{T1} since $\nabla\cdot w(0) = 0$.
In order to estimate the first term on the right side of \eqref{Eq7},
we integrate by parts and use the equation of $w$ and obtain
\begin{align}
\alpha^2\int_{\mathbb{T}^3} \Delta w_{t}\cdot\xi\,dx
=
\alpha^2\int_{\mathbb{T}^3} w_{t}\cdot\Delta\xi\,dx
&=
\alpha^2\int_{\mathbb{T}^3}\Delta w\cdot\Delta\xi\,dx
-
\alpha^2\int_{\mathbb{T}^3}(u\cdot\nabla)w\cdot\Delta\xi\,dx \nonumber
\\&\quad\quad
+
\alpha^2\int_{\mathbb{T}^3}(w\cdot\nabla)u\cdot\Delta\xi\,dx
+
\alpha^2\int_{\mathbb{T}^3}\nabla\times f\cdot\Delta\xi\,dx.
\label{Eq8}
\end{align}
Then, we estimate the four integrals on the right side of \eqref{Eq8}.
After integration by parts, we bound the first integral by
\begin{align*}
\alpha^2\int_{\mathbb{T}^3} |\nabla\Delta w|\,|\nabla\xi|\,dx
&\leq
C\alpha^2\Vert\nabla\Delta w\Vert_{L^2} \Vert\nabla\xi\Vert_{L^2}
\leq
C\alpha^2\Vert w\Vert_{H^3}^2
+
\alpha^2\Vert\nabla\xi\Vert_{L^2}^2.
\end{align*}
Using Lemma~\ref{L1}, the second integral is bounded by
\begin{align*}
&
\alpha^2\int_{\mathbb{T}^3} |\nabla u|\,|\nabla w|\,|\nabla\xi|\,dx
+
\alpha^2\int_{\mathbb{T}^3} |u|\,|\Delta w|\,|\nabla\xi|\,dx
\\&\quad
\leq
C\alpha^2\Vert\Delta u\Vert_{L^2} \Vert w\Vert_{H^2} \Vert\nabla\xi\Vert_{L^2}
+
C\alpha^2\Vert\Delta u\Vert_{L^2} \Vert\Delta w\Vert_{L^2} \Vert\nabla\xi\Vert_{L^2}
\\&\quad
\leq
C\alpha^2\Vert u\Vert_{H^2}^2 \Vert w\Vert_{H^2}^2
+
\alpha^2\Vert\nabla\xi\Vert_{L^2}^2.
\end{align*}
The estimate for the third integral is similar, and we have
\begin{align*}
\alpha^2\int_{\mathbb{T}^3}(w\cdot\nabla)u\cdot\Delta\xi\,dx
&
\leq
C\alpha^2\Vert\Delta u\Vert_{L^2} \Vert w\Vert_{H^2} \Vert\nabla\xi\Vert_{L^2}
+
C\alpha^2\Vert\Delta u\Vert_{L^2} \Vert w\Vert_{H^2} \Vert\nabla\xi\Vert_{L^2}
\\&
\leq
C\alpha^2\Vert u\Vert_{H^2}^2 \Vert w\Vert_{H^2}^2
+
\alpha^2\Vert\nabla\xi\Vert_{L^2}^2.
\end{align*}
As for the last integral in \eqref{Eq8},
we integrate by parts and use H\"older's inequality, and bound it by
\begin{align*}
\alpha^2\int_{\mathbb{T}^3}|\Delta f|\,|\nabla\xi|\,dx
&
\leq
C\alpha^2\Vert\Delta f\Vert_{L^2} \Vert\nabla\xi\Vert_{L^2}
\leq
C\alpha^2\Vert f\Vert_{H^2}^2
+
\alpha^2\Vert\nabla\xi\Vert_{L^2}^2.
\end{align*}
Summing up all the above estimates, we obtain
\begin{align}
&
\frac{d}{d t}\left(\Vert\xi\Vert_{L^2}^2 + \alpha^2\Vert\nabla\xi\Vert_{L^2}^2\right)
\leq
C\left(\Vert\xi\Vert_{L^2}^2 + \alpha^2\Vert\nabla\xi\Vert_{L^2}^2\right)
+
\widetilde{K}\alpha^2,
\label{Eq9}
\end{align}
where the constant $\widetilde{K}$ depends on the $H^3$ norms of $u$ and $w$, as well as the $H^2$ norm of $f$.
By Lemma~\ref{L3} we have
\begin{align*}
&
\Vert\xi(t)\Vert_{L^2}^2 + \alpha^2\Vert\nabla\xi(t)\Vert_{L^2}^2
+
\int_0^t\Vert\nabla\xi(s)\Vert_{L^2}^2\,ds
\leq
\widetilde{K}\alpha^2(e^{Ct} - 1),
\end{align*}
where we used $\xi_0 = w_0 - \omega_0 = 0$.
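For completeness, we sketch the Gr\"onwall-type bound behind this step: if $y(t) := \Vert\xi(t)\Vert_{L^2}^2 + \alpha^2\Vert\nabla\xi(t)\Vert_{L^2}^2$ satisfies $y' \leq Cy + \widetilde{K}\alpha^2$ with $y(0)=0$, then
\begin{align*}
y(t) \leq \frac{\widetilde{K}\alpha^2}{C}\left(e^{Ct}-1\right),
\end{align*}
after which the factor $C^{-1}$ is absorbed into $\widetilde{K}$; the dissipation term $\int_0^t\Vert\nabla\xi(s)\Vert_{L^2}^2\,ds$ is retained by Lemma~\ref{L3} in the same way.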
Therefore,
$\Vert\xi\Vert_{L^\infty(0,T;H)}^2+
\Vert\xi\Vert_{L^2(0,T;V)}^2 \leq C\alpha^2 e^{CT} \to 0$ as $\alpha\to 0$.
The proof of Theorem~\ref{T3} is thus complete.
\subsection{Convergence of $\omega$ to $\widetilde{\omega}$ and $u$ to $\widetilde{u}$ in $L^2$}
\label{subsec5-2}
{\smallskip\noindent {\em Proof of Theorem~\ref{T4}.}}
Assume the hypotheses and the notation of Theorem \ref{T4}. Since $u_0\in H^4\cap V$, by Theorem \ref{thm_NSE_local_reg}, there exists a time $T>0$ and a unique strong solution $(\widetilde u,\widetilde p)$ to \eqref{NSE} satisfying $\widetilde u \in C([0,T];H^4\cap V)\cap L^2(0,T;H^{5}\cap V)$.
In view of Theorem~\ref{T3}, it suffices to show that $\Vert w - \widetilde{\omega}\Vert_{L^2} + \Vert u - \widetilde{u}\Vert_{L^2} = \mathcal{O}(\alpha)$.
We start by applying the curl operator ``$\nabla\times$" to \eqref{NSE_u} and obtain
\begin{align}
&
\frac{\partial \widetilde{\omega}}{\partial t}
-
\Delta \widetilde{\omega}
+
(\widetilde{u}\cdot\nabla)\widetilde{\omega}
-
(\widetilde{\omega}\cdot\nabla)\widetilde{u}
=
\nabla\times f.
\label{vorticity}
\end{align}
Then, by taking the difference of \eqref{Sys1_w} and \eqref{vorticity} and denoting by $\theta:=w - \widetilde{\omega}$, we have
\begin{align*}
&
\frac{\partial\theta}{\partial t}
-
\Delta\theta
+
(u\cdot\nabla)w
-
(\widetilde{u}\cdot\nabla)\widetilde{\omega}
-
(w\cdot\nabla)u
+
(\widetilde{\omega}\cdot\nabla)\widetilde{u}
=
0.
\end{align*}
Denoting $\zeta:=u-\widetilde{u}$, we rewrite the above as
\begin{align}
&
\frac{\partial\theta}{\partial t}
-
\Delta\theta
+
(u\cdot\nabla)\theta
+
(\zeta\cdot\nabla)\widetilde{\omega}
-
(\theta\cdot\nabla)u
-
(\widetilde{\omega}\cdot\nabla)\zeta
=
0,
\label{vor_diff}
\end{align}
with $\theta(\cdot, 0) = \theta_0 = 0$.
Next, by subtracting \eqref{NSE_vor} from \eqref{Sys1_u}, we obtain the following system for $\zeta = u-\widetilde{u}$:
\begin{subequations}
\begin{empheq}[left=\empheqlbrace]{align}
&
(I - \alpha^2\Delta)\frac{\partial \zeta}{\partial t}
-
\alpha^2\Delta\frac{\partial\widetilde{u}}{\partial t}
-
\Delta\zeta
+
w\times u - \widetilde{\omega}\times\widetilde{u}
+
\nabla\Pi
=
0, \label{u_diff}
\\&
\nabla\cdot\zeta = 0, \label{u_diff_div_free}
\\&
\zeta(\cdot, 0) = \zeta_0 = 0, \label{u_diff_ini}
\end{empheq}
\end{subequations}
where $\Pi = p - \widetilde{p} - \frac{1}{2}|\widetilde{u}|^2$.
We multiply \eqref{vor_diff} by $\theta$ and \eqref{u_diff} by $\zeta$, respectively,
integrate by parts over $\mathbb{T}^3$, and add, to obtain
\begin{align}
&
\frac{1}{2}\frac{d}{dt}\left(\Vert\theta\Vert_{L^2}^2 + \Vert\zeta\Vert_{L^2}^2 + \alpha^2\Vert\nabla\zeta\Vert_{L^2}^2\right)
+
\Vert\nabla\zeta\Vert_{L^2}^2
+
\Vert\nabla\theta\Vert_{L^2}^2 \nonumber
\\&
=
-\int_{\mathbb{T}^3}(\zeta\cdot\nabla)\widetilde{\omega}\cdot\theta\,dx
+
\int_{\mathbb{T}^3}(\theta\cdot\nabla)u\cdot\theta\,dx
+
\int_{\mathbb{T}^3}(\widetilde{\omega}\cdot\nabla)\zeta\cdot\theta\,dx \nonumber
\\&\quad
+
\alpha^2\int_{\mathbb{T}^3}\Delta\frac{\partial\widetilde{u}}{\partial t}\cdot\zeta\,dx
+
\int_{\mathbb{T}^3}(\theta\times \widetilde{u})\cdot\zeta\,dx,
\label{u_diff_energy}
\end{align}
where we used $w\times u - \widetilde{\omega}\times\widetilde{u} = \theta\times \widetilde{u} + w\times\zeta$,
$(w\times\zeta)\cdot\zeta = 0$, \eqref{embd2}, and \eqref{u_diff_div_free}.
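For clarity, the decomposition used above follows by adding and subtracting $w\times\widetilde{u}$:
\[
w\times u-\widetilde{\omega}\times\widetilde{u}
=(w-\widetilde{\omega})\times\widetilde{u}+w\times(u-\widetilde{u})
=\theta\times\widetilde{u}+w\times\zeta\ .
\]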
Next, we estimate the five integrals on the right side of \eqref{u_diff_energy}.
Using \eqref{B:326}, the first integral is bounded by
\begin{align*}
\Vert\nabla\widetilde{\omega}\Vert_{L^2} \Vert\zeta\Vert_{L^2}^{1/2} \Vert\nabla\zeta\Vert_{L^2}^{1/2} \Vert\nabla\theta\Vert_{L^2}
\leq
C\Vert\zeta\Vert_{L^2}^{2}
+
\frac{1}{4}\Vert\nabla\theta\Vert_{L^2}^2
+
\frac{1}{6}\Vert\nabla\zeta\Vert_{L^2}^2,
\end{align*}
where $\Vert\nabla\widetilde{\omega}\Vert_{L^\infty(0,T;H)}\leq C$.
The second integral can also be estimated using \eqref{B:623s}:
\begin{align*}
\int_{\mathbb{T}^3}(\theta\cdot\nabla)u\cdot\theta\,dx
&\leq
\Vert\theta\Vert_{L^2}^{2} \Vert u\Vert_{L^2}^{1/2} \Vert\nabla u\Vert_{L^2}^{1/2}
\leq
C\Vert \nabla u\Vert_{L^2}\Vert\theta\Vert_{L^2}^{2}.
\end{align*}
We note, as in Remark \ref{remark_L2V_bdd}, that $\Vert \nabla u\Vert_{L^2}$ might not be bounded independently of $\alpha$, but $\int_0^T\Vert \nabla u(t)\Vert_{L^2}\,dt\leq T^{1/2}\|u\|_{L^2(0,T;V)}$ is bounded independently of $\alpha\in(0,1]$.
The third integral can be estimated using \eqref{B:623}:
\begin{align*}
\int_{\mathbb{T}^3}(\widetilde{\omega}\cdot\nabla)\zeta\cdot\theta\,dx
&\leq
\Vert\nabla\widetilde{\omega}\Vert_{L^2} \Vert\theta\Vert_{L^2}^{1/2} \Vert\nabla\theta\Vert_{L^2}^{1/2} \Vert\nabla\zeta\Vert_{L^2}
\leq
C\Vert\theta\Vert_{L^2}^{2}
+
\frac{1}{4}\Vert\nabla\theta\Vert_{L^2}^2
+
\frac{1}{6}\Vert\nabla\zeta\Vert_{L^2}^2.
\end{align*}
By substituting ${\partial \widetilde{u}}/{\partial t}$ from \eqref{NSE_u} and integrating by parts,
we bound the fourth term on the right side of \eqref{u_diff_energy} by
\begin{align*}
&
\alpha^2\int_{\mathbb{T}^3}|\nabla\Delta\widetilde{u}||\nabla\zeta|\,dx
+
\alpha^2\int_{\mathbb{T}^3}|\nabla(\widetilde{\omega}\times\widetilde{u})||\nabla\zeta|\,dx
+
\alpha^2\int_{\mathbb{T}^3}|\nabla f||\nabla\zeta|\,dx
\\&
\leq
C\alpha^2\Vert\widetilde{u}\Vert_{H^3}^2
+
C\alpha^2\Vert\widetilde{u}\Vert_{H^1}^2\Vert\widetilde{u}\Vert_{H^2}\Vert\widetilde{u}\Vert_{H^3}
+
C\alpha^2\Vert\widetilde{u}\Vert_{H^1}\Vert\widetilde{u}\Vert_{H^2}^3
+
C\alpha^2\Vert f\Vert_{H^1}^2
+
\frac{1}{6}\Vert\nabla\zeta\Vert_{L^2}^2,
\end{align*}
where we used the hypothesis of the theorem that $\alpha\in(0,1]$.
As for the last integral, we use \eqref{Agmon} to bound it by
\begin{align*}
\Vert \widetilde{u}\Vert_{H^2} \Vert\theta\Vert_{L^2} \Vert\zeta\Vert_{L^2}
\leq
C\Vert\theta\Vert_{L^2}^2
+
C\Vert\zeta\Vert_{L^2}^2,
\end{align*}
where $\Vert \widetilde{u}\Vert_{H^2}\leq C$.
Combining all the above estimates and using \eqref{u_diff_ini}, we obtain
\begin{align*}
&\quad
\frac{d}{dt}\left(\Vert\theta(t)\Vert_{L^2}^2 + \Vert\zeta(t)\Vert_{L^2}^2 + \alpha^2\Vert\nabla\zeta(t)\Vert_{L^2}^2\right)
+ \Vert\nabla\zeta\Vert_{L^2}^2
+ \Vert\nabla\theta\Vert_{L^2}^2
\\&\leq
C\alpha^2
+
\widetilde{C}\Vert \nabla u\Vert_{L^2}\left(\Vert\theta(t)\Vert_{L^2}^2
+ \Vert\zeta(t)\Vert_{L^2}^2
+ \alpha^2\Vert\nabla\zeta(t)\Vert_{L^2}^2\right),
\end{align*}
where the constants $C$ and $\widetilde{C}$ depend on uniform-in-time bounds for $\Vert\widetilde{u}\Vert_{H^3}$, as well as on $\widetilde{K}$.
Using Gr\"onwall's inequality, the integrability of $\Vert \nabla u(t)\Vert_{L^2}$, and the fact that $\|u\|_{L^2(0,T;V)}$ is bounded independently of $\alpha\in(0,1]$, we obtain that $\Vert\theta(t)\Vert_{L^2}^2 + \Vert\zeta(t)\Vert_{L^2}^2
+ \int_0^t\Vert\nabla\zeta(s)\Vert_{L^2}^2\,ds
+ \int_0^t\Vert\nabla\theta(s)\Vert_{L^2}^2\,ds
\leq
C\alpha^2T$.
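In more detail, setting $E(t):=\Vert\theta(t)\Vert_{L^2}^2 + \Vert\zeta(t)\Vert_{L^2}^2 + \alpha^2\Vert\nabla\zeta(t)\Vert_{L^2}^2$, the differential inequality above reads $E' \leq C\alpha^2 + \widetilde{C}\Vert\nabla u(t)\Vert_{L^2}\,E$, with $E(0)=0$ by \eqref{u_diff_ini} and $\theta_0 = 0$, so Gr\"onwall's inequality gives
\[
E(t) \leq C\alpha^2\, t\, \exp\left(\widetilde{C}\int_0^t \Vert\nabla u(s)\Vert_{L^2}\,ds\right)
\leq C\alpha^2\, T\, \exp\left(\widetilde{C}\, T^{1/2}\Vert u\Vert_{L^2(0,T;V)}\right)\ ;
\]
the exponential factor is absorbed into the constant, and the dissipation terms are recovered by integrating the differential inequality in time.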
Thus, the proof of Theorem~\ref{T4} is now complete.
\subsection{Proof of Corollary~\ref{blow_up}}
Assume the hypotheses. From Theorem \ref{T4}, $\Vert u - \widetilde{u}\Vert_{L^\infty(0,T;H)} \to 0$ and $\Vert u - \widetilde{u}\Vert_{L^2(0,T;V)} \to 0$ as $\alpha \to 0$. Thus, passing to the limit as $\alpha\to0$ in the energy equality \eqref{energy_equality_u} yields
\begin{align}\label{blow_up_lim}
\limsup_{\alpha\rightarrow 0}\alpha^2\|\nabla u(t)\|_{L^2}^2
+ \|\widetilde u(t)\|_{L^2}^2
+2\int_0^t\|\nabla \widetilde u(s)\|_{L^2}^2\,ds
=
\|u_0\|_{L^2}^2+
2\int_0^t( \widetilde u(s),f(s))\,ds.
\end{align}
However, if $\widetilde u$ is a strong solution to the Navier-Stokes equation on $[0,T]$, the energy identity
\begin{align}\label{energy_equality_NSE}
\|\widetilde u(t)\|_{L^2}^2
+2\int_0^t\|\nabla \widetilde u(s)\|_{L^2}^2\,ds
=
\|u_0\|_{L^2}^2
+2\int_0^t( \widetilde u(s),f(s))\,ds,
\end{align}
holds (see, e.g., \cite{Constantin_Foias_1988, Foias_Manley_Rosa_Temam_2001}). Thus, \eqref{blow_up_lim} together with \eqref{energy_equality_NSE} contradict \eqref{blowup_criterion}.
\section*{Acknowledgments}
\noindent Author A.L. is partially supported by NSF grant DMS-1716801.
\noindent Author L.R. is partially supported by NSF grant DMS-1522191.
\bibliographystyle{abbrv}
\section{Introduction}\label{intro}
Modelling biological movement has received significant attention, with a large body of
work devoted to deriving macroscopic (PDE) equations for the mean
behaviour of some underlying microscopic movement model. A common description
assumes movement follows a velocity-jump random walk, an alternating sequence of runs (movement
with a fixed velocity) and reorientations (choosing a new velocity). When the movement
is subject to an external bias, such as a chemical attractant, a series of studies
dating to Patlak \cite{patlak1953random} has generated a solid understanding of how
microscopic detail translates into a diffusion-advection
type equation. \cite{bellomo2008modeling, othmer2013} Many derivations follow a fairly standard
set of assumptions on individual behaviour, such as negligible waiting times between jumps
and exponentially distributed runtimes (i.e.\ reorientations occur as a Poisson process), as observed in classic
studies on cells such as {\em E. coli}. \cite{berg1972chemotaxis}
Under these assumptions, the macroscopic diffusion is of classic Fickian form.
Yet these assumptions do not apply universally, such as when
searching for sparsely distributed targets. Recent years have witnessed
reports on the tendency towards long-range diffusion, where a particle's motion
follows the characteristics of a L{\'{e}}vy flight: occasional non-localised flights
that interrupt local movements. Intuitively, the probability of remaining stuck in
non-productive regions decreases and the mean time taken to find rare targets is reduced.
Non-Brownian search strategies have been reported for microorganisms, including
\textit{E.~coli} \cite{korobkova2004molecular} and {\em Dictyostelium}, \cite{li2008persistent}
immune cells, \cite{harris} and large organisms (e.g. mussels, \cite{de2011levy} marine predators \cite{humphries2010environmental,sims2008scaling} and monkeys \cite{ramos2004levy}). Such natural strategies have also been adopted in robotics. \cite{KrivonosovDZ16}
Motivated by the movements of immune cells in chronically infected brain tissue, \cite{harris}
here we derive the macroscopic model for a microscopic velocity-jump random walk
(Section \ref{sec: micro}) in which both the runtime distance and waiting time between
re-orientations follow long-tail (approximate L\'{e}vy) distributions.
The delay is the key new ingredient from a modelling perspective, observed in experiments.
\cite{harris, miller} We derive the appropriate kinetic-transport equation, where the
\enquote{collision} term describes the nonlocal motion. Solving an equation for the
resting population introduces a nonlocal delay in time for the moving population and, via
a perturbation argument and appropriate space/time scaling, we obtain the following
nonlocal equation for the population density ($u_\textnormal{tot}$):
\begin{equation}
{}_{t}^C\mathds{D}^\kappa u_\textnormal{tot}=\nabla\cdot\left(C_{\alpha,\kappa}\nabla^{\alpha-1}u_\textnormal{tot} \right)\label{eq: final in intro}\ .
\end{equation}
In the above, ${}_{t}^C\mathds{D}^\kappa$ is the fractional time derivative in the sense of Caputo,
$\kappa \in (0,1)$, while $\nabla^{\alpha-1}$ denotes a fractional gradient for $\alpha\in(1,2)$, \textcolor{black}{see the definition in \ref{def: fractional}}. The physically relevant regime is
$\frac{\alpha}{\kappa} \in [1,2]$: this ranges from ballistic motion for $\alpha=\kappa=1$, with
a resulting fractional heat equation governed by a L\'{e}vy process, to standard diffusion
for $\alpha=2$, $\kappa=1$. The population is governed by a diffusion term with
coefficient $C_{\alpha,\kappa}$ (defined at the end of Section \ref{section:finalequation}) that
represents a random component to motility. As described in greater detail below,
experimental data on immune cell movements lead to $\alpha = 1.15$ and $\kappa = 0.7$. \cite{harris}
While our approach is often applied in the context of chemotaxis, it is noted that
\eqref{eq: final in intro} does not contain a chemotactic component; this is in agreement
with Ref.~\refcite{harris} where the
immune cells do not appear to exhibit directional migration on the experimental time/length scales.
The simple structure of Eq.~\eqref{eq: final in intro} allows analytic insights
not directly visible from the microscopic model. In particular, in Section
\ref{sec: fundamental solution} we explicitly write down the fundamental solution
in $\mathbb{R}^d$ and, as direct applications, we discuss hitting and mean first passage
times. Numerical experiments are presented in Section \ref{numerics}, allowing
efficient quantitative description and a basis for parametric studies into immune cell
search strategies.
\section{Background and data}\label{bio}
{\em Toxoplasma gondii} ({\em T. gondii}) is a species-crossing parasitic
pathogen \cite{blanchard2015persistence} with high seroprevalence in humans. Acute infection
is followed by chronic infection, with the parasite taking up
lifetime residence in the host’s central nervous system (CNS). While regarded
as generally symptomless, infected individuals with compromised immune systems are at
greater risk of life-threatening recurrence and chronic infection has also been
linked to altered neurological behaviour. \cite{parlog2015toxoplasma} Long term
immunity and control of chronic {\em T. gondii} infection primarily relies on
CD8$^+$ T cells, \cite{hwang2015cd8} which continuously search for and eliminate
infected cells through contact. A recent study of CD8$^+$ dynamics in infected brain
tissue has revealed a number of insights into their chemical control and movement
patterns. \cite{harris} At a chemical level, the CXCL10/CXCR3 chemokine signalling
system controls both the initial recruitment and subsequent maintenance of a
CD8$^+$ population \cite{harris}: anti-CXCL10 treatments lower the resident
population of T cells and increase parasite densities. Further, CXCL10 appears to act
as a chemokinetic agent during the chronic phase, with anti-CXCL10 treatment reducing
average cell velocities. \cite{harris}
\begin{figure}[ht]
\centering
\subfloat[\label{fig: Levy walk2}]{\includegraphics[width = 0.52\textwidth]{extractdata_a.eps}}
\subfloat[\label{fig: Mittag-Levy2}]{\includegraphics[width=0.52\textwidth]{extractdata_b.eps}}
\caption{Reproduction of CD8$^+$ T cell tracking data in \cite{harris}, indicating generalized
L\'{e}vy diffusive behaviour in the central nervous system. (a) Mean squared distance for CD8$^+$ T
cells in control tissue (blue) and two treatments that impact on chemokine signalling (mice
treated with anti-CXCL10 antibodies, red, and mice treated with PTX, a chemokine
signalling inhibitor, black). (b) Spatial scaling factor of the self-similar diffusion.}
\label{fig: comparison2}
\end{figure}
Analyses of CD8$^+$ T cell tracks in Ref.~\refcite{harris} suggest that they
follow a generalised L{\'{e}}vy walk. We reproduce the mean squared distances showing
superdiffusive behaviour ($\langle x^2\rangle \sim t^{1.4}$) in Figure \ref{fig: Levy walk2}.
Yet, dependence of the spatial scaling on time (Figure \ref{fig: Mittag-Levy2}) is inconsistent
with a L\'{e}vy walk in the absence of waiting times. In Ref.~\refcite{harris} various models for
T cell migration are examined, including random walks, persistent random walks and
L\'{e}vy walks, with the conclusion that the experimental results are best described by
a generalized L\'{e}vy walk. The microscopic description is as follows: (1) cells make
straight runs with fixed velocity but random orientation, where the run distance
is chosen randomly from a L\'{e}vy distribution ($L_\mu(\ell)\sim\ell^{-\mu}$) with
exponent $\mu_{run}=2.15$; (2) following each run, cells pause for a time that is also
distributed according to a L\'{e}vy distribution with exponent $\mu_{pause}=1.7$.
Samples from the L\'{e}vy distributions for the distance $\ell$ and the time $\tau$ are drawn using the
following expression
\[
Z_{\mu}=\frac{\sin((\mu-1)X)}{(\cos X)^{1/(\mu-1)}}\left(\frac{\cos((2-\mu)X)}{Y} \right)^{(2-\mu)/(\mu-1)}
\]
where $X$ is a uniform random variable on the interval $[-\pi/2,\pi/2]$ and $Y=-\ln X'$, with $X'$ an independent uniform random variable on $(0,1)$.
For runs, once a distance $\ell$ is chosen, the walker moves in a randomly chosen
direction for a time $\ell/v$, where $v$ is the velocity of the walker. For pauses,
once a time $\tau$ is chosen, the walker remains stationary for that length of time.
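For concreteness, the sampling recipe above can be implemented directly. The following is a minimal Python sketch; the RNG seed, the sample sizes, and the convention of rejecting non-positive draws (so that run distances and pause times are positive) are our own illustrative choices, not specified in the text.

```python
import math
import random

def levy_sample(mu, rng):
    # One draw of Z_mu via the transformation quoted in the text:
    # X uniform on [-pi/2, pi/2], Y = -ln X' with X' uniform on (0, 1).
    X = rng.uniform(-math.pi / 2, math.pi / 2)
    Y = -math.log(1.0 - rng.random())  # 1 - random() is uniform on (0, 1]
    a = mu - 1.0
    return (math.sin(a * X) / math.cos(X) ** (1.0 / a)
            * (math.cos((2.0 - mu) * X) / Y) ** ((2.0 - mu) / a))

def positive_levy_sample(mu, rng):
    # Resample until the draw is positive (our convention), since run
    # distances and pause times must be positive.
    while True:
        z = levy_sample(mu, rng)
        if z > 0.0:
            return z

rng = random.Random(1)
runs = [positive_levy_sample(2.15, rng) for _ in range(1000)]   # mu_run
pauses = [positive_levy_sample(1.7, rng) for _ in range(1000)]  # mu_pause
# A run of length ell then lasts a time ell / v at walker speed v.
```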
While anti-CXCL10 treatments reduce CD8$^+$ T cell speed and/or increase pauses, other
migration statistics of the T cells remain the same: $\mu_{run}=2.15$ and $\mu_{pause}=1.7$,
as in the control case. Thus, CXCL10 appears to operate as a chemokinetic agent through
increasing the rate at which patrolling CD8$^+$ T cells encounter their sparsely distributed targets,
with CXCL10 (and other chemokines) shortening capture time through
faster movement speeds. \cite{harris}
\iffalse
\section{Description of the continuous time random walk with long jumps}
The modelling equation
\begin{align*}
\partial_t\bar{\sigma}+c\theta\cdot\nabla\bar{\sigma} & =T\int_0^t\mathcal{B}(\mathbf{x},t-s)\left(\int_0^s\varphi_r(s-r)\bar{\sigma}(\mathbf{x}-c\theta(t-s),r,\theta)dr\right)ds\nonumber\\ & -\int_0^t\mathcal{B}(\mathbf{x},t-s)\bar{\sigma}(\mathbf{x}-c\theta(t-s),s,\theta)ds.
\end{align*}
obtained in Section \ref{sec: modelling equations} take into account only the density of particles that are moving. The first term describes the density of particles arriving at a point $\mathbf{x}$ at time $t$, after being waiting for time $r$. The second term describes the particles that escape from that point and start a new run.
In this section we consider a one dimensional lattice where the particle can only jump left or right and after each jump it waits for a time $r$ \cite{angstmann2015generalized}. Then we derived a system of equation that take into account the total density of particles.
Let $\sigma(x_+,t)$ the density of particles moving right, $\sigma(x_-,t)$ the density of particles moving left and $\sigma_0(x,t)$ the particles that are at rest. Also $\varphi_r(r)$ is the probability density that a particle waits for a time $r$ and $\varphi(x_{+/-},t)$ gives the probability that a particle jumps to the right or to the left respectively.
The density of moving particles is then described by
\begin{align*}
\partial_t\tilde{\sigma}($\mathbf{x}$,t) & +c\partial_x\tilde{\sigma}($\mathbf{x}$,t)=\varphi_r(r)\int_0^t\varphi($\mathbf{x}$_+,t-\tau)\sigma(x_+,\tau)d\tau\\ & +\varphi_r(r)\int_0^t\varphi(x_-,t-\tau)\sigma(x_-,\tau)d\tau-\int_0^t\varphi(x,t-\tau)\sigma(x,\tau)d\tau\\ \sigma_0(x,t)& =\int_0^t\psi_r(t-r)\sigma(x,r)dr,
\end{align*}
where $\psi_r(t-r)$ is the survival probability of a particle not jumping from $x$ until time $t$, given that it arrived at $x$ at an earlier time $r$. Here $\tau$ is the run time.
The new density $\tilde{\sigma}$ is the density of particles that are moving, then we can write it in terms of a running survival probability as follows:
\[
\tilde{\sigma}(x,t)=\int_0^{t}\psi(x,t-\tau)\sigma(x,\tau)d\tau=(\psi\ast\sigma)(t).
\]
We can rewrite $\varphi(x,t-\tau)\sigma(x,\tau)$ in terms of $\mathcal{B}(x,t-\tau)\tilde{\sigma}(x,\tau)$, where the kernel $\mathcal{B}$ is the solution of
\[
\partial_t\psi=-\int_0^t\mathcal{B}(x,t-s)\psi(s)ds=-\left(\mathcal{B}\ast\psi \right)(t).
\]
Therefore, since $\varphi(x,t)=-\partial_t\psi(x,t)=(\mathcal{B}\ast\psi)(t)$ we finally have
\begin{align*}
\partial_t\tilde{\sigma}+c\partial_x\tilde{\sigma}& =\varphi_r(r)((\mathcal{B}\ast\psi)\ast\sigma)_+(t)+\varphi_r(r)((\mathcal{B}\ast\psi)\ast\sigma)_-(t)-((\mathcal{B}\ast\psi)\ast\sigma)(t)\\ &= \varphi_r(r)\int_0^{t}\mathcal{B}(x_+,t-\tau)\tilde{\sigma}(x_+,\tau)d\tau+\varphi_r(r)\int_0^{t}\mathcal{B}(x_-,t-\tau)\tilde{\sigma}(x_-,\tau)d\tau\\ & -\int_0^t\mathcal{B}(x,t-\tau)\sigma(x,\tau)d\tau.
\end{align*}
\fi
\section{Microscopic model description}\label{sec: micro}
We model a population of CD8$^{+}$ T cells moving in a medium in $\mathbb{R}^n$. It is noted
that for the experimental system of Ref.~\refcite{harris}, the resident T cell population
numbers somewhere between $300,000$ and $450,000$ across a volume of
$3.2-4.4\times 10^{11}\mu m^3$, motivating a continuum description for their
collective movement. Microscopically, we assume each individual performs a
generalized L\'{e}vy walk with the following properties:
\begin{enumerate}
\item The interactions between individuals are taken to be negligible. This assumption appears
reasonable, given the relatively low densities of T cells.
\item Starting at position $\mathbf{x}$ and time $t$, we assume an individual runs in
direction $\theta$ for some time $\tau$, called the \enquote{run time}. This run time is selected
from a distribution $\psi$.
\item During runs, individuals are assumed to move with constant forward speed $c$ and take a straight
line motion between reorientations.
\item Each time the individual stops, it waits for some time $r$ and then selects a new direction $\eta$ according to
a distribution $k(\mathbf{x},t,\mathbf{\theta};\mathbf{\eta})$ which depends only on $|\theta - \eta|$. The choice of new direction is taken to be independent
of chemical concentrations/gradients.
\item The reorientation time $r$ follows a L\'{e}vy distribution $\psi_r(r)$.
\end{enumerate}
Note that assumptions (3-4) derive from the experimental conditions of CD8$^+$ T cells in
Ref.~\refcite{harris}: while the speed $c$ is a function of CXCL10, other walk statistics are unaffected.
Without explicit data stating otherwise CXCL10 is assumed here to be (approximately) uniformly
distributed at the spatial scale of observed tissue, and hence $c$ is taken as spatially constant. Investigations into the impact of anti-CXCL10 treatments can be recreated through changing the
size of $c$.
\subsection{Turn angle distribution}
To describe the motion of T cells we assume, following Ref.~\refcite{harris}, that the new
direction is chosen independently of the target's position. Thus, we take
\begin{equation}
k(\mathbf{x},t,\mathbf{\theta};\mathbf{\eta})=\ell(\mathbf{x},t,|\eta-\theta|)\label{eq: turn angle distribution}
\end{equation}
where the new direction $\eta$ is symmetrically distributed with respect to the previous
direction $\theta$, according to the symmetric distribution $\ell$. \cite{alt1980biased}
$|\eta-\theta|$ denotes the distance between two directions on the unit sphere $S$.
More generally, immune cells can orient in response to environmental factors, such as
attractant gradients or the structure of the extracellular matrix. In the
absence of data suggesting that such guidance cues play any (significant) role in
the behaviour observed in Ref.~\refcite{harris}, we presently exclude this possibility.
\subsection{Running probability and resting times}
As described in Ref.~\refcite{harris}, the motion of CD$8^+$ T cells is characterized by
long runs, distributed according to a L\'{e}vy distribution, combined with resting
times $r$. Within our microscopic description, we therefore assume the following
power-law distribution for the running probability
\begin{equation}
\psi(\mathbf{x},\tau)=\left(\frac{\tau_0(\mathbf{x})}{\tau_0(\mathbf{x})+\tau}\right)^\alpha\ , \ \textnormal{for}\ 1<\alpha<2 \label{eq: running probability}\ ,
\end{equation}
while resting times are distributed according to
\begin{equation}
\psi_r(r)=\left(\frac{r_0}{r_0+r} \right)^\kappa \ \textnormal{for}\ 0<\kappa<1\ .\label{eq: resting probability}
\end{equation}
$\psi$ describes the probability that a moving cell stops after time $\tau$. The
resting time distribution, $\psi_r$, gives the probability that a cell does not
move for a time $r$.
\iffalse
The resting times distribution comes from the so called Mittag-Leffler distribution \cite{klafter2011first, metzler2000random} given by
\begin{equation}
E_\kappa(z)=\sum_{m=0}^{\infty}\frac{z^m}{\Gamma(\kappa m+1)}.
\end{equation}
In order to describe the resting phases performed by the cells, we specifically consider \cite{klafter2011first,falconer2014subdiffusive}
\[
E_\kappa\left(-\left(\frac{r_0+r}{r_0} \right)^\kappa\right)\simeq\begin{cases}
\exp\left\{ \frac{1}{\Gamma(1-\kappa)}\left(\frac{r_0+r}{r_0} \right)^\kappa\right\}, & \ \textnormal{for}\ r\ll\Gamma(1+\kappa)^{1/\kappa}r_0,\\
\frac{1}{\Gamma(1-\kappa)}\left(\frac{r_0+r}{r_0} \right)^{-\kappa}, & \ \textnormal{for}\ r\rightarrow\infty.
\end{cases}
\]
\fi
The running and waiting probabilities, $\psi$ and $\psi_r$, are related to the stopping and waiting frequencies, $\beta$ and $\beta_r$, via
\begin{align}
\psi(\mathbf{x},\tau) & =\exp\left(-\int_0^{\tau}\beta(\mathbf{x}+cs\theta,s)\,ds\right)\quad\textnormal{and}\\ \psi_r(r) & =\exp\left(-\int_0^{r}\beta_r(s)\,ds \right)\ .
\end{align}
Moreover, explicit expressions for both rates, $\beta(\mathbf{x},\tau)$ and $\beta_r(r)$, can be computed from the relations:
\begin{align}
\beta(\mathbf{x},\tau)& =\frac{\varphi(\mathbf{x},\tau)}{\psi(\mathbf{x},\tau)}=\frac{-\partial_\tau\psi}{\psi}=\frac{\alpha}{\tau_0+\tau}\ ,\label{eq: beta}\\ \beta_r(r)& =\frac{\phi(r)}{\psi_r(r)}=\frac{-\partial_r\psi_r}{\psi_r}=\frac{\kappa}{r_0+r}\label{eq: beta_r}\ .
\end{align}
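As a sanity check, the relations \eqref{eq: beta} and \eqref{eq: beta_r} can be verified numerically. The sketch below uses the illustrative values $\tau_0 = r_0 = 1$ together with the exponents $\alpha=1.15$ and $\kappa=0.7$ fitted in Ref.~\refcite{harris}; the choice of test points is arbitrary.

```python
import math

def psi(tau, tau0=1.0, alpha=1.15):
    # Running probability: psi = (tau0 / (tau0 + tau))**alpha.
    return (tau0 / (tau0 + tau)) ** alpha

def psi_r(r, r0=1.0, kappa=0.7):
    # Resting-time probability: psi_r = (r0 / (r0 + r))**kappa.
    return (r0 / (r0 + r)) ** kappa

def rate(f, x, h=1e-6):
    # Stopping/waiting rate -f'(x)/f(x), via a central difference.
    return -(f(x + h) - f(x - h)) / (2.0 * h) / f(x)

for t in (0.1, 1.0, 10.0):
    # beta = alpha / (tau0 + tau) and beta_r = kappa / (r0 + r)
    assert abs(rate(psi, t) - 1.15 / (1.0 + t)) < 1e-6
    assert abs(rate(psi_r, t) - 0.7 / (1.0 + t)) < 1e-6
    # psi equals the exponential of minus the integrated rate:
    # int_0^tau beta ds = alpha * log((tau0 + tau) / tau0)
    assert abs(psi(t) - math.exp(-1.15 * math.log(1.0 + t))) < 1e-12
```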
\section{Modelling equations}\label{sec: modelling equations}
Considering the assumptions in Section \ref{sec: micro} and following the approach of Ref.~\refcite{alt1980biased},
densities of moving $\sigma(\mathbf{x},t,\theta,\tau)$ and resting $\sigma_0(\mathbf{x},t,\theta,\tau)$ populations
are described by the following system of equations:
\begin{align}
(\partial_\tau+\partial_t+c\theta\cdot\nabla)\sigma(\cdot,\theta,\tau)&=-\beta(\mathbf{x},\tau)\sigma(\cdot,\theta,\tau) \ , \label{eq: kinetic}\\
(\partial_t-\partial_\tau)\sigma_0(\cdot, \theta,\tau)& =T \beta(\mathbf{x},\tau)\sigma(\cdot,\theta,\tau)\ , \label{sigma0equation} \\ \sigma(\cdot,\theta,0)&= \sigma_0(\cdot,\theta,0)\ , \label{sigma0sigma}
\end{align}
where the dot denotes dependence in space, $\mathbf{x}$, and time $t$. Here the turn angle operator $T$, given by
\begin{equation}
T\phi(\eta)=\int_S k(\cdot,\mathbf{\theta};\mathbf{\eta})\phi(\theta)d\theta \label{eq: turn angle operator}\ ,
\end{equation} describes the effect of changing from direction $\mathbf{\theta}$ to a new direction $\mathbf{\eta}$. The initial condition for the particles that start a new run at $\tau=0$ is given by
\begin{equation}
\sigma(\cdot,\eta,0)=\int_0^t\phi(r)\int_0^{t-r}d\tau\int_S\beta(\mathbf{x},\tau)\sigma(\mathbf{x},t-r,\theta,\tau)k(\cdot,\theta;\eta)d\theta dr\ .\label{eq: new run}
\end{equation}
The left hand side of equation \eqref{eq: kinetic} describes the temporal variation and transport of the density $\sigma(\cdot,\theta,\tau)$, while the right hand side gives the density of individuals that are left behind due to reorientation. These particles reappear in the resting mode described by \eqref{sigma0equation}, where stopping with frequency $\beta({\mathbf{x},\tau})$ eventually generates a new run ($\tau=0$) following a pause of some time $r$, with a probability given by the probability density function $\phi(r)$. This is described by equations \eqref{sigma0sigma} and \eqref{eq: new run}.
Using the method of characteristics we find the solution of (\ref{eq: kinetic}),
\begin{equation}
\sigma(\cdot,\theta,\tau)=\sigma(\mathbf{x}-c\theta\tau,t-\tau,\theta,0)\exp\left(-\int_0^\tau\beta(\mathbf{x}+cs\theta,s)ds \right).\label{eq: solution}
\end{equation}
We can rewrite expression (\ref{eq: new run}) as
\begin{equation}
\sigma(\cdot,\eta,0)=\int_Sk(\cdot,\theta;\eta)\left[\int_0^td\tau \int_0^{t-\tau}\phi(r)\beta(\mathbf{x},\tau)\sigma(\mathbf{x},t-r,\theta,\tau)dr\right]d\theta\ , \label{eq: initial run}
\end{equation}
after changing the limits of integration. Then, integrating (\ref{eq: kinetic}) and \eqref{sigma0equation} with respect to $\tau$ and substituting \eqref{sigma0sigma} and \eqref{eq: initial run}, we obtain
\begin{align}
\partial_t\bar{\sigma}+c\theta\cdot\nabla\bar{\sigma} & =T\int_0^t\beta({\mathbf{x},\tau})\left(\int_0^{t-\tau}\phi(r)\sigma(\mathbf{x},t-r,\theta,\tau)dr\right)d\tau\nonumber\\ & -\int_0^t\beta(\mathbf{x},\tau)\sigma(\mathbf{x},t,\theta,\tau)d\tau\label{eq: tau independent}\ ,\\
\partial_t\bar{\sigma}_0 &= T \int_0^t\beta(\mathbf{x},\tau)\sigma(\mathbf{x},t,\theta,\tau)d\tau\nonumber \\ & -T\int_0^t\beta({\mathbf{x},\tau})\left(\int_0^{t-\tau}\phi(r)\sigma(\mathbf{x},t-r,\theta,\tau)dr\right)d\tau\ . \label{eq: tau independent2}
\end{align}
Here $\bar{\sigma}$ and $\bar{\sigma}_0$ are defined as
\begin{equation}
\bar{\sigma}(\cdot,\theta)=\int_0^t\sigma(\cdot,\theta,\tau)d\tau,\ \ \bar{\sigma}_0(\cdot,\theta)=\int_0^t\sigma_0(\cdot,\theta,\tau)d\tau\ .\label{eq: sigma bar}
\end{equation}
From (\ref{eq: tau independent}) and (\ref{eq: tau independent2}) we can define the arrival rate of particles at a point $(\mathbf{x},t)$, after waiting for time $r$, as
\[
j(\cdot,\theta)=\int_0^t\beta(\mathbf{x},\tau)\left(\int_0^{t-\tau}\phi(r)\sigma(\mathbf{x},t-r,\theta,\tau)dr\right)d\tau
\]
and the density of cells leaving the point $\mathbf{x}$ for all times $\tau$ from $0$ to $t$, also called the escape rate, as
\begin{equation}
i(\cdot,\theta)=\int_0^t\beta(\mathbf{x},\tau)\sigma(\mathbf{x},t,\theta,\tau)d\tau\ . \label{eq: escape rate i}
\end{equation}
Using (\ref{eq: solution}) and the relations in (\ref{eq: beta}), we can write
\[
i(\cdot,\theta)=\int_0^t\mathcal{B}(\mathbf{x},t-s)\bar{\sigma}(\mathbf{x}-c\theta(t-s),s,\theta)ds\ ,
\]
as derived in Ref.~\refcite{pks}, where $\mathcal{B}$ is given in Laplace space by
\begin{equation}
\hat{\mathcal{B}}(\mathbf{x},\lambda+ c\theta\cdot\nabla)=\frac{\hat{\varphi}(\mathbf{x},\lambda+c\theta\cdot\nabla)}{\hat{\psi}(\mathbf{x},\lambda+c\theta\cdot\nabla)}+\textnormal{l.o.t.}\label{eq:kernel}
\end{equation}
To rewrite $j(\cdot,\theta)$ in terms of $\bar{\sigma}$ we use (\ref{eq: solution}) again and
let $s=t-\tau$. Hence,
\begin{align*}
j(\cdot,\theta) & =\int_0^t\beta(\mathbf{x},\tau)\left(\int_0^{t-\tau}\phi(r)\sigma(\mathbf{x}-c\theta\tau,t-\tau-r,\theta,0)\psi(\mathbf{x},\tau)dr\right)d\tau\\ & =\int_0^t\beta(\mathbf{x},t-s) \psi(\mathbf{x},t-s)e^{-(t-s)c\theta\cdot\nabla}\left(\int_0^s\phi(s-r)\sigma(\mathbf{x},r,\theta,0)dr\right)ds\\ & =\int_0^t\varphi(\mathbf{x},t-s)e^{-(t-s)c\theta\cdot\nabla}\left(\phi(s)\ast\sigma(\mathbf{x},s,\theta,0) \right)ds\ .
\end{align*}
Note that $j(\cdot,\theta)$ is still written in terms of $\sigma$ instead of $\bar{\sigma}$. So, taking the Laplace transform of $j(\cdot,\theta)$ and using the relation,
\[
\hat{\bar{\sigma}}(\mathbf{x},\lambda,\theta)=\hat{\sigma}(\mathbf{x},\lambda,\theta,0)\hat{\psi}(\mathbf{x},\lambda+c\theta\cdot\nabla)
\]
which was obtained from (\ref{eq: sigma bar}) and (\ref{eq: solution}) (see Ref.~\refcite{pks} for details), we can rewrite it as
\begin{align*}
\hat{j}(\mathbf{x},\lambda,\theta)=\hat{\varphi}(\mathbf{x},\lambda+c\theta\cdot\nabla)\hat{\phi}(\lambda)\hat{\sigma}(\mathbf{x},\lambda,\theta,0)=\frac{\hat{\varphi}(\mathbf{x},\lambda+c\theta\cdot\nabla)}{\hat{\psi}(\mathbf{x},\lambda+c\theta\cdot\nabla)}\hat{\phi}(\lambda)\hat{\bar{\sigma}}(\mathbf{x},\lambda,\theta)\ .
\end{align*}
Equations (\ref{eq: tau independent}) and (\ref{eq: tau independent2}) now can be written as
\begin{align}
\partial_t\bar{\sigma}+c\theta\cdot\nabla\bar{\sigma} & =T\int_0^t\mathcal{B}(\mathbf{x},t-s)\left(\int_0^s\phi(s-r)\bar{\sigma}(\mathbf{x}-c\theta(t-s),r,\theta)dr\right)ds\nonumber\\ & -\int_0^t\mathcal{B}(\mathbf{x},t-s)\bar{\sigma}(\mathbf{x}-c\theta(t-s),s,\theta)ds\ ,\label{eq: important}\\ \partial_t\bar{\sigma}_0 & = T\int_0^t\mathcal{B}(\mathbf{x},t-s)\bar{\sigma}(\mathbf{x}-c\theta(t-s),s,\theta)ds\nonumber\\ & -T\int_0^t\mathcal{B}(\mathbf{x},t-s)\left(\int_0^s\phi(s-r)\bar{\sigma}(\mathbf{x}-c\theta(t-s),r,\theta)dr \right)ds\ .\label{eq: important 2}
\end{align}
\subsection{Scaling}\label{subsec: scaling}
Consider macroscopic space and time scales $\textsf{X}$ and $\textsf{T}$ respectively.
We assume that the mean run time $\bar{\tau}$ and the mean waiting time $\bar{r}$ are small compared to the macroscopic time, i.e. $\bar{\tau}/\textsf{T}$ and $\bar{r}/\textsf{T}$ scale like positive powers of a small parameter $\varepsilon\ll 1$. We scale as follows,
\begin{equation}
t_n= \varepsilon t\ ,\ \mathbf{x}_n=\frac{\varepsilon \mathbf{x}}{s}\ ,\ c_n=\varepsilon^{-\gamma} c_0\ , \ r_n=\varepsilon^\varrho r\ ,\ \textnormal{and}\ \tau_n=\tau\varepsilon^{\mu}\ ,\label{eq: scaling}
\end{equation}
for $\mu>0$, $\gamma>0$ and $\varrho>0$. \textcolor{black}{The scaling here is of parabolic type. It corresponds to a limit of the physical system with small average waiting and run times, small spatial run lengths, and large velocities compared to the macroscopic scales of an experiment. The values of the parameters $\gamma, \varrho, \mu$ are specified in Section \ref{section:finalequation}.}
Introducing this scaling we have,
\[
\psi_\varepsilon(\mathbf{x},\tau)=\left(\frac{\varepsilon^\mu\tau_0}{\varepsilon^\mu\tau_0+\tau} \right)^\alpha,\ \varphi_\varepsilon(\mathbf{x},\tau)=\frac{\alpha\left(\varepsilon^\mu\tau_0 \right)^\alpha}{\left(\varepsilon^\mu\tau_0+\tau\right)^{\alpha+1}}
\]
and
\[\phi_{\varepsilon}(r)=\frac{\kappa\left(\varepsilon^\varrho r_0 \right)^\kappa}{(\varepsilon^\varrho r_0+r)^{\kappa+1}}\ .
\]
Moreover, (\ref{eq: important}) is given by
\begin{align}
\varepsilon\partial_t\bar{\sigma}+\varepsilon^{1-\gamma}c_0\theta\cdot\nabla\bar{\sigma}& =T\int_0^t\mathcal{B}(\mathbf{x},t-s)\left(\int_0^s\phi_{\varepsilon}(s-r)\bar{\sigma}({\mathbf{x}-c\theta(t-s),r,\theta})dr\right)ds\nonumber\\ &-\int_0^t\mathcal{B}(\mathbf{x},t-s)\bar{\sigma}(\mathbf{x}-c\theta(t-s),s,\theta)ds\label{eq: important scaled}\ .
\end{align}
Computing the Laplace transform of the above expression we obtain
\begin{align}
\left(\varepsilon\lambda+\varepsilon^{1-\gamma}c_0\theta\cdot\nabla \right)\hat{\bar{\sigma}}&(\mathbf{x},\lambda,\theta) -\varepsilon\bar{\sigma}^0(\mathbf{x},\theta)\nonumber\\ & \simeq -\left(\mathds{1}-\hat{\phi}_{\varepsilon}\left(\varepsilon\lambda \right)T \right)\hat{\mathcal{B}}_\varepsilon\left(\mathbf{x},\varepsilon^{1-\gamma}c_0\theta\cdot\nabla \right)\hat{\bar{\sigma}}(\mathbf{x},\lambda,\theta)\ ,\label{eq: laplace space}
\end{align}
where we have assumed $\hat{\mathcal{B}}_\varepsilon(\mathbf{x},\varepsilon\lambda+\varepsilon^{1-\gamma}c_0\theta\cdot\nabla)\simeq\hat{\mathcal{B}}_\varepsilon(\mathbf{x},\varepsilon^{1-\gamma}c_0\theta\cdot\nabla)$, which is justified since $1>1-\gamma$ for $\gamma>0$.
The Laplace transform of the resting time density function $\phi_{\varepsilon}$ is given by
\[
\hat{\phi}_\varepsilon(\varepsilon\lambda)=\kappa\left(a\lambda \right)^\kappa\Gamma(-\kappa,a\lambda)e^{a\lambda}
\]
where $a=\varepsilon^{\varrho+1}r_0$. Using the following asymptotic expansion for the incomplete Gamma function \cite{NIST:DLMF}
\begin{align}
\Gamma(b,z) & = \Gamma(b)\left( 1-z^{b}e^{-z}\sum_{k=0}^{\infty}\frac{z^k}{\Gamma(b+k+1)}\right)\ ,\label{eq: 3.23}
\end{align}
where $b$ is a non-integer, and recalling that $b\Gamma(b)=\Gamma(b+1)$, we get
\begin{equation}
\hat{\phi}_\varepsilon(\varepsilon\lambda)=1-\varepsilon^{(1+\varrho)\kappa}r_0^\kappa\lambda^\kappa+\mathcal{O}(a\lambda)\label{eq: expasnion of phi}
\end{equation}
since $0<\kappa<1$. Note that in the above we have used $e^{a\lambda}=1+\mathcal{O}(a\lambda)$; the resulting remainder is of higher order than the retained $\lambda^\kappa$ term since $(1+\varrho)\kappa<1+\varrho$ for $0<\kappa<1$.
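As a numerical sanity check of the expansion \eqref{eq: 3.23}, the following sketch (our own illustration; the quadrature-based reference value and all parameter choices are ours) compares the series with a direct evaluation of the upper incomplete Gamma function:

```python
import math
from scipy.integrate import quad

def upper_gamma(b, z):
    # Upper incomplete Gamma function via direct quadrature;
    # valid for z > 0 and any real b (here b = -kappa is negative).
    val, _ = quad(lambda t: t**(b - 1.0) * math.exp(-t), z, math.inf)
    return val

def series_gamma(b, z, terms=60):
    # Right-hand side of the expansion:
    # Gamma(b,z) = Gamma(b) (1 - z^b e^{-z} sum_k z^k / Gamma(b+k+1))
    s = sum(z**k / math.gamma(b + k + 1.0) for k in range(terms))
    return math.gamma(b) * (1.0 - z**b * math.exp(-z) * s)

kappa = 0.7
for z in (0.05, 0.5, 2.0):
    exact = upper_gamma(-kappa, z)
    approx = series_gamma(-kappa, z)
    assert abs(exact - approx) < 1e-6 * max(1.0, abs(exact))
```

Since the series on the right-hand side is in fact convergent, the two evaluations agree to quadrature accuracy for all $z>0$, not only asymptotically.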
Hence, substituting (\ref{eq: expasnion of phi}) into (\ref{eq: laplace space}) we obtain the following,
\begin{align}
(\varepsilon\lambda+\varepsilon^{1-\gamma}&c_0\theta\cdot\nabla)\hat{\bar{\sigma}}(\mathbf{x},\lambda,\theta) -\varepsilon\bar{\sigma}^0(\mathbf{x},\theta)\nonumber\\ & \simeq -\left(\mathds{1}-(1-r_0^\kappa\varepsilon^{(1+\varrho)\kappa} \lambda^{\kappa})T\right)\hat{\mathcal{B}}_\varepsilon\left(\mathbf{x},\varepsilon^{1-\gamma}c_0\theta\cdot\nabla \right)\hat{\bar{\sigma}}(\mathbf{x},\lambda,\theta)\ .\label{eq: full Laplace transform}
\end{align}
Transforming back to the $(\mathbf{x},t)$-space we get
\begin{align}
\varepsilon\partial_t\bar{\sigma}(\cdot,\theta)&+\varepsilon^{1-\gamma}c_0\theta \cdot\nabla\bar{\sigma}(\cdot,\theta) \simeq-(\mathds{1}-T)\mathcal{B}_\varepsilon(\mathbf{x},\varepsilon^{1-\gamma}c_0\theta\cdot\nabla)\bar{\sigma}(\cdot,\theta)\nonumber\\ & -r_0^\kappa\varepsilon^{(1+\varrho)\kappa}T{}_{t}\mathds{D}^{\kappa}\mathcal{B}_\varepsilon(\mathbf{x},\varepsilon^{1-\gamma}c_0\theta\cdot\nabla)\bar{\sigma}(\cdot,\theta)\ .\label{eq: important equation}
\end{align}
Here we have used the fact that the Laplace transform of the Riemann-Liouville fractional derivative ${}_{t}\mathds{D}^\kappa$ is given by (see Ref.~\refcite{klages2008anomalous})
\[
\mathcal{L}\left\{ {}_{t}\mathds{D}^\kappa f(t)\right\}=\lambda^\kappa \hat{f}(\lambda)- \sum_{m=0}^{n-1}\lambda^m\left[{}_{t}\mathds{D}^{\kappa-m-1}f(t)\right]_{t=0^+}\ \textnormal{for}\ n-1<\kappa<n\ ,
\]
where we assumed $f(0^+)=0$, since there is no scattering at time zero.
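The Riemann-Liouville derivative appearing here can be approximated numerically by the classical Grünwald-Letnikov sum. The following sketch (our own illustration, not part of the derivation) checks this approximation against the closed form ${}_{t}\mathds{D}^\kappa t = t^{1-\kappa}/\Gamma(2-\kappa)$:

```python
import math

def gl_weights(kappa, n):
    # Gruenwald-Letnikov coefficients g_j = (-1)^j binom(kappa, j),
    # generated by the standard recursion g_j = (1 - (kappa+1)/j) g_{j-1}.
    g = [1.0]
    for j in range(1, n + 1):
        g.append(g[-1] * (1.0 - (kappa + 1.0) / j))
    return g

def rl_derivative_gl(f, t, kappa, n):
    # Gruenwald-Letnikov approximation of the Riemann-Liouville
    # derivative of order kappa at time t, using n subintervals.
    h = t / n
    g = gl_weights(kappa, n)
    return sum(g[j] * f(t - j * h) for j in range(n + 1)) / h**kappa

kappa, t = 0.7, 1.0
approx = rl_derivative_gl(lambda s: s, t, kappa, 4000)
exact = t**(1.0 - kappa) / math.gamma(2.0 - kappa)  # closed form for f(t) = t
assert abs(approx - exact) < 1e-2
```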
Scaling \eqref{eq: important 2} and changing the order of integration, the particles at rest satisfy the following equation
\begin{align}
\varepsilon\partial_t\bar{\sigma}_0(\cdot,\theta)& =T\int_0^t\mathcal{B}(\mathbf{x},t-s)\bar{\sigma}(\mathbf{x}-c\theta(t-s),s,\theta)ds\nonumber \\ &-T\int_0^t\phi_\varepsilon(t-s)\left(\int_0^s\mathcal{B}(\mathbf{x},t-s')\bar{\sigma}(\mathbf{x}-c\theta(t-s'),s',\theta)ds'\right)ds\ .\label{eq: density particles at rest}
\end{align}
The Laplace transform of this expression is
\begin{align}
\varepsilon\lambda\hat{\bar{\sigma}}_0(\mathbf{x},\lambda,\theta)-\varepsilon\bar{\sigma}^0_0(\mathbf{x},\theta) =r_0^\kappa\varepsilon^{(1+\varrho)\kappa}\lambda^\kappa T\hat{\mathcal{B}}_\varepsilon(\mathbf{x},\varepsilon^{1-\gamma}c_0\theta\cdot\nabla)\hat{\bar{\sigma}}(\mathbf{x},\lambda,\theta)\ ,\label{eq: laplace transform resting particles}
\end{align}
and if we assume that $1>1-\gamma$ as before we get
\begin{align}
\varepsilon\partial_t\bar{\sigma}_0(\cdot,\theta) =r_0^\kappa\varepsilon^{(1+\varrho)\kappa}T{}_{t}\mathds{D}^\kappa\mathcal{B}_\varepsilon(\mathbf{x},\varepsilon^{1-\gamma}c_0\theta\cdot\nabla) & \bar{\sigma}(\cdot,\theta)\ .\label{eq: resting particles}
\end{align}
\subsection{Conservation of particles}\label{sec: resting particles}
From the system (\ref{eq: kinetic})-(\ref{sigma0sigma}) we can obtain a particle
conservation equation, considering $\sigma_{\textnormal{tot}}(\mathbf{x},t,\theta)=\bar{\sigma}(\mathbf{x},t,\theta)+\bar{\sigma}_0(\mathbf{x},t,\theta)$, where $\bar{\sigma}$ and $\bar{\sigma}_0$ are given by (\ref{eq: important}) and (\ref{eq: important 2}) respectively. The conservation equation reads
\[
\varepsilon\partial_t\int_S\sigma_{\textnormal{tot}}d\theta+\varepsilon^{1-\gamma}c_0\int_S\theta\cdot\nabla\sigma_{\textnormal{tot}}d\theta=0\ ,
\]
where $S$ is the unit sphere.
Hence, substituting (\ref{eq: important equation}) and (\ref{eq: resting particles}) into the above expression we get
\begin{align}
\varepsilon\partial_t\int_S\sigma_{\textnormal{tot}}d\theta &+\varepsilon^{1-\gamma}c_0\int_S\theta\cdot\nabla\sigma_{\textnormal{tot}}d\theta =-\int_S(\mathds{1}-T)\mathcal{B}_\varepsilon(\mathbf{x},\varepsilon^{1-\gamma}c_0\theta\cdot\nabla)\bar{\sigma}d\theta\nonumber\\ &-r_0^\kappa\varepsilon^{(1+\varrho)\kappa}\int_S T\mathcal{B}_\varepsilon(\mathbf{x},\varepsilon^{1-\gamma}c_0\theta\cdot\nabla){}_{t}\mathds{D}^\kappa\bar{\sigma}(\mathbf{x},t,\theta)d\theta\nonumber\\ & +r_0^\kappa\varepsilon^{(1+\varrho)\kappa}\int_S T\mathcal{B}_\varepsilon(\mathbf{x},\varepsilon^{1-\gamma}c_0\theta\cdot\nabla){}_{t}\mathds{D}^\kappa\bar{\sigma}(\mathbf{x},t,\theta)d\theta=0\ . \label{eq: conservation long}
\end{align}
Note that here we have used the conservation of particles during the tumbling phase given in (\ref{eq: conservation T}).
If we consider $\sigma_{\textnormal{tot}}(\mathbf{x},t,\theta)=\frac{1}{|S|}\left(\bar{u}+\bar{u}_0+\varepsilon^\vartheta n\theta\cdot\bar{w}\right)$ then we finally have
\begin{equation}
\varepsilon\partial_t(\bar{u}+\bar{u}_0)+\varepsilon^{\vartheta+1-\gamma} nc_0\nabla\cdot\bar{w}=0\ ,\label{eq: conservation}
\end{equation}
where
\[
\bar{u}_0(\mathbf{x},t)=\frac{1}{|S|}\int_S\bar{\sigma}_0(\cdot,\theta)d\theta\ ,
\]
and $\bar{u}$ and $\bar{w}$ are defined in Lemma \ref{lem: eigenfunctions}. The equation (\ref{eq: conservation}) is non-trivial only for $\vartheta=\gamma$.
We can define a new density, independent of the direction $\theta$, $u_{\textnormal{tot}}(\mathbf{x},t)=\bar{u}+\bar{u}_0$, that takes into account the moving and resting particles. Then, the conservation equation finally reads
\begin{equation}
\partial_tu_{\textnormal{tot}}+nc_0\nabla\cdot\bar{w}=0\ .\label{eq: conservation eqaution}
\end{equation}
\section{Fractional space-time equation}\label{section:finalequation}
\iffalse
We can rewrite the expression (\ref{eq: important equation}) by expanding the convolution term as follows. First note that
\[
{}_{t-s}\mathds{D}^\kappa={}_{t}\mathds{D}^\kappa+{}_{t}\mathds{D}^{\kappa+1}(-s)+\textnormal{l.o.t.},
\]
hence, multiplying by $\bar{\sigma}(\mathbf{x},s,\theta)$ and integrating in time we get
\begin{align}
\int_0^t{}_{t-s}\mathds{D}^\kappa\bar{\sigma}(\mathbf{x},s,\theta)ds & ={}_{t}\mathds{D}^\kappa\int_0^t\bar{\sigma}(\mathbf{x},s,\theta)ds+\textnormal{l.o.t.}\nonumber\\ &\simeq{}_{t}\mathds{D}^\kappa\bar{\sigma}(\mathbf{x},t,\theta)-{}_{t}\mathds{D}^\kappa\bar{\sigma}(\mathbf{x},0,\theta)\nonumber\\ &={}_{t}\mathds{D}^\kappa\bar{\sigma}(\mathbf{x},t,\theta)-\frac{t^{-\kappa}}{\Gamma(1-\kappa)}\delta(x)={}_{t}^{C}\mathds{D}^\kappa\bar{\sigma}(\mathbf{x},t,\theta),
\end{align}
where we have used the fact that \cite{gorenflo1997fractional, klages2008anomalous}
\[
{}_{t}\mathds{D}^\kappa 1=\frac{t^{-\kappa}}{\Gamma(1-\kappa)}\ ,\ {}_{t}^C\mathds{D}^{\kappa}f(t)={}_{t}\mathds{D}^\kappa f(t)-\frac{t^{-\kappa}}{\Gamma(1-\kappa)}f(0^+)
\] and the initial condition. Here ${}_{t}^{C}\mathds{D}^\kappa$ denotes the fractional Caputo derivative.
\fi
Next we obtain an expression for the mean direction $\bar{w}$, depending only on the density of moving particles $\bar{u}$.
Multiplying (\ref{eq: important equation}) by $\theta$ and integrating over all directions we obtain
\begin{align}
n\varepsilon^{1+\gamma}\partial_t\bar{w}&+\varepsilon^{1-\gamma}c_0 \cdot\nabla\bar{u} \simeq-\frac{1}{|S|}\int_S\theta(\mathds{1}-T)\mathcal{B}_\varepsilon\left(\bar{u}+n\varepsilon^{\gamma}\theta\cdot\bar{w}\right)d\theta\nonumber\\ & -\frac{r_0^\kappa\varepsilon^{(1+\varrho)\kappa}}{|S|}\int_S\theta T\mathcal{B}_\varepsilon \ {}_{t}\mathds{D}^{\kappa}\left(\bar{u}+n\varepsilon^{\gamma}\theta\cdot\bar{w} \right)d\theta\ . \label{eq: expansion new densities}
\end{align}
From equation (\ref{eq:kernel}) and for $\hat{\varphi}(\mathbf{x},\varepsilon^{1-\gamma}c_0\theta\cdot\nabla)$ and $\hat{\psi}(\mathbf{x},\varepsilon^{1-\gamma}c_0\theta\cdot\nabla)$ given as in Ref.~\refcite{pks} we find
\begin{align}
\mathcal{B}_\varepsilon= \frac{\varepsilon^{-\mu}(\alpha-1)}{\tau_0}&-\frac{\varepsilon^{1-\gamma}c_0\theta\cdot\nabla}{2-\alpha}-\tau_0^{\alpha-2}\varepsilon^{\mu(\alpha-2)+(1-\gamma)(\alpha-1)}(1-\alpha)^2\nonumber\\ & \times
\Gamma(-\alpha+1)(c_0\theta\cdot\nabla)^{\alpha-1} +\mathcal{O}\left(\tau_0^{\alpha-1}\varepsilon^{\mu(\alpha-1)}\lambda^\alpha \right)\ .
\end{align}
Substituting $\mathcal{B}_\varepsilon$ into (\ref{eq: expansion new densities}), we compare the leading powers of $\varepsilon$.
\iffalse
\begin{table}[ht]
\caption{Terms for the scaling.}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
No & Terms without delay & Terms with delay \\
\hline
1 & $\varepsilon^{-\mu}\bar{u}$ & $\varepsilon^{-\mu+(\nu+\varrho)\kappa}{}_{t}\mathds{D}^\kappa\bar{u}$ \\
\hline
2 & $\varepsilon^{1-\gamma}\theta\cdot\nabla\bar{u}$ & $\varepsilon^{1-\gamma+(\nu+\varrho)\kappa}(\theta\cdot\nabla){}_{t}\mathds{D}^\kappa\bar{u}$ \\
\hline
3 & $\varepsilon^{\mu(\alpha-2)+(1-\gamma)(\alpha-1)}(\theta\cdot\nabla)^{\alpha-1}\bar{u}$ & $\varepsilon^{\mu(\alpha-2)+(1-\gamma)(\alpha-1)+(\nu+\varrho)\kappa}(\theta\cdot\nabla)^{\alpha-1}{}_{t}\mathds{D}^\kappa\bar{u}$ \\
\hline
4 & $\varepsilon^{-\mu+\nu-1+\gamma}(\theta\cdot\bar{w})$ & $\varepsilon^{-\mu+(\nu+\varrho)\kappa+\nu-1+\gamma}{}_{t}\mathds{D}^\kappa(\theta\cdot\bar{w})$ \\
\hline
5 & $\varepsilon^\nu(\theta\cdot\nabla)(\theta\cdot\bar{w})$ & $\varepsilon^{(\nu+\varrho)\kappa+\nu}(\theta\cdot\nabla){}_{t}\mathds{D}^\kappa(\theta\cdot\bar{w})$ \\
\hline
6 & $\varepsilon^{(\alpha-2)(\mu+1-\gamma)+\nu}(\theta\cdot\nabla)^{\alpha-1}(\theta\cdot\bar{w})$ & $\varepsilon^{(\nu+\varrho)\kappa+\nu+(\alpha-2)(\mu+1-\gamma)}(\theta\cdot\nabla)^{\alpha-1}{}_{t}\mathds{D}^\kappa(\theta\cdot\bar{w})$ \\
\hline
\end{tabular}
\end{center}
\label{default}
\end{table}
\fi
Considering that $(1+\varrho)\kappa>0$ as in Section \ref{subsec: scaling}, we observe that the terms involving the delay are of lower order with the exception of the term $\varepsilon^{-\mu+(1+\varrho)\kappa}{}_{t}\mathds{D}^\kappa\bar{u}$.
The physically relevant scaling regime involves a fractional transport term in the expression for $\bar{w}$, hence we choose
\begin{equation}
\mu=\frac{1-\alpha(1-\gamma)}{\alpha-1}\ \textnormal{and}\ \gamma>1-\frac{1}{\alpha}\ \label{eq: scaling final}
\end{equation}
to guarantee that $\mu>0$. Moreover, to ensure that the term involving a time delay is of lower order we also choose $(1+\varrho)\kappa>(\alpha-1)(\mu+1-\gamma)$. \textcolor{black}{Taking these relations into account, the right hand side of (\ref{eq: expansion new densities}) can be rewritten as
\begin{align}
-\frac{1}{|S|}&\int_S\theta(\mathds{1}-T)\Bigl[\varepsilon^{-\mu}\frac{(\alpha-1)\bar{u}}{\tau_0}-\varepsilon^{-\mu+\gamma}\frac{(\alpha-1)}{\tau_0}n\theta\cdot\bar{w}-\tau_0^{\alpha-2}\varepsilon^{\mu(\alpha-2)+(1-\gamma)(\alpha-1)}\nonumber\\ &\times(1-\alpha)^2\Gamma(-\alpha+1)(c_0\theta\cdot\nabla)^{\alpha-1}\bar{u} \Bigr]d\theta+\mathcal{O}\left(\varepsilon^{\min\{-\mu+(1+\varrho)\kappa,\ \mu(\alpha-1)\}} \right).\label{eq: clearer}
\end{align}
From the coefficient
of the leading term $\varepsilon^{-\mu}$ in (\ref{eq: clearer}) we then obtain,}
\begin{align}
0=-\frac{1}{|S|}\int_S\theta(\mathds{1}-T)\frac{\alpha-1}{\tau_0}\bar{u}d\theta\label{eq: zero equation}\ .
\end{align}
The subleading term is of order $\varepsilon^{\mu(\alpha-2)+(1-\gamma)(\alpha-1)}$ and we get
\begin{align}
0 & =-\frac{1}{|S|}\int_S\theta(\mathds{1}-T)\Bigl(\tau_0^{\alpha-2}(1-\alpha)^2\Gamma(-\alpha+1)c_0^{\alpha-1}(\theta\cdot\nabla)^{\alpha-1}\bar{u}\nonumber\\ & +\frac{n(\alpha-1)}{\tau_0}(\theta\cdot\bar{w}) \Bigr)d\theta\label{eq: first mean direction}\ .
\end{align}
Note that we have obtained the same fractional diffusion equation as in Ref.~\refcite{pks} and Ref.~\refcite{perthame2018fractional}
for a constant chemoattractant concentration.
From (\ref{eq: first mean direction}), after applying the operator $T$ to the right hand side, we can solve for the mean direction $\bar{w}$ and obtain
\begin{equation}
\bar{w}=\frac{\pi\tau_0^{\alpha-1}(\alpha-1)}{\sin(\pi\alpha)\Gamma(\alpha)}\frac{(n^2\nu_1-|S|)}{n|S|(\nu_1-1)}c_0^{\alpha-1}\nabla^{\alpha-1}\bar{u}\label{eq: mean direction final}\ .
\end{equation}
Substituting $\bar{w}$ into the conservation equation (\ref{eq: conservation eqaution}) we obtain
\begin{equation}
\partial_tu_{\textnormal{tot}}=\nabla\cdot \left(C_\alpha\nabla^{\alpha-1}\bar{u}\right)\label{eq: preliminary}\ ,
\end{equation}
where
\[
C_\alpha=-\frac{n\pi\tau_0^{\alpha-1}(\alpha-1)}{\sin(\pi\alpha)\Gamma(\alpha)}\frac{(n^2\nu_1-|S|)}{n|S|(\nu_1-1)}c_0^{\alpha}>0 \ \textnormal{for}\ 1<\alpha<2\ .
\]
Next we write the right hand side of (\ref{eq: preliminary}) in terms
of $u_{\textnormal{tot}}$, and for this we return to the resting particles
equation (\ref{eq: resting particles}).
Expanding the right hand side of (\ref{eq: resting particles}) and choosing only the
leading terms we obtain, in the Laplace space,
\[
\lambda\hat{\bar{\sigma}}_0(\mathbf{x},\lambda,\theta)-\bar{\sigma}_0^0(\mathbf{x},\theta)=r_0^\kappa\lambda^\kappa \frac{(\alpha-1)}{\tau_0}\hat{\bar{u}}+\mathcal{O}\left(\varepsilon^{(1+\varrho)\kappa+\mu(\alpha-2)+(\alpha-1)(1-\gamma)} \right)\ .
\]
Here we have chosen $(1+\varrho)\kappa=\alpha(\mu+1-\gamma)$ which agrees with our previous assumption $(1+\varrho)\kappa>(\alpha-1)(\mu+1-\gamma)$. Integrating the above expression with respect to $\theta$ we can write it in terms of the Laplace transform of $\bar{u}_0$. Substituting $\bar{u}=u_{\textnormal{tot}}-\bar{u}_0$ into the right hand side and grouping terms we obtain
\begin{equation}
\hat{\bar{u}}_0(\mathbf{x},\lambda)-\frac{1}{\lambda}\bar{u}_0^0(\mathbf{x})=\frac{\hat{u}_{\textnormal{tot}}}{1+\frac{\tau_0}{r_0^\kappa(\alpha-1)}\lambda^{1-\kappa}}\ .
\end{equation}
Since we consider the limit $\lambda\rightarrow 0$, applying a Taylor expansion and assuming that all particles are moving at $t=0$, i.e. $\bar{u}_0^0=0$, we have
\begin{equation}
\hat{\bar{u}}_0=\left(1-\frac{\tau_0\lambda^{1-\kappa}}{r_0^\kappa(\alpha-1)}+\mathcal{O}\left(\lambda^{2(1-\kappa)} \right) \right)\hat{u}_{\textnormal{tot}}\ .\label{eq: resting particles approximation}
\end{equation}
Substituting the inverse Laplace transform of (\ref{eq: resting particles approximation}) back into (\ref{eq: preliminary}) we get
\begin{align}
\partial_tu_{\textnormal{tot}} & =\nabla\cdot\left( C_\alpha\nabla^{\alpha-1}(u_{\textnormal{tot}}-\bar{u}_0)\right)\nonumber\\ &={}_{t}\mathds{D}^{1-\kappa} \nabla\cdot\left(C_{\alpha,\kappa}\nabla^{\alpha-1} u_{\textnormal{tot}} \right)\label{eq: FINAL}
\end{align}
where
\[
C_{\alpha,\kappa}=\frac{\tau_0}{r_0^\kappa(\alpha-1)}C_\alpha\ .
\]
In fact, we can also write equation (\ref{eq: FINAL}) using the Laplace transform as
\begin{align}
\lambda^{\kappa}\hat{u}_\textnormal{tot}-\lambda^{\kappa-1}u_\textnormal{tot}^0 & =\nabla\cdot\left(C_{\alpha,\kappa}\nabla^{\alpha-1}\hat{u}_\textnormal{tot}\right)\ ,
\end{align}
and using the fact that
\[
\mathcal{L}\left\{{}_{t}^C\mathds{D}^\kappa f(t) \right\}=\lambda^\kappa\hat{f}(\lambda)-\sum_{m=0}^{n-1}\lambda^{\kappa-m-1}f^{(m)}(0)\ ,
\] we have
\begin{equation}
{}_{t}^C\mathds{D}^\kappa u_\textnormal{tot}=\nabla\cdot\left(C_{\alpha,\kappa}\nabla^{\alpha-1}u_\textnormal{tot} \right)\label{eq: final final}\ .
\end{equation}
\begin{rem}
As previously noted, equation \eqref{eq: final final} does not contain a chemotactic component: this is in agreement with Ref.~\refcite{harris}, where CD8$^+$ T cells were found not to exhibit directional migration on the time and length scales relevant to the experiments.
\end{rem}
\begin{rem}\label{sec: scaling relations}
From the analysis in the previous section, relevant scaling parameters satisfy
the following relations:
\begin{align}
\mu & =\frac{1-\alpha(1-\gamma)}{\alpha-1}\ , \ \varrho =\frac{\alpha \gamma}{\kappa(\alpha-1)}-1\ ,
\end{align}
for $0<\kappa<1$ and $1<\alpha<2$. From (\ref{eq: scaling final}) we have $\gamma>1-\frac{1}{\alpha}$, while $\varrho>0$ requires $\gamma>\kappa\left(1-\frac{1}{\alpha}\right)$; since $0<\kappa<1$ the first condition is the stronger one. Together with the condition $\gamma<1$ used in the expansion of $\mathcal{B}_\varepsilon$, we conclude that
\[
1-\frac{1}{\alpha}<\gamma<1\ .
\]
For $\alpha=1.15$ and $\kappa=0.7$ as in \cite{harris}, this gives $0.13<\gamma<1$. Choosing $\gamma=0.5$ then
$$\mu\approx 2.83 \ \textnormal{and}\ \varrho\approx 4.48.$$ In this regime, the scaling of the long runs ($\mu$) and the scaling of the waiting times ($\varrho$) are of similar order.
\end{rem}
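As a quick sanity check, the relations of this remark can be evaluated directly; the short sketch below (our own, with $\gamma=0.5$ as the illustrative choice made above) confirms the admissibility conditions:

```python
# Evaluate the scaling relations for the experimentally motivated
# exponents alpha and kappa; gamma = 0.5 is an illustrative choice.
alpha, kappa = 1.15, 0.7
gamma = 0.5

mu = (1.0 - alpha * (1.0 - gamma)) / (alpha - 1.0)
rho = alpha * gamma / (kappa * (alpha - 1.0)) - 1.0

assert mu > 0.0 and rho > 0.0        # admissibility of the scaling
assert gamma > 1.0 - 1.0 / alpha     # condition from eq. (scaling final)
```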
\subsection{Fundamental solution}\label{sec: fundamental solution}
Assuming that the stopping rate $\psi$ is independent of the position of the
particle, we can write \eqref{eq: final final} as
\begin{equation}
{}_{t}^C\mathds{D}^\kappa\ u_\textnormal{tot}=C_{\alpha,\kappa}\nabla\cdot\left(\nabla^{\alpha-1}u_\textnormal{tot} \right)= -\widetilde{C}_{\alpha,\kappa} (-\Delta)^{\alpha/2} u_\textnormal{tot} \label{eq: constant c}.
\end{equation}
Here, according to \eqref{appendixnormal} in two dimensions, for $1<\alpha<2$,
\[
\widetilde{C}_{\alpha,\kappa}=-2\sqrt{\pi} C_{\alpha,\kappa}\cos\left(\frac{\pi\alpha}{2} \right)\frac{\Gamma\left(\frac{\alpha+1}{2} \right)}{\Gamma\left(\frac{\alpha+2}{2} \right)}.
\]
Following Ref.~\refcite{fundsol}, the fundamental solution of \eqref{eq: constant c} in $\mathbb{R}^n$, with initial condition $\delta_0$ and diffusion constant $\widetilde{C}_{\alpha,\kappa}$, can be found with the help of the Fourier-Laplace transform
\begin{equation}
\hat{\overline{G}}(\lambda, \xi) = \frac{\lambda^{\kappa-1}}{\lambda^{\kappa} + \widetilde{C}_{\alpha,\kappa} \vert \xi \vert^{\alpha}}.
\end{equation}
Note that the Laplace transform of the Mittag-Leffler function is
\begin{equation}
\mathcal{L}\; E_{\kappa} (ct^{\kappa}) = \frac{\lambda^{\kappa-1}}{\lambda^{\kappa} -c}.
\end{equation}
Thus,
\begin{equation}
\hat{G}(t,\xi) = E_{\kappa} ( -\widetilde{C}_{\alpha,\kappa} \vert \xi \vert^{\alpha} t^{\kappa}).
\end{equation}
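The Laplace-transform identity for the Mittag-Leffler function can be verified numerically; the sketch below (our own construction, using a truncated power series for $E_\kappa$ and illustrative arguments $c=-1$, $\lambda=2$) checks it by direct quadrature:

```python
import math
from scipy.integrate import quad

def mittag_leffler(kappa, z, terms=60):
    # Truncated power series E_kappa(z) = sum_k z^k / Gamma(kappa k + 1);
    # adequate for the moderate |z| encountered below.
    return sum(z**k / math.gamma(kappa * k + 1.0) for k in range(terms))

kappa, c, lam = 0.7, -1.0, 2.0

# Numerical Laplace transform of E_kappa(c t^kappa); the integrand is
# negligible beyond t = 12 because of the factor exp(-lam t).
lhs, _ = quad(lambda t: math.exp(-lam * t) * mittag_leffler(kappa, c * t**kappa),
              0.0, 12.0)
rhs = lam**(kappa - 1.0) / (lam**kappa - c)

assert abs(lhs - rhs) < 1e-3
```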
Using the formula for the inverse transformation of a radial function, we obtain
\begin{equation}
G(t,\mathbf{x}) = \frac{\vert \mathbf{x} \vert^{1-n/2}}{(2\pi)^{n/2}} \int_0^{\infty} E_{\kappa} (-\widetilde{C}_{\alpha,\kappa}\tau^{\alpha} t^{\kappa}) \tau^{n/2} J_{n/2-1}(\tau \vert \mathbf{x} \vert) d \tau,
\end{equation}
where $J_r(z)$ is a Bessel function. Passing through the Mellin and inverse Mellin transform we conclude
\begin{equation}
G(t,\mathbf{x}) = \frac{1}{\pi^{n/2}|\mathbf{x}|^{n}} H^{2,1}_{2,3}\left(\frac{|\mathbf{x}|^{\alpha}}{2^{\alpha} \widetilde{C}_{\alpha,\kappa} t^\kappa} \Big|^{(1,1); (1, \kappa)}_{(n/2,\alpha/2); (1, 1);(1,\alpha/2)}\right)\label{eq: asymptotic of fundamental}\ ,
\end{equation}
where $H^{2,1}_{2,3}(z)$ is a Fox $H$-function. Useful identities and asymptotics may be found in Ref.~\refcite{Braaksma} and Ref.~\refcite{Hbook}. In particular by Theorem 3 in Ref.~\refcite{Braaksma} for $1< \alpha <2,\; 0<\kappa<1,\;\mathrm{and}\; \alpha<2\kappa$,
\begin{equation}\label{eq: asympt}
G(t,\mathbf{x}) \simeq \frac{1}{|\mathbf{x}|^{n}} \ \left(\frac{|\mathbf{x}|^{\alpha}}{2^{\alpha} \widetilde{C}_{\alpha,\kappa} t^\kappa}\right)^q
\end{equation}
when $\frac{|\mathbf{x}|^{\alpha}}{\widetilde{C}_{\alpha,\kappa} t^\kappa} \ll 1$, where $q= 1$.
In the limit $\frac{|\mathbf{x}|^{\alpha}}{\widetilde{C}_{\alpha,\kappa} t^\kappa} \gg 1$, we have $q = -1$. Note that these estimates hold in the regime of the experiments in Ref.~\refcite{harris} discussed above, as well as for examples of superdiffusion without waiting times \cite{pks}, relevant for certain studies of \textit{E.~coli} and \textit{Dictyostelium discoideum}.\\
Other regimes give rise to the exponentially small tails known from Brownian motion, even in the presence of waiting times. For example, for Brownian motion with waiting times,
corresponding to $\alpha = 2$ and $\kappa<1$, we obtain the fundamental solution
\begin{equation}
G(t,\mathbf{x}) = \frac{1}{2 \pi^{n/2}|\mathbf{x}|^{n}} H^{2,0}_{1,2}\left(\frac{|\mathbf{x}|}{2 \sqrt{\widetilde{C}_{2,\kappa}} t^{\kappa/2}} \Big|^{(1,\kappa/2)}_{(1,1/2); (n/2, 1/2)}\right)\ .
\end{equation}
It has exponentially small tails as $|\mathbf{x}| t^{-\kappa/2} \to \infty$:
\begin{equation}
G(t,\mathbf{x}) \simeq \frac{1}{|\mathbf{x}|^n}\left(\frac{|\mathbf{x}|}{2 \sqrt{\widetilde{C}_{2, \kappa}} t^{\kappa/2}} \right)^{-\frac{n}{2-\kappa}} \exp \left( 2(\frac{\kappa}{2}-1) \kappa^{\frac{\kappa}{2-\kappa}}\left(\frac{|\mathbf{x}|}{2 \sqrt{\widetilde{C}_{2, \kappa}} t^{\kappa/2}}\right)^{\frac{2}{2-\kappa}}\right).
\end{equation}
In particular, the range in which the asymptotics \eqref{eq: asympt} holds shrinks to $0$ when $\alpha$ and $\kappa$ approach the boundary of the admissible region $1< \alpha <2,\; 0<\kappa<1,\;\mathrm{and}\; \alpha<2\kappa$.
\subsection{Hitting times}
The fundamental solution of the continuum model derived in the previous subsection allows us
to extract analytical approximations for biologically relevant quantities. As an example, we derive an expression for the time at which a particle hits some distant target $T$ with
radius $a$, in the experimentally relevant regime $1< \alpha <2,\; 0<\kappa<1,\;\mathrm{and}\; \alpha<2\kappa$. We seek the first time at which the density of the solution in $T$ reaches a certain threshold $\delta$. That is, we seek $t_0$ such that
\begin{equation}
\delta = \int_T \int_{\mathbb{R}^n} G(\mathbf{x}-\mathbf{y},t_0)u_0(\mathbf{y}) d\mathbf{y}\, d\mathbf{x}.
\end{equation}
Assuming that the initial positions of the particles are given by $\mathbf{x}_i$, so that $u_0(\mathbf{x}) = \sum_i \delta_{\mathbf{x}_i}(\mathbf{x})$, we obtain
\begin{align}
\delta &= \sum_i \int_T G(\mathbf{x}-\mathbf{x}_i,t_0) d\mathbf{x} \nonumber \\& = \sum_i \int_T \frac{1}{\pi^{n/2}|\mathbf{x}-\mathbf{x}_i|^{n}} H^{2,1}_{2,3}\left(\frac{|\mathbf{x}-\mathbf{x}_i|^{\alpha}}{2^{\alpha} \widetilde{C}_{\alpha,\kappa} t_0^\kappa} \Big|^{(1,1); (1, \kappa)}_{(n/2,\alpha/2); (1, 1);(1,\alpha/2)}\right)\ d\mathbf{x}\ .
\end{align}
If all initial positions are at distance $\gg (\widetilde{C}_{\alpha,\kappa} t_0^\kappa)^{1/\alpha}$ from the target $T$, we may use the asymptotic expansion of the $H$-function from the previous subsection to obtain
\begin{equation}
\begin{split}
\delta &\simeq \frac{2^{\alpha} \widetilde{C}_{\alpha,\kappa} t_0^{\kappa}}{\pi^{n/2}} \sum_i \int_T \vert \mathbf{x}-\mathbf{x}_i \vert^{-\alpha - n} d\mathbf{x} \\
& \simeq \frac{2^{\alpha} \widetilde{C}_{\alpha,\kappa} t_0^{\kappa}}{\pi^{n/2}} \mathrm{vol}(T) \sum_i \vert \mathbf{x}_0 - \mathbf{x}_i\vert^{-\alpha-n},
\end{split}
\end{equation}
where $\mathbf{x}_0$ is the centre of the target $T$. Thus,
\begin{equation}\label{hittimes}
t_0 \simeq \left( \frac{\delta \pi^{n/2}}{2^{\alpha} \widetilde{C}_{\alpha,\kappa} \mathrm{vol}(T) \sum_i \vert \mathbf{x}_0 - \mathbf{x}_i \vert^{-\alpha-n}} \right)^{1/\kappa}.
\end{equation}
This formula holds in the regime where the asymptotic expansion \eqref{eq: asympt} is valid.
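The estimate \eqref{hittimes} is straightforward to evaluate numerically. The sketch below (with purely illustrative values for $\widetilde{C}_{\alpha,\kappa}$, the threshold $\delta$ and the cell positions, not fitted to the experiments) computes $t_0$ for a few initial positions in the plane:

```python
import math

def hitting_time(delta, C, vol_T, x0, positions, alpha, kappa, n=2):
    # Evaluate t_0 from eq. (hittimes): the time at which the density
    # accumulated in the target T reaches the threshold delta.
    s = sum(math.dist(x0, xi)**(-(alpha + n)) for xi in positions)
    return (delta * math.pi**(n / 2.0) / (2.0**alpha * C * vol_T * s))**(1.0 / kappa)

alpha, kappa = 1.15, 0.7
C = 1.0                          # illustrative value of the diffusion constant
target_centre = (0.0, 0.0)
vol_T = math.pi * 1.0**2         # disc-shaped target of radius 1
cells = [(50.0, 0.0), (0.0, 80.0), (-60.0, -60.0)]  # initial positions far from T

t0 = hitting_time(1e-6, C, vol_T, target_centre, cells, alpha, kappa)
assert t0 > 0.0
# Cells starting closer to the target are found earlier:
assert hitting_time(1e-6, C, vol_T, target_centre, [(25.0, 0.0)],
                    alpha, kappa) < hitting_time(1e-6, C, vol_T, target_centre,
                                                 [(50.0, 0.0)], alpha, kappa)
```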
\section{Numerical methods} \label{numerics}
In addition to the detailed qualitative information provided by the fundamental solution,
the space-time fractional continuum equation allows efficient quantitative
modelling of immune cell behaviour. We briefly describe the numerical approximation of the
nonlocal operators. Challenges include the numerical evaluation of the singular
integrals and the lack of boundary regularity, which leads to reduced convergence
rates in naive approaches. Our numerical approximation of Equation \eqref{eq: final final}
uses a finite element discretisation in space as discussed, for example, in Ref.~\refcite{VIPreprint}
and a time stepping method based on convolution quadrature as in Ref.~\refcite{acosta2017finite}.
Let $\Omega \subset \mathbb{R}^n$ be a bounded domain with polygonal boundary and let $f\in C^0([0,T) \times \Omega)$. For $\alpha \in (1,2)$ and $\kappa \in (0,1)$, we consider the problem
\begin{align}\label{eq: num_set}
{}_{t}^C\mathds{D}^\kappa u - \nabla \cdot (C_{\alpha,\kappa} \nabla^{\alpha-1} u) & = f & \mathrm{in}\; \Omega \times [0,T) \nonumber\\
u &= 0 & \mathrm{in}\; \Omega^c \times [0,T) \\
u(\cdot, 0) &= u_0 & \mathrm{in}\; \Omega. \nonumber
\end{align}
Let $\mathcal{T}_h$ be a shape regular and quasi-uniform triangulation of the region $\Omega$, with triangles of diameter at most $h$. Let $H_h$ be the subspace of piecewise linear functions of $H_0^{\alpha/2}(\Omega)$ associated with $\mathcal{T}_h$. Then, the semidiscrete weak formulation of the problem is as follows: Find $u_h \in C^0([0,T); H_h) \cap C^\kappa([0,T); L^2(\Omega))$ such that
\begin{align}
({}_{t}^C\mathds{D}^\kappa u_h,v) + a(u_h,v) &= (f,v),\\
u_h(0) &= u_{0},
\end{align}
for all $v \in H_h$. Here $a(\cdot,\cdot)$ represents the bilinear form
$$a(u,v) = (C_{\alpha,\kappa} \nabla^{\alpha-1} u, \nabla v)$$
of the fractional Laplacian and, for simplicity, we assume that $u_0 \in H_h$.
The discrete fractional Laplace operator $\Lambda_h$ is defined as the unique operator satisfying
\begin{equation}
(\Lambda_h u_h,v_h) = a(u_h,v_h),\; \mathrm{for \; all}\; u_h,v_h \in H_h,
\end{equation}
and the mass matrix $M_h$ is given by
\begin{equation}
(M_h u_h,v_h) = (u_h,v_h),\; \mathrm{for \; all}\; u_h,v_h \in H_h.
\end{equation}
This leads to the following strong reformulation of the semidiscrete problem: Find $u_h \in C^0([0,T); H_h) \cap C^\kappa([0,T); L^2(\Omega))$ such that
\begin{align}
M_h {}_{t}^C\mathds{D}^\kappa u_h + \Lambda_h u_h &= f_h & \mathrm{in}\; \Omega \times [0,T) \\
u_h &= 0 & \mathrm{in}\; \Omega^c \times [0,T) \nonumber\\
u_h(\cdot, 0) &= u_0 & \mathrm{in}\; \Omega\ .\nonumber
\end{align}
For the discretisation of this equation in time we follow the approach of Ref.~\refcite{acosta2017finite}. Dividing the time interval $[0,T)$ uniformly with time step $\tau = T/N$ of size $h^{\alpha/\kappa}$, we seek a numerical approximation of the convolution integral $K \ast g(t)$ associated with the Caputo time fractional derivative, by means of a finite sum as
\begin{equation}
K \ast g(t) = \int_0^t K(s)g(t-s) \mathrm{d}s \approx \sum_{j=0}^n w_j g(t-j\tau).
\end{equation}
The weights $w_j$ are computed from the Taylor expansion of $\mathcal{K}(\delta(y)/\tau)$. Here, $\mathcal{K}$ is the Laplace transform of the kernel $K$ and $\delta(y)$ is the quotient of the generating polynomials of a multistep method. For the backward Euler method, $\delta(y)=1-y$, and the $w_j$ are calculated from the recursion relation
\begin{align*}
w_0 &= \tau^{-\kappa},\\
w_j &= \left(1-\frac{\kappa+1}{j}\right)w_{j-1}.
\end{align*}
For full details see Ref.~\refcite{Lubich1988_1} and Ref.~\refcite{Lubich_2}. \\
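For a fractional derivative of order $\kappa$, the backward Euler convolution quadrature weights are $w_j = \tau^{-\kappa}(-1)^j\binom{\kappa}{j}$, generated by the recursion $w_j=(1-(\kappa+1)/j)\,w_{j-1}$; the short sketch below (our own cross-check) verifies the equivalence of the two representations:

```python
import math

def cq_weights(kappa, tau, n):
    # Convolution quadrature weights for a fractional derivative of order
    # kappa with backward Euler generating polynomial delta(y) = 1 - y,
    # generated by the recursion w_j = (1 - (kappa+1)/j) w_{j-1}.
    w = [tau**(-kappa)]
    for j in range(1, n + 1):
        w.append((1.0 - (kappa + 1.0) / j) * w[-1])
    return w

kappa, tau = 0.7, 0.01
w = cq_weights(kappa, tau, 10)

# Cross-check against the explicit binomial formula w_j = tau^{-kappa} (-1)^j binom(kappa, j)
for j in range(11):
    explicit = (tau**(-kappa) * (-1.0)**j * math.gamma(kappa + 1.0)
                / (math.gamma(j + 1.0) * math.gamma(kappa - j + 1.0)))
    assert abs(w[j] - explicit) < 1e-8 * max(1.0, abs(explicit))
```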
The fully discrete time stepping scheme for \eqref{eq: num_set} is then given as follows: Find $\{u_h^1, u_h^2, \dots \} \subset H_h$ such that
\begin{equation}
(w_0 M + A) u_h^n = M \left((\sum_{j=0}^n w_j) u_h^0 - \sum_{j=1}^n w_j u_h^{n-j} + f_h^n \right),
\end{equation}
where $u_h^0$ is given. Here $M, A$ are the mass and stiffness matrices related to the piecewise linear basis functions $\varphi_i$ of $H_h$, defined by $M_{ij} = (\varphi_i,\varphi_j)$ and $A_{ij} = a(\varphi_i,\varphi_j)$, and $f_h^n = \sum_i (f(\cdot, n\tau), \varphi_i) \varphi_i$ is the projection of $f$ onto $H_h$ at time $n \tau$.\\
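To make the time-stepping structure concrete, the following self-contained sketch (our own illustration) applies the same convolution quadrature to a one-dimensional finite-difference discretisation, with the standard Laplacian, i.e. $\alpha=2$, standing in for the fractional stiffness matrix; it illustrates only the treatment of the time-fractional derivative, not the nonlocal spatial operator:

```python
import numpy as np

# Time-fractional diffusion on (0,1) with zero Dirichlet boundary conditions:
# CQ (backward Euler) in time, central finite differences in space.
kappa, T, N, m = 0.7, 0.5, 100, 49
tau, h = T / N, 1.0 / (m + 1)

# CQ weights w_j = tau^{-kappa} (-1)^j binom(kappa, j)
w = np.empty(N + 1)
w[0] = tau**(-kappa)
for j in range(1, N + 1):
    w[j] = (1.0 - (kappa + 1.0) / j) * w[j - 1]

# Stiffness matrix of -d^2/dx^2 (standing in for A; the mass matrix is the identity here)
A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
     - np.diag(np.ones(m - 1), -1)) / h**2

x = np.linspace(h, 1.0 - h, m)
u = [np.exp(-50.0 * (x - 0.5)**2)]      # initial bump

lhs = w[0] * np.eye(m) + A
for n in range(1, N + 1):
    # (w_0 M + A) u^n = M((sum_{j<=n} w_j) u^0 - sum_{j>=1} w_j u^{n-j}), with f = 0
    rhs = w[:n + 1].sum() * u[0] - sum(w[j] * u[n - j] for j in range(1, n + 1))
    u.append(np.linalg.solve(lhs, rhs))

assert np.all(np.isfinite(u[-1]))
assert u[-1].max() < u[0].max()         # diffusive decay of the initial peak
```

For $\kappa=1$ the weights reduce to $w_0=1/\tau$, $w_1=-1/\tau$, $w_j=0$ for $j\geq 2$, and the scheme collapses to the classical backward Euler method.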
To illustrate the effect of the fractional derivative in time in the biologically relevant regime, we consider problem \eqref{eq: num_set} in (a polygonal approximation of) $\Omega = B(0,10)$ with $f \equiv 0 $ and $u_0(\mathbf{x}) = \mathrm{max}(\exp(-5 \vert \mathbf{x} \vert^2) - 0.2, 0)$, using $h \simeq 0.025$ and $\tau \simeq h^{\alpha/\kappa}$. \textcolor{black}{This setup corresponds to a Petri dish with an initial density of cells in the center. The domain is large enough so that the dominant effects correspond to diffusion rather than boundary effects, as in the experiment \cite{harris}.} The solution at time $t = 1$ is shown for $\alpha = 1.15$ and $\kappa = 0.7$ in Figure \ref{f:delay} and for $\alpha = 1.15$ and $\kappa=1$ in Figure \ref{f:nodelay}. The figures clearly exhibit the memory effects induced by the fractional derivative in time.
\begin{figure}[!ht]
\centering
\subfloat[][]{
\includegraphics[width = 0.5\textwidth]{delay_alpha115.eps}
\label{f:delay}
}
\subfloat[][]{
\includegraphics[width = 0.5\textwidth]{nodelay_alpha115.eps}
\label{f:nodelay}
}
\caption{\textbf{(a)} Solution to \eqref{eq: num_set} at time $t = 1$ for $\alpha = 1.15$, $\kappa = 0.7$ (resting). \textbf{(b)} Solution to \eqref{eq: num_set} at time $t = 1$ for $\alpha = 1.15$, $\kappa = 1$ (no resting).}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width = \textwidth]{MSD_t_10.eps}
\caption{Width of solution depending on $\kappa$ as a function of time for $\alpha = 1.15$.}\label{f:CrossPlot}
\end{figure}
\textcolor{black}{Figure~\ref{f:CrossPlot} shows the width of the cell density as a function of time for different $\kappa$.}\\
Cross-sections of solutions of Equation~\eqref{eq: final final} with initial condition given by $u(0,\mathbf{x}) = \mathrm{max}(\exp(-5 \vert \mathbf{x} \vert^2) - 0.2, 0)$ are given in Figures~\ref{f:A107Kvar} and \ref{f:K07Avar}, for time $t=0.02$.
\textcolor{black}{The time $t = 0.02$ is chosen in order to exhibit the long tails of the L\'{e}vy diffusion, with their known slope. The influence of the boundary becomes more relevant for longer times.}
Figure~\ref{f:A107Kvar} shows a cross section of the solution for values of $\kappa$ from $0.6$ to $1$, for the experimentally obtained $\alpha=1.15$ as in \cite{harris}. In particular, it depicts the expected tail of the density with decay $|\mathbf{x}|^{-n-\alpha} = |\mathbf{x}|^{-3.15}$, independent of $\kappa$, as well as the Markovian limit $\kappa \to 1$. Figure~\ref{f:K07Avar} varies the coefficient $\alpha$ from $1.15$ to $2$, for $\kappa=0.7$ as in Ref.~\refcite{harris}. As long as $\alpha<2$, the density again decays like $|\mathbf{x}|^{-n-\alpha}$ away from the initial bump, while it exhibits the faster Gaussian decay for $\alpha = 2$. \textcolor{black}{As $\alpha \to 2^-$ the onset of algebraic decay is only visible on larger and larger spatial scales. We highlight the fact that the exponent of the decay does not depend on $\kappa$. This is due to the fact that the decay exponent of the fundamental solution for $|x| \to \infty$ depends only on $\alpha$, while it is independent of $\kappa$, see \eqref{eq: asympt}.}
\begin{figure}[!ht]
\centering
\includegraphics[width = \textwidth]{ll_kappa2.eps}
\caption{Cross-section of solution depending on $\kappa$ for $\alpha = 1.15$.}
\label{f:A107Kvar}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width = \textwidth]{AlphaVariedKappa07withHighAlpha.eps}
\caption{Cross-section of solution depending on $\alpha$ for $\kappa = 0.7$.}
\label{f:K07Avar}
\end{figure}
\section{Conclusions \& Outlook}
In this paper we have derived effective macroscopic diffusion equations for organisms
exhibiting long-range behaviour and pauses. Beginning with a microscopic model in which
run times and waiting times followed a power-law, as observed for certain T cell populations controlling
chronic infections, we obtained a system of kinetic equations for the moving and resting
particles. The fractional diffusion equation \eqref{eq: final final} emerges in a
realistic limit.
The paper initiates a study into the interplay between long-range behaviour in
space and long delays between runs, contributing to recent interest in
anomalous diffusion processes. On the one hand, L\'{e}vy walks in space with
short / negligible delays have been suggested for the movements of organisms
such as \textit{E.~coli} under low nutrient levels and their macroscopic evolution has been
shown to be described by fractional Patlak-Keller-Segel equations.
\cite{bellouquid2016kinetic, pks, perthame2018fractional} They have also inspired search strategies for swarm robotic systems.\cite{RoboticsPreprint} On the other hand, Brownian
motion with subdiffusive behaviour in time has been investigated in the
context of death processes \cite{fedotov2015persistent} or nonlinear interactions. \cite{fedotov2013nonlinear, straka2015transport} A discussion of resting
times in velocity-jump models is found in Ref.~\refcite{TaylorKing}.
The macroscopic diffusion equation \eqref{eq: final final} permits analytical
insights into the evolution of the density. For example, it reveals that the microscopic
description enters via three parameters: the exponents $\alpha$ and $\kappa$ of the run and
waiting times and the diffusion constant $C_{\alpha,\kappa}$. Chemotactic terms are of
lower order: the long-range searching strategy is thus not disrupted by local
gradient following. Of course, immune cells are well known for their responses
to chemoattractants \cite{griffith2014chemokines}: in the context of the T cells studied here,
it is possible that their detection of a local attractant gradient would trigger a conversion from
long range searching behaviour to local gradient following.
The fundamental solution in $\mathbb{R}^n$, \eqref{eq: asymptotic of fundamental},
provides an explicit formula for the probability distribution for the movement of a
single particle. It leads to approximations for hitting times, \eqref{hittimes}, allows us
to study the sensitivity to parameter changes and provides a step towards the analysis
of mean first passage times, see below. On the other hand, Section \ref{numerics} offers efficient and accurate numerical methods to employ the fractional PDE \eqref{eq: final final} for parametric studies, despite its nonlocal nature, and more extensive modelling is
addressed elsewhere.
\begin{figure}[!ht]
\centering
\includegraphics[width = 0.8\textwidth]{semilogy_t0.eps}
\caption{Hitting time $t_0$ as function of $\alpha$ and $\kappa$ in the range of validity of \eqref{hittimes}.} \label{alphakappa}
\end{figure}
The experiments of Ref.~\refcite{harris} specifically studied the effect of the CXCL10 concentration
on T cell velocity: CD8$^+$ T cells in mice treated with anti-CXCL10 were, on average, 23\%
slower than the cells of a control population with normal responses. From the
fundamental solution of the space-time fractional equation we observe that velocity
changes only alter the time scale of diffusion, corresponding to $c^{\frac{\alpha}{\kappa}}$.
Thus, for the experimentally determined values $\alpha = 1.15$ and $\kappa = 0.7$, a 23\% reduction in the velocity would yield an approximately 35\% reduction in the diffusion timescale, and hence less efficient searching. In the absence of data stating otherwise,
here we have assumed the CD8$^+$ T cells migrate in an environment with
homogeneous CXCL10 levels, and therefore constant velocities $c$. More generally, it
would be of high interest to explore the impact of spatially-dependent velocities, resulting
from nonuniform chemical profiles. The microscopic modelling of such problems, however,
appears to be challenging even for velocity-jump models with standard Brownian motion.
The model could also be extended to include extra complexity. For example, in bacteria,
the stopping probability is linked to molecular components, which could enter as internal variables.
We refer to work in this direction by Perthame et~al.~\cite{perthame2018fractional} for the run-and-tumble of
bacteria including a biochemical pathway, and a more detailed discussion of the impact of
including internal variables is provided in Ref.~\refcite{xue2009individual}.
In the context of organisms searching for targets, a basic quantity of interest is the mean first passage time. It is defined as the time taken for a moving organism to reach a target or, more formally \cite{gardiner1986handbook} as
\[
\mathcal{T}(\mathbf{x})=\int_0^\infty \int_{\Omega}p(\mathbf{x}',t\mid \mathbf{x},0)d\mathbf{x}'\ dt\ .
\]
Here, $p(\mathbf{x}',t\mid \mathbf{x},0)$ is the probability that the particle is at $\mathbf{x}'$ at time $t$ provided that it was at $\mathbf{x}$ at time $0$, i.e.~the Green's function of the fractional equation. For the diffusion equation \eqref{eq: final final} two regimes have been considered in one dimension: For subdiffusion, $0<\kappa<1$ and $\alpha=2$, it was shown in Ref.~\refcite{yuste2004comment} that $\mathcal{T}(\mathbf{x})\rightarrow\infty$ for a target in a bounded domain, while for superdiffusion, $\kappa=1$ and $1<\alpha<2$, $\mathcal{T}(\mathbf{x})$ is finite under the same conditions. \cite{gitterman2000mean} It is the nonlocality of the equation here that generates the challenge, raising as it does the possibility of ``leapfrogging'' a target. The analysis in higher dimensions remains open. \cite{yuste2004comment}
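As a toy illustration of the finiteness of such times under superdiffusion (our Monte Carlo sketch, not the analyses cited above), one can simulate a one-dimensional L\'{e}vy flight with Pareto-distributed run lengths and record the first time it leaves a bounded interval; an interval, rather than a point target, is used precisely because a L\'{e}vy flight can leapfrog a point:

```python
import random

def exit_time(alpha, half_width, cap=10**6):
    """Steps until a 1-D Levy flight (Pareto(alpha) step lengths, random
    direction, one unit of time per step) leaves [-half_width, half_width]."""
    x, t = 0.0, 0
    while abs(x) <= half_width and t < cap:
        step = random.paretovariate(alpha)          # heavy-tailed run length
        x += step if random.random() < 0.5 else -step
        t += 1
    return t

random.seed(3)
times = [exit_time(1.15, 50.0) for _ in range(200)]
mean_exit = sum(times) / len(times)                 # finite for 1 < alpha < 2
```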
\textcolor{black}{Motivated by the differential movement of cells in gray and white brain matter,\cite{giese1996migration}
upcoming work on interface problems will consider velocities that take different values
in distinct regions of the domain. While our current article addresses uncorrelated
run and waiting times, correlations between these are also of interest. In the special
case of perfect correlations between run and waiting times the macroscopic limit
coincides with the one obtained from a velocity jump model for a correspondingly
reduced velocity. Weaker forms of correlation are an interesting topic for future
research.}
From a search-area coverage perspective, a long-tailed distribution of waiting times makes
little sense: Figure \ref{alphakappa} shows that waiting only increases the hitting time, and hence
decreases the searching efficiency. Of course, such apparent contradictions can only be explained
through considering the underlying problem: following a migration, T cells must spend a certain time
controlling their local environment for any antigen presenting (i.e. infected) cells, often
detected through direct cell-cell contact, and hence `waiting' is an intrinsic component
of the search/detection process. While we have followed the data of Ref.~\refcite{harris} and assumed
independence between the selection of run and wait times, it is of course possible that a link
exists: for example, a T cell performs a thorough check of some environment (checks a large number of
cells) before embarking on a long run. The extent to which such considerations impact on the
subsequent PDE remain to be explored.
\section{Introduction}
Scanning probe experiments often reveal complex pattern formation at
the surface of strongly correlated electronic systems\cite{basov-science,kohsaka-science-2007,phillabaum-2012,shuo-vo2,Dagotto,
dagotto-moreo-manganite}. Since their invention in 1982, scanning
probes have revolutionized our understanding of materials,
yielding an ever increasing wealth of data on a wide variety of
materials\cite{stm-rmp-nobel}. To date, the majority of theoretical
treatments have focused on microscopic physics \cite{basov-rmp},
with few theoretical treatments offering guidance for how to
interpret the detailed spatial information available in the emergent
multiscale pattern formation often observed on surfaces.
For systems
near criticality, the spatial configurations of geometric
clusters become scale-free, displaying spatial complexity on multiple length scales
in a way that is controlled by the critical fixed point.
Therefore,
the geometric properties encode critical exponents,
as we have shown elsewhere.\cite{phillabaum-2012,superstripes-erice-2014,shuo-vo2}
Such scale-free complexity can arise from interactions deep
within a material, or from surface-only physics, or even
from a non-interacting model at the right concentration.
We show here that artificial neural networks
can be trained to identify which physics is responsible
for the complex pattern formation, identifying whether
interactions are present, and if so, whether the interactions
arise from deep inside the material, or whether they
arise from surface physics.
Machine learning (ML)
is a burgeoning field in computer science and data
science, with broad applications across disciplines such as bioinformatics, computer vision,
marketing, economics, and medical diagnosis.
A computer program is said to exhibit ML if its ability to perform a given task
increases with experience, as determined by some performance metric.
That is, the program accumulates iterative modifications when presented with
certain input (``experience''), in such a way as to improve its performance
at the task, without those modifications being explicitly programmed.\cite{ML-def}
In physics, ML has been applied to problems at a range of scales,
from galaxy clusters\cite{astro-ml} to elementary particles\cite{particle_phys-ml}.
ML is beginning to be applied in condensed matter physics,
for example in quantum many body problems\cite{troyer-ml}, electronic quantum transport\cite{lopez-ml}, glassy dynamics\cite{schoenholz-ml}, phase transitions\cite{wang-ml,melko-ml}, renormalization group\cite{ML-prec4,ML-prec5}, and big data issues of materials science\cite{kusne-ml,ghiringhelli-ml,kalinin-ml}.
Arsenault {\em et al.} have used ML to address problems in many-body physics such as the Anderson impurity model\cite{millis-prb}
and dynamical mean-field theory.\cite{millis-arxiv}
Carleo and Troyer used ML to study the wavefunction of quantum many-body
interacting spin systems.\cite{troyer-ml}
Wang showed that an unsupervised learning algorithm could ``discover''
that the order parameter and structure factor change significantly as the temperature
is varied through the phase transition.\cite{wang-ml}
Carrasquilla and Melko have shown that for a few different models, supervised learning
can distinguish the ordered from the disordered phase.\cite{melko-ml}
However, no one has yet tasked ML with identifying
which underlying Hamiltonian is actually responsible for the phase transition.
We show in this paper that ML can be used to identify which model
was used to generate a particular spin configuration, focusing on configurations
that are close to criticality. We focus on near-critical configurations, since
these are most relevant to interpreting the multiscale pattern formation observed
in many spatially resolved experiments.\cite{basov-science,kohsaka-science-2007,phillabaum-2012,shuo-vo2,Dagotto,
dagotto-moreo-manganite}
This type of identification can reveal information
such as which interactions are important,
whether quenched disorder is a relevant term in the Hamiltonian,
and how many dimensions are involved in the phenomenon
(to determine, for example, whether observed patterns in data
are driven by surface physics, or from the bulk of the material).
In this paper, we are interested in whether ML can
classify microscopy images according to the underlying physics
driving the complex pattern formation. To be more specific, we want
to know whether ML can capture the universal
properties and critical behavior implied by the image patterns, and
then identify which model generated the image.
In the present work we
limit ourselves to three theoretical models: the two-dimensional (2D) clean Ising model on a square lattice (Fig.~\ref{ising2D});
the three-dimensional (3D) clean Ising model on a cubic lattice (Fig.~\ref{ising3D});
and the square lattice site percolation model (Fig.~\ref{percolation}).
The Ising models may be written as:
\begin{equation}
H=- J \sum_{<i j>} \sigma_i \sigma_j
\end{equation}
where $\sigma_i = \pm 1$ and $J$ is the coupling strength between nearest neighbor sites,
and the summation runs over the sites of either a two-dimensional square lattice
or a three-dimensional cubic lattice.
Ising models were first used to describe magnetic transitions,
from a ferromagnetically ordered phase at low temperature $T < T_c$,
to a paramagnetic, disordered phase
above the magnetic ordering temperature $T_c$,\cite{fisher-rmp,stanley-book}
in systems where magnetic moments are constrained to point
either ``up'' or ``down.''
However, the model can be applied to a wide variety of physical systems.
For example, the behavior of the critical endpoint of the
liquid-gas phase transition is well described by the criticality
of the three-dimensional Ising model.
Electrons inside of solids have their own phase transitions, and
when comparing to experiments on these systems,
$\sigma$ may be viewed as a generalized pseudospin in the
context of two-component electronic behavior.
This may be mapped, for example, to the two perpendicular orientations of nematic stripes in cuprate superconductors,\cite{phillabaum-2012}
or to the metal and insulator islands in VO$_2$.\cite{shuo-vo2}
In the absence of interactions, pseudospins may be said to follow a percolation model,
which is a non-interacting model.
Therefore we also consider site percolation on a square lattice, assigning a probability $p$
to having pseudospin $\sigma =1$ on any given site, otherwise
the pseudospin is assigned as $\sigma = -1$.
In two dimensions, this model has a continuous phase transition (and therefore
exhibits criticality) at $p_c = 0.59$.
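A minimal sketch of generating site-percolation configurations and testing for a top-to-bottom spanning cluster (our illustration; the lattice size and occupation probabilities are arbitrary) reads:

```python
import random
from collections import deque

def spans(grid):
    """True if occupied sites connect the top row to the bottom row
    (4-neighbor connectivity)."""
    L = len(grid)
    seen = {(0, j) for j in range(L) if grid[0][j]}
    queue = deque(seen)
    while queue:
        i, j = queue.popleft()
        if i == L - 1:
            return True
        for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if (0 <= nb[0] < L and 0 <= nb[1] < L
                    and grid[nb[0]][nb[1]] and nb not in seen):
                seen.add(nb)
                queue.append(nb)
    return False

rng = random.Random(0)
L = 50

def config(p):
    """Site-percolation configuration: each site occupied with probability p."""
    return [[rng.random() < p for _ in range(L)] for _ in range(L)]

no_span = spans(config(0.2))     # far below p_c ~ 0.59
does_span = spans(config(0.9))   # far above p_c
```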
We aim to explore
the efficiency of ML for capturing features associated with
the corresponding universality class, including interactions, disorder, and dimension.
Other universal features, such as the type of random disorder in the system, and the symmetry
of the order parameter, are left for future work.
\begin{figure}
\centerline{\includegraphics[width=.95\columnwidth]{ising2D.eps}}
\caption{Ising 2D images. Row 1: $T\textless T_c$, row 2: $T\approx T_c=2.269$, row 3: $T\textgreater T_c$. Temperatures are in units of J, which is the coupling strength between Ising variables.
Black and white pixels represent Ising variables $\sigma = +1$ and $-1$ respectively.
}
\label{ising2D}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=.95\columnwidth]{ising3D.eps}}
\caption{Ising 3D images. Row 1: $T\textless T_c$, row 2: $T\approx T_c=4.512$, row 3: $T\textgreater T_c$. Temperatures are in units of J, which is the coupling strength between Ising variables.
Black and white pixels represent Ising variables $\sigma = +1$ and $-1$ respectively.
}
\label{ising3D}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=.95\columnwidth]{percolation.eps}}
\caption{Percolation images. Row 1: $p\textless p_c$, row 2: $p\approx p_c$, row 3: $p\textgreater p_c$. On a square lattice, the site percolation threshold is $p_c = 0.59$.
Black and white pixels represent variables $\sigma = +1$ and $-1$ respectively.
}
\label{percolation}
\end{figure}
\section{Methods}
With ML, a software program undergoes significant changes based on new input,
without those changes being explicitly hard-coded by the programmer.
Rather, training algorithms contained within the program train
the neural network as new input is received.\cite{ML-def}
Whereas human visual pattern classification and recognition can be viewed as a qualitative process, ML turns this into an explicitly quantitative process, albeit within some margin of error.
In our case, we have used a scaled conjugate gradient (SCG) algorithm\cite{moller-scg}
under supervised learning conditions
to train a neural network algorithm
using the MATLAB Neural Network Toolbox through XSEDE\cite{XSEDE}.
With supervised learning, the program is presented with a training set of data for which
the ``right answer'' is also supplied to the training algorithm.
The point is to develop an algorithm that gives the correct output for a given input.
In our case, the goal will be to identify the underlying interacting physics model (the output) from which
a particular Ising spin configuration was generated (the input).
In order to accomplish this, we have used an artificial neural network with a single hidden layer containing 200 ``neurons''.
Each neuron consists of multiple inputs, a nonlinear activation function, and a single output.
We use a hyperbolic tangent activation function, which results in
outputs between -1 and 1.
The network is defined by its topology (how the neurons are interconnected) and the weights associated with each connection.
We use a feedforward neural network, which
can be represented by a directed acyclic graph where each edge has a weight and each node has a bias.
A very small example of a neural network topology is illustrated in Fig.~\ref{fig-dac}. The input layer consists
of the black and white image itself, where each circle represents one pixel.
In the hidden layer, each circle represents a single neuron, with multiple inputs,
which turns on according to some nonlinear function of its input weights.
The output layer in our case has three nodes, one for
each of the three models which may generate a spatially complex image
near criticality.
The goal of neural network training is to optimize the weights and biases of the network,
by minimizing a loss function.
In the present case, we use cross entropy as the loss function.
The training proceeds towards a point where the loss function is at a minimum.
We have used the scaled conjugate gradient method\cite{moller-scg} to
minimize the loss function.
We furthermore use backpropagation of errors, a type of automatic differentiation in which
the derivatives associated with the chain rule are computed in reverse order ({\em i.e.} working from
the topmost dependent variable down through to the independent variables).
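The forward pass and loss of such a network can be sketched as follows (toy dimensions for readability; the actual network has $10^4$ inputs, $200$ hidden neurons, and $3$ outputs, and is trained with SCG, which is not reproduced here):

```python
import math
import random

def forward(x, W1, b1, W2, b2):
    """Single-hidden-layer network: tanh hidden units, softmax outputs."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    z = [sum(w * hi for w, hi in zip(row, h)) + b for row, b in zip(W2, b2)]
    zmax = max(z)                              # shift for numerical stability
    e = [math.exp(zi - zmax) for zi in z]
    s = sum(e)
    return [ei / s for ei in e]

def cross_entropy(p, label):
    """Loss for a single example whose true class index is `label`."""
    return -math.log(p[label])

rng = random.Random(1)
n_in, n_hidden, n_out = 4, 3, 3                # toy sizes; the text uses 10^4 / 200 / 3
W1 = [[rng.gauss(0.0, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
b1 = [0.0] * n_hidden
W2 = [[rng.gauss(0.0, 0.5) for _ in range(n_hidden)] for _ in range(n_out)]
b2 = [0.0] * n_out

p = forward([1.0, -1.0, 1.0, 1.0], W1, b1, W2, b2)
loss = cross_entropy(p, 0)
```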
\begin{table}[htb]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
Neurons & 50 & 75 & 100 & 125 & 150 & 200 & 250 \\
\hline
\% Accuracy & 94.93 & 95.53 & 95.95 & 96.48 & 96.68 & 97.00 & 96.72 \\
\hline
\end{tabular}
\caption{Classification accuracy for a given number of hidden neurons.
Accuracy initially increases with increasing number of hidden neurons.}
\label{tab:neurons}
\end{table}
We use the 2D percolation model and Monte Carlo simulations of the 2D clean Ising model and 3D clean Ising model to generate images to train the neural network. For the 2D models, the neural network is directly fed spin configurations near criticality for training purposes. For the 3D model, the neural network is fed spin configurations from a 2D {\em slice} of the 3D lattice. This is to mimic surface probe experiments, which have access to only 2D information.
The goal is for the neural network to learn to identify which model generated which images.
The ``interesting'' cases occur near criticality, where geometric clusters (as defined by connected sets of like-spin nearest neighbors)
grow in such a way as to have structure over multiple length scales.
Near criticality, systems experience fluctuations over all length scales from
atomic to the size of the system. Because there is no characteristic length scale in between,
critical systems are described by power law behavior,
where the exponent of each power law is called a ``critical exponent.''
At any second order phase transition, the unique set of critical exponents
acts like a fingerprint to identify the universality class of the phase transition,
which is set by universal features such as the dimension of the lattice and the symmetry of the order parameter
($Z_2$ in the case of Ising variables $\sigma = \pm 1$), and independent of
``non-universal'' short-distance physics like the inclusion of 2nd-nearest-neighbor interactions.\cite{fisher-rmp,stanley-book}
Ultimately, this behavior arises because the geometric clusters have a fractal character near criticality,
whereby each cluster is scale-free.
In these cases, we have shown\cite{phillabaum-2012,superstripes-erice-2014,shuo-vo2} that the statistics of the shapes of the clusters in the images encodes the universality class of the underlying model.
The hardest cases to judge by eye are those with roughly equal numbers of up and down spins.
For this reason, we focus attention within 4\% of the critical temperature of the Ising models
(since magnetization rapidly develops inside the ordered region), and within 15\% of the critical
concentration of the percolation model (because equal numbers of up and down spins in that case
corresponds to $p=0.5$, which is close to, but not at, the critical value of $p_c = 0.59$ for site
percolation on a square lattice).
Note that net magnetization is not a trivial discriminator of these models near criticality.
Within these parameter ranges, each model covers states with no net magnetization
as well as those with net magnetization in the thermodynamic (infinite size) limit.
Even more importantly, for the same nominal set of model parameters,
thermal fluctuations can cause a net magnetization to appear in a finite size system
for $T > T_c$.
It is especially important to distinguish whether interactions are present,
in order to determine in any given surface probe experiment whether interactions
are responsible for the observed pattern formation, in order to distinguish it from
putative ``dirt effects'' which may arise at the surface of a material.
Because the Ising models are near criticality, we use the Wolff algorithm, and sample spin configurations every 100 Monte Carlo steps to ensure independent sampling.
We simulate system sizes of $L^2 = 100^2$ for the 2D case, and $L^3 = 100^3$ for the 3D case.
(For comparison, experimental images from surface probes in condensed matter physics today are typically about
$200 \times 200$ pixels.\cite{basov-rmp})
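A minimal Wolff-cluster sketch for the 2D Ising model (our illustration; the lattice size, temperature, and update count are toy values, not the production settings described above) is:

```python
import math
import random

def wolff_update(spins, L, T, rng):
    """One Wolff cluster flip on an L x L periodic square lattice (J = 1)."""
    p_add = 1.0 - math.exp(-2.0 / T)             # bond-activation probability
    seed = (rng.randrange(L), rng.randrange(L))
    s0 = spins[seed]
    cluster, stack = {seed}, [seed]
    while stack:
        i, j = stack.pop()
        for nb in (((i + 1) % L, j), ((i - 1) % L, j),
                   (i, (j + 1) % L), (i, (j - 1) % L)):
            if nb not in cluster and spins[nb] == s0 and rng.random() < p_add:
                cluster.add(nb)
                stack.append(nb)
    for site in cluster:
        spins[site] = -s0

rng = random.Random(7)
L, T = 16, 1.5                                   # T < Tc ~ 2.269: ordered phase
spins = {(i, j): rng.choice((-1, 1)) for i in range(L) for j in range(L)}
for _ in range(500):
    wolff_update(spins, L, T, rng)
m = abs(sum(spins.values())) / L ** 2            # magnetization per site
```

Below $T_c$ the sampled configuration is strongly magnetized, as expected.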
The images passed to the neural network consist of $100 \times 100 = 10,000$ pixels, called ``features'' in the context of ML;
the number of training examples given to the neural network must be significantly greater than the number of features.
We use 400,000 configurations from each model, for a total of 1,200,000 configurations.
These configurations are then divided randomly, using 70\% for training,
15\% for validation, and 15\% for testing.
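The random partition can be sketched as follows (the 70/15/15 split and the total of 1,200,000 configurations follow the text; the helper function and seed are our own):

```python
import random

def split_indices(n, fractions=(0.70, 0.15, 0.15), seed=0):
    """Shuffle sample indices and split into train / validation / test."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = round(fractions[0] * n)
    n_val = round(fractions[1] * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_indices(1_200_000)
```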
\begin{figure}
\centerline{\includegraphics[width=.8\columnwidth]{network.eps}}
\caption{
Schematic of a small artificial neural network. In this case there are 4 features in the input data (for example pixels in a 2x2 image), 3 hidden neurons, and 2 outputs (for example cat/dog, or percolation/Ising).
}
\label{fig-dac}
\end{figure}
\section{Results and Discussion}
We find that the neural network, once trained to identify which model produced which spin configurations,
achieves an average classification accuracy
of about 97\%, once the number of hidden neurons is at least 150.
The dependence of the classification accuracy on the number of hidden neurons is shown in Table~\ref{tab:neurons}.
The error bars are calculated from the case with 200 neurons, for which multiple training runs were conducted. The classification error drops from around 5$\%$ at 50 neurons to around 3$\%$ at 150 neurons. Results between 150 and 250 neurons are very similar, and adding more than 150 neurons does not help much, although there does appear to be a minimum in the error at 200 neurons.
The case study presented in Table~\ref{tab:error-matrix} shows how errors were distributed among the models. The rows represent the models that images belong to and columns represent the models that images were classified as. The most misclassified model is the 3D Ising model.
It was particularly hard for the neural network to distinguish between the 2D and 3D Ising models.
Ising 2D and percolation are the most easily distinguished with almost negligible errors.
We have shown that ML can be used to classify configuration images into their associated universality classes, which means that it identifies implicit universal properties underlying the complex pattern formation of image configurations near a critical point. The conventional approach to identifying a universality class is to first explicitly fit the experimental data in order to find the critical exponents, then match that set of critical exponents to a known universality class.
However, the approach developed here does not depend upon critical exponents. Instead, we feed image data and universality class labels to the machine learning model and let the ML algorithm develop its own internal parameters in the neural network. We have not programmed the model to include any theories from physics; rather, the model learns directly from the data and captures the universal features in the configuration images to identify the
universality class.
Thus, we have shown that it is possible to go directly from data to an identification of the universality class of that data, without explicitly considering any critical exponents or scaling theory.
This finding broadens our understanding of the extent to which ML can be applied,
and thus has important implications in the field of ML and computer vision where
the image classification problem is mainly focused on explicit object recognition,
rather than discovering underlying physics. We also expect
that machine learning can be used to study and even learn more complex physics given large
enough datasets. This synergy between machine learning and physics opens a new perspective
for future research in condensed matter physics, or even in all fields of physics, that we can
draw important conclusions via ML, {\em without the need to interpret the intermediate steps} of the ML algorithm.
\begin{table}
\begin{center}
\begin{tabular}{cc|c|c|c|}
\cline{3-5}
& & \multicolumn{3}{ c| }{Neural network output} \\
\cline{3-5}
& & perc & I-2D & I-3D \\
\hline
\multicolumn{1}{ |c }{\multirow{3}{*}{Input model}} & \multicolumn{1}{ |c| }{perc} & 98.05\% & 0.01\% & 1.94\% \\
\cline{2-5}
\multicolumn{1}{ |c }{} &\multicolumn{1}{ |c| }{ I-2D} & 0.03\% & 98.30\% & 1.67\% \\
\cline{2-5}
\multicolumn{1}{ |c }{} & \multicolumn{1}{ |c| }{I-3D} & 1.82\% & 2.96\% & 95.22\% \\
\hline
\end{tabular}
\caption{Classification results for testing data
for the case of 200 neurons in the hidden layer.
The model listed on each row denotes the Hamiltonian used to generate spin configurations to
present to the neural network. In each column, the output of the neural network is listed.
The goal is to train the neural network to identify which model produced which spin configurations.
}
\label{tab:error-matrix}
\end{center}
\end{table}
\section {Conclusion}
In conclusion, we have shown that machine learning can successfully determine
which spin configurations were generated from which theoretical model when the images are near criticality. As scanning probe experimental capabilities grow, they are producing a growing wealth of data, including more examples of systems in which the electronic textures display multiscale pattern formation at the surface of the material.\cite{basov-science,kohsaka-science-2007,V2O3,milan-iridates}
In cases where the pattern formation is driven by proximity to a critical point,\cite{phillabaum-2012,superstripes-erice-2014,shuo-vo2} the techniques employed here can be used to identify the underlying
physics
driving the pattern formation, without
the need to explicitly determine critical exponents or scaling forms.
\section{Acknowledgements}
LB acknowledges support from the SURF program through the College of Engineering at Purdue University.
SL acknowledges support from the Bilsland Dissertation Fellowship at Purdue University.
EWC and SL acknowledge support from NSF DMR-1508236 and
Dept. of Education Grant No. P116F140459.
This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575.
|
\section{\label{sec:Introduction}Introduction}
Neutron-induced reactions play an essential role in nucleosynthesis, advanced nuclear reactor design, and stockpile stewardship. Future nuclear reactors may use fast neutrons, which would allow for more energy to be extracted from the fuel, and reduce the lifetime of nuclear waste. Since the neutron spectrum in fast reactors is different than in light water reactors, greater precision neutron-induced cross section data at higher energies is required~\cite{Aliberti2006ANE,Aliberti2008NDS,Salvatores2008NDS}.
Cross section data is typically fit with nuclear reaction theory models such as EMPIRE~\cite{Herman2007NDS}, TALYS~\cite{Koning2007NDST}, and GNASH~\cite{Young1997LA}, which allow for determination of quantities such as fission barrier heights, nuclear level densities, and fission fragment anisotropy in the compound nucleus. The normalized cross section ratio measured in this work can be used to better understand these nuclear properties.
The fission Time Projection Chamber (fissionTPC) is a two-volume MICROMEGAS detector designed and built by the NIFFTE (Neutron-Induced Fission Fragment Tracking Experiment) Collaboration to measure neutron-induced fission cross sections with high precision~\cite{Heffner2014NIMA}. Ionization tracks deposited by fission fragments, $\alpha$-particles, and recoils from neutron-scattering are drifted across each chamber, and an electron avalanche multiplies the charge before collection on a pixelated pad plane. Full three-dimensional track reconstruction is used for particle identification and determination of the fission fragment detection efficiency and related systematic uncertainties.
Past neutron-induced fission cross section measurements have used parallel-plate ionization chambers~\cite{Wender1993NIMA}, which include stacks of
foils separated by a distance smaller than the typical particle range. Light ions such as $\alpha$-particles have a longer range and much smaller stopping power than fission fragments, and deposit very little energy in the space between foils. Fission fragments have much higher stopping power between the foils, and can usually be distinguished from $\alpha$-particles. Twin Frisch-grid ionization chambers~\cite{Salvador2015PRC1,Salvador2015PRC2} allow for an inference of the charged-particle track angle, which provides additional information for determination of the fission fragment detection efficiency.
The fissionTPC has the additional capability of full three-dimensional track reconstruction, which provides the particle's origin, energy, angle, length, and ionization profile. In addition to providing more information from
which to determine detection efficiency, these quantities also allow in situ measurement of the target atom density and neutron beam flux.
\begin{figure*}[t!]
\includegraphics[scale=0.9]{figures/FullU8U5Ratio}
\caption{\label{fig:FullU8U5Ratio} (Color online) Past measurements of the $^{238}$U(n,f)/$^{235}$U(n,f) cross section ratio shown between $0.5 - 30$~MeV~\cite{White1967JNE,Stein1968CP,Meadows1972NSE,Meadows1975NSE,Meadows1975PC,Nordborg1976CP,Evans1976R,Behrens1977NSE,Fursov1977AE,Varnagy1982NIM,Androsenko1983CP,Manabe1988FC,Meadows1988ANE,Jingwen1989CJNP,Lisowski1991CP,Shcherbakov2002JNST,Tovesson2014NSE}. Data is compared to the ENDF/B-VIII.$\beta$5~\cite{ENDFB8B5} evaluation, shown with the evaluated uncertainty.
The uncertainty at $20$~MeV, the maximum value at which an uncertainty is given, is used for energies greater than that value.
An expanded view of the data is shown in the inset, compared to the ENDF/B-VII.1 \cite{Chadwick2011NDS} and ENDF/B-VIII.$\beta$5 evaluations, indicating a recent 40\% change to the $^{238}$U(n,f) cross section at $1.2$~MeV.}
\end{figure*}
The NIFFTE Collaboration aims to measure the $^{239}$Pu(n,f)/$^{235}$U(n,f) cross section ratio to a total uncertainty of $<$1\% using the fissionTPC. Previous measurements have reported uncertainties of a similar magnitude~\cite{Staples1998NSE}, but the scatter amongst these suggests that one or more systematic uncertainties may have been unrecognized or underestimated. The additional information provided by the fissionTPC enables an independent measurement intended to resolve these discrepancies and improve the quality and reliability of the derived nuclear data. Cross section measurements with $^{239}$Pu targets are more challenging than many other actinides, since the short $^{239}$Pu half-life ($24,110$~years) results in high $\alpha$-particle activity that can lead to significant event pile-up.
The normalized $^{238}$U(n,f)/$^{235}$U(n,f) cross section ratio presented here has been measured with the fissionTPC in order to demonstrate the measurement technique using this new instrument and quantify sources of systematic uncertainty without the presence of a large $\alpha$-decay background.
This work presents the energy dependence of the neutron-induced cross section ratio normalized to the ENDF/B-VIII.$\beta$5 evaluation~\cite{ENDFB8B5} at $14.5$~MeV. Calculation of an absolute normalization was not possible in this work due to the large uncertainties in the neutron beam flux introduced by the chosen target geometry, as described in Section~\ref{sec:Discussion}.
The $^{238}$U(n,f)/$^{235}$U(n,f) cross section ratio is a valuable reference, since the $^{238}$U(n,f) cross section is a standard used in neutron flux measurements \cite{Chadwick2011NDS}. Errors in this ratio can therefore produce correlated errors between different nuclear data sets. A comparison of past data~\cite{White1967JNE,Stein1968CP,Meadows1972NSE,Meadows1975NSE,Meadows1975PC,Nordborg1976CP,Evans1976R,Behrens1977NSE,Fursov1977AE,Varnagy1982NIM,Androsenko1983CP,Manabe1988FC,Meadows1988ANE,Jingwen1989CJNP,Lisowski1991CP,Shcherbakov2002JNST,Tovesson2014NSE} to the ENDF/B-VIII.$\beta$5 evaluation is displayed in Fig.~\ref{fig:FullU8U5Ratio} along with the evaluated uncertainty.
A change was recently made to the $^{238}$U(n,f) cross section evaluation at neutron energies of $\sim1.2$~MeV, as reflected in a comparison of the ENDF/B-VII.1~\cite{Chadwick2011NDS} and ENDF/B-VIII.$\beta$5 evaluations (Fig. \ref{fig:FullU8U5Ratio}, inset). The 40\% change in the $^{238}$U(n,f) cross section brings the evaluation closer to recent measurements~\cite{Lisowski1991CP,Shcherbakov2002JNST,Tovesson2014NSE}, but other measurements are scattered between the two evaluations. The cross section ratio measurement reported here provides additional input for evaluation of the $^{238}$U(n,f) standard.
The threshold energy for $^{238}$U(n,f) is $\sim1.2$~MeV, and the $^{238}$U(n,f)/$^{235}$U(n,f) cross section ratio drops dramatically below that energy. The energy range $0.5 - 30$~MeV was chosen
because many past measurements begin at this same lower bound and because the measured ratio uncertainty becomes larger than the ratio itself at this energy. The upper bound was chosen because the primary applications of this work do not require measurement at higher energy, because $30$~MeV is the maximum energy reported in ENDF/B-VIII.$\beta$5 for the $^{235}$U(n,f) and $^{238}$U(n,f) cross sections, and because wrap-around corrections grow at larger neutron energies.
The following sections of this paper describe the $^{238}$U(n,f)/$^{235}$U(n,f) cross section ratio measurement using the fissionTPC. Section \ref{sec:Experiment} describes the experimental conditions of the measurement, including the detector, beam properties, and data acquisition. Section \ref{sec:Data} provides details of the methods used to extract particle information from the recorded data. In Section \ref{sec:Ratio}, these quantities are combined with a Monte Carlo based efficiency model to generate the cross section ratio, as well as the ratio covariance matrix as a function of neutron energy.
\section{\label{sec:Experiment}Experiment}
The $^{238}$U(n,f)/$^{235}$U(n,f) cross section ratio was determined by measuring fission fragments from half-disk targets of $^{238}$U and $^{235}$U on a thin 100 $\mu$g/cm$^2$ carbon backing. The detector was operated on the 90L beam line of the Weapons Neutron Research (WNR) facility at the Los Alamos Neutron Science Center (LANSCE)~\cite{Lisowski2006NIMA}, where an 800-MeV proton accelerator provided $125$~ps micropulses spaced $\sim$1.8 $\mu$s apart. There were 348 micropulses per macropulse, and 100 macropulses per second delivered to the unmoderated tungsten WNR neutron production target. The energy of fast neutrons produced via spallation was determined via neutron time-of-flight (nToF). A steel pipe with a 2 cm inner diameter collimated the neutron beam.
\subsection{\label{sec:FissionTPC}FissionTPC}
The fissionTPC is a two-volume MICROMEGAS TPC, operated using a mixture of argon and isobutane, with an actinide target mounted to the central cathode~\cite{Heffner2014NIMA}. Ionization electrons produced by charged particle energy loss are drifted away from the cathode by an applied electric field, inducing a current signal in the cathode.
Collection of ionization charge on a two-dimensional array of readout pads allows $x-y$ reconstruction of interaction positions, while the relative charge arrival time provides the position along the $z$-axis (neutron beam direction).
To reduce the readout time and lower the event multiplicity, the $5.4$~cm drift length of the device is significantly smaller than that typical for TPCs
used for high-energy physics experiments.
Having the actinide targets deposited on a thin carbon backing enabled fission fragments to travel into either volume, allowing measurement of both fragments and increasing the magnitude of the induced cathode signal.
The fissionTPC drift gas composition (high-purity argon and 5\% isobutane) was chosen because it proved to be resistant to discharges in the MICROMEGAS when operating in a neutron beam. The operating pressure of $550$~Torr (73.3 kPa) was selected such that spontaneous decay $\alpha$-particles and fission fragment tracks were fully contained within the active area of the detector volume. At this pressure, a local maximum in the drift velocity would be achieved at an applied drift field of about $200$~V/cm. It is typical to operate TPCs close to this maximum to reduce sensitivity to temperature and pressure fluctuations. However, the ionization charge density produced by fission fragments is significantly greater than is generally observed in light-ion TPCs, and trapping by the large ion space charge was observed to retard the drift of a significant fraction of the ionization electrons. These trapped electrons resulted in a large charge tail, which complicated tracking and biased the detected track angle.
By operating at an increased drift field of
$520$~V/cm this effect was significantly reduced, at the cost of slower drift times and greater potential instability in the drift velocity.
The MICROMEGAS gain stage at the anode includes a thin mesh separated from the pad plane by $75$~$\mu$m. The 28 kV/cm electric field in this region is significantly higher than in the drift region, resulting in an avalanche that produces a signal gain of 34 at the pad plane. The pad plane consists of hexagonal pads of $2$~mm pitch.
\subsection{\label{sec:ActinideTarget}Actinide Target}
The target consists of two half-disks of the actinides $^{235}$U and $^{238}$U formed on a thin carbon backing by vacuum deposition~\cite{Loveland2009JRNC}.
The activity of both long-lived isotopes can be measured using an autoradiograph, i.e. direct in situ counting of their respective $\alpha$-decay rates (Section~\ref{sec:TargetIsotopics}).
This procedure is complicated by the presence of shorter half-life uranium isotopes, but the relative amounts of these species can be determined by analysis of $\alpha$-particle track length distributions.
The $^{238}$U deposit includes measurable $^{235}$U contamination, which is corrected for in the final ratio analysis.
\subsection{\label{sec:DataAcquisition}Data Acquisition}
Each of the $5952$~pads included in the fissionTPC is recorded by a $50$~MHz digitizer~\cite{Heffner2013IEEE}. These are arranged in $192$~EtherDAQ cards of $32$~channels each~\cite{Heffner2013IEEE,Heffner2014NIMA}. When a digitizer channel exceeds a specified threshold, event recording commences and does not terminate until the signal falls below threshold. The cathode signal was recorded by a $1$~GHz digitizer.
\section{\label{sec:Data}Data Analysis}
Analysis of the fissionTPC $^{238}$U(n,f)/$^{235}$U(n,f) data set involves many steps. Track reconstruction is performed on voxels generated from pad-plane signals to determine quantities such as energy, length, and direction.
The cathode signal is analyzed to determine neutron time-of-flight (nToF) values, which can be converted into neutron energy. The target isotopics and overall activity must be determined in order to normalize the cross section ratio and correct for all actinide species present. A beam-target correction must be generated by examining the spatial overlap of the neutron beam with the target actinide density. Finally, a wrap-around correction is needed to remove contributions from low-energy neutrons.
The following sections describe how each of these quantities and corrections is determined.
\subsection{\label{sec:TrackReconstruction}Track Reconstruction}
\begin{figure}[h]
\includegraphics[scale=0.7]{figures/FissionVisualization}
\caption{\label{fig:FissionVisualization} (Color online) Visualization of a fission event in the fissionTPC. The thin target allows for both fission fragments to be detected, one in each chamber. The gray disk represents the target holder, and has a 4 cm diameter. The two fission fragments have a common start vertex, but are displaced in the z-direction to force all voxels of charge into their respective volumes.}
\end{figure}
\begin{figure}[h]
\includegraphics[scale=1.2]{figures/CarbonBreakup}
\caption{\label{fig:CarbonBreakup} (Color online) Visualization of a spallation event in the fissionTPC. Several light ions can be seen in a single volume all with a common vertex. The fissionTPC is capable of tracking each particle separately.}
\end{figure}
The first step in the data analysis is to reconstruct the charge clouds recorded by the pad-plane digitizers.
Since the EtherDAQ front-end performs a charge integration, digitizer waveforms are differentiated using a discrete filter to generate voxels of charge, yielding a three-dimensional representation of the charge cloud detected in the event.
Fission fragments, $\alpha$-particles, recoil protons, and recoil argon and carbon ions can all occur during the same event, even when separated by a significant distance.
For example, Fig.~\ref{fig:FissionVisualization} displays a fission event reconstruction with a fragment in each volume. In addition, Fig.~\ref{fig:CarbonBreakup} shows several light ions produced from a spallation event sharing a common vertex.
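The voxel-generation step above (differentiating the integrated EtherDAQ waveforms and keeping above-threshold samples as charge voxels) can be sketched as follows. This is an illustrative reconstruction, not the NIFFTE code; the function name, interface, and the drift-speed constant are assumptions.

```python
import numpy as np

def waveform_to_voxels(integrated, pad_xy, threshold=5.0,
                       drift_speed_cm_per_sample=0.02):
    """Differentiate an integrated pad waveform and keep above-threshold
    samples as charge voxels (x, y, z, q). Parameter names and the
    drift-speed value are illustrative, not the NIFFTE calibration."""
    # Discrete differentiation recovers instantaneous charge per sample
    dq = np.diff(integrated, prepend=integrated[0])
    idx = np.nonzero(dq > threshold)[0]
    x, y = pad_xy
    # Time-sample index maps to z position along the drift (beam) axis
    z = idx * drift_speed_cm_per_sample
    return np.column_stack([np.full_like(z, x, dtype=float),
                            np.full_like(z, y, dtype=float),
                            z,
                            dq[idx]])
```

Collecting such voxels over all pads of an event yields the three-dimensional charge cloud that the trackers operate on.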
After the distribution of voxels is generated, tracking algorithms separate individual particles. The primary tracker used in this work separates non-contiguous charge clouds with an adjacency check. To increase efficiency, the strips of voxels produced by single pads are combined into columns, before merging with adjacent columns. The benefits of this tracker are that it is simple, efficient, and it properly handles most multi-particle events. After separating the charge clouds, a track fitter is used to find the track start vertex, end vertex, orientation, and energy.
The track fitting algorithm begins with the assumption that the particle passes through the center of charge of the cloud, and then finds the axis that minimizes the distance-squared between the axis and each voxel of charge. The charge is then projected onto that track axis, the track start and end vertices are found by determining where the charge profile crosses a specific threshold, and then extrapolating back to zero charge. The threshold is set low enough to primarily be influenced by diffusion. When identifying the track start and end for fission fragments and $\alpha$-particles, the particles are assumed to travel away from the target plane. The track fitting threshold depends on diffusion and space-charge effects, and is tuned to the argon/isobutane mixture used for the experiment.
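The axis fit described above, an axis through the center of charge minimizing the charge-weighted squared distance to the voxels, is equivalent to a charge-weighted principal-component analysis. A minimal sketch, with illustrative names and without the threshold, diffusion, or space-charge handling:

```python
import numpy as np

def fit_track_axis(voxels):
    """Fit a straight track axis to charge voxels (N x 4 array: x, y, z, q).
    The best-fit axis is the principal eigenvector of the charge-weighted
    covariance matrix about the charge centroid. Illustrative sketch."""
    xyz, q = voxels[:, :3], voxels[:, 3]
    centroid = np.average(xyz, axis=0, weights=q)
    d = xyz - centroid
    cov = (d * q[:, None]).T @ d / q.sum()
    eigvals, eigvecs = np.linalg.eigh(cov)
    direction = eigvecs[:, -1]           # axis of largest charge spread
    # Charge positions projected onto the axis; thresholding this profile
    # would locate the start and end vertices
    s = d @ direction
    return centroid, direction, s
```

The projected profile `s` is what the threshold-crossing and zero-charge extrapolation steps would act on to define the track endpoints.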
The $x$-$y$ vertex pointing resolution is $280~\mu$m, as determined from the spatial distribution of the actinide deposit edge. The pointing resolution results in a halo around the target distribution, which can be seen in the fission fragment spatial distribution shown in Fig.~\ref{fig:TargetCuts}. At larger than 1 cm radius, a background with very low statistics can be seen, which is assumed to result from mis-tracked fragments. The charge clouds are constructed from data recorded by the anode pixels, and missing pixels bias the spatial profile of the target by lowering some tracks below the fission fragment detection threshold. Two adjacent missing pixels can be seen in Fig.~\ref{fig:TargetCuts}, and an $x>0$ cut is placed on the data to avoid this bias.
\begin{figure}[h]
\includegraphics[scale=0.43]{figures/TargetCuts}
\caption{\label{fig:TargetCuts} The $x-y$ spatial distribution of fission fragment vertices reconstructed by the fissionTPC. The upper half disk is the $^{238}$U deposit while the lower is the $^{235}$U deposit. Black lines indicate spatial selection cuts: the radial cut prevents backgrounds from actinide contamination on the cathode, while the cut bisecting the two deposits identifies which actinide has fissioned. Two adjacent missing pixels can be seen near the position (-0.5,-0.4), so only $x>0$ data is considered for the analysis.}
\end{figure}
The track fit quality is evaluated by calculating the charge fraction near the fit axis. Track fits of poor quality typically occur when charge from different particles have spatial overlap. A Hough transform tracking approach is used in such cases~\cite{Hough1959CP,Hough1962Patent}. The $x-y$, $y-z$, and $x-z$ projections of the three-dimensional charge cloud are analyzed. The line of highest charge density is iteratively removed from the event, with projections being repeated on each iteration. This has the benefit of cleanly selecting fission fragments, but can result in the splitting of $\alpha$-particles, protons, and recoil ion tracks. This algorithm is considerably more computationally intensive, and is only used when the initial tracker fails to produce a quality fit.
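A simplified, single-projection version of the charge-weighted Hough step above can be sketched as follows; the binning, weighting, and interface are illustrative, and the iterative removal of the strongest line is omitted.

```python
import numpy as np

def hough_peak(points, charges, n_theta=180, n_rho=100):
    """Locate the densest line in one 2-D projection of the charge cloud
    via a charge-weighted Hough transform: each point votes for all lines
    rho = x*cos(theta) + y*sin(theta) passing through it. Returns the
    (theta, rho) of the strongest line. Illustrative only."""
    x, y = points[:, 0], points[:, 1]
    rho_max = np.hypot(np.abs(x).max(), np.abs(y).max()) + 1e-9
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho))
    for (xi, yi), qi in zip(points, charges):
        rho = xi * np.cos(thetas) + yi * np.sin(thetas)  # one rho per theta
        bins = ((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[np.arange(n_theta), bins] += qi              # charge-weighted vote
    it, ir = np.unravel_index(acc.argmax(), acc.shape)
    rho_best = (ir / (n_rho - 1)) * 2 * rho_max - rho_max
    return thetas[it], rho_best
```

In the full procedure this would be applied to the $x$-$y$, $y$-$z$, and $x$-$z$ projections, removing the highest-density line from the event on each iteration.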
\begin{figure}[h]
\includegraphics[scale=0.43]{figures/EnergyLength}
\caption{\label{fig:EnergyLength} (Color online) Length vs. energy for particles observed in the fissionTPC with the LANSCE neutron beam impinging the device. ADC channel refers to the uncalibrated particle energy recorded by the digitizers. Different particles have unique stopping power profiles in the drift gas, and length/energy cuts can be used to isolate specific particle types. Labels have been added to the different particle distributions.
}
\end{figure}
Once the fit is complete, the track parameters length and energy can be used to select particles of different atomic mass and atomic number (Fig.~\ref{fig:EnergyLength}).
The large proton flux observed primarily results from $^{1}$H(n,el) scattering in the isobutane; $\alpha$-particles arise from carbon breakup and $\alpha$-decay, and recoil ions from neutron scattering on carbon and argon.
\subsection{\label{sec:NeutronToF}Neutron Time-of-Flight}
The neutron energy is determined by measuring nToF between the spallation and actinide targets.
An electromagnetic pickup signal provides the start timing reference, while detection of a fission on the fissionTPC cathode provides the stop signal.
Observation of photofission from $\gamma$-rays produced by spallation of the tungsten target allows for determination of the propagation delay of the beam between the pickup and the spallation target.
The accelerator micropulses are separated by $\sim$1.8 $\mu$s, and can be combined by accounting for this time difference.
The remaining unknown in the conversion of time to energy is the distance between the tungsten spallation target and actinide target in the fissionTPC. The nuclide $^{12}$C is known to have large neutron scattering resonances~\cite{Chadwick2011NDS}; insertion of carbon material (a ``carbon filter'') between the production and actinide targets creates notches in the measured nToF distribution at well-known energies.
The measured nToF, i.e. neutron energy, corresponding to the notch at $2.08$~MeV, in combination with the offset provided by the photo-fission feature, determines the distance between the two targets to be $8.059(3)$~m, where the primary source of uncertainty was event statistics.
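The conversion between nToF and neutron energy over the measured flight path follows the standard relativistic time-of-flight relation; a sketch, with an illustrative function name and interface:

```python
import numpy as np

M_N_C2 = 939.565   # neutron rest energy, MeV
C = 299.792458     # speed of light, m per microsecond

def ntof_to_energy(t_us, flight_path_m=8.059):
    """Convert neutron time of flight (microseconds) over the measured
    8.059 m flight path to kinetic energy in MeV, using the relativistic
    relation E = m_n c^2 (gamma - 1). Sketch of the standard conversion."""
    beta = flight_path_m / (C * t_us)
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return M_N_C2 * (gamma - 1.0)
```

For example, a neutron arriving $\sim$405~ns after the micropulse corresponds to an energy near the $2.08$~MeV carbon-filter notch used for the path-length calibration.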
\begin{figure}[h]
\includegraphics[scale=0.43]{figures/nToFShape}
\caption{\label{fig:nToFShape} (Color online) nToF distribution of the combined $^{235}$U and $^{238}$U targets. The inset shows a Gaussian fit to the nToF photo-fission distribution, yielding a timing resolution of $2.03(2)$~ns FWHM.
}
\end{figure}
Cathode signal timing is obtained by applying a digital moving-average filter~\cite{Jordanov1994NIMA} and interpolating the rising edge back to the zero-crossing.
The nToF resolution of $2.03(2)$~ns FWHM is determined by fitting the photo-fission feature with a Gaussian distribution on a flat background (Fig.~\ref{fig:nToFShape}).
The cathode efficiency relative to the anode for the two actinide deposits was found to be $\sim$99\% for events included in the cross section analysis, with this quantity largely canceling in the cross section ratio.
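The cathode timing extraction described above (moving-average smoothing followed by linear extrapolation of the rising edge back to zero) might be sketched as follows; the window length, edge fractions, and function name are illustrative assumptions.

```python
import numpy as np

def cathode_time(waveform, dt_ns=1.0, window=8, frac=(0.2, 0.8)):
    """Estimate the fission start time from a cathode waveform: smooth
    with a moving-average (boxcar) filter, pick two points on the rising
    edge, and linearly extrapolate to zero amplitude. Illustrative
    parameters, not the NIFFTE filter settings."""
    kernel = np.ones(window) / window
    smooth = np.convolve(waveform, kernel, mode="same")
    smooth -= smooth[:window].mean()                 # baseline subtraction
    peak = int(smooth.argmax())
    lo, hi = frac[0] * smooth[peak], frac[1] * smooth[peak]
    # Samples on the rising edge closest to the two fractional amplitudes
    i_lo = int(np.argmin(np.abs(smooth[:peak + 1] - lo)))
    i_hi = int(np.argmin(np.abs(smooth[:peak + 1] - hi)))
    # Straight line through the edge points, extrapolated to zero amplitude
    slope = (smooth[i_hi] - smooth[i_lo]) / ((i_hi - i_lo) * dt_ns)
    return i_lo * dt_ns - smooth[i_lo] / slope
```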
\subsection{\label{sec:TargetIsotopics}Target Isotopics}
\begin{figure}[h]
\includegraphics[scale=0.43]{figures/U8U5Isotopics}
\caption{\label{fig:UIsotopics} (Color online) Distribution of $\alpha$-particle track lengths emitted by the (a) $^{238}$U and (b) $^{235}$U targets. Track length is related to energy by the $\alpha$-particle stopping power.}
\end{figure}
To determine the cross section ratio, it is essential to identify the target atom number of each isotope and correct for any contaminants that could add to the fission fragment count.
The total $\alpha$-particle activity of each target can be determined by operating the fissionTPC with no incident neutron beam (an autoradiograph).
The contribution from individual isotopes can be identified through fitting of energy or, equivalently, length spectra using the known $\alpha$-particle lines of likely actinide constituents (Fig.~\ref{fig:UIsotopics}). The length distribution of $\alpha$-particle lines was found to have higher resolution than the particle energy distribution.
The width and energy scales each have linear calibrations with two free parameters. A skew term is added to describe energy straggling in the target, resulting in a peak shape that is the convolution of an exponential with a Gaussian. The peak areas for each isotope are additional free parameters.
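The peak shape just described, a Gaussian convolved with an exponential straggling tail toward lower values, has the closed form of an exponentially modified Gaussian. A sketch assuming this standard parameterization (not necessarily the exact form used in the fit):

```python
import math
import numpy as np

_erfc = np.vectorize(math.erfc)

def emg_peak(x, area, mu, sigma, tau):
    """Exponentially modified Gaussian with the exponential tail on the
    low side of the peak, as expected from alpha-particle energy loss in
    the deposit. area: counts under the peak; mu, sigma: Gaussian
    centroid and width; tau: tail decay constant. Illustrative model."""
    lam = 1.0 / tau
    # Mirror of the standard right-tailed EMG, so the tail extends to low x
    amp = 0.5 * lam * np.exp(0.5 * lam * (lam * sigma**2 - 2.0 * (mu - x)))
    return area * amp * _erfc((x - mu + lam * sigma**2)
                              / (math.sqrt(2.0) * sigma))
```

The spectrum model would be a sum of such peaks, one per $\alpha$ line, with the peak areas, the two linear calibration parameters, and the skew term as free parameters.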
Resulting isotopic abundances are given in Table \ref{tab:isotopics}.
The $^{238}$U target contains $0.57(10)$\%~$^{235}$U, which must be corrected for when calculating the fission cross section. $^{238}$U has a neutron-induced fission threshold of $\sim$1.2 MeV, and the $^{235}$U contaminant would appear in the cross section ratio as a flat, non-zero value below threshold, since the same $^{235}$U cross section then enters both the numerator and the denominator. The $^{235}$U target contains 0.25(4)\%~$^{236}$U, an amount that results in a small fission rate and is not corrected for here. The ratio of $^{235}$U atoms to $^{238}$U atoms in the respective targets was found to be $0.917(13)$.
The $^{233}$U and $^{234}$U contaminants in both targets have a negligible effect on the fission cross section ratio due to their small atomic fractions. The $\alpha$-decay activity from these isotopes is significant due to their short half-lives, and must be accounted for when determining the actinide density of the isotopes of interest: in the $^{235}$U target, the $^{235}$U was found to contribute $35\%$ of the total $\alpha$-decay activity, compared to $50\%$ $^{238}$U $\alpha$-decay activity for the $^{238}$U target.
\begin{table}
\caption{\label{tab:isotopics}Measured isotopic abundances in the two targets.}
\begin{ruledtabular}
\begin{tabular}{lcc}
Isotope & $^{238}$U Target (\%)& $^{235}$U Target (\%) \\
\hline
$^{233}$U & 0.0003(2) & 0.002(1) \\
$^{234}$U & 0.0046(4) & 0.060(2) \\
$^{235}$U & 0.57(10) & 99.69(4) \\
$^{236}$U & 0.005(3) & 0.25(4) \\
$^{238}$U & 99.4(1) & \\
\end{tabular}
\end{ruledtabular}
\end{table}
\subsection{\label{sec:TargetBeam}Target-Beam Correction}
The measured fission rate from each target is proportional to the overlap between the spatial distribution of the actinide deposit and the neutron flux. Recording the start vertex of protons produced by elastic scattering of neutrons on hydrogen in the isobutane gives a measure of the spatial profile of the neutron flux in the fissionTPC (Fig.~\ref{fig:BeamFlux}). In order to record full proton tracks, including the start vertex where the ionization density is lowest, it was necessary to periodically record a subset of data with increased MICROMEGAS gain. The neutron flux spatial profile was found to be static in time, which allows this subset of proton data to be applied to the longer fission measurement. The beam and target overlap is determined from the product of the normalized neutron distribution and the normalized actinide distribution (Fig. \ref{fig:BeamTarget}). The ability to measure the spatial dependence of the target and beam overlap in situ is a unique feature of the fissionTPC, providing the opportunity to study associated systematic uncertainties.
\begin{figure}[h]
\includegraphics[scale=0.43]{figures/BeamFlux}
\caption{\label{fig:BeamFlux} (Color online) Normalized distribution of recoil proton start vertices recorded during high-gain fissionTPC operation, representing the spatial distribution of neutrons incident on the actinide target. The distribution has been convolved with a Gaussian of $\sigma=0.3$~mm to reduce aliasing effects from ADC thresholds.
The black curves outline the regions of actinide target deposit used for the cross section ratio determination, where only $x>0$ is considered due to inactive pad plane pixels. The aliased shape of the outline represents the binning of the target deposit histogram.}
\end{figure}
\begin{figure}[h]
\includegraphics[scale=0.43]{figures/BeamTarget}
\caption{\label{fig:BeamTarget} (Color online) Normalized distributions of the beam flux and target activity for the $^{238}$U ((a) \& (b)) and $^{235}$U ((c) \& (d)) actinide deposits.
Bin-by-bin multiplication of these distributions determines the beam and target overlap. Only $x>0$ is considered due to inactive pad plane pixels.
}
\end{figure}
\subsection{\label{sec:WrapAround}Wrap-Around Correction}
\begin{figure}[h]
\includegraphics[scale=0.8]{figures/WrapExpand}
\caption{\label{fig:WrapExpand} (Color online) Determination of the wrap-around correction in the $^{235}$U data. The nToF data (green), averaged in the low-energy tail region (magenta), are fit to determine the wrap-around contribution (red line) to the nToF model (blue line). The nToF model consists of a logarithmic spline following the distribution of prompt neutron data.}
\end{figure}
\begin{figure}[h]
\includegraphics[scale=0.43]{figures/WrapCombined}
\caption{\label{fig:WrapCombined} (Color online) The nToF data and wrap-around correction after all micropulses are combined. The band around the red line represents the uncertainty of the wraparound fit, which was produced by propagating the fit covariances.}
\end{figure}
The LANSCE proton accelerator produces bunches spaced $\sim$1.8~$\mu$s apart, and low-energy neutrons from one bunch may carry over into later bunches. This results in low-energy contributions to the high-energy region of the time-of-flight distribution, which must be subtracted. The nToF distribution represents the product of the neutron flux with the fission cross section, converted from energy to time.
Without nToF measurements taken with a larger bunch spacing, this distribution can be difficult to determine.
The recorded data continues $\sim 70~\mu$s beyond the last micropulse, allowing the wrap-around contribution to be determined via a fitting procedure that must also account for contributions from previous micropulses.
A logarithmic spline was used to describe the wrap-around contribution, as illustrated for $^{235}$U data in Fig.~\ref{fig:WrapExpand},
since this was found to describe the low-energy tail of the nToF distribution well over large time scales.
Combining the micropulses produces a total nToF distribution (Fig.~\ref{fig:WrapCombined}). The fit parameter covariance matrix is used to generate Monte Carlo variations of the fit parameters, which are interpreted as the uncertainty band associated with the wrap-around fitting procedure.
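The Monte Carlo propagation of the fit covariance to an uncertainty band might look like the following generic sketch; the interface and the use of the per-bin standard deviation as the band are assumptions, not the paper's exact prescription.

```python
import numpy as np

def wraparound_band(fit_func, t, popt, pcov, n_mc=2000, seed=0):
    """Propagate the wrap-around fit covariance to an uncertainty band:
    draw parameter vectors from a multivariate normal defined by the fit
    result (popt, pcov), evaluate the model on the nToF grid t, and take
    the per-bin mean and standard deviation. Generic sketch."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(popt, pcov, size=n_mc)
    curves = np.array([fit_func(t, *p) for p in samples])
    return curves.mean(axis=0), curves.std(axis=0)
```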
\section{\label{sec:Ratio}Cross Section Ratio}
The cross section ratio measured here is defined by Eq. \ref{eq:CSRatio}, where $x$ refers to the unknown and $s$ refers to the standard actinide. In this case, $^{238}$U is considered the unknown and $^{235}$U the standard:
\begin{multline}
\frac{\sigma_x}{\sigma_s}=\frac{\epsilon^s_{ff}}{\epsilon^x_{ff}}\cdot\frac{\Phi_s}{\Phi_x}\cdot\frac{N_s}{N_x}\cdot\frac{\Sigma_{XY}(\phi_{s,XY}\cdot n_{s,XY})}{\Sigma_{XY}(\phi_{x,XY}\cdot n_{x,XY})}\cdot\frac{w_s}{w_x}\\
\cdot\left(\frac{(C^x_{ff}-C^x_{r}-C^x_{\alpha})-C^x_{bb}}{(C^s_{ff}-C^s_{r}-C^s_{\alpha})-C^s_{bb}}-G^{sx}_{ss}\right)
\label{eq:CSRatio}
\end{multline}
In this equation, $\epsilon_{ff}$ refers to fission fragment detection efficiency, which will be described in Section \ref{sec:Efficiency}.
${\Phi_s}/{\Phi_x}$ represents the neutron flux ratio, which was found to be $1.028(1)$ using the recoil-proton spatial profiles shown in Fig.~\ref{fig:BeamFlux}.
${N_s}/{N_x}$ is the number ratio of the two actinides, which was shown to be $0.917(13)$ in Section \ref{sec:TargetIsotopics}.
${\Sigma_{XY}(\phi_{s,XY}\cdot n_{s,XY})}/{\Sigma_{XY}(\phi_{x,XY}\cdot n_{x,XY})}$ is the beam and target overlap term, which was found to be $1.002(7)$.
${w_s}/{w_x}$ refers to the detector live time ratio, which is estimated to be unity.
The $C$ terms refer to events detected per neutron energy bin. $C_{ff}$ is the number of fission fragment counts in an energy bin after particle identification cuts are applied. $C_r$ is the estimated number of background recoil events misidentified as fission fragments. $C_\alpha$ is the estimated number of pile-up $\alpha$-particle events misidentified as fission fragments. $C_{bb}$ is the wrap-around correction factor, which is fit for both $^{235}$U and $^{238}$U. $G^{sx}_{ss}$ refers to the ratio of the number of atoms of isotope $s$ found in deposit $x$ to the number of atoms found in deposit $s$. This is a contaminant correction for the presence of $^{235}$U in the $^{238}$U target, and is found to be $0.63(10)\%$.
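Assembling the terms of Eq.~\ref{eq:CSRatio} per energy bin can be sketched as below; the argument names are illustrative, the background and wrap-around subtractions are assumed to have been applied to the count arrays already, and the values used in the analysis are those quoted in the text.

```python
import numpy as np

def cross_section_ratio(C_x, C_s, eff_x, eff_s, N_ratio, flux_ratio,
                        overlap_ratio, livetime_ratio, G_contam):
    """Assemble the cross section ratio per neutron-energy bin.
    C_x, C_s: background- and wrap-around-subtracted fission counts for
    the unknown (238U) and standard (235U); N_ratio, flux_ratio,
    overlap_ratio, livetime_ratio: the scalar s/x correction factors;
    G_contam: the 235U-in-238U contaminant term. Illustrative sketch."""
    counts_ratio = np.asarray(C_x, dtype=float) / np.asarray(C_s, dtype=float)
    return ((eff_s / eff_x) * flux_ratio * N_ratio * overlap_ratio
            * livetime_ratio * (counts_ratio - G_contam))
```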
Although Eq.~\ref{eq:CSRatio} is formulated to produce an absolute cross section ratio, the ratio reported here is normalized to the ENDF/B-VIII.$\beta$5 evaluation at 14.5 MeV.
An absolute normalization was not possible for this measurement due to a large normalization uncertainty ($\sim$10\%) resulting from the separated actinide deposits. Difficulties associated with mapping the recoil-proton flux shown in Fig.~\ref{fig:BeamFlux} to a neutron flux are assumed to be the cause.
Future work will use thick backed targets with back-to-back actinide deposits that will allow multiple neutron beam flux measurement methods. With a back-to-back target, any spatial flux variations are common to both targets.
Additionally, the neutron beam spatial distribution can be measured directly by dividing the fission distribution by the actinide density.
\subsection{\label{sec:FissionCuts}Fission Fragment Selection Cuts}
\begin{figure}[h]
\includegraphics[scale=0.43]{figures/PIDCuts}
\caption{\label{fig:PIDCuts} Selection cuts are applied to the energy vs. length distribution of detected particles. There are two static cuts which remove background particles, and one dynamic cut (varied within the range shown by vertical red lines) which is used to determine residual uncertainties from the efficiency correction. ADC channel refers to the uncalibrated particle energy recorded by the digitizers.
}
\end{figure}
The fission fragment detection efficiency term and all of the $C$ terms in Eq. \ref{eq:CSRatio} depend strongly on the particle identification (PID) cuts that are applied to the data set.
By definition, as different event selection criteria are applied to the fissionTPC data, $C_{ff}$ and $\epsilon_{ff}$ will change proportionately if the efficiency term is calculated correctly.
Visual representations of the fission fragment selection cuts applied are displayed in Fig.~\ref{fig:TargetCuts} (spatial actinide selection) and Fig.~\ref{fig:PIDCuts} (particle identification).
Two particle identification static cuts remove non-fragment background, while a dynamic cut is used to estimate residual uncertainties in the fission fragment detection efficiency ($\epsilon_{ff}$) determination process. The dynamic cut is sampled over a range of different values.
Since $\epsilon_{ff}$ represents the fraction of fission fragments that are observed in the detector with all selection cuts applied, any observed variation in the cross section ratio as the dynamic cut varies is considered to be a residual uncertainty in the determination of $\epsilon_{ff}$ itself.
\begin{figure*}[t!]
\includegraphics[scale=0.43]{figures/EfficiencySimCompare}
\caption{\label{fig:EfficiencySimCompare} Energy of observed fission fragments as a function of emission angle from the target ($\cos(\theta)$, $\cos(\theta)=1$ is emission perpendicular to the target). ADC refers to the uncalibrated particle energy recorded by the digitizers. These distributions are compared for data (A \& B) and simulation (C \& D) for both isotopes.
The two vertical bands represent the escape angle distributions of light and heavy fission fragments.
These distributions are used to determine $\epsilon_{ff}$ via a fitting procedure with a Monte Carlo simulation.
The data excludes $\cos(\theta)>0.95$ to avoid electronics saturation effects.
Different neutron energy ranges are displayed for the two actinides: $^{238}$U between $1.33-2.51$~MeV and $^{235}$U between $0.16-0.42$~MeV.
}
\end{figure*}
The dynamic energy cut varies over the range shown in Fig. \ref{fig:PIDCuts}: the low-energy limit is chosen to eliminate the vast majority of non-fragment background, while the high-energy limit removes a small fraction of low-energy fission fragments.
Through careful selection of these limits, the $C_r$ and $C_\alpha$ terms of Eq. \ref{eq:CSRatio} are rendered negligible.
However, increasing the energy threshold for fission fragment detection has the consequence of increasing the relative uncertainty of $\epsilon_{ff}$.
\subsection{\label{sec:Efficiency}Fission Fragment Detection Efficiency}
The efficiency with which the fissionTPC experimental configuration detects fission fragments, $\epsilon_{ff}$, is clearly of central importance to any fission cross section ratio measurement. The detailed event-by-event information captured by the fissionTPC is used to build and tune a complex phenomenological
efficiency model as a function of incident neutron energy. The efficiency model captures a myriad of transport and loss effects, in addition to underlying nuclear data and the analysis selections described in Section~\ref{sec:FissionCuts}.
Processes and parameters that have empirically been found necessary to represent the fissionTPC data include fission product yields, fission fragment stopping power, quantum and kinematic anisotropy, and target thickness, composition, and surface roughness.
Monte Carlo simulations of these effects are used to implement the efficiency model, with the required parameters being determined by fitting observable distributions to fissionTPC data.
This method is computationally intensive ($\sim 2000$~CPU hours for the final efficiency calculation) since a Monte Carlo realization must be generated for each parameter set. However, there is no analytical approach of which we are aware for this complex problem.
Changes in the fission fragment detection efficiency, $\epsilon_{ff}$, as fragment energy selection cuts are applied are primarily caused by variable energy loss in the target as a function of emission angle ($\cos(\theta)$, where $\cos(\theta)=1$ is emission perpendicular to the target).
When a fission fragment escapes from the target traveling perpendicular to the target plane, there is minimal energy loss.
When a fission fragment travels parallel to the target plane, significantly more energy loss can occur, which can result in the fragment stopping in the target and being undetected.
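As a purely illustrative sketch of this angle dependence (not the actual efficiency model), a toy Monte Carlo in which fragments born at a uniform depth escape only if their path length through the deposit, $\mathrm{depth}/\cos(\theta)$, is shorter than their range shows how the escape fraction drops at small $\cos(\theta)$; the thickness and range values below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def escape_fraction(cos_theta, thickness=1.0, frag_range=5.0, n=100_000):
    """Toy estimate: fragments born at a uniform depth in the deposit escape
    only if their chord length depth/cos(theta) is shorter than their range."""
    depth = rng.uniform(0, thickness, n)
    return np.mean(depth / cos_theta < frag_range)
```

For these toy parameters all fragments emitted perpendicular to the target escape, while at $\cos(\theta)=0.1$ only those born in the shallower half of the deposit do.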
Furthermore, the minimum energy selection cut displayed in Fig.~\ref{fig:PIDCuts} can result in additional fission fragment losses.
These losses can be observed by examining the relationship between emission angle and energy (Fig.~\ref{fig:EfficiencySimCompare}), where the fission fragment distributions trend towards lower energy at smaller values of $\cos(\theta)$.
This emission angle versus fragment energy distribution is the primary representation of fissionTPC data that we use to build and constrain the efficiency model. As we will describe, features in this distribution are sensitive to a number of experiment parameters that are otherwise difficult or impossible to access.
To better highlight these features, a neutron energy selection has been applied to the distributions shown in Fig.~\ref{fig:EfficiencySimCompare}: for $^{238}$U neutron energies between $1.33-2.51$~MeV are displayed while for $^{235}$U the range is $0.16-0.42$~MeV.
At higher energies, fission anisotropy and the kinematic boost from the incident neutron energy cause forward peaking in the fission fragment angular distribution, so the energy selection upper bound is kept as low as possible while maintaining adequate statistics for the efficiency modeling procedure.
Because of the $^{238}$U fission threshold, the $^{238}$U target is sampled at higher neutron energies than the $^{235}$U target.
Fission fragment angular distributions in the fissionTPC have been previously studied in detail over a range of incident neutron energies~\cite{Kleinrath2016Thesis}.
At very forward angles ($\cos(\theta)>0.95$) saturation of pad-plane amplifiers occurs since such tracks occupy few pad-plane pixels. Accordingly, such tracks are excluded from the efficiency modeling procedure.
We use Monte Carlo simulation to recreate the measured $\cos(\theta)$ vs. energy distribution.
The parameters required are found by performing a multi-dimensional fit to minimize a $\chi^2$ comparison of data and the Monte Carlo representation.
We build the Monte Carlo simulation by considering fission fragment transport from the target into the active region of the TPC. The Fission Product Yields (FPY) for each neutron energy bin are determined using the energy of forward-escaping particles in the data ($\cos(\theta)$ between $0.775 - 0.975$), which have minimal energy loss. The approximate fragment mass is calculated kinematically using the fragment energy and total actinide mass. The small amount of energy straggling for forward-traveling fragments is corrected for in the FPY determination by deconvolving the estimated energy loss in the target. During fission fragment transport, energy loss of these particles traveling through the target is determined using parametrized stopping power functions derived from SRIM~\cite{Ziegler2010NIMB}.
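The kinematic mass estimate mentioned above can be sketched as follows, assuming equal and opposite fragment momenta (so that $m_1 E_1 = m_2 E_2$, neglecting the small boost from the incident neutron) and, as a further assumption, the $A=236$ compound mass for $n+{}^{235}$U:

```python
A_CN = 236  # assumed compound-nucleus mass number for n + 235U

def fragment_masses(E1, E2, A=A_CN):
    """Split the compound mass between two back-to-back fragments.
    Equal momenta with p^2 = 2mE imply m1*E1 = m2*E2."""
    m1 = A * E2 / (E1 + E2)
    return m1, A - m1
```

For example, a 100/70 MeV energy split yields masses near 97 and 139, consistent with the light/heavy fragment bands seen in Fig.~\ref{fig:EfficiencySimCompare}.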
The validity of SRIM stopping powers for fission fragments in thin foils was previously studied, and roughly mass-independent differences of up to 30\% were found \cite{Knyazheva2006}. For the efficiency model in this work, these differences are correlated with the target thickness fit parameter, and should not impact the calculated efficiency.
\begin{figure}[h]
\includegraphics[scale=0.38]{figures/SimRoughness}
\caption{\label{fig:SimRoughness} Representative roughness distribution of the actinide targets calculated with a fractal noise model. The axis units are arbitrary, but are common for all axes. The ratio of the height to the wavelength determines the surface normal distribution.}
\end{figure}
Target roughness must also be considered to account for the difference between the surface normal and the TPC drift field direction. Past work with molecular-plated targets on thick backing \cite{Sadi2011NIM} revealed short-wavelength roughness ($\sim$5 $\mu$m), but in the case of a thin carbon backing longer wavelengths are expected \cite{Henderson2011NIM}.
The surface roughness for this work is represented by a simple fractal noise model, generated by combining Perlin noise fields \cite{Perlin1985SCG}; Fig. \ref{fig:SimRoughness} displays a representative target roughness distribution.
Sampling the surface normal distribution yields a $\cos(\theta)$ distribution with the form $\exp(x/\beta-1)$ where the parameter $\beta$ represents the roughness.
Having found this simple representation of the target roughness, we have similarly investigated the effect of that roughness on particle transport from the target surface into the detector gas volume.
A simple Monte Carlo model is used in which fragments that escape from the target but then collide with a different region of the rough surface are removed.
The efficiency of escape into the gas volume was found to have the form $1-\exp(-x/\gamma)$, where the parameter $\gamma$ represents the roughness.
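A minimal sketch of sampling from these two roughness forms, using the $^{238}$U best-fit values quoted later in Table~\ref{tab:efficiency} purely as illustrative inputs (the density $\exp(x/\beta-1)$ is sampled via inverse transform as an exponential tail below $x=1$, and the escape form $1-\exp(-x/\gamma)$ is applied as a rejection step):

```python
import numpy as np

rng = np.random.default_rng(0)
beta, gamma = 0.00938, 0.0267  # illustrative 238U best-fit roughness values

# Inverse-transform sample of cos(theta) with density proportional to
# exp((x - 1)/beta): for v uniform in (0, 1], x = 1 + beta*log(v).
v = 1.0 - rng.uniform(size=200_000)
cos_t = 1.0 + beta * np.log(v)

# Thin the sample by the escape efficiency 1 - exp(-x/gamma) (rejection step)
keep = rng.uniform(size=cos_t.size) < 1.0 - np.exp(-cos_t / gamma)
escaped = cos_t[keep]
```

For such a small $\beta$ the surface-normal distribution is sharply concentrated just below $\cos(\theta)=1$, with mean deviation $\beta$ from unity.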
The procedure used to generate the $\cos(\theta)$ vs. energy distribution using Monte Carlo is as follows.
A fission fragment is generated at a random depth in the actinide deposit and is propagated until stopping or escaping from the target, using the SRIM derived stopping power functions.
There are a total of eight parameters that are varied in each Monte Carlo iteration and whose values are determined via a $\chi^2$ minimization with respect to the fissionTPC data.
The first parameter in the model is the thickness of the UF$_4$ deposit in the target.
The next two parameters are $\beta$ and $\gamma$, which describe the target roughness. The fourth parameter is the total fission kinetic energy.
The fifth and sixth parameters represent an angular scatter after leaving the target, which is interpreted as the fragment scattering off of argon in the gas, i.e. being detected at an angle different from that at which it was emitted. A significant number of such tracks have been observed in the fissionTPC data.
These two parameters are the slope and intercept of the scattering angle as a function of fragment energy. A fission anisotropy term is included to describe quantum anisotropy in the fission process. Finally, an eighth term is included to represent the thickness of inert material on the surface of the target.
The data and the Monte Carlo model realization for the `best-fit' parameters that result from the $\chi^2$ minimization are shown for $^{238}$U and $^{235}$U in Fig.~\ref{fig:EfficiencySimCompare}. The slope towards lower energy at low $\cos(\theta)$ strongly constrains the target deposit thickness parameter.
The variation in the intensity of the distribution as a function of $\cos(\theta)$ is most strongly influenced by the anisotropy term.
The strong fall off in event statistics at low $\cos(\theta)$ is caused by preferential stopping of fragments in the target and the target roughness escape efficiency.
The broadness of the distribution at low energy depends on the final scatter term in the gas and the surface roughness distribution.
The $\chi^2$ minimization provides a single anisotropy term for each target, and an additional fitting procedure is needed to describe the change in quantum anisotropy as a function of energy.
The ratio between the best-fit Monte Carlo model realization and data for each neutron energy bin is fit using a second-order Legendre polynomial.
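This Legendre fit can be sketched with NumPy's Legendre utilities; the model-to-data ratio values below are synthetic (an assumed $1 + 0.2\,P_2$ shape), used only to show the fitting step:

```python
import numpy as np
from numpy.polynomial import legendre

# Synthetic model/data ratio vs cos(theta) with an assumed P2 anisotropy shape
cos_t = np.linspace(0.05, 0.95, 19)
ratio = 1.0 + 0.2 * 0.5 * (3 * cos_t**2 - 1)   # P0 + 0.2 * P2

# Second-order Legendre-series least-squares fit and its evaluation
coef = legendre.legfit(cos_t, ratio, deg=2)
fit = legendre.legval(cos_t, coef)
```

Since the synthetic ratio is exactly a degree-2 Legendre series, the fit recovers the coefficients $(1, 0, 0.2)$ to machine precision.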
\begin{figure}[h]
\includegraphics[scale=0.43]{figures/Efficiency}
\caption{\label{fig:Efficiency} Calculated fission detection efficiency for $^{238}$U and $^{235}$U.}
\end{figure}
\begin{table}
\caption{\label{tab:efficiency}Efficiency fit parameters and uncertainties for the two targets. }
\begin{ruledtabular}
\begin{tabular}{lcc}
Parameter & $^{238}$U & $^{235}$U \\
\hline
UF$_4$ thickness (mg/cm$^2$) & 0.346(1) & 0.292(1) \\
$\beta$ roughness & 0.00938(4) & 0.0101(1) \\
$\gamma$ roughness & 0.0267(7) & 0.0380(4) \\
Total energy (MeV) & 179.9(3) & 183.5(3) \\
Scatter offset (deg) & 20.3(1) & 21.3(1) \\
Scatter slope (deg/MeV) & -0.356(1) & -0.375(3) \\
Anisotropy & 1.201(5) & 0.935(3) \\
Inert thickness (mg/cm$^2$) & 0.0158(1) & 0.0156(1) \\
\end{tabular}
\end{ruledtabular}
\end{table}
The fragment transport and anisotropy best-fit parameters are combined to calculate the fission fragment detection efficiency as a function of energy, and Monte Carlo error propagation (see Section \ref{sec:ErrorPropagation}) is used to calculate the efficiency uncertainty from the fitting procedure covariances (Fig. \ref{fig:Efficiency}).
The Monte Carlo transport and anisotropy model, using the best-fit parameters, directly describes the fraction of fission fragments that enter the active volume of the fissionTPC and that would pass analysis selection cuts.
The upward slope as a function of energy is a consequence of the kinematic boost from neutron momentum transfer.
The fission fragment detection volume for both targets is downstream from the neutron beam, and the momentum transfer increases the number of fragments entering that volume.
The energy-dependent structure in the efficiencies results from the quantum anisotropy of fission, which must be measured for each energy bin.
The larger uncertainties at low energy for $^{238}$U are due to low statistics below the fission threshold.
\begin{figure*}[t!]
\includegraphics[scale=0.8]{figures/DataCompareFull}
\caption{\label{fig:DataCompareFull} The normalized $^{238}$U(n,f)/$^{235}$U(n,f) cross section ratio measured in this work, compared with the three most recent measurements and the ENDF/B-VIII.$\beta$5~\cite{ENDFB8B5} evaluation shown with the evaluated uncertainty. In many cases, the uncertainty on the measured data is smaller than the symbol. The inset shows a comparison to the ENDF/B-VII.1~\cite{Chadwick2011NDS}
and ENDF/B-VIII.$\beta$5 evaluations near $1.2$~MeV. The lower plot shows the residual of the four data sets and ENDF/B-VIII.$\beta$5, shown with the evaluated uncertainty.}
\end{figure*}
The best-fit parameters and uncertainties for the two targets are shown in Table \ref{tab:efficiency}. While the model was based upon a physical description of the processes affecting the efficiency, the parameters involved are not necessarily physically precise values, as a number of compensating effects can occur. For example, uncertainties in the SRIM-derived stopping powers would be correlated with the target thickness, the total energy would correlate with the choice of digitizer channel to energy conversion factor, and the anisotropy would correlate with the choice of TPC drift velocity. The purpose of these calculations is to describe the fission fragment distribution as accurately as possible, and extrapolate the data below an energy threshold using the efficiency model. Compensating factors like these should not significantly affect the extrapolation.
\subsection{\label{sec:ErrorPropagation}Uncertainty Propagation}
The ratio defined in Eq.~\ref{eq:CSRatio} can be described by a probability distribution for each neutron energy range, and the covariance of these values is calculated via Monte Carlo error propagation.
Each term in the ratio has an assigned uncertainty, and some terms have fit parameter covariance matrices.
The product of the transposed Cholesky decomposition~\cite{Gentle1998Book} and random Gaussian vectors are used to generate $100$~realizations of all ratio terms. The same Gaussian vectors are applied across the full neutron energy range, and a ratio covariance as a function of energy can be found by analyzing the ratio distributions for pairs of energies. The $C_{bb}$ and $\epsilon_{ff}$ terms use full covariance matrix error propagation, the $G$ term is considered fully correlated, and $C_{ff}$ is considered uncorrelated as a function of energy.
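The Cholesky-based parameter variation can be sketched as follows, with a hypothetical three-parameter fit covariance standing in for the real fit outputs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-parameter best-fit values and fit covariance matrix
best_fit = np.array([1.0, 2.0, 3.0])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])

L = np.linalg.cholesky(cov)        # lower-triangular factor, cov = L @ L.T
z = rng.standard_normal((100, 3))  # 100 independent Gaussian vectors
# Multiplying each Gaussian vector by the transposed factor yields
# correlated parameter realizations with the desired covariance.
realizations = best_fit + z @ L.T
```

Each row of `realizations` is one correlated draw of the fit parameters; evaluating the ratio terms for every row builds up the ratio distribution per energy bin.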
The anisotropy contribution was solved for independently of the main efficiency model covariance matrix, i.e. the parameters describing anisotropy were varied around their best-fit values independently of those for the efficiency model.
As mentioned in Section \ref{sec:FissionCuts}, the PID energy cut is varied over a range of values, where the minimum value is above $\alpha$-particle and recoil contaminants, and the maximum value removes a small fraction of fission events in the fission distribution. The fission fragment detection efficiency in the cross section ratio
is calculated for each cut variation, and any dependence of the ratio on the cut energy is considered a residual efficiency uncertainty. For this analysis, the cut is varied~100 times across a uniform distribution.
The final cross section ratio is calculated by performing the 100 Monte Carlo term variations for each of 100 energy cut variations, resulting in 10,000 values in the cross section ratio distribution for each energy bin. The mean of the ratio distribution is calculated for each energy bin, and covariance is calculated with pairs of energy bins. The normalized cross section ratio with all uncertainties is shown in Fig.~\ref{fig:DataCompareFull}, compared to the ENDF/B-VII.1~\cite{Chadwick2011NDS} and ENDF/B-VIII.$\beta$5~\cite{ENDFB8B5} evaluations. The correlation matrix for the cross section ratio is shown in Fig. \ref{fig:U8U5Correlation}, where the z-axis represents the value of the correlation matrix elements. The ratio is normalized to the ENDF/B-VIII.$\beta$5 evaluation at $14.5$~MeV. The covariance matrix is related to the correlation matrix by the uncertainties shown in Fig.~\ref{fig:DataCompareFull}.
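The relation mentioned above between the covariance and correlation matrices can be sketched as follows; the covariance values are hypothetical, chosen only to make the scaling explicit:

```python
import numpy as np

# Hypothetical ratio covariance matrix over three energy bins
cov = np.array([[4.0, 1.0, 0.5],
                [1.0, 9.0, 2.0],
                [0.5, 2.0, 16.0]])

# Divide out the per-bin uncertainties to obtain the correlation matrix
sigma = np.sqrt(np.diag(cov))
corr = cov / np.outer(sigma, sigma)   # unit diagonal, symmetric
```

Conversely, multiplying the correlation matrix element-wise by $\sigma_i\sigma_j$ recovers the covariance, which is the sense in which the two matrices are related by the uncertainties shown in Fig.~\ref{fig:DataCompareFull}.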
\section{\label{sec:Discussion}Discussion}
The various uncertainty contributions to the measured cross section ratio can be isolated by enabling individual contributions in the error propagation procedure (Fig. \ref{fig:Uncertainty}). The largest contribution to the total uncertainty at high energies is the statistical uncertainty, while at low energies the contaminant uncertainty dominates, because the cross section ratio drops dramatically below the fission threshold. The efficiency fit contributes the next largest uncertainty at high energy, although the contribution is significantly smaller than the statistical uncertainty. The residual uncertainty refers to the sensitivity of the cross section ratio to variations in the energy-based PID cut, which is similar to the efficiency uncertainty at higher energy. The wrap-around correction is a minor contribution to the total uncertainty.
\begin{figure}[h]
\includegraphics[scale=0.43]{figures/U8U5Correlation}
\caption{\label{fig:U8U5Correlation} The $^{238}$U(n,f)/$^{235}$U(n,f) correlation matrix measured in this work. At low neutron energy, the contaminant correction becomes the largest source of uncertainty, resulting in a large correlated region in the correlation matrix. The contaminant correction is a fixed value at all energies and as the ratio becomes small at low energy, a large relative uncertainty results. The z-axis represents the value of the correlation matrix elements.}
\end{figure}
The cross section ratio has been normalized to the ENDF/B-VIII.$\beta$5 evaluation at 14.5 MeV, as the uncertainty at this energy is relatively small~\cite{ENDFB8B5}. The beam flux $\Phi$ and actinide density $N$ factor out of Eq. \ref{eq:CSRatio} when normalizing, which removes the uncertainty associated with those terms. The neutron beam flux was calculated with the measured proton distribution in the fissionTPC, resulting from neutrons scattering off of hydrogen in the drift gas, and it was found that a small tilt in the detector or gain variations across the pad plane could result in a difference between the measured proton distribution and neutron flux at the target.
\begin{figure}[h]
\includegraphics[scale=0.43]{figures/Uncertainty}
\caption{\label{fig:Uncertainty} Uncertainty contributions to the $^{238}$U(n,f)/$^{235}$U(n,f) cross section ratio. At low neutron energy, the contaminant correction becomes the largest source of uncertainty, and statistical uncertainty is largest at high energy. The contaminant correction is a fixed value at all energies and as the ratio becomes small at low energy, a large relative uncertainty is found.}
\end{figure}
With thick-backed actinide targets which overlap in the $x$-$y$ dimensions, the fission and $\alpha$-particle spatial distribution can be used as a second method for calculating the neutron flux, and this would not be sensitive to the tilt of the detector or gain variations. The target used for this measurement has two half-disk actinide deposits on a thin carbon-backed target which did not have any actinide overlap in $x$ and $y$, and such a correction could not be made. Future measurements will include thick-backed targets with actinide deposits on both sides.
\begin{figure}[h]
\includegraphics[scale=0.43]{figures/LeftRightFlux}
\caption{\label{fig:DataCompareZoom} The fission ratio of the left and right half of the $^{235}$U target measured from this work.}
\end{figure}
Typical neutron-induced fission cross section measurements have stacks of targets that have roughly the same spatial distribution of actinide deposits and neutron flux.
The ability of the fissionTPC to identify energy, length, track angle, and start position allows for the thin-backed half-disk target used in this work.
It was previously assumed that the neutron beam flux varied spatially, but that the neutron energy spectrum did not.
To test this, a ratio of fission counts was taken between different regions of the target, and a 7\% variation in neutron flux as a function of energy was observed.
This ratio can be seen in Fig.~\ref{fig:DataCompareZoom}, with a gradual increase occurring between $0.5$ and $10$~MeV.
MCNP simulations \cite{Pelowitz2011LA} show that this is due to an intervening neutron collimator exposing off-axis areas of the fission foil to different sections of the tungsten spallation target.
As the proton beam slows down in the tungsten, the neutron spectrum softens leading to a spatially varying neutron energy spectrum.
Such a flux variation should only be observed in the direction of the beam, which is parallel to the ground. The half-disk targets used in this measurement are bisected by a plane consistent with the beam direction, and therefore flux variations should not be observed between the two targets.
This was confirmed experimentally in a separate measurement of different actinides, which had deposits rotated 90$^{\circ}$ relative to the target used in this experiment. In that confirmation measurement, the top/bottom ratio was consistent with unity.
\begin{figure}[h]
\includegraphics[scale=0.43]{figures/DataCompareMid}
\caption{\label{fig:DataCompareMid} The normalized $^{238}$U(n,f)/$^{235}$U(n,f) cross section ratio measured in this work, compared with the three most recent measurements and to the ENDF/B-VIII.$\beta$5 \cite{ENDFB8B5} evaluation shown with the evaluated uncertainty. The neutron energy range $1.6$ to $3.4$~MeV is shown, and the new data is seen to differ from ENDF/B-VIII.$\beta$5 by $\sim$2.5\% at $2.4$~MeV.}
\end{figure}
The result here is compared to the three most recent $^{238}$U(n,f)/$^{235}$U(n,f) measurements, as well as to the ENDF/B-VIII.$\beta$5 evaluation in Fig.~\ref{fig:DataCompareFull}.
The $^{238}$U(n,f) cross section was recently revised in ENDF/B-VIII.$\beta$5, resulting in a 40\% change in the evaluation at $1.2$~MeV.
A comparison of this work to ENDF/B-VII.1~\cite{Chadwick2011NDS} and ENDF/B-VIII.$\beta$5~\cite{ENDFB8B5} is shown with three previous data sets in the inset of Fig. \ref{fig:DataCompareFull}.
The $^{238}$U(n,f)/$^{235}$U(n,f) cross section ratio measured in this work agrees with most recent data, and provides support for the recent change in the evaluation.
A significant difference in the cross section is observed between this work and past measurements in the energy range $2-3$~MeV (Fig.~\ref{fig:DataCompareMid}), with this work most closely agreeing with Shcherbakov~\cite{Shcherbakov2002JNST}.
The disagreement between this measurement and ENDF/B-VIII.$\beta$5 is greatest ($\sim$2.5\%) near $2.4$~MeV.
The cross section ratio presented here is normalized to ENDF/B-VIII.$\beta$5 at $14.5$~MeV neutron energy, and the disagreement at $2.4$~MeV indicates a difference in the cross section ratio shape.
Without an absolute normalization, we are not able to determine the energy range in which the disagreement occurs.
\section{\label{sec:Conclusions}Conclusions}
The normalized $^{238}$U(n,f)/$^{235}$U(n,f) cross section ratio has been measured using the fissionTPC over the neutron energy range $0.5$ to $30$~MeV. The fissionTPC allowed for a detailed analysis of systematic uncertainties by providing particle information which is unique to this technique.
By fitting the distributions of fission fragment energy and angle using a Monte Carlo simulation of the target, an efficiency correction factor could be applied to the measured fission event count, which allows for a higher energy cut to exclude $\alpha$-particle and neutron recoil backgrounds.
Error propagation of the wrap-around and efficiency fits was combined with a variational analysis to produce an accurate measure of the systematic covariance for the cross section ratio.
The cross section ratio presented here is normalized to the ENDF/B-VIII.$\beta$5 evaluation at 14.5 MeV.
This allows the shape of the ratio to be reported over the full neutron energy range without the large neutron beam flux uncertainty introduced by the target geometry. Future measurements will be performed with thick-backed targets and back-to-back actinide deposits which will allow for precise determination of the neutron beam flux and absolute normalization.
This cross section ratio has the potential to impact other measurements, because the $^{238}$U(n,f) cross section is a standard used in neutron flux measurements, and can cause correlations between different nuclear data sets. This new data
provides additional support for the recent 40\% change of the $^{238}$U(n,f) cross section reflected in the ENDF/B-VIII.$\beta$5 evaluation. In addition, the measured cross section ratio shape can be used to improve nuclear physics knowledge of the compound nuclei by fitting the data with nuclear reaction models.
\begin{acknowledgments}
The authors would like to thank Caleb Mattoon for valuable discussions. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. The neutron beam for this work was provided by LANSCE, which is funded by the U.S. Department of Energy and operated by Los Alamos National Security, LLC, under contract DE-AC52-06NA25396. University collaborators acknowledge support for this work from the U.S. Department of Energy Nuclear Energy Research Initiative Project Number 08-014, the U.S. Department of Energy Idaho National Laboratory operated by Battelle Energy Alliance under contract number 00043028-00116, the DOE-NNSA Stewardship Science Academic Alliances Program, under Award Number DE-NA0002921, and through subcontracts from LLNL and LANL.
\end{acknowledgments}
\section{Introduction}
\textit{In situ} sequencing \cite{ke2013situ} is a powerful tool to quantify gene expression directly in biological tissue samples without losing spatial information on tissue morphology. Investigated genes are targeted with a controlled design of barcoded padlock probes, locally amplified, and sequenced by repeated fluorescent staining and imaging cycles.
An \textit{in situ} sequencing dataset consists of five fluorescent channels for each sequencing cycle: one nuclei channel and four colour channels where the fluorescent signals belonging to the four bases of the genetic code (T, G, C, A) are imaged.
An additional general stain is used in the first cycle to detect all four bases in a single image used as reference.
Fluorescent signals appear as bright spots of 3-7px size in a noisy background caused by scattered light and autofluorescence. Moreover, fluorescent spots have a blurry appearance without any clear border due to the low signal-to-noise ratio caused by the diffraction limit of the microscope. Therefore, it is often difficult to distinguish real fluorescent signals from noise and background structures when the analysis relies on intensity-based criteria alone.
We previously published an image analysis pipeline for signal decoding of the barcoded padlock probes mapping targeted mRNAs with morphological and spatial information in cells and tissue \cite{pacureanu2014image}.
This previous approach detected fluorescent signals by applying a global threshold on intensities of the general stain image, followed by a watershed segmentation to resolve clusters. The resulting segmentation mask was then used to extract fluorescence intensities in the four colour channels for each sequencing cycle. Thereafter, barcodes were decoded selecting the base from the channel with the highest intensity for each sequencing cycle and finally filtered using a quality threshold based on intensity values.
As compared to visual assessment, many signals were missed using this approach.
In order to improve recall of the decoded barcode sequences we here present a pipeline that aims to be as inclusive as possible in the first processing steps. Thus, we delay the decision of which fluorescent signal candidates contribute to an expected sequence.
We use a Convolutional Neural Network (CNN) to extract self-learned features from signal candidates and to predict the probability that a candidate is a true signal, as judged by visual examination. We thereafter feed signal candidates to a graphical model that resolves the sequences and provides the final barcodes. Finally, a quality measurement of the decoded sequences is assessed, making it possible for the user to set a threshold to achieve high recall or high accuracy.
\section{Method}
\subsection{Image Registration}
Before extracting the signals we align the images to compensate image misalignment among successive sequencing cycles.
We firstly apply a maximum intensity projection (MIP) of the general stain and the nuclei channel of the first sequencing cycle and we use the resulting image as a fixed reference. Second, for each sequencing cycle we combine the five channels (nuclei, T, G, C, A) by MIP and use them as moving images to perform a rigid registration to the reference using Elastix \cite{klein2010elastix}. Finally, the same transformation matrices are applied to the four colour channels (T, G, C, A) of the respective sequencing cycle achieving their final registration.
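The MIP used to build the fixed and moving images can be sketched in a few lines of NumPy (the rigid registration itself is performed with Elastix and is not reproduced here):

```python
import numpy as np

def max_intensity_projection(channels):
    """Combine a list of equally sized 2-D channel images into one image
    by taking the pixel-wise maximum across channels."""
    return np.max(np.stack(channels, axis=0), axis=0)
```

For each sequencing cycle, passing the five registered channels to this function yields the single moving image used for registration.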
\subsection{Signal Candidate Detection}
After image registration we extract signal candidates from each channel and each sequencing cycle, performing a coarse detection that includes noise; the graphical model later resolves the sequences by picking up only high-quality candidates to build a barcode sequence.
This allows us to also retrieve the weakest signals, which might be confused with noise and discarded when using high threshold parameters.
\subsubsection{Normalization}
In order to be able to compare intensity values among different channels, we normalize the images scaling the intensities between the estimated background value (\textit{i.e.} mode of the image) and the brightest fluorescent signal values (\textit{i.e.} \(99^{th}\) percentile of intensity values):
\[I^{\mathrm{norm}}_{ij}=\frac{I_{ij}-\mathrm{mode}(I_{ij})}{P_{99}(I_{ij})-\mathrm{mode}(I_{ij})}\]
where \(I_{ij}\) is the registered image of channel \(j\) at sequencing cycle \(i\) and \(P_{99}(I_{ij})\) is its \(99^{th}\) percentile.
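A minimal NumPy sketch of this normalization, assuming non-negative integer-valued images so that the mode (background estimate) can be read off a histogram:

```python
import numpy as np

def normalize_channel(img):
    """Scale intensities between the estimated background (image mode)
    and the brightest signals (99th percentile). Assumes a non-negative
    integer-valued image."""
    background = np.bincount(img.ravel()).argmax()  # most frequent intensity
    p99 = np.percentile(img, 99)
    return (img.astype(float) - background) / (p99 - background)
```

After this scaling, background pixels sit near 0 and the brightest fluorescent signals near 1 in every channel, making intensities comparable across channels.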
\subsubsection{Signal Candidate Extraction}
Before extracting the signal candidates, we first enhance bright spots and attenuate background contribution using Top-Hat filtering with a disk of radius 5 as structuring element.
Afterwards, we perform an h-maxima transformation \cite{soille2013morphological} in which all local maxima are found, but only those whose dynamic is higher than a given value \(h\)
are extracted as signal candidates.
Furthermore, since some very bright signals present saturated intensity values, the h-maxima transform detects regions of local maxima consisting of several connected pixels. Thus, we consider the centroid coordinates of these regions as signal candidate positions.
The result of this step is a mask, having the same size as the original image, with white pixels corresponding to the detected candidate positions.
This step is applied to the general stain channel image of the first sequencing cycle and to each colour channel of every sequencing cycle.
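A simplified sketch of this detection step using SciPy; the thresholded-local-maxima test below is a stand-in for the full h-maxima transform of \cite{soille2013morphological}, and the helper name is ours:

```python
import numpy as np
from scipy import ndimage

def detect_candidates(img, h, radius=5):
    """Top-hat filtering with a disk structuring element, then keep
    local maxima exceeding h (a simplified stand-in for the h-maxima
    transform); connected plateaus (saturated spots) are reduced to
    their centroid coordinates."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = (x * x + y * y) <= radius * radius
    tophat = ndimage.white_tophat(img, footprint=disk)
    is_max = (tophat == ndimage.maximum_filter(tophat, size=3)) & (tophat > h)
    labels, n = ndimage.label(is_max)
    centroids = ndimage.center_of_mass(tophat, labels, range(1, n + 1))
    return [(round(r), round(c)) for r, c in centroids]

# Toy image with a single bright spot on a flat background.
img = np.zeros((40, 40))
img[20, 20] = 1.0
cands = detect_candidates(img, h=0.5)
```

The returned coordinate list plays the role of the binary candidate mask described above.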
\subsubsection{Signals Merging}
Due to photo-bleaching and broad emission spectra, fluorescent signals can bleed through to other channels. Thus, to filter multiple detections associated with the same fluorescent signal, the extracted signal candidates belonging to different masks are associated as follows:
\begin{enumerate}
\item signal candidate coordinates of the general stain channel are extracted from the relative mask and taken as reference;
\item for each pair of coordinates extracted we check, in each colour channel mask of each sequencing cycle, whether there is a corresponding candidate whose coordinates lie in a 3x3 px window centred at the reference coordinates;
\item if any corresponding candidate is found we retrieve the intensity value of the closest coordinate from the respective normalized image (otherwise intensities are extracted from each channel normalized image using the reference coordinates).
\end{enumerate}
After this step, among all associated candidates belonging to the same fluorescent signal, we select the one with the maximum fluorescence intensity.
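The association steps above can be sketched as follows (a minimal illustration assuming boolean candidate masks and normalized images stored per channel; all names are ours):

```python
import numpy as np

def merge_candidates(ref_coords, channel_masks, norm_images):
    """For each reference coordinate (general stain), look for a
    candidate within a 3x3 window in each colour-channel mask; read the
    intensity of the closest match from the normalized image, falling
    back to the reference coordinate itself (interior coords assumed)."""
    merged = []
    for (r, c) in ref_coords:
        intensities = {}
        for ch, mask in channel_masks.items():
            win = mask[r - 1:r + 2, c - 1:c + 2]
            ys, xs = np.nonzero(win)
            if len(ys):
                # Closest detected candidate inside the window.
                k = np.argmin((ys - 1) ** 2 + (xs - 1) ** 2)
                rr, cc = r - 1 + ys[k], c - 1 + xs[k]
            else:
                rr, cc = r, c  # fall back to the reference coordinates
            intensities[ch] = norm_images[ch][rr, cc]
        merged.append(((r, c), intensities))
    return merged

# Toy example: channel 'A' has a candidate one pixel right of the reference.
mask_A = np.zeros((10, 10), bool); mask_A[5, 6] = True
mask_C = np.zeros((10, 10), bool)
imgs = {'A': np.arange(100).reshape(10, 10) / 100.0,
        'C': np.arange(100).reshape(10, 10) / 100.0}
out = merge_candidates([(5, 5)], {'A': mask_A, 'C': mask_C}, imgs)
```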
\subsection{Signal Candidate Predictions}
For each signal candidate extracted in the previous step, we adopt a CNN-based self-learning approach to determine the probability of it being signal or noise. Using training data, the CNN learns the underlying discriminative features to predict the similarity between a signal candidate and a true signal.
The CNN architecture is composed of a convolutional layer with 200 filters of 5x5 convolutions, a fully connected layer of 128 neurons and a softmax layer with 2 output classes. Two dropout layers are used, as regularizers, before and after the fully connected layer to avoid overfitting.
The convolutional layer takes as input an array of 5x5 px windows. Each window is centred on a signal candidate location and contains the intensity values extracted from the relative normalized image.
The network was first trained on an annotated dataset from a different \textit{in situ} sequencing experiment. Using a new set of 200 annotated signals,
selected randomly across all colour channels and sequencing cycles,
we fine-tuned the pretrained network to compensate for differences in experimental parameters and in the biological sources of the samples. The output is the predicted probability of a signal being true or noise.
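Since the 5x5 convolution kernel matches the 5x5 input window, the convolutional layer effectively acts as a dense map. A NumPy forward-pass sketch with random, untrained weights follows (dropout, active only during training, is omitted; all names and values are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer shapes mirroring the described architecture: 200 filters of 5x5
# convolutions, a 128-neuron fully connected layer, a 2-class softmax.
W_conv = rng.normal(scale=0.1, size=(200, 5, 5))
b_conv = np.zeros(200)
W_fc = rng.normal(scale=0.1, size=(128, 200))
b_fc = np.zeros(128)
W_out = rng.normal(scale=0.1, size=(2, 128))
b_out = np.zeros(2)

def predict(window):
    """Forward pass for one 5x5 intensity window -> (P(noise), P(signal))."""
    conv = np.maximum(np.tensordot(W_conv, window, axes=([1, 2], [0, 1])) + b_conv, 0)
    fc = np.maximum(W_fc @ conv + b_fc, 0)
    logits = W_out @ fc + b_out
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()

p = predict(rng.random((5, 5)))
```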
\subsection{Graphical Model}
We finally resolve the sequences combining all signal candidates and probability predictions. The graphical model helps us to model the uncertainty of the data (probability that a detection is a true signal or not) in combination with a representation of the logical structure of the data itself (topological constraints applied to the graph that promote the formation of paths representing the decoded sequences).
Signal candidates are encoded in the graph as \textit{detection variables} represented as \(D\) nodes (Fig.~\ref{gm_fig}). Each detection variable is a boolean random variable that represents the nature of the detection: \textit{False} for a false positive detection (noise), \textit{True} for a true positive detection (signal). Relationships among signal candidates belonging to different sequencing cycles are encoded as \textit{transition variables}, represented as \(T\) nodes. Each transition variable connects a pair of detection variables representing signal candidates from any pair of different cycles whose distance is less than a given threshold \(d_{th}\). Therefore, transition variables are boolean random variables that assume \textit{True} values when a pair of true signal candidates belong to the same barcode sequence (\textit{i.e.} mRNA molecule).
We can formally describe our graphical model as a graph \(G=\{D,T,E,f,g\}\) where \(D=\{D_1,D_2,\dots,D_n\}\) is the set of detection variables, \(T=\{T_1, T_2,\dots,T_m\}\) is the set of transition variables, \(E =\{e_1,e_2,\dots,e_z\}\) is the set of edges, and \(f=\{f_1,f_2,\dots,f_n\}\) and \(g=\{g_1,g_2,\dots,g_m\}\) are cost functions, respectively for candidate selection and aggregation among sequencing cycles, defined as:
\begin{equation}
f_i(D_i)=\left\{%
\begin{array}{lc}
-\log(p_0) & D_i=0 \\
-\log(p_1) & D_i=1
\end{array}\right., \forall i \in [1,n]
\end{equation}
where $p_0$ and $p_1$ are probability predictions from the CNN of being respectively noise and signal; and
\begin{equation}
g_j(T_j)=\left\{%
\begin{array}{lc}
-\log(\mu_t) & T_j=0 \\
-\log(1-\mu_t) & T_j=1
\end{array}\right., \forall j \in [1,m]
\end{equation}
where $\mu_t$ is an affinity function describing how strongly two signal candidates belonging to different cycles are related to each other in terms of distance and intensity value. Specifically:
\begin{equation}
\mu_t = \frac{1}{(1+k_1 \cdot \Delta I)(1+k_2 \cdot d)}
\end{equation}
The affinity function $\mu_t$ is inversely proportional to the difference in intensity values between the pair of signal candidates, $\Delta I$, and to the Euclidean distance $d$ between them. Two weighting parameters, $k_1$ and $k_2$, modulate the contributions of $\Delta I$ and $d$.
The graph is solved by minimizing the cost function $C$ defined as:
\begin{equation}
C(D,T)=\sum_{i=1}^n f_i(D_i) + \sum_{j=1}^m g_j(T_j)
\end{equation}
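The cost terms above can be translated directly into code (a minimal sketch; $p_0$ and $p_1$ come from the CNN predictions, and the formulas mirror the equations above):

```python
import math

def f_cost(d_state, p0, p1):
    """Unary cost for a detection variable D_i: f_i(D_i)."""
    return -math.log(p0) if d_state == 0 else -math.log(p1)

def affinity(delta_I, d, k1=0.23, k2=0.1):
    """mu_t: affinity between two candidates from different cycles."""
    return 1.0 / ((1.0 + k1 * delta_I) * (1.0 + k2 * d))

def g_cost(t_state, mu_t):
    """Pairwise cost for a transition variable T_j: g_j(T_j)."""
    return -math.log(mu_t) if t_state == 0 else -math.log(1.0 - mu_t)

def total_cost(d_states, probs, t_states, mus):
    """C(D, T) = sum_i f_i(D_i) + sum_j g_j(T_j)."""
    return (sum(f_cost(s, p0, p1) for s, (p0, p1) in zip(d_states, probs))
            + sum(g_cost(s, m) for s, m in zip(t_states, mus)))

# Two detections (one kept, one discarded) joined by one active transition.
c = total_cost([1, 0], [(0.2, 0.8), (0.9, 0.1)], [1], [affinity(0.1, 1.0)])
```

The actual constrained minimization over variable states is performed by the graphical-model solver, which this sketch does not implement.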
However, due to the nature of the problem, only certain configurations of the random variable states encode a valid representation of a barcode sequence. Thus, in order to restrict the solution space to this subset, the following constraints are added to the graph:
\begin{enumerate}
\item resolved sequences must have a length equal to the number of sequencing cycles,
\item resolved sequences must be encoded by $D$ variables belonging to different sequencing cycles,
\item each $T$ and $D$ can only encode a single barcode,
\item if a $D$ variable is set to $False$, then all $T$ variables connected to it are set to $False$.
\end{enumerate}
\subsection{Quality of Decoded Barcodes}
Finally, we define a quality metric $Q_b$ for each decoded sequence $b$ encoded by the set $\{D_{b1},\dots,D_{bh}\} \subset D$, where $h$ is the number of sequencing cycles:
\begin{equation}
Q_b = \frac{1}{\sum_{i=1}^h f_{bi}(D_{bi})}
\end{equation}
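This metric can be computed in a couple of lines (a sketch; the cost values would come from the $f$ terms of the selected detection variables, and the probabilities below are hypothetical):

```python
import math

def barcode_quality(detection_costs):
    """Q_b = 1 / sum_i f_bi(D_bi): cheap (high-confidence) paths score high."""
    return 1.0 / sum(detection_costs)

# Example: a 4-cycle barcode whose detections had P(signal) of
# 0.9, 0.8, 0.95 and 0.85 (made-up values for illustration).
costs = [-math.log(p) for p in (0.9, 0.8, 0.95, 0.85)]
q = barcode_quality(costs)
```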
\section{Experiments and Results}
To validate our method we performed sequence decoding on a previously published \textit{in situ} sequencing dataset \cite{ke2013situ}. The dataset consists of a co-culture of human and mouse cells in which a four-base-long sequence of \textit{ACTB} mRNA has been sequenced. The sequence is identical in the two types of cells except for a variation in the second base that allows differentiation between human and mouse mRNAs. The dataset was acquired in 3D at different focal depths and then combined into a single image by MIP using the proprietary microscope software. We refer to \cite{ke2013situ} for further image acquisition details.
We evaluate the results in terms of recall with respect to the state-of-the-art image analysis pipeline \cite{pacureanu2014image}, and in terms of precision by evaluating the spatial correlation of the detected sequences with the biological information. The analysis is performed by running the pipeline with the parameter settings $h=0.15$, $d_{th}=2$, $k_1=0.23$, and $k_2=0.1$. We consider resolved barcodes of the targeted gene (with sequences \textit{AGGC} and \textit{AAGC}) as \textit{true positive} (TP) decoded sequences, and sequences that differ from the expected barcodes as \textit{false positive} (FP) results.
We exclude sequences belonging to \textit{homopolymers} (\textit{i.e.} barcodes consisting of a single letter, such as \textit{AAAA}, \textit{CCCC}, \textit{GGGG} and \textit{TTTT}) from the FP count. This kind of false detection arises from autofluorescence of biological structures, which often appears very similar to real fluorescent signals; homopolymer barcodes are never used when designing an \textit{in situ} sequencing experiment.
A comparison of results between the state-of-the-art and the proposed pipeline is shown in Fig.~\ref{fig1}. The proposed pipeline decodes $304$ TP and $122$ FP sequences prior to quality thresholding.
After evaluating the receiver operating characteristics (ROC) (Fig.~\ref{TPFP}), we selected a quality threshold to exclude unexpected sequences, which generally have a low quality metric since they often come from noise and false detections. Setting the quality threshold high enough to exclude all false positive results leads to $270$ TP signals, compared to $213$ barcodes correctly identified by the state-of-the-art pipeline.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{fp-tp_curve.pdf}
\caption{ROC curve. The recall of decoded barcodes corresponding to the targeted sequences (\textit{true positive rate}) against $1-$specificity of decoded barcodes corresponding to unexpected sequences (\textit{false positive rate}), at decreasing quality thresholds.}
\label{TPFP}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.935\textwidth]{Fig1.pdf}
\caption{Markers overlap on the general stain image of the two targeted barcodes (\textit{AGGC}, \textit{AAGC}) and unexpected barcodes, decoded by the state-of-the-art pipeline \textit{M1} \cite{pacureanu2014image} and the proposed method \textit{M2}.
The top-left image shows sequences decoded by \textit{M2} without any quality filtering.
The bottom-left image shows barcodes with a quality metric higher than a given threshold whose value has been set to filter out unexpected sequences. The right side of the figure shows zooms of four different regions where overlapping markers of decoded barcode sequences with high quality are shown in detail. M2 often resolves signals that were merged by M1.}
\label{fig1}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.935\textwidth]{gm.pdf}
\caption{Graphical representation of an example of barcode sequence decoding through the graphical model for three different mRNA molecules. On the left, labeled with numbers from 1 to 4, are cut-out composite images of a small region from the four sequencing cycles. Fluorescent signals from different channels are shown in different colours (A in orange, C in green, G in magenta, and T in cyan). Each $D$ variable represents a signal detection, marked with red crosses in each cut-out. For each pair of signal detections from two different cycles with distance smaller than \(d_{th}\), a $T$ variable is inserted, represented by white nodes in the graph. Each $D$ and $T$ node shows, respectively, the probability of being signal $p_1$ and the affinity value $\mu_t$. The graphical representation of the signal candidates present in the image cut-outs results in two independent connected components, represented by the two graphs labeled A and B. Solving the graphs by energy minimization decodes the \textit{AGCA}, \textit{CTGT}, and \textit{CTGC} barcode sequences, highlighted by the red paths in the figure.
}
\label{gm_fig}
\end{figure*}
\section{Conclusion}
To conclude, the proposed signal detection and decoding approach increases signal recall by $27\%$ while maintaining detection specificity. Based on a visual comparison of the proposed approach and the state-of-the-art, it is clear that clustered signals are resolved with higher accuracy. Increased recall will have an impact on all subsequent spatial statistics of tissue heterogeneity approached by \textit{in situ} sequencing.
\section*{Acknowledgment}
This research is part of the TissueMaps project, funded by the European Research council via ERC Consolidator grant 682810 to C. W{\"a}hlby. The data was kindly provided by M. Nilsson and his research group at SciLifeLab Stockholm.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{s:intro}
Glaucoma is one of the leading causes of blindness in the world. While its etiology is not fully understood, it is known to be caused by damage to the optic nerve head (ONH) that can be induced by intraocular pressure (IOP). Researchers hypothesized that {the} biomechanics of {the peripapillary (PP)} scleral region close to the ONH, shown to be an important determinant of ONH biomechanics, may play a major role in glaucoma pathogenesis and progression \citep{Sigal05,Burgoyne08}. Recently, novel custom instrumentation was developed that can induce a fixed level of IOP and precisely measure the mechanical strain in the posterior human sclera \citep{Fazio12a,Fazio12b}. A commercial laser speckle interferometer (ESPI) is used to measure the IOP-induced scleral displacement continuously around the {posterior} eye. The scleral displacement is processed to estimate the mechanical strain tensor on a grid of points on the scleral surface, and the largest eigenvalue of this strain tensor computed to yield the maximum principal strain (MPS), a scalar measurement for each location on the scleral surface that summarizes the magnitude of {tensile} strain at that location. Intuitively, scleral regions with higher MPS for a given level of IOP are more pliable, which in principle {could either} relieve IOP and reduce the potential for ONH damage {or focus strain at the ONH and increase glaucoma damage}. Age is a primary glaucoma risk factor and age-related stiffening occurs in other load-bearing soft tissues. Thus, the researchers hypothesized that age-related changes in scleral stiffness might contribute to the known age-related increase in glaucoma risk.
Utilizing this custom instrumentation, \citet{Fazio14} conducted a study to test the hypothesis that scleral stiffness increases with age, with the expectation that future work will follow to elucidate the specific role of scleral stiffness in glaucoma. They obtained twenty pairs of eyes from normal human donors in the Lions Eye Bank of Oregon in Portland, Oregon, and the Alabama Eye Bank in Birmingham, Alabama.
From each subject, the MPS was measured {in the posterior globes of} both left and right eyes on {a} partial spherical domain with 120 circumferential locations $\phi\in(\SI{ 0}{\degree},\SI{360}{\degree})$ and 120 meridional locations $\theta \in(\SI{ 9}{\degree},\SI{24}{\degree})$, where $\theta=\SI{0}{\degree}$ corresponds to the ONH. For each eye, MPS was measured under nine different IOP levels (7, 10, 15, 20, 25, 30, 35, 40, and {45 mmHg}). {Further scientific and technical details can be found in Section \ref{s:Application}}. Figure \ref{fg1} plots a polar azimuthal projection of MPS functions for one subject under {45 mmHg} IOP level. The center of each panel corresponds to the ONH of each eye. The hypothesis is that MPS will decrease with age, especially in scleral regions closer to the ONH.
\begin{figure}
\centerline{\includegraphics[scale=0.6]{rawMPS_for_paper.png}}
\caption{\footnotesize \textbf{Data plot:} Polar azimuthal projection of partial spherical MPS function for both left and right eyes from one subject of age 66yr under {45 mmHg} of intraocular pressure.}\label{fg1}
\end{figure}
The resulting data set is complex and high dimensional with many layers of structure. First, for each eye at each IOP level, the MPS measurements constitute a function on a two-dimensional partial spherical domain for which one needs to account for within-function (intrafunctional) correlations. Second, there are multiple sources of between-function (interfunctional) correlation. The measurements from the left and right eye from the same subject are expected to be correlated, and measurements from the same eye across the IOP levels should be serially correlated. The strength of these nested and serial correlations may potentially vary around the scleral surface. Based on preliminary looks at the data, it appears that the age effect on MPS may not be linear, and also appears to vary around the scleral surface. This data set is also enormous, with over 4.5 million measurements, which poses practical problems for model building and fitting.
Many researchers, when faced with such complexities, would use one of several strategies to simplify the data so they can analyze them. Some researchers would get rid of the complexities of the functional data by computing summaries and modeling only those, while discarding the original functional data. In this application area, some researchers create summaries in the peripapillary (PP) and mid-peripheral (MP) scleral regions by integrating over the region closest to the ONH ($ \SI{9}{\degree} - \SI{17}{\degree}$) and further out ($\SI{17}{\degree} - \SI{24} {\degree}$), respectively, or over several circumferential sections. This practice would miss any scientific insights not captured by these summaries. Alternatively, some researchers would only model a subset of the data (e.g. only a single IOP or single eye per subject) to avoid having to deal with potentially complex interfunctional correlation, or ignore this correlation entirely by modeling IOP within eyes or eyes within subject as independent. Some would ignore intrafunctional correlation by modeling individual pixels on the image independently. Many researchers would also only consider parametric or linear effects for covariates such as age without considering whether more complex nonparametric covariate effects might be necessary to capture the true relationship. All of these simplification strategies have potential statistical downsides.
In our opinion, it would be far preferable to model this complex function data set in its entirety using a statistical model that is flexible enough to capture all of its potentially complex intrafunctional and interfunctional structure. However, to the best of our knowledge, no statistical model has been presented in existing literature that simultaneously handles all of this structure. In this paper, we use the Bayesian functional mixed model (BayesFMM) framework, introduced in \cite{Morris06} and further developed in subsequent papers, to model these data, which has sufficient flexibility to capture nonparametric covariate effects, serial and nested interfunctional correlation, functions on a fine 2d partial spherical domain, and is computationally efficient enough to scale up to this enormous size and produce Bayesian inference for functional parameters as well as any desired summaries. This requires a careful description of how to utilize the BayesFMM framework to model smooth nonparametric effects and serial interfunctional correlation, which has not been done in any existing literature to date.
While motivated by and applied to the glaucoma scleral strain data, the BayesFMM framework we present here is more generally applicable to many types of complex, high dimensional functional data of modern interest including wearable computing data, genome-wide data, proteomics data, geospatial time series data and neuroimaging data. Our hope is that besides revealing scientific insights into glaucoma, this paper can serve as a template for performing a state-of-the-art functional response regression analysis for other complex functional data sets.
The rest of the paper is organized as follows. Section 2 contains a brief review of some relevant literature in functional regression, including existing methods for modeling nested or serial interfunctional correlation and nonparametric smooth covariate functional effects. Section 3 contains methods, overviewing the BayesFMM framework and providing methodological details for how to fit serially correlated functions and nonparametric smooth functional effects in this framework and discussing its properties. Section 4 describes a model selection heuristic that can be used to assess which of the various potential complex modeling structures are needed for the current data before having to run a full MCMC. Section 5 presents the details of our analysis of the glaucoma scleral strain data, including details on basis function and modeling components chosen, a summary of results, and various sensitivity analyses.
Section 6 contains a discussion of general scientific conclusions from our analysis and an assessment of the strengths and weaknesses of the BayesFMM framework for this setting of complex functional regression models. Supplementary materials provide additional computational details and graphical results of the analysis, as well as code to fit the models contained in the paper.
\section{Literature Review}
There is a rich and rapidly expanding literature on methods to perform functional regression, which includes \textit{functional predictor regression} (scalar-on-function), \textit{functional response regression} (function-on-scalar), and \textit{function-on-function regression}. For example, see \cite{Morris15} for an extensive review of work in this area, which has exploded in the past decade. In this paper, our goal is to regress the scleral strain MPS functions on predictors age and IOP, so we are interested in functional response regression. Here, we will summarize some key existing literature relevant to the structures present in the glaucoma scleral strain data set, including methods that account for nested and serial interfunctional correlation and smooth nonparametric covariate effects.
As summarized in \cite{Morris15}, much of the work on functional response regression assumes independently sampled functions, but a number of published models can handle interfunctional correlation. Many are focused on nested or crossed sampling designs that induce compound symmetry covariance structures among functions sampled within the same cluster, including \cite{Brumback98}, \cite{Morris2003}, \cite{Berhane08}, \cite{Aston10}, and \cite{Goldsmith16}. There are relatively few that model serially correlated functions, for which functions are observed at multiple levels of some continuous variable for the same subject. Most commonly, the functions occur along a grid of time points, which could be called \textit{longitudinally correlated functional data}, but can also occur over other continuous variables such as IOP for our glaucoma data. There are a number of papers that focus on functional predictor regression \citep{Goldsmith2012, Gertheiss2013, Kundu2016, Islam2016} or estimate multi-level principal components \citep{Greven10, Zipunnikov2011, Chen2012, Li2014, Zipunnikov2014, Park2015, Shou2015, Hasenstab2017} for serially correlated functions, but these papers do not deal directly with functional response regression, i.e. do not regress the functions on covariates while accounting for this serial correlation in the error structure.
Any methods built for a general functional mixed modeling framework containing multiple levels of general random effect functions, including the BayesFMM framework first introduced by \cite{Morris06} and the functional additive mixed model (FAMM) framework first introduced by \cite{Scheipl15}, can be used for functional response regression while accounting for serial interfunctional correlation if an acceptable parametric form for the serial variable can be found. However, no specific models or examples in existing papers have presented such a case. In this paper, we will demonstrate in detail how to account for serial correlation within the BayesFMM framework.
Most functional response regression work, while modeling coefficients as nonparametric in the functional domain $t$, has focused on models linear in covariates $x$, e.g. with terms such as $x \beta(t)$. A number of methods allow functional coefficients that are smooth and nonparametric in covariate $x$ as well, e.g. $f(x, t)$. The last chapter of \cite{GAM} describes additive mixed models (AMMs) that extend \cite{Laird1982} to include nonparametric fixed effects and parametric random effects. After specifying a parametric form for $t$ in fixed and random effects, this framework can fit terms $f(x,t)$ that are nonparametric in $x$ but parametric in $t$ using {\tt mgcv} in R. \cite{Scheipl15} describe a generalization of AMMs to the functional regression setting, yielding \emph{functional additive mixed models (FAMM)} that can fit functional response, functional predictor, or function-on-function regression models, and allowing terms $f(x, t)$ that are nonparametric in both $x$ and $t$, using {\tt mgcv} to fit the underlying models. Initially designed for splines, this work was extended to allow the use of functional principal components (fPC) for the random effect functions to handle sparse, irregularly sampled outcomes in \cite{Cederbaum2015}, to model generalized outcomes in \cite{Scheipl2016}, and to utilize a new boosting-based fitting procedure for some models that leads to other computational and modeling benefits in \cite{Brockhaus2015}. This series of work is summarized in a review article \cite{Greven2017}. These works utilize the fact, going back to \cite{Wahba1978}, that penalized splines can be represented using a linear mixed model framework. Using similar representations, the BayesFMM framework first introduced in \cite{Morris06} can also be used to fit nonparametric terms $f(x,t)$ with appropriate specification of the design matrices $X$ and $Z$ in the model, but this has not been done in any existing paper.
In this paper, we will describe in detail how to accommodate smooth nonparametric terms $f(x,t)$ in the BayesFMM framework.
To our knowledge, the only methods in existing literature with the flexibility to both fit smooth nonparametric functional covariate terms $f(x,t)$ and simultaneously account for nested and serial correlation are the FAMM framework \citep{Scheipl15,Greven2017} and the BayesFMM framework \citep{Morris06}. Both of these frameworks are extremely flexible, based on functional extensions of linear mixed models, but have important differences, which are discussed in detail in \cite{Morris2017} and \cite{Greven2017rejoinder}. The FAMM framework has several limitations that prevent its application to our glaucoma scleral strain data, including the requirement of functions to be on 1d Euclidean domains, use of spline bases for fixed effects with L2 penalties, and use of a computational approach that may not scale up well to enormous data sets like this. Thus, to fit these data, we adapt the BayesFMM framework to include smooth nonparametric age effects and serial correlation across IOP, and demonstrate how we can use it to reveal insights into the biomechanical etiology of glaucoma.
\section{Methods}
\label{s:method}
We first overview the BayesFMM framework in Section 3.1, and then for ease of exposition we build up the necessary components of our model separately before presenting our final general model. In Section 3.2, we demonstrate how to capture serial interfunctional correlation through functional growth curve models, and then demonstrate how to accommodate smooth nonparametric covariate functional effects in Section 3.3. Besides presenting the models, we also describe their properties and contrast with existing alternative methods. In Section 3.4 we present a general alternative form of the BayesFMM that includes smooth nonparametric terms that we use to fit our glaucoma scleral strain data.
\subsection{{Bayesian Functional Mixed Models (BayesFMM)}}
\label{sec:BayesFMM}
Suppose we have a sample of functions $Y_i(\mathbf{t}), i=1, \ldots, N$ observed on a common fine grid of size $T$ on a domain $\mathcal{T}$, which is potentially multi-dimensional and/or non-Euclidean \citep{Morris11}. The FMM introduced by \cite{Morris06} is a functional response regression model given by
\begin{eqnarray}
Y_i(\mathbf{t}) &=& \sum_{a=1}^{A} X_{ia} B_a(\mathbf{t}) + \sum_{h=1}^H \sum_{m=1}^{M_h} Z_{ihm} U_{hm}(\mathbf{t}) + E_i(\mathbf{t}), \label{eq:FMM}
\end{eqnarray}
where $B_a(\mathbf{t})$ are fixed effect functions that model the effect of covariate $X_{ia}$ on the response $Y$ at position $\mathbf{t}$ for each covariate $a=1,\ldots,A$, and the $U_{hm}(\mathbf{t})$ are random effect functions at level $h=1,\ldots,H$ corresponding to design matrix $Z_{ihm}$, with $m=1,\ldots,M_h$ indexing the random effects at each level and $M_h$ their number. As in linear mixed models for scalar data, the fixed and random effect predictors can be discrete or continuous, and can involve individual covariates or interactions of multiple covariates. Although not explicitly portrayed in (\ref{eq:FMM}), this modeling framework also accommodates functional predictors to perform function-on-function regression \citep{Meyer15}.
\textbf{Distributional and Covariance Assumptions:} Here, for simplicity, we first describe the Gaussian FMM with conditionally independent random effect and residual error functions, and then mention some other alternatives available in this framework. In this case, the random effect functions $U_{hm}(\mathbf{t})$ are iid mean zero Gaussian Processes with intrafunctional covariance cov$\{U_{hm}(\mathbf{t}_1),U_{hm}(\mathbf{t}_2)\} = Q_h(\mathbf{t}_1,\mathbf{t}_2)$ and the residual error functions $E_i(\mathbf{t})$ are iid mean zero Gaussian Processes with intrafunctional covariance cov$\{E_i(\mathbf{t}_1),E_i(\mathbf{t}_2)\}=S(\mathbf{t}_1, \mathbf{t}_2)$. Other extensions of this framework allow the option of conditional autoregressive (CAR) \citep{Zhang14} or Matern spatial covariance or AR($p$) temporal interfunctional correlation structures in the residual errors \citep{Zhu14}. Although focusing on Gaussian regression here, a robust version of this framework assuming heavier tailed distributions on the random effects or residuals is available \citep{Zhu11} if robustness to outliers is desired, and can also be utilized with any other features or modeling components in the BayesFMM framework.
\textbf{Basis Transform Modeling Approach:} A \textit{basis transform modeling approach} is used to fit model (\ref{eq:FMM}). This first involves representing the observed functions with a basis expansion with a set of basis functions $\psi_k(\mathbf{t}), k=1,\ldots, K$:
\begin{eqnarray}
Y_i(\mathbf{t}) &=& \sum_{k=1}^{K} Y^*_{ik} \psi_k(\mathbf{t}) \label{eq:basis}
\end{eqnarray}
While initially developed for wavelets \citep{Morris06}, as first discussed in \cite{Morris11}, the modeling approach can be used with any basis. It is meant to be used with \textit{lossless} transforms with $Y_i(\mathbf{t}) \equiv \sum_k Y^*_{ik} \psi_k(\mathbf{t})$ for all observed $\mathbf{t}$, so that the basis coefficients $\{Y^*_{ik}; k=1,\ldots,K\}$ contain all information within the observed functional data $\{Y_i(\mathbf{t}); \mathbf{t} = \mathbf{t}_1, \ldots, \mathbf{t}_T\}$, or at least \textit{near-lossless} with
\begin{eqnarray}
\left\lVert Y_i(\mathbf{t}) - \sum_{k=1}^K Y^*_{ik} \psi_k(\mathbf{t}) \right\rVert < \epsilon \hspace{8pt} \forall i=1,\ldots,N \label{eq:near-lossless}
\end{eqnarray}
for some small value $\epsilon$ and norm $\lVert \bullet \rVert$. This assures that the chosen basis is sufficiently rich that for practical purposes it can recapitulate the observed functional data, and visual inspection of the raw functions and their basis representations should reveal virtually no difference. Any basis functions can be used, including commonly used choices such as splines, wavelets, Fourier bases, and PCs, as well as creatively constructed custom bases, and they can be defined on multi-dimensional or non-Euclidean domains $\mathcal{T}$.
Rather than including the bases in a design matrix and using scalar regression methods to fit the model, our approach is to transform the observed functions into the basis space to obtain the basis coefficients $\mbox{\bf Y}^*$, a $N \times K$ matrix for which element $(i,k)$ contains the basis coefficient $k$ for observed function $i$, fit a basis-space version of the FMM to these coefficients, and then transform results back to the data space model (\ref{eq:FMM}) for estimation and inference. With basis representation written in matrix form $\mbox{\bf Y}=\mbox{\bf Y}^* \boldsymbol{\Psi}$ with $\boldsymbol{\Psi}$ a $K \times T$ matrix of basis functions evaluated on the observational grid with $\boldsymbol{\Psi}_{kj}=\psi_k(\mathbf{t}_j)$, the coefficients can be computed by $\mbox{\bf Y}^*=\mbox{\bf Y} \boldsymbol{\Psi}^-$ with $\boldsymbol{\Psi}^-=\boldsymbol{\Psi}' (\boldsymbol{\Psi} \boldsymbol{\Psi}')^{-1}$ as long as rank$(\boldsymbol{\Psi})=K$, or for certain basis functions including wavelets and Fourier bases their special structure enables fast algorithms for computing these coefficients.
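The projection into the basis space can be sketched in a few lines of NumPy (a toy example with a cosine basis; the sizes and names are ours, and any basis with rank$(\boldsymbol{\Psi})=K$ would do):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: N = 4 functions on a grid of T = 50 points, K = 6 cosine
# basis functions stored as rows of Psi (K x T).
T, K, N = 50, 6, 4
t = np.linspace(0, 1, T)
Psi = np.array([np.cos(np.pi * k * t) for k in range(K)])

# Functions lying exactly in the span of the basis (lossless case).
Y = rng.normal(size=(N, K)) @ Psi

# Generalized inverse Psi^- = Psi' (Psi Psi')^{-1}; coefficients Y* = Y Psi^-.
Psi_minus = Psi.T @ np.linalg.inv(Psi @ Psi.T)
Y_star = Y @ Psi_minus                       # N x K coefficient matrix

# Near-lossless check: reconstruct and compare to the observed functions.
err = np.abs(Y - Y_star @ Psi).max()
```

For wavelets or Fourier bases, fast transforms would replace the explicit generalized inverse, as noted above.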
\textbf{Basis Space Model:} Thus, rather than fitting model (\ref{eq:FMM}) directly, the basis-space version of the model is fit for each basis coefficient $k=1,\ldots,K$:
\begin{eqnarray}
\vspace{-24pt}
Y^*_{ik} = \sum_{a=1}^A X_{ia} B^*_{ak} + \sum_{h=1}^H \sum_{m=1}^{M_h} Z_{ihm} U^*_{hmk} + E^*_{ik}, \label{eq:basis_FMM}
\end{eqnarray}
where $B^*_{ak}$, $U^*_{hmk}$, and $E^*_{ik}$ are basis coefficients for the functional fixed effects $B_a(\mathbf{t})=\sum_k B^*_{ak} \psi_k(\mathbf{t})$, functional random effects $U_{hm}(\mathbf{t})=\sum_k U^*_{hmk} \psi_k(\mathbf{t})$, and functional residuals $E_i(\mathbf{t})=\sum_k E^*_{ik} \psi_k(\mathbf{t})$, respectively. While in principle correlation across basis coefficients can be accommodated, for complex, high-dimensional functions it may be beneficial to model these basis coefficients independently. For the Gaussian FMM with conditionally independent random effect and residual error functions, it is assumed that $U^*_{hmk} \sim N(0,q_{hk})$ and $E^*_{ik} \sim N(0,s_k)$, with $q_{hk}$ and $s_k$ scalar variance components. Although the coefficients are modeled independently in the basis space, this structure induces intrafunctional correlation in the data space according to the chosen basis functions. For example, for the residual error functions, the induced $T \times T$ intrafunctional covariance $\mbox{\bf S}$ with $\mbox{\bf S}_{ij}=\mbox{\bf S}(\mathbf{t}_i, \mathbf{t}_j)$ is given by:
\begin{eqnarray}
\mbox{\bf S}&=&\boldsymbol{\Psi}^{'} \mbox{\bf S}^* \boldsymbol{\Psi}, \label{eq:covariance}
\end{eqnarray}
where $\mbox{\bf S}^*=\mbox{diag}\{s_k; k=1,\ldots,K\}$, and $\mbox{\bf Q}_h$ defined likewise. For suitably chosen basis functions that effectively capture the characteristic structure of the observed functions $Y_i(\mathbf{t})$, this can allow a flexible class of covariance structures indexed by $K$ covariance parameters. One can assess the suitability of these assumptions by taking the basis-space variance components, computing (\ref{eq:covariance}), and plotting this covariance matrix to see if it appears to capture the salient structure. Figure \ref{f:combo_plots} panels (e) and (f) plot the intrafunctional correlation structure induced by the chosen tensor wavelet basis for the scleral strain MPS data set for a particular scleral location for the eye-to-eye random intercepts and residual error functions, and the file {\tt intrafunctional\_correlation.mp4} in the supplement shows a more extensive summary of all random levels across scleral locations.
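To sketch how diagonal basis-space variances induce a structured data-space covariance via (\ref{eq:covariance}), the following toy example uses a hypothetical cosine basis and made-up variances $s_k$, and verifies the identity by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)

K, T = 8, 40
t = np.linspace(0.0, 1.0, T)
Psi = np.array([np.cos(k * np.pi * t) for k in range(K)])  # K x T basis

# Hypothetical heteroscedastic basis-space residual variances s_k.
s = 1.0 / (1.0 + np.arange(K))
S_star = np.diag(s)

# Induced T x T data-space covariance S = Psi' S* Psi.
S = Psi.T @ S_star @ Psi

# Monte Carlo check: E_i(t) = sum_k E*_ik psi_k(t), with independent
# E*_ik ~ N(0, s_k), has exactly this covariance.
E_star = rng.normal(size=(100000, K)) * np.sqrt(s)
E = E_star @ Psi
S_hat = np.cov(E, rowvar=False)
max_dev = np.max(np.abs(S_hat - S))
```

The sample covariance of the simulated residual functions matches the induced $\mbox{\bf S}$ up to Monte Carlo error, mirroring the diagnostic plot described above.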
\textbf{Shrinkage Priors for Regularization of Fixed Effects:} If fitting this model using a frequentist approach, L1 or L2 penalties could be imposed on the basis space fixed effects to induce regularization/smoothing of the fixed effect functions $B_a(\mathbf{t})$, as described in \cite{Morris15}. However, this framework was designed to use a Bayesian modeling approach, in which case regularization of the fixed effect functions $B_a(\mathbf{t})$ is accomplished through specification of shrinkage priors for the corresponding basis coefficients:
\begin{eqnarray}
B^*_{ak} \sim g(\boldsymbol{\gamma}_{aj}) \label{eq:shrinkage_prior}
\end{eqnarray}
for some mean zero distribution $g(\bullet)$ with corresponding regularization parameters $\boldsymbol{\gamma}_{aj}$ indexed by $j=1,\ldots,J$ that define a partitioning of the basis coefficients $k=1,\ldots,K$ into \textit{regularization sets}, which are subsets of basis coefficients sharing the same regularization parameters. \cite{Morris06} used the spike-slab prior \citep{George1993} for $g(\bullet)$ and we also use that here, but other alternatives include Gaussian, Laplace \citep{Park2008}, Horseshoe \citep{Carvahlo2010}, Normal-Gamma \citep{Griffin2010}, and Dirichlet-Laplace \citep{Bhattacharya2015}. The regularization parameters $\boldsymbol{\gamma}_{aj}$ can be given hyperpriors or be estimated by empirical Bayes, which is described in detail for the spike-slab in \cite{Morris06}.
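For intuition, the shrinkage behavior of a spike-slab prior on a single coefficient can be sketched as follows; this is a generic illustration with a point-mass spike and Gaussian slab and hypothetical parameter values, not the exact BayesFMM updates:

```python
import numpy as np

def spike_slab_posterior_mean(b_hat, v, tau, pi):
    """Posterior mean of a coefficient under a spike-slab prior.

    Generic sketch of the model:
        b_hat | b ~ N(b, v)                  likelihood for the coefficient
        b ~ pi * N(0, tau) + (1 - pi) * delta_0
    """
    # Marginal densities of b_hat under the slab and spike components.
    slab = np.exp(-0.5 * b_hat**2 / (v + tau)) / np.sqrt(2 * np.pi * (v + tau))
    spike = np.exp(-0.5 * b_hat**2 / v) / np.sqrt(2 * np.pi * v)
    # Posterior probability that the coefficient comes from the slab.
    p_slab = pi * slab / (pi * slab + (1 - pi) * spike)
    # Conditional on the slab, the usual Gaussian (ridge) shrinkage applies.
    return p_slab * (tau / (tau + v)) * b_hat

# Large coefficients are nearly unshrunk; small ones are pulled toward zero,
# inducing nonlinear, adaptive regularization (hypothetical values).
big = spike_slab_posterior_mean(5.0, v=1.0, tau=100.0, pi=0.5)
small = spike_slab_posterior_mean(0.1, v=1.0, tau=100.0, pi=0.5)
```

This nonlinear shrinkage is what allows spike-slab priors to smooth small, noise-like basis coefficients aggressively while leaving strong signal coefficients largely intact.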
\textbf{Model Fitting Approach:} A fully Bayesian modeling approach is used: Markov chain Monte Carlo (MCMC) is run on the basis space model (\ref{eq:basis_FMM}) for each $k$, and the results are then transformed back to the data space, e.g. using $B_a(\mathbf{t})=\sum_k B^*_{ak} \psi_k(\mathbf{t})$, to yield posterior samples for each parameter in the data-space FMM (\ref{eq:FMM}). This requires specification of priors for the variance components; a vague empirical Bayes approach is used that centers the prior on REML starting values of these parameters and is made minimally informative so as to be equivalent to two data points of information, as detailed in Section \ref{s:Application}. BayesFMM fits a marginalized version of model (\ref{eq:basis_FMM}) with all random effects $\mbox{\bf U}^*$ integrated out, which accounts for the interfunctional correlation induced by the random effect levels when updating the fixed effects, and speeds convergence of the chain and calculations since the random effect functions themselves need not be sampled. As detailed in the supplement, the fixed effect updates involve conjugate (spike-slab) Gibbs steps, and variance components are updated via Metropolis-Hastings with proposal variances automatically computed from the corresponding Fisher information. If desired, posterior samples for random effects can be obtained by sampling from conjugate Gaussians.
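The variance-component Metropolis-Hastings step can be illustrated with a minimal generic sketch. The toy below samples a single variance from $n$ i.i.d. mean-zero Gaussians using a log-scale random walk; it is only a schematic of the kind of update used, not the actual BayesFMM sampler:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy target: variance s of n iid mean-zero Gaussian "residuals", with a
# flat prior on log(s); the exact posterior is inverse-gamma with mean
# ss / (n - 2), which the chain should recover.
n = 200
e = rng.normal(scale=1.5, size=n)            # true variance 2.25
ss = np.sum(e**2)

def log_post(theta):                         # theta = log(s)
    return -0.5 * n * theta - 0.5 * ss * np.exp(-theta)

theta, draws = 0.0, []
for _ in range(6000):
    prop = theta + 0.3 * rng.normal()        # symmetric random walk on log(s)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                         # Metropolis accept
    draws.append(np.exp(theta))

post_mean = np.mean(draws[1000:])            # discard burn-in
```

In BayesFMM the analogous updates are tuned automatically via Fisher information rather than a fixed step size, and operate on each $q_{hk}$ and $s_k$ within the marginalized model.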
\textbf{Bayesian Inference:} Given these posterior samples, one can compute any desired posterior probabilities and pointwise or joint credible bands that can be used for Bayesian inference. Joint bands can be constructed using the approach described in \cite{Ruppert03}. These inferential summaries can be constructed in the data or basis space for any functional of model parameters, including contrasts (e.g. $B_1(\mathbf{t}) - B_2(\mathbf{t})$), nonlinear transformations (e.g. exp$\{B_a(\mathbf{t})\}$), derivatives (e.g. $\partial f(x_a,\mathbf{t})/\partial x_a$), or integrals (e.g. $\int_{\mathbf{t} \in \mathcal{T}_0} B_a(\mathbf{t}) d \mathbf{t}$ for some $\mathcal{T}_0 \subset \mathcal{T}$) aggregating information across regions of $\mathbf{t}$. We make use of these to produce inference on numerous scientifically interesting summaries for our scleral strain MPS data in Section \ref{s:Application}.
\textbf{Example FMM for Scleral Strain MPS Data:} To illustrate this framework, suppose that we model MPS functions from both left and right eyes for each subject, but only for a specific IOP level, and suppose we are willing to assume a linear \textit{age} effect. Let $Y_{i1}(\mathbf{t})$ and $Y_{i2}(\mathbf{t})$ be MPS functions for the left and right eyes, respectively, from the $i$-th subject, with $\mathbf{t}=(\theta, \phi) \in \mathcal{T}$ indexing spherical coordinates on the scleral surface. We could represent these data with the following FMM:
\begin{align}
\label{e:FMMd2} Y_{ij}(\mathbf{t})= B_0(\mathbf{t})+X_{\mbox{age},i}B_{\mbox{age}}(\mathbf{t})+U_{i}(\mathbf{t})+E_{ij}(\mathbf{t}),
\end{align}
with $U_i(\mathbf{t}) \sim N\{\mbox{\bf 0}, Q(\bullet)\}$ and $E_{ij}(\mathbf{t}) \sim N\{\mbox{\bf 0}, S(\bullet)\}$. The $U_i(\mathbf{t})$ induce a generalized compound symmetry covariance structure between the functions from the left and right eyes of the same subject, with cov$\{Y_{ij}(\mathbf{t}),Y_{ij'}(\mathbf{t})\}=Q(\mathbf{t},\mathbf{t})+S(\mathbf{t},\mathbf{t})I(j=j')$. To simultaneously model data for all IOP levels, we would need to add a random effect level to capture the serial correlation across MPS functions at different IOP levels for the same eye. We next describe how this can be done.
\subsection{Accommodating Serial Interfunctional Correlation via Functional Growth Curves}
\label{s:growth}
Recall that in our scleral strain MPS data set, for each eye we have MPS functions from a series of IOP levels ranging from {7 mmHg to 45 mmHg}, which induces serial correlation across the scleral strain functions for the same eye according to IOP. It is important to account for this serial interfunctional correlation in order to obtain efficient estimates and accurate inference for the fixed effect functions in the model. Here we demonstrate how functional random effects in a functional mixed modeling framework can capture this serial correlation through a type of functional growth curve model, and discuss the properties of this strategy in the context of the BayesFMM framework using a basis transform modeling approach.
\textbf{Basic Growth Curve Model for Scleral Strain MPS Data:} For illustration, here we consider only the serial effect of IOP, omitting any age effect and restricting attention to the left eye of each subject. Suppose $Y_{ip}(\mathbf{t})$ is the MPS function for the $i$-th subject, $i=1,\ldots,N$, after exposure to an IOP level of $p$. Note that $Y_{ip}(\mathbf{t})$ is a function of $\mathbf{t}$ that varies across the serial variable $p$. To capture the serial effect of $p$, we consider the following \textit{functional growth curve model}:
{\begin{align}
\label{e:gcurve} Y_{ip}(\mathbf{t}) = m(p, \mathbf{t})+u_{i}(p, \mathbf{t})+E_{ip}(\mathbf{t}),
\end{align}}
where {$m(p,\mathbf{t})$ is the mean MPS for IOP level $p$ and scleral location $\mathbf{t}$, $u_{i}(p, \mathbf{t})$ is a mean zero random effect for subject $i$ that represents a subject-specific growth curve in $p$ that is allowed to vary across scleral location $\mathbf{t}$, and $E_{ip}(\mathbf{t})$ are residual error functions assumed to be independent and identically distributed mean zero Gaussians with covariance $S(\bullet)$. If we can find a suitable parametric form for the serial effect of $p$ with basis functions $G_d(p), d=0,1,\ldots,D$, we assume $m(p, \mathbf{t})=\sum_{d=0}^{D}B_{d}(\mathbf{t})G_d(p)$ and $u_{i}(p, \mathbf{t})=\sum_{d=0}^{D}U_{i,d}(\mathbf{t})G_d(p)$ with cov$\{U_{i,d}(\mathbf{t}_1), U_{i,d}(\mathbf{t}_2)\}=Q_d(\mathbf{t}_1, \mathbf{t}_2)$. In practice, we recommend using basis functions that are orthogonal across $d$, i.e. $\int G_d(p) G_{d'}(p)\, dp = 0$ for $d \ne d'$, which obviates the need for cross-covariance terms between $U_{i,d}(\mathbf{t})$ and $U_{i,d'}(\mathbf{t})$ and avoids additional computational complexity in the model.
The introduction of this eye-level random growth curve induces serial covariance across functions for the same eye at different levels of IOP, with cov$\{Y_{ip}(\mathbf{t}),Y_{ip'}(\mathbf{t})\}=\sum_{d=0}^D{G_d(p)G_d(p')Q_d(\mathbf{t},\mathbf{t})}.$ Because this covariance is indexed by $\mathbf{t}$, its strength and shape can vary across the scleral surface. Figure \ref{f:combo_plots} panel (d) contains the induced serial correlation across IOP for the marked scleral location, and the file {\tt IOP\_corr.mp4} in the supplementary materials is a movie file that demonstrates how this correlation varies over the scleral surface. Note also that cov$\{Y_{ip}(\mathbf{t}_1),Y_{ip'}(\mathbf{t}_2)\}=\sum_{d=0}^D{G_d(p)G_d(p')Q_d(\mathbf{t}_1,\mathbf{t}_2)},$ meaning that this structure enables ``borrowing of strength" from nearby $\mathbf{t}$ in determining the strength and shape of the serial covariance, according to the intrafunctional covariance indicated by the off-diagonal elements of $Q_d(\bullet)$. If model (\ref{e:gcurve}) is marginalized with respect to the random effect functions, the resulting error terms can be seen to contain the induced serial correlation structure, which is subsequently accounted for in any estimation or inference for the fixed effect functions.
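A small numpy sketch of the induced serial covariance at a fixed scleral location, using a hypothetical orthogonalized polynomial basis for $G_d(p)$ and made-up variance components $Q_d(\mathbf{t},\mathbf{t})$:

```python
import numpy as np

# IOP levels (mmHg) and a degree-2 polynomial growth-curve basis,
# orthonormalized over the IOP grid via QR (hypothetical choice).
p = np.array([7.0, 15.0, 30.0, 45.0])
V = np.vander(p, 3, increasing=True)         # columns 1, p, p^2
G_cols, _ = np.linalg.qr(V)                  # orthonormal columns over grid
G = G_cols.T                                 # rows G_d(p), d = 0, 1, 2

# Hypothetical variance components Q_d(t, t) at one scleral location t.
q = np.array([1.0, 0.3, 0.05])

# cov{Y_ip(t), Y_ip'(t)} = sum_d G_d(p) G_d(p') Q_d(t, t)
Sigma = (G.T * q) @ G                        # 4 x 4 across the IOP grid
corr = Sigma / np.sqrt(np.outer(np.diag(Sigma), np.diag(Sigma)))
```

The resulting `corr` is the kind of IOP-by-IOP correlation matrix plotted in Figure \ref{f:combo_plots} panel (d); repeating the computation with location-specific $Q_d(\mathbf{t},\mathbf{t})$ traces out how it varies over the sclera.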
\textbf{Incorporation into BayesFMM framework:} It can be seen that model (\ref{e:gcurve}) can be written as a functional mixed model with $D+1$ fixed effect predictors and $D+1$ random effect levels, each with $N$ subject-specific random effect functions. Thus, any FMM framework allowing multiple levels of random effect functions could be used to fit this model. In the BayesFMM framework, after transforming the observed functions $Y_{ip}(\mathbf{t})$ into the basis space through $Y_{ip}(\mathbf{t})=\sum_{k=1}^K Y^*_{ipk} \psi_k(\mathbf{t})$ as described in Section \ref{sec:BayesFMM}, the model for coefficient $k$ would be given by:
{\begin{align}
Y^*_{ipk} = \sum_{d=0}^D G_d(p) B^*_{dk}+\sum_{d=0}^D G_d(p) U^*_{idk}+E^*_{ipk},
\end{align}}
with $U^*_{idk} \sim N(0,q_{dk})$ and $E^*_{ipk} \sim N(0,s_{k})$. This would induce serial covariance across $p$ in the model for each basis coefficient, with cov$(Y^*_{ipk}, Y^*_{ip'k})=\sum_{d=0}^D G_d(p) G_d(p') q_{dk}$, and the marginalized model that is fit with the $U^*_{idk}$ integrated out will contain this serial covariance in the error structure and explicitly account for it when updating the fixed effect coefficients. This basis space model induces the serial covariance structure described in the data space model (\ref{e:gcurve}), with $Q_d(\mathbf{t}_1, \mathbf{t}_2) = \sum_{k=1}^K \psi_k(\mathbf{t}_1) \psi_k(\mathbf{t}_2) q_{dk}$. The heteroscedasticity of the variance components across basis functions $k$ allows the strength and shape of the serial covariance to vary across the scleral surface $\mathbf{t}$, and effectively borrows strength from nearby $\mathbf{t}$ through the chosen basis functions, as determined by the induced off-diagonal elements of $Q_d(\mathbf{t}_1, \mathbf{t}_2)$. Figure \ref{f:combo_plots} panel (e) plots the intrafunctional correlation structure corresponding to the random intercept $Q_0(\mathbf{t}_1,\mathbf{t}_2)$ induced by the tensor wavelet basis chosen for the scleral strain MPS data set at the marked scleral location, and file {\tt intrafunctional\_correlation.mp4} in the supplement presents the induced intrafunctional correlation for each of the $Q_d(\mathbf{t}_1, \mathbf{t}_2), (d=0,1,2)$ across all scleral locations.
This strategy can be used with any parametric model indicated by the $G_d(p), d=0,\ldots,D$, preferably orthogonalized. Section \ref{s:Application} demonstrates that the IOP effect in the scleral strain MPS data is hyperbolic, so we devise an orthogonalized hyperbolic model for these data. Also, note that while the serial variable IOP is sampled on a common grid across subjects for our data, this strategy allows the grid of serial variable values to vary across subjects.
\subsection{Smooth Nonparametric Covariate Functional Effects}
\label{s:nonpara}
One of the primary scientific goals in the scleral strain MPS data is to study the effect of age on MPS and assess how it varies around the scleral surface. Preliminary investigations of the data suggest that the age effect might not follow a simple parametric form, and a nonparametric representation might be appropriate. While the fixed effect functions in the BayesFMM framework are linear in the covariates, using the mixed model representation of penalized splines shown by \cite{Wahba1978}, it is possible to fit a semiparametric functional mixed model with a smooth nonparametric age effect using this framework, as we will demonstrate in this section.
\textbf{Smooth Nonparametric Age Effect for Scleral Strain MPS data}: For ease of exposition, in this section we consider just a single smooth nonparametric term with no other covariates or random effects. Thus, suppose for each subject $i=1,\ldots,N$ we model only a single scleral strain function $Y_i(\mathbf{t})$, say for the left eye at IOP = 45 mmHg, with the model:
\begin{align}
\label{e:NPM0} Y_{i}(\mathbf{t})= B_0(\mathbf{t}) + f(X_{\mbox{age}_i}, \mathbf{t})+E_{i}(\mathbf{t}),
\end{align}
where $B_0(\mathbf{t})$ is a functional intercept and $f(X_{\mbox{age}_i}, \mathbf{t})$ represents a nonparametric effect of age on MPS at scleral location $\mathbf{t}$, with $\int f(x,\mathbf{t})\,dx=0$ for all $\mathbf{t}$, and with $\int \{f''(x,\mathbf{t})\}^2 dx$ penalized to induce smoothness across $x$ for each $\mathbf{t}$.
As we will demonstrate, it is possible to represent this smooth nonparametric term as a sum of a linear fixed effect function for age and spline random effect functions,
\begin{align}
\label{e:splineFMM} f(X_{\mbox{age},i},\mathbf{t})=X_{\mbox{age}_i} B_1(\mathbf{t}) + \sum_{m=1}^{M+2} Z_{\boldsymbol{\mathcal{B}},m}(X_{\mbox{age}_i}) U_{\mathcal{S} m}(\mathbf{t})
\end{align}
for some suitably constructed random effect design matrix $\{Z_{\boldsymbol{\mathcal{B}},m}(x), m=1,\ldots,M+2\}$ based on Demmler-Reinsch basis functions \citep{DemmlerReinsch}, with spline random effects $U_{\mathcal{S} m}(\mathbf{t})$ following a mean zero Gaussian with cov$\{U_{\mathcal{S} m}(\mathbf{t}_1),U_{\mathcal{S} m}(\mathbf{t}_2)\} = Q_\mathcal{S}(\mathbf{t}_1,\mathbf{t}_2)$. These model components can be incorporated within a FMM framework like BayesFMM, as we now describe.
\textbf{Incorporation into BayesFMM framework:} To fit model (\ref{e:NPM0}) using the BayesFMM framework, we fit separate penalized splines for each basis coefficient $k$, which induces correlated penalized spline fits for each scleral location $\mathbf{t}$. Specifically, after transforming the observed functions $Y_{i}(\mathbf{t})$ into the basis space according to the basis representation $Y_{i}(\mathbf{t})=\sum_{k=1}^K Y^*_{ik} \psi_k(\mathbf{t})$ as described in Section \ref{sec:BayesFMM}, we specify the following model for each basis coefficient $k=1,\ldots,K$:
\begin{align}
\label{e:NPM} Y^*_{ik}= B^*_{0k} + f^*_k(X_{\mbox{age}_{i}})+E^*_{ik},
\end{align}
with $E^*_{ik} \sim N(0,s_k)$ and $f^*_k(x)$ a smooth nonparametric function of $x$ for basis function $k$. We pull out the intercept $B^*_{0k}$ and constrain $\int f^*_k(x) dx=0$ to ensure identifiability in additive models that contain multiple smooth nonparametric terms. We represent $B^*_{0k}+f^*_k(x)$ using B-spline basis functions,
\begin{align}
\label{e:Bspline} B^*_{0k}+f^*_k(x)=\sum_{m=1}^{M+4}\mathcal{B}_m(x)\nu^*_{mk},
\end{align}
where $\nu^*_{mk}$ are B-spline coefficients and $\mathcal{B}_m(x), m=1,\ldots,M+4$ are the cubic B-spline basis functions defined by the knots $\eta_1,\ldots,\eta_{M+8}$ such that
\begin{align}
\nonumber a=\eta_1=\eta_2=\eta_3=\eta_4<\eta_5<\cdots<\eta_{M+4}=\eta_{M+5}=\eta_{M+6}=\eta_{M+7}=\eta_{M+8}=b,
\end{align}
and $a$ and $b$ are two boundary knots \citep{Hastie09}. We can write model \eqref{e:NPM} in matrix form,
\begin{align}
\label{e:NPM2} \mbox{\bf y}^*_k=\boldsymbol{\mathcal{B}} \boldsymbol{\nu}^*_k +\mbox{\bf e}^*_k,
\end{align}
where $\mbox{\bf y}^*_k=(Y^*_{1k},\ldots,Y^*_{Nk})'$, $\boldsymbol{\mathcal{B}}$ is the $N\times(M+4)$ B-spline design matrix with $(i,m)$-th entry $\mathcal{B}_m(X_{\mbox{age},i})$, $\boldsymbol{\nu}^*_k=(\nu^*_{1k},\ldots,\nu^*_{(M+4)k})'$, and $\mbox{\bf e}^*_k=(E^*_{1k},\ldots,E^*_{Nk})'\sim N(\mbox{\bf 0}, s_k I_N)$. Following \cite{Wand08}, we assume the following prior distribution on the
B-spline coefficients:
\begin{align}
\boldsymbol{\nu}^*_k \sim MVN(\mbox{\bf 0}, q_{\mathcal{S} k} \boldsymbol{\Omega})
\end{align}
where $\boldsymbol{\Omega}$ is an $(M+4) \times (M+4)$ matrix with $\boldsymbol{\Omega}_{mm'}=\int_a^b \mathcal{B}_{m}''(x)\mathcal{B}_{m'}''(x) dx$. The resulting posterior mean of the B-spline coefficients is $\hat{\boldsymbol{\nu}}^*_k=(\boldsymbol{\mathcal{B}}'\boldsymbol{\mathcal{B}}+\lambda^*_k\boldsymbol{\Omega})^{-1}\boldsymbol{\mathcal{B}}'\mbox{\bf y}^*_k$, where $\lambda^*_k=s_k/q_{\mathcal{S} k}$. It can be shown that $\boldsymbol{\mathcal{B}} \hat{\boldsymbol{\nu}}^*_k$ corresponds to the O'Sullivan penalized spline estimator of $B^*_{0k}+f^*_k(x)$ with penalty term $\lambda^*_k \int_a^b \{f^{*''}_k(x)\}^2 dx$ \citep{Wand08}, and if the knots are placed at each observed $X_{\mbox{age}_i}$, then this corresponds to the cubic smoothing spline estimator.
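The ridge-type form of the penalized estimator can be sketched numerically; in this hypothetical toy, a Gaussian-bump design and a second-difference penalty stand in for the cubic B-spline design $\boldsymbol{\mathcal{B}}$ and the exact O'Sullivan matrix $\boldsymbol{\Omega}$:

```python
import numpy as np

rng = np.random.default_rng(2)

# One basis coefficient k: nu_hat = (B'B + lam * Omega)^{-1} B'y.
N, M4 = 60, 12                               # M4 plays the role of M + 4
x = np.sort(rng.uniform(0.0, 1.0, N))        # rescaled "ages"
y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=N)

# Gaussian-bump design as a stand-in for the B-spline design matrix.
centers = np.linspace(0.0, 1.0, M4)
B = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / 0.1) ** 2)

# Second-difference roughness penalty approximating Omega (rank M4 - 2).
D2 = np.diff(np.eye(M4), n=2, axis=0)
Omega = D2.T @ D2

def fit(lam):
    # lam corresponds to lambda*_k = s_k / q_{S k}
    return B @ np.linalg.solve(B.T @ B + lam * Omega, B.T @ y)

# The residual sum of squares is nondecreasing in the smoothing parameter:
# large lam shrinks the fit toward the penalty null space (straight lines).
rss_rough = np.sum((y - fit(1e-2)) ** 2)
rss_smooth = np.sum((y - fit(1e6)) ** 2)
```
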
The spectral decomposition of $\Omega$ allows us to reformulate this prior specification as a mixed model with independent random effects as follows. It is known that $\mbox{rank}(\Omega)=M+2$. Therefore, the spectral decomposition of $\Omega$ has the form $\Omega=PDP'$, where $D=\mbox{diag}(0,0,d_1,\ldots,d_{M+2})$ and $P'P=I_{M+4}$. Let $P=(X_{\Omega},Z_{\Omega})$, where $X_{\Omega}$ is an $(M+4)\times 2$ sub-matrix consisting of the first two columns of $P$ and $Z_{\Omega}$ is an $(M+4)\times (M+2)$ sub-matrix consisting of the remaining columns. Let $\boldsymbol{\beta}^*_k$ be a two-dimensional vector, and $\mbox{\bf u}^*_{\mathcal{S} k}=(U^*_{\mathcal{S} 1k},\ldots,U^*_{\mathcal{S} (M+2)k})'$ be an $(M+2)$-dimensional random vector. It can be shown that $\boldsymbol{\nu}^*_k=X_{\Omega} \boldsymbol{\beta}^*_k+Z_{\Omega}\mbox{diag}(d_1^{-1/2},\ldots,d_{M+2}^{-1/2})\mbox{\bf u}^*_{\mathcal{S} k}$ with $\boldsymbol{\beta}^*_k$ a fixed effect and $\mbox{\bf u}^*_{\mathcal{S} k} \sim MVN(\mbox{\bf 0}, q_{\mathcal{S} k} I_{M+2})$.
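The reparameterization can be checked numerically; the sketch below uses a second-difference penalty, which shares the rank deficiency of 2, as a hypothetical stand-in for $\Omega$:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in penalty with the same rank deficiency (2) as Omega.
M4 = 10
D2 = np.diff(np.eye(M4), n=2, axis=0)
Omega = D2.T @ D2

d_all, P = np.linalg.eigh(Omega)             # eigenvalues in ascending order
rank_Omega = int(np.sum(d_all > 1e-10))      # = M4 - 2

X_Omega = P[:, :2]                           # null-space (unpenalized) columns
Z_Omega = P[:, 2:]                           # penalized directions
d = d_all[2:]

# Any coefficient vector decomposes as X_Omega beta + Z_Omega d^{-1/2} u,
# and the roughness penalty nu' Omega nu becomes the ridge penalty u'u.
nu = rng.normal(size=M4)
beta = X_Omega.T @ nu
u = np.sqrt(d) * (Z_Omega.T @ nu)
nu_back = X_Omega @ beta + Z_Omega @ (u / np.sqrt(d))

penalty_direct = nu @ Omega @ nu
penalty_reparam = u @ u
```

The equality of `penalty_direct` and `penalty_reparam` is what turns the curvature penalty into an i.i.d. Gaussian prior on $\mbox{\bf u}^*_{\mathcal{S} k}$, i.e. an ordinary random effect level.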
Therefore, we have the following mixed model representation of \eqref{e:NPM},
\begin{align}
\nonumber \mbox{\bf y}^*_k &=\boldsymbol{\mathcal{B}} \boldsymbol{\nu}^*_k +\mbox{\bf e}^*_k \\
\nonumber &=\boldsymbol{\mathcal{B}}\{X_{\Omega}\boldsymbol{\beta}^*_k+Z_{\Omega}\mbox{diag}(d_1^{-1/2},\ldots,d_{M+2}^{-1/2})\mbox{\bf u}^*_{\mathcal{S} k}\}+\mbox{\bf e}^*_k\\
\label{e:Bspline2} &=X_{\boldsymbol{\mathcal{B}}}\boldsymbol{\beta}^*_k+Z_{\boldsymbol{\mathcal{B}}}\mbox{\bf u}^*_{\mathcal{S} k}+\mbox{\bf e}^*_k,
\end{align}
where $X_{\boldsymbol{\mathcal{B}}}=\boldsymbol{\mathcal{B}} X_{\Omega}$ and $Z_{\boldsymbol{\mathcal{B}}}=\boldsymbol{\mathcal{B}} Z_{\Omega}\mbox{diag}(d_1^{-1/2},\ldots,d_{M+2}^{-1/2})$. The columns of $Z_{\boldsymbol{\mathcal{B}}}$ are called the Demmler-Reinsch spline bases \citep{DemmlerReinsch}. It can be shown that $X_{\boldsymbol{\mathcal{B}}}$ spans the space of linear functions, so model (\ref{e:Bspline2}) can equivalently be rewritten as
\begin{align}
\label{e:Bspline3} \mbox{\bf y}^*_k= \mbox{\bf 1}_N B^*_{0k}+\mathbf{\bf x}_{\mbox{age}} B^*_{1k}+Z_{\boldsymbol{\mathcal{B}}}\mbox{\bf u}^*_{\mathcal{S} k}+\mbox{\bf e}^*_k,
\end{align}
where $\mbox{\bf 1}_N$ is an $N$-dimensional vector of 1's and $\mathbf{\bf x}_{\mbox{age}}=(X_{\mbox{age},1},\ldots,X_{\mbox{age},N})'$. We see that this has the form of a linear mixed model with a level of spline random effects with design matrix $Z_{\boldsymbol{\mathcal{B}}}$ and random effects $\mbox{\bf u}^*_{\mathcal{S} k} \sim MVN(\mbox{\bf 0}, q_{\mathcal{S} k} I_{M+2})$. With the intercept term pulled out as implied by the $\int f^*_k(x) dx=0$ assumption, the term $f^*_k(X_{\mbox{age}_i})$ in (\ref{e:NPM}) is given by $X_{\mbox{age}_i} B^*_{1k} + \sum_{m=1}^{M+2} Z_{\boldsymbol{\mathcal{B}},m}(X_{\mbox{age}_i}) U^*_{\mathcal{S} mk}$, and thus in the BayesFMM framework we can incorporate a nonparametric term $f^*_k(x)$ by simply including a linear fixed effect $x B^*_{1k}$ plus a level of random effects with the corresponding Demmler-Reinsch design matrix. These penalized splines for each basis coefficient $k$, when projected back to the function space, induce a smooth nonparametric functional effect $f(X_{\mbox{age}_i},\mathbf{t})$ given by (\ref{e:splineFMM}), with $Q_\mathcal{S}(\mathbf{t}_1,\mathbf{t}_2)=\sum_k \psi_k(\mathbf{t}_1) \psi_k(\mathbf{t}_2) q_{\mathcal{S} k}$. Based on these derivations, we can add any additional smooth nonparametric term $f(z,\mathbf{t})$ to the FMM framework by simply adding a linear fixed effect function $z B_2(\mathbf{t})$ and an additional level of spline random effects $\sum_{m=1}^{M_z+2} Z_{\boldsymbol{\mathcal{B}},m}(z) U_{\mathcal{S}_z m}(\mathbf{t})$, with $U_{\mathcal{S}_z m}(\mathbf{t})$ a mean zero Gaussian process with covariance cov$\{U_{\mathcal{S}_z m}(\mathbf{t}_1),U_{\mathcal{S}_z m}(\mathbf{t}_2)\}=Q_{\mathcal{S}_z}(\mathbf{t}_1, \mathbf{t}_2).$
A similar procedure could be followed to utilize other spline modeling approaches within this framework, e.g. P-splines \citep{PSplines} with differencing penalties or truncated polynomial splines \citep{Ruppert03}, but we prefer the O'Sullivan splines \citep{Wand08} given their natural second derivative penalty and formal connection to smoothing splines.
\textbf{Intrafunctional Correlation of $f(x,\mathbf{t})$ Across $\mathbf{t}$:} This framework allows the nonparametric smooth effect $f(x,\mathbf{t})$ of $x$ to vary over $\mathbf{t}$, but is not the same as modeling independent splines for each $\mathbf{t}$. Because the splines are fit in the basis space, the nonparametric fits are correlated intrafunctionally by:
\begin{align}
\label{e:SplineCorr} \mbox{cov}\{f(x,\mathbf{t}_1), f(x,\mathbf{t}_2)|B_1(\mathbf{t}_1),B_1(\mathbf{t}_2),q_{\mathcal{S} k}\} = \sum_{k=1}^K \sum_{m=1}^{M+2} \psi_k(\mathbf{t}_1) \psi_k(\mathbf{t}_2) \{Z_{\boldsymbol{\mathcal{B}},m}(x)\}^2 q_{\mathcal{S} k}.
\end{align}
This means that the spline fit for $\mathbf{t}_1$ borrows strength from other functional locations $\mathbf{t}_2$ according to the effective intrafunctional covariance structure $Q_{\mathcal{S}}(\mathbf{t}_1, \mathbf{t}_2)$ that induces smoothing across $\mathbf{t}$ in the spline fits of $f(x,\mathbf{t})$.
\textbf{Smoothing Parameter of $x$, $\lambda(\mathbf{t})$, is Nonstationary and Smooth Across $\mathbf{t}$}: In our BayesFMM implementation of the smooth nonparametric term $f(x,\mathbf{t})$, we allow the penalized spline for each basis function $k$ to have its own smoothing parameter $\lambda_k=s_{k}/q_{\mathcal{S} k}$. The basis space model induces a residual error covariance matrix cov$\{E_i(\mathbf{t}_1), E_i(\mathbf{t}_2)\} = S_{\mathbf{t}_1, \mathbf{t}_2}$ back in the data space, with diagonal elements $s(\mathbf{t})$, and a spline random effect covariance matrix cov$\{U_{\mathcal{S} m}(\mathbf{t}_1),U_{\mathcal{S} m}(\mathbf{t}_2)\}=Q_{\mathcal{S}}(\mathbf{t}_1, \mathbf{t}_2)$ back in the data space, with diagonal elements $q_{\mathcal{S}}(\mathbf{t})$. Thus, the effective smoothing parameter for the induced spline fit $f(x,\mathbf{t})$ at location $\mathbf{t}$ is given by
\begin{align}
\label{e:lambdat} \lambda(\mathbf{t}) = s(\mathbf{t})/q_{\mathcal{S}}(\mathbf{t}),
\end{align}
meaning that the smoothness in $x$ is allowed to vary across $\mathbf{t}$, enabling some parts of the function to be linear with large $\lambda(\mathbf{t})$ and others to be nonlinear with small $\lambda(\mathbf{t})$. Also, this smoothing parameter is not estimated independently for each $\mathbf{t}$, but the off-diagonal elements of $\mbox{\bf S}$ and $\mbox{\bf Q}_{\mathcal{S}}$ imply a dependency across $\mathbf{t}$ in $\lambda(\mathbf{t})$, meaning that the model ``borrows strength'' across $\mathbf{t}$ leading to smoothness in $\lambda(\mathbf{t})$ across $\mathbf{t}$.
We believe this to be the first presentation of a model with such flexibility in the literature, i.e. with $f(x,\mathbf{t})$ varying smoothly across $\mathbf{t}$ with the smoothing parameter in $x$, $\lambda(\mathbf{t})$, also varying smoothly across $\mathbf{t}$. The FAMM models of \cite{Scheipl15} and \cite{Greven2017} estimate terms like $f(x,\mathbf{t})$ that are smooth across both $x$ and $\mathbf{t}$, but utilize an additive penalty term involving marginal smoothing parameters in the $x$ and $\mathbf{t}$ directions, $\lambda_x$ and $\lambda_{\mathbf{t}}$. This structure does not allow the type of nonstationarities enabled here, which in Section 5 of the supplement we demonstrate are necessary to accurately model the scleral strain MPS data. It may be possible in the FAMM framework to accommodate this type of flexibility by putting a spline on $\lambda_x$ that varies smoothly across $\mathbf{t}$, but this has not been done in any published paper to date, and it is not clear whether such an approach would be computationally feasible for large functional data sets.
\textbf{Degrees of Freedom Function $DF(\mathbf{t})$:} In the penalized spline literature, with the penalized spline estimator given by $\hat{f}(x)=\mathcal{B}(\mathcal{B}' \mathcal{B} + \lambda \Omega)^{-1} \mathcal{B}' \mbox{\bf y}=\mbox{\bf X}(\lambda) \mbox{\bf y}$, a standard summary of the nonlinearity of the fit is the dimensionality of the projection space, $DF=\mbox{trace}\{\mbox{\bf X}(\lambda)\}$, called the \textit{degrees of freedom} of the fit. $DF=2$ indicates a linear fit, while $DF \gg 2$ indicates substantial nonlinearity. To assess how the degree of nonlinearity of the spline fit $f(x,\mathbf{t})$ varies over $\mathbf{t}$, we can compute the \textit{degrees of freedom function} $DF(\mathbf{t})$ marginally across $\mathbf{t}$ by
\begin{align}
\label{e:dft} DF(\mathbf{t}) = \mbox{trace}[X\{\lambda({\mathbf{t}})\}] = \mbox{trace}[\mathcal{B}\{\mathcal{B}'\mathcal{B} + \lambda({\mathbf{t}}) \Omega\}^{-1} \mathcal{B}'],
\end{align}
with $\lambda({\mathbf{t}})$ defined as in (\ref{e:lambdat}). In general semiparametric functional mixed models with other levels of random effects that account for interfunctional covariance according to model (\ref{e:SPFMM}) below, as necessary for modeling our scleral strain MPS data, the derivation of $DF(\mathbf{t})$ is more complex and is outlined in Section 2 of the supplementary materials. Panel (c) of Figure \ref{f:combo_plots} presents $DF(\mathbf{t})$ for the MPS data.
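The dependence of the degrees of freedom on the smoothing parameter can be sketched numerically; this hypothetical toy uses a Gaussian-bump design and a second-difference penalty standing in for the B-spline design and O'Sullivan penalty:

```python
import numpy as np

# DF(lam) = trace{ B (B'B + lam * Omega)^{-1} B' } for a stand-in design.
N, M4 = 50, 10
x = np.linspace(0.0, 1.0, N)
centers = np.linspace(0.0, 1.0, M4)
B = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / 0.1) ** 2)
D2 = np.diff(np.eye(M4), n=2, axis=0)
Omega = D2.T @ D2                            # rank M4 - 2; null space = lines

def df(lam):
    return np.trace(B @ np.linalg.solve(B.T @ B + lam * Omega, B.T))

# As lambda(t) -> infinity the fit collapses to the penalty null space
# (DF -> 2, a straight line); as lambda(t) -> 0 it approaches the
# unpenalized fit with DF near the number of basis functions.
df_rough = df(1e-6)
df_smooth = df(1e6)
```

Evaluating `df` at $\lambda(\mathbf{t})$ on a grid of scleral locations produces the kind of $DF(\mathbf{t})$ map shown in panel (c) of Figure \ref{f:combo_plots}.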
\subsection{General Bayesian Semiparametric Functional Mixed Model}
In order to model the highly structured scleral strain MPS data set, we need to include all of the modeling structures described in the preceding sections, including random effects to capture nested and serial interfunctional correlation as well as smooth nonparametric covariate effect functions, together in a common BayesFMM model. To highlight its ability to model smooth nonparametric structures as described in Section \ref{s:nonpara}, it is useful to adapt the notation of the core BayesFMM model to explicitly include these terms. We term this version of the FMM a \textit{semiparametric functional mixed model}, since it includes both linear and smooth covariate effects.
Given a sample of functions $Y_{i}(\mathbf{t}); i=1,\ldots,N; \mathbf{t} \in \mathscr{T}$, with covariates for fixed linear effects $X_{ia_l}, a_l=1,\ldots,A_l$, smooth nonparametric effects $X_{ia_n}, a_n=1,\ldots,A_n$, and $H$ levels of random effect covariates $Z_{ihm}, h=1,\ldots, H; m=1,\ldots, M_h$, we have the following semiparametric FMM:
\begin{align}
Y_{i}(\mathbf{t})=&\sum_{a_l=1}^{A_l} X_{ia_l} B_{a_l}(\mathbf{t}) + \sum_{a_n=1}^{A_n} f(X_{ia_n},\mathbf{t}) + \sum_{h=1}^{H} \sum_{m=1}^{M_h} Z_{ihm} U_{hm}(\mathbf{t}) + E_{i}(\mathbf{t}), \label{e:SPFMM}
\end{align}
with $U_{hm}(\mathbf{t}) \sim GP(\mathbf{0}, Q_h)$ and $E_{i}(\mathbf{t}) \sim GP(\mathbf{0}, S)$ being mean zero Gaussian processes with covariance surfaces $Q_h, h=1,\ldots,H$ and $S$ defined on $\mathscr{T} \times \mathscr{T}$.
Using the structures defined in Section \ref{s:nonpara}, this model can be directly fit by the BayesFMM software of \cite{Morris06} using the following FMM:
\begin{align}
\label{e:SPFMM2} Y_{i}(\mathbf{t})=&\sum_{a_l=1}^{A_l} X_{ia_l} B_{a_l}(\mathbf{t}) + \sum_{a_n=1}^{A_n} X_{ia_n} B_{a_n}(\mathbf{t}) + \\
\nonumber &\sum_{a_n=1}^{A_n} \sum_{m_{a_n}=1}^{M_{a_n}+2} Z_{\mathcal{B} m_{a_n}}(X_{i a_n}) U_{\mathcal{S} a_n m_{a_n}}(\mathbf{t}) + \sum_{h=1}^{H} \sum_{m=1}^{M_h} Z_{ihm} U_{hm}(\mathbf{t}) + E_{i}(\mathbf{t}),
\end{align}
with $M_{a_n}$ being the number of interior knots for the spline for $X_{i a_n}$, $Z_{\mathcal{B} m_{a_n}}(X_{ia_n})$ the corresponding Demmler-Reinsch design matrix, $U_{\mathcal{S} a_n m_{a_n}} (\mathbf{t}) \sim GP(\mathbf{0}, Q_{\mathcal{S} a_n})$ the corresponding spline random effect functions, and $U_{hm}(\mathbf{t}) \sim GP(\mathbf{0}, Q_h)$ and $E_{i}(\mathbf{t}) \sim GP(\mathbf{0}, S)$ modeling the interfunction covariance structure. As described above, the model would be fit in the transformed basis space. We first fit the basis space model with all random effects integrated out, and then sample the spline random effects from their complete conditional distribution while integrating out the other $H$ levels of random effects that capture any interfunctional covariance, and then project back to the function space in order to construct posterior samples of $f(x_{a_n}, \mathbf{t})$ on any desired grid of $\mathbf{t}$.
While omitted from (\ref{e:SPFMM}) for ease of presentation, this model can also easily be extended to include any desired parametric-nonparametric interaction terms, with interaction of parametrically modeled covariate $X_{ia_l}$ and nonparametrically modeled covariate $X_{ia_n}$ being represented by the term $X_{ia_l} f_{a_l}(X_{ia_n},\mathbf{t})$. For $X_{ia_l}$ that are categorical dummy variables, this allows separate nonparametric fits in $(X_{ia_n},\mathbf{t})$ for different levels of the dummy variable. For continuous $X_{ia_l}$, this allows the corresponding slope to vary smoothly and nonparametrically with both $X_{ia_n}$ and $\mathbf{t}$. For example, in our scleral strain data, one may wish to include an interaction term to allow the nonparametric age effect to vary across IOP levels. If dummy variables were specified for each IOP level, this would allow separate independent nonparametric age effects for each IOP level. If IOP is modeled continuously via a parametric model like the hyperbolic model described in Section \ref{s:Application}, this would allow the hyperbolic coefficients to vary smoothly by age and scleral position, which would be equivalent to nonparametric age effects that vary across IOP but borrow strength from nearby IOP according to the structure induced by the hyperbolic model. In either case, the fixed effect and random spline design matrices corresponding to the $X_{ia_l} f_{a_l}(X_{ia_n},\mathbf{t})$ would be given by $X_{ia_l} X_{ia_n}$ and $X_{ia_l} Z_{\mathcal{B} m}(X_{i a_n})$, respectively, which are straightforwardly included in the FMM. As described in Section \ref{s:Application}, we considered these interaction structures, but found they did not appear necessary for representing the scleral strain MPS, so they were not included in the final model.
\section{Model Selection Heuristic for Semiparametric BayesFMM}
\label{s:model_sel}
Given the extensive flexibility of the semiparametric BayesFMM framework, there are a large number of modeling decisions to make. For example, in our scleral strain MPS data set, should the age effect be linear or nonparametric? If nonparametric, should the smoothing parameter in age be allowed to vary across the scleral surface $\mathbf{t}$, or is a common smoothing parameter across all scleral locations sufficient? Should the fixed IOP effect be linear, hyperbolic, or nonparametric? Should there be an interaction of age and IOP? Should there be a fixed left vs. right eye effect? For the random effect levels, is the subject-specific random effect necessary to account for correlation between right and left eyes for the same subject? Is the correlation across functions from multiple IOP levels for the same eye sufficiently handled by a compound symmetry structure assuming equal correlations, or is a structure allowing serial correlation necessary? Should this serial correlation be based on a linear, parabolic, or hyperbolic model? These decisions are challenging to make in a simple generalized additive mixed model framework with scalar responses, and become even more challenging in the current setting with complex, high-dimensional functional responses.
There are some papers in the frequentist literature for performing variable selection in functional regression contexts \citep{Scheipl2013, Gertheiss2013b, Brockhaus2015}. However, there is a lack of functional regression model selection methods for MCMC-based fully Bayesian models such as the semiparametric BayesFMM, which present special challenges. One could split the data into training and validation data sets, fit separate MCMC for each prospective model in the training data, and then compute ratios of predicted marginal densities, integrating over MCMC posterior samples, for the validation data, as done in \cite{Zhu14}, for example. These predictive Bayes Factors would provide a rigorous model selection measure. Alternatively, parallel MCMC chains could be run for each prospective model, with a multinomial random variable with a Dirichlet prior used to select among and perform Bayesian model averaging across models as the chains progress. These strategies might work fine for simple, low-dimensional data sets or settings with only a few prospective models, but for the current setting with complex, high-dimensional data, they are impractical.
In this section, we present a model selection heuristic we have developed that can explore a number of potential model structures to find which seem most appropriate for the given data without running any MCMC, and that also provides ML and REML estimates that can be used as starting values for the parameters in the BayesFMM. This heuristic is admittedly \textit{ad hoc}, but it is based on standard methods and appears to perform well in simulations, so we believe it can be a useful tool for modelers to assess which structures to include in their semiparametric FMM.
Our overall approach is to fit linear mixed models (LMM) to each basis coefficient $k$ using the {\tt lme} function in R \citep{nlme} for each prospective model, and then use a weighted voting scheme based on importance weights for each basis and an adapted Bayesian Information Criterion (aBIC) to obtain probability scores for each prospective model.
Here we outline the steps in detail.
\begin{enumerate}
\item{\textbf{Basis transform and importance weights:}} Transform the raw functions $Y_i(\mathbf{t}), i=1,\ldots, N$ to the basis space $Y^*_{ik}, k=1,\ldots,K$, and compute a series of weights $w_k$ that measure the relative importance of each basis for representing the data set. These weights can be computed by $w_k=\sum_{i}{Y_{ik}^*}^2/\sum_{i}\sum_{k}{Y_{ik}^*}^2$, with $\sum_k w_k=1$. For orthogonal $\psi_k(\mathbf{t})$, the $w_k$ represent the relative percent energy captured by basis coefficient $k$.
\item{\textbf{Fit basis-specific LMM and compute aBIC scores:}} For each prospective model $\mathcal{M}_c, c=1,\ldots,C$, use {\tt lme} in R \citep{nlme} to fit the corresponding LMM to the data for each basis coefficient $k=1,\ldots,K$, and compute an adapted version of the BIC ($aBIC_{ck}$), which we define as:
\begin{align}
aBIC_{ck} = -2\, \mbox{log-likelihood}_{ck} + n_{\mbox{par},c}\log(N) \nonumber,
\end{align}
where the log-likelihood is the marginal likelihood of the fixed effect and variance components of the model with the non-spline random effects integrated out conditional on the data for basis $k$, $N$ is the total number of observations in the dataset for basis $k$, and $n_{\mbox{par},c}$ is the total number of parameters of model $c$. As discussed by \cite{Vaida2005} and \cite{DIC}, selection of the effective number of parameters for LMM or Bayesian hierarchical models is tricky and context dependent. If inference is desired on the random effects themselves, then counting only fixed effects and variance components as parameters is not appropriate. In our setting, we are not interested in the random effects at levels capturing interfunctional correlation as we work with the marginalized model, but for nonparametric terms $f(x,\mathbf{t})$ we are clearly interested in the ``random effects'' corresponding to the spline coefficients. Thus, we count the number of parameters to be the sum of the number of fixed effects, the number of variance components, and the estimated degrees of freedom for each nonparametric term. This last term adjusts appropriately for the extra parameters of the spline fits even though they are captured as random effects in the LMM.
\item{\textbf{Use weighted voting scheme to rank models:}} We compute a probability weight $P_c$ for each model $\mathcal{M}_c; c=1,\ldots,C$:
\begin{align}
\nonumber P_c=\sum_{k=1}^K w_k I\{c=\argmin_{c'}aBIC_{c'k}\}
\end{align}
\end{enumerate}
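Steps 1 and 3 above can be sketched in Python as follows (a numpy-based sketch with function names of our choosing; fitting the per-basis LMMs in step 2, e.g. with {\tt lme}, is assumed to have already produced the $aBIC_{ck}$ matrix):

```python
import numpy as np

def importance_weights(Y_star):
    """Step 1: w_k = sum_i Y*_{ik}^2 / sum_{i,k} Y*_{ik}^2 (sums to 1)."""
    energy = np.sum(Y_star ** 2, axis=0)   # per-basis energy across the sample
    return energy / energy.sum()

def model_probability_scores(aBIC, w):
    """Step 3: P_c = sum_k w_k 1{c = argmin_{c'} aBIC_{c'k}}."""
    winners = np.argmin(aBIC, axis=0)      # best model for each basis k
    P = np.zeros(aBIC.shape[0])
    for k, c in enumerate(winners):
        P[c] += w[k]
    return P

# toy example: 3 functions, 3 basis coefficients, 2 candidate models
Y_star = np.array([[2.0, 1.0, 0.1],
                   [1.8, 0.9, 0.2],
                   [2.2, 1.1, 0.1]])
w = importance_weights(Y_star)             # basis 1 carries most of the energy
aBIC = np.array([[10., 6., 9.],            # model 1 wins bases 1 and 3
                 [12., 5., 11.]])
P = model_probability_scores(aBIC, w)      # P sums to 1; model 1 favored
```

Because the votes are weighted by $w_k$, a model that wins only low-energy basis coefficients receives a small score even if it wins many of them.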
This procedure is applied in two steps: first assessing different fixed effect models (including parametric and/or nonparametric effects), and second assessing various random effect structures for capturing interfunctional variability while conditioning on the best fixed effect model.
The indicator $I\{c=\argmin_{c'}aBIC_{c'k}\}$ flags whether model $\mathcal{M}_c$ is the best in terms of $aBIC$ for the data on the $k^{th}$ basis. Thus, $P_c$ is computed via a weighted voting scheme: an aggregate measure of the proportion of times $\mathcal{M}_c$ is the best model across all basis coefficients, with basis coefficients weighted by $w_k$.
In this way, the model fit for basis coefficients that account for a larger proportion of the total variability in the data count more towards the overall model selection. Empirically, we have found this weighted voting scheme seems to work well, as it is robust in the sense of not allowing any one basis function, especially one explaining a relatively low proportion of total energy for the given data set, to dominate the model selection because of an extreme $aBIC$ score. This can also be applied using alternative measures (e.g. $aAIC$). We acknowledge that this strategy is \textit{ad hoc} and more rigorous model selection methods for settings like this are needed, but we believe it can provide useful guidance for modelers and performs well in simulations, as described below.
\textbf{A word of caution:} This heuristic is meant for selecting among various different modeling structures for specified covariates as done for our case study, or perhaps could be used to select among a few covariates, but \textit{it is not intended for high-dimensional variable selection across many potential predictors}. In such settings, consideration of a large number of models and only fitting the ``best'' one can dramatically inflate type I error rates, and post-selection inference as described in \cite{Berk2013} would need to be considered.
\textbf{Simulation Study on Model Selection:} We conducted a simple simulation study to investigate the performance of this model selection heuristic. We considered four different models:
\begin{itemize}
\item Model 1 (null model): $Y_{ij}(\mathbf{t}) = B_0(\mathbf{t})+E_{ij}(\mathbf{t})$,
\item Model 2 (linear age effect): $Y_{ij}(\mathbf{t}) = B_0(\mathbf{t})+X_{\mbox{age},i}B_{\mbox{age}}(\mathbf{t})+E_{ij}(\mathbf{t})$,
\item Model 3 (nonparametric age effect): $Y_{ij}(\mathbf{t}) = f(X_{\mbox{age},i}, \mathbf{t})+E_{ij}(\mathbf{t})$, and
\item Model 4 (linear age effect, random effect): $Y_{ij}(\mathbf{t}) = B_0(\mathbf{t})+X_{\mbox{age},i}B_{\mbox{age}}(\mathbf{t})+U_{j}(\mathbf{t})+E_{ij}(\mathbf{t})$.
\end{itemize}
We fit each of these models to the scleral strain data, used the fitted model as the truth, and simulated 100 replicate data sets for each model. For each simulated data set, we fit each of these four models and performed the model selection procedure using $P_c$ to select the best model. In all four scenarios, this procedure selected the correct model 100/100 times. {Average values of $P_c$ for each model can be found in Section 4 of the supplementary materials, and Section 6 of the supplement investigates issues that can arise in variable selection of GAMMs when considering nonparametric smooth terms of subject-specific covariates in models including subject-level random effects.}
\section{Glaucoma Scleral Strain MPS Case Study} \label{s:Application}
\subsection{Overview of Glaucoma Scleral Strain MPS Data}
As described in Section \ref{s:intro}, glaucoma is characterized by ONH damage related to IOP, but its etiology is not fully known. Researchers have hypothesized that biomechanics of the scleral region close to the ONH may modulate the effect of IOP on the ONH, and thus may play an important role in glaucoma. In particular, the scleral surface is elastic and so deforms under pressure, which can partially relieve IOP-induced forces on the eye, including the ONH. Thus, studies of these biomechanical properties could reveal insights into the etiology of glaucoma.
Recently, novel custom instrumentation was developed that can precisely measure the mechanical strain in the posterior human sclera at a fixed level of IOP \citep{Fazio12a,Fazio12b}. Briefly, the posterior 1/3 of the eye is clamped, sealed, and pressurized. Next, the eye is {preconditioned}, and then pressurized from {7 mmHg to 45 mmHg} using an automated system with computer feedback control, while scleral surface displacements are measured by a laser speckle interferometer. This device measures a light interference distribution that is used to reconstruct the surface displacement field in three dimensions {with nanometer-scale precision}. These displacements were processed as described in \citet{Fazio12a} to compute the 3D strain tensor, a $3 \times 3$ matrix summarizing the displacement in the meridional, circumferential, and radial directions, continuously around the outer scleral surface. The leading eigenvalue of the strain tensor, called the maximum principal strain (MPS), was computed on a grid of scleral locations for 120 circumferential locations $\phi\in(\SI{ 0}{\degree},\SI{360}{\degree})$ and 120 meridional locations $\theta \in(\SI{ 9}{\degree},\SI{24}{\degree})$, where $\theta=\SI{0}{\degree}$ corresponds to the ONH. This yields MPS functions defined on a grid of 14,400 points on the scleral surface that comprises a partial spherical domain.
Using this custom instrumentation, \citet{Fazio14} conducted a study to investigate age-related changes in the scleral surface strain. They obtained twenty pairs of eyes from normal human donors in the Lions Eye Bank of Oregon in Portland, OR and the Alabama Eye Bank in Birmingham, AL. For each subject, the MPS measurements were obtained as described above at nine different levels of IOP ({7, 10, 15, 20, 25, 30, 35, 40, and 45 mmHg}) for both left and right eyes. The data for both eyes from one subject failed a quality control check and so were excluded from analysis, as was one eye from each of four other subjects. Thus, the data we analyzed consisted of 34 eyes from 19 subjects. With 14,400 measurements for each of 9 IOP levels $\times$ 34 eyes, this data set contained over 4.4 million measurements. Let $Y_{ijp}(\mathbf{t})$ be the MPS for eye $j$ for subject $i$ under IOP level $p$ at scleral location indexed by $\mathbf{t}=(\theta, \phi)$, which on the sampling grid can be written as a vector $\mbox{\bf y}_{ijp}$ of length $14,400$. The primary goals are to study MPS, assessing how it varies around the scleral surface, across IOP, and with age. The hypothesis is that MPS is greater near the ONH, which could confer a protective effect, and that MPS tends to decrease with age, which could contribute to increased stress on the ONH, thus conferring increased glaucoma risk.
\subsection{Model Specification}
\textbf{Basis Transform:}
Various criteria can be considered when choosing which basis to use within the BayesFMM framework, including sparse representation, fast calculation, richness for representing the functional parameters at the various levels of the models, ability to capture the key visual features of the observed functions, and flexibility for representing the intrafunctional correlation in the data. Multiresolution bases like wavelets have advantages for many of these considerations, so we constructed a custom rectangular wavelet basis defined on the cylindrical spherical projection of the partial scleral space $\mathbf{t}=(\theta, \phi)$, which is a tensor transform computed by successively applying 1D wavelet transforms to the meridional and circumferential directions.
\textbf{Tensor Wavelets for Scleral Space:} Specifically, we constructed $\psi_k(\mathbf{t})=\psi_k(\theta,\phi)$ as a tensor wavelet, $\psi_k(\mathbf{t}) = \psi^{\theta}_{k_1}(\theta) \otimes \psi^{\phi}_{k_2}(\phi)$, with meridional wavelet $\psi^{\theta}_{k_1}(\theta)$ being a db3 wavelet basis with three vanishing moments, reflection boundary condition, and 5 levels of decomposition, and circumferential wavelet $\psi^{\phi}_{k_2}(\phi)$ being a db3 wavelet with three vanishing moments, 5 levels of decomposition, and periodic boundary conditions since its domain is circular, covering the entire circumferential space. This transform yielded a basis $\{\psi_k(\mathbf{t}); k=1, \ldots, K=17,185\}$. While single-indexed here for simplicity of presentation, these basis coefficients can be multi-indexed by meridional scale $j_1=0, \dots, 5$, circumferential scale $j_2=0, \ldots, 5$, meridional locations $k_1=1,\ldots,K_{1j_1}$ and circumferential locations $k_2=1,\ldots,K_{2j_2}$, with $K=\sum_{j_1}\sum_{j_2} K_{1j_1} K_{2j_2}$. The levels $j_1=0$ and $j_2=0$ correspond to the father wavelet coefficients at the lowest level of decomposition, and the other $j_1$ and $j_2$ index the corresponding mother wavelets at increasing levels of scale. With $\mbox{\bf y}_{ijp}=Y_{ijp}(\mathbf{t})$ being the observed function for subject $i$, eye $j$, and IOP $p$ on the scleral surface sampling grid of size $T=14,400$ written in vector form, this basis representation can be written as $\mbox{\bf y}_{ijp}=\mbox{\bf y}^*_{ijp} \boldsymbol{\Psi}$, where $\boldsymbol{\Psi}$ is a $K \times T$ basis matrix with elements $\psi_k(\mathbf{t})$ and $\mbox{\bf y}^*_{ijp}$ is a vector of $K$ corresponding basis coefficients.
Because of the structure of the tensor transform, if we unstack $\mbox{\bf y}_{ijp}$ into a $(T_1=120) \times (T_2=120)$ matrix $\mbox{\bf Y}_{ijp}$ with rows indexed by equally spaced meridional locations $\theta_1=\SI{9}{\degree}, \ldots, \theta_{120}=\SI{24}{\degree}$ and columns by equally spaced circumferential locations $\phi_1=\SI{ 0}{\degree}, \ldots, \phi_{120}=\SI{360}{\degree}$, we could write $\mbox{\bf y}^*_{ijp}=\mbox{vec}(\boldsymbol{\Psi}_\theta \mbox{\bf Y}_{ijp} \boldsymbol{\Psi}'_\phi)$, where $\mbox{vec}(\bullet)$ is the column-stacking vectorizing operator, $\boldsymbol{\Psi}_\theta$ is the $K_1 \times (T_1=120)$ basis matrix corresponding to the meridional wavelet $\psi^\theta_{k_1}(\theta)$ and $\boldsymbol{\Psi}_\phi$ the $K_2 \times (T_2=120)$ basis matrix corresponding to the circumferential wavelet $\psi^\phi_{k_2}(\phi).$ In principle, spherical wavelets could be used as the transforming basis, but currently available software does not handle transforms for part of the sphere, and since we only model $\theta$ over a limited range of $\SI{ 9}{\degree}$ - $\SI{24}{\degree}$, the distortion from using the basis on the projection and not the true spherical geodesic is small.
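One convenient property of this tensor construction is that the full $K \times T$ basis matrix never needs to be formed explicitly: by the standard identity $\mbox{vec}(\mbox{\bf A}\mbox{\bf X}\mbox{\bf B}') = (\mbox{\bf B} \otimes \mbox{\bf A})\mbox{vec}(\mbox{\bf X})$, the transform can be applied as two small matrix products. A minimal numpy check of this identity, with random matrices standing in for the wavelet basis matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
# small stand-ins for the meridional / circumferential wavelet matrices
Psi_theta = rng.standard_normal((3, 5))   # K1 x T1
Psi_phi   = rng.standard_normal((4, 6))   # K2 x T2
Y         = rng.standard_normal((5, 6))   # T1 x T2 data matrix

# separable (tensor) transform applied as two matrix products ...
y_star = (Psi_theta @ Y @ Psi_phi.T).flatten(order="F")  # column-stacking vec
# ... equals the full Kronecker-product basis applied to vec(Y)
y_star_kron = np.kron(Psi_phi, Psi_theta) @ Y.flatten(order="F")

assert np.allclose(y_star, y_star_kron)
```

For the actual $120 \times 120$ grid, the two-product form costs two multiplications by $120$-column matrices rather than one multiplication by a $17{,}185 \times 14{,}400$ matrix.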
Some eyes had technical processing artifacts that resulted in a spike of extremely high MPS at some local set of scleral locations, typically close to the boundary. Given the multiresolution nature of the wavelet transform, these artifacts were captured by wavelet coefficients at extremely high frequency scales. Given the relatively smooth nature of most of the MPS functions, these wavelet coefficients were essentially zero for all eyes except for those with the artifact, which yielded very large coefficients. Thus, we removed these artifacts by filtering out any wavelet coefficients with \textit{extremely} skewed distributions for which the mean across all samples was more than {$100 \times$} the median. As seen in the supplementary file \textit{RawMPScurves.zip}, with illustration in Supplemental Figure 26, this strategy effectively removed the outlying spikes without substantively affecting MPS values for other scleral locations. We also applied the joint wavelet compression strategy described in \citet{Morris11} to obtain a reduced dimension near-lossless basis to use, and found a subset of 269 wavelet coefficients that jointly preserved $>99.5\%$ of the total signal energy for each eye, and an average of $>99.9\%$, leading to $>50:1$ compression. As shown in Supplemental Figure 26 and \textit{RawMPScurves.zip}, the data projected into the basis is essentially identical to the raw data, demonstrating its near-lossless nature. We adopted this basis for our model.
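One reading of this filtering rule as code (a sketch: we compare mean to median magnitudes across samples, and the function name and the guard against all-zero coefficients are our additions):

```python
import numpy as np

def filter_artifact_coefficients(D, ratio=100.0):
    """Keep wavelet coefficients whose mean magnitude across samples is at
    most `ratio` times the median magnitude; extremely skewed coefficients
    (typical of localized spike artifacts) are dropped.

    D : (N, K) matrix of wavelet coefficients (samples x coefficients).
    Returns (filtered matrix, boolean keep mask).
    """
    mags = np.abs(D)
    med = np.median(mags, axis=0)
    mean = mags.mean(axis=0)
    # tiny floor avoids dividing decisions by an exactly-zero median
    keep = mean <= ratio * np.maximum(med, np.finfo(float).tiny)
    return D[:, keep], keep

# toy example: coefficient 2 is near zero except for one extreme spike
D = np.column_stack([np.ones(10),
                     np.r_[np.full(9, 1e-3), 1e3]])
D_kept, keep = filter_artifact_coefficients(D)   # spike coefficient dropped
```

The multiresolution structure is what makes this simple rule effective: the spike energy concentrates in a few fine-scale coefficients that are near zero for all artifact-free eyes.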
We modeled these basis coefficients using the basis transform modeling approach described in Section \ref{sec:BayesFMM}. Besides providing a relatively sparse representation and enabling the adaptive removal of spiky artifacts, this transform, being a location-scale decomposition, allowed nonstationary intra-scleral correlations and adaptive borrowing of strength across scleral locations. File {\tt intrafunctional\_correlation.mp4} in the supplement contains a movie file demonstrating the form of the intrafunctional correlation structure induced by this choice, computed by constructing the basis and basis transform matrices $\boldsymbol{\Psi}$ and $\boldsymbol{\Psi}^-$, respectively, and applying (\ref{eq:covariance}) to the basis space covariances at the various random effect and residual error levels of the model. For illustration, panels (e) and (f) of Figure \ref{f:combo_plots} contain plots of this surface at a particular scleral location for two of the random effect levels.
We also considered using principal components computed on the wavelet-transformed (and compressed) data, similar to the strategy used in \cite{Meyer15}, which involves applying a singular value decomposition to the wavelet-space data matrix and then using the resulting eigenvectors to construct the empirical basis functions $\psi_k(\mathbf{t})$ used for the BayesFMM modeling. In this case, we kept $K=29$ basis functions that explained $>99.5\%$ of the total variability in the data set according to the scree plot, which as estimated by four-fold cross validation retained a minimum of $96.7\%$ of the total energy for each eye, so is also nearly lossless. We used the wavelets for our primary analysis given that they yielded a richer basis set for representing the various functional parameters at the various levels of the models, but for sensitivity we also present results from the BayesFMM using these wavelet-regularized principal components in Section 7 of the supplement, as well as other summaries including the induced intrafunctional correlation structures.
\textbf{Model Selection:} We applied the model selection heuristic described in Section \ref{s:model_sel} to help select which structures to include in the semiparametric FMM. We first determined which fixed effect covariates to include and what their functional forms should be. Here we summarize the results, for which more details are provided in Section 3 of the supplementary materials. Three different fixed effects were considered: age, IOP, and eye (left vs. right). For the form of the age effect, we considered two possibilities: linear or nonparametric. For the form of the IOP effect, we considered three different possibilities: linear, hyperbolic, or nonparametric. Models without the eye effect were also compared. In all, we compared 12 different models for the fixed effect selection. It turned out that the model with the nonparametric age effect, the hyperbolic IOP effect, and no eye effect showed the highest $P_c$ when using $aBIC$, and the model with nonparametric age effect, hyperbolic IOP effect, and a left vs. right eye effect had the highest $P_c$ when using $aAIC$. For our primary analysis, we considered the model with no left vs. right eye effect, since there is no strong scientific rationale for such an effect, and present the other model as a sensitivity analysis in Section 7 of the supplementary materials. We also assessed whether the smoothing parameter for the nonparametric age effect should be constant or vary around the sclera, and found that the sclerally varying smoothing parameter was clearly necessary for good fit, as detailed in Section 5 of the supplement. Once we selected the main fixed effects, we assessed whether the interaction term between age and IOP was needed, and our model selection heuristic suggested the interaction was not necessary. Finally, with the selected fixed terms, we compared several different random effect distributions to capture the interfunctional covariance structure.
Two different levels of random effects were considered: the subject-level random effect and the serial eye-level random effect. For the form of the eye-level random effect in terms of IOP as illustrated in Section \ref{s:growth}, we considered three different forms: constant (compound symmetry), linear, or hyperbolic. Our model selection heuristic selected the eye-level random effect with the hyperbolic IOP effect, but not the subject-level random effect.
\textbf{Model:} Thus, the final fitted semiparametric FMM was:
\begin{align}
\nonumber Y_{ijp}(\mathbf{t}) =& B_0(\mathbf{t}) +B_{1}(\mathbf{t})G_1(p)+B_{2}(\mathbf{t})G_2(p)+f(X_{\mbox{age},i}, \mathbf{t})+\\
\label{e:finalmodel4} &U_{ij}(\mathbf{t})+U_{ij1}(\mathbf{t})G_1(p)+U_{ij2}(\mathbf{t})G_2(p)+E_{ijp}(\mathbf{t}),
\end{align}
where $X_{\mbox{age},i}$ is the age for subject $i$, and $G_1(p)$ and $G_2(p)$ are the values of the orthogonalized hyperbolic basis corresponding to $\mbox{IOP}=p$ as described below. This is equivalent to the FMM:
\begin{align}
\label{e:finalFMM} Y_{ijp}(\mathbf{t}) =& B_0(\mathbf{t}) + B_{1}(\mathbf{t})G_1(p)+B_{2}(\mathbf{t})G_2(p)+B_3(\mathbf{t}) X_{\mbox{age},i} + \sum_{m=1}^{M+2} Z_{\mathcal{B} m}(X_{\mbox{age},i}) U_{\mathcal{S} m}(\mathbf{t}) + \\
\nonumber &U_{ij}(\mathbf{t})+U_{ij1}(\mathbf{t})G_1(p)+U_{ij2}(\mathbf{t})G_2(p)+E_{ijp}(\mathbf{t}),
\end{align}
with $Z_{\mathcal{B} m}(x)$ the Demmler-Reinsch basis functions corresponding to $M$ interior knots on $x$, $U_{\mathcal{S} m}(\mathbf{t}) \sim GP\{\mathbf{0}, Q_{\mathcal{S}}\}, U_{ij}(\mathbf{t}) \sim GP\{\mbox{\bf 0}, Q_0\}, U_{ij1}(\mathbf{t}) \sim GP\{\mbox{\bf 0},Q_1\}, U_{ij2}(\mathbf{t}) \sim GP\{\mbox{\bf 0}, Q_2\}$, and $E_{ijp}(\mathbf{t}) \sim GP(\mathbf{0}, S)$, with $Q_{\mathcal{S}}, Q_0, Q_1, Q_2$, and $S$ being covariance surfaces defined on $\mathscr{T} \times \mathscr{T}$. Following the guidelines suggested by \cite{Ruppert03}, we chose $M=5$ equally spaced knots over $X_{\mbox{age}}$.
\textbf{Parameterization of IOP effect:} From a preliminary investigation in which we fit separate models to each scleral location $\mathbf{t}$, we found that the serial IOP effects were well modeled by a hyperbola of the special form $Y=b_0+b_1 p + b_2 p^{-1}$, with an average $R^2$ of $0.98$ across all eyes and scleral locations, and this form was also chosen by the model selection heuristic. To accommodate model fitting without having to include a covariance between $b_1$ and $b_2$, we utilized orthogonalized versions of these predictors: $G_1(p)=\sqrt{2}/2 X_{1,p} -\sqrt{2}/2 X_{2,p}$ and $G_2(p)=\sqrt{2}/2 X_{1,p} +\sqrt{2}/2 X_{2,p}$, where $X_{1,p}$ is a standardized version of $p$ and $X_{2,p}$ is a standardized version of $p^{-1}$.
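A quick numpy check, with variable names of our choosing, that this rotation yields orthogonal predictors over the nine IOP levels; orthogonality follows because the two standardized vectors have equal norm, so the cross terms cancel:

```python
import numpy as np

p = np.array([7., 10., 15., 20., 25., 30., 35., 40., 45.])  # IOP levels (mmHg)

def standardize(x):
    return (x - x.mean()) / x.std(ddof=1)

X1 = standardize(p)          # standardized p
X2 = standardize(1.0 / p)    # standardized p^{-1}

# 45-degree rotation of the two standardized predictors
G1 = np.sqrt(2)/2 * X1 - np.sqrt(2)/2 * X2
G2 = np.sqrt(2)/2 * X1 + np.sqrt(2)/2 * X2

# G1'G2 = (||X1||^2 - ||X2||^2) / 2 = 0, since both norms equal n - 1
```

The rotation leaves the fitted mean unchanged (it is a linear reparameterization of $\{p, p^{-1}\}$) while removing the need for a covariance between the two random slope coefficients.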
\textbf{Basis Space Model:} We used the fast rectangular 2D wavelet transform to compute the basis coefficients from the raw functions, equivalent to the matrix multiplication $\mbox{\bf y}^*_{ijp}=\mbox{\bf y}_{ijp} \boldsymbol{\Psi}^{-}$, where $\boldsymbol{\Psi}^-=\boldsymbol{\Psi}'(\boldsymbol{\Psi} \boldsymbol{\Psi}')^{-1}$ and $\mbox{\bf y}^*_{ijp}$ is a vector of length $K=269$ with elements $Y^*_{ijpk}, k=1,\ldots,K$. We then fit the basis-space version of model (\ref{e:finalFMM}):
\begin{align}
\label{e:finalFMMk} Y^*_{ijpk} =& B^*_{0k} + B^*_{1k} G_1(p)+B^*_{2k}G_2(p)+B^*_{3k} X_{\mbox{age},i} + \sum_{m=1}^{M+2} Z_{\mathcal{B} m}(X_{\mbox{age},i}) U^*_{\mathcal{S} mk} \\
\nonumber &+U^*_{ijk}+U^*_{ij1k}G_1(p)+U^*_{ij2k}G_2(p)+E^*_{ijpk}, \mbox{ with}
\end{align}
$U^*_{\mathcal{S} mk} \sim N(0,q_{\mathcal{S} k}), U^*_{ijk} \sim N(0,q_{0k}), U^*_{ij1k} \sim N(0,q_{1k}), U^*_{ij2k} \sim N(0,q_{2k})$, and $E^*_{ijpk} \sim N(0,s_k)$.
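As a sanity check on the basis-space representation, the right pseudoinverse $\boldsymbol{\Psi}^-=\boldsymbol{\Psi}'(\boldsymbol{\Psi} \boldsymbol{\Psi}')^{-1}$ used to compute the coefficients recovers exactly any function lying in the row span of the (compressed) basis. A small numpy sketch with random stand-ins for the basis:

```python
import numpy as np

rng = np.random.default_rng(2)
K, T = 4, 12
Psi = rng.standard_normal((K, T))                 # K x T basis, full row rank
Psi_minus = Psi.T @ np.linalg.inv(Psi @ Psi.T)    # T x K right pseudoinverse

c = rng.standard_normal(K)
y = c @ Psi                       # a function lying in the basis row span
y_star = y @ Psi_minus            # basis coefficients via the pseudoinverse

assert np.allclose(y_star, c)     # coefficients recovered exactly ...
assert np.allclose(y_star @ Psi, y)   # ... and so is the function
```

For functions not exactly in the span, $\mbox{\bf y}^* \boldsymbol{\Psi}$ is the orthogonal projection onto the span, which is why the near-lossless compression above matters.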
\textbf{Prior Specification:} We specified vague conjugate inverse Gamma priors for each basis space variance component $\{q_{\mathcal{S} k}, q_{0k}, q_{1k}, q_{2k}, s_k\}$ in the model, with prior mode being the REML starting value and with effective sample size of 2, e.g. $s_k \sim \mbox{InverseGamma}(a_s, b_s)$ with $a_s=2$ and $b_s=3\hat{s}_k$, where $\hat{s}_k$ is the REML starting value for $s_k$. We used spike-slab priors for the basis-space fixed effects $\{B^*_{ak}, a=0, \ldots, 3\}$, with regularization parameters $\{\pi_{aj}, \tau_{aj}\}$ varying over predictor $a=0, \ldots, 3$, with \textit{regularization sets} $j=1, \ldots, J=36$ determined by the tensor wavelet scale levels, with $j=1$ for $j_1=j_2=0$, $j=2$ for $j_1=0, j_2=1$, \ldots, $j=J=36$ for $j_1=j_2=5$. We estimated the regularization parameters using the empirical Bayes algorithm specified in \cite{Morris06}. To assess sensitivity of results to these choices of regularization parameters, we also ran the model doing no additional regularization (beyond wavelet compression) by setting $\pi_{aj}\equiv 1$ and $\tau_{aj}\equiv 10^6$. Results are provided in Section 7 of the supplementary materials.
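As a check that these hyperparameters do what is claimed, recall that the Inverse-Gamma$(a,b)$ density is proportional to $s^{-(a+1)}e^{-b/s}$, with mode $b/(a+1)$; the stated choices therefore place the prior mode exactly at the REML starting value:
\begin{align*}
\mbox{mode}(s_k) \;=\; \frac{b_s}{a_s+1} \;=\; \frac{3\hat{s}_k}{2+1} \;=\; \hat{s}_k.
\end{align*}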
\textbf{Model Fitting:} We ran an MCMC to obtain posterior samples of the parameters of model (\ref{e:finalFMMk}) $\{B^*_{\bullet k}, q_{\bullet k}, s_k\}$ from the marginalized version of this model with $U^*_{\bullet k}$ all integrated out.
After a burn-in of 5,000 iterations, we ran 10,000 more, thinning by keeping every 10th to obtain 1,000 posterior samples. We then sampled the spline random effects $U^*_{\mathcal{S} m k}$ from their complete conditional distributions with the other random effects still integrated out, which are conjugate multivariate normal Gibbs steps as detailed in Section 1 of the supplementary materials, and from these constructed posterior samples of $f^*_k(x)=xB^*_{3k} + \sum_m Z_{\mathcal{B} m}(x) U^*_{\mathcal{S} mk}$ for a grid of ages $x$ of size $71$ corresponding to ages 20-90. Let $\mathbf{F}^*_{g}$ be a $71 \times (K=269)$ matrix representing posterior sample $g$ of the basis space nonparametric age effect, $g=1,\ldots, 1000$. We then transformed this back to the data space via $\mathbf{F}_g=\mathbf{F}^*_{g} \boldsymbol{\Psi}$ to obtain the $71 \times (T=14,400)$ matrix of posterior samples of the nonparametric age effect $f(X_{\mbox{age}},\mathbf{t})$ in model (\ref{e:finalmodel4}), and we similarly transformed the other fixed effects back to the data space to obtain posterior samples of $\{B_0(\mathbf{t}), B_1(\mathbf{t}), B_2(\mathbf{t})\}$ on the sampling grid of $\mathbf{t}$ for use in posterior inference.
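The back-transformation step amounts to one matrix multiplication per posterior sample. A minimal numpy sketch with small stand-in dimensions (the pointwise bands shown here are simpler than the joint credible bands used for our inference):

```python
import numpy as np

rng = np.random.default_rng(1)
G, K, T = 1000, 6, 50                    # posterior samples, bases, grid size
Psi = rng.standard_normal((K, T))        # stand-in for the K x T basis matrix
F_star = rng.standard_normal((G, K))     # basis-space samples, one age row each

F = F_star @ Psi                         # data-space posterior samples (G x T)
post_mean = F.mean(axis=0)               # posterior mean on the t grid
lo, hi = np.percentile(F, [2.5, 97.5], axis=0)   # pointwise 95% bands
```

Because the transform is linear, any posterior functional of the basis-space samples (means, quantiles, contrasts) can be computed after projecting each sample back to the function space.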
On a laptop computer, the entire analysis took 7hr39min on a single core, with the basis transform taking 1m37s, model selection 22m48s, each MCMC iteration 0.77s with 15,000 iterations taking 3hr15m, and the postprocessing including inverse basis transform of posterior samples and key inferential summary calculations 4hr. Many other summaries and plots were computed for the purposes of this paper for sensitivity and illustration of the deep properties of the modeling framework at more computational expense, but these additional analyses are not necessary for analysis of the data. The Metropolis-Hastings acceptance probabilities ($\approx 0.85-0.95$) were reasonable, and Geweke convergence statistics (median 0.01, $Q_{0.025}=-1.98, Q_{0.975}=1.94$) across the many parameters in the model showed that most were within roughly 2 standard deviations of zero, and only $4.4\%$ of the corresponding p-values were less than 0.05, suggesting reasonable MCMC convergence (see Section 9 of the supplementary materials for more details).
We simulated virtual MPS functions for hypothetical subjects with specified age and IOP from the posterior predictive distribution of the data (see Section 10 of the supplementary materials), and found that the simulated MPS data are visually similar to the MPS data from real eyes, suggesting the BayesFMM model with the tensor wavelet bases was sufficiently flexible to capture the salient features of these data, and lending support to its use for inference. We share these pseudo data on github (\url{https://github.com/MorrisStatLab/SemiparametricFMM}), as well as the real data and scripts to perform all analyses.
\begin{figure}
\centerline{\includegraphics[scale=0.7]{age_by_pressure_with_df83.png}}
\caption{\footnotesize (a) Polar azimuthal projection of the fitted MPS function for the left eye from one subject of age 90yr under 45 mmHg of IOP. (b) Posterior mean of the degrees of freedom of the nonparametric MPS fit of age. The right nine panels depict the estimated nonparametric MPS fit of age for all nine IOP levels at the scleral location indicated by the white dot in (a) and (b), along with the raw data indicated by the black dots.}\label{f:MPS_age_pressure}
\end{figure}
\subsection{Scientific Results} \label{sec:Results}
{\bf Nonparametric MPS function of age:} First, we computed the posterior mean MPS curve as a function of age over the entire meridional and circumferential domain at each level of IOP. As a thorough summary of the model fit, we generated a plot of these fits for each scleral location, and joined these together to make a digital movie file {\tt MPSvsAge-wave.mp4} contained in the supplemental materials.
Figure \ref{f:MPS_age_pressure} shows a snapshot of the movie at the scleral location indicated by the white dot. Panel (a) depicts a polar azimuthal projection of the fitted MPS function for a left eye from a subject of age 90yr under 45 mmHg of IOP. Note that MPS is higher near the ONH, as expected as a protective effect. Panel (b) shows the posterior mean degrees of freedom (DF) of the nonparametric age effect. We see strong nonlinear age effects in scleral regions close to the ONH and towards the inferior and nasal regions of the sclera, while many other regions show linear or nearly linear age effects. The right nine panels contain the fitted nonparametric age effect at nine different IOP levels at the scleral position indicated by the white dot. In each panel, the black dots are the raw data, the solid blue line is the estimated nonparametric mean MPS function of age, and the solid and dashed red lines correspond to joint and point-wise 95\% credible bands of the mean MPS curve, with joint bands computed as described in \citet{Meyer15}. We see from this plot how the MPS increases with IOP, and that the hyperbolic model captures the rate of increase very well. From this plot and the supplemental movie stepping across the scleral locations, we see that the model fits the data remarkably well for all IOP and scleral locations, even though independent splines were not fit to the data separately for each IOP and scleral location; rather, these fits come from the complex joint unified model \eqref{e:finalmodel4}, which borrows strength across IOP according to the modeled hyperbolic serial effect and across scleral locations according to the basis functions. The basis functions likewise induce borrowing of strength from nearby scleral locations in estimating the nonlinearity of the age effect, as seen in the local smoothness of the $DF(\mathbf{t})$ plot.
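The joint credible bands referenced above can be constructed from posterior curve samples by scaling the pointwise posterior standard deviation by a quantile of the maximum standardized deviation across the grid, in the spirit of \citet{Meyer15}. A minimal sketch, assuming a hypothetical matrix of posterior samples of one curve (rows are samples, columns are grid points):

```python
import numpy as np

def credible_bands(F, alpha=0.05):
    """Pointwise and joint (simultaneous) credible bands from an
    (n_samples x n_grid) matrix F of posterior samples of a curve.
    Joint band: (1-alpha) quantile of max_t |F_g(t)-m(t)|/s(t)."""
    m = F.mean(axis=0)
    s = F.std(axis=0, ddof=1)
    # pointwise band: empirical quantiles at each grid point
    lo_pw = np.quantile(F, alpha / 2, axis=0)
    hi_pw = np.quantile(F, 1 - alpha / 2, axis=0)
    # joint band: quantile of the sup of standardized deviations
    d = np.abs((F - m) / s).max(axis=1)
    q = np.quantile(d, 1 - alpha)
    return (lo_pw, hi_pw), (m - q * s, m + q * s)

rng = np.random.default_rng(1)
F = rng.standard_normal((2000, 50))   # hypothetical posterior curves
(lo_pw, hi_pw), (lo_j, hi_j) = credible_bands(F)
```

By construction the joint band contains the pointwise band, which is why excursions of the joint band away from a reference value give simultaneous inference over the whole grid.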
\begin{figure}
\centerline{\includegraphics[scale=1]{Combo_plot_83.png}}
\caption{ \footnotesize \textbf{Key Summaries of Fitted Model}. Shows key summaries at the scleral position marked by the open circle, including (a) the fitted MPS for a 90yr old with IOP=45mmHg, (b) the nonparametric MPS vs. age curve using the AUC to integrate over IOP, with the blue line the posterior mean, the dotted and solid red lines pointwise and joint credible bands, and the raw data (computing AUC for this scleral location for each eye) indicated by dots, (c) the degrees of freedom of the nonparametric age fit as a function of the scleral location, (d) the serial correlation across IOP induced by the model at this scleral position, and (e) and (f) the intrafunctional correlation surfaces induced by our model and choice of tensor basis at the eye-to-eye random intercept and residual error levels, respectively, at this scleral position. The file {\tt combo\_plot.mp4} in the supplement is a movie file showing how these summaries vary across scleral locations.}\label{f:combo_plots}
\end{figure}
{\bf Induced Intrafunctional and Interfunctional Covariance Structures:} To assess the intrafunctional covariance structures induced by our tensor wavelet bases, we used equation (\ref{eq:covariance}) to estimate the scleral space intrafunctional covariance matrices $Q_d(\mathbf{t}_1,\mathbf{t}_2), d=0,1,2$ and $S(\mathbf{t}_1,\mathbf{t}_2)$. Supplementary Figure 22 plots the diagonals of these matrices, representing the variances at the various hierarchical levels as a function of scleral location $\mathbf{t}$. Note that the variance for the eye intercept $Q_0(\mathbf{t},\mathbf{t})$ is an order of magnitude greater than that of the eye-level IOP coefficients $Q_1(\mathbf{t},\mathbf{t})$ and $Q_2(\mathbf{t},\mathbf{t})$, which are in turn an order of magnitude greater than the residual error variance $S(\mathbf{t},\mathbf{t})$. Also, note that these covariances vary around the scleral surface, with locations near the ONH having greater levels of variability. The supplement also contains a movie file {\tt Intrafunctional\_correlations.mp4} that represents the corresponding intrafunctional correlation surfaces induced by our model, stepping around scleral locations and plotting for each a heatmap of the correlation of the indicated location with all other scleral locations. For illustration, panels (e) and (f) of Figure \ref{f:combo_plots} show the correlation of a specific scleral location $\mathbf{t}^*$ (indicated by the white dot) with all other scleral locations $\mathbf{t}$ at the eye intercept $Q_0(\mathbf{t}^*, \mathbf{t})$ and residual error $S(\mathbf{t}^*,\mathbf{t})$ levels, and Supplementary Figure 18 contains these plus those for the eye hyperbolic random effects $Q_1(\mathbf{t}^*,\mathbf{t})$ and $Q_2(\mathbf{t}^*,\mathbf{t})$ for this location.
Note how our model captures local intrafunctional correlation, and the strength and tails of this correlation are allowed to vary by scleral location and hierarchical level. Supplementary Figure 19 includes equivalent plots for the wavelet-regularized principal component basis functions, and {\tt Intrafunctional\_correlations-pc.mp4} contains a movie showing all scleral locations. Note that the PC basis functions, although global, induce intrafunctional correlation surfaces that are dominated by the local correlation among nearby scleral locations that is also captured by the tensor wavelet bases.
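Under the independence-in-basis-space assumption, the induced data-space covariance at each hierarchical level is a quadratic form in the basis matrix. The following sketch illustrates the construction with hypothetical dimensions, a random basis matrix, and made-up basis-space variances; it is not the fitted quantity from the paper, only the algebra behind equation (\ref{eq:covariance}).

```python
import numpy as np

# Hypothetical setup: K basis functions evaluated on a grid of T
# scleral locations.  Psi[k, t] = psi_k(t); q[k] = basis-space variance
# of the k-th coefficient at some hierarchical level.
rng = np.random.default_rng(1)
K, T = 8, 40
Psi = rng.standard_normal((K, T))
q = rng.uniform(0.5, 2.0, size=K)

# Independence in basis space induces the data-space covariance
#   Q(t1, t2) = sum_k q_k psi_k(t1) psi_k(t2)  =  Psi' diag(q) Psi.
Q = Psi.T @ (q[:, None] * Psi)

# Correlation of one location t* (index 0) with all other locations:
# one row of the correlation surface shown in the movies.
sd = np.sqrt(np.diag(Q))
corr_row = Q[0] / (sd[0] * sd)
```

With tensor wavelet bases the rows of $\Psi$ are localized, which is what produces the locally concentrated, location-varying correlation surfaces described above.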
We also computed the induced interfunctional serial correlation across MPS curves for different IOP for the same eye using the formulas contained in Section \ref{s:growth}. The supplement contains a movie file {\tt Intra\_IOP\_corr.mp4} that plots the variance, var$\{Y_{ijp}(\mathbf{t})|IOP=p,\mathbf{t}\}$, as a function of IOP and scleral location and the serial correlation across IOP, corr$\{Y_{ijp}(\mathbf{t}),Y_{ijp'}(\mathbf{t})\}$, as a function of IOP as it varies around the scleral surface $\mathbf{t}$. Note the form of the serial correlation induced by the hyperbolic model, and how it is able to vary yet borrow strength across scleral locations. Panel (d) of Figure \ref{f:combo_plots} portrays this serial correlation at a single scleral location.
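At a fixed scleral location, the serial covariance across IOP follows from the independent random growth-curve effects: the intercept contributes a constant, and each growth term contributes its variance scaled by the growth function at the two IOP values. The sketch below uses placeholder growth functions and hypothetical variance components; the paper's fitted $G_1$, $G_2$, and variances are not reproduced here.

```python
import numpy as np

# Placeholder growth functions standing in for the hyperbolic bases
# of the paper, and hypothetical variance components at one location t.
G1 = lambda p: p                      # placeholder linear term
G2 = lambda p: p / (10.0 + p)         # placeholder hyperbolic term
Q0, Q1, Q2, S = 1.0, 0.01, 4.0, 0.1   # intercept, growth terms, residual

iop = np.array([7.0, 10, 15, 20, 25, 30, 35, 40, 45])   # 9 IOP levels
g = np.stack([np.ones_like(iop), G1(iop), G2(iop)])      # 3 x P design

# cov(Y_p, Y_p') = Q0 + G1(p)G1(p')Q1 + G2(p)G2(p')Q2 (+ S on diagonal)
cov = g.T @ np.diag([Q0, Q1, Q2]) @ g + S * np.eye(len(iop))
sd = np.sqrt(np.diag(cov))
corr = cov / np.outer(sd, sd)
```

Because $\{Q_0, Q_1, Q_2, S\}$ vary with $\mathbf{t}$, both the variance and the serial correlation pattern vary around the scleral surface, as in panel (d) of Figure \ref{f:combo_plots}.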
{\bf Inference on functionals of the parameters:} While Figure \ref{f:MPS_age_pressure} and the accompanying movie file provide a thorough summary of the age effect on MPS estimated by our model, for interpretability it may be useful to aggregate results over IOP and/or scleral locations. One major advantage of our fully Bayesian approach is that we are able to compute posterior samples and obtain estimates and inference for any functional of the model parameters.
First, to aggregate information across all IOP, we considered the area under the MPS vs. IOP curve, defined as follows:
\begin{align}
\nonumber\mbox{AUC}(X_{\mbox{age}},\mathbf{t})=f(X_{\mbox{age}},\mathbf{t})+\int_{7}^{45} \{B_1(\mathbf{t})G_1(p) +B_2(\mathbf{t})G_2(p)\} dp.
\end{align}
This integral summarizes the total MPS behavior over the range of IOP in the study, which covers the practical range of IOP values in this context. The integral is estimated numerically for each MCMC sample to yield posterior inference on AUC. We plotted the posterior mean AUC vs. age curve and corresponding pointwise and joint credible bands for each scleral location, and assembled these into a digital movie file {\tt AUCvsAge-wave.mp4} in the supplementary materials; for illustration, panel (b) of Figure \ref{f:combo_plots} contains this plot for a single scleral location. Although our model was fit not to the AUC data but to the raw data for each IOP, note how it provides a smooth nonparametric fit of AUC vs. age for each scleral location, and again these fits borrow strength across scleral locations. From these results, we see that, aggregating over IOP, the MPS tends to decrease with age at most scleral locations, especially near the ONH. With MPS decreasing with age, the eye seemingly becomes less elastic and less able to absorb IOP, potentially exposing the ONH to IOP-induced damage over time.
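The numerical evaluation of the AUC integral for each MCMC sample can be sketched as follows at a single scleral location. The posterior draws and the growth functions below are hypothetical stand-ins for the fitted quantities; since $B_1(\mathbf{t})$ and $B_2(\mathbf{t})$ do not depend on IOP, the integrals of $G_1$ and $G_2$ need only be computed once.

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal quadrature (kept explicit to avoid NumPy
    version differences in the built-in routine)."""
    return float(np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2.0))

# Hypothetical posterior draws at one scleral location t*.
rng = np.random.default_rng(2)
G = 1000
f_samps = rng.normal(2.0, 0.1, G)     # f(age, t*) draws (fixed age)
b1_samps = rng.normal(0.05, 0.01, G)  # B1(t*) draws
b2_samps = rng.normal(1.0, 0.2, G)    # B2(t*) draws

p = np.linspace(7.0, 45.0, 200)       # fine IOP grid for quadrature
g1 = trapz(p, p)                      # integral of placeholder G1(p)=p
g2 = trapz(p / (10.0 + p), p)         # integral of placeholder G2(p)

# AUC draw per posterior sample, then summaries for inference.
auc = f_samps + b1_samps * g1 + b2_samps * g2
auc_mean = auc.mean()
ci = np.quantile(auc, [0.025, 0.975])
```

Applying this per scleral location and per age grid value yields the posterior samples behind the AUC vs. age curves and their credible bands.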
Given the nonlinearity of the age fit, it may be instructive to directly examine the rate of decline of MPS over age. We can do this by computing inference on the derivative of the AUC curve with respect to $X_{\mbox{age}}$. Given the lack of an IOP $\times$ age interaction in our final fitted model, it follows that $\partial\mbox{AUC}(X_{\mbox{age}},\mathbf{t})/\partial{X_{\mbox{age}}}=\partial f(X_{\mbox{age}},\mathbf{t})/\partial{X_{\mbox{age}}}$. Using the fact that $f(X_{\mbox{age}},\mathbf{t})=\beta_{0}(\mathbf{t})+X_{\mbox{age}}\beta_{3}(\mathbf{t})+Z_{\boldsymbol{\mathcal{B}}}(X_{\mbox{age}})\mbox{\bf u}_{\mathcal{S}}(\mathbf{t})$ as described in Section \ref{s:nonpara}, with $Z_{\boldsymbol{\mathcal{B}}}(X_{\mbox{age}}) = \{\mathcal{B}_1(X_{\mbox{age}}),\ldots,\mathcal{B}_{M+4}(X_{\mbox{age}})\}'Z_{\Omega}\mbox{diag}(d_1^{-1/2},\ldots,d_{M+2}^{-1/2})$, it can be easily seen that
\begin{align}
\frac{\partial \mbox{AUC}(X_{\mbox{age}},\mathbf{t})}{\partial X_{\mbox{age}}}=\beta_{3}(\mathbf{t})+\frac{\partial{Z_{\boldsymbol{\mathcal{B}}}(X_{\mbox{age}})}}{\partial{X_{\mbox{age}}}}\mbox{\bf u}_{\mathcal{S}}(\mathbf{t}). \label{eq:deriv}
\end{align}
Our R scripts contain calculations of $\frac{\partial{Z_{\boldsymbol{\mathcal{B}}}(X_{\mbox{age}})}}{\partial{X_{\mbox{age}}}}$, which come from \citet{Wand08}. By applying (\ref{eq:deriv}) to the posterior samples of $\beta_3(\mathbf{t})$ and $\mbox{\bf u}_{\mathcal{S}}(\mathbf{t})$, we could obtain posterior samples of this derivative for each scleral location, although we do not present those results here. Since the nonparametric fits were done placing a smoothness penalty on the age effects themselves, the derivatives may appear slightly undersmoothed. If one had primary interest in estimating these derivatives smoothly, one could do so by simply choosing higher order penalties in the spline fits; a third order penalty would produce smoothness in the first derivative.
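The derivative computation amounts to applying the derivative of the basis to the posterior coefficient samples. The sketch below uses a finite-difference stand-in for the analytic basis derivative of \citet{Wand08}, with a hypothetical truncated-cubic basis and made-up posterior draws, just to illustrate the mechanics of equation (\ref{eq:deriv}).

```python
import numpy as np

# Hypothetical age grid and spline basis at one scleral location.
rng = np.random.default_rng(3)
x = np.linspace(20.0, 90.0, 71)                # age grid
M = 10                                          # basis size (hypothetical)
knots = np.linspace(20.0, 90.0, M)
Z = np.maximum(x[:, None] - knots[None, :], 0.0) ** 3   # truncated cubics
dZ = np.gradient(Z, x, axis=0)                  # finite-difference dZ/dx

# Hypothetical posterior draws of the slope and spline coefficients.
G = 500
beta3 = rng.normal(-0.01, 0.002, G)             # linear age slope draws
u = rng.normal(0.0, 1e-6, (G, M))               # spline coefficient draws

# deriv[g, i] = beta3_g + dZ(x_i) u_g : posterior draws of the derivative
deriv = beta3[:, None] + u @ dZ.T
band = np.quantile(deriv, [0.025, 0.975], axis=0)
```

With the analytic basis derivative in place of `dZ`, the same two lines give exact posterior samples of the derivative at every age grid point.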
{\bf Aggregated summaries over functional regions:} Although our model fits the entire data set over all scleral locations, for ease of interpretation researchers at times would like to look at estimates and inference for aggregated scleral regions. Given the hypothesis that scleral strain is most important near the ONH, we averaged results over all circumferential regions to obtain results as a continuous function of meridional distance from the optic nerve head. Figure \ref{fg4} depicts the posterior mean AUC as a function of age and distance from the ONH, aggregating over circumferential regions. We can see how MPS is higher near the ONH and decreases moving away from the ONH, potentially providing a protective effect to the ONH. Younger individuals have high MPS levels at scleral locations extending well out from the ONH, while for middle-aged individuals the region of high MPS does not extend far from the ONH, and for older individuals the MPS is quite low even close to the ONH. This coincides with the increased glaucoma risk in older individuals.
\begin{figure}
\centerline{\includegraphics[scale=0.5]{auc_circum.png}}
\caption{\footnotesize \textbf{Posterior mean AUC (of MPS) as a function of age and distance from ONH.} These results are obtained by aggregating posterior samples over all circumferential regions.}\label{fg4}
\end{figure}
We also summarized the results aggregating over the innermost region closest to the optic nerve head, called the peripapillary (PP) region, and over the adjacent region, called the mid-peripheral (MP) region. Figure \ref{fg5} depicts the age effects aggregating AUC over the PP and MP regions, with the top two plots containing the posterior mean fits and the bottom two the derivatives with respect to age. The blue line is the posterior mean fit, and the dotted and solid red lines indicate 95\% pointwise and joint credible intervals, respectively. From this we can clearly see that the MPS is systematically higher in the PP region closest to the ONH than in the MP region further away, potentially conferring a protective effect. The MPS decreases with age in both regions, but the decrease is substantially steeper for the all-important PP region close to the ONH. This effect is nonlinear, with the rate of decrease accelerating throughout middle age (40-60 years old), an age at which glaucoma risk increases substantially.
\begin{figure}
\centerline{\includegraphics[scale=0.6]{AUC_dAUC_by_age_by_region.png}}
\caption{\footnotesize \textbf{Aggregated AUC summaries:} AUC aggregated over the peripapillary (PP) region adjacent to the ONH and the mid-peripheral (MP) region just beyond PP. (a) and (b) show AUC summaries over the two regions. (c) and (d) show derivatives of the aggregated AUC summaries presented in (a) and (b).}\label{fg5}
\end{figure}
These analyses confirm the hypothesis that MPS is higher in scleral regions closer to the ONH, that it decreases with age, and that this decrease with age is more pronounced near the ONH. This agrees with the notion that biomechanical changes in the sclera may contribute to increased glaucoma risk.
\textbf{Sensitivity Analyses:} Section 7 of the Supplement and Supplementary Figures 2-25 present extensive results for the alternative model with the left vs. right eye effect, with the alternative values for the prior shrinkage hyperparameters $\{\tau_{aj},\pi_{aj}\}$, and using the wavelet-regularized principal components as the projected basis. Substantive results do not change, so we see our conclusions are not driven by these choices.
\section{Discussion}
In this paper, we demonstrated how to adapt the BayesFMM modeling framework to account for serial interfunctional correlation and smooth nonparametric covariate functional effects, and applied it to an innovative glaucoma study investigating MPS of scleral strain tensors. We found that MPS is maximized near the ONH. We also found that MPS decreases with age, especially in regions closest to the ONH, and that the decrease in MPS accelerates throughout middle age; this could contribute to the increased glaucoma risk seen in the elderly. The age effect on MPS tends to be nonlinear near the ONH, especially towards the inferior and nasal sides, while the other scleral regions show age effects close to linear. Notably, focal glaucoma damage is most often observed in the inferior quadrant of the ONH.
While motivated by the glaucoma data application, the BayesFMM framework presented here is extremely general, with the ability to be used with any near-lossless basis transform and applicable to many types of complex, high-dimensional functional data of modern interest, including wearable computing data, genome-wide data, proteomics data, geospatial time series data, neuroimaging data, and many others. Using this framework, one can not only accommodate nonparametric functional fixed effects, but also model serially correlated functions through functional growth curve effects. Our model allows the nonparametric fits, smoothness, and interfunctional correlations to vary over the functional domain, which is necessary for good fit to these data and likely many other complex functional data sets. In addition, we introduced a model selection heuristic that can be used, before running the MCMC, to select among fixed and random effects, to decide whether they should be linear, parametric, or nonparametric, and to decide whether the fits or smoothness should be constant or vary over the functional domain.
On our github page (\url{https://github.com/MorrisStatLab/SemiparametricFMM}), we share the Matlab files required to fit these models, including scripts to apply the model selection heuristic, links to automated software to perform the MCMC, and scripts to compute posterior samples and posterior inference for the nonparametric functional effects and all summary plots included in this paper. The model is set up using {\tt lmer}-style model statements \citep{lmer}. The software can be used to fit models with any fixed or random effect function covariates, with different basis functions, and for functions on domains of any dimension or measure given suitable basis functions for the corresponding space. The method is efficient enough to feasibly apply to very large data sets like our glaucoma data set here with over 4.5 million observations. While it took a relatively long time to run the MCMC using a single core, we do not think this is inordinate considering the cost and time to collect these data and the extensive inferential summaries provided by the model fits. This run time was orders of magnitude less than a conceptually simpler approach of applying {\tt lme} to each scleral location, applying a 2d smooth, and using a bootstrap for inference, which by our calculations would take 10 weeks to compute using 1000 bootstrap samples on the same computer we used for the BayesFMM model. If further speed is desired for our method, the model fitting is highly parallelizable and the MCMC code has cluster computing capabilities, so it can be sped up using GPU or cluster computing resources. While the code we share utilizes only a single core, future updates of the software will enable distributed computing for faster calculations on big data sets.
We are in the process of extending the package to fit these models and to include many other features of the BayesFMM framework that appear in other publications but not in this paper. We anticipate this package, which will be available in Matlab and R, will greatly enhance the usability of the method, and we expect it to be completed and freely available in the near future.
One significant benefit of our fully Bayesian approach to fitting these semiparametric functional mixed models is that we can produce inference on any parameters in the model, or any functional or aggregation of these parameters, and this inference integrates over the various sources of interfunctional variability in the model and over the uncertainty in estimating the covariance parameters and level of nonlinearity of the spline fits. This allows us to perform a flexible, thorough analysis in the entire functional domain, yet produce inference and results for aggregated summaries that may be more interpretable to investigators. This raises the obvious question, ``What is the advantage of fitting the entire functional model? Why not just compute the aggregated summaries and model those using standard tools?" The answer to this question is multi-faceted.
First, if one only looked at specific aggregated summaries, one might miss insights that could have been gleaned from the data but were not captured by those summaries. For example, looking only at the PP and MP regions, one could miss different MPS behavior in the inferior and nasal regions of the sclera, which may be important. By modeling the scleral function in its entirety, we are able to examine the entire domain to ensure we are not missing any insights, and we can still produce inference for any desired aggregated summaries for ease of interpretation as well. This concept is also relevant in other application areas where summaries are used because of the complexity and high dimensionality of the raw functional data. This approach of flexible modeling followed by inference on summary measures can capture the best of both worlds, analyzing the entire function yet producing inference for summary measures interpretable in the scientific subject area.
Second, any aggregation draws arbitrary boundaries in the function space. MPS for scleral locations at the boundary of the PP and MP regions are highly correlated with each other, and yet PP and MP extracted summaries arbitrarily separate them from each other. The flexible functional modeling framework introduced in this paper models the entire functional space, yet captures and accounts for intrafunctional correlation through the chosen basis functions. This allows a smooth borrowing of strength from nearby locations, and if location-scale bases like wavelets are used, then this borrowing of strength can be adaptive, able to accommodate spatially heterogeneous functions for which some functional regions are more correlated than others. In principle, accounting for this intrafunctional correlation leads to greater efficiency, as has been shown in various contexts.
While flexible, the BayesFMM framework presented here has some limitations and drawbacks, including the need to choose a common basis to use for transformation at all levels of the model, the independence-in-basis-space assumption that can limit certain types of intrafunctional covariance structure depending on the choice of basis, and the computational intensity from having to run a full MCMC to obtain estimates and inference for model quantities. It is designed for representing functional data sampled on a common fine grid, so it is not suitable for sparsely sampled functional data or functional data for which the individual functions are sampled on wildly different sampling grids, a setting in which the frameworks of \cite{Scheipl15} and \cite{Greven2017} are well-suited. The model selection heuristic is \textit{ad hoc}, and should not be used to select over a large number of variables.
In spite of these limitations and drawbacks, the BayesFMM modeling framework is very general, and this paper can serve as a template for how to utilize this modeling framework to model complex functions with various types of interfunctional correlation structures. It has the potential to impact many areas of science yielding complex functional data, and the analysis presented here illustrates a rigorous, thorough workflow to analyze the entire data set, extract many types of information from it, yet provide interpretable graphical summaries and inferential results desired by investigators.
\newpage
\begin{center}
{\large\bf SUPPLEMENTARY MATERIALS}
\end{center}
\begin{description}
\item[Supplement.pdf:] This document is organized as follows. Section 1 describes details of the MCMC update steps. Section 2 provides the derivation of $\mbox{DF}(\mathbf{t})$ for Model (\ref{e:finalmodel4}). Sections 3-4 include details of the model selection results. In Section 5, we assess whether the smoothing parameter for the nonparametric age effect should be constant or vary around the scleral surface. In Section 6, we provide a description of the overall procedure to fit our model and obtain inferential results with a simulated dataset. Section 7 contains sensitivity analyses to various modeling assumptions, including results for an alternative basis (wavelet-regularized PCs), an alternative model (also including a left vs. right eye effect), and alternative regularization hyperparameters (values with no additional shrinkage provided by the prior). Section 8 describes some supplementary files presenting additional results discussed in the paper, and Section 9 describes the simulated pseudo-data and demonstrates that it appears to capture the features of the real MPS scleral strain data reasonably well.
\item[RawMPScurves.zip:] Includes plots of raw MPS curves, results after tensor wavelet compression, and results after robust filtering to remove spiky artifacts.
\item[movies.zip:] Includes various {\tt .mp4} movie files illustrating various detailed results from the paper, including the following:
\item[MPSvsAge-wave.mp4:] Movie of MPS vs. age for each IOP based on the tensor wavelet basis functions.
\item[Combo\_plots.mp4:] Movie of key summary results based on the tensor wavelet basis functions.
\item[Intrafunctional\_correlations.mp4:] Movie showing intrafunctional correlations induced by tensor wavelet basis functions.
\item[Intra\_IOP\_corr.mp4:] Movie showing interfunctional variance and serial correlation across IOP from the same eye based on the tensor wavelet basis functions.
\item[MPSvsAge-pc.mp4:] Movie of MPS vs. age for each IOP based on the principal component basis functions.
\item[AUCvsAge-pc.mp4:] Movie of AUC vs. age based on the principal component basis functions.
\item[Intrafunctional\_correlations-pc.mp4:] Movie showing intrafunctional correlations induced by the principal component basis functions.
\item[MPSvsAge-eye.mp4:] Movie of MPS vs. age for each IOP based on tensor wavelet basis functions and model including left vs. right eye effect.
\item[AUCvsAge-eye.mp4:] Movie of AUC vs. age based on tensor wavelet basis functions and model including left vs. right eye effect.
\item[Intrafunctional\_correlations-eye.mp4:] Movie showing intrafunctional correlations induced by the tensor wavelet basis functions and model including left vs. right eye effect.
\item[MPSvsAge-nosmooth.mp4:] Movie of MPS vs. age for each IOP based on tensor wavelet basis functions and model with no smoothing ($\pi_{\cdot}=1, \tau_{\cdot}=10^6$).
\item[AUCvsAge-nosmooth.mp4:] Movie of AUC vs. age based on tensor wavelet basis functions and model with no smoothing ($\pi_{\cdot}=1, \tau_{\cdot}=10^6$).
\item[Intrafunctional\_correlations-nosmooth.mp4:] Movie showing intrafunctional correlations induced by the tensor wavelet basis functions and model with no smoothing ($\pi_{\cdot}=1, \tau_{\cdot}=10^6$).
\item[EYE\_toolbox.zip:] Contains all of the files necessary to run the methods presented in the paper, including the raw glaucoma data, pseudo data, and full scripts to run the analyses and produce the plots contained in the paper. This includes {\tt wfmm\_install.pdf}, which contains step-by-step instructions on how to install the R package \textit{wfmm} and associated executable, and {\tt Analysis\_of\_Pseudo\_Data.pdf}, which contains detailed step-by-step instructions for running a complete analysis in Matlab on pseudo data generated to mimic the real data in the application, including the basis transform, model selection heuristic, MCMC in basis space, projection of posterior samples back to data space, MCMC convergence diagnostics, and production of all inferential summaries and plots contained in this paper that present results and illustrate properties of the model, with run time estimates for each step. We also include {\tt Producing Plots for Main Data Analysis in Paper.pdf}, which gives instructions for running scripts to reproduce the figures for the real data analysis for the main model used for the MPS scleral strain data analysis contained in the paper. This toolbox and data are available on our github (\url{https://github.com/MorrisStatLab/SemiparametricFMM}).
\end{description}
\section*{Supplementary Materials}
\section*{Details of the MCMC Algorithm}
We utilize a Markov chain Monte Carlo algorithm to draw posterior samples for the parameters in our model (12). First, we alternately sample the fixed effects and variance components from conditional posterior distributions that are marginalized over the random effects. We then sample the random effects $u_{m,k}^*$, which are needed for estimating the nonparametric age effect, from their full conditional distributions, still marginalized over the other random effects. This sampling scheme improves the mixing properties of the MCMC chains and speeds up the MCMC algorithm, as illustrated in \citet{Morris06}. The details of the MCMC follow.
Note that there are a total of 306 (9 IOP levels $\times$ 34 eyes) observed MPS functions. For ease of presentation of the MCMC, we rewrite our model (12) in matrix form including all observations:
\begin{align}
\nonumber\mbox{\bf Y}^*_k = \mbox{\bf X}\boldsymbol{b}^*_k + \boldsymbol{Z}_{\boldsymbol{\mathcal{B}}}\mbox{\bf u}^*_k +\sum_{h=0}^2\boldsymbol{Z}_h\mbox{\bf u}^*_{h,k}+\mbox{\bf E}^*_k,
\end{align}
where $\mbox{\bf Y}^*_k$ is the $306\times 1$ vector of the $Y^*_{ijp,k}$, $\mbox{\bf X}$ is the $306\times 4$ design matrix of fixed effects, $\boldsymbol{b}^*_k=(\beta^*_{0,k},\beta^*_{1,k},B^*_{1,k},B^*_{2,k})'$, $\boldsymbol{Z}_{\boldsymbol{\mathcal{B}}}$ is the $306\times (M+2)$ design matrix for $\mbox{\bf u}^*_k=(u_{1,k}^*,\ldots,u_{(M+2),k}^*)'$, $\boldsymbol{Z}_0$ is the $306\times 38$ design matrix for $\mbox{\bf u}^*_{0,k}=(U^*_{1,1,k},\ldots,U^*_{19,1,k},U^*_{1,2,k},\ldots,U^*_{19,2,k})'$, $\boldsymbol{Z}_1$ is the $306\times 38$ design matrix for the random effect for $\mbox{IOP}_1$, $\mbox{\bf u}^*_{1,k}=(U^*_{1,1,1,k},\ldots,U^*_{19,1,1,k},U^*_{1,2,1,k},\ldots,U^*_{19,2,1,k})'$, $\boldsymbol{Z}_2$ is the $306\times 38$ design matrix for the random effect for $\mbox{IOP}_2$, $\mbox{\bf u}^*_{2,k}=(U^*_{1,1,2,k},\ldots,U^*_{19,1,2,k},U^*_{1,2,2,k},\ldots,U^*_{19,2,2,k})'$, and $\mbox{\bf E}^*_k$ is the $306\times 1$ vector of the $E^*_{ijp,k}$. For simplicity, let $\boldsymbol{b}^*_k=(b^*_{1,k},\ldots,b^*_{4,k})'$. Recall that sparsity priors are placed on the fixed effects:
\begin{equation}
\nonumber b^*_{a,k} \sim \gamma^*_{a,k}N(0,\tau_{a,k})+(1-\gamma^*_{a,k})I_0,\quad \gamma^*_{a,k}\sim\mbox{Bernoulli}(\pi_{a,k}), \quad a=1,\ldots,4,
\end{equation}
where $I_0$ is a point mass at zero. Assumptions made on the random terms are that $\mbox{\bf u}^*_{k}\sim N(\mbox{\bf 0},q^*_kI_{M+2})$, $\mbox{\bf u}^*_{h,k}\sim N(\mbox{\bf 0},q^*_{h,k}I_{38})$ $(h=0,1,2)$, and $\mbox{\bf E}_k^*\sim N(\mbox{\bf 0},s^*_{k}I_{306})$, where $I_d$ is the $d\times d$ identity matrix. Let $\Omega_k^*=(q^*_k,q^*_{0,k},q^*_{1,k},q^*_{2,k},s^*_k)'$.
\begin{enumerate}
\item[Step 1.] For each $a$, draw a sample of $b^*_{a,k}$ from $f(b^*_{a,k}|\mbox{\bf Y}^*_k,\boldsymbol{b}^*_{-a,k},\Omega_k^*)$, where $\boldsymbol{b}^*_{-a,k}$ is the set of all fixed effects except $b^*_{a,k}$. This distribution is a mixture of a point mass at zero and a Gaussian distribution with the Gaussian proportion $\alpha_{a,k}$:
\begin{align}
\nonumber &\gamma_{a,k} \sim \mbox{Bernoulli}(\alpha_{a,k}),\\
\nonumber &b^*_{a,k} \sim \gamma_{a,k}N(\mu_{a,k},v_{a,k})+(1-\gamma_{a,k})I_0,
\end{align}
where
\begin{align}
\nonumber &\mu_{a,k}=\hat{b}_{a,k,MLE}^*(1+V_{a,k}/\tau_{a,k})^{-1},\\
\nonumber &v_{a,k} = V_{a,k}(1+V_{a,k}/\tau_{a,k})^{-1},\\
\nonumber &O_{a,k} = \frac{\pi_{a,k}}{1-\pi_{a,k}}(1+V_{a,k}/\tau_{a,k})^{-1/2}\exp\left\{\frac{1}{2}\zeta_{a,k}^2(1+V_{a,k}/\tau_{a,k})^{-1}\right\},\quad \alpha_{a,k} = \frac{O_{a,k}}{1+O_{a,k}}, \mbox{ and} \\
\nonumber &\zeta_{a,k} = \hat{b}_{a,k,MLE}^*/\sqrt{V_{a,k}}.
\end{align}
Here $\hat{b}_{a,k,MLE}^*$ is the maximum likelihood estimate of $b^*_{a,k}$ which is
\begin{align}
\nonumber &\hat{b}_{a,k,MLE}^*=(\mbox{\bf X}_a'\Sigma_k^{-1}\mbox{\bf X}_a)^{-1}\mbox{\bf X}_a'\Sigma_k^{-1}(\mbox{\bf Y}^*_k-\mbox{\bf X}_{-a}\boldsymbol{b}^*_{-a,k}), \\
\nonumber &V_{a,k} = (\mbox{\bf X}_a'\Sigma_k^{-1}\mbox{\bf X}_a)^{-1}, \mbox{and}\\
\nonumber &\Sigma_k = q^*_k\boldsymbol{Z}_{\boldsymbol{\mathcal{B}}}\boldsymbol{Z}_{\boldsymbol{\mathcal{B}}}' + \sum_{h=0}^2 q^*_{h,k}\boldsymbol{Z}_{h}\boldsymbol{Z}_{h}' + s^*_kI_{306}
\end{align}
where $\mbox{\bf X}_a$ is the $a$-th column of $\mbox{\bf X}$ and $\mbox{\bf X}_{-a}$ is the $\mbox{\bf X}$ with the $a$-th column removed.
\item[Step 2.] Draw a sample of $\Omega_k^*$ by using a random-walk Metropolis-Hastings step from the full conditional distribution
\begin{align}
\nonumber f(\Omega_k^*|\mbox{\bf Y}^*_k,\boldsymbol{b}^*_{k})\propto |\Sigma_{k}|^{-1/2}\exp\{-\frac{1}{2}(\mbox{\bf Y}^*_k-\mbox{\bf X}\boldsymbol{b}^*_k)'\Sigma_k^{-1}(\mbox{\bf Y}^*_k-\mbox{\bf X}\boldsymbol{b}^*_k)\}f(\Omega_k^*).
\end{align}
We use an independent zero-truncated Gaussian distribution as the proposal for each parameter. The proposal variances are automatically estimated from the data by using the maximum likelihood estimates \citep{Wolfinger94}.
\item[Step 3.] Sample the random effect $\mbox{\bf u}_{k}^*$ related to the nonparametric age effect from its full conditional distribution with the other random effects integrated out. This distribution is easily seen to be multivariate Gaussian,
\begin{align}
\nonumber \mbox{\bf u}_{k}^*|\mbox{\bf Y}^*_k,\boldsymbol{b}^*_{k},\Omega_{k}^* \sim N(\boldsymbol{m}_k,V_{k}),
\end{align}
where
\begin{align}
\nonumber &\boldsymbol{m}_k=V_k\boldsymbol{Z}_{\boldsymbol{\mathcal{B}}}'(\sum_{h=0}^2q^*_{h,k}\boldsymbol{Z}_h\boldsymbol{Z}_h'+s^*_kI_{306})^{-1}(\mbox{\bf Y}^*_k-\mbox{\bf X}\boldsymbol{b}^*_k),\\
\nonumber &V_{k}=[\Psi^{-1}_k+(q^*_kI_{M+2})^{-1}]^{-1} , \mbox{and}\\
\nonumber &\Psi^{-1}_k = \boldsymbol{Z}_{\boldsymbol{\mathcal{B}}}'(\sum_{h=0}^2q^*_{h,k}\boldsymbol{Z}_h\boldsymbol{Z}_h'+s^*_kI_{306})^{-1}\boldsymbol{Z}_{\boldsymbol{\mathcal{B}}}.
\end{align}
\end{enumerate}
Note that the MCMC algorithm can be run for each $k$ separately, so the fitting can be parallelized across multiple cores or clusters.
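To make Step 1 concrete, the spike-slab update for a single coefficient can be sketched numerically. This is a minimal illustration, not the authors' implementation; the inputs are hypothetical, and the odds expression built from $\pi_{a,k}$ and $\tau_{a,k}$ is converted to the inclusion probability $\alpha_{a,k}$ before the Bernoulli draw:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_spike_slab_coef(X_a, resid, Sigma_inv, pi_ak, tau_ak):
    """Draw one fixed-effect coefficient from its spike-slab full conditional.

    X_a       : (n,) design column for this coefficient
    resid     : (n,) partial residual Y*_k - X_{-a} b*_{-a,k}
    Sigma_inv : (n, n) inverse of the marginal covariance Sigma_k
    pi_ak     : prior probability of the Gaussian (slab) component
    tau_ak    : slab prior variance
    """
    V_ak = 1.0 / (X_a @ Sigma_inv @ X_a)        # conditional MLE variance
    b_mle = V_ak * (X_a @ Sigma_inv @ resid)    # conditional MLE of b*_{a,k}
    shrink = 1.0 / (1.0 + V_ak / tau_ak)        # (1 + V/tau)^{-1}
    zeta = b_mle / np.sqrt(V_ak)
    # posterior odds of the slab vs. the point mass at zero
    odds = (pi_ak / (1.0 - pi_ak)) * np.sqrt(shrink) * np.exp(0.5 * zeta**2 * shrink)
    alpha = odds / (1.0 + odds)                 # posterior inclusion probability
    if rng.random() < alpha:                    # gamma_{a,k} = 1: draw from the slab
        return rng.normal(b_mle * shrink, np.sqrt(V_ak * shrink))
    return 0.0                                  # gamma_{a,k} = 0: point mass at zero
```

With a large $\tau_{a,k}$ and a strong signal the coefficient is essentially always drawn from the Gaussian component, shrunk toward zero by the factor $(1+V_{a,k}/\tau_{a,k})^{-1}$; with a tiny $\tau_{a,k}$ it collapses to the point mass.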
\section*{Derivation of $\mbox{DF}(\mathbf{t})$ for Model (9)}
Here we derive the form of $\mbox{DF}(\mathbf{t})$ for Model (9), which is introduced in Section 2.4 of the main paper. Recall that Model (9) is given as
\begin{align}
\nonumber Y_{ijp}(\mathbf{t}) =& f(X_{\mbox{age},i}, \mathbf{t})+B_{1}(\mathbf{t})\mbox{IOP}_1+B_{2}(\mathbf{t})\mbox{IOP}_2+\\
\label{e:finalmodel} &U_{ij}(\mathbf{t})+U_{ij1}(\mathbf{t})\mbox{IOP}_1+U_{ij2}(\mathbf{t})\mbox{IOP}_2+E_{ijp}(\mathbf{t}),
\end{align}
where $i=1,\ldots,n$; $j=1, 2$; $p=7,10,15,\ldots,45$; and $\mbox{IOP}_1$ and $\mbox{IOP}_2$ together represent the orthogonalized hyperbola terms. Under the assumption of (5) in Section 2.3 of the main paper, the model \eqref{e:finalmodel} for a given $\mathbf{t}$ can be rewritten in matrix form,
\begin{align}
\label{e:NPM2} \boldsymbol{y}({\mathbf{t}})&=\boldsymbol{\mathcal{B}}\boldsymbol{\nu}(\mathbf{t}) +\mbox{\bf X}_{\mbox{IOP}}\boldsymbol{B}(\mathbf{t})+\boldsymbol{Z}_{\mbox{IOP}}\mbox{\bf U}(\mathbf{t})+\boldsymbol{\epsilon}{(\mathbf{t})},
\end{align}
where $\boldsymbol{\mathcal{B}}$ is the B-spline design matrix, $\mbox{\bf X}_{\mbox{IOP}}$ is the design matrix for the fixed hyperbolic IOP effect, and $\boldsymbol{Z}_{\mbox{IOP}}$ is the design matrix for all random effects. Let $\tilde{\boldsymbol{y}}({\mathbf{t}})=\boldsymbol{y}({\mathbf{t}})-\mbox{\bf X}_{\mbox{IOP}}\boldsymbol{B}(\mathbf{t})$ and $\tilde{\boldsymbol{\epsilon}}{(\mathbf{t})}=\boldsymbol{Z}_{\mbox{IOP}}\mbox{\bf U}(\mathbf{t})+\boldsymbol{\epsilon}{(\mathbf{t})}$. Then, the model \eqref{e:NPM2} becomes
\begin{align}
\label{e:NPM3} \tilde{\boldsymbol{y}}({\mathbf{t}}) = \boldsymbol{\mathcal{B}}\boldsymbol{\nu}(\mathbf{t}) + \tilde{\boldsymbol{\epsilon}}{(\mathbf{t})}.
\end{align}
Let $\boldsymbol{W}$ be the inverse of the covariance of $\tilde{\boldsymbol{\epsilon}}{(\mathbf{t})}$, i.e., $\boldsymbol{W}=\mbox{Cov}( \tilde{\boldsymbol{\epsilon}}{(\mathbf{t})})^{-1}$. Multiplying both sides of \eqref{e:NPM3} by $\boldsymbol{W}^{1/2}$ gives
\begin{align}
\label{e:NPM4} \boldsymbol{W}^{1/2}\tilde{\boldsymbol{y}}({\mathbf{t}}) = \boldsymbol{W}^{1/2}\boldsymbol{\mathcal{B}}\boldsymbol{\nu}(\mathbf{t}) + \boldsymbol{W}^{1/2}\tilde{\boldsymbol{\epsilon}}{(\mathbf{t})}.
\end{align}
A smoothing spline for this model can be obtained by solving the following optimization problem:
\begin{align}
\nonumber \min_{\boldsymbol{\nu}(\mathbf{t})} \{{\parallel \boldsymbol{W}^{1/2}(\tilde{\boldsymbol{y}}({\mathbf{t}})-\boldsymbol{\mathcal{B}}\boldsymbol{\nu}(\mathbf{t})) \parallel}^2 +\lambda_{\mathbf{t}}\boldsymbol{\nu}(\mathbf{t})'\Omega\boldsymbol{\nu}(\mathbf{t})\}.
\end{align}
One can show that the resulting spline estimator is $\hat{\boldsymbol{\nu}}(\mathbf{t})=(\boldsymbol{\mathcal{B}}'\boldsymbol{W}\boldsymbol{\mathcal{B}}+\lambda_{\mathbf{t}}\Omega)^{-1}\boldsymbol{\mathcal{B}}'\boldsymbol{W}\tilde{\boldsymbol{y}}({\mathbf{t}})$. Following the same arguments as in Section 2.3, this penalized spline estimator is equivalent to the posterior mean of $\boldsymbol{\nu}(\mathbf{t})$ given $\tilde{\boldsymbol{y}}({\mathbf{t}})$ under the prior specification $g(\boldsymbol{\nu}(\mathbf{t}))\propto \exp(-\frac{1}{2\sigma_{\mathbf{t}}^2}\boldsymbol{\nu}(\mathbf{t})'\Omega\boldsymbol{\nu}(\mathbf{t}))$, where $\lambda_{\mathbf{t}}={1}/{\sigma_\mathbf{t}^2}$, and the model \eqref{e:NPM4} can be formulated as the mixed model (10) in Section 2.3 of the main paper. Since the penalized spline fit is $\boldsymbol{\mathcal{B}}\hat{\boldsymbol{\nu}}(\mathbf{t})=\boldsymbol{\mathcal{B}}(\boldsymbol{\mathcal{B}}'\boldsymbol{W}\boldsymbol{\mathcal{B}}+\lambda_{\mathbf{t}}\Omega)^{-1}\boldsymbol{\mathcal{B}}'\boldsymbol{W}\tilde{\boldsymbol{y}}({\mathbf{t}})$, the $\mbox{DF}(\mathbf{t})$ for Model (9) is given as
\begin{align}
\nonumber \mbox{DF}(\mathbf{t})= \mbox{trace}\{\boldsymbol{\mathcal{B}}(\boldsymbol{\mathcal{B}}'\boldsymbol{W}\boldsymbol{\mathcal{B}}+\lambda_\mathbf{t}\Omega)^{-1}\boldsymbol{\mathcal{B}}'\boldsymbol{W}\}.
\end{align}
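Numerically, $\mbox{DF}(\mathbf{t})$ is just the trace of the smoother matrix at location $\mathbf{t}$. The following is a minimal sketch under the assumption that $\boldsymbol{\mathcal{B}}$, $\boldsymbol{W}$, $\Omega$, and $\lambda_\mathbf{t}$ are available as arrays (the small random inputs are hypothetical):

```python
import numpy as np

def df_t(B, W, Omega, lam):
    """DF(t) = trace{ B (B'WB + lam*Omega)^{-1} B'W } for one location t.

    B     : (n, M) B-spline design matrix
    W     : (n, n) inverse covariance (weight) matrix
    Omega : (M, M) penalty matrix
    lam   : scalar smoothness penalty lambda_t
    """
    BtW = B.T @ W
    # hat (smoother) matrix mapping y-tilde to the fitted values
    H = B @ np.linalg.solve(BtW @ B + lam * Omega, BtW)
    return np.trace(H)
```

As a sanity check, with $\lambda_\mathbf{t}=0$ and a full-column-rank $\boldsymbol{\mathcal{B}}$ the smoother is a projection, so $\mbox{DF}(\mathbf{t})$ equals the number of basis columns, and it decreases monotonically as $\lambda_\mathbf{t}$ grows.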
\newpage
\section*{Model Selection Results for Glaucoma Data}
We applied the model selection strategy described in Section 4 to find the final model to be fitted via MCMC. We first selected the fixed effects to be included in the final model and their forms. Three fixed effects were considered: age, IOP, and eye (left vs. right). For the form of the age effect, we considered two possibilities: linear or nonparametric. For the form of the IOP effect, we considered three possibilities: linear, hyperbolic, or nonparametric. Models without the eye effect were also compared. In total, we compared 12 different models for the fixed effect selection. The $P^h$ scores are given in Table \ref{tb:fixed}. The model with the nonparametric age effect, the hyperbolic IOP effect, and no eye effect showed the highest $P^h$ under $aBIC$.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|l|c|c|c|c| }\hline
\multicolumn{2}{|c|}{Form of fixed effect} & \multicolumn{2}{c|}{$P^h, aBIC$} & \multicolumn{2}{c|}{$P^h, aAIC$}\\ \cline{1-2}\cline{3-4}\cline{5-6}
Age & IOP & without Eye effect & with Eye effect & without Eye effect & with Eye effect\\\hline
Nonparametric & Nonparametric & 0.005 & 0.020 & 0.003 & 0.013 \\
Nonparametric & Hyperbolic & 0.543 & 0.377 & 0.022 & 0.958\\
Nonparametric & Linear & 0.000 & 0.000 & 0.000 & 0.000\\
Linear & Nonparametric & 0.013 & 0.043 & 0.000 & 0.003\\
Linear & Hyperbolic & 0.000 & 0.000 & 0.000 & 0.000\\
Linear & Linear & 0.000 & 0.000 & 0.000 & 0.000\\ \hline
\end{tabular}
\caption{ \footnotesize Comparison among fixed effects. }\label{tb:fixed}
\end{center}
\end{table}
Once we selected the fixed effects, we assessed whether an interaction term between age and IOP was needed. As shown in Table \ref{tb:interaction}, our model selection criteria made it clear that the interaction was not necessary.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|c|}\hline
Interaction term in model & $P^h$ \\ \hline
No interaction & 1 \\
$f(\mbox{age})\times\mbox{IOP}_1$ & 0 \\
$f(\mbox{age})\times\mbox{IOP}_2$ & 0 \\
$f(\mbox{age})\times\mbox{IOP}_1+f(\mbox{age})\times\mbox{IOP}_2$ & 0 \\ \hline
\end{tabular}
\caption{ \footnotesize Comparison among interaction terms, results same for $aBIC$ and $aAIC$. }\label{tb:interaction}
\end{center}
\end{table}
Finally, with the selected fixed terms, we compared several models by varying the random effects. Two levels of random effects were considered: the subject-level random effect and the longitudinal eye-level random effect. For the form of the eye-level random effect in terms of IOP, as illustrated in Section 2.2, we considered three forms: constant, linear, or hyperbolic. The results are given in Table \ref{tb:random}. Our model selection strategy selected the eye-level random effect with a hyperbolic form in IOP.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|c|c|c|c|}\hline
\multirow{2}{*}{Form of IOP random effect} & \multicolumn{2}{|c|}{$P^h$, $aBIC$} & \multicolumn{2}{c|}{$P^h$, $aAIC$} \\ \cline{2-3} \cline{4-5}
& without subject RE & with subject RE & without subject RE & with subject RE\\\hline
No IOP random effect & 0.000 & 0.000 & 0.000 & 0.000\\
Constant & 0.000 & 0.000 & 0.000 & 0.000\\
Linear & 0.005 & 0.000 & 0.002 & 0.000\\
Hyperbolic without intercept & 0.000 & 0.000 & 0.000 & 0.000\\
Hyperbolic with intercept & 0.991 & 0.003 & 0.798 & 0.200\\ \hline
\end{tabular}
\caption{ \footnotesize Comparison among random effects. }\label{tb:random}
\end{center}
\end{table}
\section*{Model Selection Results for Simulation Study}
As described in Section 4, we conducted a simulation study to investigate the performance of the model selection approach. Table \ref{tb:sim} provides the average $P^h$ over 100 replications for each model in each scenario. For each simulation setting, the $P^h$ was maximized by the correct model in all 100 replications.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|cccc|}\hline
\multirow{2}{*}{True Model} & \multicolumn{4}{c|}{$P^h$, $aBIC$} \\ \cline{2-5}
& Model 1 & Model 2 & Model 3 & Model 4\\\hline
Model 1 & 0.984 & 0.015 & 0.000 & 0.000\\
Model 2 & 0.273 & 0.723 & 0.003 & 0.002\\
Model 3 & 0.093 & 0.292 & 0.602 & 0.013\\
Model 4 & 0.000 & 0.000 & 0.000 & 1.000\\ \hline
\end{tabular}
\caption{ \footnotesize Average $P^h$ in Simulation Study. }\label{tb:sim}
\end{center}
\end{table}
\section*{Assessing Varying Smoothness of the Age Effect Across the Sclera}
As described in Section 2.3, the smoothness penalty parameter $\lambda_\mathbf{t}$ is equivalent to ${\sigma_{\epsilon,\mathbf{t}}^2}/{\sigma_\mathbf{t}^2}$, where ${\sigma_{\epsilon,\mathbf{t}}^2}$ and $\sigma_\mathbf{t}^2$ are the variances of the residual error and the random effect, respectively. A key strength of our framework is that it allows the smoothness penalty parameter to vary across the functional domain, since ${\sigma_{\epsilon,\mathbf{t}}^2}/{\sigma_\mathbf{t}^2}$ depends on $\mathbf{t}$. Using a smoothness penalty parameter that is common across $\mathbf{t}$ is equivalent to assuming ${\sigma_{\epsilon,\mathbf{t}}^2}=\lambda\sigma_\mathbf{t}^2$ in the estimation procedure. Our model selection strategy can also be used to assess whether the data favor a varying smoothness penalty parameter over a common one. In particular, consider the model (8) introduced in Section 2.3:
\begin{align}
\label{e:Bspline3} \boldsymbol{y}(\mathbf{t})= 1_n\beta_{0}(\mathbf{t})+X_{\mbox{age}}\beta_{1}(\mathbf{t})+Z_{\boldsymbol{\mathcal{B}}}\boldsymbol{u}(\mathbf{t})+\boldsymbol{\epsilon}(\mathbf{t}),
\end{align}
where $\boldsymbol{u}(\mathbf{t}) \sim N(0,\sigma_\mathbf{t}^2I)$ and $\boldsymbol{\epsilon}(\mathbf{t}) \sim N(0,\sigma_{\epsilon,\mathbf{t}}^2I)$. To compute the BIC of the model with a common smoothness penalty $\lambda$, we can rewrite the model \eqref{e:Bspline3} as
\begin{align}
\label{e:Bspline4} \boldsymbol{y}(\mathbf{t})= 1_n\beta_{0}(\mathbf{t})+X_{\mbox{age}}\beta_{1}(\mathbf{t})+\tilde{\boldsymbol{\epsilon}}(\mathbf{t}),
\end{align}
where $\tilde{\boldsymbol{\epsilon}}(\mathbf{t})\sim N(0,\tilde{V})$ and $\tilde{V}=\sigma_\mathbf{t}^2(Z_{\boldsymbol{\mathcal{B}}}Z_{\boldsymbol{\mathcal{B}}}'+\lambda I)$ by using the fact that ${\sigma_{\epsilon,\mathbf{t}}^2}=\lambda\sigma_\mathbf{t}^2$. This model can be easily fitted after multiplying both sides of \eqref{e:Bspline4} by $(Z_{\boldsymbol{\mathcal{B}}}Z_{\boldsymbol{\mathcal{B}}}'+\lambda I)^{-1/2}$. The resulting BIC can be compared with the BIC of the model with varying smoothness penalty via our model selection procedure.
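The premultiplication step can be sketched numerically: a symmetric inverse square root computed from an eigendecomposition whitens the error covariance so the model can be fit by ordinary least squares. This is a minimal sketch with hypothetical small inputs, not the authors' code:

```python
import numpy as np

def inv_sqrt(M):
    """Symmetric inverse square root of a positive definite matrix."""
    evals, evecs = np.linalg.eigh(M)
    return evecs @ np.diag(evals ** -0.5) @ evecs.T

# Whitening the common-smoothness model: with Cov proportional to (Z Z' + lam I),
# premultiplying both sides by inv_sqrt(Z Z' + lam I) yields iid errors.
rng = np.random.default_rng(4)
Z = rng.standard_normal((8, 3))     # stand-in for the B-spline random-effect design
lam = 2.0
M = Z @ Z.T + lam * np.eye(8)
A = inv_sqrt(M)                     # multiply y and the fixed-effect design by A
```

After premultiplication, the transformed error covariance is $\sigma_\mathbf{t}^2 I$, so standard least-squares machinery (and its BIC) applies directly.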
We conducted a simple simulation study to investigate how well the $P^h$ works in the comparison between common and varying smoothness. We considered two different models:
\begin{itemize}
\item Model 1. $\lambda_\mathbf{t}={\sigma_{\epsilon,\mathbf{t}}^2}/{\sigma_\mathbf{t}^2}$ (varying smoothness),
\item Model 2. $\lambda={\sigma_{\epsilon,\mathbf{t}}^2}/{\sigma_\mathbf{t}^2}$ (common smoothness).
\end{itemize}
First, we fitted the glaucoma data using \eqref{e:Bspline3}. When generating the simulated data sets, we used \eqref{e:Bspline3} for Model 1 and \eqref{e:Bspline4} for Model 2. We generated 100 simulated data sets from each true model. For each simulated data set, we performed the model selection procedure using $P^h$ as described in Section 4 to see how well the procedure chooses the true model. It turned out that the model selection procedure always chose the correct model.
We also applied this comparison to the glaucoma data. Figure \ref{fg1} shows a comparison of the computed $P^h$ for various values of the common smoothness penalty $\lambda$. For all values considered here, our model selection procedure favors the fit with varying smoothness over the fit with common smoothness.
\begin{figure}[h]
\centerline{\includegraphics[scale=0.7]{smoothness.pdf}}
\vspace{-0.2in}
\caption{ \footnotesize Comparison between common and varying smoothness}\label{fg1}
\end{figure}
\section*{Simulation Study to Examine Potential Identifiability Issue in Generalized Additive Mixed Model (GAMM)}
As pointed out in Section 4, one may have difficulty in detecting a nonparametric age effect when subject-specific random effects are included in the model at the same time. However, this is a general issue for GAMMs that include both a smooth nonparametric effect of a subject-level covariate and a subject-specific random effect when there are repeated measures per subject. To investigate this issue, we conducted an additional simulation study as follows. We considered the following three models:
\begin{itemize}
\item Model 1 (subject-specific random effect): $Y_{ijp} = \mu+U_{i}+E_{ijp}$,
\item Model 2 (nonparametric age effect): $Y_{ijp} = f(X_{\mbox{age},i})+E_{ijp}$, and
\item Model 3 (nonparametric age effect+subject-specific random effect): $Y_{ijp} = f(X_{\mbox{age},i})+U_{i}+E_{ijp}$.
\end{itemize}
Here, $Y_{ijp}$ was the first basis coefficient of the scleral strain data in the wavelet space. Each of Models 1-3 was fit to the real scleral strain data, and we generated 100 simulation data sets from each fitted model. For each simulation data set, we performed model selection using (1) the marginal BIC (BIC$_1$) or (2) the modified BIC (BIC$_2$), in which the number of parameters in the penalty was counted using the degrees of freedom of the nonparametric fit. Table \ref{tb:sim2} shows the number of times that each model was selected out of 100 replications when the random and fixed effects were evaluated together. We can clearly see that the identifiability issue occurred when the true model was Model 3. In this case, neither BIC$_1$ nor BIC$_2$ was able to correctly find the true model (Model 3). However, this issue could be rectified by performing the fixed effect selection first, followed by the random effect selection. Using such a two-step selection procedure, we were able to find the correct model most of the time (99/100 for BIC$_1$ and 95/100 for BIC$_2$). This is the strategy we used for our analysis, and it appears to be an effective fix for this problem, but the issue should clearly be further investigated in the setting of GAMMs.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|l|ccc|}\hline
\multirow{2}{*}{True Model } & \multirow{2}{*}{Criterion} & \multicolumn{3}{c|}{Selected Model} \\ \cline{3-5}
& & Model 1 & Model 2 & Model 3 \\\hline
\multirow{2}{*}{Model 1} & BIC$_1$ &100 & 0 & 0 \\
& BIC$_2$ & 100 & 0 & 0 \\ \hline
\multirow{2}{*}{Model 2} & BIC$_1$ & 0 & 100 & 0 \\
& BIC$_2$ & 53 & 47 & 0 \\ \hline
\multirow{2}{*}{Model 3} & BIC$_1$ & 98 & 0 & 2 \\
& BIC$_2$ & 99 & 0 & 1 \\ \hline
\end{tabular}
\caption{ \footnotesize The number of times that each model was selected }\label{tb:sim2}
\end{center}
\end{table}
\newpage
\section*{Sensitivity Analyses}
\subsection*{Model Choice: Including Left vs. Right Eye Fixed Effect Function}
Recall that based on the model selection heuristic, when using $aBIC$ the second-best model also included an ``eye'' effect, which consisted of a fixed offset for left vs. right eye. This model was the best model when using $aAIC$ for the model selection heuristic. Thus, for this model and the same tensor wavelet basis used for the primary analysis, we repeated the analysis of the data. The \textit{MPSvsAge-eye.mp4} file shows a movie of the IOP-specific MPS fit based on the model with the left vs. right eye effect. The \textit{AUCvsAge-eye.mp4} file shows a movie of the AUC summaries based on this model. The \textit{Intrafunctional\_correlations-eye.mp4} file shows a movie of the induced intrafunctional correlation from this model. The fits based on this model are similar to those based on the primary model used in the paper, so the substantive results are not sensitive to inclusion of the left vs. right eye effect.
\subsection*{Regularization Hyperparameters: Assessing Results with No Shrinkage}
One concern voiced by reviewers was that results could be sensitive to our choice of prior for the fixed effects. We use a spike-slab sparsity prior, and as pointed out by one reviewer, results of variable selection can be strongly sensitive to the choice of the regularization hyperparameters $\pi_{aj}$ and $\tau_{aj}$ for this prior. As described in the main paper, we used an empirical Bayes approach to estimate these parameters from the data. However, there is some concern that results may be driven by this informative prior. To assess this, we performed another analysis using relatively uninformative hyperpriors with $\pi_{aj} \approx 1$ and $\tau_{aj}=10^6$ for all $a,j$. This is an extreme case of the spike-slab prior for which the probability of the zero spike is negligible and the linear shrinkage induced by the Gaussian variance is also mostly negligible. We have shown in previous publications that this prior, which we call the ``no smoothing prior'', gives point estimates the same as if no smoothing were done, and is thus unbiased. We applied this prior to our data set using the same tensor wavelet basis used for the primary analysis, and repeated all analyses of the data. The \textit{MPSvsAge-nosmooth.mp4} file shows a movie of the IOP-specific MPS fit based on the model with no shrinkage. The \textit{AUCvsAge-nosmooth.mp4} file shows a movie of the AUC summaries based on the model with no shrinkage. The \textit{Intrafunctional\_correlations-nosmooth.mp4} file shows a movie of the induced intrafunctional correlation from this model for this choice of prior. The fits based on this model are similar to those based on the primary model used in the paper, so we can see that there is no significant bias induced by our informative priors. The main difference is that the fits are slightly less smooth over MPS.
\subsection*{Choice of Basis: Wavelet-Regularized Principal Components}
Recall that a total of 269 wavelet coefficients after the outlier filtering were used as basis coefficients in our application. As described in Section 5.2, we also considered using principal component (PC) scores computed on these wavelet coefficients as basis coefficients. In particular, we applied a singular value decomposition to $\mathbf{W}$, where $\mathbf{W}$ is the $306\times 269$ matrix of wavelet coefficients. Let $\mathbf{V}$ be the matrix of right singular vectors. The wavelet-space PC scores were computed as $\mathbf{Y^*}=\mathbf{W}\mathbf{V}$. We kept only the leading 27 columns of $\mathbf{Y^*}$, which account for most of the variability according to the scree plot ($>99.5\%$). The figure below contains the first 9 PCs, and the file \textit{wPCs.pdf} contains plots of all 27 PC bases. The \textit{MPSvsAge-pc.mp4} file shows a movie of the IOP-specific MPS fit based on the PC scores. The \textit{AUCvsAge-pc.mp4} file shows a movie of the AUC summaries based on the PC scores. The \textit{Intrafunctional\_correlations-pc.mp4} file shows a movie of the induced intrafunctional correlation from this choice of basis. Note that although the PC basis allows more global correlations, the strongest correlations include the local correlations at nearby scleral locations that also dominate these correlation surfaces for the tensor wavelet basis. The fits based on the PC scores are similar to those based on the wavelet coefficients. The levels of variability are greater for the wPC bases, and the age effects in the regions more distant from the ONH are more nonlinear than for the wavelet bases, possibly because of the extra induced correlation between these locations and scleral positions close to the ONH. Although the PC basis keeps $99.5\%$ of the variability in the data according to the scree plots, the minimum correlation between the raw functions and their PC projections is $<0.97$, which results in some loss of information.
Also, given such a small number of basis functions, $27$, it is not clear whether this is sufficiently flexible to capture all of the structure at the various levels. These are some of the reasons we prefer the wavelet bases for this application.
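The construction of the wavelet-space PC scores described above can be sketched as follows, using a randomly generated stand-in for the $306\times 269$ wavelet coefficient matrix $\mathbf{W}$ (the real data are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((306, 269))     # stand-in for the wavelet coefficient matrix

# Singular value decomposition W = U diag(s) V'
U, s, Vt = np.linalg.svd(W, full_matrices=False)
V = Vt.T                                 # right singular vectors

Ystar = W @ V                            # wavelet-space PC scores (equals U * s)
Ystar27 = Ystar[:, :27]                  # keep the leading 27 components

# proportion of total variability retained by the leading 27 components
prop = (s[:27] ** 2).sum() / (s ** 2).sum()
```

The columns of the score matrix are mutually orthogonal, with squared norms equal to the squared singular values, which is what a scree plot summarizes.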
\begin{figure}
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{age_by_pressure_with_df83.png}}
\caption{ \footnotesize Tensor Wavelet, Main Model, Empirical Bayes Shrinkage}
\label{fig:sfig1}
\endminipage \hfill
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{age_by_pressure_with_df83-pc.png}}
\label{fig:sfig1}
\caption{ \footnotesize Wavelet-Regularized PC Basis, Main Model}
\endminipage \\
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{age_by_pressure_with_df83-model8.png}}
\caption{ \footnotesize Tensor Wavelet, Model with Left vs. Eye Effect, Empirical Bayes Shrinkage}
\label{fig:sfig1}
\endminipage \hfill
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{age_by_pressure_with_df83-nosmooth.png}}
\label{fig:sfig1}
\caption{ \footnotesize Tensor Wavelet, Main Model, No Shrinkage Prior}
\endminipage
\end{figure}
\begin{figure}
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{AUC_by_age_83.png}}
\caption{ \footnotesize Tensor Wavelet, Main Model, Empirical Bayes Shrinkage}
\label{fig:sfig1}
\endminipage \hfill
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{AUC_by_age_83-pc.png}}
\label{fig:sfig1}
\caption{ \footnotesize Wavelet-Regularized PC Basis, Main Model}
\endminipage \\
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{AUC_by_age_83-model8.png}}
\caption{ \footnotesize Tensor Wavelet, Model with Left vs. Eye Effect, Empirical Bayes Shrinkage}
\label{fig:sfig1}
\endminipage \hfill
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{AUC_by_age_83-nosmooth.png}}
\label{fig:sfig1}
\caption{ \footnotesize Tensor Wavelet, Main Model, No Shrinkage Prior}
\endminipage
\end{figure}
\begin{figure}
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{auc_circum.png}}
\caption{ \footnotesize Tensor Wavelet, Main Model, Empirical Bayes Shrinkage}
\label{fig:sfig1}
\endminipage \hfill
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{auc_circum_Model2_WavPC.png}}
\label{fig:sfig1}
\caption{ \footnotesize Wavelet-Regularized PC Basis, Main Model}
\endminipage \\
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{auc_circum_Model8.png}}
\caption{ \footnotesize Tensor Wavelet, Model with Left vs. Eye Effect, Empirical Bayes Shrinkage}
\label{fig:sfig1}
\endminipage \hfill
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{auc_circum_Model2_njnosmooth.png}}
\label{fig:sfig1}
\caption{ \footnotesize Tensor Wavelet, Main Model, No Shrinkage Prior}
\endminipage
\end{figure}
\begin{figure}
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{AUC_by_age_by_region.png}}
\caption{ \footnotesize Tensor Wavelet, Main Model, Empirical Bayes Shrinkage}
\label{fig:sfig1}
\endminipage \hfill
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{AUC_by_age_by_region-pc.png}}
\label{fig:sfig1}
\caption{ \footnotesize Wavelet-Regularized PC Basis, Main Model}
\endminipage \\
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{AUC_by_age_by_region-model8.png}}
\caption{ \footnotesize Tensor Wavelet, Model with Left vs. Eye Effect, Empirical Bayes Shrinkage}
\label{fig:sfig1}
\endminipage \hfill
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{AUC_by_age_by_region-nosmooth.png}}
\label{fig:sfig1}
\caption{ \footnotesize Tensor Wavelet, Main Model, No Shrinkage Prior}
\endminipage
\end{figure}
\begin{figure}
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{interfuntional_cor_83.png}}
\caption{ \footnotesize Tensor Wavelet, Main Model, Empirical Bayes Shrinkage}
\label{fig:sfig1}
\endminipage \hfill
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{interfuntional_cor_83-pc.png}}
\label{fig:sfig1}
\caption{ \footnotesize Wavelet-Regularized PC Basis, Main Model}
\endminipage \\
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{interfuntional_cor_83-model8.png}}
\caption{ \footnotesize Tensor Wavelet, Model with Left vs. Eye Effect, Empirical Bayes Shrinkage}
\label{fig:sfig1}
\endminipage \hfill
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{interfuntional_cor_83-nosmooth.png}}
\label{fig:sfig1}
\caption{ \footnotesize Tensor Wavelet, Main Model, No Shrinkage Prior}
\endminipage
\end{figure}
\begin{figure}
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{marginal_variance.png}}
\caption{ \footnotesize Tensor Wavelet, Main Model, Empirical Bayes Shrinkage}
\label{fig:sfig1}
\endminipage \hfill
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{marginal_variance-pc.png}}
\label{fig:sfig1}
\caption{ \footnotesize Wavelet-Regularized PC Basis, Main Model}
\endminipage \\
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{marginal_variance-model8.png}}
\caption{ \footnotesize Tensor Wavelet, Model with Left vs. Eye Effect, Empirical Bayes Shrinkage}
\label{fig:sfig1}
\endminipage \hfill
\minipage{0.45\textwidth}
\centerline{\includegraphics[scale=0.4]{marginal_variance-nosmooth.png}}
\label{fig:sfig1}
\caption{ \footnotesize Tensor Wavelet, Main Model, No Shrinkage Prior}
\endminipage
\end{figure}
\newpage
\section*{Other Results from Main Paper}
There are other supplemental plots and movies referenced in the paper and included in the supplement including:
\begin{description}
\item[RawMPScurves.zip:] plots of all raw MPS curves, results after robust filtering to remove outliers, and results after wavelet compression to show that virtually no information is lost by the reduction from $14,400$ observations to $269$ basis coefficients. The following figure contains the raw MPS data for one eye from a subject at the 9 IOP levels, plus plots after outlier removal and compression to show how near-lossless the transform is.
\begin{figure}
\centerline{\includegraphics[scale=0.8]{compression.png}}
\caption{ \footnotesize Plot of raw data for one eye from 39 year old subject at 9 levels of IOP (top), plus results after outlier removal (middle), and after wavelet compression down to 269 coefficients (bottom), demonstrating the near-lossless nature of the transform. The plots for all other eyes are in RawMPScurves.zip.}\label{fig:compression}
\end{figure}
\item[MPSvsAge-wave.mp4:] Movie of MPS vs. age for each IOP based on the tensor wavelet basis functions.
\item[Combo\_plot.mp4:] Movie of key summary results based on the tensor wavelet basis functions.
\item[Intrafunctional\_correlations.mp4:] Movie showing intrafunctional correlations induced by tensor wavelet basis functions.
\item[Intra\_IOP\_corr.mp4:] Movie showing interfunctional variance and serial correlation across IOP from the same eye based on the tensor wavelet basis functions.
\item[MPSvsAge-pc.mp4:] Movie of MPS vs. age for each IOP based on the principal component basis functions.
\item[AUCvsAge-pc.mp4:] Movie of AUC vs. age based on the principal component basis functions.
\item[Intrafunctional\_correlations-pc.mp4:] Movie showing intrafunctional correlations induced by the principal component basis functions.
\item[MPSvsAge-eye.mp4:] Movie of MPS vs. age for each IOP based on tensor wavelet basis functions and model including left vs. right eye effect.
\item[AUCvsAge-eye.mp4:] Movie of AUC vs. age based on tensor wavelet basis functions and model including left vs. right eye effect.
\item[Intrafunctional\_correlations-eye.mp4:] Movie showing intrafunctional correlations induced by the tensor wavelet basis functions and model including left vs. right eye effect.
\item[MPSvsAge-nosmooth.mp4:] Movie of MPS vs. age for each IOP based on tensor wavelet basis functions and model with no smoothing ($\pi_{\cdot}=1, \tau_{\cdot}=10^6$).
\item[AUCvsAge-nosmooth.mp4:] Movie of AUC vs. age based on tensor wavelet basis functions and model with no smoothing ($\pi_{\cdot}=1, \tau_{\cdot}=10^6$).
\item[Intrafunctional\_correlations-nosmooth.mp4:] Movie showing intrafunctional correlations induced by the tensor wavelet basis functions and model with no smoothing ($\pi_{\cdot}=1, \tau_{\cdot}=10^6$).
\end{description}
\section*{Details of MCMC Convergence Diagnostics}
We used the package {\itshape coda} in R to run the Geweke convergence diagnostics. We examined three sets of parameters, namely the fixed effect coefficients (FE), the nonparametric age coefficients (Age NP), and the variance components (VCs), as well as the combined results for all parameters. For each Markov chain, we tested equality of the means of the first 25\% and the last 25\% of the chain. Tables \ref{TableDiag1}-\ref{TableDiag4} show the Geweke Z-score quantiles and mean, the quantiles and mean of the respective p-values, the median of the effective sample size (ESS), the proportion of rejections of equality of the means, and the Metropolis-Hastings (M-H) acceptance probability for the variance components.
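A simplified version of this comparison can be sketched as follows. Note that coda's Geweke diagnostic estimates standard errors from the spectral density at frequency zero, whereas this sketch uses simple batch means, so the z-scores will differ slightly from coda's output:

```python
import numpy as np

def geweke_z(chain, first=0.25, last=0.25):
    """Geweke-style z-score comparing the means of the first and last
    segments of a chain, with batch-means standard errors (a simplification
    of coda's spectral-density-based estimate)."""
    chain = np.asarray(chain, dtype=float)
    n = len(chain)
    a = chain[: int(first * n)]
    b = chain[-int(last * n):]

    def batch_se(x, nbatch=20):
        means = np.array([m.mean() for m in np.array_split(x, nbatch)])
        return means.std(ddof=1) / np.sqrt(nbatch)

    return (a.mean() - b.mean()) / np.sqrt(batch_se(a) ** 2 + batch_se(b) ** 2)
```

For a stationary chain the z-score behaves approximately like a standard normal variate, so a large absolute value (and hence a small p-value) indicates a drifting mean and a failure to converge.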
\begin{table}[!h]
\centering
\caption{Geweke convergence diagnostics summaries for the main model presented in the paper.}
\label{TableDiag1}
\begin{tabular}{|c|ccccc|}
\hline
& &FE & Age NP & VCs & Combined \\
\hline
P-value & Mean & 0.496 & 0.493 & 0.472 & 0.489 \\
& Q025 & 0.025 & 0.022 & 0.006 & 0.000 \\
& Q05 & 0.053 & 0.060 & 0.018 & 0.000 \\
& Median & 0.506 & 0.493 & 0.473 & 0.392 \\
& Q95 & 0.919 & 0.942 & 0.950 & 0.948 \\
& Q975 & 0.956 & 0.969 & 0.972 & 0.974 \\
\hline
Geweke & Mean & 0.046 & -0.006 & 0.040 & 0.007 \\
& Q025 & -1.924 & -1.951 & -2.058 & -1.983 \\
& Q05 & -1.465 & -1.724 & -1.752 & -1.726 \\
& Median & -0.042 & 0.119 & 0.022 & 0.092 \\
& Q95 & 1.754 & 1.521 & 1.907 & 1.625 \\
& Q975 & 1.976 & 1.802 & 2.433 & 1.941 \\
\hline
ESS & Median & 912.414 & 887.777 & 249.245 & 762.605 \\
\hline
\% of Rejections & & 2.23\% & 4.24\% & 6.39\% & 4.43\% \\
\hline
M-H & Mean & & & 0.929 & \\
& Q025 & & & 0.877 & \\
& Q05 & & & 0.911 & \\
& Median & & & 0.930 & \\
& Q95 & & & 0.957 & \\
& Q975 & & & 0.971 & \\
\hline
\end{tabular}
\end{table}
\begin{table}[!h]
\centering
\caption{Geweke convergence diagnostics summaries for the model including left vs. right eye fixed effect function.}
\label{TableDiag2}
\begin{tabular}{|c|ccccc|}
\hline
& &FE & Age NP & VCs & Combined \\
\hline
P-value & Mean & 0.514 & 0.487 & 0.473 & 0.487 \\
& Q025 & 0.024 & 0.026 & 0.008 & 0.000 \\
& Q05 & 0.057 & 0.047 & 0.023 & 0.000 \\
& Median & 0.525 & 0.480 & 0.460 & 0.442 \\
& Q95 & 0.962 & 0.945 & 0.954 & 0.943 \\
& Q975 & 0.978 & 0.973 & 0.977 & 0.971 \\
\hline
Geweke & Mean & 0.009 & -0.004 & -0.011 & -0.004 \\
& Q025 & -1.965 & -1.922 & -2.291 & -1.985 \\
& Q05 & -1.549 & -1.631 & -1.841 & -1.669 \\
& Median & 0.033 & -0.045 & -0.002 & -0.028 \\
& Q95 & 1.494 & 1.780 & 1.835 & 1.777 \\
& Q975 & 1.887 & 2.085 & 2.236 & 2.108 \\
\hline
ESS & Median & 865.289 & 899.544 & 253.117 & 820.286 \\
\hline
\% of Rejections & & 3.2\% & 5.5\% & 7.4\% & 5.6\% \\
\hline
M-H & Mean & & & 0.929 & \\
& Q025 & & & 0.888 & \\
& Q05 & & & 0.910 & \\
& Median & & & 0.930 & \\
& Q95 & & & 0.958 & \\
& Q975 & & & 0.968 & \\
\hline
\end{tabular}
\end{table}
\begin{table}[!h]
\centering
\caption{Geweke convergence diagnostics summaries for the model with no regularization/shrinkage.}
\label{TableDiag3}
\begin{tabular}{|c|ccccc|}
\hline
& &FE & Age NP & VCs & Combined \\
\hline
P-value & Mean & 0.491 & 0.486 & 0.481 & 0.486 \\
& Q025 & 0.024 & 0.015 & 0.009 & 0.000 \\
& Q05 & 0.045 & 0.032 & 0.029 & 0.000 \\
& Median & 0.482 & 0.483 & 0.494 & 0.389 \\
& Q95 & 0.960 & 0.953 & 0.945 & 0.913 \\
& Q975 & 0.982 & 0.975 & 0.969 & 0.956 \\
\hline
Geweke & Mean & -0.026 & 0.011 & 0.010 & 0.006 \\
& Q025 & -2.067 & -2.182 & -2.144 & -2.132 \\
& Q05 & -1.754 & -1.723 & -1.788 & -1.759 \\
& Median & -0.043 & 0.030 & 0.041 & 0.021 \\
& Q95 & 1.652 & 1.890 & 1.759 & 1.873 \\
& Q975 & 1.958 & 2.137 & 2.203 & 2.128 \\
\hline
ESS & Median & 1,000.000 & 608.125 & 253.413 & 590.288 \\
\hline
\% of Rejections & & 5.9\% & 8.3\% & 5.8\% & 7.5\% \\
\hline
M-H & Mean & & & 0.928 & \\
& Q025 & & & 0.900 & \\
& Q05 & & & 0.909 & \\
& Median & & & 0.928 & \\
& Q95 & & & 0.958 & \\
& Q975 & & & 0.969 & \\
\hline
\end{tabular}
\end{table}
\begin{table}[!h]
\centering
\caption{Geweke convergence diagnostics summaries for the wavelet-regularized principal components model.}
\label{TableDiag4}
\begin{tabular}{|c|ccccc|}
\hline
& &FE & Age NP & VCs & Combined \\
\hline
P-value & Mean & 0.466 & 0.426 & 0.454 & 0.437 \\
& Q025 & 0.020 & 0.000 & 0.009 & 0.000 \\
& Q05 & 0.027 & 0.015 & 0.021 & 0.000 \\
& Median & 0.476 & 0.303 & 0.450 & 0.080 \\
& Q95 & 0.955 & 0.860 & 0.926 & 0.711 \\
& Q975 & 0.972 & 0.900 & 0.943 & 0.941 \\
\hline
Geweke & Mean & -0.158 & 0.074 & -0.161 & -0.002 \\
& Q025 & -2.272 & -2.351 & -2.168 & -2.285 \\
& Q05 & -2.026 & -1.991 & -1.773 & -1.991 \\
& Median & -0.098 & 0.203 & -0.307 & 0.077 \\
& Q95 & 1.528 & 2.110 & 1.674 & 1.932 \\
& Q975 & 2.029 & 3.488 & 2.506 & 2.592 \\
\hline
ESS & Median & 1,000.000 & 631.661 & 237.495 & 611.360 \\
\hline
\% of Rejections & & 8.6\% & 13.8\% & 7.4\% & 11.8\% \\
\hline
M-H & Mean & & & 0.935 & \\
& Q025 & & & 0.908 & \\
& Q05 & & & 0.909 & \\
& Median & & & 0.930 & \\
& Q95 & & & 0.970 & \\
& Q975 & & & 0.979 & \\
\hline
\end{tabular}
\end{table}
\section*{Simulated Pseudo Data}
We generated pseudo-data from model (12) in Section 2.5. For each basis coefficient, model (12) was fitted using the \textit{lme} function in the \textit{nlme} R package \citep{nlme}, and the estimated parameters were used to generate the simulated data. Once the basis coefficients were generated, they were transformed back to the data space using the inverse of the 2D rectangular wavelet transformation. As a result, the simulated data set has 306 curves, each observed at 14,400 functional locations. The file \textit{Y\_simulated.mat} contains the simulated data. Plots of the simulated pseudo data for 3 subjects at all 9 IOP levels are given in Supplemental Figure 28, and similar plots of the real data for 3 subjects are given in Figure 27; together they demonstrate that the simulated data look much like the real data.
The zip file \textit{EYE\_Toolbox.zip} contains the pseudo data along with all scripts required to perform the analyses in this paper, plus many of the additional plots and diagnostics provided to illustrate its properties. The file \textit{Pseudo\_data\_analysis.pdf} contains a detailed description of the overall procedure for fitting the semiparametric functional mixed model and obtaining inferential results with a simulated dataset, and can be adapted by users to analyze their own data.
\begin{figure}
\minipage{0.75\textwidth}
\centerline{\includegraphics[scale=0.5]{data_1.png}}
\caption{ \footnotesize Real data for three subjects}
\label{fig:sfig1}
\endminipage \\
\minipage{0.75\textwidth}
\centerline{\includegraphics[scale=0.5]{simulated_data_1.png}}
\caption{ \footnotesize Simulated pseudo data for three simulated subjects}
\label{fig:sfig2}
\endminipage
\end{figure}
\clearpage
\bibliographystyle{ECA_jasa}
\section{Introduction}
Consider a set of agents, whose goal is to reach consensus by exchanging information locally with their neighbors through a directed graph.
There is a large body of work on consensus algorithms.
Ordinary consensus has been shown to converge asymptotically under various scenarios, such as growing intercommunication intervals \cite{lorenz2011convergence}, or the presence of delays and/or unbounded intercommunication intervals \cite{Blondel2005}.
Another problem of interest for which extensive research has been carried out is \textit{average} consensus. While most related works study asymptotic convergence, \cite{charalambous2015distributed} studies average consensus in a finite number of steps.
Push-sum, first proposed by \cite{kempe2003gossip}, is one of the many algorithms for average consensus. It has been widely used to develop protocols that reach average consensus under different assumptions and scenarios, such as the presence of bounded delays \cite{hadjicostis2014average}, time-varying graphs \cite{hadjicostis2012average,Rezaeinia2017}, or asynchronous communication \cite{benezit2010weighted}.
Since reliable communication is a very restrictive assumption in network applications, and expensive to enforce, recent work has considered algorithms that reach consensus in settings where communication between agents is unreliable.
While in this case, push-sum might not converge to average, exponential convergence still holds and the error between the final value and the true average can be characterized \cite{DBLP:journals/corr/GerencserH15}.
In \cite{Vaidya}, Vaidya et al. introduce the technique of running sums (counters) and modify push-sum to overcome possible packet drops and imprecise knowledge of the network in a synchronous communication setting. They prove \textit{almost sure} convergence of their algorithms using weak ergodicity.
Inspired by \cite{Vaidya}, \cite{Schenato} takes this further and develops an asynchronous algorithm for average consensus, which is robust to unreliable communication. This algorithm uses a \textit{broadcast asymmetric} communication protocol; that is, at each iteration only one node is allowed to wake up and transmit information to its neighbors. Exponential convergence of this algorithm is proved under bounded consecutive link failures and nodes' update delays.
Consensus and average consensus have many applications in other algorithms as well; they can be used as building blocks to develop distributed optimization algorithms \cite{tsianos2012push,varagnolo2016newton}. For example, in \cite{bof2017newton} the authors use a robust version of push-sum as a building block to develop an asynchronous Newton-based distributed optimization algorithm that is robust to packet losses.
Many works in the literature assume bounded intercommunication intervals, which motivated us to study sufficient connectivity conditions that allow intercommunication intervals to grow slowly and potentially be unbounded. We propose logarithmically growing upper bounds that guarantee convergence.
Distributed synchronous systems require coordination between the agents. Asynchronous systems, in contrast, do not depend on global clock signals. This can save power, as agents do not have to perform computation and communication at every iteration; however, it might require more iterations to converge.
While existing works on push-sum in the presence of link failures assume a synchronous \cite{Vaidya} or broadcast asymmetric \cite{Schenato} communication setting, our major contribution in this paper is to develop a \textit{fully asynchronous} robust push-sum algorithm that allows the number of successive link failures to grow to infinity.
The rest of the paper is organized as follows. In Section \ref{sec:notation formulation} we introduce our notation and define the problem. In Sections \ref{sec:Ordinary} and \ref{sec:push-sum} we study ordinary consensus and push-sum algorithms, respectively, and state our convergence results. In Section \ref{sec:ra_ac}, we propose an asynchronous push-sum algorithm which is robust to unreliable communication links, followed by concluding remarks in Section \ref{sec:conclusion}.
\section{Problem Formulation}\label{sec:notation formulation}
\subsection{Notations and Definitions}
Given a matrix $\textbf{A}$, we denote its $(i,j)$ entry by $A_{ij}$.
A matrix is called \textit{(row) stochastic} if it is non-negative and the elements of each row sum to one. Similarly, a matrix is \textit{column stochastic} if its transpose is stochastic. A matrix is called \textit{doubly stochastic} if it is both row and column stochastic.
To a non-negative matrix $\textbf{A} \in \mathbb{R}^{n\times n}$ we associate a directed graph $\mathcal{G}_\textbf{A}$ with vertex set $\mathcal{N}=\{1,2,\ldots,n\}$ and edge set $\mathcal{E}_\textbf{A}=\{(i,j)\vert A_{ji}>0 \}$. Note that the graph might contain self-loops.
By $[\textbf{A}]_\alpha$ we denote the \textit{thresholded} matrix obtained by setting every element of $\textbf{A}$ smaller than $\alpha$ to zero.
Given a sequence of matrices $\textbf{A}^0,\textbf{A}^1,\textbf{A}^2,\ldots$, we denote by $\textbf{A}^{k_2:k_1},k_2\geq k_1$, the product of elements $k_1$ to $k_2$ of the sequence, inclusive, in the following order:
\begin{displaymath}
\textbf{A}^{k_2:k_1}=\textbf{A}^{k_2}\textbf{A}^{k_2-1}\cdots \textbf{A}^{k_1}.
\end{displaymath}
Node $i$ is an \textit{in-neighbor} of node $j$ if there is a directed link from $i$ to $j$; then $j$ is an \textit{out-neighbor} of node $i$. We denote the sets of in-neighbors and out-neighbors of node $i$ at time $k$ by $N_i^{-,k}$ and $N_i^{+,k}$, respectively. Moreover, we denote the numbers of in-neighbors and out-neighbors of node $i$ at time $k$, its \textit{in-degree} and \textit{out-degree}, by $d_i^{-,k}$ and $d_i^{+,k}$, respectively. If the graph is fixed, we simply drop the index $k$ from these notations.
By $x_{\min}$ and $x_{\max}$ we denote $\min_i x_i$ and $\max_i x_i$, respectively, unless mentioned otherwise.
We also denote an $n \times 1$ column vector of all ones by $\textbf{1}_n$, or $\textbf{1}$ when its size is clear from the context.
We sometimes use the notion of \textit{mass} to denote the value an agent holds, sends or receives. With that in mind, we can think of a value being sent from one node, as a mass being transferred.
\subsection{Problem Formulation}
Consider a set of $n$ agents $\mathcal{N} = \{1,2,\ldots,n\}$, where each agent $i$ holds an initial scalar value $x_i^0$. These agents communicate with each other through a sequence of directed graphs. Our goal is to develop protocols through which these agents communicate and update their values so that they reach consensus. Throughout this paper we use the terms \textit{agents} and \textit{nodes} interchangeably.
Ordinary consensus and push-sum are two main algorithms proposed for this purpose.
In ordinary consensus, each node updates its value by forming a convex combination of the values of its in-neighbors.
In push-sum, average consensus is reached by running two parallel iterations in which, each node splits and sends its value to its out-neighbors and updates its own value by forming the sum of the messages that it has received.
\section{Ordinary Consensus}\label{sec:Ordinary}
Although the main target of this paper is push-sum, in this section we state and prove similar results for ordinary consensus. Comparable results can be found in \cite{lorenz2011convergence}; however, the proofs provided here are necessary for understanding the methods used in the following sections.
Linear consensus is defined as,
\begin{equation}\label{eq:1}
\textbf{x}^{k+1} = \textbf{A}^k \textbf{x}^k,\;\;k=0,1,\ldots,
\end{equation}
where the matrices $\textbf{A}^k$ are stochastic and $\textbf{x}^k$ is constructed by collecting all $x_i^k$ in a column vector.
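To make the iteration concrete, here is a minimal simulation sketch (not from the paper) with a hypothetical fixed row-stochastic matrix whose associated graph is strongly connected and has self-loops; the paper allows time-varying matrices $\textbf{A}^k$:

```python
# Sketch of x^{k+1} = A x^k with a fixed row-stochastic matrix
# (a hypothetical 3-node example; entries and size are illustrative).
def step(A, x):
    n = len(x)
    return [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

A = [[0.5, 0.25, 0.25],   # rows sum to 1; positive diagonal = self-loops
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]
x = [1.0, 0.0, 3.0]
for _ in range(100):
    x = step(A, x)
print(max(x) - min(x) < 1e-8)  # True: the spread contracts to zero
```

The spread $x_{\max}^k - x_{\min}^k$ shrinks geometrically, which is exactly the contraction argument used in the proofs below.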
Under the following conditions, the iteration \eqref{eq:1} results in consensus, meaning all the $x^k_{i}$ converge to the same value as $k\rightarrow\infty$.
The following assumption ensures sufficient connectivity of the graphs.
\begin{assumption}\label{assump:B_k-connectivity}
There exists a sequence $b_1,b_2,\ldots$ of positive integers such that, when we partition the sequence of graphs $\mathcal{G}^0,\mathcal{G}^1,\mathcal{G}^2,\ldots$ into consecutive blocks of lengths $b_k$, $k=1,2,\ldots$, the graph formed by the union of the edges in each block is strongly connected. In addition, each graph $\mathcal{G}^k$ has a self-loop at every node.
\end{assumption}
Let us define $\mu_0=\lambda_0=0$, and for $k\geq1$:
\begin{gather}
\mu_k=\sum_{j=1}^{k}b_j,\label{eq:mu}\\
\lambda_k=\sum_{j=(k-1)n+1}^{kn}b_j=\mu_{kn} - \mu_{(k-1)n}.\label{eq:lambda}
\end{gather}
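A small numeric illustration of \eqref{eq:mu} and \eqref{eq:lambda} (the block lengths $b_j$ and the value $n=2$ below are hypothetical):

```python
# mu_k are partial sums of the block lengths b_j, and lambda_k sums
# the b_j over the k-th group of n consecutive blocks.
n = 2                       # number of nodes (hypothetical)
b = [1, 2, 3, 4, 5, 6]      # block lengths b_1, ..., b_6 (hypothetical)
mu = [0]
for bj in b:
    mu.append(mu[-1] + bj)  # mu_k = b_1 + ... + b_k
lam = [mu[k * n] - mu[(k - 1) * n] for k in range(1, len(b) // n + 1)]
print(mu)    # [0, 1, 3, 6, 10, 15, 21]
print(lam)   # [3, 7, 11]
```

So $\lambda_k$ measures the total time spanned by the $k$-th group of $n$ blocks, which is the quantity the logarithmic growth conditions below constrain.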
The following proposition states sufficient conditions for the convergence of ordinary consensus with growing intercommunication intervals.
\begin{proposition}\label{pro:B_k-convergence}
Suppose there exists some $\alpha>0$ such that the sequence of graphs $\mathcal{G}_{[\textbf{A}^0]_\alpha},\mathcal{G}_{[\textbf{A}^1]_\alpha},\mathcal{G}_{[\textbf{A}^2]_\alpha},\ldots$ satisfies Assumption \ref{assump:B_k-connectivity}. If there exist some $K\geq1$ and $T\geq0$ such that $\lambda_k\leq-\frac{\ln(k+T)}{\ln(\alpha)}$ for all $k\geq K$, then $\textbf{x}^k$ converges to a limit in $\mathrm{span}\{\textbf{1}\}$.
\end{proposition}
Before proving the proposition, we need the following lemmas and definitions.
Given a sequence of graphs $\mathcal{G}^0,\mathcal{G}^1,\mathcal{G}^2,\ldots$, we say that node $b$ is reachable from node $a$ in the time period $k_1$ to $k_2$ ($k_1<k_2$) if there exists a sequence of directed edges $e^{k_1},e^{k_1+1},\ldots,e^{k_2}$ such that $e^k$ is in $\mathcal{G}^k$, the destination of $e^k$ is the origin of $e^{k+1}$ for $k_1\leq k<k_2$, the origin of $e^{k_1}$ is $a$, and the destination of $e^{k_2}$ is $b$.
\begin{lemma}\label{lem:strictly positive matrix}
Suppose there exists some $\alpha>0$ such that the sequence of graphs $\mathcal{G}_{[\textbf{A}^0]_\alpha},\mathcal{G}_{[\textbf{A}^1]_\alpha},\mathcal{G}_{[\textbf{A}^2]_\alpha},\ldots$ satisfies Assumption \ref{assump:B_k-connectivity}. Then for $l\geq 0$, $\textbf{A}^{\mu_{l+n}-1:\mu_l}$ is a strictly positive matrix, with its elements at least $\alpha^{\mu_{l+n}-\mu_l}$.
\end{lemma}
\begin{proof}
Consider the set of nodes reachable from node $i$ in the time period $k_1$ to $k_2$ in the graph sequence $\mathcal{G}_{[\textbf{A}^0]_\alpha},\mathcal{G}_{[\textbf{A}^1]_\alpha},\mathcal{G}_{[\textbf{A}^2]_\alpha},\ldots,$ and denote it by $N^{k_2:k_1}$. Since by Assumption \ref{assump:B_k-connectivity} each of these graphs has a self-loop at every node, the set of reachable nodes never shrinks. If $N^{\mu_{l+m}-1:\mu_l}\neq\{1,2,\ldots,n\}$, then $N^{\mu_{l+m+1}-1:\mu_l}$ is a strict superset of $N^{\mu_{l+m}-1:\mu_l}$, because in the period $\mu_{l+m}$ to $\mu_{l+m+1}-1$ there is an edge in some $\mathcal{G}_{[\textbf{A}^i]_\alpha}$ leading from the set of nodes reachable from $i$ to those not reachable from $i$; this holds because the union of the graphs in the block $\mu_{l+m}$ to $\mu_{l+m+1}-1$ is strongly connected. Hence we conclude that $N^{\mu_{l+n}-1:\mu_l}=\{1,2,\ldots,n\}$ and $\textbf{A}^{\mu_{l+n}-1:\mu_l}$ is strictly positive. Furthermore, since every positive element of $[\textbf{A}^k]_\alpha$ is at least $\alpha$ by construction, every element of $\textbf{A}^{\mu_{l+n}-1:\mu_l}$ is at least $\alpha^{\mu_{l+n}-\mu_l}$.
\end{proof}
\begin{lemma}\label{lem:max-min inequality}
Suppose $\textbf{A}$ is a stochastic matrix with entries at least $\beta>0$. If $\textbf{v}=\textbf{A}\textbf{u}$ then,
\begin{equation}
v_{\max}-v_{\min}\leq (1-n\beta)\left(u_{\max}-u_{\min}\right).
\end{equation}
\end{lemma}
This lemma is proved in \cite[Theorem 3.1 \& Exercise 3.8]{seneta2006non}.
\begin{lemma}\label{lem:stochastic matrices}
Suppose $\textbf{A}$ is a stochastic matrix and $\textbf{v}=\textbf{A}\textbf{u}$. Then for all $i$,
\begin{equation}
u_{\min} \leq v_i \leq u_{\max}.
\end{equation}
\end{lemma}
This lemma holds true because each $v_i$ is a convex combination of elements of $\textbf{u}$.
\begin{lemma}\label{lem:alpha}
Suppose $0<\alpha_k<1$ for all $k\geq1$. Then $\prod_{k=1}^{\infty}\left(1-\alpha_k\right)=0$ if and only if $\sum_{k=1}^{\infty}\alpha_k=\infty$.
\end{lemma}
This lemma is proved in \cite[Appendix: Theorem 1.9]{bremaud2013markov} and we will skip the proof here.
\begin{proof}[Proof of Proposition \ref{pro:B_k-convergence}]
By Lemma \ref{lem:strictly positive matrix}, we have for $k\geq 1$,
\begin{displaymath}
\left[\textbf{A}^{\mu_{kn}-1:\mu_{(k-1)n}}\right]_{ij}\geq\alpha^{\mu_{kn}-\mu_{(k-1)n}}=\alpha^{\lambda_{k}}.
\end{displaymath}
Applying Lemma \ref{lem:max-min inequality}, we get,
\begin{equation}\label{eq:max-min ordinary}
x_{\max}^{\mu_{kn}}-x_{\min}^{\mu_{kn}}\leq \left(1-n\alpha^{\lambda_{k}}\right)\left(x_{\max}^{\mu_{(k-1)n}}-x_{\min}^{\mu_{(k-1)n}}\right).
\end{equation}
Hence, using \eqref{eq:max-min ordinary} for $k=1,\ldots,l$ we obtain,
\begin{equation*}
x_{\max}^{\mu_{ln}}-x_{\min}^{\mu_{ln}}\leq \prod_{k=1}^l\left(1-n\alpha^{\lambda_{k}}\right)\left(x_{\max}^0-x_{\min}^0\right).
\end{equation*}
We have $0<\alpha<1$ and $\lambda_k\leq-\frac{\ln(k+T)}{\ln(\alpha)}$ for all $k\geq K$. It follows,
\begin{align*}
\sum_{k=1}^{\infty}n\alpha^{\lambda_k}\geq \sum_{k=K}^{\infty}n\alpha^{\lambda_k}\geq\sum_{k=K}^{\infty}n\alpha^{-\frac{\ln(k+T)}{\ln(\alpha)}}
=\sum_{k=K}^{\infty}n\left(\alpha^{\frac{1}{\ln(\alpha)}}\right)^{-\ln(k+T)}
=\sum_{k=K}^{\infty}\frac{n}{k+T}=\infty.
\end{align*}
Using Lemmas \ref{lem:stochastic matrices} and \ref{lem:alpha} and \eqref{eq:max-min ordinary} we conclude that Proposition \ref{pro:B_k-convergence} holds.
\end{proof}
Proposition \ref{pro:B_k-convergence} establishes the convergence of the $x_i^k$'s to a value that is not necessarily the total average and depends on the sequence of matrices. However, if the matrices $\textbf{A}^k$ are doubly stochastic, the sum of the values of all nodes (agents) is preserved, and therefore the algorithm converges to \textit{average} consensus.
Slight modifications to Example 1.2 in Chapter 7 of \cite{bertsekas1989parallel} show that if intercommunication intervals grow logarithmically in time, ordinary consensus can fail to reach consensus.
\section{Push-Sum}\label{sec:push-sum}
Push-sum is an algorithm that reaches average consensus without requiring \textit{doubly} stochastic matrices, in contrast to ordinary average consensus. Here, we assume that each node knows its out-degree at every iteration. Under this assumption, it turns out that average consensus is possible and may be accomplished using the following iteration,
\begin{align}\label{eq:push-sum}
x_{i}^{k+1}&=\sum_{j\in N_{i}^{-,k}}\frac{x_j^k}{d_j^{+,k}},\nonumber\\
y_{i}^{k+1}&=\sum_{j\in N_{i}^{-,k}}\frac{y_j^k}{d_j^{+,k}},\\
z_i^{k+1}&=\frac{x_i^{k+1}}{y_i^{k+1}},\nonumber
\end{align}
where the auxiliary variables $y_i$ are initialized as $y_i^0=1$ and are collected in a column vector $\textbf{y}$.
This iteration is implemented in a distributed way in two steps. First, each node $i$ broadcasts $x_i^k/d_i^{+,k}$ to its out-neighbors. Next, every node sets $x_i^{k+1}$ to the sum of the incoming messages. The variables $y_i^k$ follow the same evolution, and $z_i^k$ may be thought of as node $i$'s estimate of the average.
We define $\textbf{W}^k$ to be the matrix such that iteration (\ref{eq:push-sum}) may be written as,
\begin{align}
\textbf{x}^{k+1}&=\textbf{W}^k\textbf{x}^k,\nonumber\\
\textbf{y}^{k+1}&=\textbf{W}^k\textbf{y}^k.\nonumber
\end{align}
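As an illustrative sketch (not the paper's code), the push-sum iteration can be simulated directly on a hypothetical fixed 3-node digraph that is strongly connected and has a self-loop at every node; note that no doubly stochastic weights are needed:

```python
# Each node j splits x_j and y_j evenly among its out-neighbors
# (self-loop included); z_i = x_i / y_i estimates the average.
out = {0: [0, 1], 1: [1, 2], 2: [0, 1, 2]}   # out-neighbor lists
x = [4.0, 0.0, 2.0]                          # initial values; average is 2
y = [1.0, 1.0, 1.0]
for _ in range(200):
    new_x, new_y = [0.0] * 3, [0.0] * 3
    for j, nbrs in out.items():
        for i in nbrs:
            new_x[i] += x[j] / len(nbrs)
            new_y[i] += y[j] / len(nbrs)
    x, y = new_x, new_y
z = [x[i] / y[i] for i in range(3)]          # nodes' estimates of the average
print(all(abs(zi - 2.0) < 1e-6 for zi in z))  # True
```

The underlying weight matrix is column stochastic, so the sums of $\textbf{x}^k$ and $\textbf{y}^k$ are preserved while the ratios $z_i^k$ all converge to the average.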
Next, we state and prove a proposition giving sufficient conditions for the push-sum algorithm to converge.
\begin{proposition}\label{pro:push-sum}
Suppose the sequence of graphs $\mathcal{G}_{\textbf{W}^0},\mathcal{G}_{\textbf{W}^1},\mathcal{G}_{\textbf{W}^2},\ldots$ satisfies Assumption \ref{assump:B_k-connectivity}. If there exist some $K\geq1$ and $T\geq0$ such that $\lambda_k\leq\frac{\ln(k+T)}{2\ln(n)}$ for all $k\geq K$, then the push-sum algorithm (\ref{eq:push-sum}) yields
\begin{equation*}
\lim_{k\to\infty}z_i^k=\frac{\sum_{j=1}^{n}x_j^0}{n}.
\end{equation*}
\end{proposition}
Note that the positive elements of $\textbf{W}^k$ are at least $1/d_{\max}^{+,k}\geq 1/n$. Moreover, $\textbf{W}^k$ is column stochastic, i.e.,
\begin{displaymath}
\textbf{1}^T\textbf{W}^k=\textbf{1}^T.
\end{displaymath}
Consequently, the sums of $\textbf{x}^k$ and $\textbf{y}^k$ are preserved, i.e.,
\begin{align}
\sum_{i=1}^{n}x_i^k&=\sum_{i=1}^{n}x_i^0,\label{eq:sum x}\\
\sum_{i=1}^{n}y_i^k&=\sum_{i=1}^{n}y_i^0=n.\label{eq:sum y=n}
\end{align}
Before proving the proposition, we need the following lemma, which establishes bounds for $y_i^{\mu_{ln}}$.
\begin{lemma}\label{lemma:yk bound}
Suppose the assumptions of Proposition \ref{pro:push-sum} are satisfied. Then the following bounds on $y_i^{\mu_{ln}}$ hold for any $l\geq 1$:
\begin{equation}\label{eq:yk bound}
\left(\frac{1}{n}\right)^{\lambda_l-1}\leq y_i^{\mu_{ln}}\leq n.
\end{equation}
\end{lemma}
\begin{proof}
We observe that for $l\geq 1$,
\begin{equation}\label{eq:4}
\textbf{y}^{\mu_{ln}}=\textbf{W}^{\mu_{ln}-1:0}\textbf{1}.
\end{equation}
By Lemma \ref{lem:strictly positive matrix}, the matrix $\textbf{W}^{\mu_{ln}-1:\mu_{(l-1)n}}$ is strictly positive with its elements at least $(1/n)^{\lambda_l}$. Hence $\textbf{W}^{\mu_{ln}-1:0}$ is the product of a strictly positive column stochastic matrix with other column stochastic matrices; consequently, each of its entries is at least $(1/n)^{\lambda_l}$. Using \eqref{eq:4}, we obtain the left inequality in \eqref{eq:yk bound}.
Since $y_j^k>0$ for all $j$ and $k$, the right inequality in \eqref{eq:yk bound} follows from \eqref{eq:sum y=n}.
\end{proof}
Now we can proceed with the proof of Proposition \ref{pro:push-sum}.
\begin{proof}[Proof of Proposition \ref{pro:push-sum}]
We start by rewriting the evolution of $\textbf{z}^k$ in a matrix form. The method to accomplish this is based on an observation from \cite{seneta2006non}. Using \eqref{eq:push-sum}, we have $x_i^k=z_i^ky_i^k$ and therefore,
\begin{equation}
z_i^{k+1}y_i^{k+1}=\sum_{j=1}^{n} W^k_{ij}z_j^ky_j^k,\nonumber
\end{equation}
or
\begin{equation}\label{eq:z=ywy}
z_i^{k+1}=\sum_{j=1}^n\left(y_i^{k+1}\right)^{-1}W^k_{ij}z_j^ky_j^k,
\end{equation}
where in the last step we used the fact that $y_i^k\neq0$, which is true by Lemma \ref{lemma:yk bound}. Define,
\begin{equation}\label{eq:Pk definition}
\textbf{P}^k=\left(\textbf{Y}^{k+1}\right)^{-1}\textbf{W}^k\textbf{Y}^k,
\end{equation}
where $\textbf{Y}^k={\rm diag}\left(\textbf{y}^k\right)$.
Using \eqref{eq:z=ywy} we have,
\begin{equation*}
\textbf{z}^{k+1}=\textbf{P}^k\textbf{z}^k.
\end{equation*}
Moreover, $\textbf{P}^k$ is stochastic:
\begin{align*}
\textbf{\textbf{P}}^k\textbf{1}&=\left(\textbf{Y}^{k+1}\right)^{-1}\textbf{W}^k\textbf{Y}^k\textbf{1}=\left(\textbf{Y}^{k+1}\right)^{-1}\textbf{W}^k\textbf{y}^k \\
&=\left(\textbf{Y}^{k+1}\right)^{-1}\textbf{y}^{k+1}{}={}\textbf{1}.
\end{align*}
Using \eqref{eq:Pk definition}, we obtain
\begin{equation}\label{eq:P=YWY}
\textbf{P}^{\mu_{kn}-1:\mu_{(k-1)n}}=\left(\textbf{Y}^{\mu_{kn}}\right)^{-1}\textbf{W}^{\mu_{kn}-1:\mu_{(k-1)n}}\textbf{Y}^{\mu_{(k-1)n}}.
\end{equation}
By Lemma \ref{lem:strictly positive matrix}, the matrix $\textbf{W}^{\mu_{kn}-1:\mu_{(k-1)n}}$ is strictly positive; therefore, using \eqref{eq:yk bound} and \eqref{eq:P=YWY}, $\textbf{P}^{\mu_{kn}-1:\mu_{(k-1)n}}$ is a strictly positive matrix with its elements at least
\begin{equation*}
\alpha_k=\frac{1}{n}\left(\frac{1}{n}\right)^{\lambda_k}\left(\frac{1}{n}\right)^{\lambda_{k-1}-1}=\left(\frac{1}{n}\right)^{\lambda_{k}+\lambda_{k-1}}.
\end{equation*}
Using Lemma \ref{lem:max-min inequality} we obtain,
\begin{equation}
z_{\max}^{\mu_{kn}}-z_{\min}^{\mu_{kn}}\leq (1-n\alpha _k)\left(z_{\max}^{\mu_{(k-1)n}}-z_{\min}^{\mu_{(k-1)n}}\right),\nonumber
\end{equation}
and consequently,
\begin{equation}\label{eq:max-min push-sum}
z_{\max}^{\mu_{ln}}-z_{\min}^{\mu_{ln}}\leq \prod_{k=1}^l(1-n\alpha _k)\left(z_{\max}^{0}-z_{\min}^{0}\right).
\end{equation}
Moreover,
\begin{align*}
\sum_{k=1}^{\infty}n\alpha_k \geq \sum_{k=K}^{\infty}n\alpha_k &= \sum_{k=K}^{\infty}n\left(\frac{1}{n}\right)^{\lambda_{k}+\lambda_{k-1}}\\
&\geq \sum_{k=K}^{\infty}n\left(\frac{1}{n}\right)^{\frac{\ln(k+T)}{2\ln(n)}+\frac{\ln(k-1+T)}{2\ln(n)}}\\
&\geq \sum_{k=K}^{\infty}n\left(\frac{1}{n}\right)^{\frac{\ln(k+T)}{\ln(n)}}\\
&= \sum_{k=K}^{\infty}\frac{n}{k+T}=\infty.
\end{align*}
Hence using Lemma \ref{lem:alpha} and \eqref{eq:max-min push-sum}, $z_{\max}^{\mu_{ln}}-z_{\min}^{\mu_{ln}}$ converges to zero as $l\to \infty$. By Lemma \ref{lem:stochastic matrices} we conclude that $\lim_{k\to \infty}z_i^k$ exists and we denote it by $z_{\infty}$.
We have,
\begin{align*}
z_\infty &= z_\infty \lim_{k\to \infty} \left(\frac{\sum_{i=1}^n y_i^k}{\sum_{i=1}^n y_i^k}\right)\\
&= \lim_{k\to \infty}\left(\frac{\sum_{i=1}^n z_i^ky_i^k}{n}+\frac{\sum_{i=1}^n (z_\infty -z_i^k)y_i^k}{n}\right)\nonumber\\
&= \frac{\sum_{i=1}^n x_i^k }{n}+\lim_{k\to \infty}\left(\frac{\sum_{i=1}^n (z_\infty -z_i^k)y_i^k}{n}\right)\\
&= \frac{\sum_{i=1}^n x_i^0}{n},
\end{align*}
where the last equality holds due to the sum preservation property, \eqref{eq:sum x}.
\end{proof}
\section{Robust Asynchronous Push-Sum}\label{sec:ra_ac}
Here we describe and study another algorithm for average consensus, in which the communication system is asynchronous and unreliable. In an unreliable setting, communication links might fail to transmit data packets and information might get lost.
This algorithm is originally inspired by the algorithm proposed in \cite{Vaidya}, but operates under asynchronous communication. Like the algorithm in \cite{Vaidya}, it is based on push-sum consensus. \cite{Schenato} proved exponential convergence of this algorithm for the case in which at each iteration only one node wakes up and transmits. Here, we modify the algorithm of \cite{Schenato} and show that average consensus still holds while allowing any subset of nodes to perform updates at each iteration.
In this algorithm, as opposed to the previous ones, we assume nodes do not have self-loops.
\begin{algorithm}
\caption{Robust Asynchronous Push-Sum}
\begin{algorithmic}[1]
\STATE Initialize the algorithm with $\textbf{y}^0=\textbf{1}$, $\sigma_i^{x,0}=\sigma_i^{y,0}=0$ $\forall i\in\{1,\ldots,n\}$ and $\rho_{ji}^{x,0}=\rho_{ji}^{y,0}=0$, $\forall (i,j)\in \mathcal{E}$.
\STATE At every iteration $k$, for every node $i$:
\IF{node $i$ wakes up}
\STATE $\sigma_i^{x,k+1}=\sigma_i^{x,k}+\frac{x_i^k}{d_i^{+}+1}$;
\STATE $\sigma_i^{y,k+1}=\sigma_i^{y,k}+\frac{y_i^k}{d_i^{+}+1}$;
\STATE $x_i^{k+1}=\frac{x_i^k}{d_i^{+}+1}$;
\STATE $y_i^{k+1}=\frac{y_i^k}{d_i^{+}+1}$;
\STATE Node $i$ broadcasts $\sigma_i^{x,k+1}$ and $\sigma_i^{y,k+1}$ to its out-neighbors: $N_i^+$
\ENDIF
\IF{node $i$ receives $\sigma_j^{x,k+1}$ and $\sigma_j^{y,k+1}$ from $j\in N_i^-$}
\STATE $\rho_{ij}^{x,k+1}=\sigma_j^{x,k+1}$;
\STATE $\rho_{ij}^{y,k+1}=\sigma_j^{y,k+1}$;
\STATE $x_i^{k+1}=x_i^{k+1}+\rho_{ij}^{x,k+1}-\rho_{ij}^{x,k}$;
\STATE $y_i^{k+1}=y_i^{k+1}+\rho_{ij}^{y,k+1}-\rho_{ij}^{y,k}$;
\ENDIF
\STATE Other variables remain unchanged.
\end{algorithmic}
\end{algorithm}
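A minimal simulation sketch of the algorithm above (a hypothetical 3-node graph; the wake-up and link-failure probabilities are illustrative assumptions, not part of the algorithm). It checks the mass-conservation property underlying the convergence analysis: the mass held at the nodes plus the mass still "in flight" on the links, $u_{ij}^k=\sigma_i^{x,k}-\rho_{ji}^{x,k}$, always equals the initial total:

```python
import random

# sig_* are running sums of mass sent; rho_* record mass received, so a
# lost broadcast is recovered the next time the link succeeds.
random.seed(1)
out = {0: [1], 1: [2], 2: [0, 1]}            # strongly connected, no self-loops
x = [4.0, 0.0, 2.0]                          # initial values; average is 2
y = [1.0, 1.0, 1.0]
sig_x, sig_y = [0.0] * 3, [0.0] * 3
rho_x = {(j, i): 0.0 for i in out for j in out[i]}   # keyed (receiver, sender)
rho_y = dict(rho_x)
for _ in range(5000):
    delivered = []
    for i in range(3):
        if random.random() < 0.5:            # node i wakes up
            share = 1.0 / (len(out[i]) + 1)
            sig_x[i] += x[i] * share
            sig_y[i] += y[i] * share
            x[i] *= share
            y[i] *= share
            for j in out[i]:
                if random.random() < 0.8:    # link (i, j) succeeds
                    delivered.append((j, i))
    for j, i in delivered:                   # receiver j absorbs the new mass
        x[j] += sig_x[i] - rho_x[(j, i)]
        rho_x[(j, i)] = sig_x[i]
        y[j] += sig_y[i] - rho_y[(j, i)]
        rho_y[(j, i)] = sig_y[i]
# node mass plus mass in flight equals the initial total (here, 6)
in_flight = sum(sig_x[i] - rho_x[(j, i)] for (j, i) in rho_x)
print(abs(sum(x) + in_flight - 6.0) < 1e-6)  # True
# under the theorem's connectivity conditions, the estimates
# z_i = x_i / y_i converge to the average (here, 2)
z = [x[i] / y[i] for i in range(3)]
```

Because each dropped packet leaves its mass recorded in the sender's counter $\sigma$, no mass is ever lost, which is exactly what the invariant above verifies.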
The key idea proposed by \cite{Vaidya} that allows us to overcome the issue of unreliable links is the introduction of counters: each node $i$ has a counter $\sigma_i^{x,k}$ ($\sigma_i^{y,k}$, respectively) to keep track of the total $x$-mass ($y$-mass) sent to its neighbors from time 0 to time $k$, and counters $\rho_{ij}^{x,k}$ ($\rho_{ij}^{y,k}$, respectively), $\forall j\in N_i^-$, to account for the total $x$-mass ($y$-mass) received from its neighbor $j$ from time 0 to time $k$.
In reality, nodes perform computations only when they wake up; to simplify the analysis, however, we assume that nodes also perform computations (but no transmissions) when they are not awake.
Next, we state and prove the main theorem of this paper, which shows that the algorithm above reaches average consensus under sufficient connectivity assumptions.
\begin{theorem}\label{theorem:robust push-sum}
Suppose we apply the Robust Asynchronous Push-Sum algorithm to a set of agents communicating with each other through a strongly connected graph $\mathcal{G=(N,E)}$, where $\mathcal{E}$ contains no self-loops. Let $\mathcal{G}^0,\mathcal{G}^1,\ldots$ be the sequence of graphs $\mathcal{G}^i=(\mathcal{N},\mathcal{E}^i)$, $\mathcal{E}^i\subset\mathcal{E}$, containing only the links that transmit successfully at iteration $i$. Also, suppose there is another sequence $b_1,b_2,\ldots$ of positive integers such that, if we split the sequence $\mathcal{G}^0,\mathcal{G}^1,\ldots$ into consecutive blocks of lengths $b_i$, the union of the graphs in each block equals $\mathcal{G}$; i.e., $\cup_{i=\mu_k}^{\mu_{k+1}-1}\mathcal{E}^i=\mathcal{E}, \forall k\geq 0$,
where $\mu_k$ and $\lambda_k$ are defined in (\ref{eq:mu}) and (\ref{eq:lambda}). Suppose further that there exist some $K\geq1$ and $T\geq0$ such that $\lambda_k\leq \frac{\ln(k+T)}{6\ln(n)}$ for all $k\geq K$.
Then, $z_i^k=x_i^k/y_i^k$ converges to the average of $\textbf{x}^0$, i.e.,
\begin{equation}
\lim_{k\to\infty}z_i^k=\frac{\sum_{j=1}^{n}x_j^0}{n}.\nonumber
\end{equation}
\end{theorem}
\begin{proof}
Similar to the proofs of the previous propositions, here we first rewrite the evolution of $\textbf{x}^k$ and $\textbf{y}^k$ in a matrix form. We show these matrices are column stochastic. Then we write the evolution of the agents' estimate of the average, $\textbf{z}^k$, in matrix form. Finally, we exploit the properties of these matrices to show the convergence of $z_i^k$ to one limit which turns out to be the average.
Before we rewrite the iteration in a matrix form, we introduce the indicator variables $\tau_i^k$, for $i=1,2,\ldots,n$, and $\tau_{ij}^k$, for $(i,j)\in \mathcal{E}$. $\tau_i^k$ is equal to 1 if node $i$ wakes up at time $k$, and is 0 otherwise. Likewise $\tau_{ij}^k$ is 1 whenever node $i$ wakes up at time $k$, $j\in N_i^{+}$ and the edge $(i,j)$ is reliable, while it is 0 otherwise.
Let us introduce the following variables:
\begin{align*}
u_{ij}^k&=\sigma_i^{x,k}-\rho_{ji}^{x,k},\qquad \forall(i,j)\in \mathcal{E}, \\
v_{ij}^k&=\sigma_i^{y,k}-\rho_{ji}^{y,k},\qquad \forall(i,j)\in \mathcal{E},
\end{align*}
which are, intuitively, the total $x$-mass and $y$-mass, respectively, that has been sent by node $i$ but due to link failures has not been delivered to node $j$ yet.
The evolution of the $y$-mass is exactly the same as that of the $x$-mass; hence, to avoid repetition, we analyze only the evolution of $\textbf{x}^k$ and $u_{ij}^k$.
We can write the update equations:
\begin{gather}
u_{ij}^{k+1}=\left(1-\tau_i^k\tau_{ij}^k\right)\left(u_{ij}^k+\tau_i^k\frac{x_i^k}{d_i^{+}+1}\right),\label{eq:update1}\\
x_i^{k+1} = \sum_{j \in N_i^-}\left(\frac{x_j^k}{d_j^{+}+1}+u_{ji}^k\right)\tau_j^k\tau_{ji}^k
+ x_i^k\left(1-\tau_i^k+\frac{\tau_i^k}{d_i^{+}+1}\right).\label{eq:update2}
\end{gather}
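To make the update equations \eqref{eq:update1} and \eqref{eq:update2} concrete, here is a minimal Python sketch of one iteration (the function name and data layout are illustrative assumptions, not part of the paper): \texttt{awake[i]} plays the role of $\tau_i^k$ and \texttt{reliable[(i,j)]} that of $\tau_{ij}^k$, and the $y$-updates mirror the $x$-updates exactly.

```python
def robust_push_sum_step(x, u, y, v, out_neighbors, awake, reliable):
    """One iteration of the robust push-sum updates (illustrative sketch).

    x, y : per-node masses; u, v : per-edge buffered (undelivered) masses,
    keyed by edge (i, j).  awake[i] ~ tau_i^k, reliable[(i, j)] ~ tau_ij^k.
    """
    n = len(x)
    new_x, new_y = [0.0] * n, [0.0] * n
    new_u, new_v = dict(u), dict(v)
    for i in range(n):
        if not awake[i]:
            new_x[i] += x[i]            # a sleeping node keeps all its mass
            new_y[i] += y[i]
            continue
        d = len(out_neighbors[i])
        share_x, share_y = x[i] / (d + 1), y[i] / (d + 1)
        new_x[i] += share_x             # an awake node keeps one of d+1 shares
        new_y[i] += share_y
        for j in out_neighbors[i]:
            if reliable[(i, j)]:
                # deliver the fresh share plus any previously buffered mass
                new_x[j] += share_x + u[(i, j)]
                new_y[j] += share_y + v[(i, j)]
                new_u[(i, j)], new_v[(i, j)] = 0.0, 0.0
            else:
                # link failed: the mass accumulates in the buffer for (i, j)
                new_u[(i, j)] = u[(i, j)] + share_x
                new_v[(i, j)] = v[(i, j)] + share_y
    return new_x, new_u, new_y, new_v
```

The column stochasticity of $\textbf{M}^k$ established below corresponds to the fact that this step conserves the totals $\sum_i x_i + \sum_{(i,j)} u_{ij}$ and $\sum_i y_i + \sum_{(i,j)} v_{ij}$.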
Let us introduce the column vectors $\textbf{u}^k$ and $\textbf{v}^k$ which collect all different $u_{ij}^k$ and $v_{ij}^k$, respectively. Moreover, let us introduce the column vectors $\bm{\phi}^{(x)}(k)=\left[ (\textbf{x}^k)^T,(\textbf{u}^k)^T\right]^T$, $\bm{\phi}^{(y)}(k)=\left[ (\textbf{y}^k)^T,(\textbf{v}^k)^T\right]^T\in \mathbb{R}^{n+m}$, where $m=\vert \mathcal{E}\vert$. Using \eqref{eq:update1} and \eqref{eq:update2} we can rewrite the algorithm in the following matrix form:
\begin{align}
\bm{\phi}^{(x)}(k+1)=\textbf{M}^k\bm{\phi}^{(x)}(k),\\
\bm{\phi}^{(y)}(k+1)=\textbf{M}^k\bm{\phi}^{(y)}(k).
\end{align}
\begin{lemma}\label{lem: M col stoc}
$\textbf{M}^k$ is column stochastic and each of its positive elements is at least $1/(\max_i \{d_i^+\}+1)$. Also, for $1\leq i \leq n$ we have:
\begin{equation}
M_{ii}^k=\begin{cases}
1,&\text{if }\tau_i^k=0,\\
\frac{1}{d_i^++1},&\text{if }\tau_i^k=1.
\end{cases}
\end{equation}
\end{lemma}
\begin{proof}
Let us first consider the $i^{th}$ column of $\textbf{M}^k$, with $1\leq i \leq n$. The element $M_{ii}^k$ indicates how $x_i^k$ influences $x_i^{k+1}$. Using \eqref{eq:update2}, it follows:
\begin{equation}\label{eq:M_ii}
M_{ii}^k=1-\tau_i^k+\frac{\tau_i^k}{d_i^{+}+1}=\begin{cases}
1,&\text{if }\tau_i^k=0,\\
\frac{1}{d_i^++1},&\text{if }\tau_i^k=1.
\end{cases}
\end{equation}
The element $M_{ji}^k$, $j \in \{1,\ldots,n\}\setminus\{i\}$ indicates how $x_i^k$ influences $x_j^{k+1}$. It holds,
\begin{equation}\label{eq:M_ji}
M_{ji}^k=\begin{cases}
\frac{\tau_i^k\tau_{ij}^k}{d_i^{+}+1},&\text{if }j \in N_i^+,\\
0,&\text{otherwise.}
\end{cases}
\end{equation}
Finally, if $h\in \{n+1,\ldots,n+m\}$ is such that $\bm{\phi}^{(x)}_h(k)=u_{rj}^k$, the element $M_{hi}^k$ indicates how $x_i^k$ influences $u_{rj}^{k+1}$; we have
\begin{equation}\label{eq:M_li}
M_{hi}^k=\begin{cases}
\frac{(1-\tau_{ij}^k)\tau_i^k}{d_i^{+}+1},&\text{if }r=i,\\
0,&\text{otherwise.}
\end{cases}
\end{equation}
Using \eqref{eq:M_ii}-\eqref{eq:M_li}, the entries of the $i^{th}$ column of $\textbf{M}^k$ sum to 1.
Now we consider the $h^{th}$ column of $\textbf{M}^k$, $h\in \{n+1,\ldots,n+m\}$. Suppose $\bm{\phi}^{(x)}_h(k)=u_{ij}^k$; then we have
\begin{align}
M_{jh}^k&=\tau_i^k\tau_{ij}^k,\label{eq:M_jh}\\
M_{hh}^k&=1-\tau_i^k\tau_{ij}^k,\label{eq:M_hh}
\end{align}
and all the other elements of $h^{th}$ column are zero. Using \eqref{eq:M_jh} and \eqref{eq:M_hh}, the entries of the $h^{th}$ column sum to 1 and hence the matrix $\textbf{M}^k$ is column stochastic.
\end{proof}
Let us augment the graph $\mathcal{G}^k$ to $\mathcal{H}^k=\mathcal{G}_{\textbf{M}^k}$ by adding auxiliary nodes $b_{ij}$, $\forall(i,j)\in\mathcal{E}$. Note that by Lemma \ref{lem: M col stoc}, node $i\in\{1,\ldots,n\}$ has a self-loop at every iteration, and node $b_{ij}$ has a self-loop unless the link $(i,j)$ transmits reliably. Let us call the nodes $b_{ij}$ \textit{buffers} and assign the values $u_{ij}^k$ and $v_{ij}^k$ to them.
The algorithm is equivalent to the following process: Suppose node $i$ wakes up. If the link $(i,j)$ works properly, node $i$ sends some mass ($x_i^k/(d_i^++1)$ and $y_i^k/(d_i^++1)$) to node $j$ and also node $b_{ij}$ sends all of its mass ($u_{ij}^k$ and $v_{ij}^k$) to node $j$ and becomes zero. Otherwise, the mass is sent from node $i$ to node $b_{ij}$ instead of $j$. Then all the mass gets accumulated at node $b_{ij}$ because of its self-loop, until the link $(i,j)$ transmits reliably.
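The buffer mechanism described above can be exercised end-to-end in a small, self-contained simulation (the 3-node ring, the 0.7 link reliability, and the iteration count below are illustrative assumptions, not taken from the paper). In the synchronous special case where every node wakes at every step, the ratios $z_i = x_i/y_i$ approach the average of the initial values.

```python
import random

random.seed(1)
out_nb = {0: [1], 1: [2], 2: [0]}           # directed ring: strongly connected
n = len(out_nb)
x = [3.0, 9.0, 0.0]                          # initial values; average = 4.0
y = [1.0] * n
u = {(i, j): 0.0 for i in out_nb for j in out_nb[i]}
v = {(i, j): 0.0 for i in out_nb for j in out_nb[i]}

for _ in range(5000):
    nx, ny = list(x), list(y)
    for i in range(n):                       # synchronous case: all nodes awake
        d = len(out_nb[i])
        sx, sy = x[i] / (d + 1), y[i] / (d + 1)
        nx[i] -= d * sx                      # node i keeps one of d+1 shares
        ny[i] -= d * sy
        for j in out_nb[i]:
            if random.random() < 0.7:        # link works: deliver share + buffer
                nx[j] += sx + u[(i, j)]
                ny[j] += sy + v[(i, j)]
                u[(i, j)] = v[(i, j)] = 0.0
            else:                            # link fails: mass goes to the buffer
                u[(i, j)] += sx
                v[(i, j)] += sy
    x, y = nx, ny

z = [x[i] / y[i] for i in range(n)]          # each z_i should be close to 4.0
```

Mass conservation (column stochasticity of $\textbf{M}^k$) holds exactly at every step, while the convergence of the $z_i$ relies on each link transmitting successfully often enough, as quantified by the bound on $\lambda_k$ in the theorem.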
\begin{lemma}\label{lemma:positive n-rows}
The first $n$ rows of $\textbf{M}^{\mu_{l+n}-1:\mu_l}$ are strictly positive for $l\geq 0$, and the positive elements of this matrix are at least $\left(1/n\right)^{\mu_{l+n}-\mu_l}$.
\end{lemma}
\begin{proof}
Observing $\mathcal{H}^k$, every node $j\in \{1,\ldots,n\}$ has a self-loop in every iteration, and buffer $b_{ij}$ has a self-loop unless link $(i,j)$ transmits successfully. We also know that during the period $\mu_k$ to $\mu_{k+1}-1$, $k=0,1,\ldots$, each edge $(i,j)\in \mathcal{E}$ transmits successfully at least once. Moreover, $\mathcal{G}$ is strongly connected; hence, at the end of the period $\mu_l$ to $\mu_{l+n}-1$, every node $j\in \{1,\ldots,n\}$ is reachable from all the nodes in the graph $\mathcal{H}$. Also, since each positive element of $\textbf{M}^k$ is at least $1/n$, each positive element of $\textbf{M}^{\mu_{l+n}-1:\mu_l}$ is at least $(1/n)^{\mu_{l+n}-\mu_l}$.
\end{proof}
Define $\textbf{W}^k=\textbf{M}^{\mu_{(k+1)n}-1:\mu_{kn}}, k\geq 0$, which has positive elements of at least $\alpha^{\lambda_{k+1}}$ where $\alpha=1/n$. Then we have:
\begin{gather}
\begin{bmatrix}
\textbf{x}^{\mu_{(k+1)n}}\\ \textbf{u}^{\mu_{(k+1)n}}
\end{bmatrix}=\textbf{W}^k
\begin{bmatrix}
\textbf{x}^{\mu_{kn}}\\ \textbf{u}^{\mu_{kn}}
\end{bmatrix},\label{eq:xu=wxu} \\
\begin{bmatrix}
\textbf{y}^{\mu_{(k+1)n}}\\ \textbf{v}^{\mu_{(k+1)n}}
\end{bmatrix}=\textbf{W}^k
\begin{bmatrix}
\textbf{y}^{\mu_{kn}}\\ \textbf{v}^{\mu_{kn}}
\end{bmatrix}.\label{eq:yv=wyv}
\end{gather}
Let us split the matrix $\textbf{W}^k$ into four sub-matrices as follows:
\begin{equation}\label{eq:W=ABCD}
\textbf{W}^k=\begin{bmatrix}
\textbf{A}^k&\textbf{B}^k\\\textbf{C}^k&\textbf{D}^k
\end{bmatrix},
\end{equation}
where $\textbf{A}^k\in\mathbb{R}^{n\times n}$, $\textbf{B}^k\in\mathbb{R}^{n\times m}$, $\textbf{C}^k\in\mathbb{R}^{m \times n}$ and $\textbf{D}^k\in\mathbb{R}^{m \times m}$. By Lemma \ref{lemma:positive n-rows} we know that matrices $\textbf{A}^k$ and $\textbf{B}^k$ are strictly positive.
For $h=1,\ldots,m$ define $r_h^k$ as follows:
\begin{equation*}
r_h^k=\begin{cases}
\frac{u_h^k}{v_h^k},&\text{if }v_h^k\neq0,\\
0,&\text{if }v_h^k=0.
\end{cases}
\end{equation*}
\begin{lemma}
$u_{ij}^k=0$ whenever $v_{ij}^k=0$.
\end{lemma}
\begin{proof}
Since $\textbf{v}^0=\textbf{0}_m$, $\textbf{y}^0=\textbf{1}_n$, and node $i$ has a self-loop in the graph $\mathcal{H}^k$ for all $k\geq 0$, $y_i^k$ is always positive. If $v_{ij}^k=0$, then either the link $(i,j)$ transmitted successfully the last time node $i$ woke up, or node $i$ has not woken up yet. In either case, node $b_{ij}$ has no remaining ($x$ and $y$) mass and $u_{ij}^k=0$ holds.
\end{proof}
Therefore, the following always holds for $h=1,\ldots,m$:
\begin{equation}\label{eq:u=rv}
u_h^k=r_h^kv_h^k.
\end{equation}
Define $\bar{\textbf{x}}^k=\textbf{x}^{\mu_{kn}}$, $\bar{\textbf{y}}^k=\textbf{y}^{\mu_{kn}}$, $\bar{\textbf{u}}^k=\textbf{u}^{\mu_{kn}}$, $\bar{\textbf{v}}^k=\textbf{v}^{\mu_{kn}}$, $\bar{\textbf{z}}^k=\textbf{z}^{\mu_{kn}}$ and $\bar{\textbf{r}}^k=\textbf{r}^{\mu_{kn}}$. Using (\ref{eq:xu=wxu}) and (\ref{eq:W=ABCD}) we obtain:
\begin{align*}
\bar{z}_i^{k+1}\bar{y}_i^{k+1}=\bar{x}_i^{k+1} &=\sum_{j=1}^n A_{ij}^k\bar{x}_j^k + \sum_{j=1}^{m}B_{ij}^k\bar{u}_j^k\\
&=\sum_{j=1}^n A_{ij}^k\bar{z}_j^k\bar{y}_j^k + \sum_{j=1}^{m} B_{ij}^k\bar{r}_j^k\bar{v}_j^k.\nonumber
\end{align*}
Hence,
\begin{align*}
\bar{z}_i^{k+1}&= \left(\bar{y}_i^{k+1}\right)^{-1}\sum_{j=1}^n A_{ij}^k\bar{z}_j^k\bar{y}_j^k + \left(\bar{y}_i^{k+1}\right)^{-1}\sum_{j=1}^{m} B_{ij}^k\bar{r}_j^k\bar{v}_j^k,\\
\bar{\textbf{z}}^{k+1}&=\left(\textbf{Y}^{k+1}\right)^{-1}\textbf{A}^k\textbf{Y}^k\bar{\textbf{z}}^k+
\left(\textbf{Y}^{k+1}\right)^{-1}\textbf{B}^k\textbf{V}^k\bar{\textbf{r}}^k,
\end{align*}
where $\textbf{Y}^k={\rm diag}\left(\bar{\textbf{y}}^{k}\right)$ and $\textbf{V}^k={\rm diag}\left(\bar{\textbf{v}}^{k}\right)$. Note that $\bar{\textbf{y}}^k$ is strictly positive. Similarly, using \eqref{eq:yv=wyv}-\eqref{eq:u=rv} we have,
\begin{align*}
\bar{r}_i^{k+1}\bar{v}_i^{k+1}=\bar{u}_i^{k+1}&=\sum_{j=1}^n C_{ij}^k\bar{x}_j^k + \sum_{j=1}^{m} D_{ij}^k\bar{u}_j^k\nonumber\\
&=\sum_{j=1}^n C_{ij}^k\bar{z}_j^k\bar{y}_j^k + \sum_{j=1}^{m} D_{ij}^k\bar{r}_j^k\bar{v}_j^k.
\end{align*}
Here $\bar{\textbf{v}}^k$, as opposed to $\bar{\textbf{y}}^k$, is not necessarily strictly positive. Therefore instead of $\left(\textbf{V}^k\right)^{-1}$, we define the following:
\begin{equation*}
\tilde{v}_i^k=\begin{cases}
\frac{1}{\bar{v}_i^k},&\text{if }\bar{v}_i^k\neq0,\\
0,&\text{if }\bar{v}_i^k=0.
\end{cases}
\end{equation*}
It follows:
\begin{align*}
\bar{r}_i^{k+1} &= \tilde{v}_i^{k+1}\sum_{j=1}^n C_{ij}^k\bar{z}_j^k\bar{y}_j^k + \tilde{v}_i^{k+1}\sum_{j=1}^{m} D_{ij}^k\bar{r}_j^k\bar{v}_j^k,\\
\bar{\textbf{r}}^{k+1}&=\tilde{\textbf{V}}^{k+1}\textbf{C}^k\textbf{Y}^k\bar{\textbf{z}}^k+
\tilde{\textbf{V}}^{k+1}\textbf{D}^k\textbf{V}^k\bar{\textbf{r}}^k,
\end{align*}
where $\tilde{\textbf{V}}^k={\rm diag}(\tilde{\textbf{v}}^k)$. Thus,
\begin{equation}\label{eq:zr=pzr}
\begin{bmatrix}
\bar{\textbf{z}}^{k+1}\\\bar{\textbf{r}}^{k+1}
\end{bmatrix}
=\textbf{P}^k
\begin{bmatrix}
\bar{\textbf{z}}^{k}\\\bar{\textbf{r}}^{k}
\end{bmatrix},
\end{equation}
where,
\begin{equation}\label{eq:P=YAY YBV}
\textbf{P}^k=\begin{bmatrix}
\left(\textbf{Y}^{k+1}\right)^{-1}\textbf{A}^k\textbf{Y}^k & \left(\textbf{Y}^{k+1}\right)^{-1}\textbf{B}^k\textbf{V}^k\\
\tilde{\textbf{V}}^{k+1}\textbf{C}^k\textbf{Y}^k&\tilde{\textbf{V}}^{k+1}\textbf{D}^k\textbf{V}^k
\end{bmatrix}.
\end{equation}
Now we show that each of the first $n$ rows of $\textbf{P}^k$ sums to 1, while each of the remaining rows either sums to 1 or is identically zero.
\begin{align*}
\textbf{P}^k\begin{bmatrix}
\mathbf{1}_n\\
\bold{1}_m
\end{bmatrix}=\begin{bmatrix}
\left(\textbf{Y}^{k+1}\right)^{-1}\left(\textbf{A}^k\bar{\textbf{y}}^k+\textbf{B}^k\bar{\textbf{v}}^k\right)\\
\tilde{\textbf{V}}^{k+1}\left(\textbf{C}^k\bar{\textbf{y}}^k+\textbf{D}^k\bar{\textbf{v}}^k\right)
\end{bmatrix}
=\begin{bmatrix}
\left(\textbf{Y}^{k+1}\right)^{-1}\bar{\textbf{y}}^{k+1}\\
\tilde{\textbf{V}}^{k+1}\bar{\textbf{v}}^{k+1}
\end{bmatrix}=\begin{bmatrix}
\bold{1}_n\\
\text{1 or 0}\\
\vdots\\
\text{1 or 0}
\end{bmatrix}.
\end{align*}
The $(n+h)^{th}$ row of $\textbf{P}^k$ is zero if and only if $\bar{v}_h^{k+1}$ is zero.
\begin{lemma}
For $k\geq 0$ and $1\leq i\leq n$ we have:
\begin{equation}
\alpha^{\lambda_k} \leq \bar{y}_i^k \leq n.
\end{equation}
Moreover, for $1\leq h\leq m$ and $k\geq 1$ we have either $\bar{v}_h^k=0$ or,
\begin{equation}
\alpha^{\lambda_k+\lambda_{k-1}}\leq \bar{v}_h^k \leq n.
\end{equation}
\end{lemma}
\begin{proof}
We have for $k\geq 1$,
\begin{equation}
\begin{bmatrix}
\bar{\textbf{y}}^k\\ \bar{\textbf{v}}^k
\end{bmatrix}=\textbf{W}^{k-1:0}
\begin{bmatrix}
\bold{1}_n\\\bold{0}_m
\end{bmatrix},\nonumber
\end{equation}
where $\textbf{W}^{k-1:0}$ is the product of $\textbf{W}^{k-1}$ and other column stochastic matrices. By Lemma \ref{lemma:positive n-rows}, $\textbf{W}^{k-1}$ has positive first $n$ rows and its positive entries are at least $\alpha^{\lambda_k}$. Hence $\textbf{W}^{k-1:0}$ has positive first $n$ rows and its positive elements are at least $\alpha^{\lambda_k}$. We obtain for $1\leq i\leq n$,
\begin{equation*}
\bar{y}_i^k\geq\alpha^{\lambda_k},\quad \text{for } k\geq 1.
\end{equation*}
Also since $\lambda_0=0$, $\bar{y}_i^0=1=\alpha^{\lambda_0}$.
Suppose node $h$ is the buffer of link $(i,j)$. If $\bar{v}_h^k$ is positive for some $k\geq 1$, then the last time node $i$ woke up, link $(i,j)$ failed and node $i$ sent some mass to $h$. Hence $W_{hi}^{k-1}\geq \alpha^{\lambda_k}$, and it follows that
\begin{equation*}
\bar{v}_h^k\geq\alpha^{\lambda_k}\bar{y}_i^{k-1}\geq \alpha^{\lambda_k+\lambda_{k-1}}.
\end{equation*}
Also, since the matrices $\textbf{W}^k$ are column stochastic, the total mass is preserved, i.e., $\sum_{i=1}^n\bar{y}_i^k+\sum_{h=1}^m\bar{v}_h^k=n$ for all $k$; hence $\bar{y}_i^k, \bar{v}_h^k\leq n$, for all $i$, $h$ and $k$.
\end{proof}
Now we are able to find a lower bound on the positive elements of $\textbf{P}^k$.
Let us divide $\textbf{P}^k$ into four sub-matrices as:
\begin{equation*}
\textbf{P}^k=\begin{bmatrix}
\textbf{E}^k&\textbf{F}^k\\\textbf{G}^k&\textbf{H}^k
\end{bmatrix},
\end{equation*}
where $\textbf{E}^k\in\mathbb{R}^{n\times n}$, $\textbf{F}^k\in\mathbb{R}^{n\times m}$, $\textbf{G}^k\in\mathbb{R}^{m \times n}$ and $\textbf{H}^k\in\mathbb{R}^{m \times m}$ are defined as in (\ref{eq:P=YAY YBV}).\\
By construction, positive elements of $\textbf{E}^k$ and $\textbf{G}^k$ are at least $\frac{1}{n}\alpha^{\lambda_{k+1}}\alpha^{\lambda_{k}}=\alpha^{\lambda_{k+1}+\lambda_k+1}$. Similarly, positive elements of $\textbf{F}^k$ and $\textbf{H}^k$ are at least $\alpha^{\lambda_{k+1}+\lambda_k+\lambda_{k-1}+1}$. Hence we can define the following lower bound for all positive elements of $\textbf{P}^k$:
\begin{equation}\label{eq:Pk lower bound}
\beta^k=\alpha^{\lambda_{k+1}+\lambda_k+\lambda_{k-1}+1}.
\end{equation}
We note the following facts by observing \eqref{eq:P=YAY YBV}:
\begin{itemize}
\item $\textbf{E}^k$ is strictly positive.
\item If $\bar{v}_h^k$ is positive, the $h^{th}$ column of $\textbf{F}^k$ is strictly positive. Otherwise the whole $(n+h)^{th}$ column of $\textbf{P}^k$ is zero.
\item If $\bar{v}_h^{k+1}$ is positive, the $h^{th}$ row of $\textbf{G}^k$ has at least one positive entry. This is true because during the time $\mu_{kn}$ to $\mu_{(k+1)n}-1$, the corresponding link $(i,j)$ transmits successfully at least once, which sets the values of $\bar{v}_h$ and $\bar{u}_h$ to 0. Therefore, since $\bar{v}_h^{k+1}$ is positive, link $(i,j)$ has failed at least once after the last successful transmission. Hence, $C_{hi}^k$ is positive, and therefore $G_{hi}^k$ is also positive.
\end{itemize}
Define the index set $I^k=\{h\vert \bar{v}_h^k>0\}$. If $h\notin I^k$ we have $\bar{r}_h^k = \bar{v}_h^k = 0$, and also the $(n+h)^{th}$ column of $\textbf{P}^k$ has only zero entries; hence, $\bar{r}_h^k$ does not influence any variable at time $k+1$. Moreover, for $h\notin I^{k+1}$ the $(n+h)^{th}$ row of $\textbf{P}^k$ has only zero entries, so $\bar{r}_h^{k+1}$ is a sum of zeros. Intuitively, this means that for $h\notin I^k$, $\bar{r}_h^k$ is zero and so are the coefficients related to it in \eqref{eq:zr=pzr}; therefore it carries no meaningful information and can be ignored. For the rest of the proof, we assume that all the variables $\bar{r}_h^k$ appearing in the equations are the ones with $h\in I^k$.
We obtain:
\begin{gather*}
\bar{r}_{\max}^{k+1}\leq \beta^k \bar{z}_{\max}^k + (1-\beta^k)\max\{\bar{z}_{\max}^k,\bar{r}_{\max}^k\},\\
\bar{z}_{\max}^{k+1}\leq \beta^k \min\{\bar{z}_{\min}^k,\bar{r}_{\min}^k\}
+ (1-\beta^k)\max\{\bar{z}_{\max}^k,\bar{r}_{\max}^k\}.
\end{gather*}
Then,
\begin{equation*}
\max\{\bar{z}_{\max}^{k+1},\bar{r}_{\max}^{k+1}\}\leq \beta^k \bar{z}_{\max}^k + (1-\beta^k)\max\{\bar{z}_{\max}^k,\bar{r}_{\max}^k\}.
\end{equation*}
Similarly,
\begin{equation*}
\min\{\bar{z}_{\min}^{k+1},\bar{r}_{\min}^{k+1}\}\geq \beta^k \bar{z}_{\min}^k + (1-\beta^k)\min\{\bar{z}_{\min}^k,\bar{r}_{\min}^k\}.
\end{equation*}
We also have:
\begin{gather*}
\bar{z}_{\max}^{k+1}\leq \beta^k \sum_{i=1}^n \bar{z}_i^k + (1-n\beta^k)\max\{\bar{z}_{\max}^k,\bar{r}_{\max}^k\},\\
\bar{z}_{\min}^{k+1}\geq \beta^k \sum_{i=1}^n \bar{z}_i^k + (1-n\beta^k)\min\{\bar{z}_{\min}^k,\bar{r}_{\min}^k\}.
\end{gather*}
Thus,
\begin{equation*}
\bar{z}_{\max}^{k+1}-\bar{z}_{\min}^{k+1}\leq
(1-n\beta^k)\left(\max\{\bar{z}_{\max}^k,\bar{r}_{\max}^k\}-\min\{\bar{z}_{\min}^k,\bar{r}_{\min}^k\}\right).
\end{equation*}
Equivalently,
\begin{gather*}
s^{k+1}\leq \beta^k t^k + (1-\beta^k)s^k,\\
t^{k+1}\leq (1-n\beta^k)s^k,
\end{gather*}
where $s^k=\max\{\bar{z}_{\max}^k,\bar{r}_{\max}^k\}-\min\{\bar{z}_{\min}^k,\bar{r}_{\min}^k\}$ and $t^k=\bar{z}_{\max}^{k}-\bar{z}_{\min}^{k}$. Observing that $0\leq t^k\leq s^k$, we obtain:
\begin{align*}
s^{k+1}&\leq \beta^k(1-n\beta^{k-1})s^{k-1}+(1-\beta^k)s^k\\
&\leq \beta^k(1-n\beta^{k-1})s^{k-1}+(1-\beta^k)s^{k-1}\\
&= (1-n\beta^k \beta^{k-1})s^{k-1}.
\end{align*}
Hence $\lim_{k\to\infty}s^k=0$ if $\prod_{k=1}^{\infty}\left(1-n\beta^{2k}\beta^{2k-1}\right)=0$, which, by Lemma \ref{lem:alpha}, holds true if and only if $\sum_{k=1}^{\infty}\beta^{2k}\beta^{2k-1}=\infty$. Using (\ref{eq:Pk lower bound}), we have:
\begin{align*}
\sum_{k=1}^{\infty}\beta^{2k}\beta^{2k-1}&=\sum_{k=1}^{\infty} \alpha^{\lambda_{2k+1}+2\lambda_{2k}+2\lambda_{2k-1}+\lambda_{2k-2}+2}\\
&\geq \frac{1}{n^2} \sum_{k=K}^{\infty} \alpha^{-\frac{\ln(2k+1+T)}{\ln(\alpha)}}\\
&= \frac{1}{n^2}\sum_{k=K}^{\infty}\frac{1}{2k+1+T} = \infty.
\end{align*}
Hence $\max\{\bar{z}_{\max}^k,\bar{r}_{\max}^k\}-\min\{\bar{z}_{\min}^k, \bar{r}_{\min}^k\}$ converges to 0 as $k$ goes to infinity. Combining this with Lemma \ref{lem:stochastic matrices} we obtain,
\begin{equation}
\lim_{k\to \infty}\bar{z}_i^k = \lim_{k\to \infty,\, h\in I^k }\bar{r}_h^k = L. \label{eq:lim rz}
\end{equation}
We have:
\begin{align*}
L &= L \lim_{k \to \infty} \frac{\sum_{i=1}^n \bar{y}_i^k + \sum_{h=1}^m \bar{v}_h^k}{n}\\
&= \lim_{k \to \infty}\left(\frac{\sum_{i=1}^n \bar{z}_i^k\bar{y}_i^k + \sum_{h=1}^m \bar{r}_h^k\bar{v}_h^k}{n}\right)+\lim_{k \to \infty}\left(\frac{\sum_{i=1}^n (L-\bar{z}_i^k)\bar{y}_i^k + \sum_{h=1}^m (L-\bar{r}_h^k)\bar{v}_h^k}{n}\right)\\
&= \lim_{k \to \infty}\left(\frac{\sum_{i=1}^n \bar{x}_i^k + \sum_{h=1}^m \bar{u}_h^k}{n}\right)
+\lim_{k \to \infty}\left(\frac{\sum_{i=1}^n (L-\bar{z}_i^k)\bar{y}_i^k + \sum_{h=1}^m (L-\bar{r}_h^k)\bar{v}_h^k}{n}\right)\\
&= \frac{\sum_{i=1}^n x_i^0}{n},
\end{align*}
where we used the mass preservation $\sum_{i=1}^n \bar{y}_i^k + \sum_{h=1}^m \bar{v}_h^k=n$, equation \eqref{eq:lim rz} (so the second limit vanishes, since $\bar{y}_i^k$ and $\bar{v}_h^k$ are bounded), the preservation of the total $x$-mass, and the fact that $\bar{v}_h^k=0$ for $h \notin I^k$.
\end{proof}
\section{Conclusion}\label{sec:conclusion}
In this paper we established sufficient conditions on connectivity and link failures for consensus algorithms to converge. We started by showing that ordinary consensus and push-sum still work if intercommunication intervals do not grow too fast. Then we moved on to our main result, which is a fully asynchronous push-sum algorithm robust to link failures. We proved its convergence while allowing consecutive link failures to grow to infinity, as long as they remain smaller than a logarithmically growing upper bound.
This work can be extended by improving the upper bounds using ergodicity theory.
It is also possible to use our results to develop asynchronous distributed optimization algorithms robust to packet losses.
\section{Introduction}
Let $\mathfrak{H}$ be a Hilbert space with inner product $(\cdot,\cdot)$ linear in the first argument
and let $J$ be a non-trivial fundamental symmetry, i.e., $J=J^*$, $J^2=I$, and $J\not={\pm{I}}$.
The space $\mathfrak{H}$ endowed with the indefinite inner product
\begin{equation}\label{new1}
[f,g]=(J{f}, g)
\end{equation}
is called a Krein space $(\mathfrak{H}, [\cdot,\cdot])$.
A (closed) subspace $\mathfrak{L}$ of the
Hilbert space $\mathfrak{H}$ is called {\it nonnegative, positive, uniformly positive} with respect to the indefinite
inner product $[\cdot,\cdot]$ if, respectively, $[f,f]\geq{0}$, \ $[f,f]>0$, \ $[f,f]\geq{\alpha}\|f\|^2, (\alpha>0)$ for all $f\in\mathfrak{L}\setminus\{0\}$.
Nonpositive, negative and uniformly negative subspaces are introduced similarly.
In each of the above mentioned classes we can define maximal subspaces.
For instance, a closed positive subspace $\mathfrak{L}$ is called \emph{maximal positive} if $\mathfrak{L}$
is not a proper subspace of a positive subspace in $\sH$. The concept of maximality for other classes is defined similarly.
A subspace $\mathfrak{L}$ of ${\mathfrak H}$ is called \emph{definite} if it is either positive or negative.
The term \emph{uniformly definite} is defined accordingly.
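To keep these notions concrete, the following elementary finite-dimensional example may help (the space, the fundamental symmetry, and the family of subspaces are chosen purely for illustration): take $\mathfrak{H}=\mathbb{C}^2$ with $J=\mathrm{diag}(1,-1)$, so that $[f,f]=|f_1|^2-|f_2|^2$.

```latex
\[
  \mathfrak{L}_t=\operatorname{span}\bigl\{(1,t)^{T}\bigr\},\qquad
  [f,f]=\bigl(1-|t|^{2}\bigr)|c|^{2}
  \quad\text{for } f=c\,(1,t)^{T},\ c\in\mathbb{C}.
\]
% |t| < 1 : L_t is uniformly positive (and maximal positive);
% |t| = 1 : [f,f] = 0 on L_t, so L_t is neither positive nor negative.
```

In finite dimensions positive and uniformly positive subspaces coincide; the distinction becomes essential only in the infinite-dimensional setting considered below.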
Subspaces $\mathfrak{L}_\pm$ of $\mathfrak{H}$ are called \emph{dual subspaces} if $\mathfrak{L}_-$ is nonpositive,
$\mathfrak{L}_+$ is nonnegative, and $\mathfrak{L}_\pm$ are orthogonal with respect to $[\cdot, \cdot]$, that is $[f_+, f_-]=0$ for all
$f_{+}\in\mathfrak{L}_+$ and all $f_{-}\in\mathfrak{L}_-$.
The subject of the paper is dual definite subspaces.
Our attention is mainly focused on dual definite subspaces $\mathfrak{L}_\pm$
with the additional assumption of the density of their algebraic sum\footnote{The brackets in (\ref{e8}) indicate that
${\mathfrak L}_\pm$ are orthogonal with respect to $[\cdot, \cdot]$.}
\begin{equation}\label{e8}
{\cD}={\mathfrak L}_+[\dot{+}]{\mathfrak L}_-.
\end{equation}
At first glance, the density of $\cD$ in $\sH$ should imply the maximality of definite subspaces ${\mathfrak L}_\pm$
in the Krein space $(\sH, [\cdot,\cdot])$.
However, the results of \cite{R12} show the existence of a densely defined sum \eqref{e8}
for which there are \emph{various extensions} to dual maximal definite subspaces ${\mathfrak L}_\pm^{max}$:
\begin{equation}\label{agga32b}
\cD={\mathfrak L}_+[\dot{+}]{\mathfrak L}_- \ \rightarrow \ \cD_{max}={\mathfrak L}_+^{max}[\dot{+}]{\mathfrak L}_-^{max}.
\end{equation}
The decomposition \eqref{e8} often appears in the spectral theory of $\mathcal{PT}$-symmetric Hamiltonians
\cite{AK_Bender} as the closure of the linear spans of positive and negative eigenfunctions,
and it is closely related to the concept of $\mathcal{C}$-symmetry in $\mathcal{PT}$-symmetric quantum mechanics (PTQM)
\cite{AK_Bender3, AK_Bender4}. The description of a symmetry $\mathcal{C}$ is one of the key points in PTQM
and it can be successfully implemented only in the case where the dual subspaces in \eqref{e8} are \emph{maximal}.
This observation gives rise to a natural question: how can one describe all possible extensions of dual definite subspaces ${\mathfrak L}_\pm$ to
dual maximal definite subspaces ${\mathfrak L}_\pm^{max}$?
In Section \ref{Sec2} this problem is solved with the
use of Krein's results on non-densely defined Hermitian contractions \cite{ArTs, AK_Krein}. The main result
(Theorem \ref{agga22b}) reduces the description of extensions \eqref{agga32b} to the solution of the operator
equation \eqref{agga23}.
Each pair of dual maximal definite subspaces ${\mathfrak L}_\pm^{max}$ generates an
associated Hilbert space $(\sH_G, (\cdot,\cdot)_G)$. If ${\mathfrak L}_\pm^{max}$ are
uniformly definite, then $\cD_{max}$ in \eqref{agga32b}
coincides with $\sH$ and $\sH=\sH_G$ (since the inner product $(\cdot,\cdot)_G$ is equivalent to the original
one $(\cdot,\cdot)$). On the other hand, if
${\mathfrak L}_\pm^{max}$ are only definite subspaces, then $\sH\not=\sH_G$ and the inner products
$(\cdot,\cdot)_G$, $(\cdot, \cdot)$ are not equivalent. In this case, the direct sum $\cD$ may lose the property
of being densely defined in the new Hilbert space $(\sH_G, (\cdot,\cdot)_G)$.
We say that dual definite subspaces ${\mathfrak L}_\pm$ are \emph{quasi maximal} if
there exists at least one extension \eqref{agga32b} such that
the set $\cD$ remains dense in the new Hilbert space $(\sH_G, (\cdot,\cdot)_G)$ constructed by
$\cD_{max}$.
In Section \ref{4}, dual quasi maximal subspaces are characterized in terms of extremal extensions of symmetric
operators: Theorems \ref{agga28}, \ref{agga30}, Corollary \ref{agga31}.
The theory of extremal extensions \cite{AK_Arlin, Arlin} allows one to classify all possible cases
$(A), (B), (C)$ (uniqueness/nonuniqueness of Hilbert spaces $\sH_G$ which preserve the density of $\cD$).
Section \ref{5} deals with the operator of $\cC$-symmetry.
Each pair of dual definite subspaces ${\mathfrak L}_\pm$ determines by \eqref{new5} an operator $\cC_0$ such that
$\cC_0^2=I$ and $J\cC_0$ is a positive symmetric operator in $\sH$.
The operator $\cC_0$ is called \emph{an operator of $\cC$-symmetry} if
$J\cC_0$ is a self-adjoint operator in $\sH$. In this case, the notation $\cC$ is used instead of $\cC_0$.
Let $\cC_0$ be an operator associated with dual definite subspaces ${\mathfrak L}_\pm$.
Its extension to the operator of $\cC$-symmetry $\cC$ is equivalent to the construction of
dual maximal definite subspaces ${\mathfrak L}_\pm^{max}$ in \eqref{agga32b}.
This relationship allows one to use the classification $(A), (B), (C)$ in Section \ref{4}
for the solution of the following problems: (i) how many operators of $\cC$-symmetry
can be constructed on the base of dual definite subspaces ${\mathfrak L}_\pm$?
(ii) is it possible to define an operator of $\cC$-symmetry as the
extension by continuity in the new Hilbert space $(\sH_G, (\cdot,\cdot)_G)$?
The concept of dual quasi maximal subspaces allows one to introduce quasi bases in
Section \ref{6}. The characteristic properties of quasi bases are presented in Theorem \ref{agga38}
and Corollaries \ref{agga35} -- \ref{agga72}.
The relevant examples are given.
In what follows, ${D}(H)$, $R(H)$ and $\ker{H}$ denote, respectively, the domain, the range, and the kernel of a linear operator $H$.
The symbol $H\upharpoonright_{\mathcal{D}}$ means the restriction of $H$ onto a set $\mathcal{D}$.
Let $\sH$ be a complex Hilbert space. Sometimes it is useful to specify the inner product $(\cdot,\cdot)$ with which $\sH$ is endowed; in that case the notation $(\mathfrak{H}, (\cdot,\cdot))$ will be used.
\section{Dual maximal subspaces}\label{Sec2}
\subsection{Extension of dual subspaces ${\mathfrak L}_\pm$ to dual maximal subspaces ${\mathfrak L}_\pm^{max}$.}
Let $(\mathfrak{H}, [\cdot,\cdot])$ be a Krein space with a fundamental symmetry $J$.
Denote
\begin{equation}\label{AK9}
\sH_+=\frac{1}{2}(I+J)\sH, \qquad \sH_-=\frac{1}{2}(I-J)\sH.
\end{equation}
The subspaces $\sH_{\pm}$ of
$\sH$ are orthogonal with respect to the initial inner product $(\cdot,\cdot)$ as well as with respect to the indefinite inner product $[\cdot,\cdot]$.
Moreover $\sH_{+} \ (\sH_{-})$ is maximal uniformly positive (negative) with respect to $[\cdot,\cdot]$ and
\begin{equation}\label{AK10}
\sH=\sH_+[\oplus]\sH_-.
\end{equation}
The decomposition (\ref{AK10}) is called \emph{the fundamental decomposition} of the
Krein space $(\mathfrak{H}, [\cdot,\cdot])$.
{\bf I.} Each positive (negative) subspace ${\mathfrak L}_+$ (${\mathfrak L}_-$) in the Krein space $(\mathfrak{H}, [\cdot,\cdot])$
can be presented with respect to \eqref{AK10} as follows:
\begin{equation}\label{agga9}
\begin{array}{c}
{\mathfrak L}_+=\{x_++K_+^-x_+ : x_+\in{M}_+\subseteq\sH_+\}, \\
{\mathfrak L}_-=\{x_-+K_-^{+}x_- : x_-{\in}M_-\subseteq\sH_-\},
\end{array}
\end{equation}
where $K_{+}^{-} : \sH_+\to\sH_-$ and $K_{-}^{+} : \sH_-\to\sH_+$ are strong contractions\footnote{an operator $K$ is called a strong contraction if $\|Kf\|<\|f\|$ for all nonzero $f\in{D}(K)$}
with the domains $D(K_{+}^{-})=M_+\subseteq\sH_+$ and $D(K_{-}^{+})=M_-\subseteq\sH_-$, respectively. Therefore, the pair
of subspaces ${\mathfrak L}_\pm$ is uniquely determined by the
formula
\begin{equation}\label{agga20b}
{\mathfrak L}_\pm=(I+T_0)P_{\pm}D(T_0)
\end{equation}
where
\begin{equation}\label{neww4}
T_0=K_{+}^{-}P_++K_{-}^{+}P_-, \quad D(T_0)=M_+\oplus{M_-},
\end{equation}
and $P_{+}=\frac{1}{2}(I+J)$ and $P_{-}=\frac{1}{2}(I-J)$ are orthogonal projection operators on $\sH_+$ and $\sH_-$, respectively.
By construction, $T_0$ is a strong contraction in $\sH$ such that
\begin{equation}\label{fff1}
JT_0=-T_0J
\end{equation}
and its domain $D(T_0)=M_+\oplus{M_-}$ is a (closed) subspace of $\sH$.
The additional requirement of duality of ${\mathfrak L}_\pm$ leads to
$T_0$ being symmetric. Precisely, the following statement holds.
\begin{lemma}\label{neww2}
The definite subspaces ${\mathfrak L}_\pm$ in \eqref{agga20b} are dual if and only if the operator $T_0$ is a symmetric strong contraction in $\sH$
and \eqref{fff1} holds.
\end{lemma}
\begin{proof} It suffices to show that the symmetry of $T_0$ is equivalent to the orthogonality of
$\sL_\pm$ with respect to the indefinite inner product $[\cdot, \cdot]$. Indeed, for any
$x_{\pm}\in{M}_{\pm}$,
$$
0=[(I+T_0)x_+, (I+T_0)x_-]=((I-T_0)x_+, (I+T_0)x_-)=(x_+,T_0x_-)-(T_0x_+,x_-).
$$
Hence, $(x_+, T_0x_-)=(T_0x_+, x_-)$, $\forall{x}_{\pm}\in{M}_{\pm}$. The last equality
is equivalent to
$$
(T_0(x_++x_-), y_++y_-)=(T_0x_+, y_-)+(T_0x_-,y_+)=(x_++x_-, T_0(y_++y_-))
$$
for all $f=x_++x_-$ and $g=y_++y_-$ from the domain of $T_0$. Therefore, $T_0$ is a symmetric operator.
\end{proof}
The operator $T_0$ characterizes the `deviation' of the subspaces ${\mathfrak L}_\pm$ from $\sH_{\pm}$
and allows one to characterize additional properties of ${\mathfrak L}_\pm$.
\begin{lemma}[\cite{GKS}]\label{agga1}
Let ${\mathfrak L}_\pm$ be dual definite subspaces (i.e., the operator $T_0$ satisfies the condition of Lemma \ref{neww2}). Then
\begin{itemize}
\item[(i)] the subspaces ${\mathfrak L}_\pm$ are uniformly definite $\iff$ $\|T_0\|<1$;
\item[(ii)] the subspaces ${\mathfrak L}_\pm$ are definite but not uniformly definite $\iff$ $\|T_0\|=1$;
\item[(iii)] the subspaces ${\mathfrak L}_\pm$ are maximal $\iff$ $T_0$ is a self-adjoint operator defined on $\sH$.
\end{itemize}
\end{lemma}
By virtue of Lemmas \ref{neww2} and \ref{agga1}, the extension of dual definite subspaces ${\mathfrak L}_\pm$
to dual maximal definite subspaces ${\mathfrak L}_\pm^{max}$ \emph{is equivalent to the extension of $T_0$ to
a self-adjoint strong contraction $T$ anticommuting with $J$}. In this case, cf. \eqref{agga20b},
\begin{equation}\label{agga21}
{\mathfrak L}_\pm^{max}=(I+T)\sH_{\pm}.
\end{equation}
The next result is well known and can be proved by various methods (see, e.g., \cite{AK_Azizov}, \cite[Theorem 2.1]{PH}).
For the sake of completeness, the principal stages of the proof, based on the work of Phillips \cite{PH}, are given.
\begin{theorem}\label{agga2}
Let ${\mathfrak L}_\pm$ be dual definite subspaces. Then there exist dual maximal definite subspaces ${\mathfrak L}_\pm^{max}$ such that
${\mathfrak L}_\pm\subseteq{\mathfrak L}_\pm^{max}$.
\end{theorem}
\begin{proof}
For the construction of ${\mathfrak L}_\pm^{max}$, it suffices to
prove the existence of a self-adjoint strong contraction $T$ which extends $T_0$ and anticommutes with $J$.
The existence of a self-adjoint contraction extension $T'\supset{T_0}$ is well known \cite{AK_Krein, AK_Akhiezer}.
However, we cannot state that $T'$ anticommutes with $J$.
To overcome this inconvenience we modify $T'$ as follows:
\begin{equation}\label{fifa1}
T=\frac{1}{2}(T'-JT'J).
\end{equation}
It is easy to see that $T$ is the required self-adjoint strong contraction: $T$ anticommutes with $J$ by construction,
and $T$ is an extension of $T_0$ (since $JT_0=-T_0J$ implies $\frac{1}{2}(T_0-JT_0J)=T_0$ on $D(T_0)$). Therefore, the dual maximal subspaces ${\mathfrak L}_\pm^{max}\supseteq{\mathfrak L}_\pm$
can be defined by \eqref{agga21}.
\end{proof}
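The averaging trick \eqref{fifa1} can also be checked numerically in a finite-dimensional toy model (the dimension, the choice of $J$, and the random matrix below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
J = np.diag([1.0, 1.0, -1.0, -1.0])     # a fundamental symmetry: J = J*, J^2 = I

# a random self-adjoint contraction T' (spectral norm pushed below 1)
A = rng.standard_normal((n, n))
Tp = A + A.T
Tp /= np.linalg.norm(Tp, 2) + 0.1

# the modification (fifa1): T = (T' - J T' J) / 2
T = 0.5 * (Tp - J @ Tp @ J)
```

Here $T$ is self-adjoint as a difference of self-adjoint matrices, anticommutes with $J$ because $JT+TJ=\frac12(JT'-T'J)+\frac12(T'J-JT')=0$, and is a contraction as the average of the two contractions $T'$ and $-JT'J$.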
The set of all self-adjoint contractive extensions of $T_0$ forms an operator interval $[T_\mu, T_M]$ \cite[Theorem 3]{AK_Krein}.
The end points of this interval, $T_\mu$ and $T_M$, are called the \emph{hard} and the \emph{soft} extensions of $T_0$, respectively.
\begin{corollary}\label{agga3}
Let ${\mathfrak L}_\pm$ be dual definite subspaces.
Then their extension to the dual maximal subspaces ${\mathfrak L}_\pm^{max}$
can be defined by \eqref{agga21} with
$$
T=\frac{1}{2}(T_\mu+T_M).
$$
\end{corollary}
\begin{proof}
Let us prove that
\begin{equation}\label{agga4}
JT_\mu=-T_{M}J.
\end{equation}
By virtue of \cite[2.§108 p.380]{AK_Akhiezer}, the operators $T_{\mu}$ and $T_M$ can be defined as follows:
\begin{equation}\label{kaa1}
T_{\mu}=T-\sqrt{I+T}Q_1\sqrt{I+T}, \quad T_{M}=T+\sqrt{I-T}Q_2\sqrt{I-T}
\end{equation}
where $T$ is a self-adjoint contractive extension of $T_0$ anticommuting with $J$ (its existence was proved in Theorem \ref{agga2}),
$Q_1$ and $Q_2$ are the orthogonal projections onto the orthogonal complements of the manifolds $\sqrt{I+T}D(T_0)$ and $\sqrt{I-T}D(T_0)$ respectively.
It is obvious that $J(I+T)=(I-T)J$ (because $T$ anticommutes with $J$). Furthermore, since the operators $I\pm T$ are self-adjoint and nonnegative, there exist unique nonnegative square roots $\sqrt{I\pm T}$.
Let us consider the operator $S=J\sqrt{I+T}J$, and compute:
$$
S^2=J\sqrt{I+T}J^2\sqrt{I+T}J=J(I+T)J=(I-T)J^2=I-T.
$$
Hence, $S=\sqrt{I-T}$, i.e.,
$J\sqrt{I+T}=\sqrt{I-T}J.$ The latter relation yields that the unitary operator $J$ transforms
the decomposition $\mathfrak{H}=\sqrt{I+T}D(T_0)\oplus{Q}_1\mathfrak{H}$ into
$\mathfrak{H}=\sqrt{I-T}D(T_0)\oplus{Q_2\mathfrak{H}}$. This means that
$JQ_1=Q_2J$. The above analysis and (\ref{kaa1}) justify (\ref{agga4}).
Due to the proof of Theorem \ref{agga2}, for the construction of $T$ in \eqref{fifa1}
we can use an arbitrary self-adjoint contractive extension $T'\supset{T_0}$.
In particular, choosing $T'=T_{\mu}$ and using \eqref{agga4} in the form $JT_\mu{J}=-T_M$, we obtain $T=\frac{1}{2}(T_\mu-JT_\mu{J})=\frac{1}{2}(T_\mu+T_M)$, which completes the proof.
\end{proof}
In general, the extension of dual definite subspaces ${\mathfrak L}_\pm$ to dual maximal definite subspaces ${\mathfrak L}_\pm^{max}$
is not unique. To describe all possible cases we use the formula \cite{AK_Arlin, AK_Krein}
\begin{equation}\label{agga22}
T=T_\mu+(T_M-T_\mu)^{\frac{1}{2}}X(T_M-T_\mu)^{\frac{1}{2}}
\end{equation}
which gives a one-to-one correspondence between all self-adjoint contractive extensions $T$ of $T_0$ and
all nonnegative self-adjoint contractions $X$ in the subspace $\mathfrak{M}=\overline{R(T_M-T_\mu)}$.
\begin{theorem}\label{agga22b}
The self-adjoint contractive extension $T\supset{T_0}$ determines dual maximal subspaces ${\mathfrak L}_\pm^{max}$ in
\eqref{agga21} if and only if the corresponding nonnegative self-adjoint contraction $X$ describing $T$ in \eqref{agga22}
is the solution of the following operator equation in $\mathfrak{M}$:
\begin{equation}\label{agga23}
X=J(I-X)J
\end{equation}
\end{theorem}
\begin{proof} It follows from \eqref{agga4} that the subspace $\mathfrak{M}$ reduces $J$. Furthermore,
$J(T_M-T_\mu)^{\frac{1}{2}}=(T_M-T_\mu)^{\frac{1}{2}}J$. Taking these relations into account, we
conclude that the self-adjoint contraction $T$ in \eqref{agga22} anticommutes with $J$ if and only if
$X$ satisfies \eqref{agga23}.
\end{proof}
\begin{remark}\label{agga32}
The equation \eqref{agga23} has an elementary solution $X=\frac{1}{2}I$, which corresponds to
the operator $T$ defined in Corollary \ref{agga3}.
Let $X_0\not=\frac{1}{2}I$ be a solution of \eqref{agga23}. Then the nonnegative self-adjoint contraction
$X_1=I-X_0$ is also a solution of \eqref{agga23}. Moreover, each self-adjoint nonnegative contraction
$X_\alpha=(1-\alpha)X_0+\alpha{X}_1, \ \alpha\in[0,1]$, is a solution of \eqref{agga23}.
Therefore, either the dual maximal definite subspaces ${\mathfrak L}_\pm^{max}\supset{\mathfrak L}_\pm$ are determined uniquely
or there are infinitely many such extensions.
\end{remark}
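A minimal finite-dimensional model (given here only for illustration) shows how this alternative occurs. Let $\mathfrak{M}=\mathbb{C}^{2}$ and let the restriction of $J$ onto $\mathfrak{M}$ act as the permutation $J(c_1,c_2)=(c_2,c_1)$. For diagonal operators $X=\mathrm{diag}(a,b)$ with $0\leq a,b\leq 1$, the equation \eqref{agga23} takes the form
$$
\mathrm{diag}(a,b)=J\,\mathrm{diag}(1-a,1-b)\,J=\mathrm{diag}(1-b,1-a),
$$
i.e., $b=1-a$. Hence each $X_a=\mathrm{diag}(a,1-a)$, $a\in[0,1]$, is a solution: the value $a=\frac{1}{2}$ gives the elementary solution $X=\frac{1}{2}I$, while $X_0=\mathrm{diag}(0,1)$ and $X_1=I-X_0$ are orthogonal projections (onto hypermaximal neutral subspaces of the Krein space $(\mathfrak{M},[\cdot,\cdot])$, cf. Theorem \ref{agga30}).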
{\bf II.} It is useful to rewrite the above results, as well as the results in the sequel, with the help of the Cayley transform
of $T_0$:
\begin{equation}\label{agga7}
G_0=(I-T_0)(I+T_0)^{-1}, \qquad T_0=(I-G_0)(I+G_0)^{-1}.
\end{equation}
In what follows \emph{we assume that the direct sum \eqref{e8} of ${\mathfrak L}_\pm$ is a dense set in $\sH$}.
Then,
the operator $G_0$ is a closed densely defined positive symmetric operator in $\sH$
with
$$
D(G_0)=\cD, \qquad \ker(I+G_0^*)=\sH\ominus(M_-\oplus{M_+})
$$
and such that
\begin{equation}\label{AK71}
JG_0f=G_0^{-1}Jf, \qquad \forall{f}\in{D}(G_0)={\mathcal D}.
\end{equation}
\begin{remark}
Every nonnegative self-adjoint extension
$G\supset{G_0}$ (i.e., $(Gf,f)\geq{0}$) is also a positive extension of $G_0$ (i.e., $(Gf,f)>{0}$ for $f\not=0$).
Indeed, if $(Gf,f)={0}$, then $Gf=0$ (since $G\geq{0}$) and $(Gf,g)=(f, G_0g)=0$ for all $g\in{D}(G_0)$. Therefore,
$f\perp{R}(G_0)=D(G_0^{-1})$ and $f=0$, since $D(G_0^{-1})$ is a dense set in $\sH$ by virtue of \eqref{AK71}.
\end{remark}
Self-adjoint positive extensions $G$ of $G_0$ are in one-to-one correspondence with the set of
contractive self-adjoint extensions of $T_0$:
\begin{equation}\label{agga16}
T=(I-G)(I+G)^{-1}, \qquad G=(I-T)(I+T)^{-1}.
\end{equation}
In particular the Friedrichs extension $G_{\mu}$ of $G_0$ corresponds to the operator $T_\mu$, while
the Krein-von Neumann extension $G_M$ is the Cayley transform of $T_M$. The relation \eqref{agga4}
between $T_\mu$ and $T_M$ is rewritten as follows \cite[Theorem 4.3]{GKS}:
\begin{equation}\label{agga29}
JG_\mu=G^{-1}_MJ.
\end{equation}
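Let us note, as a direct verification, that \eqref{agga29} also follows from \eqref{agga4} and \eqref{agga16}. Indeed, $JT_\mu=-T_MJ$ implies $J(I-T_\mu)=(I+T_M)J$ and $J(I+T_\mu)^{-1}=(I-T_M)^{-1}J$, so that
$$
JG_\mu=J(I-T_\mu)(I+T_\mu)^{-1}=(I+T_M)(I-T_M)^{-1}J=G_{M}^{-1}J.
$$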
It follows from \eqref{agga7} and Lemmas \ref{neww2}, \ref{agga1} (see also \cite[Proposition 4.2]{KS}) that
\emph{the dual maximal definite subspaces ${\mathfrak L}_\pm^{max}\supseteq{\mathfrak L}_\pm$ are in
one-to-one correspondence with the positive self-adjoint extensions $G$ of $G_0$ satisfying the
additional condition} (cf. \eqref{AK71}):
\begin{equation}\label{agga14}
JGf=G^{-1}Jf, \qquad \forall{f}\in{D}(G)={\mathfrak L}_+^{max}[\dot{+}]{\mathfrak L}_-^{max}.
\end{equation}
\section{Krein spaces associated with dual maximal subspaces}
\subsection{The case of maximal uniformly definite subspaces.}\label{ref2}
Let ${\mathfrak L}_\pm^{max}$ be dual maximal uniformly definite subspaces.
Then:
\begin{equation}\label{e8b}
{\sH}={\mathfrak L}_+^{max}[\dot{+}]{\mathfrak L}_-^{max}.
\end{equation}
Relation \eqref{e8b} illustrates the variety of possible decompositions of the Krein space $(\sH, [\cdot,\cdot])$ into its
maximal uniformly positive/negative subspaces. This property is characteristic of a Krein space and, sometimes, it is used
as its definition \cite{AK_Azizov}.
With decomposition \eqref{e8b} one can associate a new inner product in $\sH$:
\begin{equation}\label{agga15}
(f, g)_G=[f_+, g_+]-[f_-, g_-], \qquad f,g\in\sH
\end{equation}
($ f=f_++f_-, \ g=g_++g_-, \ f_\pm, \ g_\pm \in {\mathfrak L}_\pm^{max}$). By virtue of \eqref{agga21},
the relations $f_\pm=(I+T)x_\pm$, \ $g_\pm=(I+T)y_\pm$, \ $x_\pm, y_\pm \in \sH_\pm$ hold.
Taking \eqref{agga16} into account we rewrite \eqref{agga15} as follows:
\begin{equation}\label{fff7}
\begin{array}{l}
(f, g)_G=((I-T)x_+, (I+T)y_+)+((I-T)x_-, (I+T)y_-)= \\
((I-T)(x_++x_-), (I+T)(y_++y_-))=(Gf,g).
\end{array}
\end{equation}
The middle equality holds because the operator $I-T^2$ leaves the subspaces $\sH_\pm$ invariant, so that the cross terms $((I-T)x_\pm, (I+T)y_\mp)$ vanish.
Here $G$ is a bounded\footnote{`bounded' since $\|T\|<1$; see Lemma \ref{agga1}.} positive self-adjoint operator with $0\in\rho(G)$.
Therefore, the subspaces ${\mathfrak L}_\pm^{max}$ determine the
new inner product
$$
(\cdot,\cdot)_1=(G\cdot, \cdot)=(\cdot,\cdot)_G,
$$
which is equivalent to the initial one $(\cdot,\cdot)$.
The subspaces ${\mathfrak L}_\pm^{max}$ are mutually orthogonal with respect to $(\cdot,\cdot)_{G}$
in the Hilbert space $(\sH, (\cdot,\cdot)_G)$.
Summing up: \emph{the choice of various dual maximal uniformly definite subspaces ${\mathfrak L}_\pm^{max}$
generates infinitely many equivalent inner products $(\cdot,\cdot)_{G}$ of the Hilbert space $\sH$ but it
does not change the initial Krein space $(\sH, [\cdot,\cdot])$.}
\subsection{The case of maximal definite subspaces.}\label{ref1}
Assume that ${\mathfrak L}_\pm^{max}$ are dual maximal definite subspaces but they are not uniformly definite.
Then the direct sum
\begin{equation}\label{agga18}
\cD_{max}={\mathfrak L}_+^{max}[\dot{+}]{\mathfrak L}_-^{max}
\end{equation}
is a dense set in the Hilbert space $(\sH, (\cdot, \cdot))$. The corresponding positive self-adjoint operator $G$ is unbounded.
Similarly to the previous case, with each direct sum \eqref{agga18},
one can associate a new inner product $(\cdot,\cdot)_G=(G\cdot,\cdot)$ defined on $D(G)=\cD_{max}$ by the formula
\eqref{agga15}. The inner product $(\cdot,\cdot)_G$ is not equivalent to the initial one and
the linear space $\cD_{max}$ endowed with $(\cdot,\cdot)_{G}$ is a pre-Hilbert space.
Let $\sH_{G}$ be the completion of $\cD_{max}$ with respect to $(\cdot,\cdot)_{G}$. The Hilbert space $\sH_{G}$ does not coincide with $\sH$.
The dual subspaces ${\mathfrak L}_\pm^{max}$ are orthogonal with respect to $(\cdot,\cdot)_{G}$ and, by construction,
the new Hilbert space $(\sH_{G}, (\cdot,\cdot)_{G})$ can be decomposed as follows:
\begin{equation}\label{e8c}
\sH_{G}=\hat{\sL}_+^{max}\oplus_{G}\hat{\sL}_-^{max},
\end{equation}
where $\hat{\sL}_{\pm}^{max}$ are the completions of $\sL_\pm^{max}$ with respect to $(\cdot,\cdot)_{G}$.
The decomposition (\ref{e8c}) can be considered as the fundamental decomposition of the new
Krein space $(\sH_{G}, [\cdot, \cdot]_G)$ with the indefinite inner product
\begin{equation}\label{agga19}
[f, g]_G=({J_G}f, g)_G=(f_{+}, g_{+})_{G}-(f_{-}, g_{-})_{G},
\end{equation}
where $f=f_{+}+f_{-}, \ g=g_{+}+g_{-}, \ f_{\pm}, g_{\pm} \in \hat{\sL}_{\pm}^{max}$ and
${J_G}f=f_{+}-f_{-}$ is the fundamental symmetry in $\sH_G$.
Let $\mathfrak{D}[G]$ be the \emph{energetic linear manifold} constructed by the positive self-adjoint operator $G$. In other words,
$\mathfrak{D}[G]$ denotes the completion of $D(G)=\cD_{max}$ with respect to the energetic norm
$$
\|f\|^2_{en}=\|f\|^2+\|f\|^2_G=\|f\|^2+(Gf, f).
$$
The set of elements $\mathfrak{D}[G]$ coincides with $D(\sqrt{G})$ and the
energetic linear manifold is a Hilbert space $(\mathfrak{D}[G], (\cdot,\cdot)_{en})$
with respect to the energetic inner product
$$
(f, g)_{en}=(f,g)+(\sqrt{G}f,\sqrt{G}g), \qquad f,g\in{D(\sqrt{G})}=\mathfrak{D}[G].
$$
Comparing the definitions of $\sH_G$ and $\mathfrak{D}[G]$ leads to the conclusion that the energetic linear manifold $\mathfrak{D}[G]$ coincides
with the common part of $\sH$ and $\sH_G$, i.e., $\mathfrak{D}[G]=\sH\cap\sH_G$.
\begin{lemma}\label{agga20}
The indefinite inner products $[\cdot, \cdot]$ and $[\cdot, \cdot]_G$ coincide on $\mathfrak{D}[G]$.
\end{lemma}
\begin{proof}
Indeed, taking \eqref{agga15} and \eqref{agga19} into account,
$$
[f,g]_G=[f_+,g_+]+[f_-,g_-]=[f_++f_-, g_++g_-]=[f, g], \quad \forall f,g\in{D(G)}.
$$
The obtained relation can be extended onto $\mathfrak{D}[G]$ by continuity because $|[f,g]|\leq\|f\|\,\|g\|$ and
$|[f,g]_G|\leq\|f\|_G\,\|g\|_G$.
\end{proof}
Summing up: \emph{the choice of various dual maximal definite subspaces ${\mathfrak L}_\pm^{max}$
generates infinitely many Krein spaces $(\sH_G, [\cdot,\cdot]_G)$ such that
the indefinite inner product $[\cdot,\cdot]_G$ coincides with the original indefinite inner product $[\cdot,\cdot]$
on the energetic linear manifold $\mathfrak{D}[G]$. The inner products $(\cdot,\cdot)$ and $(\cdot,\cdot)_{G}$ restricted
to $\mathfrak{D}[G]$ are not equivalent.}
\section{Dual quasi maximal subspaces}\label{4}
\subsection{Definition and principal results.}\label{4.1}
Let ${\mathfrak L}_\pm$ be dual definite subspaces such that their direct sum \eqref{e8} is a dense set in
$\sH$ and let $G_0$ be the corresponding symmetric operator.
Each positive self-adjoint extension $G$ of $G_0$ with additional condition \eqref{agga14} determines
the Hilbert space\footnote{The Hilbert space $\sH_G$ coincides with $\sH$ if $G$ is a bounded operator} $(\sH_G, (\cdot,\cdot)_G)$.
\begin{definition}\label{agga25}
The dual definite subspaces ${\mathfrak L}_\pm$ are called quasi maximal if
there exists a positive self-adjoint extension $G\supset{G_0}$ satisfying the condition \eqref{agga14} and such that
the domain $D(G_0)$ remains dense in the new Hilbert space $(\sH_G, (\cdot,\cdot)_G)$.
\end{definition}
Obviously, dual maximal definite subspaces are quasi maximal.
For dual uniformly definite subspaces, the concept of quasi maximality is equivalent to maximality, i.e.,
quasi maximal uniformly definite subspaces have to be maximal uniformly definite.
In the general case of definite subspaces, the closure of dual quasi maximal subspaces
${\mathfrak L}_\pm$ with respect to $(\cdot,\cdot)_G$ coincides with the subspaces $\hat{\sL}_\pm^{max}$
in the fundamental decomposition \eqref{e8c}, i.e., the closure of ${\mathfrak L}_\pm$ in $(\sH_G, (\cdot,\cdot)_G)$
gives dual maximal uniformly definite subspaces $\hat{\sL}_\pm^{max}$ of the new Krein space
$(\sH_G, [\cdot,\cdot]_G)$.
It is natural to suppose that the quasi maximality
can be characterized in terms of the corresponding
positive self-adjoint extensions $G$ of $G_0$. For this reason, we recall \cite{Arlin, AK_Arlin}
that a nonnegative self-adjoint extension $G$ of
$G_0$ is called \emph{an extremal extension} if
\begin{equation}\label{bebe86}
\inf_{f\in{D}(G_0)}{(G(\phi-f),(\phi-f))}=0, \quad \mbox{for all} \quad \phi\in{D}(G).
\end{equation}
The Friedrichs extension $G_\mu$ and the Krein-von Neumann extension $G_M$ are examples of extremal extensions of $G_0$.
\begin{theorem}\label{agga28}
Dual definite subspaces ${\mathfrak L}_\pm$ are quasi maximal if and only if
there exists an extremal extension $G$ of $G_0$ which satisfies \eqref{agga14}
\end{theorem}
\begin{proof}
Since $\|\phi-f\|^2_{G}=(G(\phi-f),(\phi-f))$, the condition (\ref{bebe86}) holds
if and only if each element $\phi\in{D}(G)={\mathfrak L}_+^{max}[\dot{+}]{\mathfrak L}_-^{max}$ can be approximated by elements
$f\in{D}(G_0)={\mathfrak L}_+[\dot{+}]{\mathfrak L}_-$ in the Hilbert space $(\sH_{G}, (\cdot,\cdot)_G)$,
that is, if and only if $D(G_0)$ is dense in $(\sH_{G}, (\cdot,\cdot)_G)$.
\end{proof}
\begin{remark}\label{new61}
In general, an extremal extension $G$ of $G_0$ is not determined uniquely.
Let $G_i,$ $i=1,2$ be extremal extensions of $G_0$ that satisfy \eqref{agga14}.
By virtue of \eqref{agga15} and Lemma \ref{agga20}, the operator
$$
W : (\sH_{G_1}, (\cdot,\cdot)_{G_1}) \to (\sH_{G_2}, (\cdot,\cdot)_{G_2})
$$
defined as $Wf=f$ for $f\in{D}(G_0)$ and extended by continuity onto
$(\sH_{G_1}, (\cdot,\cdot)_{G_1})$ is a unitary mapping between $\sH_{G_1}$ and $\sH_{G_2}$.
Moreover, $WJ_{G_1}=J_{G_2}W$, where
$J_{G_i}$ are the fundamental symmetry operators corresponding to the fundamental decompositions
(cf. \eqref{e8c})
$\sH_{G_i}=(\hat{\sL}_{+}^{i})^{max}\oplus_{G_{i}}(\hat{\sL}_{-}^{i})^{max}$
of the Krein spaces $(\sH_{G_i}, [\cdot,\cdot]_{G_i})$. Therefore, the indefinite inner products of these spaces
satisfy the relation
$$
[f,g]_{G_1}=[Wf,Wg]_{G_2}, \qquad f,g\in\sH_{G_1}
$$
and they
are extensions (by the continuity) of the original indefinite inner product $[\cdot,\cdot]$ defined on $D(G_0)$.
For these reasons, the Krein spaces $(\sH_{G_i}, [\cdot,\cdot]_{G_i})$ corresponding to different
extremal extensions are unitarily equivalent and we can identify them.
\end{remark}
Sufficient conditions for quasi maximality are presented below.
\begin{proposition}\label{agga8}
Dual definite subspaces ${\mathfrak L}_\pm$ are quasi maximal if one of the following (equivalent) conditions is satisfied:
\begin{itemize}
\item[(i)] the operator $G_0$ has a unique nonnegative self-adjoint extension;
\item[(ii)] dual maximal subspaces ${\mathfrak L}_\pm^{max}\supseteq{\mathfrak L}_\pm$ are determined
by the Friedrichs extension $G_\mu$ of $G_0$
(i.e., ${\mathfrak L}_\pm^{max}=(I+T_\mu)\sH_{\pm}$);
\item[(iii)] for all nonzero vectors $g\in\ker(I+G_0^*)=\sH\ominus(M_-\oplus{M_+})$
$$
\inf_{f\in{D}(G_0)}\frac{(G_0f,f)}{|(f,g)|^2}=0;
$$
\item[(iv)] for all nonzero vectors $g\in\sH\ominus{D(T_0)}=\sH\ominus(M_-\oplus{M_+})$
\begin{equation}\label{agga6}
\sup_{x\in{D}(T_0)}\frac{|(T_0x, g)|^2}{\|x\|^2-\|T_0x\|^2}=\infty.
\end{equation}
\end{itemize}
\end{proposition}
\begin{proof}
Let $G_0$ have a unique nonnegative self-adjoint extension $G$. Then, $G=G_\mu=G_M$ and, by virtue of \eqref{agga29},
$JG=G^{-1}J$. Therefore, the operator $G$ determines dual maximal subspaces ${\mathfrak L}_\pm^{max}\supseteq{\mathfrak L}_\pm$.
Furthermore, $G$ is an extremal extension (since the Friedrichs extension and the Krein-von Neumann extension are extremal ones).
In view of Theorem \ref{agga28}, ${\mathfrak L}_\pm$ are quasi maximal. Thus the condition (i) ensures the
quasi maximality of ${\mathfrak L}_\pm$.
The condition (ii) is equivalent to (i) due to \eqref{agga29} and \eqref{agga14}.
The equivalence of (i) and (iii) follows from \cite[Theorem 9]{AK_Krein}.
The condition (i) reformulated for the Cayley transformation $T_0$ of $G_0$ (see \eqref{agga7})
means that $T_0$ has a unique self-adjoint contractive extension $T=T_\mu=T_M$. The latter
is equivalent to (iv) due to \cite[Theorem 6]{AK_Krein}.
\end{proof}
Assume that dual subspaces ${\mathfrak L}_\pm$ do not satisfy conditions of Proposition \ref{agga8}.
Then $T_\mu\not=T_M$ and the subspace $\mathfrak{M}=\overline{R(T_M-T_\mu)}$ of $\sH$
is nontrivial. In view of \eqref{agga4}, this subspace reduces the operator $J$. Therefore, the restriction of $J$ onto
$\mathfrak{M}$ determines the operator of fundamental symmetry in $\mathfrak{M}$ and the space $\mathfrak{M}$
endowed with the indefinite inner product $[\cdot,\cdot]$ is the Krein space $(\mathfrak{M}, [\cdot,\cdot])$.
A subspace $\mathfrak{M}_1\subset\mathfrak{M}$ is called \emph{hypermaximal neutral} if the space
$\mathfrak{M}$ can be decomposed $\mathfrak{M}=\mathfrak{M}_1\oplus{J}\mathfrak{M}_1$.
Not every Krein space contains hypermaximal neutral subspaces. The sufficient and necessary condition is the coincidence
of the dimension of a maximal positive subspace with the dimension of a maximal negative one \cite{AK_Azizov}.
\begin{theorem}\label{agga30}
Let dual subspaces ${\mathfrak L}_\pm$ do not satisfy conditions of Proposition \ref{agga8}.
Then ${\mathfrak L}_\pm$ are quasi maximal subspaces
if and only if the Krein space $(\mathfrak{M}, [\cdot,\cdot])$ contains a hypermaximal neutral subspace.
\end{theorem}
\begin{proof} Assume that ${\mathfrak L}_\pm$ are quasi maximal. By Theorem \ref{agga28},
this means the existence of an extremal extension $G\supset{G}_0$ with condition
\eqref{agga14}. Let $T$ be the Cayley transformation of $G$, see \eqref{agga16}.
By virtue of Theorem \ref{agga22b}, the operator $T$ is described
by \eqref{agga22}, where $X$ is a solution of \eqref{agga23}.
Due to \cite[Section 7]{AK_Arlin}, extremal extensions are specified in \eqref{agga22}
by the assumption that $X$ is an orthogonal projection in $\mathfrak{M}$.
Denote $\mathfrak{M}_1=X\mathfrak{M}$ and $\mathfrak{M}_2=(I-X)\mathfrak{M}$.
Since $X$ is a solution of \eqref{agga23}, we conclude that $J\mathfrak{M}_1=\mathfrak{M}_2$.
Therefore, $\mathfrak{M}=\mathfrak{M}_1\oplus{J}\mathfrak{M}_1$ and
$\mathfrak{M}_1$ is a hypermaximal neutral subspace of the Krein space
$(\mathfrak{M}, [\cdot,\cdot])$.
Conversely, let a hypermaximal neutral subspace $\mathfrak{M}_1$ be given.
Then $\mathfrak{M}=\mathfrak{M}_1\oplus{J}\mathfrak{M}_1$ and the orthogonal projection $X$ onto
$\mathfrak{M}_1$ turns out to be a solution of \eqref{agga23}. The formula \eqref{agga22}
with this $X$ determines a self-adjoint strong contraction $T$ anticommuting with $J$, and
its Cayley transformation $G$ defines the Hilbert space $(\mathfrak{H}_G, (\cdot,\cdot)_G)$ in which
the direct sum \eqref{e8} is a dense set.
\end{proof}
\begin{corollary}\label{agga31}
Let the Krein space $(\mathfrak{M}, [\cdot,\cdot])$ contain a hypermaximal neutral subspace. Then
there exist infinitely many extensions of ${\mathfrak L}_\pm$ to dual maximal subspaces ${\mathfrak L}_\pm^{max}$
such that $\cD={\mathfrak L}_+[\dot{+}]{\mathfrak L}_-$ is a dense set in the Hilbert space $(\sH_G, (\cdot,\cdot)_G)$ associated with
$\cD_{max}={\mathfrak L}_+^{max}[\dot{+}]{\mathfrak L}_-^{max}$ and, at the same time,
there exist infinitely many extensions ${\mathfrak L}_\pm^{max}\supset{\mathfrak L}_\pm$
such that $\cD$ is not dense in the corresponding Hilbert space $(\sH_G, (\cdot,\cdot)_G)$.
\end{corollary}
\begin{proof} It follows from the proof of Theorem \ref{agga30} that a
hypermaximal neutral subspace $\mathfrak{M}_1$ determines the dual maximal subspaces
${\mathfrak L}_\pm^{max}\supseteq{\mathfrak L}_\pm$ such that $\cD$ is a dense set in the Hilbert space
$(\sH_G, (\cdot,\cdot)_G)$ associated with $\cD_{max}$. The point is that the
orthogonal projection $X$ on $\mathfrak{M}_1$ in $\mathfrak{M}$ defines the subspaces ${\mathfrak L}_\pm^{max}$.
Therefore, one can construct infinitely many such extensions ${\mathfrak L}_\pm^{max}$ because the Krein space contains infinitely many
hypermaximal neutral subspaces, provided that at least one exists.
If $X$ is the orthogonal projection on $\mathfrak{M}_1$, then $I-X$ is the orthogonal projection on the hypermaximal neutral subspace
$J\mathfrak{M}_1$. These two operators define different pairs of dual maximal subspaces ${\mathfrak L}_\pm^{max}\supset{\mathfrak L}_\pm$.
At the same time, the operators $X_\alpha=(1-\alpha)X+\alpha(I-X), \ \alpha\in(0,1)$,
solve \eqref{agga23} and they determine dual maximal subspaces ${\mathfrak L}_\pm^{max}(\alpha)\supseteq{\mathfrak L}_\pm$
by the formulas \eqref{agga21} and \eqref{agga22}. The linear manifold $\cD$ cannot be dense in the Hilbert space
$(\sH_G, (\cdot,\cdot)_G)$ associated with ${\mathfrak L}_+^{max}(\alpha)[\dot{+}]{\mathfrak L}_-^{max}(\alpha)$
because $X_\alpha$ loses the property of being an orthogonal projection
($X_\alpha^2\not=X_\alpha$) and $G$ cannot be an extremal extension.
\end{proof}
By virtue of the results above, one can classify dual subspaces ${\mathfrak L}_\pm$
according to the behavior of the soft extension $T_M$ and the hard extension $T_\mu$ of $T_0$.
Precisely:
\begin{itemize}
\item[(A)] \ $T_\mu=T_M$ $\iff$ the subspaces ${\mathfrak L}_\pm$ are quasi maximal,
there is a unique extension of ${\mathfrak L}_\pm$ to dual maximal subspaces ${\mathfrak L}_\pm^{max}$
$$
\cD={\mathfrak L}_+[\dot{+}]{\mathfrak L}_- \ \rightarrow \ \cD_{max}={\mathfrak L}_+^{max}[\dot{+}]{\mathfrak L}_-^{max}
$$
and the linear manifold $\cD$ is dense in the Hilbert space $(\sH_G, (\cdot,\cdot)_G)$ associated with $\cD_{max}$;
\item[(B)] $T_\mu\not=T_M$ and
the Krein space $(\mathfrak{M}, [\cdot,\cdot])$ contains hypermaximal neutral subspaces $\iff$ the subspaces ${\mathfrak L}_\pm$ are quasi maximal, there are infinitely many extensions $\cD \to \cD_{max}$ such that $\cD$ is dense in the Hilbert space $(\sH_G, (\cdot,\cdot)_G)$
associated with $\cD_{max}$ and simultaneously, there are infinitely many extensions $\cD \to \cD_{max}$ for which $\cD$ cannot be a
dense set in $(\sH_G, (\cdot,\cdot)_G)$;
\item[(C)] $T_\mu\not=T_M$ and the Krein space $(\mathfrak{M}, [\cdot,\cdot])$ does not contain
hypermaximal neutral subspaces $\iff$ the subspaces ${\mathfrak L}_\pm$ are not quasi maximal,
the total number of possible extensions ${\mathfrak L}_\pm \to {\mathfrak L}_\pm^{max}$ is not specified (a unique extension is possible as well as infinitely many ones), and the linear manifold $\cD$ is not dense in the Hilbert space $(\sH_G, (\cdot,\cdot)_G)$ associated with $\cD_{max}$.
\end{itemize}
\subsection{Examples.}
Let $\sL_{\pm}$ be dual definite subspaces
and let ${\mathfrak L}_\pm^{max}\supset\sL_{\pm}$ be dual maximal definite subspaces.
The subspaces ${\mathfrak L}_\pm^{max}$ are described by \eqref{agga21}, where $T$ is
a self-adjoint strong contraction anticommuting with $J$. Similarly,
the subspaces ${\mathfrak L}_\pm$ are determined by \eqref{agga20b}, where $T_0$ is the
restriction of $T$ onto $D(T_0)=M_+\oplus{M_-}$, where $M_\pm$ are subspaces of $\sH_\pm$.
Denote
\begin{equation}\label{neww723}
\Xi=\sqrt{I-T^2}.
\end{equation}
The operator $\Xi$ is a positive self-adjoint contraction in $\sH$ which leaves
the subspaces $\sH_{\pm}$ invariant.
\begin{lemma}\label{new23}
The direct sum of $\sL_{\pm}$ is a dense set in the Hilbert space
$(\sH_G, (\cdot,\cdot)_G)$ associated with the sum ${\mathfrak L}_+^{max}[\dot{+}]{\mathfrak L}_-^{max}$ of dual maximal definite subspaces
${\mathfrak L}_\pm^{max}\supset\sL_{\pm}$ if and only if
\begin{equation}\label{new38}
R(\Xi)\cap(\sH\ominus(M_-\oplus{M_+}))=\{0\}.
\end{equation}
\end{lemma}
\begin{proof}
It follows from \eqref{fff7}:
$$
\|f\|_G^2=(Gf, f)=\|\sqrt{G}f\|^2=\|\Xi{x}\|^2, \qquad f=(I+T)x\in{D}(G).
$$
Therefore, $\{{f}_n\}$ is a Cauchy sequence in $(\sH_G, (\cdot,\cdot)_G)$
if and only if $\{\Xi{x_n}\}$ is a Cauchy sequence in $(\sH, (\cdot,\cdot))$. This means that
a one-to-one correspondence between elements of $\sH_G$ and elements of the closure $\overline{R(\Xi)}\subseteq\sH$ can be established as follows:
$$
{f}_n\to{F_\gamma}\in\sH_G \ (\mbox{wrt.} \ \|\cdot\|_{G}) \iff \Xi{x_n}\to\gamma\in\sH \ (\mbox{wrt.} \ \|\cdot\|).
$$
Let us assume that $F\in\sH_G$ is orthogonal to ${\mathfrak L}_+[\dot{+}]{\mathfrak L}_-$.
Then $F=F_\gamma$, where $\gamma\in\sH$ and for all $f\in{\mathfrak L}_+[\dot{+}]{\mathfrak L}_-=(I+T)D(T_0)$
$$
0=(F_\gamma, f)_G=\lim_{n\to\infty}({f}_n, f)_{G}=\lim_{n\to\infty}(\Xi{x_n}, \Xi{x}_0)=(\Xi\gamma, x_0),
$$
where $x_0$ runs $D(T_0)$. By virtue of \eqref{neww4} and \eqref{new38}, $\gamma=0$. Therefore,
$F_\gamma=0$.
\end{proof}
\subsubsection{How to construct dual quasi maximal
subspaces?}
We consider below an example (inspired by \cite{AK_AK}) which illustrates a general method for constructing dual quasi maximal
subspaces.
Let $\{\gamma_n^+\}$ and $\{\gamma_n^-\}$ be orthonormal bases of subspaces $\sH_{\pm}$
in the fundamental decomposition \eqref{AK10}.
Every $\phi\in{\sH}$ has the representation
\begin{equation}\label{fff5}
\phi=\gamma^++\gamma^-=\sum_{n=1}^{\infty}(c_n^+\gamma_n^++c_n^-\gamma_n^-), \quad \gamma^{\pm}=\sum_{n=1}^{\infty}c_n^{\pm}\gamma_n^{\pm}\in\sH_{\pm},
\end{equation}
where $\{c_n^\pm\}\in{l_2(\mathbb{N})}$.
The operator
\begin{equation}\label{fff3}
T\phi=\sum_{n=1}^{\infty}i\alpha_n(c_n^+\gamma_n^--c_n^-\gamma_n^+),\qquad
\alpha_n=1-\frac{1}{n}
\end{equation}
is a self-adjoint contraction anticommuting with the fundamental symmetry $J$
of the Krein space $(\sH, [\cdot,\cdot])$.
The subspaces $\sL_{\pm}^{max}$ defined by \eqref{agga21} with the operator $T$ above
are dual maximal definite. But they cannot be uniformly definite since $\|T\|=1$; see Lemma \ref{agga1}.
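The stated properties of $T$ can be checked blockwise. With respect to the basis $\{\gamma_n^+, \gamma_n^-\}$ of the two-dimensional subspace $\mathrm{span}\{\gamma_n^+, \gamma_n^-\}$, the operators $T$ and $J$ act as the matrices
$$
T_n=\begin{pmatrix} 0 & -i\alpha_n \\ i\alpha_n & 0 \end{pmatrix}, \qquad J_n=\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},
$$
so that $T_n^{*}=T_n$, $J_nT_n=-T_nJ_n$, and $\|T_n\|=\alpha_n$. Hence $\|Tf\|<\|f\|$ for every $f\not=0$ (i.e., $T$ is a strong contraction), while $\|T\|=\sup_n\alpha_n=1$.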
Let us fix elements $\chi^{\pm}{\in}\sH$,
\begin{equation}\label{agga28c}
\chi^+=\sum_{n=1}^{\infty}\frac{1}{n^\delta}\gamma_n^+, \quad \chi^-=\sum_{n=1}^{\infty}\frac{1}{n^\delta}\gamma_n^-, \qquad
\delta>\frac{1}{2}
\end{equation}
and define the following subspaces of $\sH_\pm$:
$$
M_{+}=\{\gamma^+\in{\sH_+}: (\gamma^+,\chi^+)=0\}, \quad M_{-}=\{
\gamma^-\in{\sH_{-}} : (\gamma^-,\chi^-)=0\}.
$$
\begin{proposition}{\label{newprop}}
The dual definite subspaces
\begin{equation}\label{agga29b}
\sL_+=(I+T)M_+, \qquad \sL_-=(I+T)M_-
\end{equation}
are quasi maximal for $\frac{1}{2}<\delta\leq\frac{3}{2}$.
In particular, $\frac{1}{2}<\delta\leq{1}$ corresponds to the case (A); the case (B) holds
when $1<\delta\leq\frac{3}{2}$.
\end{proposition}
\begin{proof}
First of all, we note that $\cD={\mathfrak L}_+[\dot{+}]{\mathfrak L}_-$ is a dense set in $\sH$ for $\frac{1}{2}<\delta\leq\frac{3}{2}$ \cite[p. 317]{AK_AK}.
The subspaces $\sL_\pm$ in \eqref{agga29b} are restrictions of the dual maximal subspaces ${\mathfrak L}_\pm^{max}=(I+T)\sH_\pm$.
Let us show that, for $\frac{1}{2}<\delta\leq{1}$, the set $\cD$
is dense in the Hilbert space
$(\sH_G, (\cdot,\cdot)_G)$ associated with ${\mathfrak L}_\pm^{max}$.
Due to Lemma \ref{new23}, we should check
\eqref{new38}. In view of \eqref{fff5}, \eqref{fff3}:
$$
R(\Xi)=R(\sqrt{I-T^2})=\left\{\sum_{n=1}^{\infty}\sqrt{(1-\alpha_n^2)}(c_n^+\gamma_n^++c_n^-\gamma_n^-) \ : \ \forall\{c_n^\pm\}\in{l_2(\mathbb{N})}\right\}.
$$
On the other hand, $\sH\ominus(M_-\oplus{M_+})=\mbox{span}\{\chi^+, \chi^{-}\}$. Hence, the set
$R(\Xi)\cap(\sH\ominus(M_-\oplus{M_+}))$ contains nonzero elements if and only if
$$
\left\{\frac{1}{n^\delta\sqrt{(1-\alpha_n^2)}}=\frac{1}{n^{\delta-1/2}\sqrt{(2-1/n)}}\right\}\in{l_2(\mathbb{N})}
$$
that is impossible for $\frac{1}{2}<\delta\leq{1}$. Therefore, relation \eqref{new38} holds and
$\sL_\pm$ are quasi maximal subspaces. By \cite[Proposition 6.3.9]{AK_AK},
the dual subspaces $\sL_\pm$ have a unique extension to dual maximal ones ${\mathfrak L}_\pm^{max}$
when $\frac{1}{2}<\delta\leq{1}$ that corresponds to the case $(A)$.
If ${1}<\delta\leq\frac{3}{2}$, then the set $\cD$ cannot be dense in the Hilbert space $(\sH_G, (\cdot,\cdot)_G)$
considered above. However, for such values of $\delta$, the dual subspaces $\sL_\pm$ can be extended to different pairs of
dual maximal subspaces \cite[Proposition 6.3.9]{AK_AK}. This means that the subspace
$\mathfrak{M}=\overline{R(T_M-T_\mu)}$ is not trivial. The operators $T_\mu, T_M$ are extensions of $T_0$.
Hence, they coincide on $D(T_0)=M_-\oplus{M_+}$.
Taking into account that $\sH=\mathfrak{M}\oplus\ker(T_M-T_\mu)$, we conclude that
$\mathfrak{M}\subset\mbox{span}\{\chi^+, \chi^{-}\}$. Moreover, $\mathfrak{M}=\mbox{span}\{\chi^+, \chi^{-}\}$.
Indeed, if $\dim\mathfrak{M}=1$, then the operator $J$ in \eqref{agga23} coincides with $+I$ (or $-I$). In this case,
the equation \eqref{agga23} has a unique solution $X=\frac{1}{2}I$ and, by Theorem \ref{agga22b}, there exists a unique extension
${\mathfrak L}_\pm^{max}\supset\sL_\pm$ that is impossible. Therefore,
$\dim\mathfrak{M}=2$ and $\mathfrak{M}=\mbox{span}\{\chi^+, \chi^{-}\}$.
The Krein space $(\mbox{span}\{\chi^+, \chi^{-}\}, [\cdot, \cdot])$ contains infinitely many hypermaximal neutral subspaces.
By Theorem \ref{agga30}, there are infinitely many extensions
${\mathfrak L}_\pm^{max}\supset\sL_\pm$, ${\mathfrak L}_\pm^{max}\not=(I+T)\sH$ such that
$\cD$ is a dense set in the corresponding Hilbert spaces $(\sH_G, (\cdot,\cdot)_G)$. Therefore, $\sL_\pm$
are quasi maximal (the case $(B)$).
\end{proof}
\subsubsection{The uniqueness of the dual maximal extension ${\mathfrak L}_\pm^{max}\supset{\mathfrak L}_\pm$ does not imply
that ${\mathfrak L}_\pm$ are quasi maximal.}\label{sec4.2.2}
Let us assume that $\chi^+=0$ and $\chi^{-}$ is defined as in \eqref{agga28c}.
Then $M_+=\sH_+$ and the subspace $\sL_+$ in \eqref{agga29b} coincides with
${\mathfrak L}_+^{max}$. This means that
the dual maximal definite subspaces ${\mathfrak L}_\pm^{max}\supset{\mathfrak L}_\pm$ are determined uniquely.
Precisely, ${\mathfrak L}_+^{max}=\sL_{+}$ and
\begin{equation}\label{agga12}
{\mathfrak L}_-^{max}=\sL_{+}^{[\perp]}\supset{\mathfrak L}_-=(I+T)M_-,
\end{equation}
where $\sL_{+}^{[\perp]}$ denotes the maximal negative subspace orthogonal to $\sL_{+}$ with respect to the indefinite inner product
$[\cdot, \cdot]$.
It follows from Proposition \ref{newprop} that the dual definite subspaces
${\mathfrak L}_+^{max}$ and $\sL_-$ are quasi maximal (the case (A) of the classification above)
for $\frac{1}{2}<\delta\leq{1}$.
Reasoning by analogy with the proof of Proposition \ref{newprop} we also conclude that the direct sum
${\mathfrak L}_+^{max}[\dot{+}]{\mathfrak L}_-$ cannot be dense in the Hilbert space $(\sH_G, (\cdot,\cdot)_G)$
constructed by ${\mathfrak L}_\pm^{max}$ for $1<\delta\leq\frac{3}{2}$.
Therefore, the subspaces ${\mathfrak L}_+^{max}$ and $\sL_-$ cannot be quasi maximal (the case $(C)$ of the classification above).
\section{Operator of $\cC$-symmetry associated with dual maximal definite subspaces}\label{5}
Let ${\mathfrak L}_\pm$ be dual definite subspaces such that their direct sum is a dense set in
$\sH$. An operator $\cC_0$ \emph{associated with ${\mathfrak L}_\pm$}
is defined as follows: its domain $D(\cC_0)$ coincides with ${\mathfrak L}_+[\dot{+}]{\mathfrak L}_-$
and
\begin{equation}\label{new5}
\cC_0{f}=f_{+}-f_{-}, \qquad f=f_{+}+f_{-}\in{D(\cC_0)}, \quad f_\pm\in{\mathfrak L}_\pm.
\end{equation}
If $\cC_0$ is given, then the corresponding dual subspaces ${\mathfrak L}_\pm$
are recovered by the formula
$\sL_\pm=\frac{1}{2}(I\pm\cC_0)D(\cC_0)$.
\begin{proposition}\label{neww37}
The following assertions are equivalent:
\begin{itemize}
\item[(i)] $\cC_0$ is determined by dual subspaces ${\mathfrak L}_\pm$ with the use
of \eqref{new5};
\item[(ii)] $\cC_0$ satisfies the relation $\cC_0^2f=f$ for all $f\in{D(\cC_0)}$ and
$J\cC_0$ is a closed densely defined positive symmetric operator in $\sH$.
\end{itemize}
\end{proposition}
\begin{proof}
By virtue of \eqref{agga15} and \eqref{new5}, $(G_0f, g)=[\cC_0{f},g]$ for all ${f, g}\in{D}(G_0)$.
Therefore, $G_0=J\cC_0$, where $G_0$ is a closed densely defined positive symmetric operator acting in $\sH$ and
defined by \eqref{agga7}. The implication $(i)\to(ii)$ is proved.
$(ii)\to(i)$. The operator $G_0=J\cC_0$ satisfies \eqref{AK71} (since $\cC_0^2=I$ on ${D}(G_0)$).
This operator has the Cayley transform $T_0$ (see \eqref{agga7}), which is a strong contraction in $\sH$
satisfying the condition \eqref{fff1}. Substituting $T_0$ into \eqref{agga20b} we obtain the required dual
definite subspaces ${\mathfrak L}_\pm$ which generate $\cC_0$.
\end{proof}
We will say that $\cC_0$ is \emph{an operator of $\cC$-symmetry} if $\cC_0$ is
associated with dual maximal definite subspaces ${\mathfrak L}_\pm^{max}$.
In this case, the notation $\cC$ will be used instead of $\cC_0$. An operator of
$\cC$-symmetry admits the representation $\cC=Je^Q$, where $Q$ is a self-adjoint operator in $\sH$
such that $JQ=-QJ$ \cite{KS}.
Let $\cC_0$ be an operator associated with dual definite subspaces ${\mathfrak L}_\pm$.
Its extension to the operator of $\cC$-symmetry $\cC$ is equivalent to the construction of
dual maximal definite subspaces ${\mathfrak L}_\pm^{max}\supset\sL_{\pm}$.
By Theorem \ref{agga2}, each operator $\cC_0$ can be extended to an operator of $\cC$-symmetry
which, in general, is not determined uniquely. Its choice $\cC\supset\cC_0$ determines
the new Hilbert space $(\sH_G, (\cdot,\cdot)_G)$, where
\begin{equation}\label{neww11}
G=J\cC=e^Q.
\end{equation}
If ${\mathfrak L}_\pm$ are quasi maximal, then there exists
a dual maximal extension
${\mathfrak L}_\pm^{max}\supset{\mathfrak L}_\pm$ such that $\cC_0$ is extended to $\cC$
by continuity in the new Hilbert space $(\sH_G, (\cdot,\cdot)_G)$
generated by ${\mathfrak L}_\pm^{max}$.
If the subspaces ${\mathfrak L}_\pm$ are not quasi maximal, then
the sum ${\mathfrak L}_+[\dot{+}]{\mathfrak L}_-$ loses the property of being dense
in each Hilbert space $(\sH_G, (\cdot,\cdot)_G)$ constructed by the dual maximal subspaces
${\mathfrak L}_\pm^{max}\supset{\mathfrak L}_\pm$. In this case, the extension by continuity of $\cC_0$ to an operator of $\cC$-symmetry
is impossible. In other words, all possible extensions of $\cC_0$ to an operator of $\cC$-symmetry generate Hilbert spaces
$(\sH_G, (\cdot,\cdot)_G)$ each of which contains a nontrivial part $\sH_G\ominus_G({\mathfrak L}_+[\dot{+}]{\mathfrak L}_-)$
that has no direct relationship to the original dual subspaces ${\mathfrak L}_\pm$.
Summing up, we can rephrase the classification in Section \ref{4.1} as follows:
\begin{itemize}
\item[(A{'})] the extension of $\cC_0$ to an operator of $\cC$-symmetry is unique and it coincides with the extension of
$\cC_0$ by continuity in the Hilbert space $(\sH_G, (\cdot,\cdot)_G)$, where $G=G_\mu$ is the Friedrichs extension of $G_0$;
\item[(B')] the extension of $\cC_0$ to an operator of $\cC$-symmetry is not unique. Infinitely many operators $\cC$
can be realized via the extension of $\cC_0$ by continuity in the corresponding Hilbert spaces $(\sH_G, (\cdot,\cdot)_G)$.
At the same time, there exist infinitely many extensions $\cC\supset\cC_0$
which generate Hilbert spaces $(\sH_G, (\cdot,\cdot)_G)$ containing the closure of ${\mathfrak L}_+[\dot{+}]{\mathfrak L}_-$
as a proper subspace;
\item[(C')] there is no extension of $\cC_0$ to an operator of $\cC$-symmetry by continuity in the corresponding Hilbert space $(\sH_G, (\cdot,\cdot)_G)$.
The number of possible extensions $\cC\supset\cC_0$ is not specified (a unique extension is possible, as well as infinitely many).
\end{itemize}
\section{$J$-orthonormal quasi bases}\label{6}
\subsection{Quasi bases.}
A sequence $\{f_n\}$ is called orthonormal in the Krein space $(\sH, [\cdot,\cdot])$ (briefly, $J$-orthonormal)
if $\{f_n\}$ is orthonormalized with respect to the indefinite inner product $[\cdot,\cdot]$, i.e.,
if $|[f_n, f_m]|=\delta_{nm}$. In what follows, the sequence $\{f_n\}$ is assumed to be complete
(i.e., the span of $\{f_n\}$ is dense in $\sH$).
Separating the sequence $\{f_n\}$ by the signs of $[f_n,f_n]$:
\begin{equation}\label{bebe95}
f_{n}=\left\{\begin{array}{l}
f_{n}^+ \quad \mbox{if} \quad [f_{n},f_{n}]=1, \\
f_{n}^- \quad \mbox{if} \quad [f_{n},f_{n}]=-1
\end{array}\right.
\end{equation}
we obtain a sequence $\{f_n^+\}$ of positive and a sequence $\{f_n^-\}$ of negative orthonormal elements.
Denote by ${\mathfrak L}_+$ and ${\mathfrak L}_-$ the closure in the Hilbert space $(\sH, (\cdot, \cdot))$ of
the linear spans generated by the sets $\{f_n^+\}$ and $\{f_n^-\}$, respectively.
By the construction, ${\mathfrak L}_\pm$ are dual definite subspaces and their direct sum \eqref{e8} is dense in ${\mathfrak H}$.
\begin{definition} A complete $J$-orthonormal sequence $\{f_n\}$ is called a quasi basis
of the Hilbert space $(\sH, (\cdot, \cdot))$
if the corresponding dual subspaces ${\mathfrak L}_\pm$ are quasi maximal.
\end{definition}
If a $J$-orthonormal sequence $\{f_n\}$ is a basis in $\sH$, then
the dual subspaces ${\mathfrak L}_\pm$
are maximal definite, i.e., ${\mathfrak L}_\pm={\mathfrak L}_\pm^{max}$ \cite[Statement 10.12 in Chapter 1]{AK_Azizov}.
Therefore, $\{f_n\}$ is also a quasi basis in $\sH$.
\begin{proposition}
If a $J$-orthonormal sequence $\{f_n\}$ is a quasi basis in $\sH$, then
its biorthogonal sequence $\{\gamma_n=\mbox{sign}([f_n,f_n])Jf_n\}$ is also a quasi basis.
\end{proposition}
\begin{proof} The biorthogonal sequence $\{\gamma_n\}$ determines dual definite subspaces $J{\mathfrak L}_\pm$.
The corresponding positive symmetric operator associated with $J{\mathfrak L}_+[\dot{+}]J{\mathfrak L}_-$ (see \eqref{agga7})
coincides with $G_0^{-1}$. By Theorem \ref{agga28} there exists an extremal extension $G\supset{G_0}$ which satisfies \eqref{agga14}.
Then, $G^{-1}$ is an extremal extension of $G_0^{-1}$ which satisfies \eqref{agga14}. Therefore, $J{\mathfrak L}_\pm$
are quasi maximal.
\end{proof}
\begin{theorem}\label{agga38}
Let $\{f_n\}$ be a complete $J$-orthonormal sequence in $\sH$ and
let $\cC_0$ be associated with the subspaces ${\mathfrak L}_\pm$ generated by $\{f_n\}$.
The following statements are equivalent:
\begin{itemize}
\item[(i)] $\{f_n\}$ is a quasi basis of $\sH$;
\item[(ii)] there exists an operator of $\cC$-symmetry $\cC\supset\cC_0$
such that $\{f_n\}$ turns out to be an orthonormal basis in the
Hilbert space $(\sH_G, (\cdot, \cdot)_G)$ generated by $\cC$;
\item[(iii)] there exists a self-adjoint operator $Q$ in the Hilbert space $\sH$, which anticommutes with
$J$ and such that the sequence $\{g_n=e^{Q/2}f_n\}$ is an orthonormal basis of $\sH$ and each $g_n$ belongs to one of the subspaces
$\sH_\pm$ of the fundamental decomposition \eqref{AK10}.
\end{itemize}
\end{theorem}
\begin{proof} $(i)\to(ii)$. If $\{f_n\}$ is a quasi basis, then there exists an extremal extension $G\supset{G_0}$ which satisfies \eqref{agga14}
and the linear span of $\{f_n\}$ is dense in $(\sH_G, (\cdot, \cdot)_G)$.
It follows from \eqref{agga15} and \eqref{neww11} that
\begin{equation}\label{neww49}
(f_n, f_m)_G=(Gf_n, f_m)=[\cC{f}_n, f_m]=\delta_{nm}.
\end{equation}
Therefore, $\{f_n\}$ is an orthonormal basis in the Hilbert space $(\sH_G, (\cdot, \cdot)_G)$.
The inverse implication $(ii)\to(i)$ is obvious.
$(ii)\to(iii)$. Since $\cC=Je^Q$ and $G=e^Q$, where $Q$ satisfies the condition of item $(iii)$,
the relation \eqref{neww49} takes the form $(e^{Q/2}f_n, e^{Q/2}f_m)=\delta_{nm}$.
Hence, $\{g_n=e^{Q/2}f_n\}$ is an orthonormal sequence in $\sH$. The completeness of
$\{g_n\}$ in $\sH$ will be established with the use of Lemma \ref{new23}. Before doing this we
note that the dual maximal definite subspaces corresponding to
$\cC=Je^Q$ are given by \eqref{agga21}, where $T=-\tanh\frac{Q}{2}$ \cite{KS}.
Therefore, the bounded operator $\Xi$ in \eqref{neww723} coincides with $\cosh^{-1}(Q/2)$, where
$\cosh(Q/2)=\frac{1}{2}(e^{Q/2}+e^{-Q/2})$.
Each $g_n$ belongs to the domain of definition of $\cosh(Q/2)$ and
\begin{equation}\label{agga45}
(I-\tanh(Q/2))\cosh(Q/2)g_n=(\cosh(Q/2) - \sinh(Q/2))g_n=e^{-Q/2}g_n=f_n.
\end{equation}
Comparing the obtained relation with \eqref{agga20b}, \eqref{neww4} and taking into account
the definition of ${\mathfrak L}_\pm$, we conclude that
$M_-\oplus{M_+}$ coincides with the closure of $\mbox{span}\{\cosh(Q/2)g_n\}$. Therefore,
$$
\sH\ominus(M_-\oplus{M_+})=\sH\ominus\mbox{span}\{\cosh(Q/2)g_n\}.
$$
Let $u\in\sH$ be orthogonal to $\{g_n\}$. Then
$$
0=(u, g_n)=(\cosh^{-1}(Q/2)u, \cosh(Q/2)g_n)
$$ and hence, $\cosh^{-1}(Q/2)u\in{R}(\Xi)\cap(\sH\ominus(M_-\oplus{M_+}))$.
By virtue of Lemma \ref{new23}, $\cosh^{-1}(Q/2)u=0$. This means that $u=0$ and
$\{g_n\}$ is a complete orthonormal sequence in $\sH$, i.e., $\{g_n\}$ is a basis in $\sH$.
It follows from \eqref{agga45} that $\cosh(Q/2)g_n$ belongs to $\sH_+$ or $\sH_-$
(depending on whether $f_n\in{\mathfrak L}_+$ or $f_n\in{\mathfrak L}_-$). The same property
holds true for $g_n$ since the operator $\cosh(Q/2)=\frac{1}{2}(e^{Q/2}+e^{-Q/2})$ commutes with $J$.
$(iii)\to(ii)$. Since $g_n\in\sH_\pm$, we get $Jg_n=\pm{g_n}=Je^{Q/2}f_n=e^{-Q/2}Jf_n$.
Therefore $g_n\in{D(e^{Q/2})}=R(e^{-Q/2})$.
This means that the sequence $\{\cosh(Q/2)g_n\}$ is well defined and $f_n\in{D}(e^Q)$.
For given $Q$ we define the operator of $\cC$-symmetry $\cC=Je^Q$ and set $G=e^Q$.
By analogy with \eqref{neww49},
$$
\delta_{nm}=(g_n, g_m)=(e^{Q/2}f_n, e^{Q/2}f_m)=(Gf_n, f_m)=(f_n, f_m)_G.
$$
Therefore, $\{f_n\}$ is an orthonormal sequence in $(\sH_G, (\cdot, \cdot)_G)$.
Furthermore, the relations above mean that $Gf_n=\gamma_n=\mbox{sign}([f_n,f_n])Jf_n$ and
$\cC{f}_n=\cC_0f_n=\mbox{sign}([f_n,f_n])f_n$. Hence, $\cC$ is an extension of $\cC_0$ and
the dual maximal definite subspaces ${\mathfrak L}_\pm^{max}$ determined by $\cC$
are the extensions of the dual definite subspaces ${\mathfrak L}_\pm$ generated as the
closures of $\{f_n^{\pm}\}$. This fact and \eqref{agga45} lead to the conclusion that
${\mathfrak L}_\pm$ are determined by \eqref{agga20b}, where
$D(T_0)=M_-\oplus{M_+}$ coincides with the closure of $\mbox{span}\{\cosh(Q/2)g_n\}$.
Assume that $\{f_n\}$ is not complete in $(\sH_G, (\cdot, \cdot)_G)$. Then the direct sum
of ${\mathfrak L}_\pm$ cannot be dense in $\sH_G$ and, by Lemma \ref{new23} (since $\Xi=\cosh^{-1}(Q/2)$), there exists
a nonzero $p=\cosh^{-1}(Q/2)u$ such that, for all $g_n$,
$$
0=(p, \cosh(Q/2)g_n)=(\cosh^{-1}(Q/2)u, \cosh(Q/2)g_n)=(u, g_n),
$$
which is impossible (since $\{g_n\}$ is a basis of $\sH$). The obtained contradiction means
that $\{f_n\}$ is an orthonormal basis of $\sH_G$.
\end{proof}
\begin{corollary}\label{agga35}
Let $\{f_n\}$ be a quasi basis of $\sH$.
Then there exists an operator of $\cC$-symmetry $\cC=Je^{Q}\supset\cC_0$
such that for elements of the energetic linear manifold $g\in\mathfrak{D}[G]\subset\sH$:
\begin{equation}\label{agga81}
g=\sum_{n=1}^\infty[g, {\cC}f_n]f_n, \qquad e^{Q/2}g=\sum_{n=1}^\infty[g, {\cC}f_n]e^{Q/2}f_n
\end{equation}
where the series converge in the Hilbert spaces $(\sH_G, (\cdot, \cdot)_G)$ and $(\sH, (\cdot, \cdot))$,
respectively.
\end{corollary}
\begin{proof}
The energetic linear manifold $\mathfrak{D}[G]$
coincides with the common part of $\sH$ and $\sH_G$ (see Section \ref{ref1}).
Hence, each $g\in\mathfrak{D}[G]$
can be presented as $g=\sum_{n=1}^\infty{c_n}f_n$, where the series converges in
$\sH_{G}$ and $c_n=(g, f_n)_G=(e^{Q/2}g, e^{Q/2}f_n)=(g, e^{Q}f_n)=[g, {\cC}f_n].$
Similarly, each $e^{Q/2}g$ admits the decomposition $e^{Q/2}g=\sum_{n=1}^\infty{c_n'}e^{Q/2}f_n$,
where the series converges in $\sH$ and $c_n'=(e^{Q/2}g, e^{Q/2}f_n)=[g, {\cC}f_n].$
\end{proof}
\begin{corollary}
Let $\{f_n\}$ be a quasi basis of $\sH$.
Then an operator of $\cC$-symmetry $\cC$ appearing in items $(ii), (iii)$ of Theorem \ref{agga38} acts
as follows:
\begin{equation}\label{new8b}
\cC{f}=\sum_{n=1}^\infty[f, f_n]{f_n}, \qquad \cC{f}=\sum_{n=1}^\infty[e^{Q/2}f, f_n]{e^{Q/2}f_n}, \quad \forall{f}\in{D}(\cC),
\end{equation}
where the series are convergent in $\sH_G$ and $\sH$, respectively.
\end{corollary}
\begin{proof}
The relations in \eqref{new8b} follow from the corresponding formulas in \eqref{agga81} where
$g=\cC{f}$ and $g=e^{-Q/2}\cC{f}$, respectively.
\end{proof}
An operator $H$ in a Krein space $(\sH, [\cdot, \cdot])$ is called \emph{$J$-symmetric} if
$[Hf, g]=[f, Hg]$ for all $f,g\in{D}(H)$.
\begin{corollary}\label{agga72}
If eigenfunctions $\{f_n\}$ of a $J$-symmetric operator $H$ form a quasi basis in $\sH$, then
there exists an operator of $\cC$-symmetry $\cC$ such that the operator
$H$ restricted to $\mbox{span}\{f_n\}$ turns out to be essentially self-adjoint in the Hilbert space $(\sH_G, (\cdot,\cdot)_{G})$
generated by $\cC$.
\end{corollary}
\begin{proof} Due to Theorem \ref{agga38} there exists an operator $\cC$ such that $\{f_n\}$ is a basis of $(\sH_G, (\cdot,\cdot)_{G})$.
The restriction of $\cC$ to $\mbox{span}\{f_n\}$ coincides with the operator $\cC_0$ defined by \eqref{new5} (here, of course,
${\mathfrak L}_\pm$ are the closures of $\{f_n^{\pm}\}$). It is easy to see that
$$
\cC{H}f=\cC_0{H}f=H\cC_0{f}=H\cC{f}, \qquad f\in\mbox{span}\{f_n\}.
$$
Taking \eqref{agga15}, \eqref{fff7} into account, we obtain
$$
(Hf, g)_G=(GHf, g)=[\cC{H}f, g]=[f, \cC{H}g]=(f, GHg)=(f, Hg)_G
$$
for all $f, g\in\mbox{span}\{f_n\}$.
Hence $H$ is symmetric in $(\sH_G, (\cdot,\cdot)_{G})$. Since $R(H\pm{i}I)\supset\mbox{span}\{f_n\}$, the operator
$H$ is essentially self-adjoint in $\sH_G$.
\end{proof}
\subsection{Examples.} Examples of quasi bases can be easily constructed with the use
of Theorem \ref{agga38}. Indeed, assume that $\{g_n\}$ is an orthonormal basis of $(\sH, (\cdot,\cdot))$
such that each $g_n$ belongs to one of the subspaces $\sH_\pm$ of the fundamental decomposition \eqref{AK10}.
Let $Q$ be a self-adjoint operator in $\sH$ which anticommutes with $J$. If all $g_n$ belong to the domain of definition of
$e^{-Q/2}$, then $\{f_n=e^{-Q/2}g_n\}$ is a $J$-orthonormal system in the Krein space $(\sH, [\cdot,\cdot])$.
Assuming additionally that $\{f_n\}$ is complete in $\sH$, we obtain an example of a quasi basis.
\vspace{3mm}
{\bf I.}
Let $\sH=L_2(\mathbb{R})$ and let $J=\mathcal{P}$ be the space parity operator $\mathcal{P}f(x)=f(-x)$.
The subspaces $\sH_{\pm}$ of the fundamental decomposition \eqref{AK10} coincide with the subspaces of even and odd
functions of $L_2(\mathbb{R})$.
The Hermite functions
$$
g_n(x)=\frac{1}{\sqrt{2^nn!\sqrt{\pi}}}H_n(x)e^{-x^2/2}, \quad H_n(x)=e^{x^2/2}\Bigl(x-\frac{d}{dx}\Bigr)^ne^{-x^2/2}
$$
form an orthonormal basis of $L_2(\mathbb{R})$. Each function $g_n$ is either odd or even.
Therefore, $g_n\in\sH_+$ or $g_n\in\sH_-$.
Since the Hermite functions are entire, the complex shift of $g_n$ is well defined:
$$
f_n(x)=g_n(x+ia), \qquad a\in\mathbb{R}\setminus\{0\}, \quad n=0,1,2,\ldots
$$
The sequence $\{f_n\}$ is complete in $L_2(\mathbb{R})$ \cite[Lemma 2.5]{Mit}.
Applying the Fourier transform
$Ff=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty{e^{-ix\xi}}f(x)dx$
to $f_n$ we get $Ff_n=e^{-a\xi}Fg_n$. Therefore, $f_n=F^{-1}e^{-a\xi}Fg_n$.
The last relation can be rewritten as
$$
f_n=e^{-Q/2}g_n, \qquad Q=-2ai\frac{d}{dx}.
$$
This means that $\{f_n\}$ is a quasi basis of $L_2(\mathbb{R})$.
The functions $\{f_n\}$ are simple eigenfunctions of the $\mathcal{P}$-symmetric
operator
$$
H=-\frac{d^2}{dx^2}+x^2+2iax, \qquad Hf_n=(1+2n+a^2)f_n.
$$
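The eigenvalue relation can be checked directly: substituting $x\mapsto x+ia$ into the harmonic oscillator equation $-g_n''+x^2g_n=(2n+1)g_n$ gives
\[
-f_n''(x)+(x+ia)^2f_n(x)=(2n+1)f_n(x),
\]
and since $(x+ia)^2=x^2+2iax-a^2$, this is exactly $Hf_n=(1+2n+a^2)f_n$.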
Therefore, $H$ restricted to $\mbox{span}\{f_n\}$ is essentially self-adjoint
in the new Hilbert space $(\sH_G, (\cdot,\cdot)_G)$, where $\sH_G$ is the
completion of $\mbox{span}\{f_n\}$ with respect to the norm
$\|f\|^2_G=(e^Qf,f)=(F^{-1}e^{2a\xi}Ff, f)$.
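The $J$-orthonormality of the shifted Hermite functions can also be verified numerically. The sketch below (added for illustration; the grid, the truncation $N=4$, and the shift $a=0.5$ are ad hoc choices, not part of the original argument) approximates $[f_n,f_m]=(\mathcal{P}f_n,f_m)$ by a Riemann sum and checks that $|[f_n,f_m]|=\delta_{nm}$, with $[f_n,f_n]=(-1)^n$:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def hermite_fn(n, z):
    """Hermite function g_n evaluated at (possibly complex) points z."""
    coeff = np.zeros(n + 1)
    coeff[n] = 1.0
    norm = math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    return hermval(z, coeff) * np.exp(-z**2 / 2) / norm

a, N = 0.5, 4                              # shift a != 0, number of functions checked
x = np.linspace(-12.0, 12.0, 4001)         # integration grid
dx = x[1] - x[0]

# f_n(x) = g_n(x + ia); indefinite product [u, v] = (Pu, v) = int u(-x) conj(v(x)) dx
F = np.array([hermite_fn(n, x + 1j * a) for n in range(N)])
PF = np.array([hermite_fn(n, -x + 1j * a) for n in range(N)])
gram = (PF @ F.conj().T) * dx              # gram[n, m] approximates [f_n, f_m]
print(np.round(np.abs(gram), 6))           # magnitudes form the 4x4 identity matrix
```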
\vspace{3mm}
{\bf II.} Let $\{g_n\}$ be an orthonormal basis in $L_2(\mathbb{R})$
which consists of the eigenfunctions of the anharmonic oscillator
$$
H_0=-\frac{d^2}{dx^2} + |x|^\beta, \qquad \beta>2.
$$
The eigenfunctions $g_n$ are either even or odd functions.
Consider the sequence $f_n(x)=e^{p(x)}g_n(x)$,
where $p\in{C^2}(\mathbb{R})$ is a real-valued odd function such that
$$
|p^{(k)}(x)|\leq{C}(1+x^2)^{\frac{\alpha-k}{2}}, \quad k=0,1,2, \quad \alpha<\beta/2+1.
$$
The sequence $\{f_n\}$ is complete in $L_2(\mathbb{R})$ \cite[Lemma 3.6]{Mit} and
$f_n$ are simple eigenfunctions of the $\mathcal{P}$-symmetric operator
$$
H=H_0+p''(x)- (p'(x))^2+2ip'(x)\frac{d}{dx}.
$$
The sequence $\{f_n\}$ is a quasi basis in $L_2(\mathbb{R})$
(since $f_n=e^{-Q/2}g_n$ with $Q=-2p(x)I$) and
$H$ restricted to $\mbox{span}\{f_n\}$ is essentially self-adjoint
in the new Hilbert space $(\sH_G, (\cdot,\cdot)_G)$, where $\sH_G$ is the
completion of $\mbox{span}\{f_n\}$ with respect to the norm
$\|f\|^2_G=(e^Qf,f)=(e^{-2p(x)}f, f)$.
\vspace{3mm}
{\bf III.} \emph{An example of a complete $J$-orthonormal sequence $\{f_n\}$ which cannot be a quasi basis.}
The dual definite subspaces ${\mathfrak L}_+^{max}$ and $\sL_-$ considered in Sect.
\ref{sec4.2.2} cannot be dual quasi maximal for $1<\delta\leq\frac{3}{2}$. Therefore,
a $J$-orthonormal sequence $\{f_n\}$ such that the closures of its positive/negative elements coincide
with ${\mathfrak L}_+^{max}$ and $\sL_-$, respectively, cannot be a quasi basis.
\section{Introduction}\label{sec:intro}
Recent advances in deep representation learning have resulted in powerful probabilistic generative models which have demonstrated their ability to model continuous data, \eg, time series signals~\citep{OorDieZenSimetal16, DaiDaiZhaetal17} and images~\citep{RadMetChi15,KarAilLaietal17}.
Despite the success in these domains,
it is still challenging to correctly generate discrete structured data,
such as graphs, molecules and computer programs.
Since many such structures are governed by syntactic and semantic formalisms,
generative models without explicit constraints often produce invalid ones.
Conceptually, an approach to generative modeling of structured data can be divided into two parts:
the formalization of the structure generation,
and a (usually deep) generative model producing parameters for the stochastic process in that formalization.
Often the hope is that, with the help of training samples and the capacity of deep models,
the loss function will prefer the valid patterns and automatically encourage the mass of the distribution of the generative model towards the desired region.
Arguably the simplest structured data are sequences,
whose generation with deep models has been well studied under the seq2seq~\citep{SutVinLe14} framework
that models the generation of a sequence as a series of token choices parameterized by recurrent neural networks~(RNNs).
Its widespread success has encouraged several pioneering works that convert more complex structured data into sequences
and apply sequence models to the resulting representations.
\citet{GomDuvHer16} (CVAE) is a representative work of such paradigm for the chemical molecule generation, using the SMILES line notation~\citep{Weininger88} for representing molecules.
However, because \emph{general-purpose} string generative models lack a formalization of the syntax and semantics that restrict the \emph{particular} structured data,
underfitted models will often produce invalid outputs.
Therefore, to obtain a reasonable model via such a training procedure,
we need to prepare a large number of valid examples of the structures,
which is time-consuming or even impractical in domains like drug discovery.
To tackle such a challenge, one approach is to incorporate the structure restrictions explicitly into the generative model.
For considerations of computational cost and model generality,
context-free grammars~(CFGs) have been incorporated into the decoder parametrization.
For instance, in molecule generation tasks,
\citet{KusPaiHer17} propose a grammar variational autoencoder~(GVAE)
in which the CFG of SMILES notation is incorporated into the decoder.
The model generates the parse trees directly in a top-down direction, by repeatedly expanding any nonterminal with its production rules.
Although the CFG provides a mechanism for generating \emph{syntactically valid} objects,
it is still incapable of regularizing the model to generate \emph{semantically valid} objects~\citep{KusPaiHer17}.
For example, in molecule generation, the semantics of the SMILES language requires that generated rings be closed;
in program generation, a referenced variable should be defined in advance and each variable can only be defined exactly once in each local context (illustrated in Fig~\ref{fig:cfg_not_enough}).
All these examples require cross-serial-like dependencies, which are not enforceable by a CFG,
implying that constraints beyond a CFG are needed to achieve semantically valid production in a VAE.
In compiler theory,
attribute grammars, or syntax-directed definitions, have been proposed for attaching semantics to a parse tree generated by a context-free grammar.
Thus one straightforward but impractical application of attribute grammars is, after generating a syntactically valid molecule candidate, to conduct offline semantic checking.
This process needs to be repeated until a semantically valid candidate is discovered,
which is at best computationally inefficient and at worst infeasible, due to the extremely low rate of passing the check.
As a remedy, we propose the \emph{syntax-directed variational autoencoder} (SD-VAE),
in which the semantic restriction component is advanced to the stage of syntax-tree generation.
This equips the generator with both syntactic and semantic validation.
The proposed syntax-directed generative mechanism in the decoder further constrains the output space to ensure semantic correctness in the tree-generation process.
The relationships between our proposed model and previous models can be characterized in Figure~\ref{fig:diagram}.
\begin{figure}
\centering
\vspace{-6mm}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{diagram-crop}
\caption{ \label{fig:diagram} Illustrations of structured data decoding space }
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.535\textwidth}
\includegraphics[width=\textwidth]{cfg_not_enough_v2.pdf}
\caption{CFG parses program `A=1;B=C+2' \label{fig:cfg_not_enough}}
\end{subfigure}
\caption{The illustration on the left shows the hierarchy of the structured-data decoding space with respect to different works,
together with the theoretical classification of the corresponding strings from formal language theory.
SD-VAE, our proposed model with attribute grammar, shapes the output space more tightly around the meaningful target space than existing works.
On the right we show a case where a CFG is unable to capture the semantic constraints, since it successfully parses an invalid program.
}
\vspace{-5mm}
\end{figure}
Our method brings the theory of formal languages into stochastic generative modeling.
The contribution of our paper can be summarized as follows:
\begin{itemize}[leftmargin=*]
\item \emph{Syntax and semantics enforcement}:
We propose a new formalization of semantics that systematically converts the offline semantic check into online guidance for stochastic generation using the proposed \emph{stochastic lazy attribute}.
This allows us to effectively address both syntactic and semantic constraints.
\item \emph{Efficient learning and inference}: Our approach has computational cost $O(n)$, where $n$ is the length of the structured data. This is the same as existing methods like CVAE and GVAE which do not enforce semantics in generation. During inference, the SD-VAE runs with on-the-fly semantic guidance, while the existing alternatives must generate many candidates for semantic checking.
\item \emph{Strong empirical performance}: We demonstrate the effectiveness of the SD-VAE through applications in two domains, namely (1) the subset of Python programs and (2) molecules.
Our approach consistently and significantly improves the results in evaluations including generation, reconstruction and optimization.
\end{itemize}
\vspace{-3mm}
\section{Background}\label{sec:background}
\vspace{-2mm}
Before introducing our model and the learning algorithm, we first provide some background knowledge which is important for understanding the proposed method.
\subsection{Variational Autoencoder}\label{subsec:vae}
The variational autoencoder~\citep{KinWel13,RezMohWie14} provides a framework for learning the probabilistic generative model as well as its posterior, respectively known as decoder and encoder.
We denote the observation as $x$, which is the structured data in our case, and the latent variable as $z$.
The decoder models the probabilistic generative process of $x$ given the continuous representation $z$ through the likelihood $p_\theta(x|z)$ and the prior over the latent variables $p(z)$, where $\theta$ denotes the parameters. The encoder approximates the posterior $p_\theta(z|x)\propto p_\theta(x|z)p(z)$ with a model $q_\psi(z|x)$ parametrized by $\psi$.
The decoder and encoder are learned simultaneously by maximizing the evidence lower bound~(ELBO) of the marginal likelihood, \ie,
\begin{equation}\label{eq:vae_elbo}
\Lcal\rbr{X; \theta, \psi} := \sum_{x\in X}\EE_{q(z|x)}\sbr{\log p_\theta(x|z)p(z) - \log q_\psi(z|x)}\le \sum_{x\in X}\log \int p_\theta(x|z)p(z)dz,
\end{equation}
where $X$ denotes the training datasets containing the observations.
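To make \eqref{eq:vae_elbo} concrete, the following minimal sketch (illustrative only: the Gaussian encoder, factorized Bernoulli decoder, and toy linear decoder below are our assumptions, not the architecture used in this paper) computes a single-sample Monte Carlo estimate of the ELBO with the reparameterization trick, using the closed-form KL divergence between $q_\psi(z|x)$ and the prior $p(z)=\mathcal{N}(0, I)$:

```python
import numpy as np

def elbo_estimate(x, mu, logvar, decode_logits, rng):
    """Single-sample ELBO estimate for a Gaussian encoder and Bernoulli decoder.

    x            : binary observation, shape (d,)
    mu, logvar   : encoder output, q(z|x) = N(mu, diag(exp(logvar))), shape (k,)
    decode_logits: function z -> logits of p(x|z) (a stand-in for the decoder net)
    """
    # reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I)
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps

    # reconstruction term log p(x|z) for a factorized Bernoulli likelihood
    logits = decode_logits(z)
    log_px_z = np.sum(x * logits - np.logaddexp(0.0, logits))

    # E_q[log p(z) - log q(z|x)] = -KL(q || N(0, I)), available in closed form
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    return log_px_z - kl, kl

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0, 1.0])
W = rng.standard_normal((3, 2))          # toy linear "decoder network"
lower_bound, kl = elbo_estimate(x, np.zeros(2), np.zeros(2), lambda z: W @ z, rng)
```

Maximizing this estimate over the encoder and decoder parameters simultaneously tightens the bound and fits the generative model.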
\subsection{Context Free Grammar and Attribute Grammar}\label{subsec:att_grammar}
\textbf{Context free grammar\ \ } A context free grammar (CFG) is defined as $G = \langle \Vcal, \Sigma, \Rcal, s \rangle$, where symbols are divided into $\Vcal$, the set of non-terminal symbols, $\Sigma$, the set of terminal symbols and $s \in \Vcal$, the start symbol. Here $\Rcal$ is the set of production rules.
Each production rule $r \in \Rcal$ is denoted as $r = \alpha \rightarrow \beta$, where $\alpha \in \Vcal$ is a nonterminal symbol, and $\beta = u_1u_2 \ldots u_{|\beta|} \in \left(\Vcal \bigcup \Sigma\right)^*$ is a sequence of terminal and/or nonterminal symbols.
\textbf{Attribute grammar\ \ } To enrich the CFG with ``semantic meaning'', \citet{Knuth68} formalizes attribute grammar that introduces attributes and rules to CFG. An attribute is an attachment to the corresponding nonterminal symbol in CFG, written in the format $\langle\textit{v}\rangle.\textit{a}$ for $\textit{v} \in \Vcal$.
There can be two types of attributes assigned to non-terminals in $G$: the \emph{inherited} attributes and the \emph{synthesized} attributes.
An inherited attribute depends on the attributes from its parent and siblings, while a synthesized attribute is computed based on the attributes of its children.
Formally, for a production $u_0 \rightarrow u_1u_2 \ldots u_{|\beta|}$,
we let $I(u_i)$ and $S(u_i)$ denote the sets of \textit{inherited} and \textit{synthesized} attributes of $u_i$ for $i \in \{0, \ldots, |\beta|\}$, respectively.
\subsubsection{A motivational example}
\label{sec:example}
We now exemplify how the attribute grammar defined above enriches a CFG with non-context-free semantics. We use the following toy grammar, a subset of SMILES that generates either a chain or a cycle with three carbons: \par
{
\small
\textbf{Production}\hspace*{\fill} \textbf{Semantic Rule}
\grammarindent10ex
\grammarparsep0.5ex
\begin{grammar}
<s> $\rightarrow$ <atom>$_1$ `C' <atom>$_2$
\hspace*{\fill} <s>".matched" $\leftarrow$ <atom>$_1$".set" $\bigcap$ <atom>$_2$".set", \\
\hspace*{\fill} <s>".ok" $\leftarrow$ <atom>$_1$".set" $=$ <s>."matched" $=$ <atom>$_2$".set"
<atom> $\rightarrow$ `C' | `C' <bond> <digit> \hspace*{\fill} <atom>".set" $\leftarrow$ $\varnothing$ | {concat$\large($<bond>".val", <digit>".val"$\large)$}
<bond> $\rightarrow$ `-' | `=' | `#' \hspace*{\fill} <bond>".val" $\leftarrow$ `-' | `=' | `#'
<digit> $\rightarrow$ `1' | `2' | ... | `9' \hspace*{\fill} <digit>".val" $\leftarrow$ `1' | `2' ... | `9'
\end{grammar}
}
where we show the production rules in the CFG with $\rightarrow$ on the left, and the calculation of attributes in the attribute grammar with $\leftarrow$ on the left. Here we leverage the attribute grammar to check (with attribute \texttt{matched}) whether the ringbonds come in pairs: a ringbond generated at $\langle\textit{atom}\rangle_1$ should match the bond type and bond index generated at $\langle\textit{atom}\rangle_2$; also, the semantic constraint expressed by $\langle\textit{s}\rangle\texttt{.ok}$ requires that there be no difference between the \texttt{set} attributes of $\langle\textit{atom}\rangle_1$ and $\langle\textit{atom}\rangle_2$.
Such constraint in SMILES is known as \emph{cross-serial dependencies} (CSD)~\citep{BreKapPetZae82}
which is non-context-free~\citep{Shieber85}. See Appendix~\ref{sec:smiles_explain} for more explanations.
Figure~\ref{fig:offline_check} illustrates the process of performing syntax and semantics checks in compilers. Here all the attributes are \emph{synthesized}, \ie, calculated in a bottom-up direction.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figs/demo_bottomup}
\caption{Bottom-up syntax and semantics check in compilers. \label{fig:offline_check}}
\vspace{-3mm}
\end{figure}
So generally, in the semantic correctness checking procedure, one needs to perform bottom-up computations of the attributes \emph{after} the parse tree is generated. However, in the top-down structure-generating process, the parse tree is not ready for semantic checking, since the synthesized attributes of each node require information from its children nodes, which are not generated yet. Due to this dilemma, it is nontrivial to use the attribute grammar to guide the top-down generation of tree-structured data. One straightforward way is an acceptance-rejection sampling scheme, \ie, using the decoder of CVAE or GVAE as a proposal and the semantic checking as the threshold. Obviously, since the decoder does not include semantic guidance, the proposal distribution may frequently produce semantically invalid candidates, wasting the computational cost in vain.
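The offline bottom-up check can be made concrete for the toy grammar of Section~\ref{sec:example}. In the following sketch (added for illustration; the parse trees are hard-coded as tuples and the helper names are ours, a real implementation would operate on the output of a CFG parser), a semantically invalid string still passes the syntax stage but fails the attribute check:

```python
# Offline, bottom-up semantic check for the toy SMILES grammar.
def atom_set(atom):
    """Synthesized attribute <atom>.set: ringbond labels opened at this atom."""
    if atom == ('C',):                  # production <atom> -> 'C'
        return set()
    _, bond, digit = atom               # production <atom> -> 'C' <bond> <digit>
    return {bond + digit}               # concat(<bond>.val, <digit>.val)

def s_ok(atom1, atom2):
    """Attributes of <s> -> <atom> 'C' <atom>: compute .matched, then test .ok."""
    s1, s2 = atom_set(atom1), atom_set(atom2)
    matched = s1 & s2                   # <s>.matched
    return s1 == matched == s2          # <s>.ok

chain = s_ok(('C',), ('C',))                      # 'CCC'     -> valid
cycle = s_ok(('C', '-', '1'), ('C', '-', '1'))    # 'C-1CC-1' -> valid
broken = s_ok(('C', '-', '1'), ('C', '=', '2'))   # 'C-1CC=2' -> syntax ok, semantics fail
print(chain, cycle, broken)   # True True False
```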
\vspace{-3mm}
\section{Syntax-Directed Variational Autoencoder}\label{sec:sd_vae}
As described in Section~\ref{sec:example}, directly using attribute grammar in an offline fashion (\ie, after the generation process finishes) is not efficient to address both syntax and semantics constraints.
In this section we describe how to bring forward the attribute grammar online and incorporate it into VAE, such that our VAE addresses both \emph{syntactic} and \emph{semantic} constraints.
We name our proposed method Syntax-Directed Variational Autoencoder (SD-VAE).
\iffalse
This convention to sequence data is usually specified using Context Free Grammar (CFG) as a collection of production rules.
From a generative point of view,
the generation consists of recursive applications of productions rules starting from the start symbol,
yielding a parse tree whose terminal-symbol are leaves, when concatenated from left to right, form the corresponding sequence data.
This process itself can also be represented as a sequence of applied production rules following a pre-order traversal on the parsed tree.
In the syntax-directed formalization proposed in \cite{KusPaiHer17},
instead of directly generating the structured data's corresponding sequence data,
the generative model produces a sequence of production rules that, when applied in the pre-order traversal order of the parse tree,
yields the sequence data which corresponds to the structured one.
In this way, the model only outputs sequence data with legal parse tree, therefore enforcing the syntactical constraints.
Still being a model generating sequences, it remains in the realm of proven sequence techniques such as RNNs and CNNs.
\fi
\vspace{-1mm}
\subsection{Stochastic Syntax-Directed Decoder}\label{subsec:sd_decoder}
\vspace{-1mm}
By scrutinizing the tree generation, the major difficulty in incorporating the attribute grammar into the process is the appearance of the synthesized attributes. For instance, when expanding the start symbol $\langle\textit{s}\rangle$, none of its children is generated yet. Thus their attributes are also absent at this time, making $\langle\textit{s}\rangle.\texttt{matched}$ impossible to compute. To enable on-the-fly computation of the synthesized attributes for semantic validation during tree generation, besides the two types of attributes, we introduce \emph{stochastic lazy attributes} to enlarge the existing attribute grammar. Such \emph{stochasticity} transforms the corresponding synthesized attribute into inherited constraints in the generative procedure; the \emph{lazy} linking mechanism then sets the actual value of the attribute once all the other dependent attributes are ready.
We demonstrate how the decoder with \emph{stochastic lazy attributes} generates semantically valid output through the same pedagogical example as in Section~\ref{sec:example}. Figure~\ref{fig:online_generation} visually demonstrates this process.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figs/demo_topdown}
\caption{On-the-fly generative process of SD-VAE in order from (a) to (g). Steps: (a) stochastic generation of attribute; (b)(f)(g) constrained sampling with inherited attributes; (c) unconstrained sampling; (d) synthesized attribute calculation on generated subtree. (e) lazy evaluation of the attribute at root node. \label{fig:online_generation}}
\vspace{-3mm}
\end{figure}
\begin{algorithm}[t]
\caption{\textbf{Decoding with Stochastic Syntax-Directed Decoder}}\label{alg:decoder}
\begin{algorithmic}[1]
\State {\bf Global variables:} CFG: $G=(\Vcal, \Sigma, \Rcal, s)$, decoder network parameters $\theta$
\Procedure{GenTree}{$node$, $\Tcal$}
\State Sample stochastic lazy attribute $node._{sa} \sim \Bcal_{\theta}(sa | node, \Tcal)$ \Comment{when introduced on $node$}
\State Sample production rule $r = (\alpha \rightarrow \beta) \in \Rcal \sim p_{\theta}(r | ctx, node, \Tcal)$. \Comment{The conditioned variables encode the semantic constraints in tree generation.}
\State $ctx \leftarrow \text{RNN}(ctx, r)$ \Comment{update context vector}
\For{$i = 1, \ldots, |\beta|$}
\State $v_i \leftarrow \text{Node}(u_i, node, \{v_j \}_{j = 1}^{i-1}) $ \Comment{node creation with parent and siblings' attributes}
\State GenTree($v_i$, $\Tcal$) \Comment{recursive generation of children nodes}
\State Update synthesized and stochastic attributes of $node$ with $v_i$ \Comment{lazy linking}
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
The tree generation procedure is indeed sampling from the decoder $p_\theta(x|z)$, which can be decomposed into the following steps:
{\bf i) {stochastic predetermination:}} in Figure~\ref{fig:online_generation}(a), we start from the node $\langle\textit{s}\rangle$, whose synthesized attribute $\langle\textit{s}\rangle.\texttt{matched}$ determines the index and bond type of the ringbond that will be matched at node $\langle\textit{s}\rangle$. Since we know nothing about the children nodes at this point, the only thing we can do is to `guess' a value. That is, we associate a stochastic attribute $\langle\textit{s}\rangle.\texttt{sa}\in \cbr{0, 1}^{C_a}\sim \prod_{i=1}^{C_a}\Bcal(sa_i|z)$ as a predetermination that substitutes for the absent synthesized attribute $\langle\textit{s}\rangle.\texttt{matched}$, where $\Bcal(\cdot)$ is the Bernoulli distribution. Here $C_a$ is the maximum possible cardinality~\footnote{Note that setting a threshold for $C_a$ assumes a \textit{mildly context sensitive grammar} (\eg, limited CSD). } of the corresponding attribute $a$. In the above example, $0$ indicates no ringbond and $1$ indicates one ringbond at each of $\langle \textit{atom}\rangle_1$ and $\langle \textit{atom}\rangle_2$.
{\bf ii) constraints as inherited attributes: } we pass $\langle\textit{s}\rangle.\texttt{sa}$ as inherited constraints to the children of node $\langle \textit{s}\rangle$, \ie, $\langle \textit{atom}\rangle_1$ and $\langle \textit{atom}\rangle_2$, to ensure semantic validity during tree generation. For example, in Figure~\ref{fig:online_generation}(b) \texttt{`sa=1'} is passed down to $\langle \textit{atom}\rangle_1$.
{\bf iii) sampling under constraints:} without loss of generality, we assume $\langle \textit{atom}\rangle_1$ is generated before $\langle \textit{atom}\rangle_2$. We then sample rules from $p_{\theta}(r| \langle \textit{atom}\rangle_1, \langle\textit{s}\rangle, z)$ to expand $\langle \textit{atom}\rangle_1$, and so on, generating the subtree recursively. Since the sampling distribution is carefully designed to condition on the stochastic attribute, the inherited constraints will eventually be satisfied. In the example, because $\langle\textit{s}\rangle.\texttt{sa} = \texttt{`1'}$, when expanding $\langle \textit{atom}\rangle_1$ the sampling distribution $p_{\theta}(r| \langle \textit{atom}\rangle_1, \langle\textit{s}\rangle, z)$ only has positive mass on the rule $\langle\textit{atom}\rangle$ $\rightarrow$ \texttt{`C'} $\langle\textit{bond}\rangle\,\,\langle\textit{digit}\rangle$.
{\bf iv) lazy linking:} once we complete the generation of the subtree rooted at $\langle \textit{atom}\rangle_1$, the synthesized attribute $\langle \textit{atom}\rangle_1.\texttt{set}$ is now available. According to the semantic rule for $\langle\textit{s}\rangle.\texttt{matched}$, we can instantiate $\langle\textit{s}\rangle.\texttt{matched} = \langle \textit{atom}\rangle_1.\texttt{set} = \texttt{\{`-1'\}}$. This linking is shown in Figure~\ref{fig:online_generation}(d)(e). When expanding $\langle \textit{atom}\rangle_2$, the $\langle\textit{s}\rangle.\texttt{matched}$ will be passed down as inherited attribute to regulate the generation of $\langle \textit{atom}\rangle_2$, as is demonstrated in Figure~\ref{fig:online_generation}(f)(g).
In summary, a general syntax tree $\Tcal \in L(G)$ can be constructed step by step within the language $L(G)$ covered by grammar $G$. In the beginning, $\Tcal^{(0)} = root$ with $root._{symbol} = s$, \ie, the tree contains only the start symbol $s$. At step $t$, we choose a nonterminal node in the \textit{frontier}\footnote{Here the frontier is the set of all nonterminal leaves in the current tree.}
of partially generated tree $\Tcal^{(t)}$ to expand. The generative process in each step $t = 0, 1, \ldots$ can be described as:
\vspace{-3mm}
\begin{enumerate}
\item Pick a node $v^{(t)} \in Fr(\Tcal^{(t)})$ whose required attributes are either already satisfied, or are stochastic attributes to be sampled first according to the Bernoulli distribution $\Bcal(\cdot |v^{(t)}, \Tcal^{(t)})$;
\item Sample rule $r^{(t)} = \alpha^{(t)} \rightarrow \beta^{(t)} \in \Rcal$ according to distribution $p_{\theta}(r^{(t)} | v^{(t)}, \Tcal^{(t)})$, where $v^{(t)}._{symbol} = \alpha^{(t)}$, and $\beta^{(t)} = u_1^{(t)}u_2^{(t)}\ldots u_{|\beta^{(t)}|}^{(t)}$, \ie, expand the nonterminal with production rules defined in CFG.
\item $\Tcal^{(t + 1)} = \Tcal^{(t)} \bigcup \{(v^{(t)}, u_i^{(t)})\}_{i=1}^{|\beta^{(t)}|}$, \ie, grow the tree by attaching $\beta^{(t)}$ to $v^{(t)}$. Now the node $v^{(t)}$ has children represented by symbols in $\beta^{(t)}$.
\end{enumerate}
\vspace{-3mm}
The above process continues until, after $T$ steps, all the nodes in the frontier of $\Tcal^{(T)}$ are terminals.
This yields Algorithm~\ref{alg:decoder} for sampling structures that are both syntactically and semantically valid.
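The frontier-expansion procedure above can be sketched in plain Python. The toy CFG and the uniform rule choice below are illustrative stand-ins for the real SMILES grammar and the learned distribution $p_\theta$, and attribute bookkeeping is omitted:

```python
import random

# Toy CFG: nonterminal symbols map to lists of productions (tuples of
# symbols); strings absent from GRAMMAR are terminals.
GRAMMAR = {
    "s":    [("atom", "atom")],
    "atom": [("C",), ("N",), ("C", "atom")],
}

def gen_tree(symbol, rng, depth=0, max_depth=5):
    """Recursively expand `symbol`, mimicking GenTree in Algorithm 1.

    The learned distribution p_theta(r | ctx, node, T) is replaced here
    by a uniform choice over productions (terminal-only at max depth)."""
    if symbol not in GRAMMAR:              # terminal symbol: a leaf
        return symbol
    rules = GRAMMAR[symbol]
    if depth >= max_depth:                 # force termination on deep branches
        rules = [r for r in rules if all(u not in GRAMMAR for u in r)] or rules
    beta = rng.choice(rules)               # sample production alpha -> beta
    # children are generated left to right, i.e. in pre-order
    children = [gen_tree(u, rng, depth + 1, max_depth) for u in beta]
    return (symbol, children)

def leaves(tree):
    """Concatenate terminal leaves left-to-right to recover the sequence."""
    if isinstance(tree, str):
        return tree
    return "".join(leaves(c) for c in tree[1])

rng = random.Random(0)
tree = gen_tree("s", rng)
seq = leaves(tree)   # a string over {C, N} with a legal parse tree
```

Every string produced this way has a legal parse tree by construction, which is exactly the syntactic guarantee the decoder inherits from the grammar.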
In fact, in the model training phase, we need to compute the likelihood $p_\theta(x|z)$ given $x$ and $z$. The probability computation procedure is similar to the sampling procedure in the sense that both require tree generation. The only difference is that in the likelihood computation the tree structure, \ie, the computing path, is fixed since $x$ is given, while in the sampling procedure it is sampled following the learned model.
Specifically, the generative likelihood can be written as:
\begin{equation}
\label{eq:p_x_give_z}
p_{\theta}(x | z) = \prod_{t=0}^T p_{\theta}(r_t | ctx^{(t)}, node^{(t)}, \Tcal^{(t)}) \Bcal_{\theta}(sa_t | node^{(t)}, \Tcal^{(t)})
\end{equation}
where $ctx^{(0)} = z$ and $ctx^{(t)} = \text{RNN}(r_t, ctx^{(t-1)})$. Here the RNN can be any commonly used variant, such as an LSTM.
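As a minimal illustration of the factorization in Eq.~(\ref{eq:p_x_give_z}), the sketch below accumulates per-step log-probabilities while threading a context through a toy recurrence. The hand-written \texttt{rnn} and \texttt{rule\_probs} are hypothetical stand-ins for the learned networks, and the Bernoulli factors for stochastic attributes are omitted for brevity:

```python
import math

# Toy rule inventory; the real model indexes productions of the CFG.
RULES = ["s->atom atom", "atom->'C'", "atom->'N'"]

def rnn(ctx, rule_idx):
    """Toy context update ctx^{(t)} = RNN(r_t, ctx^{(t-1)})."""
    return (ctx * 31 + rule_idx) % 97

def rule_probs(ctx):
    """Toy softmax over production rules given the context."""
    logits = [((ctx + i) % 5) / 5.0 for i in range(len(RULES))]
    z = sum(math.exp(l) for l in logits)
    return [math.exp(l) / z for l in logits]

def log_likelihood(rule_seq, z_ctx=7):
    """log p_theta(x|z) = sum_t log p_theta(r_t | ctx^{(t)}, ...),
    with ctx^{(0)} = z; one factor per production applied."""
    ctx, ll = z_ctx, 0.0
    for r in rule_seq:
        i = RULES.index(r)
        ll += math.log(rule_probs(ctx)[i])   # factor for step t
        ctx = rnn(ctx, i)                    # update context for step t+1
    return ll

ll = log_likelihood(["s->atom atom", "atom->'C'", "atom->'N'"])
```

During training the rule sequence is fixed by the parse tree of the given $x$, so this sum is fully determined; during sampling the same factors are drawn one step at a time.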
\subsection{Structure-Based Encoder}\label{subsec:sb_encoder}
As we introduced in section~\ref{sec:background}, the encoder, $q_\psi(z|x)$
approximates the posterior of the latent variable through the model with some parametrized function with parameters $\psi$. Since the structure in the observation $x$ plays an important role, the encoder parametrization should take such information into account. Recently developed deep learning models~\citep{DuvMacIpaBometal15,DaiDaiSon16,LeiJinRegJaa17} provide powerful candidate encoders. However, to demonstrate the benefits of the proposed syntax-directed decoder in incorporating the attribute grammar for semantic restrictions, we exploit the same encoder as in~\citet{KusPaiHer17} for a fair comparison later.
We provide a brief introduction to the particular encoder model used in~\cite{KusPaiHer17} to keep the paper self-contained. Given a program or a SMILES sequence, we obtain the corresponding parse tree using the CFG and decompose it into a sequence of productions through a pre-order traversal of the tree. Then, we convert these productions into one-hot indicator vectors, in which each dimension corresponds to one production in the grammar. The encoder is a deep convolutional neural network that maps this sequence of one-hot vectors to a continuous vector.
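The production-to-one-hot conversion can be sketched as follows; the three-rule inventory and the maximum length are hypothetical placeholders for the real grammar and padding length, and the resulting matrix is what the convolutional encoder would consume:

```python
# Map a pre-order sequence of applied productions to one-hot indicator
# vectors, zero-padded to a fixed length for the CNN encoder.
RULES = ["s->atom atom", "atom->'C'", "atom->'N'"]   # toy rule inventory
RULE_INDEX = {r: i for i, r in enumerate(RULES)}

def to_one_hot(production_seq, max_len=6):
    """Return a (max_len x |rules|) 0/1 matrix; row t marks the rule
    applied at step t of the pre-order traversal, later rows are zero."""
    mat = [[0] * len(RULES) for _ in range(max_len)]
    for t, rule in enumerate(production_seq):
        mat[t][RULE_INDEX[rule]] = 1
    return mat

x = to_one_hot(["s->atom atom", "atom->'C'", "atom->'N'"])
```

Each used row sums to one and padding rows are all zeros, so sequence length information is preserved implicitly.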
\subsection{Model Learning}\label{subsec:model_learning}
Our learning goal is to maximize the evidence lower bound in Eq~\ref{eq:vae_elbo}. Given the encoder, we map the structured input into the latent space $z$. The variational posterior $q(z|x)$ is parameterized as a Gaussian distribution whose mean and variance are the outputs of corresponding neural networks. The prior of the latent variable is $p(z) = \Ncal(0, I)$. Since both the prior and posterior are Gaussian, we use the closed form of the KL-divergence as in~\citet{KinWel13}.
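Since both distributions are diagonal Gaussians, the KL term has the standard closed form $\frac{1}{2}\sum_i \big(\mu_i^2 + \sigma_i^2 - 1 - \log\sigma_i^2\big)$, which can be computed directly:

```python
import math

def kl_diag_gaussian_vs_std_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ):
    0.5 * sum_i ( mu_i^2 + sigma_i^2 - 1 - log sigma_i^2 )."""
    return 0.5 * sum(m * m + math.exp(lv) - 1.0 - lv
                     for m, lv in zip(mu, log_var))

# the KL is zero exactly when the posterior equals the prior
kl_zero = kl_diag_gaussian_vs_std_normal([0.0, 0.0], [0.0, 0.0])
kl_shift = kl_diag_gaussian_vs_std_normal([1.0], [0.0])   # = 0.5
```

In practice this term is computed per minibatch from the encoder's mean and log-variance outputs and added to the reconstruction loss.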
In the decoding stage, our goal is to maximize $p_{\theta}(x | z)$. Using the Equation~(\ref{eq:p_x_give_z}), we can compute the corresponding conditional likelihood. During training, the syntax and semantics constraints required in Algorithm~\ref{alg:decoder} can be precomputed.
In practice, we observe no significant time penalty measured in wall clock time compared to previous works.
\section{Related work}\label{sec:related_work}
Generative modeling of discrete structured data has attracted increasing interest among researchers in different domains. The classical sequence to sequence model~\citep{SutVinLe14} and its variations have also been applied to molecules~\citep{GomDuvHer16}. Since the model is quite flexible, it is hard to generate valid structures with limited data, though \citet{JanWesPaietal18} shows that an extra validator model can help to some degree. Techniques including data augmentation~\citep{Bjerrum17}, active learning~\citep{JanWesJos17} and reinforcement learning~\citep{GuiSanFar17} have also been proposed to tackle this issue. However, according to the empirical evaluations of~\citet{Benhenda17}, the validity is still not satisfactory. Even when validity is enforced, the models tend to overfit to simple structures while neglecting diversity.
Since the structured data often comes with formal grammars,
it is very helpful to generate its parse tree derived from CFG, instead of generating sequence of tokens directly.
The Grammar VAE~\citep{KusPaiHer17} introduced a CFG-constrained decoder for simple math expression and SMILES string generation.
The rules are used to mask out invalid syntax such that the generated sequence is always from the language defined by its CFG.
\citet{ParMohSinLietal16} use a Recursive-Reverse-Recursive Neural Network (R3NN) to capture global context information while expanding with CFG production rules. Although these works follow the syntax via a CFG, the context-sensitive information can only be captured using variants of sequence/tree RNNs~\citep{AlvJaa16, DonLap16, ZhaLuLap15}, which may not be time and sample efficient.
In our work, we capture the semantics with the proposed stochastic lazy attributes when generating structured outputs. By addressing the most common semantics to harness the deep networks, our approach can greatly reshape the output domain of the decoder~\citep{HuMaLiuHovetal16}. As a result, we also obtain a better generative model for discrete structures.
\section{Experiments}\label{sec:experiments}
Code is available at \url{https://github.com/Hanjun-Dai/sdvae}.
We show the effectiveness of our proposed SD-VAE
with applications in two domains, namely programs and molecules.
We compare our method with CVAE~\citep{GomDuvHer16} and GVAE~\citep{KusPaiHer17}.
CVAE only takes character sequence information, while GVAE utilizes the context-free grammar.
To make a fair comparison, we closely follow the experimental protocols that were set up in \citet{KusPaiHer17}.
The training details are included in Appendix~\ref{sec:training-details}.
Our method obtains significantly better results than previous works: it yields better reconstruction accuracy and prior validity by large margins,
while also having comparable diversity of generated structures.
More importantly, SD-VAE finds better solutions in program and molecule regression and optimization tasks. This demonstrates that the continuous latent space obtained by SD-VAE is smoother and more discriminative.
\subsection{Settings}
Here we first describe our datasets in detail.
The programs are represented as a list of statements.
Each statement is an atomic arithmetic operation on variables (labeled as \texttt{v0}, \texttt{v1}, $\cdots$, \texttt{v9}) and/or immediate numbers ($1, 2, \dots, 9$).
Some examples are listed below:
\begin{center}
\vspace{-2mm}
\texttt{v3=sin(v0);v8=exp(2);v9=v3-v8;v5=v0*v9;return:v5}\\
\texttt{v2=exp(v0);v7=v2*v0;v9=cos(v7);v8=cos(v9);return:v8}
\end{center}
Here \texttt{v0} is always the input, and the variable specified by \texttt{return}
(respectively \texttt{v5} and \texttt{v8} in the examples) is the output;
each program therefore represents a univariate function $f:\mathbb{R} \to \mathbb{R}$. Note that a correct program should,
besides the context-free grammar specified in Appendix~\ref{sec:program-syntax},
also respect the semantic constraints. For example, a variable should be defined before being referenced. We randomly generate $130,000$ programs, each consisting of $1$ to $5$ valid statements. Here the maximum number of decoding steps is $T=80$.
We hold out $2,000$ programs for testing and use the rest for training and validation.
For molecule experiments, we use the same dataset as in \citet{KusPaiHer17}.
It contains $250,000$ SMILES strings, which are extracted from the ZINC database~\citep{GomDuvHer16}.
We use the same split as \citet{KusPaiHer17}, where $5000$ SMILES strings are held out for testing. Regarding the syntax constraints, we use the grammar specified in Appendix~\ref{sec:grammar}, which is also the same as \citet{KusPaiHer17}. Here the maximum number of decoding steps is $T=278$.
For our SD-VAE, we address some of the most common semantics:
\textbf{Program semantics\ \ } We address the following:
\begin{inlineenum}
\item variables should be defined before use,
\item program must return a variable,
\item number of statements should be less than 10.
\end{inlineenum}
\textbf{Molecule semantics\ \ } The SMILES semantics we addressed includes:
\begin{inlineenum}
\item ringbonds should satisfy cross-serial dependencies,
\item the explicit valence of atoms should not exceed the permitted maximum.
\end{inlineenum} For more details about the semantics of SMILES language, please refer to Appendix~\ref{sec:smiles_explain}.
\subsection{Reconstruction Accuracy and Prior Validity}
\label{sec:reconstruction-and-prior}
\vspace{-3mm}
\begin{table*}[th]
\centering
\resizebox{1.01\textwidth}{!}{
\begin{tabular}{@{}ccccc@{}}
\toprule
& \multicolumn{2}{c}{\textbf{Program}} & \multicolumn{2}{c}{\textbf{Zinc SMILES}} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5}
\textbf{Methods} & \textbf{Reconstruction \%*} & \textbf{Valid Prior \%} & \textbf{Reconstruction \%} & \textbf{Valid Prior \%} \\
\midrule
SD-VAE & $\mathbf{96.46}$ $\mathbf{(99.90,99.12,90.37)}$ & $\mathbf{100.00}$ & $\mathbf{76.2}$ & $\mathbf{43.5}$ \\
GVAE & $71.83$ $(96.30,77.28,41.90)$ & $2.96$ & $53.7$ & $7.2$ \\
CVAE & $13.79$ $(40.46,0.87,0.02)$ & $0.02$ & $44.6$ & $0.7$ \\
\bottomrule
\end{tabular}
}
\caption{Reconstruction accuracy and prior validity estimated using the Monte Carlo method.
Our proposed method (SD-VAE) performs significantly better than existing works.\\
* We also report the reconstruction \% grouped by the number of statements (3, 4, 5) in parentheses.}
\label{table:rapv}
\end{table*}
We use the held-out dataset to measure the reconstruction accuracy of VAEs. For prior validity, we first sample the latent representations from prior distribution, and then evaluate how often the model can decode into a valid structure.
Since both encoding and decoding are stochastic in VAEs, we follow the Monte Carlo method used in \citet{KusPaiHer17} to do estimation:
\begin{inlineenum}
\item \textit{reconstruction:} for each of the structured data in the held-out dataset,
we encode it 10 times and decode each encoded latent representation 25 times,
and report the portion of decoded structures that are the same as the input ones;
\item \textit{validity of prior:} we sample 1000 latent representations $\mathbf{z} \sim \mathcal{N} \left( \mathbf{0} , \mathbf{I} \right)$.
For each of them we decode 100 times, and calculate the portion of the 100,000 decoded results that correspond to valid Program or SMILES sequences.
\end{inlineenum}
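The two Monte Carlo estimators can be sketched generically as below. Here \texttt{encode}, \texttt{decode}, \texttt{sample\_z} and \texttt{is\_valid} are placeholders for the actual model components, and the toy identity/corruption model at the end exists only to exercise the estimators:

```python
import random

def estimate_reconstruction(data, encode, decode, n_enc=10, n_dec=25, rng=None):
    """Fraction of decodings that reproduce the input exactly: each input
    is encoded n_enc times and each latent code decoded n_dec times."""
    rng = rng or random.Random(0)
    hits = trials = 0
    for x in data:
        for _ in range(n_enc):
            z = encode(x, rng)
            for _ in range(n_dec):
                hits += int(decode(z, rng) == x)
                trials += 1
    return hits / trials

def estimate_prior_validity(sample_z, decode, is_valid, n_z=1000,
                            n_dec=100, rng=None):
    """Fraction of decodings of prior samples that are valid structures."""
    rng = rng or random.Random(1)
    hits = 0
    for _ in range(n_z):
        z = sample_z(rng)
        hits += sum(int(is_valid(decode(z, rng))) for _ in range(n_dec))
    return hits / (n_z * n_dec)

# Toy model: encoding is the identity; decoding corrupts the input 10%
# of the time, so the reconstruction estimate should land near 0.9.
enc = lambda x, rng: x
dec = lambda z, rng: z if rng.random() < 0.9 else -z
acc = estimate_reconstruction([1, 2, 3], enc, dec)
pv = estimate_prior_validity(lambda rng: rng.random(),
                             lambda z, rng: z,
                             lambda s: s < 0.5, n_z=100)
```

Both estimates are simple Monte Carlo averages, so their accuracy improves with the number of encode/decode repetitions at the usual $O(1/\sqrt{n})$ rate.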
\textbf{Program \ \ }
We show in the left part of Table~\ref{table:rapv} that our model
has a near-perfect reconstruction rate and, most importantly, perfect validity of programs decoded from the prior.
This huge improvement comes from our model utilizing the full semantics that previous work ignores,
which in theory guarantees a perfectly valid prior and in practice enables a high reconstruction success rate.
For a fair comparison, we run and tune the baselines on $10\%$ of the training data and report the best result.
In the same table we also report the reconstruction success rate grouped by the number of statements.
Our model maintains a high rate even as the program size grows.
\textbf{SMILES \ \ }
Since the settings are exactly the same, we include CVAE and GVAE results directly from~\citet{KusPaiHer17}.
We show in the right part of Table~\ref{table:rapv} that our model produces a much higher rate of successful reconstruction and ratio of valid prior.
Figure~\ref{fig:vis_reconstruct} in Appendix~\ref{app:recostruct} also demonstrates some decoded molecules from our method.
Note that the reported results do not take the semantics specific to aromaticity into account. If we use an alternative kekulized form of SMILES to train the model, then the valid portion of the prior goes up to $97.3\%$.
\subsection{Bayesian Optimization}
\label{sec:bo}
\vspace{-3mm}
One important application of VAEs is to enable the optimization (\eg, finding new structures with better properties) of discrete structures in a continuous latent space, and then use the decoder to obtain the actual structures.
Following the protocol used in \citet{KusPaiHer17}, we use Bayesian Optimization (BO) to search the programs and molecules with desired properties in latent space. Details about BO settings and parameters can be found in Appendix~\ref{app:bo}.
\newcommand{\mytab}{
\resizebox{1.01\textwidth}{!}{
\begin{tabular}{@{}c@{}cc@{}}
\toprule
\textbf{Method} & \textbf{Program} & \textbf{Score} \\
\midrule
& \texttt{v7=5+v0;v5=cos(v7);return:v5} & $0.1742$ \\
CVAE & \texttt{v2=1-v0;v9=cos(v2);return:v9} & $0.2889$ \\
& \texttt{v5=4+v0;v3=cos(v5);return:v3} & $0.3043$ \\
\midrule
& \texttt{v3=1/5;v9=-1;v1=v0*v3;return:v3} & $0.5454$ \\
GVAE & \texttt{v2=1/5;v9=-1;v7=v2+v2;return:v7} & $0.5497$ \\
& \texttt{v2=1/5;v5=-v2;v9=v5*v5;return:v9} & $0.5749$ \\
\midrule
& \texttt{v6=sin(v0);v5=exp(3);v4=v0*v6;return:v6} & $\mathbf{0.1206}$ \\
SD-VAE & \texttt{v5=6+v0;v6=sin(v5);return:v6} & $\mathbf{0.1436}$ \\
& \texttt{v6=sin(v0);v4=sin(v6);v5=cos(v4);v9=2/v4;return:v4} & $\mathbf{0.1456}$ \\
\midrule
Ground Truth & \texttt{v1=sin(v0);v2=exp(v1);v3=v2-1;return:v3} & --- \\
\bottomrule
\end{tabular}
}
}
\begin{figure*}[ht]
\small
\centering
\begin{subfigure}{0.38\textwidth}
\includegraphics[width=\textwidth, trim=0 0 0 0]{best_expr.pdf}
\end{subfigure}
\begin{subfigure}{0.60\textwidth}
\mytab
\end{subfigure}
\caption{On the left are best programs found by each method using Bayesian Optimization.
On the right are top 3 closest programs found by each method along with the distance to ground truth (lower distance is better).
Both our SD-VAE and CVAE can find similar curves, but our method aligns better with the ground truth.
In contrast the GVAE fails this task by reporting trivial programs representing linear functions.}
\label{fig:best-program}
\vspace{-3mm}
\end{figure*}
\textbf{Finding program\ \ } In this application the models are asked to find the program which is most similar to the ground truth program.
Here the distance is measured by $\log(1 + \text{MSE})$, where the MSE (Mean Square Error) calculates the discrepancy of program outputs, given the 1000 different inputs \texttt{v0} sampled evenly in $[-5, 5]$.
In Figure~\ref{fig:best-program} we show that, compared to CVAE and GVAE, our method finds the program closest to the ground truth.
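The distance metric above can be sketched directly. The target below is the ground-truth program listed in Figure~\ref{fig:best-program} (\texttt{v1=sin(v0);v2=exp(v1);v3=v2-1;return:v3}); the candidate function is a made-up rough approximation used only for illustration:

```python
import math

def program_distance(f_candidate, f_target, n=1000, lo=-5.0, hi=5.0):
    """Distance log(1 + MSE), with the MSE computed over n inputs v0
    spaced evenly in [lo, hi]."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    mse = sum((f_candidate(x) - f_target(x)) ** 2 for x in xs) / n
    return math.log(1.0 + mse)

# ground truth: v1=sin(v0); v2=exp(v1); v3=v2-1; return v3
target = lambda v0: math.exp(math.sin(v0)) - 1.0
cand = lambda v0: math.sin(v0)          # a crude approximation

d_self = program_distance(target, target)   # 0.0 for identical programs
d_cand = program_distance(cand, target)     # strictly positive
```

The $\log(1+\cdot)$ transform compresses large errors, so Bayesian Optimization is not dominated by programs that diverge on a few inputs.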
\textbf{Molecules\ \ } Here we optimize the drug properties of molecules.
In this problem, we ask the model to optimize for octanol-water partition coefficients (a.k.a \textit{log P}), an important measurement of drug-likeness of a given molecule. As \cite{GomDuvHer16} suggests,
for drug-likeness assessment, \textit{log P} is penalized by other properties including the synthetic accessibility score~\citep{ErtSch09}.
In Figure~\ref{fig:best-mol} we show the top-3 best molecules found by each method;
our method found molecules with better scores than previous works. One can also see that the molecule structures found by SD-VAE are richer than those of the baselines, which mostly consist of chain structures.
\begin{figure*}[ht]
\small
\centering
\includegraphics[width=0.98\textwidth, trim=0 0 0 0]{mol_found-crop}
\caption{Best top-3 molecules and the corresponding scores found by each method using Bayesian Optimization. }
\label{fig:best-mol}
\vspace{-3mm}
\end{figure*}
\subsection{Predictive performance of latent representation}
\vspace{-3mm}
\begin{table*}[ht]
\centering
\begin{tabular}{@{}ccccc@{}}
\toprule
& \multicolumn{2}{c}{\textbf{Program}} & \multicolumn{2}{c}{\textbf{Zinc}} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5}
\textbf{Method} & \textbf{LL} & \textbf{RMSE} & \textbf{LL} & \textbf{RMSE} \\
\midrule
CVAE & -4.943 $\pm$ 0.058 & 3.757 $\pm$ 0.026 & -1.812 $\pm$ 0.004 & 1.504 $\pm$ 0.006 \\
GVAE & -4.140 $\pm$ 0.038 & 3.378 $\pm$ 0.020 & -1.739 $\pm$ 0.004 & 1.404 $\pm$ 0.006 \\
SD-VAE & \textbf{-3.754 $\pm$ 0.045} & \textbf{3.185 $\pm$ 0.025} & \textbf{-1.697 $\pm$ 0.015} & \textbf{1.366 $\pm$ 0.023} \\
\bottomrule
\end{tabular}
\caption{Predictive performance using encoded mean latent vector. Test LL and RMSE are reported. }
\label{table:bayesian-opt-predicted}
\vspace{-3mm}
\end{table*}
VAEs also provide a way to do unsupervised feature representation learning~\citep{GomDuvHer16}. In this section, we seek to know how well our latent space predicts the properties of programs and molecules. After training the VAEs, we extract the latent vector of each structured data point, and train a sparse Gaussian process on the target value (namely the error for programs and the drug-likeness for molecules) for regression.
We test the performance in the held-out test dataset.
In Table~\ref{table:bayesian-opt-predicted},
we report the results in Log Likelihood (LL) and Root Mean Square Error (RMSE),
which show that our SD-VAE always produces a latent space that is more discriminative than both the CVAE and GVAE baselines.
This also shows that, with a properly designed decoder, the quality of the encoder is also improved via end-to-end training.
\subsection{Diversity of generated molecules}
\begin{table*}[ht]
\centering
\begin{tabular}{ccccc}
\toprule
Similarity Metric & MorganFp & MACCS & PairFp & TopologicalFp\\
\midrule
GVAE & \textbf{0.92 $\pm$ 0.10} & \textbf{0.83 $\pm$ 0.15} & 0.94 $\pm$ 0.10 & 0.71 $\pm$ 0.14\\
SD-VAE & \textbf{0.92 $\pm$ 0.09} & \textbf{0.83 $\pm$ 0.13} & \textbf{0.95 $\pm$ 0.08} & \textbf{0.75 $\pm$ 0.14} \\
\bottomrule
\end{tabular}
\caption{Diversity as statistics of pairwise distances measured as $1-s$, where $s$ is one of the similarity metrics; higher values thus indicate better diversity. We show $\mathrm{mean}\pm\mathrm{stddev}$ over $\binom{100}{2}$ pairs among 100 molecules. Note that we only report results from GVAE and our SD-VAE,
because CVAE has a very low prior validity and thus fails this evaluation protocol.}
\label{tab:mol_diversity}
\end{table*}
Inspired by~\citet{Benhenda17}, here we measure the diversity of generated molecules as an assessment of the methods.
The intuition is that a good generative model should be able to generate diverse data and avoid mode collapse in the learned space.
We conduct this experiment on the SMILES dataset. We first sample 100 points from the prior distribution. For each point,
we associate it with a molecule, namely the most frequently occurring valid SMILES among its decodings
(we use 50 decoding attempts since decoding is stochastic).
We then, with one of the several molecular similarity metrics, compute the pairwise similarity and report the mean and standard deviation
in Table~\ref{tab:mol_diversity}.
We see that both methods avoid the mode collapse problem and produce similar diversity scores.
This indicates that although our method has a more restricted decoding space than the baselines, diversity is not sacrificed:
we never rule out valid molecules,
and a more compact decoding space leads to a much higher probability of obtaining valid molecules.
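The diversity statistic itself can be sketched as follows. The set-based Tanimoto similarity below is only a stand-in for the actual fingerprint metrics of Table~\ref{tab:mol_diversity} (MorganFp, MACCS, etc.), which would be computed with cheminformatics tooling:

```python
from itertools import combinations
from statistics import mean, stdev

def diversity(mols, similarity):
    """Mean/stddev of pairwise distances 1 - s over all C(n, 2) molecule
    pairs, where s is a similarity in [0, 1]; higher mean = more diverse."""
    dists = [1.0 - similarity(a, b) for a, b in combinations(mols, 2)]
    return mean(dists), stdev(dists)

def tanimoto(a, b):
    """Tanimoto similarity on fingerprint-like bit sets: |A&B| / |A|B|."""
    return len(a & b) / len(a | b)

# stand-ins for molecular fingerprints (sets of "on" bit indices)
mols = [{1, 2, 3}, {2, 3, 4}, {5, 6}]
m, s = diversity(mols, tanimoto)
```

In the paper's protocol the same computation is run over the $\binom{100}{2}$ pairs of the 100 decoded molecules for each fingerprint type.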
\subsection{Visualizing the Latent Space}
\vspace{-3mm}
We seek to visualize the latent space as an assessment of how well our generative model produces a
coherent and smooth space of programs and molecules.
\textbf{Program\ \ }
Following \citet{BowVilVinetal16},
we visualize the latent space of program by interpolation between two programs.
More specifically,
given two programs which are encoded to $p_a$ and $p_b$ respectively in the latent space,
we pick 9 evenly interpolated points between them. For each point, we pick the most frequently decoded structure.
In Table~\ref{table:interpolation}
we compare our results with previous works.
Our SD-VAE passes through points in the latent space that can be decoded into valid programs without error,
with visually smoother interpolation than previous works.
Meanwhile, CVAE makes both syntactic and semantic errors, and GVAE produces only semantic errors (references to undefined variables),
but still in a considerable amount.
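The interpolation itself is linear in the latent space; a minimal sketch with two hypothetical low-dimensional latent codes standing in for the encoded programs $p_a$ and $p_b$:

```python
def interpolate(p_a, p_b, k=9):
    """Return k evenly spaced latent points strictly between p_a and p_b.
    Each point would then be decoded to its most frequent structure."""
    pts = []
    for i in range(1, k + 1):
        t = i / (k + 1)
        pts.append([a + t * (b - a) for a, b in zip(p_a, p_b)])
    return pts

# two hypothetical encoded programs in a 2-D latent space
grid = interpolate([0.0, 0.0], [1.0, 2.0])
```

The middle of the 9 points is the exact midpoint of $p_a$ and $p_b$, so a smooth latent space should decode it to a program "between" the two endpoints.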
\begin{table*}[h!]
\centering
\resizebox{1.01\textwidth}{!}{
\begin{tabular}{@{}lll@{}}
\toprule
\multicolumn{1}{c}{\textbf{CVAE}} & \multicolumn{1}{c}{\textbf{GVAE}} & \multicolumn{1}{c}{\textbf{SD-VAE}} \\
\midrule
\texttt{\color{brown}v6=cos(7);v8=exp(9);v2=v8*v0;v9=v2/v6;return:v9} &
\texttt{\color{brown}v6=cos(7);v8=exp(9);v2=v8*v0;v9=v2/v6;return:v9} &
\texttt{\color{brown}v6=cos(7);v8=exp(9);v2=v8*v0;v9=v2/v6;return:v9}\\
\texttt{v8=cos(3);v7=exp(7);v5=v7*v0;{\color{blue}v9=v9/v6};return:v9} &
\texttt{v3=cos(8);v6=exp(9);{\color{blue}v6=v8*v0};{\color{blue}v9=v2/v6};return:v9} &
\texttt{v6=cos(7);v8=exp(9);v2=v8*v0;v9=v2/v6;return:v9} \\
\texttt{v4=cos(3);v8=exp(3);v2=v2*v0;{\color{blue}v9=v8/v6};return:v9} &
\texttt{v3=cos(8);v6=2/8;{\color{blue}v6=v5*v9};{\color{blue}v5=v8v5};return:v5} &
\texttt{v6=cos(7);v8=exp(9);v3=v8*v0;v9=v3/v8;return:v9} \\
\texttt{v6=cos(3);v8=sin(3);{\color{blue}v5=v4*1};{\color{blue}v5=v3/v4};{\color{blue}return:v9}} &
\texttt{v3=cos(6);v6=2/9;{\color{blue}v6=v5+v5};{\color{blue}v5=v1+v6};return:v5} &
\texttt{v6=cos(7);v8=v6/9;v1=7*v0;v7=v6/v1;return:v7} \\
\texttt{v9=cos(1);v7=sin(1);{\color{blue}v3=v1*5};{\color{blue}v9=v9+v4};return:v9} &
\texttt{v5=cos(6);v1=2/9;{\color{blue}v6=v3+v2};v2=v5-v6;return:v2} &
\texttt{v6=cos(7);v8=v6/9;v1=7*v6;v7=v6+v1;return:v7} \\
\texttt{\color{red}v6=cos(1);v3=sin(10;;v9=8*v8;v7=v2/v2;return:v9} &
\texttt{v5=sin(5);v3=v1/9;v6=v3-v3;v2=v7-v6;return:v2} &
\texttt{v6=cos(7);v8=v6/9;v1=7*v8;v7=v6+v8;return:v7} \\
\texttt{\color{red}v5=exp(v0;v4=sin(v0);v3=8*v1;v7=v3/v2;return:v9} &
\texttt{v1=sin(1);{\color{blue}v5=v5/2};v6=v2-v5;v2=v0-v6;return:v2}&
\texttt{v6=exp(v0);v8=v6/2;v9=6*v8;v7=v9+v9;return:v7} \\
\texttt{v5=exp(v0);v1=sin(1);{\color{blue}v5=2*v3};{\color{blue}v7=v3+v8};return:v7} &
\texttt{v1=sin(1);v7=v8/2;{\color{blue}v8=v7/v9};{\color{blue}v4=v4-v8};return:v4} &
\texttt{v6=exp(v0);v8=v6-4;v9=6*v8;v7=v9+v8;return:v7} \\
\texttt{v4=exp(v0);{\color{blue}v1=v7-8};{\color{blue}v9=8*v3};{\color{blue}v7=v3+v8};return:v7} &
\texttt{v8=sin(1);v2=v8/2;{\color{blue}v8=v0/v9};{\color{blue}v4=v4-v8};return:v4} &
\texttt{v6=exp(v0);v8=v6-4;v9=6*v6;v7=v9+v8;return:v7}\\
\texttt{v4=exp(v0);{\color{blue}v9=v6-8};{\color{blue}v6=2*v5};{\color{blue}v7=v3+v8};return:v7} &
\texttt{v6=exp(v0);v2=v6-4;{\color{blue}v8=v0*v1};{\color{blue}v7=v4+v8};return:v7} &
\texttt{v6=exp(v0);v8=v6-4;v4=4*v6;v7=v4+v8;return:v7} \\
\texttt{\color{brown}v6=exp(v0);v8=v6-4;v4=4*v8;v7=v4+v8;return:v7} &
\texttt{\color{brown}v6=exp(v0);v8=v6-4;v4=4*v8;v7=v4+v8;return:v7} &
\texttt{\color{brown}v6=exp(v0);v8=v6-4;v4=4*v8;v7=v4+v8;return:v7} \\
\bottomrule
\end{tabular}
}
\caption{Interpolation between two valid programs ({\color{brown}the top and bottom ones in brown}) where each program occupies a row.
{\color{red} Programs in red are with syntax errors}.
{\color{blue} Statements in blue are with semantic errors} such as referring to unknown variables.
Rows without coloring are correct programs.
Observe that when a model passes points in its latent space, our proposed SD-VAE enforces both syntactic and semantic constraints while making visually more smooth interpolation.
In contrast, CVAE makes both kinds of mistakes, GVAE avoids syntactic errors but still produces semantic errors, and both methods produce subjectively less smooth interpolations.
}
\label{table:interpolation}
\end{table*}
\textbf{SMILES\ \ }
For molecules, we visualize the latent space in 2 dimensions. We first embed a random molecule from the dataset into the latent space. Then we randomly generate 2 orthogonal unit vectors, stacked as a matrix $A$. To get the latent representations of the neighborhood, we interpolate on a 2-D grid and project back to the latent space with the pseudo-inverse of $A$. Finally, we show the decoded molecules.
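A sketch of the grid construction: for a matrix $A$ with orthonormal rows, the pseudo-inverse is simply $A^{\top}$, so mapping a 2-D grid coordinate back to the latent space reduces to adding scaled direction vectors. The latent dimension, grid size, and step size below are illustrative choices:

```python
import math
import random

def orthonormal_pair(d, rng):
    """Two random orthonormal direction vectors in R^d via Gram-Schmidt."""
    u = [rng.gauss(0, 1) for _ in range(d)]
    v = [rng.gauss(0, 1) for _ in range(d)]
    nu = math.sqrt(sum(x * x for x in u))
    u = [x / nu for x in u]
    dot = sum(a * b for a, b in zip(u, v))
    v = [b - dot * a for a, b in zip(u, v)]   # remove the component along u
    nv = math.sqrt(sum(x * x for x in v))
    return u, [x / nv for x in v]

def latent_grid(z0, u, v, half=2, step=1.0):
    """Map a 2-D grid around the origin into latent space around z0.
    With orthonormal rows u, v the pseudo-inverse of A = [u; v] is A^T,
    so grid coordinate (i, j) maps to z0 + i*step*u + j*step*v."""
    return [[[z + i * step * a + j * step * b
              for z, a, b in zip(z0, u, v)]
             for j in range(-half, half + 1)]
            for i in range(-half, half + 1)]

rng = random.Random(0)
u, v = orthonormal_pair(8, rng)
grid = latent_grid([0.0] * 8, u, v)   # a 5x5 grid of latent vectors
```

Each grid cell's latent vector is then decoded, yielding the neighborhood visualizations of Figure~\ref{figure:latent-space-vis}.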
In Figure~\ref{figure:latent-space-vis},
we present two of such grid visualizations.
Subjectively compared with the figures in \cite{KusPaiHer17},
our visualization is characterized by smoother differences between neighboring molecules and more complicated decoded structures.
\begin{figure}[h!]
\centering
\includegraphics[width=0.48\textwidth]{figs/latent-1.pdf}
\hfill
\includegraphics[width=0.48\textwidth]{figs/latent-2.pdf}
\caption{Latent Space visualization. We start from the center molecule and decode the neighborhood latent vectors (neighborhood in projected 2D space). }
\label{figure:latent-space-vis}
\vspace{-3mm}
\end{figure}
\section{Conclusion}\label{sec:conclusion}
In this paper we propose a new method to tackle the challenge of addressing both syntax and semantic constraints in generative models for structured data.
The newly proposed \emph{stochastic lazy attribute}
provides a systematic conversion of offline syntax and semantics checks into online guidance for stochastic generation,
and empirically shows consistent and significant improvement over previous models,
while requiring a computational cost similar to previous models.
In future work, we would like to refine the formalization on more theoretical grounds,
and investigate its application to a more diverse set of data modalities.
\vspace{-2mm}
\subsubsection*{Acknowledgments}
\vspace{-2mm}
This project was supported in part by NSF IIS-1218749, NIH BIGDATA 1R01GM108341, NSF CAREER IIS-1350983, NSF IIS-1639792 EAGER, NSF CNS-1704701, ONR N00014-15-1-2340, NSF IIS-1546113, DBI-1355990, Intel ISTC, NVIDIA and Amazon AWS.
\section{Introduction}
New generations of cellular mobile networks, called fifth-generation radios, have been studied intensively both in industry and academia.
The new generation of networks actively exploits radio frequencies above 6~GHz for higher peak data rates and network throughput.
One of the practical issues in cellular mobile radios operating at higher frequency bands is their coverage.
Higher radio frequencies typically yield smaller service coverage than lower frequencies, particularly in non-line-of-sight conditions, owing to greater diffraction and penetration losses in radio wave propagation.
Mathematical models and tools to predict losses due to radio propagation have been studied extensively in previous years, e.g.,~\cite{Haneda16_VTCSpring, TR38901}.
In contrast, possible gains or losses attributed to antennas implemented on mobile phones have received less attention, particularly in the presence of the fingers and bodies of mobile users~\cite{Hejselbaek17_TAP, Syrytsin17_TAP, Zhao17_AWPL, Zhao17_TAP}.
This paper therefore sheds light on the gains of practical mobile phone antennas operating at frequencies above 6~GHz, particularly at millimeter-waves (mm-waves).
In particular, we study the achievable gain of $60$~GHz antenna arrays on a mobile phone operating as a phased array with a single transceiver chain.
The study considers realistic operational conditions, including the influence of a finger, the human body and multipath channels.
A novel {\it total array gain} is studied in this article, defined as the power that a mobile array receives from multipath radio channels in excess of that received by a mobile equipped with an idealistic omni-directional antenna. The definition is analogous to the mean effective gain of a single-element antenna~\cite{Vaughan87_TVT, Taga90_TVT}, but the present paper studies the gain of an array.
The pathloss of a radio channel is called the omni-directional pathloss when the base station (BS) and mobile station (MS) are equipped with omni-directional antennas.
The omni-directional pathloss has been studied extensively in mm-wave channel modeling, including the recently established 3GPP standard channel model for new radios~\cite{TR38901}.
The total array gain together with the omni-directional pathloss therefore provides the pathloss when a mobile is equipped with an antenna array.
In contrast to the conventional array gain, which cannot be defined uniquely because it depends on the choice of a reference single-element antenna in the array, the total array gain is uniquely defined and allows us to compare arrays formed by different antenna elements.
We evaluate the total array gain of two $60$~GHz antenna arrays on a mobile: an $8$-element uniform linear array (ULA) and a distributed array (DA), both consisting of patch antennas.
The evaluation is based on electromagnetic field simulations for antennas and measurement-based ray-tracing propagation simulations in a small-cell scenario at an airport.
Finger and human torso effects on radiation characteristics of the considered mobile phone antennas are taken into account.
The rest of the paper is organized as follows: Section~II introduces the two antenna arrays on a mobile phone and their varying postures. The effects of a finger are briefly addressed. Section~III describes ray-tracing supported by experiments, along with the body shadowing. Section~IV first derives the total array gain and then compares it for the two considered antenna arrays. Section~V summarizes the main conclusions.
\section{60 GHz Mobile Phone Antennas}
\subsection{Antenna Arrays}
We consider two antenna arrays as practical examples of $60$~GHz arrays for mobile phones: 1) the ULA and 2) the DA. Both are realized with square patch antennas oriented to radiate slanted polarizations when the mobile phone is in the standing position shown in Fig.~\ref{fig:antenna_configuration}. The neighboring patch antennas of the ULA radiate the same polarization and are separated by half the free-space wavelength. The patch array is installed at the top-left corner of the mobile phone chassis as shown in Fig.~\ref{fig:antenna_configuration_ULA}. In the DA, the patch antennas are installed on each side of the two top corners of the chassis as illustrated in Figs.~\ref{fig:antenna_configuration_DA_left} and \ref{fig:antenna_configuration_DA_right}. The mobile phone has dimensions of $150 \times 75 \times 8~{\rm mm}^3$ in length, width and thickness and serves as a ground plane for the antennas. The patch antennas are simulated on a $0.127$~mm thick Rogers 5880 substrate with a relative permittivity of $\epsilon_{\rm r} = 2.2$ and a $17~{\rm \mu m}$-thick copper layer. The antennas have a broadside gain of $G_{\rm b}=8$~dB in the best case. The whole structure was simulated in CST Microwave Studio. The far-field radiation patterns of the antennas show that the maximum backlobe radiation is weaker than the main-lobe gain by at least $20$~dB, thanks to the mobile phone chassis serving as an electrically large ground plane. The DA illuminates the entire solid angle more uniformly, since the broadsides of its antenna elements point in different directions. It is, however, harder for the DA than for the ULA to leverage the array gain properly, because the DA elements do not illuminate space with similar gains.
\begin{figure}[t]
\begin{center}
\subfigure[]{\includegraphics[scale=0.16]{antenna_configuration_ULA_slanted.pdf}
\label{fig:antenna_configuration_ULA}}\\
\subfigure[]{\includegraphics[scale=0.13]{antenna_configuration_DA_left_slanted.pdf}
\label{fig:antenna_configuration_DA_left}}
\subfigure[]{\includegraphics[scale=0.12]{antenna_configuration_DA_right_slanted.pdf}
\label{fig:antenna_configuration_DA_right}}
\caption{Arrangement of the antenna arrays on a mobile phone chassis with dimensions of $150 \times 75 \times 8~{\rm mm}^3$ in length, width and thickness: (a) ULA, (b) and (c) DA. The ULA radiates slanted polarizations to the ground, when the mobile phone is at this standing position.}
\label{fig:antenna_configuration}
\end{center}
\end{figure}
\subsection{Orientations of the Mobile Phone}
Different orientations of the mobile phone are taken into account to analyze realistic operational scenarios. Figure~\ref{fig:antenna_geometry} shows the coordinate system and the base orientation of the mobile phone, where the long side of the mobile phone chassis is along the $y$-axis, while the display faces the $+z$-direction. The orientation of the mobile phone is determined by rotating the coordinate system through three angles, $\phi_0$, $\theta_0$ and $\chi_0$ in Fig.~\ref{fig:antenna_geometry}, while fixing the mobile phone. The three angles rotate the original coordinate system $(x,y,z)$ around the $z$, $y_1$ and $z_2$ axes, respectively, so that the new coordinate system becomes $(x',y',z')$~\cite{Hansen98_book}, Appendix A2. The three angles $(\phi_0,\theta_0,\chi_0)$ are set such that the longitudinal axis of the mobile is along a line of every $45^\circ$ in azimuth between $0^\circ$ and $315^\circ$ and of polar angles $0^\circ, 45^\circ$ and $90^\circ$. The radiation patterns of the antenna elements in the rotated coordinate system, $\boldsymbol{E}_{\rm m}$, are derived from those of the base orientation in the original coordinate system as detailed in the Appendix of the present paper.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.4]{antenna_geometry.pdf}
\caption{A coordinate system and three rotational axes defining the mobile phone orientation.}
\label{fig:antenna_geometry}
\end{center}
\end{figure}
\subsection{Finger Effects}
We also consider cases where a finger covers one of the antenna elements. A finger covering an antenna reduces the broadside gain of the patch antenna by 18 to 25~dB~\cite{Heino16_EuCAP}.
The finger is modeled as a single layer with an elliptic cross section and is separated by $3$~mm from the antenna in our simulations to avoid severe reduction of the radiation efficiency. When a fingertip points at one of the antennas, the other antennas also suffer from shadowing due to the finger, but not as severely as the one covered by the finger. The worst input impedance matching at $60$~GHz among the patch antennas under the presence of a finger is a return loss of $9.2$~dB for the ULA and $14.3$~dB for the DA. The return losses are much greater without a finger. The worst isolation between patch antennas under the presence of a finger is $16.0$ and $35.3$~dB for the ULA and DA, respectively. The isolation is higher if there is no finger. The finger does not have much effect on the matching and isolation, as we ensured a minimum clearance of $3$~mm between the finger and the patch antennas.
\section{Radio Propagation Simulations}
We introduce a channel sounding campaign and a ray-tracer in this section. The measured channels from the sounding campaign serve as the experimental reference that our ray-tracer aims to reproduce. Once the ray-tracer is qualified to produce realistic channels, it is used to generate multipath components (MPCs) at different mobile locations in the same site. Only a few technical details of the channel sounding are given here, followed by a calibration of our ray-tracer for the same site. The latter is new material that has not been published in the literature and hence is the main focus of the present section.
\begin{figure}[tb]
\centering
\includegraphics[scale=0.45]{simulation_measurement_airport_floorplan.pdf}
\caption{Floor plan of the small-cell site in an airport.}
\label{fig:simulation_airport_floorplan}
\end{figure}
\subsection{Channel Sounding}
We chose the check-in hall of an airport as a representative small-cell scenario. A summary of the multi-frequency channel sounder and of the channel sounding in the airport is given in~\cite{Vehmas16_VTCF}. A floor plan of the airport check-in hall where the channel sounding was performed is depicted in Fig.~\ref{fig:simulation_airport_floorplan}. To avoid human blockage effects, the measurements were conducted in the evening and at night. The transmit (Tx) antenna was placed at 12 different locations altogether, though not all the Tx locations are shown in Fig.~\ref{fig:simulation_airport_floorplan}, while the receive (Rx) antenna was fixed near one of the walls overlooking the hall from an elevated floor. The measurements covered both line-of-sight (LOS) and non-LOS (NLOS) channels, with three-dimensional link distances varying from 18.8 to 107.2~m. The Tx and Rx antennas were $1.58$ and $5.68$~m above the floor of the main hall, respectively. The Tx antenna is an omni-directional biconical antenna with $2$~dBi gain and an elevation beamwidth of $60^\circ$, while the Rx antenna is a directive sectorial horn antenna with $19$~dBi gain and $10^\circ$ and $40^\circ$ azimuth and elevation beamwidths, respectively. Both antennas mainly radiate and receive the vertical polarization. The broadside of the Rx antenna was scanned over the azimuth angle from $20^\circ$ to $160^\circ$ in $5^\circ$ steps, and at $0^\circ$ and $-20^\circ$ elevation angles. The sounding was performed with $4$~GHz of bandwidth centered at $61$~GHz. Power delay profiles (PDPs) are synthesized from a set of wideband directional channel soundings~\cite{Vehmas16_VTCF} for calibration of the ray-tracer detailed in the next subsection.
\subsection{Optimization of the Ray-Tracer}
Our in-house ray-tracer simulates multipath channels for a large number of links between a BS and MS. The ray-tracer is based on accurate descriptions of the environment in the form of point clouds, obtained by laser scanning~\cite{Jarvelainen16_TAP}, and is capable of simulating the relevant propagation mechanisms such as specular reflections, diffraction, diffuse scattering and shadowing. Specular reflections are first identified by finding points lying inside the Fresnel zone between the MS and the image of the BS, and then checking whether a normal vector of the local surface formed by a group of points supports the specular reflection. Once identified, the reflection coefficients are calculated using the Fresnel equations. Shadowing objects are similarly detected by searching for points within the Fresnel zone of a given path. The ray-tracer provides the azimuth and polar angles of the MPCs arriving at the MS as well as the co-polarized magnitude of the path gain and the propagation delay time, i.e., $\left\{ \phi_{l}, \theta_{l}, \alpha^{VV}_{l}, \tau_l \right\}_{l=0}^{L_{\rm p}}$, as outputs, where $L_{\rm p}$ is the number of MPCs in a BS-MS link; the path index $l=0$ is allocated to the LOS path. In the present simulations, we take into account the LOS path as well as first- and second-order specular reflections. Diffuse scattering is found to have a minor effect in the present case~\cite{Vehmas16_VTCF}.
In order to ensure that the ray-tracer reproduces the measured channels as accurately as possible, we set the permittivity $\varepsilon_{\rm r}$ of objects in the environment such that the reproduced channels resemble the measured ones. The optimum $\varepsilon_{\rm r}$ is found by first calculating the path amplitudes with different $\varepsilon_{\rm r}$ ranging from 2 to 6, then deriving the band-limited PDP from the paths and finally minimizing the difference between the measured and simulated delay spreads. The shadowing attenuation loss $L_{\rm a}$ for small objects in the environment is chosen heuristically, resulting in $L_{\rm a}=20$~dB. Paths propagating through walls were assigned very high attenuation losses because these paths do not contribute to the received power. The optimization yielded $\varepsilon_{\rm r}=3.6$, leading to the agreement of the measured and simulated PDPs shown in Fig.~\ref{fig:pdp_airport_meas_sim} for one of the measured links. A comparison between measured and simulated pathlosses, mean delays and delay spreads is shown in Fig.~\ref{fig:simulation_optimization_airport_60GHz}.
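The calibration described above amounts to a one-dimensional grid search over the permittivity. A minimal sketch, assuming a hypothetical \texttt{simulate\_delay\_spread} callback that wraps the ray-tracer and returns the simulated delay spread for a given $\varepsilon_{\rm r}$:

```python
import numpy as np

def optimize_permittivity(simulate_delay_spread, measured_ds,
                          eps_grid=np.arange(2.0, 6.05, 0.1)):
    """Grid search for the permittivity minimizing the delay-spread mismatch.

    simulate_delay_spread(eps_r) is a hypothetical callback wrapping the
    ray-tracer; measured_ds is the measured delay spread for the same link.
    """
    errors = [abs(simulate_delay_spread(e) - measured_ds) for e in eps_grid]
    return float(eps_grid[int(np.argmin(errors))])
```

In the paper the search is over $\varepsilon_{\rm r} \in [2, 6]$ and yields $\varepsilon_{\rm r}=3.6$; the grid step here is an arbitrary choice.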
\begin{figure}[t]
\begin{center}
\subfigure[]{\includegraphics[scale=0.4]{pdp_airport_meas_sim.pdf}
\label{fig:pdp_airport_meas_sim}}
\subfigure[]{\includegraphics[scale=0.33]{simulation_optimization_airport_60GHz.pdf}
\label{fig:simulation_optimization_airport_60GHz}}
\caption{(a) Measured and simulated PDP for Tx4-Rx link. (b) Large-scale parameters for measured and optimized simulated channels at 60 GHz.}
\label{fig:meas_sim}
\end{center}
\end{figure}
With the optimum parameters, MPCs are generated for the links defined by the BS and MS locations in Fig.~\ref{fig:simulation_airport_floorplan}. The BS is placed $1$~m from a wall at a height of $5.7$~m. The mobile is placed at a height of $1.5$~m at every $0.6$~m along a route.
In total, we simulated 2639 links, including $1816$ LOS and $823$ obstructed LOS (OLOS) links. The polarimetric complex amplitudes of each MPC are generated statistically from the $\alpha^{VV}$ estimates of the ray-tracer as $\alpha^{HH}= \alpha^{VV}$ and
\begin{equation}
\alpha^{HV} = \alpha^{VH} = \alpha^{VV}/\mathrm{XPR},
\end{equation}
where $\alpha^{VH}$, for example, denotes the complex amplitude of a horizontally-transmitted and vertically-received path; $\mathrm{XPR}$ is the cross-polarization ratio (XPR) of an MPC, modeled from polarimetric channel sounding~\cite{Karttunen17_WCL} as
\begin{equation}
\mathrm{XPR}|_{\mathrm{dB}} \sim \mathcal{N}(\mu_2(L_\mathrm{ex}),\sigma_2^2),
\label{eq:XPR2}
\end{equation}
\begin{equation}
\mu_2(L_\mathrm{ex}) = \begin{cases} \alpha_2\cdot L_\mathrm{ex}+\beta_2, & \mbox{if } L_\mathrm{ex} \leq -\beta_2/\alpha_2 \\ 0, & \mbox{if } L_\mathrm{ex} > -\beta_2/\alpha_2 \end{cases},
\label{eq:mu2}
\end{equation}
where $\mu_2(L_\mathrm{ex})$ is the mean and $\sigma_2^2$ is the variance of the XPR model; $\alpha_2=-0.6, \beta_2=35$ and $\sigma_2=4$ were used~\cite{Karttunen17_WCL}. The excess loss $L_\mathrm{ex}$ of an MPC is defined as $L_\mathrm{ex}=|\alpha^{VV}|^2-\mathrm{FSPL}(\tau)$, where $\mathrm{FSPL}(\tau)$ is the free-space path loss. For the LOS path $l=0$, ${\rm XPR}=\infty$.
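The XPR model above can be sketched in a few lines. The helper names are ours and, following the text literally, the dB-valued XPR draw is converted to a linear ratio and applied directly to the co-polarized amplitude:

```python
import numpy as np

# Constants of the XPR model as quoted in the text (Karttunen et al.).
ALPHA2, BETA2, SIGMA2 = -0.6, 35.0, 4.0

def xpr_mean_db(l_ex_db):
    """Mean XPR in dB as a function of the excess loss L_ex in dB (mu_2)."""
    if l_ex_db <= -BETA2 / ALPHA2:   # kink at L_ex = -beta2/alpha2
        return ALPHA2 * l_ex_db + BETA2
    return 0.0

def polarimetric_amplitudes(alpha_vv, l_ex_db, los=False, rng=None):
    """Expand the co-polarized amplitude alpha_vv into [[VV, VH], [HV, HH]]."""
    if rng is None:
        rng = np.random.default_rng()
    if los:  # LOS path: XPR = infinity, no cross-polar leakage
        cross = 0.0
    else:
        xpr_db = rng.normal(xpr_mean_db(l_ex_db), SIGMA2)
        cross = alpha_vv / 10.0 ** (xpr_db / 10.0)
    return np.array([[alpha_vv, cross], [cross, alpha_vv]])
```

Whether the linear XPR should divide the amplitude or its power is not spelled out in the text; the sketch follows the equation for $\alpha^{HV}$ as written.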
\subsection{Human Torso Shadowing}
At $60$~GHz, it is necessary to include the link blockage effect due to the human body holding the mobile phone.
A simple canonical model of the link blockage due to a human body is used.
The relative geometry of the human body and the mobile phone is defined in Fig.~\ref{fig:shadowing_geometry}, where the width of the human body and the separation between the body and the mobile phone are $0.5$ and $0.3$~m, respectively.
The human blockage loss is defined by
\begin{equation}
L_{\rm body} = \max \left( 0, L_{\rm b} \left\{ 1- \left( \frac{\phi-(\phi_0-\pi)}{\phi_{\rm b}} \right)^2 \right\} \right) {\rm dB},
\end{equation}
where $\phi$ is the azimuth angle of arrival of an MPC, $\phi_{\rm b} = 39.8^\circ$ is the azimuth angle of the body torso seen from the mobile phone as defined in Fig.~\ref{fig:shadowing_geometry}, and $L_{\rm b} = 20$~dB is the maximum body shadowing loss. The model provides different losses depending on the azimuth angles of arrival of the MPCs.
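As a sketch, the parabolic blockage model reads as follows (angles in radians; azimuth wrap-around is ignored for brevity):

```python
import numpy as np

def body_shadowing_loss_db(phi, phi0, l_b=20.0, phi_b=np.radians(39.8)):
    """Body-blockage loss in dB for an MPC arriving from azimuth phi.

    phi0 is the mobile-orientation azimuth; the torso is centered at
    phi0 - pi and the loss tapers parabolically to zero at +/- phi_b.
    """
    u = (phi - (phi0 - np.pi)) / phi_b
    return max(0.0, l_b * (1.0 - u ** 2))
```

The loss peaks at $L_{\rm b}=20$~dB in the torso direction and is zero for MPCs arriving outside the angular span of the torso.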
\begin{figure}[t]
\begin{center}
\subfigure[]{\includegraphics[scale=0.4]{shadowing_geometry.pdf}
\label{fig:shadowing_geometry}}
\subfigure[]{\includegraphics[scale=0.13]{shadowing_loss.pdf}
\label{fig:shadowing_loss}}
\caption{(a) Geometry of the mobile phone and human torso. (b) Canonical model of shadowing losses at 60~GHz for a human torso.}
\label{fig:shadowing}
\end{center}
\end{figure}
\section{Total Array Gain}
\subsection{Definition}
It is possible to define the total array gain of the antenna at the MS locations once we have defined the polarimetric complex gains of the antenna radiation patterns, $\boldsymbol{E}_{\rm m}$, and the parameters of the MPCs, $\left\{ \phi_{l}, \theta_{l}, \boldsymbol{\alpha}_{l}, \tau_l \right\}_{l=0}^{L_{\rm p}}$, where $\boldsymbol{\alpha}_{l} \in \mathbb{C}^{2\times 2}$ is the polarimetric complex gain of the $l$-th MPC. Assuming downlink, the output signal $\boldsymbol{y}$ observed at a mobile antenna array is expressed as
\begin{equation}
\boldsymbol{y} = \boldsymbol{h}x + \boldsymbol{n},
\end{equation}
where $x$ is the input voltage to the base station antenna, and $\boldsymbol{n}, \boldsymbol{h} \in \mathbb{C}^N$ are vectors comprised of the noise voltages observed at the antenna array and the radio channel transfer functions, respectively; $1 \le n \le N$ is the index of an MS antenna. The $n$-th entry of $\boldsymbol h$ is given by
\begin{equation}
\label{eq:h}
h_n = \sum_{l=0}^{L_{\rm p}} \boldsymbol{E}^{\rm H}_{{\rm m},n}(\phi_l, \theta_l) \boldsymbol{\alpha}_l \boldsymbol{E}_{\rm b} e^{\j \xi_l},
\end{equation}
where $\boldsymbol{E}_{{\rm m},n}$ is the polarimetric complex radiation pattern of the $n$-th antenna at the mobile and $\cdot^{\rm H}$ is the Hermitian transpose; see the Appendix for their definitions. $\boldsymbol{E}_{\rm b} = \left[ 1 ~ 1 \right]^{\rm T}/\sqrt{2}$ represents an ideal dual-polarized isotropic antenna at the BS, $\xi_l$ is a uniformly distributed random phase over $[0, 2\pi)$, which is set to $0$ for the LOS path, and $\cdot^{\rm T}$ denotes the transpose. Adding the random phases leads to small-scale realizations of $\boldsymbol h$. We consider maximum ratio combining (MRC), assuming that the moving speed of the mobile is modest so that instantaneous channel information is available at the MS, and that the link is noise-limited and hence the mobile aims at maximizing the signal-to-noise ratio. The combining weights are given by $\boldsymbol{w} = \boldsymbol{h}^{\rm H} / \| \boldsymbol{h} \|$, leading to the total array gain
\begin{equation}
G_{\rm a} = 10 \log_{10} \left( E_{\boldsymbol{h}} \left[ \left| \boldsymbol{w}\boldsymbol{h} \right|^2 \right] / P_{\rm o} \right) {\rm dB},
\label{eq:array_gain}
\end{equation}
where $E_{\boldsymbol{h}} [\cdot]$ denotes ensemble averaging over the small-scale realizations of $\boldsymbol{h}$ and $P_{\rm o}$ is the omni-directional link gain given by
\begin{equation}
\label{eq:omni_pathloss}
P_{\rm o} = \sum^{L_{\rm p}}_{l=0} |\boldsymbol{\alpha}_l|^2,
\end{equation}
for a single mobile location. The total array gain includes the averaged gains of all antenna elements in the array, as well as the gains attributed to signal precoding and combining. The total array gain is distinct from the conventional array gain in that its value is defined uniquely. The conventional array gain is, in practice, not defined uniquely, as it depends on the choice of a reference antenna element in the array. The powers received by the individual antenna elements of an array vary significantly in practice, depending on their polarizations, orientations and element types. The total array gain allows a fair comparison of phased antenna arrays consisting of different element types and configurations.
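As a sketch of how $G_{\rm a}$ can be evaluated numerically from the MPC parameters and element patterns (array shapes and function names are our assumptions; with MRC weights $\boldsymbol{w} = \boldsymbol{h}^{\rm H}/\|\boldsymbol{h}\|$, the combined power per realization reduces to $\|\boldsymbol{h}\|^2$):

```python
import numpy as np

E_B = np.array([1.0, 1.0]) / np.sqrt(2)  # ideal dual-polarized isotropic BS antenna

def channel_vector(patterns, alphas, rng=None, los_first=True):
    """Synthesize one small-scale realization of h.

    patterns: (N, L, 2) complex element patterns E_{m,n} at the MPC directions.
    alphas:   (L, 2, 2) polarimetric MPC amplitude matrices.
    """
    if rng is None:
        rng = np.random.default_rng()
    n_ant, n_path, _ = patterns.shape
    xi = rng.uniform(0.0, 2.0 * np.pi, n_path)
    if los_first:
        xi[0] = 0.0  # fixed phase for the LOS path
    # h_n = sum_l E_{m,n}(l)^H @ alpha_l @ E_b * exp(j xi_l)
    per_path = np.einsum('nlp,lpq,q->nl', patterns.conj(), alphas, E_B)
    return per_path @ np.exp(1j * xi)

def total_array_gain_db(h_realizations, p_omni):
    """G_a: ensemble-averaged MRC output power over the omni link gain P_o."""
    p_mrc = np.mean(np.sum(np.abs(h_realizations) ** 2, axis=1))  # ||h||^2
    return 10.0 * np.log10(p_mrc / p_omni)
```

Averaging $\|\boldsymbol{h}\|^2$ over many random-phase realizations implements the ensemble average in the definition of $G_{\rm a}$.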
\begin{figure}[t]
\begin{center}
\subfigure[]{\includegraphics[scale=0.13]{gain_unif_wo.pdf}
\label{fig:gain_unif_wo}}
\subfigure[]{\includegraphics[scale=0.13]{gain_unif_finger_body.pdf}
\label{fig:gain_unif_finger_body}}
\subfigure[]{\includegraphics[scale=0.13]{gain_dist_finger_body.pdf}
\label{fig:gain_dist_finger_body}}
\subfigure[]{\includegraphics[scale=0.13]{histogram_array_gain.pdf}
\label{fig:histogram_array_gain}}
\caption{(a) Total array gain of the ULA in free space. (b) The same with human torso and finger shadowing, and (c) total array gain of the DA with human and finger shadowing. (d) Histogram of the total array gain. In (a)--(c), the red solid and dashed lines are the median of the total array gain across different antenna orientations and the maximum gain of the $8$-element patch antenna array, respectively. In (d), cases 1, 2 and 3 correspond to the array in free space, with human body shadowing, and with body and finger shadowing, respectively.}
\label{fig:result_path_gain}
\end{center}
\end{figure}
\begin{table}[t]
\begin{center}
{\caption{Statistics of the total array gain for the ULA and DA across 24 different orientations and 2639 MS locations. Cases 1, 2 and 3 correspond to the array in free space, with human body shadowing, and with body and finger shadowing, respectively.}
\label{table:MEG}}
\begin{tabular}{c|ccc|ccc} \hline
Unit: dB & \multicolumn{3}{c|}{Uniform linear array} & \multicolumn{3}{c}{Distributed array} \\
& Case 1 & Case 2 & Case 3 & Case 1 & Case 2 & Case 3 \\ \hline
LOS &&&&&&\\
peak & $14.9$ & $12.9$ & $8.4$ & $10.3$ & $9.7$ & $10.2$ \\
median & $3.8$ & $0.8$ & $-4.1$ & $5.8$ & $4.6$ & $4.5$ \\
outage & $-12.0$ & $-15.5$ & $-14.1$ & $-4.8$ & $-10.1$ & $-10.2$ \\ \hline
OLOS &&&&&&\\
peak & $11.8$ & $11.6$ & $6.3$ & $10.1$ & $9.3$ & $9.6$ \\
median & $2.7$ & $1.5$ & $-2.7$ & $5.9$ & $5.0$ & $4.5$ \\
outage & $-8.1$ & $-12.4$ & $-12.0$ & $-5.1$ & $-6.7$ & $-7.4$ \\ \hline
\end{tabular}
\end{center}
\end{table}
\subsection{Results and Discussions}
First, the total array gain of the ULA without human body and finger effects, called ``Case 1'' hereinafter, is shown in Fig.~\ref{fig:gain_unif_wo} for the 2639 MS locations of the small-cell site in the airport. The black lines correspond to the total array gains at particular antenna orientations after MRC~\eqref{eq:array_gain}. The red solid and dashed lines correspond to the median gain and the maximum total array gain of an $8$-element patch array, $G_{\rm b}+10\log_{10} 8 = 17.2$~dB, respectively. The antenna orientation causes a $\pm 15$~dB variation of the total array gain around the median.
The total array gain of the ULA when both the human body and finger shadowing are considered, called ``Case 3'' in the following, is illustrated in Fig.~\ref{fig:gain_unif_finger_body}. Table~\ref{table:MEG} summarizes the total array gain at the peak, median and outage levels for the ULA in the three cases; ``Case 2'' in the table corresponds to links with body shadowing only. The peak and outage levels are defined as the largest and smallest $2$~\% of the total array gains. The table shows that the median gain drops by about $8$~dB due to the body and finger compared to the free-space case. It is worth noting that the outage total array gain of the ULA improves when finger shadowing is present, because the finger guides the radiated energy to directions that it otherwise could not reach. Diffraction on the finger surface mainly produces such radiation~\cite{Heino16_EuCAP}.
The gain of the DA with human body and finger shadowing effects is shown in Fig.~\ref{fig:gain_dist_finger_body}. The right-most columns of Table~\ref{table:MEG} show the corresponding gain statistics. The results show a better median gain for the DA than for the ULA by about $8$~dB when both the body and finger are present. Even though the peak gain of the DA is smaller than that of the ULA, the median and outage gains of the DA are up to $8$~dB higher than those of the ULA, showing the robustness of the DA in capturing the energy delivered by MPCs across different orientations of the mobile phone.
Finally, Fig.~\ref{fig:histogram_array_gain} shows a histogram of the total array gain across different MS locations and orientations. The plot shows that the gains of the DA concentrate on the positive side, while those of the ULA are scattered more widely across both positive and negative values.
It is worth pointing out that the total array gains reported here assume idealistic lossless hardware for analog beamforming; practical beamforming with analog phase-shifting networks, whether passive or active, reduces the signal-to-noise ratio, and hence the total array gains of practical beamforming hardware will be smaller than the values presented in this paper.
\section{Concluding Remarks}
The present paper quantified the gains of mobile phone antenna arrays at $60$~GHz equipped with analog beamforming and a single baseband unit. The total array gain is defined as the power received by the array in excess of that received by a single-element isotropic antenna. The total array gain circumvents the ambiguity of the conventional array gain, which depends on the choice of a reference antenna element in the array. Our analysis revealed that the total array gain is higher for the $8$-element DA than for the ULA by up to $8$ and $6$~dB at the median and outage levels, respectively. The ULA achieves the maximum total array gain when there is a LOS to the BS and no human shadowing is involved, but there are many postures of the mobile phone, and hence of the antenna array, in which the ULA cannot see the LOS, leading to gains as low as $-15$~dB. On the other hand, the DA can always illuminate the entire solid angle, making it possible to keep the median and outage gains noticeably higher than those of the ULA. The present study therefore shows the robustness of the DA as an mm-wave antenna array on a mobile device. In forthcoming work, we will address the total array gain when the BS also performs beamforming.
\section*{Appendix: Rotation of a Mobile Phone}
When the mobile phone is in the base orientation shown in Fig.~\ref{fig:antenna_geometry}, we call the corresponding radiation patterns of the antenna elements the base patterns. We denote the base patterns by
$\boldsymbol{E}_{\rm o} = \left[ \boldsymbol{e}_{V}~\boldsymbol{e}_{H}\right]^T$, where $\boldsymbol{e}_{\alpha}$, $\alpha = V~{\rm or}~H$, denotes the far-field radiation pattern of the vertical or horizontal polarization. These refer to the electric field components tangential to the polar and azimuth angles of the antenna coordinate system, respectively. The radiation pattern vectors are defined by
\begin{equation}
\boldsymbol{e}_{\alpha} = \left[ E_{\alpha,\boldsymbol{\Gamma}_1} \cdots E_{\alpha,\boldsymbol{\Gamma}_l} \cdots E_{\alpha,\boldsymbol{\Gamma}_L} \right]^T \in \mathbb{C}^{L},
\end{equation}
where $\boldsymbol{\Gamma}_l = \left[ \theta_l ~ \phi_l \right]$ refers to the $l$-th pointing direction of the radiation pattern from the origin of the mobile phone, $1 \le l \le L$, and $E_{\alpha,\boldsymbol{\Gamma}}$ denotes the complex gain for the respective polarization and pointing direction. The base pattern can be decomposed into a series of spherical harmonics as
\begin{equation}
\label{eq:decomposition}
{\boldsymbol E}_{\rm o} = \frac{k}{\sqrt{\eta}} \boldsymbol{Fq},
\end{equation}
where $k$ is the wavenumber, $\eta$ is the wave impedance of vacuum, and ${\boldsymbol F} \in \mathbb{C}^{2L \times J}$ is a matrix containing the $\phi$ and $\theta$ field components of each spherical harmonic as
\begin{equation}
\label{eq:f}
\boldsymbol{F} =
\begin{pmatrix}
\boldsymbol{f}_{V,1-11} & \boldsymbol{f}_{V,2-11} & \cdots &
\boldsymbol{f}_{V,1mn} & \boldsymbol{f}_{V,2mn} & \cdots \\
\boldsymbol{f}_{H,1-11} & \boldsymbol{f}_{H,2-11} & \cdots &
\boldsymbol{f}_{H,1mn} & \boldsymbol{f}_{H,2mn} & \cdots \\
\end{pmatrix},
\end{equation}
where $-n \le m \le n$ and $1 \le n \le N$; $m$ and $n$ are the spherical wavemode indices and $N$ is the total number of considered $n$-modes. The total number of spherical wavemodes amounts to $J=2N(N+2)$~\cite{Hansen98_book}, p.~15. In~\eqref{eq:f},
\begin{equation}
\boldsymbol{f}_{\alpha,smn} = \left[ f_{\alpha,\boldsymbol{\Gamma}_1,smn} \cdots f_{\alpha,\boldsymbol{\Gamma}_l,smn} \cdots f_{\alpha,\boldsymbol{\Gamma}_L,smn} \right]^T \in \mathbb{C}^{L},
\end{equation}
where $s=1$ or $2$. Furthermore, under the $e^{\j \omega t}$ time convention where $\omega$ is an angular frequency of a radio frequency signal,\footnote{The time convention in~\cite{Hansen98_book} is $e^{-\j \omega t}$.}
\begin{eqnarray}
\label{eq:f1}
f_{V,\boldsymbol{\Gamma},1mn} & = & k_{mn} (-\j)^{n+1}
\left( \frac{\j m\bar{P}_n^{|m|}(\cos{\theta}) }{\sin{\theta}} \right), \\
f_{H,\boldsymbol{\Gamma},1mn} & = & k_{mn} (-\j)^{n+1}
\left( -\frac{\d }{\d \theta} \bar{P}_n^{|m|}(\cos{\theta}) \right), \\
f_{V,\boldsymbol{\Gamma},2mn} & = & k_{mn} (-\j)^n
\left( \frac{\d}{\d \theta} \bar{P}_n^{|m|}(\cos{\theta}) \right), \\
\label{eq:f4}
f_{H,\boldsymbol{\Gamma},2mn} & = & k_{mn} (-\j)^n
\left( \frac{\j m\bar{P}_n^{|m|}(\cos{\theta}) }{\sin{\theta}} \right), \\
k_{mn} & = & \sqrt{\frac{2}{n(n+1)}} \left( -\frac{m}{|m|} \right)^m e^{- \j m\phi} ,
\end{eqnarray}
where $\bar{P}_n^{|m|}(\cdot)$ is the normalized associated Legendre function of order $|m|$ and degree $n$.
Finally, in~\eqref{eq:decomposition}, ${\boldsymbol q} \in \mathbb{C}^{J}$ is a vector comprised of the spherical harmonics coefficients. The vector can be solved for in a least-squares manner using a pseudo-inverse of the matrix $\boldsymbol F$.
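The least-squares step can be sketched with a pseudo-inverse; the function name is ours:

```python
import numpy as np

def spherical_coefficients(F, E_o, k, eta):
    """Least-squares solve of E_o = (k / sqrt(eta)) F q for the coefficients q."""
    return np.linalg.pinv(F) @ (np.sqrt(eta) / k * E_o)
```

For an overdetermined $\boldsymbol F$ (more sampled directions than wavemodes, $2L > J$) this recovers the coefficient vector exactly when the pattern lies in the span of the considered modes.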
We now consider the rotation of the spherical harmonics~\eqref{eq:f1}-\eqref{eq:f4} through the Euler angles $\phi_0$, $\theta_0$ and $\chi_0$ that transform the original coordinate system $(x,y,z)$ into another coordinate system $(x', y', z')$ as defined in Fig.~\ref{fig:antenna_geometry}. The three angles are applied to rotate the original coordinate system around the $z$, $y_1$ and $z_2$ axes, in this order~\cite{Hansen98_book}, Appendix A2. The spherical harmonics in the rotated coordinate system are given by
\begin{equation}
f'_{\alpha, \boldsymbol{\Gamma}, smn} = \sum_{\mu = -n}^{n} e^{\j m \phi_0} d^n_{\mu m}(\theta_0) e^{\j \mu \chi_0} f_{\alpha, \boldsymbol{\Gamma}, s\mu n},
\end{equation}
where
\begin{eqnarray}
\nonumber
d^n_{\mu m}(\theta) & = & \left\{ \frac{(n+\mu)!(n-\mu)!}{(n+m)!(n-m)!} \right\}^{\frac{1}{2}} \left( \cos{\frac{\theta}{2}} \right)^{\mu +m} \times \\
& & \left( \sin{\frac{\theta}{2}} \right)^{\mu -m} P_{n-\mu}^{(\mu-m, \mu+m)}(\cos\theta),
\end{eqnarray}
where $P_{n-\mu}^{(\mu-m, \mu+m)}(\cos\theta)$ is the Jacobi polynomial~\cite{Hansen98_book}.
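A direct transcription of $d^n_{\mu m}(\theta)$ using SciPy's Jacobi polynomials; as written, the formula applies for $\mu \ge |m|$, with other index combinations obtained from the usual symmetry relations:

```python
import numpy as np
from math import factorial
from scipy.special import jacobi

def wigner_d(n, mu, m, theta):
    """Rotation coefficient d^n_{mu m}(theta) of the spherical wavemodes.

    Direct transcription of the Jacobi-polynomial formula; valid as written
    for mu >= |m| (negative exponents of sin/cos appear otherwise).
    """
    norm = np.sqrt(factorial(n + mu) * factorial(n - mu)
                   / (factorial(n + m) * factorial(n - m)))
    return (norm * np.cos(theta / 2) ** (mu + m)
                 * np.sin(theta / 2) ** (mu - m)
                 * jacobi(n - mu, mu - m, mu + m)(np.cos(theta)))
```

As quick checks, $d^1_{00}(\theta) = \cos\theta$ and $d^1_{11}(\theta) = \cos^2(\theta/2)$.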
Finally, the radiation patterns of the antenna elements on a rotated coordinate system are derived using the same spherical harmonics coefficients as
\begin{equation}
\label{eq:decomposition2}
{\boldsymbol E}_{\rm m} = \frac{k}{\sqrt{\eta}} \boldsymbol{F}'\boldsymbol{q},
\end{equation}
where $\boldsymbol{F}'$ is given in a similar manner to~\eqref{eq:f} with the spherical harmonics in the rotated coordinate system.
\section*{Acknowledgement}
The authors would like to acknowledge the financial support from the Academy of Finland research project ``Massive MIMO: Advanced Antennas, Systems and Signal Processing at mm-Waves (M3MIMO)" (decision \#288670), as well as the support from the Nokia Bell Labs, Finland.
\bibliographystyle{IEEEtran}
\section{Introduction}
In \cite{ILZ} the authors define Jones algebras as certain quotients of Temperley--Lieb algebras. They also show that these algebras may be identified with the endomorphism algebras over the quantum group for $\mathfrak{sl}_2$ of fusion tensor powers of its natural vector representation. In this paper we study more generally such endomorphism algebras for arbitrary reductive groups in positive characteristic and their quantized root of unity analogues. We call these semisimple algebras {\it higher Jones algebras}. In the quantum $\mathfrak{sl}_2$-case they coincide with the Jones algebras from \cite{ILZ}. Our main result is an algorithm which determines the dimensions of the simple modules for these higher Jones algebras.
As the first important example, and as a role model for our study, consider the general linear group $GL_n$. Together with the corresponding root of unity quantum case we show how this gives interesting semisimple quotients of the group algebras for symmetric groups as well as of the corresponding Hecke algebras, and it allows us to determine the dimensions of certain classes of simple modules for these algebras. The results in these cases were obtained a long time ago, see \cite{Ma1} and \cite{Kl} for the modular case (using similar, respectively different, techniques), and \cite{GW} for the Hecke algebra case. Nevertheless we treat this case in some detail as it will serve as a model for the general case. In fact, one of our key points is to give a unified treatment of a number of other cases, showing that they can be handled in similar ways.
In order to explain our general strategy we pass now to an arbitrary reductive algebraic group $G$ defined over a field $k$ of prime characteristic.
The category of tilting modules for $G$ has a quotient category called the fusion category, see \cite{A92}, \cite{AS}. Objects in this category may be identified with certain semisimple modules for $G$, and the higher Jones algebras are then defined as the endomorphism algebras of such objects and of their fusion tensor powers. The main result in \cite{AST1} says that any endomorphism algebra of a tilting module for $G$ has a natural cellular algebra structure. We show that the higher Jones algebras inherit a cellular structure, and exploiting this we are able to compute the dimensions of their simple modules. This applies to the corresponding quantum case as well.
If $G$ is one of the classical groups $GL(V), SP(V)$ or $O(V)$ the module $V$ is (except for $O(V)$ in characteristic $2$ when $\dim V$ is odd) a tilting module for $G$ and via Schur--Weyl duality the endomorphism algebras of the fusion tensor powers of $V$ lead to semisimple quotient algebras of the group algebras of symmetric groups and Brauer algebras. This should be compared with the results in \cite{Ma1} and \cite{We2}, respectively. In the corresponding quantum cases we obtain semisimple quotients of Hecke algebras and $BMW$-algebras (compare \cite{Kl}, \cite{We1}, and \cite{We3}). In all cases we obtain effective algorithms for computing the dimensions
of the corresponding classes of simple modules. We have illustrated this by giving a number of concrete examples.
We want to point out that our approach, based on the theory of tilting modules and the cellular structure on their endomorphism rings as developed in \cite{AST1}, gives a general method for handling many more cases than those mentioned above. To mention just one big family of algebras which fits into our framework and which is similar in principle to the examples given so far: take again an arbitrary reductive algebraic group $G$ and let $V$ be a simple Weyl module for $G$. This could e.g. be a Weyl module with minuscule highest weight or with highest weight belonging to the bottom alcove of the dominant chamber. Then our approach applies to the fusion tensor powers of $V$, or more generally to fusion tensor products of finite families of simple Weyl modules. As a result we get algorithms for the dimensions of certain simple modules for the corresponding endomorphism algebras. However, in only a few cases are these algebras related to ``known" algebras, and we have chosen to limit ourselves to the above examples.
The paper is organized as follows. In Section 2 we first set up notation etc. for a general reductive group $G$ and then make this explicit in the case where $G$ is a group of classical type. We shall rely heavily on tilting modules for $G$ and in Section 3 we start out by recalling the basic facts that we will need from this theory. In addition, this section establishes results on fusion tensor products which we then apply in Section 4 to symmetric groups and in Section 5 to Brauer algebras. In Section 6 we turn to quantum groups at roots of unity. Here we prove results analogous to the ones we obtained in the modular case and in Sections 7 and 8 we apply these to Hecke algebras for symmetric groups and to $BMW$-algebras, respectively.
\vskip .5 cm
{\bf Acknowledgments. }
Thanks to the referee for a quick and careful reading as well as for her/his many useful comments and corrections.
\section{Reductive algebraic groups}
This section introduces notation and contains a few basic facts about reductive algebraic groups and their representations over a field of prime characteristic. We shall be rather brief and refer the reader to \cite{RAG} for further details. We also deduce some specific facts needed later on for each of the groups of classical type.
\subsection {General notation} \label{general}
Suppose $G$ is a connected reductive algebraic group over a field $k$. We assume $k$ has characteristic $p > 0$. In this paper all modules will be finite-dimensional.
Let $T$ be a maximal torus in $G$, and denote by $X =X(T)$ its character group. In the root system $R \subset X$ for $(G,T)$ we choose a set of positive roots $R^+$, and denote by $X^+ \subset X$ the corresponding cone of dominant characters. Then $R^+$ defines an ordering $\leq$ on $X$. It also determines uniquely a Borel subgroup $B$ whose roots are the set of negative roots $-R^+$.
Denote by $S$ the set of simple roots in $R^+$. The reflection $s_\alpha$ corresponding to $\alpha \in S$ is called a simple reflection. The set of simple reflections generates the Weyl group $W$ for $R$. We can identify $W$ with
$N_G(T)/T$. Then we see that $W$ acts naturally on $X$: $\lambda \mapsto w(\lambda), \lambda \in X, w \in W$.
In addition to this action of $W$ on $X$ we shall also consider the so-called dot-action given by: $w \cdot \lambda = w(\lambda + \rho) - \rho, w \in W, \lambda \in X$. As usual, $\rho$ is half the sum of the positive roots.
In the category of $G$-modules we have the family of standard modules $\Delta(\lambda)$, and likewise the family of costandard modules $\nabla(\lambda)$. Here $\lambda$ runs through the set of dominant weights $X^+$ and $\Delta(\lambda)$ is also known as the Weyl module with highest weight $\lambda$. The dual Weyl module $\nabla(\lambda)$ is then $\Delta(-w_0 \lambda)^*$ where $w_0$ denotes the longest element in $W$.
The simple module $L(\lambda)$ with highest weight $\lambda$ may be realized as the head of $\Delta(\lambda)$ as well as the socle of $\nabla(\lambda)$. Recall that there is, up to scalars, a unique non-zero homomorphism
\begin{equation} \label{can} c_\lambda: \Delta(\lambda) \rightarrow \nabla(\lambda),
\end{equation}
namely the one factoring through $L(\lambda)$.
A $G$-module $M$ is said to have a $\Delta$-filtration if it has submodules $M^{i}$ with
$$ 0=M^0 \subset M^1 \subset \dots \subset M^r = M, \text { where } M^{i+1}/M^{i} \simeq \Delta(\lambda_i) \text { for some } \lambda_i \in X^+.$$
One defines $\nabla$-filtrations similarly.
If $M$ has a $\Delta$-filtration we set $(M:\Delta(\mu))$ equal to the number of occurrences of $\Delta(\mu)$ in such a filtration (note that these numbers are uniquely determined and independent of which $\Delta$-filtration we choose). When $M$ has a $\nabla$-filtration the numbers $(M:\nabla(\mu))$ are defined analogously.
A crucial result concerning modules with a $\Delta$-filtration says that, if $M$ and $M'$ both have a $\Delta$-filtration, then so does $M\otimes M'$. This is the Wang--Donkin--Mathieu theorem, see \cite{Wa}, \cite{Do-book}, and \cite{Ma}.
For $n \in \Z$ and $\alpha \in S$ we denote by $s_{\alpha, n}$ the affine reflection determined by
$$s_{\alpha, n}(\lambda) = s_\alpha (\lambda) - np\alpha.$$
The affine Weyl group $W_p$ is the group generated by all $s_{\alpha, n}$ where $ \alpha \in S$ and $ n \in \Z$ (note that in the Bourbaki convention this is the affine Weyl group corresponding to the dual root system $R^\vee$).
The linkage principle \cite{A80a} says that, whenever $L(\lambda)$ and $L(\mu)$ are two composition factors of an indecomposable $G$-module, then $\mu \in W_p \cdot \lambda$. It follows that any $G$-module $M$ splits into a direct sum of submodules according to the orbits of $W_p$ in $X$. More precisely, if we set
$$A(p)= \{\lambda \in X | 0 < \langle \lambda + \rho, \alpha^\vee \rangle < p \text { for all } \alpha \in R^+\},$$
called the bottom dominant alcove, then the closure
$$\overline A(p) = \{\lambda \in X | 0 \leq \langle \lambda + \rho, \alpha^\vee \rangle \leq p \text { for all } \alpha \in R^+\}$$
is a fundamental domain for the dot-action of $W_p$ on $X$. We have
$$ M = \bigoplus_{\lambda \in \overline A(p)} M[\lambda], $$
with $M[\lambda]$ equal to the largest submodule in $M$ whose composition factors $L(\mu)$ all have $\mu \in W_p \cdot \lambda$.
\begin{remarkcounter} \label{alcove A}
\begin{enumerate}
\item [a)]
As an immediate consequence of the strong linkage principle \cite{A80a} we have
$$ \Delta(\lambda) = L(\lambda) = \nabla(\lambda) \text { for all } \lambda \in \overline A(p) \cap X^+.$$
\item [b)] We have $A(p) \neq \emptyset$ if and only if $p > \langle \rho, \alpha^\vee \rangle$ for all roots $\alpha$, i.e. if and only if $p \geq h$, where $h$ is the Coxeter number for $R$.
\end{enumerate}
\end{remarkcounter}
\subsection{The general linear groups} \label{GL}
Let $V$ be a vector space over $k$. The reductive group $GL(V)$ plays a particularly important role in this paper. In this section we make the above notations and remarks explicit for the group $GL(V)$.
We set $n = \dim V$ and choose a basis $\{v_1, v_2, \cdots , v_n\}$ for $V$. Then $G_n = GL(V)$ identifies with $GL_n(k)$ and the set $T_n$ of diagonal matrices in $G_n$ is a maximal torus. The character group $X_n = X(T_n)$ is the free abelian group with basis $\epsilon_i$, $i=1, 2, \cdots ,n$ where $\epsilon_i: T_n \rightarrow k^{\times}$ is the homomorphism mapping a diagonal matrix into its $i$'th entry. If $\lambda \in X_n$, we shall write
$$\lambda = (\lambda_1, \lambda_2, \cdots , \lambda_n),$$
when $\lambda = \sum_1^n \lambda_i \epsilon_i.$
The root system for $(G_n,T_n)$ is
$$R = \{\epsilon_i -\epsilon_j | i \neq j\}.$$
It is of type $A_{n-1}$. Our choice of simple roots $S$ will be
$$S = \{\alpha_i = \epsilon_i -\epsilon_{i+1} | i = 1, 2, \cdots , n-1\}$$
inside the set of positive roots $R^+$ consisting of all $\epsilon_i - \epsilon_j$ with $i<j$.
We set
$$\omega_i = \epsilon_1 + \epsilon_2 + \cdots + \epsilon_i, \; i = 1, \cdots , n.$$
Then $\{\omega_1, \cdots , \omega_n\}$ is another basis of $X_n$. Note that $\omega_n$ is the determinant character and is thus trivial on the intersection of $T_n$ with $SL_n(k)$. Consider
$$\rho' = \omega_1 + \cdots + \omega_n = (n, n-1, \cdots , 1).$$
Then $\rho' = \rho +\frac{1}{2} (n+1) \omega_n$ and we shall prefer to work with $\rho'$ instead of $\rho$ (note that, if $n$ is even, $\rho \notin X_n$ whereas $\rho' \in X_n$ for all $n$). As $\omega_n$ is fixed by $W$, the dot-action of $W$ on $X$ is unchanged when we replace $\rho$ by $\rho'$.
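Since $\rho' - \rho$ is a multiple of $\omega_n = (1, \cdots, 1)$, which is fixed by $W$, the dot-action is indeed insensitive to this replacement. A minimal numerical check (our own illustration, with $W = S_3$ permuting coordinates in the case $n = 3$):

```python
def dot(w, lam, rho):
    """Dot-action w . lam = w(lam + rho) - rho, where the permutation w
    acts on coordinates (type A Weyl group)."""
    shifted = [l + r for l, r in zip(lam, rho)]
    moved = [shifted[w[i]] for i in range(len(lam))]
    return [m - r for m, r in zip(moved, rho)]
```

For $n = 3$ take $\rho = (1,0,-1)$ and $\rho' = (3,2,1)$; applying the simple reflection $s_1$ (swapping the first two coordinates) to $\lambda = (4,2,0)$ gives $s_1 \cdot \lambda = (1,5,0)$ with either choice.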
We have an inner product on $X_n$ given by $(\epsilon_i, \epsilon_j) = \delta_{i,j}$. It satisfies $(\omega_i, \alpha_j) = \delta_{i,j}$, $i,j = 1, 2, \cdots, n-1$, i.e. $\omega_1, \cdots , \omega_{n-1}$ are the fundamental weights in $X_n$. On the other hand, $(\omega_n, \alpha_j) = 0$ for all $j$. Hence, $(\rho', \alpha_j) = 1$ for all $j = 1, 2, \cdots , n-1$.
The set of dominant weights is
$$X_n^+ = \{\lambda \in X_n | \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n\} = \{\sum_1^n m_i \omega_i | m_i \in \Z_{\geq 0}, i= 1, 2, \cdots , n-1, \;\ m_n \in \Z\}.$$
If $\lambda \in X_n^+$ has $\lambda_n \geq 0$,
then $\lambda$ may be identified with a partition of $|\lambda| = \lambda_1 + \lambda_2 + \cdots + \lambda_n$.
The bottom alcove will be denoted $A_n(p)$. When $n > 1$ it is given by
$$ A_n(p) = \{\lambda \in X_n^+ | \lambda_1 - \lambda_n \leq p - n\}.$$
We have $A_n(p) \neq \emptyset$ if and only if $p \geq n$. In particular, $A_2(p)$ is always non-empty.
In the special case $n = 1$ the group $G_1$ is the $1$-dimensional torus. In that case $X_1 = \Z \epsilon_1$ and $A_1(p) = \Z \epsilon_1$. Note that for any $r \in \Z$ the Weyl module $\Delta_1(r \epsilon_1)$ is the $1$-dimensional $G_1$-module $k_{r\epsilon_1}$.
\begin{remarkcounter} \label{A for type A}
The natural module $V$ for $G_n$ has weights $\epsilon_1, \cdots , \epsilon_n$. It is simple because the highest weight $\epsilon_1$ is minuscule. We have $\epsilon_1 \in A_n(p)$ if and only if $p > n$.
\end{remarkcounter}
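For illustration, membership of a weight in $A_n(p)$ for $GL_n$ with $n > 1$ amounts to the single inequality $\lambda_1 - \lambda_n \leq p - n$ (together with dominance). The following sketch, in our own notation, also recovers the criterion from Remark \ref{A for type A} that $\epsilon_1 \in A_n(p)$ if and only if $p > n$:

```python
def in_bottom_alcove_gl(lam, p):
    """Test lam in A_n(p) for GL_n, n > 1: lam dominant and lam_1 - lam_n <= p - n."""
    n = len(lam)
    dominant = all(lam[i] >= lam[i + 1] for i in range(n - 1))
    return dominant and lam[0] - lam[-1] <= p - n
```

E.g. for $n = 3$, the weight $\epsilon_1 = (1,0,0)$ lies in $A_3(5)$ but not in $A_3(3)$, matching $p > n$.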
\subsection{The symplectic groups} \label{Sp}
Let now $V$ be a $2n$-dimensional symplectic vector space over $k$ with a fixed symplectic form, and consider the semisimple algebraic group $G_n = SP(V)$ consisting of those elements in $GL(V)$ which respect this form. This is naturally a subgroup of $GL(V)$. Note that $G_1 = SL_2(k)$. We let $T_n$ be the maximal torus in $G_n$ obtained as the intersection of the maximal torus in $GL(V)$ with $G_n$. In the notation from Section \ref{GL} the restrictions to $T_n$ of $\epsilon_1, \cdots , \epsilon_n$ form a basis of $X_n = X(T_n)$. The root system for $(G_n, T_n)$ consists of the elements
$$\{\pm \epsilon_i \pm \epsilon_j, \pm2 \epsilon_i | 1 \leq i \neq j \leq n \},$$
and is of type $C_n$. With respect to the usual choice of positive roots the set of dominant weights is
$$X_n^+ = \{\lambda = \sum_i \lambda_i \epsilon_i | \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n \geq 0 \}.$$
The bottom dominant alcove is also in this case denoted $A_n(p)$. It is given by
$$ A_n(p) = \begin{cases} \{\lambda \epsilon_1 | 0 \leq \lambda \leq p-2\} \text { if } n = 1, \\\{\lambda \in X_n^+ | \lambda_1 + \lambda_2 \leq p - 2n \} \text { if } n > 1. \end{cases}$$
When $n > 1$ we have $A_n(p) \neq \emptyset$ if and only if $p \geq 2n$, whereas $A_1(p) \neq \emptyset$ for all $p$.
\begin{remarkcounter} \label{A for type C}
The natural module $V$ for $G_n$ is simple for all $p$ as its highest weight $\epsilon_1$ is minuscule. It has weights $\pm \epsilon_1, \cdots , \pm \epsilon_n$. Note that, for $n > 1$, we have $\epsilon_1 \in A_n(p)$ if and only if $p > 2n$, whereas for $n=1$ the condition is $p > 2$.
\end{remarkcounter}
\subsection{The orthogonal groups} \label{O}
Consider next a vector space $V$ over $k$ equipped with a non-degenerate, symmetric bilinear form. Then the orthogonal group $O(V)$ is the subgroup of $GL(V)$ consisting of those elements which preserve the bilinear form on $V$. We shall separate our discussion into the case where $\dim V$ is odd and the case where $\dim V$ is even.
\subsubsection{Type $B_n$}
Assume that $\dim V$ is odd, say $\dim V = 2n + 1$. Then we set $G_n = O(V)$. Again in this case we have $G_1 \simeq SL_2(k)$. However, the module $V$ for $G_1$ is the $3$-dimensional standard module for $SL_2(k)$. The root system $R$ for $G_n$ has type $B_n$ and may be taken to consist of the elements
$$ R = \{\pm \epsilon_i \pm \epsilon_j, \pm \epsilon_i | 1 \leq i \neq j \leq n \}$$
in $X_n = \oplus _{i=1}^n \Z\epsilon_i$. The set of dominant weights is
$$X_n^+ = \{ \sum_i \lambda_i \epsilon_i \in X_n | \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n \geq 0 \}.$$
In this case the bottom dominant alcove $A_n(p)$ is given by
$$ A_n(p) = \begin{cases} \{\lambda \epsilon_1 | 0 \leq \lambda \leq p-2\} \text { if } n = 1, \\\{\lambda \in X_n^+ | 2 \lambda_1 \leq p - 2n \} \text { if } n > 1. \end{cases}$$
We have $A_n(p) \neq \emptyset$ if and only if $p > 2n$ (except for $n = 1$).
\begin{remarkcounter} \label{A for type B}
Unlike in the previous cases the highest weight of $V$ is no longer minuscule. However, we still have that $V = \Delta(\epsilon_1)$ is simple for all $p > 2$, \cite[Section II.8.21]{RAG}. It has weights $\pm \epsilon_1, \cdots , \pm \epsilon_n$ together with $0$. Note that $\epsilon_1 \in A_n(p)$ if and only if $p > 2n + 2$ except for $n=1$ where the condition is $p > 2$.
\end{remarkcounter}
\subsubsection{Type $D_n$}
Assume that $\dim V$ is even, say $\dim V = 2n$. We set again $G_n = O(V)$. The corresponding root system $R$ then has type $D_n$ and may be taken to consist of the elements
$$ R = \{\pm \epsilon_i \pm \epsilon_j | 1 \leq i \neq j \leq n \}$$
in $X_n =\{\lambda \in \oplus _{i=1}^n \frac{1}{2} \Z\epsilon_i | \lambda_i - \lambda_j \in \Z \text { for all } i, j\}$. The set of dominant weights is
$$X_n^+ = \{ \sum_i \lambda_i \epsilon_i \in X_n | \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_{n-1} \geq |\lambda_n| \}.$$
In this case the bottom dominant alcove $A_n(p)$ is given by
$$ A_n(p) = \begin{cases} \{\lambda \epsilon_1 | \lambda \in \Z, 0 \leq \lambda \leq p-2\} \text { if } n = 1, \\\{\lambda \in X_2^+ | \lambda_1 \pm \lambda_2 \leq p - 2 \} \text { if } n = 2, \\\{\lambda \in X_n^+ | \lambda_1 + \lambda_2 \leq p - 2n + 2 \} \text { if } n > 2. \end{cases}$$
When $n>2$ we have $A_n(p) \neq \emptyset$ if and only if $p > 2n - 2$, whereas $A_1(p)$ and $A_2(p)$ are always non-empty.
\begin{remarkcounter} \label{A for type D}
$V = \Delta(\epsilon_1)$ is simple for all $p $ (its highest weight is minuscule). It has weights $\pm \epsilon_1, \cdots , \pm \epsilon_n$. Note that, for $n > 2$, we have $\epsilon_1 \in A_n(p)$ if and only if $p > 2n - 2$, whereas the condition for both $n=1$ and $n=2$ is $p > 2$.
\end{remarkcounter}
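The alcove inequalities for the three classical families above can be collected in one small routine. This is purely an illustration of the displayed formulas (for $n > 1$ in types $B$ and $C$, and $n > 2$ in type $D$), with dominance of $\lambda$ assumed; the interface is our own.

```python
def in_bottom_alcove(cartan_type, lam, p):
    """Bottom alcove test for a dominant weight lam = (lam_1, ..., lam_n),
    per the displayed formulas:
      type C (n > 1): lam_1 + lam_2 <= p - 2n
      type B (n > 1): 2 * lam_1    <= p - 2n
      type D (n > 2): lam_1 + lam_2 <= p - 2n + 2
    """
    n = len(lam)
    if cartan_type == 'C':
        return lam[0] + lam[1] <= p - 2 * n
    if cartan_type == 'B':
        return 2 * lam[0] <= p - 2 * n
    if cartan_type == 'D':
        return lam[0] + lam[1] <= p - 2 * n + 2
    raise ValueError(cartan_type)
```

For instance, in type $C_2$ the weight $\epsilon_1$ lies in the bottom alcove for $p = 7$ but not for $p = 3$, consistent with the condition $p > 2n$ of Remark \ref{A for type C}.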
\section{Tilting modules for reductive algebraic groups}
We return to the situation of a general reductive group $G$ and use the notation from Section \ref{general}. We very briefly recall the basics about tilting modules for $G$ (referring to \cite[Section 2]{Do} or \cite[Chapter II.E]{RAG} for details), and prove the results which we then apply in the next two sections. Moreover, we recall from \cite[Section 4]{AST1} a few facts about the cellular algebra structure on endomorphism rings for tilting modules for $G$, which we also need.
\subsection {Tilting theory for $G$}
A $G$-module $M$ is called tilting if it has both a $\Delta$- and a $\nabla$-filtration. It turns out that for each $\lambda \in X^+$ there is a unique (up to isomorphisms) indecomposable tilting module $T(\lambda)$ with highest weight $\lambda$, and up to isomorphisms these are the only indecomposable tilting modules, see \cite[Theorem 1.1] {Do}. The Weyl module $\Delta(\lambda)$ is a submodule of $T(\lambda)$, while the dual Weyl module $\nabla(\lambda)$ is a quotient. The composite of the inclusion $\Delta(\lambda) \to T(\lambda)$ and the quotient map $T(\lambda) \to \nabla(\lambda)$ equals the homomorphism $c_\lambda$ from (\refeq{can}) (up to a non-zero constant in $k$).
We have the following elementary (and no doubt well-known) lemma.
\begin{lem} \label{quotient} Let $M$ be a $G$-module which contains two submodules $M_1$ and $M_2$ such that $M = M_1 \oplus M_2$. Denote by $i_j: M_j \rightarrow M$, respectively $\pi_j: M \rightarrow M_j$ the natural inclusion, respectively projection, $j = 1,2$. Suppose $f \circ g = 0$ for all $f \in \Hom_G(M_2, M_1)$ and $g \in \Hom_G(M_1, M_2)$. Then the natural map
$$\phi: \End_G(M) \rightarrow \End_G(M_1)$$
which takes $h \in \End_G(M)$ into $\pi_1 \circ h \circ i_1 \in \End_G(M_1)$
is a surjective algebra homomorphism.
\end{lem}
\begin{proof} The surjectivity of $\phi$ is obvious, so we just have to check that $\pi_1 \circ h' \circ h \circ i_1 = \pi_1 \circ h' \circ i_1 \circ \pi_1 \circ h \circ i_1$ for all $h', h \in \End_G(M)$. However, $h \circ i_1 = i_1 \circ \pi_1\circ h \circ i_1 + i_2 \circ \pi_2\circ h \circ i_1$, and by our assumption we see that $(\pi_1 \circ h' \circ i_2) \circ (\pi_2\circ h \circ i_1) = 0$. The desired equality follows.
\end{proof}
Let $Q$ be a tilting module for $G$. Then $Q$ splits into indecomposable summands as follows
$$ Q = \bigoplus_{\lambda \in X^+} T(\lambda)^{(Q:T(\lambda))}$$
for unique $(Q:T(\lambda)) \in \Z_{\geq 0}$.
Set now
$$Q^{\F}= \bigoplus_{\lambda \in A(p)} T(\lambda)^{(Q:T(\lambda))} \text { and } Q^{\Ne} = \bigoplus_{\lambda \in X^+\setminus A(p)}T(\lambda)^{(Q:T(\lambda))}.$$
For reasons explained in Section \ref{Fusion} we call $Q^{\F}$ the fusion summand of $Q$ and $Q^{\Ne}$ the negligible summand of $Q$. Then:
\begin{lem} \label{no composites}
If $f \in \Hom_G(Q^{\F}, Q^{\Ne})$ and $g \in \Hom_G(Q^{\Ne}, Q^{\F})$, then $g \circ f = 0$.
\end{lem}
\begin{proof} It is enough to check the lemma in the case where $Q^{\F} = T(\lambda)$ and $Q^{\Ne} = T(\mu)$ with $\lambda \in A(p)$ and $\mu \in X^+\setminus A(p)$. By Remark \ref{alcove A}a) we have $T(\lambda) = \Delta(\lambda) = L(\lambda)$. Hence, in this case $g \circ f$ is up to scalar the identity on $T(\lambda)$. If the scalar were non-zero, $T(\lambda)$ would be a summand of $T(\mu)$, which contradicts the indecomposability of $T(\mu)$.
\end{proof}
\begin{thm} \label{fusion-quotient}
The natural map $\phi: \End_G(Q) \rightarrow \End_G(Q^{\F})$ is a surjective algebra homomorphism. The kernel of $\phi$ equals
$$\{h \in \End_G(Q) | \Tr(i_\lambda \circ h \circ \pi_\lambda) = 0 \text { for all homomorphisms } i_\lambda: T(\lambda) \rightarrow Q, \; \pi_\lambda: Q \rightarrow T(\lambda),\; \lambda \in X^+\}.$$
\end{thm}
\begin{proof} The combination of Lemma \ref{quotient} and Lemma \ref{no composites} immediately gives the first statement. To prove the claim about the kernel of $\phi$ we first observe that the endomorphisms $i_\lambda \circ h \circ \pi_\lambda$ of $T(\lambda)$ are either nilpotent, or a constant times the identity. Now, nilpotent endomorphisms clearly have trace zero. If $ \lambda \notin A(p)$, then $\dim T(\lambda)$ is divisible by $p$. This holds when $\lambda$ is $p$-singular (i.e. if there exists a root $\beta$ with $\langle \lambda + \rho, \beta^{\vee} \rangle$ divisible by $p$), because then the linkage principle implies that $\mu$ is also $p$-singular for all $\mu \in X^+$ for which $\Delta(\mu)$ occurs in a $\Delta$-filtration of $T(\lambda)$. For a $p$-regular $\lambda$ it then follows by an easy translation argument, see \cite[Section 5]{A92} (the argument used there deals with the quantum case but applies just as well in the modular case). So when $\lambda \notin A(p)$ all endomorphisms of $T(\lambda)$ have trace zero. In particular, we have $\Tr(i_\lambda \circ h \circ \pi_\lambda) = 0$ for all $h \in \End_G(Q)$ when $\lambda \in X^+\setminus A(p)$.
On the other hand, if $\lambda \in A(p)$, then by Remark \ref{alcove A} we see that $T(\lambda)$ is simple, i.e. we have $T(\lambda) = L(\lambda) = \Delta(\lambda)$. So in this case any non-zero endomorphism of $T(\lambda)$ has trace equal to a non-zero constant times $\dim T(\lambda)$. By Weyl's dimension formula $\dim (\Delta(\lambda)) $ is prime to $p$. If $i_1$, respectively $\pi_1$, denotes the natural inclusion of $Q^\F$ into $Q$, respectively projection onto $Q^\F$, then this means that $h \in \End_G(Q)$ is in the kernel of $\phi$ if and only if $i_1 \circ h \circ \pi_1 = 0$ if and only if $i_\lambda \circ h \circ \pi_\lambda = 0$ for all $\lambda \in A(p)$ if and only if $ \Tr(i_\lambda \circ h \circ \pi_\lambda) = 0$ for all $\lambda \in X^+$.
\end{proof}
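For $G = GL_n$ the coprimality used at the end of this proof is easy to check in examples: Weyl's dimension formula reads $\dim \Delta(\lambda) = \prod_{i<j} (\lambda_i - \lambda_j + j - i)/(j - i)$, and each numerator factor equals $\langle \lambda + \rho', \alpha^\vee \rangle$ for a positive root $\alpha$, hence lies strictly between $0$ and $p$ when $\lambda \in A_n(p)$. A small sketch, for illustration only:

```python
from math import prod, gcd

def weyl_dim_gl(lam):
    """dim Delta(lam) for GL_n via Weyl's dimension formula:
    product over i < j of (lam_i - lam_j + j - i) / (j - i)."""
    n = len(lam)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    num = prod(lam[i] - lam[j] + j - i for i, j in pairs)
    den = prod(j - i for i, j in pairs)
    return num // den  # the quotient is always an integer
```

E.g. for $n = 3$ one gets $\dim \Delta(1,0,0) = 3$ and $\dim \Delta(2,0,0) = 6$; the latter weight lies in $A_3(5)$ and indeed $6$ is prime to $5$.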
\subsection{Fusion} \label{Fusion}
Let $\mathcal T$ denote the category of tilting modules for $G$. As noted above, this is a tensor category. Inside $\mathcal T$ we consider the subcategory $\mathcal N$ consisting of all negligible modules, i.e. a tilting module $M$ belongs to $\mathcal N$ if and only if $\Tr(f) = 0$ for all $f \in \End_{G}(M)$. As each object in $\mathcal T$ is a direct sum of certain of the $T(\lambda)$'s and $\dim T(\lambda)$ is divisible by $p$ if and only if $\lambda \notin A(p)$ (as we saw in the proof of Theorem \ref{fusion-quotient}) we see that $M \in \mathcal N$ if and only if $(M:T(\lambda)) = 0$ for all $\lambda \in A(p)$.
We proved in \cite[Section 4]{A92} (in the quantum case - the arguments for $G$ are analogous) that $\mathcal N$ is a tensor ideal in $\mathcal T$. The corresponding quotient category $\mathcal T/ \mathcal N$ is then itself a tensor category. It is denoted $\F$ and called the fusion category for $G$. We may think of objects in $\mathcal F$ as the tilting modules $Q$ whose indecomposable summands are among the $T(\lambda)$'s with $\lambda \in A(p)$. Note that $\mathcal F$ is a semisimple category (with simple objects $(T(\lambda) = L(\lambda))_{\lambda \in A(p)}$), cf. \cite[Section 4]{A92}.
\begin{remarkcounter} \label{p=2$} Note that for $p < h$ the alcove $A(p)$ is empty. This means that in this case $\mathcal N = \mathcal T$. In particular, if $p=2$ the fusion category is trivial except for the case $G=SL_2(k)$ in which case $A(2) =\{0\}$, so that $\mathcal F$ is the category of finite-dimensional vector spaces. For this reason we shall in the following tacitly assume $p > 2$.
\end{remarkcounter}
In order to distinguish it from the usual tensor product on $G$-modules we denote the tensor product in $\mathcal F$ by $\underline \otimes$. If $Q, Q' \in \mathcal F$ then $Q \underline \otimes Q' = \pr (Q \otimes Q')$ where $\pr$ denotes the projection functor from $\mathcal T$ to $\mathcal F$ (on the right-hand side we consider $Q, Q'$ as modules in $\mathcal T$).
\begin{cor} \label{fusion} Let $T$ be an arbitrary tilting module for $G$. Then, for any $r \in \Z_{\geq 0}$, the natural homomorphism $\End_G(T^{\otimes r}) \rightarrow \End_G(T^{\underline \otimes r})$ is surjective.
\end{cor}
\begin{proof} Set $Q = T^{\otimes r}$. Then $Q$ is a tilting module and in the above notation $Q^\F = T^{\underline \otimes r}$ ($= 0$ if $T \in \mathcal N$). Hence the corollary is an immediate consequence of Theorem \ref{fusion-quotient}.
\end{proof}
\subsection{Cellular theory for endo-rings of tilting modules} \label{cellular}
Recall the notions of cellularity, cellular structure and cellular algebras from \cite{GL}. When $Q$ is a tilting module for $G$ its endomorphism ring $E_Q = \End_G(Q)$ has a natural cellular structure, see \cite[Theorem 3.9] {AST1}. The parameter set of weights for $E_Q$ is
$$ \Lambda = \{\lambda \in X^+ | (Q:\Delta(\lambda)) \neq 0 \}.$$
When $\lambda \in X^+$ the cell module for $E_Q$ associated with $\lambda$ is $C_Q(\lambda) = \Hom_G(\Delta(\lambda), Q)$. Then $\dim C_Q(\lambda) = (Q:\Delta(\lambda)) (= 0$ unless $\lambda \in \Lambda$). We set
$$ \Lambda_0 = \{\lambda \in \Lambda | (Q:T(\lambda)) \neq 0 \}.$$
If $\lambda \in \Lambda_0$ then $C_Q(\lambda)$ has a unique simple quotient which we in this paper denote $D_Q(\lambda)$. The set $\{D_Q(\lambda) | \lambda \in \Lambda_0\}$ is up to isomorphisms the set of simple modules for $E_Q$. We have
\begin{equation} \label{dim simple/tilting}
\dim D_Q(\lambda) = (Q: T(\lambda)), \end{equation}
see \cite[Theorem 4.12]{AST1}.
Finally, recall the following result on semisimplicity, see \cite[Theorem 4.13] {AST1}.
\begin{thm} \label{ss} $E_Q$ is a semisimple algebra if and only if $Q$ is a semisimple $G$-module. In that case we have $\Lambda = \Lambda_0$, $T(\lambda) = \Delta(\lambda) = L(\lambda)$ and $C_Q(\lambda) = D_Q(\lambda)$ for all $\lambda \in \Lambda$.
\end{thm}
\begin{examplecounter} \label{Ex1} Let $T$ be a tilting module for $G$ and set $Q = T^{\otimes r}$ as in the previous section. Then $E_Q= \End_G(T^{\otimes r})$ is a cellular algebra with cell modules $(C_Q(\lambda))_{\lambda \in \Lambda_T^r}$, where $\Lambda_T^r = \{\lambda \in X^+ | (T^{\otimes r} : \Delta(\lambda)) \neq 0\}$, and simple modules $(D_Q(\lambda))_{\lambda \in \Lambda_{0,T}^r}$, $\Lambda_{0,T}^r = \{\lambda \in \Lambda_T^r | (T^{\otimes r}: T(\lambda)) \neq 0\}$.
Denote by $Q_1$ the summand $T^{\underline \otimes r}$ of $Q$. Then the endomorphism ring
$$\overline E_Q = \End_G(Q_1),$$
is, according to Corollary \ref{fusion}, a quotient of $E_Q$, and, by Theorem \ref{ss}, it is a semisimple cellular algebra. In fact, $\overline E_Q$ is a direct sum of matrix rings, namely
$$\overline E_Q \simeq \bigoplus_{\lambda \in A(p)} M_{(Q_1:T(\lambda))} (k).$$
The simple modules for $\overline E_Q$ are $\{D_Q(\lambda) | \lambda \in A(p) \cap \Lambda_0 \}$. We have
$$\dim D_Q(\lambda) = (Q_1:T(\lambda))$$
for all $\lambda \in A(p) \cap \Lambda_{0,T}^r$.
\end{examplecounter}
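To make this concrete in the smallest case, take $G = SL_2$, so that the simple objects of $\mathcal F$ are $L(0), L(1), \cdots, L(p-2)$, and let $T = V$ be the natural $2$-dimensional module. In the fusion category one has $L(a) \underline\otimes V = L(a-1) \oplus L(a+1)$, where the term $L(p-1)$ (the Steinberg module, which is negligible) is discarded at the wall, and by (\refeq{dim simple/tilting}) the resulting multiplicities $(V^{\underline\otimes r} : L(a))$ are exactly the dimensions of the simple modules of the corresponding higher Jones algebra. A sketch of this recursion (our own illustration):

```python
def jones_simple_dims(p, r):
    """Multiplicities (V fusion-tensor r : L(a)) for G = SL_2, a = 0..p-2.
    These are the dimensions of the simple modules of the higher Jones
    algebra. Tensoring with V sends L(a) to L(a-1) + L(a+1), dropping
    the negligible Steinberg term L(p-1)."""
    mult = [0] * (p - 1)
    mult[0] = 1                      # r = 0: the trivial module T(0)
    for _ in range(r):
        new = [0] * (p - 1)
        for a, c in enumerate(mult):
            if a - 1 >= 0:
                new[a - 1] += c
            if a + 1 <= p - 2:
                new[a + 1] += c      # the step to a = p-1 is discarded
        mult = new
    return mult
```

For $p = 5$ and $r = 3$ this gives the multiplicities $(0, 2, 0, 1)$, so the higher Jones algebra is $M_2(k) \oplus k$ of dimension $5$; here $r \leq p - 2$, so no truncation has occurred yet. For $p = 3$ the algebra is $1$-dimensional for every $r$.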
\section {Semisimple quotients of the group algebras $kS_r$}
In this section $G_n = GL(V)$ where $V$ is a vector space over $k$ of dimension $n$ with basis $\{v_1, v_2, \cdots , v_n\}$ as in Section \ref{GL}. As we will also look at various subspaces $V'$ of $V$ we shall from now on write $V_n = V$. We write $\Delta_n(\lambda)$ for the Weyl module for $G_n$, $T_n(\lambda)$ for the indecomposable tilting module for $G_n$ with highest weight $\lambda$, etc.
\subsection{Algebras associated with tensor powers of the natural module for $G$} \label{tensor powers}
We let $r \in \Z_{\geq 0}$ and consider the $G_n$-module $V_n^{\otimes r}$. As tensor products of tilting modules are again tilting we see that this is a tilting module for $G_n$. Consider the subset $I = \{\alpha_1, \alpha_2, \cdots , \alpha_{n-2}\} \subset S$. Then the corresponding Levi subgroup $L_I$ identifies with $G_{n-1} \times G_1$, where the first factor $G_{n-1} =GL(V_{n-1})$ is the subgroup fixing $v_n$, and the second factor $G_1$ is the subgroup fixing $v_i$ for $i < n$ and stabilizing the line $k v_n$. As an $L_I$-module we have $V_n = V_{n-1} \oplus k_{\epsilon_n}$. Here $k_{\epsilon_n}$ is the line $k v_n$ on which $G_1$ acts via the character $\epsilon_n$. This gives
\begin{equation} \label{restriction}
V_n^{\otimes r} \simeq \bigoplus _{s=0} ^r (V_{n-1}^{\otimes s} \otimes k_{(r-s)\epsilon_n})^{\oplus \binom{r}{s}} \text { (as $L_I$-modules)}.
\end{equation}
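As a quick sanity check of \eqref{restriction}, the dimensions on both sides agree, since $n^r = \sum_{s=0}^{r} \binom{r}{s}(n-1)^s$ by the binomial theorem. In code (purely illustrative, names our own):

```python
from math import comb

def dim_lhs(n, r):
    """Dimension of the r-th tensor power of the n-dimensional module V_n."""
    return n ** r

def dim_rhs(n, r):
    """Dimension of the right-hand side of (restriction): each summand
    V_{n-1} tensor s, twisted by a 1-dimensional character, occurs C(r, s) times."""
    return sum(comb(r, s) * (n - 1) ** s for s in range(r + 1))
```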
In particular, $V_{n-1}^{\otimes r}$ is an $L_I$-summand. Its weights (for the natural maximal torus in $L_I$ which is also the maximal torus $T_n$ in $G_n$) consist of $\lambda$'s with $\lambda_n = 0$ whereas any weight $\mu$ of the complement $C = \bigoplus _{s=0} ^{r-1} (V_{n-1}^{\otimes s} \otimes k_{(r-s)\epsilon_n})^{\oplus \binom{r}{s}}$ has $\mu_n > 0$. It follows that
\begin{equation} \label{no cross-homs}
\Hom_{L_I}(V_{n-1}^{\otimes r}, C) = 0 = \Hom_{L_I}(C, V_{n-1}^{\otimes r}).
\end{equation}
Moreover, since $G_1$ acts trivially on $V_{n-1}$ we have $\End_{L_I}(V_{n-1}^{\otimes r}) = \End_{G_{n-1}}(V_{n-1}^{\otimes r})$. Hence we get from Lemma \ref{quotient} (in which the assumptions are satisfied because of (\refeq{no cross-homs})):
\begin{prop}\label{surj GL} The natural algebra homomorphism
$$\End_{G_n}(V_n^{\otimes r}) \rightarrow \End_{G_{n-1}}(V_{n-1}^{\otimes r})$$
is surjective.
\end{prop}
Later on we shall use the following related result.
\begin{prop} \label{restriction of Specht}
Suppose $\lambda \in X^+$ has $\lambda_n = 0$. Then the natural homomorphism
$$\Hom_{G_n}(\Delta_n(\lambda), V_n^{\otimes r}) \rightarrow \Hom_{G_{n-1}}(\Delta_{n-1}(\lambda), V_{n-1}^{\otimes r})$$
is an isomorphism for all $r$.
\end{prop}
\begin{proof} In this proof we shall need the parabolic subgroup $P_I$ corresponding to $I$. We have $P_I = L_I U^{I}$ (semidirect product) where $U^{I}$ is the unipotent radical of $P_I$. We set $\nabla_I(\lambda) = \Ind_B^{P_I}(k_\lambda)$. Our assumption that $\lambda_n = 0$ implies that as an $L_I$-module and as a $G_{n-1}$-module we have $\nabla_I(\lambda) = \nabla_{n-1}(\lambda)$.
We shall prove the proposition by proving the dual statement
$$\Hom_{G_n}( V_n^{\otimes r}, \nabla_n(\lambda)) \simeq \Hom_{G_{n-1}}(V_{n-1}^{\otimes r}, \nabla_{n-1}(\lambda)).$$
First, by Frobenius reciprocity \cite[Proposition I.3.4]{RAG}, we have
$$\Hom_{G_n}( V_n^{\otimes r}, \nabla_n(\lambda)) \simeq \Hom_{P_I}( V_n^{\otimes r}, \nabla_I(\lambda)).$$
Then restricting to $L_I$ gives an isomorphism to $\Hom_{L_I}( V_n^{\otimes r}, \nabla_I(\lambda))$. Finally, we use (\refeq{restriction}) and the weight arguments from the proof of Proposition \ref{surj GL} to see that this identifies with
$$\Hom_{L_I}( V_{n-1}^{\otimes r}, \nabla_I (\lambda)) \simeq \Hom_{G_{n-1}}( V_{n-1}^{\otimes r}, \nabla_{n-1}(\lambda)).$$
\end{proof}
We can, of course, iterate the statement in Proposition \ref{surj GL}: If we set $E_n^r = \End_{G_n}(V_n^{\otimes r})$, then we recover the following well-known fact (cf. \cite[E.17]{RAG}).
\begin{cor} \label{sequence of surjections} We have a sequence of surjective algebra homomorphisms
$$ E_n^r \rightarrow E_{n-1}^r \rightarrow \cdots \rightarrow E_2^r \rightarrow E_1^r.$$
\end{cor}
\vskip 1 cm
Set now $\overline E_n^r = \End_{G_n}(V_n^{\underline \otimes r})$. Note that these are the higher Jones algebras (see the introduction) in the case $G= GL_n$ corresponding to the tilting modules $V_n^{\otimes r}$. We get from Corollary \ref{fusion} that this is a quotient of $E_n^r$. It is a semisimple algebra (see Example \ref{Ex1}), so that by Corollary \ref{sequence of surjections} we get
\begin{thm} \label {ss quotients}
For all $n$ and all $r$ the algebras $\overline E_m^r$, $m=1, 2, \cdots , n$ are semisimple quotient algebras of $E_n^r$.
\end{thm}
\begin{remarkcounter} \label{p-term is 0} \begin{enumerate}
\item [a)] We have $\overline E_m^r = 0$ for all $r$ when $m \geq p$. This is clear for $m > p$, because then $A_m(p) = \emptyset$. If $m = p$, we have that $\epsilon_1$ belongs to the upper wall of $A_m(p)$, see Remark \ref{A for type A}. Hence, $V_p$ is negligible and therefore so are all its tensor powers $V_p^{\otimes r}$. This means that $V_p^{\underline \otimes r} = 0$ for all $r$.
\item [b)] We do not have surjections $\overline E_m^r \to \overline E_{m-1}^r$ analogous to the ones we found in Corollary \ref{sequence of surjections}. In fact, the alcoves $A_m(p)$ become larger as $m$ decreases. This means that the algebras $\overline E_m^r$ grow in size when $m$ decreases.
\end{enumerate}
\end{remarkcounter}
\subsection{A class of simple modules for symmetric groups} \label{class of simple}
The group algebra $kS_r$ of the symmetric group on $r$ letters is isomorphic to the algebra $E_n^r$ for all $n \geq r$, see e.g. \cite[3.1]{CL}. Hence, by Theorem \ref{ss quotients}, $kS_r$ has the following list of semisimple quotients: $\overline E_1^r, \overline E_2^r, \cdots , \overline E_r^r$. As observed in Remark \ref{p-term is 0}, we have $\overline E_n^r = 0$ if $n \geq p$. On the other hand, $kS_r$ is itself semisimple if $p > r$, and its representation theory then coincides with the well-known theory in characteristic $0$. So we shall assume in the following that
$p \leq r$.
In the special case $n=1$ we have $V_1^{\otimes r} = k_{r \epsilon_1}$, $r \in \Z$, and these modules together with their duals are the indecomposable tilting modules (as well as the simple modules) for $G_1$. The fusion category for $G_1$ coincides with the full category of finite-dimensional $G_1$-modules. We identify the trivial one-line partition of $r > 0$ with the element $r \epsilon_1$ in $A_1(p)$. Clearly, we have
$\overline E_1^r = E_1^r = \End_{G_1}(k_{r \epsilon_1}) = k$ for all $r$.
We shall explore the simple modules for $kS_r$ arising from the above quotients $\overline E_m^r$. Note that we just observed that the first algebra $\overline E_1^r$ equals $k$.
Consider the remaining quotients $\overline E_m^r$, $m = 2, \cdots , p-1$ of $kS_r$. We shall describe the simple modules for $kS_r$ arising from these. Recall that the simple modules for $kS_r$ are indexed by the $p$-regular partitions of $r$, i.e. partitions of $r$ in which no $p$ rows have the same length. If $\lambda$ is such a partition, we denote the corresponding simple module for $kS_r$ by $D_r(\lambda)$.
Set $\Lambda^r$ equal to the set of partitions of $r$. This is the weight set for the cellular algebra $E_n^r$ whenever $n \geq r$. Define
$$\overline \Lambda^r(p) = \{ (\lambda_1, \lambda_2, \cdots ,\lambda_m) \in \Lambda^r | \lambda \in A_m(p) \text { for some } m < p \}.$$
So $\overline \Lambda^r(p)$ consists of those partitions of $r$ which, for some $m < p$, have at most $m$ non-zero terms and satisfy $\lambda_1 - \lambda_m \leq p - m$. Clearly, the partitions in $\overline \Lambda^r(p)$ are all $p$-regular. We shall now derive an algorithm which determines the dimensions of the simple modules $D_r(\lambda)$ when $\lambda \in \overline \Lambda^r(p)$.
We have the following Pieri-type branching formula, which is proved e.g. in \cite[(3.7)]{AS}.
\begin{prop} \label{inductive formula}
Let $m \geq 1$ and suppose $\lambda \in A_m(p)$. Then
$$(V_m^{\otimes r}: T(\lambda)) = \sum_{i: \lambda - \epsilon_i \in \Lambda^{r-1} \cap A_m(p)} (V_m^{\otimes (r-1)}: T(\lambda - \epsilon_i)).$$
\end{prop}
\begin{lem} \label {the p-1 algebra}
Suppose $1 \leq r = a (p-1) + b$ where $0 \leq b < p-1$. Then $V_{p-1}^{\underline \otimes r} = T(a\omega_{p-1} + \omega_b)$. Hence, $\overline E_{p-1}^r = k$.
\end{lem}
\begin{proof} The lemma is clearly true when $r =1$ where $V_{p-1} = T(\omega_1) = L(\omega_1)$. Observe that (with the notation in the lemma) $a\omega_{p-1} + \omega_b$ is the unique element in $\Lambda^r \cap A_{p-1}(p)$. Hence, for $r>1$ the statement follows by induction from Proposition \ref{inductive formula}.
\end{proof}
\begin{thm} \label{main symm}
Let $r > 0$ and suppose $\lambda \in \overline \Lambda^r(p)$. Then the dimension of the simple $kS_r$-module $D_r(\lambda)$ is recursively determined by
$$ \dim D_r(\lambda) = \sum_{i: \lambda - \epsilon_i \in \overline \Lambda^{(r-1)}(p)} \dim D_{r-1}(\lambda - \epsilon_i).$$
\end{thm}
\begin{proof} For any partition $\mu$ of $r$ the corresponding Specht module for $kS_r$ identifies with the cell module $C_r(\mu) = \Hom_{G_r}(\Delta_r(\mu), V_r^{\otimes r})$ for $E_r^r \simeq kS_r$. Now Proposition \ref{restriction of Specht} shows that, if $\mu$ has at most $m$ terms, then we have $C_r(\mu) \simeq C_m(\mu)$. The surjection of $V_m^{\otimes r}$ onto the fusion summand $V_m^{\underline \otimes r}$ then gives a surjection of $C_m(\mu)$ onto the cell module $\overline C_m(\mu) = \Hom_{G_m}(\Delta_m(\mu), V_m^{\underline \otimes r})$ for the semisimple quotient algebra $\overline E_m^r$ of $kS_r$. This latter module is only non-zero if $m < p$ and $\mu \in A_m(p)$. So if $\mu = \lambda$ with $\lambda$ as in the theorem, we see that $D_r(\lambda) = \overline C_m(\lambda)$. The theorem therefore follows from Proposition \ref{inductive formula} by observing that $\dim \overline C_m(\lambda) = (V_m^{\otimes r} : T(\lambda))$, cf. (\refeq{dim simple/tilting}).
\end{proof}
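For readers who wish to experiment, the algorithm is easy to run on a computer. The following Python sketch (the function names are ours, not from the text) implements the box-removal recursion of Proposition \ref{inductive formula}: it computes the tilting multiplicity $(V_m^{\otimes r} : T(\lambda))$, and hence $\dim D_r(\lambda)$, for a partition $\lambda$ given as a tuple with a fixed number $m$ of parts (zeros allowed).

```python
from functools import lru_cache

def dim_simple(p, lam):
    """Sketch: dimension of the simple kS_r-module D_r(lam), where lam is a
    partition of r given as a tuple with m = len(lam) parts (zeros allowed)
    lying in the bottom alcove A_m(p).  The answer is the multiplicity of
    T(lam) in the r-th tensor power of the natural GL_m-module."""
    m = len(lam)

    def in_alcove(mu):
        # mu must be weakly decreasing, non-negative, and satisfy the
        # bottom-alcove condition mu_1 - mu_m <= p - m
        return (all(mu[i] >= mu[i + 1] for i in range(m - 1))
                and mu[-1] >= 0
                and mu[0] - mu[-1] <= p - m)

    @lru_cache(maxsize=None)
    def mult(mu):
        if sum(mu) == 0:          # the empty tensor power is k = T(0)
            return 1
        total = 0
        for i in range(m):        # remove one box from row i and recurse
            nu = mu[:i] + (mu[i] - 1,) + mu[i + 1:]
            if in_alcove(nu):
                total += mult(nu)
        return total

    return mult(tuple(lam))
```

For $p = 5$ this reproduces the entries of Table 1 below; for instance the two-row weights for $r = 10$ give $\dim D_{10}((6,4)) = 55$ and $\dim D_{10}((5,5)) = 34$.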
\begin{examplecounter}
Consider the case $p = 3$. Here we have
$$ \overline \Lambda ^r(3) = \begin{cases} \{(1)\} \text { if } r = 1, \\ \{(r), ( (r+1)/2, (r-1)/2)\} \text { if } r \geq 3 \text { is odd,} \\ \{ (r), ( r/2, r/2)\} \text { if } r \geq 2 \text { is even.} \end{cases}$$
The trivial partition $(r)$ of $r$ corresponds to the trivial simple module $D_r((r)) = k$ (this is true for all primes). For the unique $2$-part partition $\lambda$ in $\overline \Lambda^r(3)$ we get from Theorem \ref{main symm}
$$ \dim D_r(\lambda) = \begin{cases} \dim D_{r-1}(\lambda - \epsilon_1) \text { if $r$ is odd,} \\ \dim D_{r-1}(\lambda - \epsilon_2) \text { if $r$ is even.} \end{cases}$$
Hence we find $\dim D_r(\lambda) = 1$ for all $r$. This is of course also an immediate consequence of the fact that in this case $\overline E_2^r =k$, see Lemma \ref{the p-1 algebra}. Note that $\overline E_2^r$ is the modular Jones algebra appearing in \cite[Section 7] {A17} and it was observed there as well that the Jones algebras are all trivial in characteristic $3$.
\end{examplecounter}
\begin{examplecounter}
Consider now $p = 5$. Then for $r \geq 5$ we have exactly two partitions $\lambda^1(r)$ and $\lambda^2(r)$ of $r$ having $2$ non-zero parts, which belong to $A_2(5)$. Likewise, there are exactly $2$ partitions $\mu^1(r)$ and $\mu^2(r)$ of $r$ with $3$ non-zero parts, which belong to $A_3(5)$. Finally, there is a unique partition $\nu(r)$ of $r$ with $4$ non-zero parts, which belongs to $A_4(5)$. To be precise, we have
$$ \lambda^1(r) = ((r+2)/2, (r-2)/2) \text { and } \lambda^2(r) = (r/2, r/2), \text { if $r$ is even;}$$
whereas
$$ \lambda^1(r) = ((r+3)/2, (r-3)/2) \text { and } \lambda^2(r) = ((r+1)/2,(r-1)/2), \text { if $r$ is odd.}$$
We leave it to the reader to work out the formulas for $\mu^1(r), \mu^2(r)$. The expression for $\nu(r)$ is given in Lemma \ref{the p-1 algebra}.
So $\overline \Lambda^r(5)
= \{(r), \lambda^1(r), \lambda^2(r), \mu^1(r), \mu^2(r), \nu(r) \}$. We choose the enumeration such that $\lambda^1(r) > \lambda^2(r)$ (in the dominance order) and likewise $\mu^1(r) > \mu^2(r)$. For each of these $6$ weights we can easily compute the dimension of the corresponding simple $kS_r$-module via Theorem \ref{main symm}. In Table 1 we have illustrated the results for $r \leq 10$. In this table the numbers in row $r$ (listed in the above order) are the dimensions of these $6$ simple $kS_r$-modules. When $r$ is small
some weights are repeated, e.g. for $r = 3$ we have $(3) = \lambda^1(3)$, $\lambda^2(3) = \mu^1(3)$ and $\mu^2(3) = \nu(3)$.
\vskip .5 cm
\centerline {
{\it Table 1. Dimensions of simple modules for $kS_r$ when $p= 5$}}
\vskip .5cm
\centerline{
\begin{tabular}{r|c|cc|cc|c}
r & (r) & $\lambda^1(r)$ & $\lambda^2(r)$ & $\mu^1(r)$ & $\mu^2(r)$ & $\nu(r)$ \\
\hline
1 & 1 & & 1 & 1 & & 1 \\
2 & 1 & 1 & 1 & 1 & 1 & 1 \\
3 & 1& 1 & 2& 2 & 1 & 1 \\
4 & 1 &3 & 2 & 2 & 3 & 1\\
5 &1 & 3 & 5 & 3 & 5 &1\\
6 & 1 & 8& 5& 8 &5 &1 \\
7 &1 &8 &13 &8 & 13 & 1\\
8 & 1& 21&13 &13 & 21 & 1\\
9 & 1 &21&34&34 & 21 & 1\\
10 & 1 &55&34&34 & 55 & 1\\
\end{tabular}}
\vskip .5 cm
The table can easily be extended using the following formulas. Set $a^j(r) = \dim D_r(\lambda^j(r)), \; j=1, 2$. Then Theorem \ref{main symm} gives $a^1(1) = 0, a^2(1) = 1 = a^2(2)$ and the following recursion rules
$$ a^1(2r+1) = a^1(2r) = a^1(2r-1) + a^2(2r-1); \; a^2(2r+2) = a^2(2r+1) = a^1(2r) + a^2(2r)$$
for $r \geq 1$.
Another way of phrasing this is that $a^2(1), a^1(2), a^2(3), a^1(4), a^2(5), a^1(6), \cdots $ is the Fibonacci sequence. The first equations above then determine the remaining numbers $a^j(r)$.
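These recursion rules are immediate to iterate on a computer; the short sketch below (notation ours) does so and exhibits the Fibonacci pattern just described.

```python
def two_row_dims(rmax):
    """Sketch: iterate the recursion rules above for the dimensions
    a^1(r), a^2(r) of the two-row simple modules when p = 5."""
    a1, a2 = {1: 0}, {1: 1}
    for r in range(2, rmax + 1):
        if r % 2 == 0:
            # a^1(2s) = a^1(2s-1) + a^2(2s-1),  a^2(2s) = a^2(2s-1)
            a1[r] = a1[r - 1] + a2[r - 1]
            a2[r] = a2[r - 1]
        else:
            # a^1(2s+1) = a^1(2s),  a^2(2s+1) = a^1(2s) + a^2(2s)
            a1[r] = a1[r - 1]
            a2[r] = a1[r - 1] + a2[r - 1]
    return a1, a2
```

The interleaved values $a^2(1), a^1(2), a^2(3), a^1(4), \cdots$ come out as $1, 1, 2, 3, 5, 8, \cdots$, as claimed, and the columns $\lambda^1, \lambda^2$ of Table 1 are recovered.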
Again we leave it to the reader to find the similar recursion for the dimensions of the simple modules corresponding to the $\mu^j(r)$'s. Apart from the fact that the recursion rules coincide, we see no obvious representation theoretic explanation for the ``symmetry" between the numbers involving $\lambda$'s and those involving $\mu$'s.
\end{examplecounter}
\vskip 1cm
\section{Semisimple quotients of the Brauer algebras} \label{Brauer}
In this section we shall apply our results from Section 2 to the symplectic and orthogonal groups. This will allow us via Schur--Weyl duality for these groups to obtain certain semisimple quotients of the Brauer algebras over $k$ and to give an algorithm for finding the dimensions of the corresponding simple modules.
The Brauer algebra $\mathcal B_r(\delta)$ with parameter $\delta \in k$ may be defined via generators and relations, see e.g. \cite[Introduction]{DDH}. Alternatively, we may consider it first as
just a vector space over $k$ with basis consisting of the so-called Brauer diagrams with $r$ strands. Then one defines a multiplication of two such diagrams by stacking the second diagram on top of the first, see e.g. \cite{B} or \cite[Section 4]{GL}. This gives an algebra structure on $\mathcal B_r(\delta)$.
We have Brauer algebras for an arbitrary parameter $\delta \in k$. However, the ones that are connected with our endomorphism algebras are those where $\delta$ is the image in $k$ of an integer, i.e. where $\delta$ belongs to the prime field $\Fu \subset k$. It follows from the various versions of the Schur--Weyl duality (see below) that in this case $\mathcal B_r(\delta)$ surjects onto the endomorphism algebra of the $r$'th tensor power of the natural modules for appropriate symplectic and orthogonal groups.
\subsection{Quotients arising from the symplectic groups} \label{quotients sp}
We shall use the notation from Section \ref{Sp}. In particular, $V$ will be a $2n$-dimensional symplectic vector space, which we from now on denote $V_n$. We set $G_n = SP(V_n)$ and $E_n^r = \End_{G_n}(V_n^{ \otimes r})$.
Consider now the fusion summand $V_n^{\underline \otimes r}$ of $V_n^{\otimes r}$ with endomorphism ring $\overline E_n^r = \End_{G_n}(V_n^{\underline \otimes r})$. Then exactly as in Proposition \ref{surj GL} we obtain:
\begin{prop} \label{quotients Sp} For all $n$ and $r$ the algebra $\overline E_n^r$ is a semisimple quotient of $E_n^r$.
\end{prop}
Recalling the description of $A_m(p)$ from Section 2.3 and using Remark 3 we see that $\overline E_n^r = 0$ unless $2n \leq p-1$.
In contrast with the $GL(V)$ case we usually do not have $\overline E_1^r = k$. In fact, $G_1 = SL_2(k)$ and the tensor powers of the natural module for $G_1$ therefore typically have many summands. On the other hand, the top non-zero term is always equal to $k$:
\begin{prop}
$\overline E_{(p-1)/2}^r = k$ for all $r$.
\end{prop}
\begin{proof}
In this proof we drop the subscript $ {(p-1)/2}$ on $V$ and $\Delta$. We have that $V \otimes V $ has a $\Delta$-filtration with factors $\Delta(2 \epsilon_1), \Delta(\epsilon_1 + \epsilon_2)$ and $\Delta(0) = k$. The first two of these have highest weights on the upper wall of $A_{(p-1)/2}(p)$ whereas the highest weight $0$ of the last term belongs to $A_{(p-1)/2}(p)$. It follows that
$V \underline \otimes V = k$. Hence,
$$V^{\underline \otimes r} = \begin{cases} V \text { if $r$ is odd,} \\ k \text { if $r $ is even.} \end{cases}$$
The claim follows.
\end{proof}
The analogue of Proposition \ref{inductive formula} is
\begin{prop} \label{inductive formula Sp}
Let $m \geq 1$ and suppose $\lambda \in A_m(p)$. Then
$$(V_m^{\otimes r}: T_m(\lambda)) = \sum_{i: \lambda \pm \epsilon_i \in A_m(p)} (V_m^{\otimes (r-1)}: T_m(\lambda \pm \epsilon_i)).$$
\end{prop}
\begin{proof} As $\epsilon_1$ is minuscule, we have for any $\lambda \in X_m^+$ that the $\Delta$-factors in $\Delta(\lambda) \otimes V_m$ are those with highest weights $\lambda + \mu$ where $\mu$ runs through the weights of $V_m$ (ignoring possible $\mu$'s for which $\lambda + \mu$ belongs to the boundary of $X_m^+$). Likewise, if $\lambda \in A_m(p)$ then the same highest weights all belong to the closure of $A_m(p)$. Hence the fusion product $\Delta(\lambda) \underline \otimes V_m$ is the direct sum of all $\Delta_m(\lambda + \mu)$ for which $\lambda + \mu \in A_m(p)$. As the possible $\mu$'s are the $\pm \epsilon_i$ (each having multiplicity $1$) we get the formula.
\end{proof}
Recall now the Schur--Weyl duality theorem for $SP(V)$, see \cite{DDH}.
\begin{thm} \label{Schur-Weyl Sp} There is an action of $\mathcal B_r(-2n)$ on $V_n^{\otimes r}$ which commutes with the action of $G_n$. The corresponding
homomorphism $\mathcal B_r(-2n) \rightarrow E_n^r$ is surjective for all $n$ and for $n\geq r$ it is an isomorphism.
\end{thm}
The simple modules for $\mathcal B_r(\delta)$ are parametrized by the $p$-regular partitions of $r, r-2, \cdots$, see \cite[Section 4]{GL}, and we shall denote them $D_{\mathcal B_r(\delta)}(\lambda)$.
This parametrization holds for any $\delta \in k$. However, in this section we only consider the case where $\delta$ is the image in $k$ of a negative even number. We identify $\delta$ with an integer in $[0, p-1]$.
Assume $\delta$ is odd and define the following subsets of weights
$$ \overline \Lambda^r(\delta, p) = (\Lambda^r \cup \Lambda^{r-2} \cup \cdots ) \cap A_{(p-\delta)/2}(p).$$
So if $\delta < p-2$ then $\overline \Lambda^r(\delta, p)$ consists of partitions $\lambda = (\lambda_1, \lambda_2, \cdots , \lambda_{(p-\delta)/2})$ with $|\lambda| = r - 2i$ for some $i \leq r/2$ which satisfy $\lambda_1 + \lambda_2 \leq \delta$. On the other hand,
$\overline \Lambda^r(p-2, p) = \{(r-2i) | r-p+2 \leq 2i \leq r\}$.
Note that all partitions in $\overline \Lambda^r(\delta, p)$ are $p$-regular.
\begin{thm} \label{main brauer Sp}
Let $r > 0$ and consider an odd number $\delta \in [0,p-1]$. Suppose $\lambda \in \overline \Lambda^r(\delta, p)$. Then the dimension of the simple $\mathcal B_r(\delta)$-module $D_{\mathcal B_r(\delta)}(\lambda)$ is recursively determined by
$$ \dim D_{\mathcal B_r(\delta)}(\lambda) = \sum_{i: \lambda \pm \epsilon_i \in \overline \Lambda^{(r-1)}(\delta, p)} \dim D_{\mathcal B_{r-1}(\delta)}(\lambda \pm \epsilon_i).$$
\end{thm}
\begin{proof} Combining Theorem \ref{Schur-Weyl Sp} with Proposition \ref{quotients Sp} we see that $\overline E_{(p-\delta)/2}^r$ is a semisimple quotient of $\mathcal B_r(\delta)$. Then the theorem follows from Proposition \ref{inductive formula Sp} by recalling that the dimensions of the simple modules for
$\overline E_{(p-\delta)/2}^r$ coincide with the tilting multiplicities in $V_{(p-\delta)/2}^{\otimes r}$, see (\refeq{dim simple/tilting}).
\end{proof}
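As in the symmetric group case, this recursion is straightforward to run on a computer. The Python sketch below (names ours) adds or removes a single box inside the symplectic alcove, following Theorem \ref{main brauer Sp}; here $m = (p-\delta)/2$ and $\lambda$ is padded with zeros to an $m$-tuple.

```python
from functools import lru_cache

def dim_brauer_sp(p, delta, lam, r):
    """Sketch: dimension of the simple B_r(delta)-module D(lam) for odd
    delta, via the recursion of Theorem `main brauer Sp`.  Here
    m = (p - delta)//2 and lam is padded with zeros to an m-tuple."""
    m = (p - delta) // 2
    lam = tuple(lam) + (0,) * (m - len(lam))

    def ok(mu):
        # dominant (weakly decreasing, non-negative) and in the alcove:
        # mu_1 + mu_2 <= delta (read mu_2 = 0 when m = 1)
        return (all(mu[i] >= mu[i + 1] for i in range(m - 1))
                and mu[-1] >= 0
                and mu[0] + (mu[1] if m > 1 else 0) <= delta)

    @lru_cache(maxsize=None)
    def dim(mu, s):
        if s == 0:
            return 1 if not any(mu) else 0
        # add or remove one box in every admissible way and recurse
        return sum(dim(nu, s - 1)
                   for i in range(m) for e in (1, -1)
                   for nu in [mu[:i] + (mu[i] + e,) + mu[i + 1:]]
                   if ok(nu))

    return dim(lam, r)
```

For $p = 7$ this reproduces Table 2 below; for instance $\dim D_{\mathcal B_{10}(5)}((2)) = 89$ and $\dim D_{\mathcal B_5(3)}((2,1)) = 11$.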
\begin{remarkcounter} If $n \equiv (p-\delta)/2\; (\mo p)$ for some odd number $\delta \in [0,p-1]$, then $-2n \equiv \delta$. Hence, the theorem describes a class of simple modules for $\mathcal B_r(-2n)$ for all such $n$.
\end{remarkcounter}
\begin{examplecounter} \label{brauer p=7}
Consider $p = 7$. Then the relevant $\delta$'s are $5, 3$ and $1$. The weight set $\overline \Lambda^r(5,7)$ contains $3$ elements (except for $r < 4$ where there are fewer) $\lambda^1(r), \lambda^2(r), \lambda^3(r)$ listed in descending order, namely $(4), (2), (0)$, when $ r$ is even, and $(5), (3), (1)$, when $r$ is odd. Likewise, $\overline \Lambda^r(3,7)$ contains $3$ elements (except for $r = 1$) $\mu^1(r), \mu^2(r), \mu^3(r)$ listed in descending order, namely $(2,0), (1,1), (0,0)$, when $ r$ is even, and $(3,0), (2,1), (1,0)$, when $r$ is odd. Finally, $\overline \Lambda^r(1,7)$ consists of a unique element $\nu(r)$, namely $\nu(r) = (0,0,0)$, when $r$ is even, and $\nu(r) =(1,0,0)$, when $r$ is odd.
In Table 2 we have listed the dimensions of the simple modules for $\mathcal B_r(\delta) $ for $r \leq 10$. These numbers are computed recursively using Theorem \ref{main brauer Sp}.
\end{examplecounter}
\eject
\centerline {
{ \it Table 2. Dimensions of simple modules for $\mathcal B_r(\delta)$ when $p= 7$ and $\delta = 5, 3, 1$.
}}
\vskip .5cm
\centerline {
\begin{tabular}{r|ccc|ccc|c}
$\delta$ & & 5 & & & 3 & & 1 \\ \hline
r & $\lambda^1(r)$ & $\lambda^2(r)$ & $\lambda^3(r)$ & $\mu^1(r)$ & $\mu^2(r)$ & $\mu^3(r)$ & $\nu(r)$ \\ \hline
1 & & & 1 & & & 1 & 1 \\
2 & & 1 &1 & 1 & 1 & 1 & 1 \\
3 & & 1& 2& 1 & 2 & 3 & 1 \\
4 & 1 &3 & 2 & 6 & 5 & 3& 1 \\
5 &1 & 4 & 5 & 6 & 11 &14 & 1\\
6 & 5 & 9& 5& 31&25 &14 & 1 \\
7 &5 &14 &14 &31& 56 & 70 & 1\\
8 & 19& 28&14 &157 & 126 &70 & 1\\
9 & 19 &47&42&157 & 283 & 353 & 1\\
10 & 66 &89&42&793 & 636 & 353 & 1\\
\end{tabular}}
\vskip 1 cm
\subsection{Quotients arising from the orthogonal groups}
In this section we consider the orthogonal groups. Again we shall see that the very same methods as we used for general linear groups in Section 4 apply in this case.
We shall use the notation from Section \ref{O}. In particular, $V$ will be a vector space with a non-degenerate symmetric bilinear form. If $\dim V$ is odd, we write $\dim V = 2n +1$ and set $V_n = V$ and $G_n = O(V)$. Likewise, if $\dim V$ is even, we write $\dim V = 2n$ and set $V_n = V$ and $G_n = O(V_n)$. In both cases we denote by $E_n^r$ the endomorphism algebra $ \End_{G_n}(V_n^{ \otimes r})$ and by $\overline E_n^r$ the algebra $\End_{G_n}(V_n^{\underline \otimes r})$.
As in the general linear and the symplectic case we have:
\begin{prop} \label{quotients O} For all $n$ and $r$ the algebra $\overline E_n^r$ is a semisimple quotient of $E_n^r$.
\end{prop}
Recalling the description of $A_m(p)$ from Section \ref{O} we observe:
\begin{remarkcounter} \label{orthogonal barE} \begin{enumerate}
\item [a)] By Remarks \ref{A for type B} and \ref{A for type D} we have $\overline E_n^r = 0$ unless $2n < p-2$ in the odd case, respectively $2n < p+2$ in the even case.
\item [b)] In the even case we get $\overline E_{(p+1)/2}^r = k$ for all $r$ using the same argument as in the symplectic case. On the other hand, this argument does not apply to the odd case, where in fact the highest term $\overline E_{(p-3)/2}^r$ is usually not $k$ (this is illustrated in Example \ref{brauer2 p=7} below).
\end{enumerate}
\end{remarkcounter}
The Schur--Weyl duality for orthogonal groups \cite[Theorem 1.2]{DH} says in particular:
\begin{thm} \label{Schur-Weyl O} Set $\delta = \dim V_n$. There is an action of $\mathcal B_r(\delta)$ on $V_n^{\otimes r}$ which commutes with the action of $G_n$. The corresponding
homomorphism $\mathcal B_r(\delta) \rightarrow E_n^r$ is surjective for all $n$.
\end{thm}
\begin{remarkcounter}
The Schur--Weyl duality for orthogonal groups gives rise to isomorphisms for large enough $n$, see e.g. \cite[Section 3.4]{AST2}. We shall not need this here.
\end{remarkcounter}
We now divide our discussion into the odd and even cases.
\subsubsection{Type B}
In the odd case where $G_n$ has type $B_n$ our methods lead to the higher Jones quotient $\overline E_m^r$ of $\mathcal B_r(2m+1)$ for $1 \leq m \leq (p-3)/2$. Noting that the Brauer algebras in question are those with an odd $\delta $ lying between $3$ and $p-2$ which we have already dealt with in Section \ref{quotients sp}, we shall leave most details to the reader. However, we do want to point out that the inductive formula for
the dimensions of the simple modules for $\overline E_n^r$ in this case is more complicated than in the symplectic case. The reason is that for type $B$ the highest weight for the natural module is not minuscule. This means that instead of the direct analogue of Proposition \ref{inductive formula} we need to use the following general formula (with notation as in Section 2).
\begin{thm} (\cite[Equation 3.20(1)]{AP}).
Let $G$ be an arbitrary reductive group over $k$ and suppose $Q$ is a tilting module for $G$. If $\lambda$ is a weight belonging to the bottom dominant alcove $A(p)$, then
$$ (Q:T(\lambda)) = \sum_{w} (-1)^{\ell (w)} (Q:\Delta(w \cdot \lambda)),$$
where the sum runs over those $w \in W_p$ for which $w \cdot \lambda \in X^+$.
\end{thm}
\begin{examplecounter} \label{brauer2 p=7} Consider $p =7$. Then type $B$ leads to higher Jones algebras of $\mathcal B_r(3)$ and $\mathcal B_r(5)$. The reader may check that the recursively derived dimensions for the class of simple modules in these cases match (with proper identification of the labeling) with those listed in Table 2. Note in particular that to get those for $\mathcal B_r(3)$ we need to decompose $V_1^{\underline \otimes r}$ into simple modules for $G_1$. The Lie algebra for $G_1$ is $\mathfrak{sl}_2$ and the natural $G_1$-module $V_1$ identifies with the simple $3$-dimensional $SL_2$-module.
\end{examplecounter}
\subsubsection{Type D}
In the even case $G_n$ has type $D$. The module $V_n$ equals $\Delta_n(\epsilon_1)$ and its highest weight $\epsilon_1$ is minuscule. This means that we have
\begin{prop} \label{inductive formula O}
Let $n \geq 1$ and suppose $\lambda \in A_n(p)$. Then
$$(V_n^{\otimes r}: T_n(\lambda)) = \sum_{i: \lambda \pm \epsilon_i \in A_n(p)} (V_n^{\otimes (r-1)}: T_n(\lambda \pm \epsilon_i)).$$
\end{prop}
\begin{proof} Completely analogous to the proof of Proposition \ref{inductive formula Sp}.
\end{proof}
Assume now $\delta \in [2, p+1]$ is even and define the following subsets of weights
$$ \overline \Lambda^r(2, p) = \{(r-2i) | 0 \leq r-2i \leq p-2\},$$
$$ \overline \Lambda^r(4, p) =\{(\lambda_1, \lambda_2) \in X_2^+ | (\lambda_1,|\lambda_2|) \in \Lambda^{r-2i} \text { for some $i$ with } 0 \leq r-2i \leq p-2\}, $$
and for $\delta > 4$
$$ \overline \Lambda^r(\delta, p) = \{ (\lambda_1, \lambda_2, \cdots , \lambda_{\delta/2}) \in X_{\delta/2}^+ | (\lambda_1, \cdots , |\lambda_{\delta/2}|) \in \Lambda^{r-2i} \text { for some } i \leq r/2 \text { and } \lambda_1 + \lambda_2 \leq p-\delta + 2\}. $$
\vskip .3 cm
\begin{thm} \label{main brauer O}
Let $r > 0$ and consider an even number $\delta \in [2,p+1]$. Suppose $\lambda \in \overline \Lambda^r(\delta, p)$. Then the dimension of the simple $\mathcal B_r(\delta)$-module $D_{\mathcal B_r(\delta)}(\lambda)$ is recursively determined by
$$ \dim D_{\mathcal B_r(\delta)}(\lambda) = \sum_{i: \lambda \pm \epsilon_i \in \overline \Lambda^{(r-1)}(\delta, p)} \dim D_{\mathcal B_{r-1}(\delta)}(\lambda \pm \epsilon_i).$$
\end{thm}
\begin{proof} Combining Theorem \ref{Schur-Weyl O} with Proposition \ref{quotients O} we see that $\overline E_{\delta/2}^r$ is a semisimple quotient of $\mathcal B_r(\delta)$. Then the theorem follows from Proposition \ref{inductive formula O} by recalling that the dimensions of the simple modules for
$\overline E_{\delta/2}^r$ coincide with the tilting multiplicities in $V_{\delta/2}^{\otimes r}$, see (\refeq{dim simple/tilting}).
\end{proof}
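Here is the corresponding sketch for the type $D$ recursion (names ours). Compared with the symplectic case one only changes the dominance condition, $\lambda_1 \geq \cdots \geq \lambda_{m-1} \geq |\lambda_m|$ (the last coordinate may be negative), and the alcove bound; we implement the bound as $\lambda_1 + |\lambda_2| \leq p - \delta + 2$, which for $m \geq 3$ agrees with $\lambda_1 + \lambda_2 \leq p - \delta + 2$ and for $m = 2$ matches the constraint $r - 2i \leq p - 2$ in the definition of $\overline \Lambda^r(4,p)$.

```python
from functools import lru_cache

def dim_brauer_o(p, delta, lam, r):
    """Sketch: dimension of the simple B_r(delta)-module D(lam) for even
    delta >= 4, via the recursion of Theorem `main brauer O`.  Here
    m = delta//2 and lam is an m-tuple whose last entry may be negative."""
    m = delta // 2
    lam = tuple(lam) + (0,) * (m - len(lam))

    def ok(mu):
        # dominant for type D_m: mu_1 >= ... >= mu_{m-1} >= |mu_m|,
        # and the alcove bound mu_1 + |mu_2| <= p - delta + 2
        return (all(mu[i] >= mu[i + 1] for i in range(m - 2))
                and mu[m - 2] >= abs(mu[m - 1])
                and mu[0] + abs(mu[1]) <= p - delta + 2)

    @lru_cache(maxsize=None)
    def dim(mu, s):
        if s == 0:
            return 1 if not any(mu) else 0
        # add or remove one box (last coordinate may go negative)
        return sum(dim(nu, s - 1)
                   for i in range(m) for e in (1, -1)
                   for nu in [mu[:i] + (mu[i] + e,) + mu[i + 1:]]
                   if ok(nu))

    return dim(lam, r)
```

This reproduces the tables below, e.g. for $p = 7$, $\delta = 4$ it gives $\dim D_{\mathcal B_4(4)}((2,0)) = 9$, and for $p = 11$, $\delta = 10$ it gives $\dim D_{\mathcal B_4(10)}((2,0,0,0,0)) = 6$.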
\begin{remarkcounter} If $n \equiv \delta/2 \; (\mo \; p)$ for some even number $\delta \in [2,p+1]$, then $2n \equiv \delta \; (\mo \;p)$. Hence, the theorem describes a class of simple modules for $\mathcal B_r(2n)$ for all such $n$.
\end{remarkcounter}
\begin{examplecounter} Consider $p = 7$. Then the relevant $\delta$'s are $2, 4, 6, 8$. By Remark \ref{orthogonal barE}b we have that the higher Jones quotient algebra for $\mathcal B_r(8)$ is the trivial algebra $k$ (alternatively, observe that $\mathcal B_r(8) = \mathcal B_r(1)$ which we dealt with in Example \ref{brauer p=7}). At the other extreme the (higher) Jones quotient of $\mathcal B_r(2)$ is also a quotient of the Temperley--Lieb algebra $TL_r(2)$. This case is dealt with in \cite[Proposition 6.4]{A17}. So here we only consider the two remaining cases $\delta = 4$ and $\delta = 6$. We have
$$ \overline \Lambda^1(4,7) = \{(1,0)\},$$
$$ \overline \Lambda^2(4,7) = \{(2,0), (1,1), (1,-1), (0,0)\},$$
$$ \overline \Lambda^3(4,7) = \{(3,0), (2,1), (2,-1), (1,0)\},$$
$$ \overline \Lambda^r(4,7) = \begin{cases} \{(4,0), (2,2), (2,-2), (3,1), (3,-1), (2,0), (1,1), (1,-1), (0,0)\} \text { if $r \geq 4$ is even,} \\ \{(5,0), (3,2), (3,-2), (4,1), (4,-1), (3,0), (2,1), (2,-1), (1,0)\} \text { if $r \geq 5$ is odd.} \end{cases}$$
In Table 3 we have denoted these weights $\lambda^1(r), \cdots , \lambda^9(r)$.
Likewise, we have
$$ \overline \Lambda^1(6,7) = \{(1,0,0))\},$$
$$ \overline \Lambda^2(6,7) = \{(2,0,0), (1,1,0), (0,0,0)\},$$
$$ \overline \Lambda^r(6,7) = \begin{cases} \{(3,0,0), (2,1,0), (1,1,1), (1,1,-1), (1,0,0)\} \text { if $r \geq 3$ is odd,} \\ \{(2,1,1), (2,1,-1), (2,0,0), (1,1,0), (0,0,0)\} \text { if $r \geq 4$ is even.} \end{cases}$$
In Table 3 we have denoted these weights $\mu^1(r), \cdots , \mu^5(r)$. In this table we have then listed the dimensions (computed via the algorithm in Theorem \ref{main brauer O}) for the simple modules for $\mathcal B_r(4)$, respectively $\mathcal B_r(6)$ corresponding to these sets of weights for $r \leq 10$.
\eject
\centerline {{ \it Table 3. Dimensions of simple modules for $\mathcal B_r(\delta)$ when $p= 7$ and $\delta = 4 $ and $6$.}}
\vskip .5cm
\noindent
\begin{tabular}{ r| c c c c c c c c c | c c c c c c}
&&& &&$\delta =4$&&&&&&&$\delta = 6$&& \\ \hline
r & $\lambda^1(r)$ & $\lambda^2(r)$ & $\lambda^3(r)$&$\lambda^4(r)$&$\lambda^5(r)$&$\lambda^6(r)$&$\lambda^7(r)$&$\lambda^8(r)$&$\lambda^9(r)$ & $\mu^1(r)$& $\mu^2(r)$ & $\mu^3(r)$ &$\mu^4(r)$ & $\mu^5(r)$ \\ \hline
1 & & & & &&&&&1 & & &&& 1& \\
2 &&&&&&1 & 1 & 1 & 1 & &&1& 1 & 1 \\
3 &&&&&&1& 2&2 & 4 & 1&2&1&1& 3 \\
4 & 1 &2 & 2 & 3 & 3 & 9&6&6&4& 2 & 3 & 6 & 7 & 3 \\
5 &1 & 5 & 5 & 4 & 4& 16 &20 & 20 &25 &6 & 18 &9&10&16 \\
6 & 25& 25& 25& 45 & 45&81 & 45 & 45 & 25 &27&28&40&53 & 16& \\
7 &25 &70 &70 &70& 70 & 196& 196 & 196 & 196 &40&148&80 &81 &109 \\
8 & 361 & 266 & 266 & 532 & 532 & 784 & 392 & 392 & 196 & 228 & 229 & 297 & 418 & 109 \\
9 & 361&798& 798 & 893 & 893 & 2209 & 1974 & 1974 & 1764 &297&1172&646&647&824 \\
10 & 4356 & 2772 & 2772 & 5874 & 5874 & 7921 & 3738 & 3738 & 1764 &1828&1829&2293&3289&824 \\
\end{tabular}
\vskip .5 cm
Together with Example \ref{brauer p=7} this example gives a class of simple modules for Brauer algebras with parameter $\delta$ equal to any non-zero element of $\mathbb{F}_7$.
\end{examplecounter}
Note that in the above example we were in type $D_1 = A_1$, $D_2 = A_1 \times A_1$ or $D_3 = A_3$, and we could have deduced the results from the type $A$ case treated in Section 4. We shall now give another example illustrating type $D_n$ computations with $n > 3$.
\begin{examplecounter}
Consider $p = 11$ and take $\delta = 10$. Then $\mathcal B_r(10)$ has the higher Jones quotient $\overline E_5^r$. If $r \geq 5$ the weight set $\overline \Lambda^r(10, 11)$ contains $7$ elements, namely
$$ \{(1,1,1,1,-1), (2,1,1,1,0), (1,1,1,1,1), (3, 0,0,0,0), (2,1,0,0,0), (1,1,1,0,0), (1,0,0,0,0)\}$$
when $r$ is odd, and
$$ \{(2,1,1,1, -1), (2,1,1,1,1), (2,1,1,0,0), (1,1,1,1,0), (2,0,0,0,0), (1,1,0,0,0), (0,0,0,0,0)\}$$
when $r$ is even.
If $r \in \{1,2,3,4\}$, the set $\overline \Lambda^r(10, 11)$ consists of the last element, the last $3$, the last $4$, and the last $5$ elements, respectively, of the above lists.
In Table 4 we have listed the dimensions of the corresponding simple modules for $\mathcal B_r(10)$ for $r \leq 10$ using Theorem \ref{main brauer O}. We have denoted the above $7$ weights $\lambda^1(r), \cdots , \lambda^7(r)$ (in the given order).
\end{examplecounter}
\eject
\centerline{
{ \it Table 4. Dimensions of simple modules for $\mathcal B_r(10)$ when $p= 11$.
}}
\vskip .5cm
\centerline {
\begin{tabular}{ r| c c c c c c c|ccc}
r & $\lambda^1(r)$ & $\lambda^2(r)$ & $\lambda^3(r)$ & $\lambda^4(r)$ & $\lambda^5(r)$ & $\lambda^6(r)$ &$\lambda^7(r)$ \\ \hline
1& & & & & & & 1& \\
2 & && & & 1 & 1 & 1 & \\
3 && & & 1& 2 & 1 & 3 & \\
4 & &&3 & 1 & 6 & 6 & 3& \\
5 &1&4 & 1 & 6 & 15 & 10 &15 & \\
6 & 5&5 & 29& 16& 36&40 &15 & \\
7 &21 &55 &21 &36 &105& 85 & 91 & \\
8 &76 & 76& 245&97 &232 & 281 &91 & \\
9 &173& 494 &173&232&568 & 623 & 604& \\
10 &667& 667 &1685&840&1404 & 1795 & 604 & \\
\end{tabular}}
\vskip 1 cm
\section{Quantum Groups}
In the remaining sections $k$ will denote an arbitrary field.
Let $\mathfrak g$ denote a simple complex Lie algebra. Then there is a quantum group $U_q = U_q(\mathfrak g)$ (a quantized enveloping algebra over $k$) associated with $\mathfrak g$. We shall be interested in the case where the quantum parameter $q$ is a root of unity in $k$, and we want to emphasize that the quantum group we are dealing with is the Lusztig version defined via $q$-divided powers, see e.g. \cite[Section 0]{APW}. This means that we start with the ``generic" quantum group $U_v = U_v(\mathfrak g)$ over $\Q(v)$ where $v$ is an indeterminate. Then we consider the $\Z[v,v^{-1}]$-subalgebra $U_{\Z[v,v^{-1}]}$ of $U_v$ generated by the quantum divided powers of the generators for $U_v$. When $q \in k \setminus \{0\}$ we make $k$ into a $\Z[v,v^{-1}]$-algebra by specializing $v$ to $q$ and define $U_q$ as $U_q = U_{\Z[v,v^{-1}]} \otimes_{\Z[v,v^{-1}]} k$. This construction, of course, makes sense for arbitrary $q$, but if $q$ is not a root of unity all finite-dimensional $U_q$-modules are semisimple and our results are trivial. So in the following we always assume that $q$ is a root of unity, and we denote by $\ell$ the order of $q$. When $\ell \in \{2, 3, 4, 6\}$ the (quantum) higher Jones algebras we introduce turn out to be trivial ($0$ or $k$) for all $\mathfrak g$, so we ignore these cases. We set $\ell' = \ord(q^2)$, i.e. $\ell' = \ell$, if $\ell$ is odd, and $\ell' = \ell/2$, if $\ell$ is even.
In this section we shall very briefly recall some of the key facts about $U_q$ and its representations relevant for our purposes. As the representation theory for $U_q$ is in many ways similar to the modular representation theory for $G$ that we have been dealing with in the previous sections, we shall leave most details to the reader. However, we want to emphasize one difference: if the root system associated with $\mathfrak g$ has two different root lengths then the case of even $\ell$ is quite different from the odd case (the affine Weyl groups in question are not the same). This phenomenon is illustrated in \cite[Section 6]{AS} where the fusion categories for type $B$ as well as the corresponding fusion rules visibly depend on the parity of $\ell$. The difference will also be apparent in Section 6.3 below where for instance the descriptions of the bottom dominant alcoves in the type $C$ case considered there depend on the parity of $\ell$.
Again, we start out with the general case and then specialize first to the general linear quantum groups, and then to the symplectic quantum groups. We omit treating the case of quantum groups corresponding to the orthogonal Lie algebras, because of the lack of a general version of Schur--Weyl duality in that case.
\subsection{Representation theory for Quantum Groups}
We have a triangular decomposition $U_q = U_q^- U_q^0U_q^+$ of $U_q$. If $n$ denotes the rank of $\mathfrak g$, then we set $X = \Z^n$ and identify each $\lambda \in X$ with a character of $U_q^0$ (see e.g. \cite[Lemma 1.1]{APW}). These characters extend to $B_q = U_q^-U_q^0$ giving us the $1$-dimensional $B_q$-modules $k_\lambda, \lambda \in X$. As in Section 1.1 we denote by $R$ the root system for $\mathfrak g$ and consider $R$ as a subset of $X$. The set $S$ of simple roots corresponds to the generators of $U_q^+$ and we define the dominant cone $X^+ \subset X$ as before. The Weyl group $W$ is still the group generated by the reflections $s_{\alpha}$ with $\alpha \in S$.
Define the bottom dominant alcove in $X^+$ by
$$ A(\ell) = \begin{cases} \{\lambda \in X^+ | \langle \lambda + \rho, \alpha_0^{\vee} \rangle < \ell \} \text { if $\ell$ is odd,} \\ \{\lambda \in X^+ | \langle \lambda + \rho, \beta_0^{\vee} \rangle < \ell' \} \text { if $\ell$ is even.} \end{cases}$$
Here $\alpha_0$ is the highest short root and $\beta_0$ is the highest long root.
The affine Weyl group $W_\ell$ for $U_q$ is then the group generated by the reflections in the walls of $A(\ell)$. Note that, when $\ell$ is odd, $W_\ell$ is the affine Weyl group (scaled by $\ell$) associated with the dual root system of $R$, whereas if $\ell$ is even, $W_\ell$ is the affine Weyl group (scaled by $\ell'$) for $R$, cf. \cite[Section 3.17]{AP}.
Suppose $\lambda \in X^+$. Then we have modules $\Delta_q(\lambda), \nabla_q(\lambda), L_q(\lambda)$ and $T_q(\lambda)$ completely analogous to the $G$-modules in Section 2 with the same notation without the index $q$.
The quantum linkage principle (see \cite{A03}) implies that if $L_q(\mu)$ is a composition factor of $\Delta_q(\lambda)$, then $\mu$ is strongly linked (by reflections from $W_\ell$) to $\lambda$. Likewise, if $\Delta_q(\mu)$ occurs in a Weyl filtration of $T_q(\lambda)$, then $\mu$ is strongly linked to $\lambda $.
The quantum linkage principle then gives the identities
$$ \Delta_q(\lambda) = \nabla_q(\lambda) = L_q(\lambda) = T_q(\lambda) \text { for all } \lambda \in A(\ell),$$
which will be crucial for us in the following.
Suppose $Q$ is a general tilting module for $U_q$. Imitating the definitions in Section \ref{Fusion} we define the fusion summand and the negligible summand of $Q$ as follows
$$ Q^{\F}= \bigoplus_{\lambda \in A(\ell)} T_q(\lambda)^{(Q:T_q(\lambda))} \text { and } Q^{\Ne} = \bigoplus_{\lambda \in X^+\setminus A(\ell)}T_q(\lambda)^{(Q:T_q(\lambda))}.$$
The exact same arguments as in the modular case then give us the quantum analogue of Theorem \ref{fusion-quotient}.
\begin{thm} \label{q-fusion-quotient}
Let $Q$ be an arbitrary tilting module for $U_q$. Then the natural map $\phi: \End_{U_q}(Q) \rightarrow \End_{U_q}(Q^{\F})$ is a surjective algebra homomorphism. The kernel of $\phi$ equals
$$\{h \in \End_{U_q}(Q) | \Tr_q(i_\lambda \circ h \circ \pi_\lambda) = 0 \text { for all } i_\lambda \in \Hom_{U_q}( T_q(\lambda), Q), \; \pi_\lambda \in \Hom_{U_q}(Q, T_q(\lambda)),\; \lambda \in X^+\}.$$
\end{thm}
We also have a quantum fusion category (still denoted $\mathcal F$) and a fusion tensor product $\underline \otimes$ on it, see \cite[Section 4]{A92}. This leads to an analogue of Corollary 2.4.
\begin{cor} \label{q-fusion} Let $T$ be an arbitrary tilting module for $U_q$. Then for any $r \in \Z_{\geq 0}$ the natural homomorphism $\End_{U_q}(T^{\otimes r}) \rightarrow \End_{U_q}(T^{\underline \otimes r})$ is surjective.
\end{cor}
All of the above easily adapts to the case where we replace the simple Lie algebra $\mathfrak g$ by the general linear Lie algebra $\mathfrak {gl}_n$, and we shall explore this case further in the next section.
Finally, the cellular algebra theory recalled in Section \ref{cellular} carries over verbatim. Alternatively, use the quantum framework from \cite[Section 5]{AST1} directly.
\subsection{The General Linear Quantum Group} \label{general linear q-group}
Let $n \geq 1$ and consider the general linear Lie algebra $\mathfrak {gl}_n$. The generic quantum group over $\Q(v)$ associated to $\mathfrak {gl}_n$ has a triangular decomposition in which the $0$ part identifies with a Laurent polynomial algebra $\Q(v)[K_1^{\pm 1}, \cdots , K_n^{\pm 1}]$. If $\lambda = (\lambda_1, \cdots , \lambda_n) \in X_n = \Z^n$ then $\lambda$ defines a character of this algebra which sends
$K_i$ into $v^{\lambda_i}$. In particular, the element $\epsilon_i \in X_n$ with $1$ as its $i$-th entry and $0$'s elsewhere defines the character which sends $K_i$ to $v$ and all other $K_j$'s to $1$. We then have $\lambda = \sum_i \lambda_i \epsilon_i$.
Set $U_{q,n}$ equal to the quantum group for $\mathfrak {gl}_n$ over $k$ with parameter a root of unity $q \in k$. Then we still get for $\lambda \in X_n$ a character of $U_q^0$, see \cite[Section 9]{APW}. If we denote by $V_{q,n}$ the $n$-dimensional vector representation of $U_{q,n}$, then (in analogy with the classical case) $V_{q,n}$ has weights $\epsilon_1, \cdots, \epsilon_n$, all with multiplicity $1$. Moreover, we may (for all $\ell$) identify $V_{q,n}$ with $\Delta_q(\epsilon_1) = \nabla_q(\epsilon_1) = L_q(\epsilon_1) = T_q(\epsilon_1)$.
The bottom alcove in $X_n$ is now denoted $A_n(\ell)$ and given by
$$ A_n(\ell) = \{\lambda \in X_n | \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n \text { and } \lambda_1 - \lambda_n \leq \ell' - n\}.$$
As noted above, $V_{q,n}$ is a tilting module. Hence, so are $V_{q,n}^{\otimes r}$ as well as the corresponding fusion summands $V_{q,n}^{\underline \otimes r}$ for all $r \in \Z_{\geq 0}$. We set $E_{q,n}^r = \End_{U_{q,n}}(V_{q,n}^{\otimes r})$ and $\overline E_{q,n}^r = \End_{U_{q,n}}(V_{q,n}^{\underline \otimes r})$. These endomorphism algebras are then cellular algebras, and $\overline E_{q,n}^r$ is in fact semisimple (because $V_{q,n}^{\underline \otimes r}$ is a semisimple $U_{q,n}$-module). Moreover, by Corollary \ref{q-fusion} we have:
\begin{equation} \label {E to barE}
\text {The natural homomorphism } E_{q,n}^r \rightarrow \overline E_{q,n}^r \text { is surjective.}
\end{equation}
Arguing as in Section \ref{tensor powers} we also get:
\begin{equation} \label{surj n>m}
\text {The ``restriction" homomorphisms }E_{q,n}^r \rightarrow E_{q,m}^r \text { are surjective for all } n \geq m \text { and all } r.
\end{equation}
\subsection{Quantum Symplectic Groups} \label{q-sp}
Set now $U_{q,n}$ equal to the quantum group corresponding to the simple Lie algebra $\mathfrak {sp}_{2n}$ of type $C_n$. The vector representation $V_{q,n} = \Delta_q(\epsilon_1)$ is then a tilting module for $U_{q,n}$. As in the corresponding classical case it has weights $\pm \epsilon_1, \cdots , \pm \epsilon_n$ .
The bottom alcove in $X_n$ is now denoted $A_n(\ell)$ and given by
$$ A_n(\ell) = \begin{cases} \{\lambda \in X_n | \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n \geq 0 \text { and } \lambda_1 + \lambda_2 \leq \ell - 2n \} \text { if $\ell$ is odd,}\\ \{\lambda \in X_n | \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n \geq 0 \text { and } \lambda_1\leq \ell' -n-1\} \text { if $\ell$ is even.} \end{cases}$$
In both the even and the odd case we have $A_n(\ell) \neq \emptyset$ if and only if $\ell > 2n$. In the odd case $\epsilon_1$ belongs to $A_n(\ell)$ for $n = 1, 2, \cdots , (\ell -1)/2$, whereas in the even case the same is true for $n=1, 2, \cdots , (\ell -4)/2$.
Again in this case we get (with $E_{q,n}^r = \End_{U_{q,n}}(V_{q,n}^{\otimes r})$ and $\overline E_{q,n}^r = \End_{U_{q,n}}(V_{q,n}^{\underline \otimes r})$):
\begin{equation} \label{surj E to Ebar qsp}
\text {The natural homomorphisms } E_{q,n}^r \rightarrow \overline E_{q,n}^r \text { are surjective for all $n, r$.}
\end{equation}
\section{A class of simple modules for the Hecke algebras of symmetric groups}
We continue in this section to assume that $k$ is an arbitrary field and that $q \in k$ is a root of unity of order $\ell$.
Let $r$ be a positive integer and denote by $H_r(q)$ the Hecke algebra of the symmetric group $S_r$ with parameter $q\in k$.
Using the notation from Section \ref{general linear q-group} we then have the quantum Schur--Weyl duality:
\begin{thm} \label{q-Schur-Weyl}
The Hecke algebra $H_r(q)$ acts on the tensor power $V_{q,n}^{\otimes r}$ via the quantum $R$ matrix for $U_{q,n}$. This action commutes with the $U_{q,n}$-module structure on $V_{q,n}^{\otimes r}$ giving homomorphisms $H_r(q) \rightarrow E_{q,n}^r$ which are surjective for all $n$ and isomorphisms for $n \geq r$.
\end{thm}
This is the main result of \cite{DPS}.
\begin{cor} \label{q-Jones}
Suppose $r \geq \ell$. Then the Hecke algebra $H_r(q)$ has the following semisimple quotients $\overline E_{q,1}^r, \overline E_{q,2}^r, \cdots ,\overline E_{q, \ell -1}^r$.
\end{cor}
\begin{proof}
By Theorem \ref{q-Schur-Weyl} we have $H_r(q) \simeq E_{q,r}^r$. Then the corollary follows from (\refeq{surj n>m}) and (\refeq{E to barE}).
\end{proof}
\begin{remarkcounter} \begin{enumerate}
\item
The semisimple quotients of $H_r(q)$ listed in Corollary \ref{q-Jones} are obvious generalisations of the Jones algebras introduced in \cite{ILZ}, and as explained in the introduction this is the reason why we use the name `higher Jones algebras' in this paper.
\item In analogy with
the modular case we see that $ E_{q,1}^r = k = \overline E_{q,\ell-1}^r $ for all $r$.
\end{enumerate}
\end{remarkcounter}
The simple modules for $H_r(q)$ are parametrized by the set of $\ell$-regular partitions of $r$. We denote the simple $H_r(q)$-module associated with such a partition $\lambda$ by $D_{q,r}(\lambda)$. Our aim is to derive an algorithm for computing the dimensions of a special class of simple $H_r(q)$-modules, namely those coming from the higher Jones algebras.
In analogy with the notation in Section \ref{class of simple} we set
$$\overline \Lambda^r(\ell) = \{\lambda = (\lambda_1, \lambda_2, \cdots ,\lambda_m) | \lambda \text { is a partition of $r$ and } \lambda \in A_m(\ell) \text { for some } m < \ell' \}.$$
So $\overline \Lambda^r(\ell) $ consists of those partitions of $r$ which have at most $m < \ell'$ non-zero terms and satisfy $\lambda_1 - \lambda_m \leq \ell' - m$. Clearly, the partitions in $\overline \Lambda^r(\ell)$ are all $\ell'$-regular.
The result in Proposition \ref{inductive formula} carries over unchanged to the quantum case and leads to the following analogue of Theorem \ref{main symm}.
\begin{thm} \label{main q-symm}
Let $r > 0$ and suppose $\lambda \in \overline \Lambda^r(\ell)$. Then the dimension of the simple $H_r(q)$-module $D_{q,r}(\lambda)$ is recursively determined by
$$ \dim D_{q,r}(\lambda) = \sum_{i: \lambda - \epsilon_i \in \overline \Lambda^{(r-1)}(\ell)} \dim D_{q,r-1}(\lambda - \epsilon_i).$$
\end{thm}
This theorem allows us to determine the dimensions of a class of simple modules for $H_r(q)$ just like we did for symmetric groups in Section \ref{class of simple}. The only difference is that $\ell$, in contrast to $p$, may now take any value in $\Z_{>0}$. We illustrate by a couple of examples.
\begin{examplecounter}
Let $\ell = 8$, i.e. $q$ is a root of unity of order $8$. In this case $\overline \Lambda^r(8)$ consists of the trivial partition of $r$ (corresponding to the trivial module for $H_r(q)$), the unique $3$-part partition $\nu$ of $r$ with $\nu_1 - \nu_3 \leq 1$, and the $2$-part partitions $(s+1,s-1)$ and $(s,s)$, if $r = 2s$ is even, respectively $(s, s-1)$, if $r = 2s-1$ is odd. It is easy to deduce from Theorem \ref{main q-symm} that the partitions with $2$ parts all correspond to simple $H_r(q)$-modules of dimension $2^s$.
\end{examplecounter}
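For readers who want to experiment, the recursion of Theorem \ref{main q-symm} is straightforward to run by machine. The following Python sketch implements one possible reading of the membership test for $\overline \Lambda^r(\ell)$ (trailing zeros stripped, $m$ the number of non-zero parts); the function names are ours and the boundary conventions are an assumption, so any output should be checked against the definitions above before being relied upon.

```python
from functools import lru_cache

def in_lambda_bar(parts, ell):
    """One reading of the membership test for \\bar\\Lambda^r(ell):
    require a weakly decreasing sequence with m < ell' nonzero parts
    and lambda_1 - lambda_m <= ell' - m.  (Boundary conventions are a guess.)"""
    ellp = ell if ell % 2 else ell // 2
    if any(parts[i] < parts[i + 1] for i in range(len(parts) - 1)):
        return False                      # not weakly decreasing: not a partition
    p = tuple(x for x in parts if x > 0)  # zeros are now only trailing
    m = len(p)
    if m == 0:
        return True                       # empty partition (r = 0) as base case
    return m < ellp and p[0] - p[-1] <= ellp - m

@lru_cache(maxsize=None)
def dim_D(parts, ell):
    """Dimension of D_{q,r}(lambda) via the recursion of Theorem main q-symm,
    with lambda given by its nonzero parts."""
    p = tuple(x for x in parts if x > 0)
    if sum(p) == 0:
        return 1                          # the r = 0 module is one-dimensional
    total = 0
    for i in range(len(p)):               # remove one box from row i (lambda - eps_i)
        q = p[:i] + (p[i] - 1,) + p[i + 1:]
        if in_lambda_bar(q, ell):
            total += dim_D(tuple(x for x in q if x > 0), ell)
    return total

# The one-row partition (r) always stays inside the bottom alcove,
# so its simple module is one-dimensional for every r:
print([dim_D((r,), 8) for r in range(1, 6)])   # -> [1, 1, 1, 1, 1]
```

The cached recursion terminates because each step removes a single box from the Young diagram.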
\begin{examplecounter}
Consider the case $\ell = 12$. Here $\overline \Lambda^r(12)$ consists of the trivial partition $(r)$, the unique partition $\nu$ with $5$ parts satisfying $\nu_1 - \nu_5 \leq 1$, the partitions $\lambda^1(r), \lambda^2(r), \lambda^3(r)$ with $2$ parts
$$ \{\lambda^1(r), \lambda^2(r), \lambda^3(r)\} = \begin{cases} \{(s+2, s-2), (s+1, s-1), (s,s)\} \text { if } r = 2s, \\ \{(s+2, s-1), (s+1,s)\} \text { if } r = 2s +1; \end{cases}$$
the partitions $\mu^1(r), \mu^2(r), \mu^3(r), \mu^4(r)$ with $3$ parts
$$ \{\mu^1(r), \mu^2(r), \mu^3(r), \mu^4(r)\} = \begin{cases} \{(s+2,s-1,s-1),(s+1, s+1, s-2), (s+1, s, s-1), (s,s,s)\}\\ \text { if } r = 3s, \\ \{(s+2,s, s-1), (s+1,s+1, s-1), (s+1,s,s)\} \text { if } r = 3s +1,\\ \{(s+2,s+1, s-1), (s+2,s, s), (s+1,s+1,s)\} \text { if } r = 3s +2;\\ \end{cases}$$
and the partitions $\eta^1(r), \eta^2(r), \eta^3(r)$ with $4$ parts
$$ \{\eta^1(r), \eta^2(r), \eta^3(r)\} = \begin{cases} \{(s+1,s+1,s-1,s-1), (s+1,s,s, s-1), (s,s,s,s)\} \text { if } r = 4s, \\ \{(s+1,s+1,s, s-1), (s+1,s,s, s)\} \text { if } r = 4s +1,\\ \{(s+2,s, s,s), (s+1,s+1, s+1,s-1), (s+1,s+1,s,s)\} \text { if } r = 4s +2,\\ \{(s+2,s+1,s,s), (s+1,s+1,s+1,s) \} \text { if } r = 4s + 3.\end{cases}$$
Here the listed partitions involving a zero or a negative number (these occur only for small $r$) should be deleted. In these cases, as well as in the cases where a set with only $2$ elements is listed, it is understood that the corresponding missing $\lambda$, $\mu$ or $\eta$ does not occur.
We can use Theorem \ref{main q-symm} to compute the dimensions of the simple modules for $ H_r(q)$ where $q$ is a root of unity of order $12$. In Table 5 we have listed the results for the first few values of $r$. As we know that both the trivial partition and the partition $\nu$ always correspond to simple modules of dimension $1$ we have not included these two partitions in the table.
\end{examplecounter}
\eject
\centerline{
{ \it Table 5. Dimensions of simple modules for $H_r(q)$ with $\ell = 12$.}}
\vskip .5cm
\centerline
{\begin{tabular}{ r| c c c| c c c c | c c c| c}
r & $\lambda^1(r)$ & $\lambda^2(r)$ & $\lambda^3(r)$ & $\mu^1(r)$ & $\mu^2(r)$& $\mu^3(r)$& $\mu^4(r)$ &$\eta^1(r)$ & $\eta^2(r)$ & $\eta^3(r)$ \\ \hline
1 & &&1 & & & 1 && &&1 \\
2 & & 1 &1 & &1&1 & &1&&1& \\
3 && 1 &2 & 1 & & 2 & 1 &2 &1&&\\
4 &1& 3 & 2 & 3& 2 &3&&2& 3& 1&\\
5 & 4 & 5& & 5 &6 &5 &&5&4&& \\
6 &4 &9 &5& 6 & 5 &16&5 & 4 &5& 9&\\
7 & 13& 14&&22 & 5 &21 && 13 & 14&\\
8 & 13 &27&14&27&43 & 26 & & 13 &27 &14&\\
9 & 40 &41&&43 & 27 & 96 & 26 & 40 & 41&&\\
10 & 40 &81&41&139 & 123 & 122 & & 41 & 40 & 81\\
\end{tabular}}
\section{A class of simple modules for BMW-algebras}
Let $x, v, z$ be three indeterminates and set $R = \Z[v, v^{-1}, x, z]/((1-x)z + (v - v^{-1}))$. Let $r \geq 1$ be an integer and consider the general $3$-parameter $BMW$-algebra $BMW_r(R)$ over $R$ as in \cite[Definition 3.1]{Hu}.
As an $R$-module, $BMW_r(R)$ is free of rank $(2r-1)!!$ (with basis indexed by Brauer diagrams).
As in the previous sections we denote by $k$ an arbitrary field containing a root of unity $q$ of order $\ell$. We make $k$ into an $R$-algebra by specializing $v$ to $-q^{2n+1}$, $z$ to $q-q^{-1}$, and $x$ to $1 - \sum_{i=-n}^n q^{2i}$. Then the $BMW$-algebra over $k$ that we shall work with is
$$ BMW_r(-q^{2n+1}, q) = BMW_r(R) \otimes_R k.$$
For $q = 1$ it turns out that $BMW_r(-q^{2n+1}, q)$ may be identified with the Brauer algebra $\mathcal B_r(-2n)$, see the remarks after Definition 3.1 in \cite{Hu}. We treated the Brauer algebras in Section \ref{Brauer} so in this section we shall assume $q \neq 1$.
Using the notation from Section \ref{q-sp} the quantum analogue \cite[Theorem 1.5]{Hu} of the Schur--Weyl duality for symplectic groups says:
\begin{thm} \label{qsp-Schur-Weyl}
The algebra $BMW_r(-q^{2n+1}, q)$ acts naturally on the tensor power $V_{q,n}^{\otimes r}$. This action commutes with the $U_{q,n}$-module structure on $V_{q,n}^{\otimes r}$ giving homomorphisms $BMW_r(-q^{2n+1},q) \rightarrow E_{q,n}^r$ which are surjective for all $n$.
\end{thm}
\begin{cor} \begin{enumerate}
\item If $\ell$ is odd,
then the $BMW$-algebra $BMW_r(-q^{2n+1},q)$ surjects onto the semisimple algebra $\overline E_{q,n}^r$ for $n= 1, 2, \cdots , (\ell - 1)/2$ and $r> 0$.
\item If $\ell$ is even,
then the $BMW$-algebra $BMW_r(-q^{2n+1},q)$ surjects onto the semisimple algebra $\overline E_{q,n}^r$ for $n= 1, 2, \cdots , (\ell - 4)/2$ and $r> 0$.
\end{enumerate}
\end{cor}
\begin{proof}
By Theorem \ref{qsp-Schur-Weyl}, $BMW_r(-q^{2n+1}, q)$ surjects onto $ E_{q,n}^r$ for all $n, r$, and hence also onto $\overline E_{q,n}^r$ by (\refeq {surj E to Ebar qsp}). In Section \ref{q-sp} we observed that these latter algebras are non-zero for the $n$'s listed in the corollary.
\end{proof}
Let $\lambda$ be a partition of $r-2i$ for some $i \leq r/2$. In analogy with the Brauer algebra case we denote the simple $BMW_r(-q^{2n+1}, q)$-module corresponding to $\lambda$ by $D_{BMW_r(-q^{2n+1}, q)}(\lambda)$.
Recall the definition of $A_n(\ell)$ from Section \ref{q-sp} and set for any $r>0$
$$ \overline \Lambda^r(n, \ell) = (\Lambda^r \cup \Lambda^{r-2} \cup \cdots ) \cap A_{n}(\ell).$$
Then arguments similar to the ones used above give
\begin{thm} \label{main BMW}
Let $r > 0$.
\begin{enumerate}
\item Suppose $\ell$ is odd. Let $n \in \{1, 2, \cdots , (\ell - 1)/2\}$ and $\lambda \in \overline \Lambda^r(n, \ell )$. Then the dimension of the simple $BMW_r(-q^{2n+1}, q)$-module $D_{BMW_r(-q^{2n+1}, q)}(\lambda)$ is recursively determined by
$$ \dim D_{BMW_r(-q^{2n+1}, q)}(\lambda) = \sum_{i: \lambda \pm \epsilon_i \in \overline \Lambda^{(r-1)}(n, \ell)} \dim D_{BMW_{r-1}(-q^{2n+1}, q)}(\lambda \pm \epsilon_i).$$
\item Suppose $\ell$ is even. Let $n \in \{1, 2, \cdots , (\ell - 4)/2\}$ and $\lambda \in \overline \Lambda^r(n, \ell )$. Then the dimension of the simple $BMW_r(-q^{2n+1}, q)$-module $D_{BMW_r(-q^{2n+1}, q)}(\lambda)$ is recursively determined by
$$ \dim D_{BMW_r(-q^{2n+1}, q)}(\lambda) = \sum_{i: \lambda \pm \epsilon_i \in \overline \Lambda^{(r-1)}(n, \ell)} \dim D_{BMW_{r-1}(-q^{2n+1}, q)}(\lambda \pm \epsilon_i).$$
\end{enumerate}
\end{thm}
\begin{examplecounter}
We shall illustrate Theorem \ref{main BMW} in the case where $\ell$ is even (the odd case is equivalent to the Brauer case in Section \ref{Brauer}). So we take $\ell = 10$. Then the relevant values of $n$ are $1, 2$ and $3$. The weight set $\overline \Lambda^r(1,10)$ contains $2$ elements $\lambda^1(r), \lambda^2(r)$ (except for $r =1$), namely $ (2), (0)$, when $ r$ is even, and $ (3), (1)$, when $r$ is odd. Likewise, $\overline \Lambda^r(2,10)$ contains $2$ elements when $r$ is odd (except for $r=1$) and $4$ elements, when $r$ is even (except for $r = 2$). We denote these weights $\mu^1(r), \mu^2(r), \mu^3(r), \mu^4(r)$. They are $(2,2), (2,0), (1,1), (0,0)$, when $ r$ is even, and $ (2,1), (1,0)$, when $r$ is odd. Finally, $\overline \Lambda^r(3,10)$ consists of $2$ elements $\nu^1(r), \nu^2(r)$, namely $(1,1,0), (0,0,0)$, when $r$ is even, and $(1,1,1), (1,0,0)$, when $r$ is odd (except $r=1$).
In Table 6 we have in row $r$ listed (in the order given above) the dimensions of the simple modules for $BMW_r(-q^{2n+1}, q)$ for $r \leq 10$. These numbers are computed recursively using Theorem \ref{main BMW}.
\end{examplecounter}
\centerline{{ \it Table 6. Dimensions of simple modules for $BMW_r(-q^{2n+1}, q)$ when $\ell = 10$ and $n = 1, 2, 3$.}}
\vskip .5cm
\centerline {
\begin{tabular}{ r| c c |c c c c |c c| c }
&$n=1$&&&$n=2$&&&$n=3$& \\ \hline
r & $\lambda^1(r)$ & $\lambda^2(r)$ & $\mu^1(r)$ & $\mu^2(r)$ & $\mu^3(r)$& $\mu^4(r)$ & $\nu^1(r)$ & $ \nu^2(r)$ \\ \hline
1 & & 1& & 1 & &&&1 & \\
2 & 1 & 1 & 1& 1 & 1 & 1 & 1 & 1 \\
3 & 1&2& 2& 3& & & 1 & 2 \\
4 & 3 &2 & 2& 5 & 5 &3& 3& 2 \\
5 &3 & 5 & 12 & 13 & & &3 & 5\\
6 & 8 & 5& 12& 25&25 &13&8 & 5 \\
7 &8 &13 &62 &63&& & 8 & 13\\
8 & 21& 13&62 &125 & 125 &63& 21 & 13\\
9 & 21 &44&312&313 && & 21 & 34\\
10 & 65 &44&312&625 & 625 & 313&55 & 34\\
\end{tabular}}
\section{Supplementary Material: An ionic impurity in a Bose-Einstein condensate at sub-microkelvin temperatures}
\subsection{Diamagnetic shift for high $n$ Rydberg states \\and electric field compensation}
For the data shown in Fig.~2 of the main article, the position and width of the bare Rydberg transition ($\delta=0$) is calibrated in a dilute thermal sample. For sufficiently long excitation pulses, the linewidth is limited by electric stray fields. Careful electric field compensation allows us to achieve Gaussian widths of the bare Rydberg transition which are $<$\unit{1}{MHz} for $n$ up to $175$ and $<$\unit{3}{MHz} for $n=190$. We have verified that this broadening does not significantly affect our calculated spectra.
Due to the magnetic field ramp applied for isolating the micro-BEC from the parent condensate, the magnetic offset field present during calibration differs from the field at which the spectra in Fig.~2 have been taken. In the presented spectra, the resulting diamagnetic lineshifts are calibrated and corrected for.
\begin{figure}[!b]
\centering
\includegraphics[width=\columnwidth]{./FigSupplement1.pdf}
\caption{
Rydberg spectra for the $160S_{1/2}$ Rydberg state at varying magnetic offset fields $B$. Zero detuning is referenced to the atomic Rydberg resonance at zero magnetic field. The red line shows a fit of Eq.~\ref{Eq:Diamagnetism} to the extracted line centers.
}
\label{fig:Diamag}
\end{figure}
Specifically, for a Rydberg atom in a magnetic field of strength $n^4B>1$, the diamagnetic interaction becomes stronger than the linear Zeeman term. As we investigate Rydberg atoms with $n>150$, this regime is entered already at a relatively small magnetic field $B<$ \unit{10}{G}. For $nS$-states in a sufficiently low magnetic field $B$ oriented along the $z$-axis the diamagnetic energy shift of the Rydberg level is given by \cite{Gallagher1994MAT}
\begin{equation}
\Delta E_{dia} = \frac{B^2}{8} \bra{nL_1m_{L_1}}r^2\sin^2(\theta)\ket{nL_1m_{L_1}},
\label{Eq:Diamagnetism}
\end{equation}
which scales as $n^4$. Here, $r$ is the radial position of the Ryd\-berg electron relative to the ionic core and $\theta$ its angle with respect to the $z$-axis. The Rydberg state is described by the principal quantum number $n$, the orbital quantum number $L_1$ and its projection onto the $z$-axis $m_{L_1}$.
Our atomic sample is prepared in the $\ket{5S_{1/2},F=2,m_F=2}$ state (Land\'{e} factor $g_F=1/2$), which features the same linear Zeeman shift as the $\ket{nS_{1/2},m_J=1/2}$ Rydberg states (Land\'{e} factor $g_J=2$). Hence, for the optical transition, the linear Zeeman effect cancels and only the diamagnetic term remains. To quantify the diamagnetic shift, Rydberg spectra with $n\geq127$ have been measured for varying magnetic offset fields of our QUIC trap in a dilute sample. An exemplary dataset for the $160S_{1/2}$ state is shown in Fig.~\ref{fig:Diamag}. From a fit to the data based on Eq.~\ref{Eq:Diamagnetism}, we obtain a correction of \unit{11.8}{MHz} for the magnetic field ($B=$ \unit{7.73}{G}) present during Rydberg spectroscopy in the micro-BEC. The same procedure has been applied to the data for $n=127$ and $n=190$ in Fig.~2, resulting in shifts of \unit{2.9}{MHz} and \unit{22.1}{MHz}, respectively.
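The $n^4$ scaling of the diamagnetic shift can be checked with a back-of-envelope hydrogenic estimate. The Python sketch below uses the hydrogen expectation value $\langle r^2\rangle = n^{*2}(5n^{*2}+1)/2$ (atomic units) and $\langle \sin^2\theta\rangle = 2/3$ for $L=0$; the quantum defect value $\delta_s=3.13$ for Rb $nS$ states is our assumption, and the result is an order-of-magnitude estimate rather than the exact matrix element entering the fit.

```python
# Atomic units: 1 a.u. of magnetic field = 2.3505e5 T, 1 Hartree = 6.5797e15 Hz.
B_AU_PER_GAUSS = 1e-4 / 2.3505e5
HARTREE_TO_MHZ = 6.5797e15 / 1e6

def diamagnetic_shift_mhz(n, B_gauss, delta_s=3.13):
    """Hydrogenic estimate of Eq. (Diamagnetism) for an nS state:
    Delta E = (B^2/8) <r^2 sin^2 theta>, with <r^2> = n*^2 (5 n*^2 + 1)/2
    and <sin^2 theta> = 2/3 for L = 0.  delta_s is an assumed quantum defect."""
    n_star = n - delta_s
    r2 = n_star**2 * (5 * n_star**2 + 1) / 2          # hydrogenic <r^2>, a.u.
    B = B_gauss * B_AU_PER_GAUSS
    return (B**2 / 8) * (2 / 3) * r2 * HARTREE_TO_MHZ

print(round(diamagnetic_shift_mhz(160, 7.73), 1))  # same order as the measured 11.8 MHz
```

The estimate confirms that a field of a few gauss already produces MHz-scale shifts at $n\approx160$, consistent with the corrections quoted above.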
Note that for the data shown in Fig.~4 of the main article, the altered loading procedure of the micro-BEC results in a negligible change of the magnetic field strength, and consequently no significant diamagnetic shift.
\subsection{Measurement of the collisional lifetimes $\tau$}
For the measurement of the Rydberg atom's collisional lifetime, we apply state-selective field ionization, mainly following the procedure described in Ref.~\cite{Schlagmueller2016MAT}. Therein, it was observed that the Rydberg lifetime in a high density and ultracold environment is limited by two processes: $L$-changing collisions of the Rydberg electron and associative ionization forming Rb$_2^+$ molecular ions. For high principal quantum numbers ($n \gtrsim 90$) the first process dominates.
In order to discriminate the initial $S$-state from the high-$L$ product state, we exploit their different ionization thresholds. Specifically, $S$-states tend to ionize adiabatically with a threshold close to the classical limit of $1/(16n^4)$. In contrast, the high-$L$ states ionize diabatically at higher field strength $\sim 1/(9n^4)$ \cite{Gallagher1994MAT}. In the experiment, we wait for a variable delay time $t$ after Rydberg excitation, and subsequently apply a two-step ionization sequence. The first part ionizes predominantly $S$-states, while the second part ionizes all remaining Rydberg atoms, including high-$L$ states. The two pulses are sufficiently delayed in time, so that the ionization products can be distinguished by their arrival time on the detector. Additionally, associatively formed Rb$_2^+$ ions are distinguished via their time-of-flight due to their higher mass. We determine the fraction of detected $S$-states $p_{S}$ and the sum of measured high-$L$ and Rb$_2^+$ signal ($1-p_{S}$). This routine is repeated for increasing ionization delay times $t$.
An exemplary dataset for $n=160$ is shown in Fig.~\ref{fig:Lifetime}(a). We extract the Rydberg lifetime by fitting an exponential decay according to
\begin{equation}
p_{S}(t) = (1-c)+c\cdot \exp\left(-\frac{t-t_0}{\tau}\right).
\label{eq:Lifetimes}
\end{equation}
Here, $c$ and $t_0$ are constants accounting for finite discriminability and pulse lengths.
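Extracting $\tau$ from data of this form is a standard fitting task. The Python sketch below generates synthetic, noiseless data from Eq.~(\ref{eq:Lifetimes}) with made-up parameter values (not the paper's) and recovers $\tau$ by linearizing the model once the plateau $1-c$ is known; in practice one would fit all three parameters to the measured $p_S(t)$, e.g. with a nonlinear least-squares routine.

```python
import numpy as np

def ps_model(t, c, t0, tau):
    """Eq. (Lifetimes): surviving S-state fraction after delay t."""
    return (1 - c) + c * np.exp(-(t - t0) / tau)

# Synthetic "measurement" with assumed parameters (illustrative values only).
t = np.linspace(0.5, 40.0, 60)                # delay times, arbitrary units
c_true, t0_true, tau_true = 0.8, 0.5, 8.0
p = ps_model(t, c_true, t0_true, tau_true)

# With the plateau (1 - c) known from long delays, the model linearizes:
# log(p - (1 - c)) = log(c) + t0/tau - t/tau, so a straight-line fit gives tau.
plateau = 1 - c_true                          # in practice: mean of the tail
slope, intercept = np.polyfit(t, np.log(p - plateau), 1)
tau_fit = -1.0 / slope
print(round(tau_fit, 3))  # recovers tau_true on noiseless data
```

On noisy data the linearization amplifies errors near the plateau, which is why a direct nonlinear fit of Eq.~(\ref{eq:Lifetimes}) is preferable there.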
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{./FigSupplement2.pdf}
\caption{
Fraction of $S$-states $p_{S}$ (blue symbols) ionized by the first field pulse (\unit{0.85}{V/cm}) and fraction of high-$L$ states (red symbols) ionized by the second field pulse (\unit{3.6}{V/cm}) of the state-selective ionization sequence as a function of delay time $t$. The data is taken for the $160S$ state at a detuning $\delta=$ \unit{42.2}{MHz}, corresponding to the line center of the spectrum in Fig.~2. Statistical error bars are smaller than the symbol size. The blue solid line is a fit to the data based on Eq.~\ref{eq:Lifetimes}.
}
\label{fig:Lifetime}
\end{figure}
Note that for the Rydberg spectra, presented in Fig.~2 and 4 of the main article, the ionization fields are chosen large enough to ionize both, $S$-states and high-$L$ states, and are switched on after a fixed delay time of \unit{200}{ns}.
\subsection{Numerical calculation of the potential energy curves $U$}
For the calculation of the Born-Oppenheimer potential energy curves $U_{i,e}$, the matrix elements of the Hamiltonian
\begin{align}
\hat{H} = -\frac{C_4}{2\mathbf{R}^4} + \hat{H}_0 & + 2\pi a_s(k)\ \delta^3(\mathbf{r}-\mathbf{R}) \nonumber \\
&+6\pi a_p(k)\delta^3(\mathbf{r}-\mathbf{R})\overleftarrow{\nabla} \cdot \overrightarrow{\nabla}
\label{eq:Hamiltonian}
\end{align}
are evaluated for fixed values of $R$ in a basis set $\ket{n,L_1,J_1,m_{J_1};m_{S_2}}$ \cite{Greene2000MAT,Fabrikant2002MAT,Hamilton2002MAT}. Here, $\hat{H}_0$ denotes the Hamiltonian of the unperturbed Rydberg electron including fine structure coupling between the orbital angular momentum $\hat{\mathbf{L}}_1$ and the spin angular momentum $\hat{\mathbf{S}}_1$ using published quantum defects \cite{Mack2011MAT,Li2003MAT,Han2006MAT}. The Rydberg electron states are specified by their principal quantum number $n$, their orbital angular momentum $L_1$, and their total spin-orbit coupled angular momentum $J_1$ with its projection $m_{J_1}$ along $\mathbf{z}$. The projection of the electronic spin of the ground-state perturber is denoted with $m_{S_2}$. Note that for the spin configuration studied in this work, there is no coupling to the singlet electron-neutral scattering channel and consequently, the nuclear spin of the ground-state perturber does not play a role. The energy-dependent scattering lengths relate to the corresponding phase shifts $\delta_{s,p}(k)$ via $a_s(k) = -\tan(\delta_s(k))/k$ and $a_p(k) = -\tan(\delta_p(k))/k^3$ and are taken from Ref.~\cite{Fabrikant1986MAT}. For the $R$-dependent electron momentum $k$ in the scattering process, we use the semi-classical expression for the kinetic energy of the Rydberg electron $k(R)^2/2 = -1/(2 n^{\star 2}) + 1/R$, where $ n^\star$ is the effective principal quantum number of the Rydberg level of interest \cite{Greene2000MAT}. In the first term of Eq.~\ref{eq:Hamiltonian}, which accounts for the ion-atom interaction, we use the measured atomic polarizability $\alpha=C_4=$\unit{318.8}{a.u.} reported in Ref.~\cite{Holmgren2010MAT}.
The potential energy curves $U_{i,e}$ are finally obtained by full diagonalization of $\hat{H}$ for each value of $R$ on a finite basis set which spans two hydrogenic manifolds. The values of $R$ are spaced quadratically. $U_{i,e}$ is computed for about $n \times 18$ values of $R$ to provide an adequate resolution for the potential wells. For the calculation of $U_{e}$ we omit the first term in Eq.~\ref{eq:Hamiltonian}.
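The grid construction and per-$R$ diagonalization loop can be sketched as follows. The $2\times2$ \texttt{toy\_H} below is a made-up stand-in for the actual Hamiltonian matrix over two hydrogenic manifolds, and all numerical values except $C_4$ are illustrative; only the structure (quadratically spaced radii, one full diagonalization per $R$) follows the text.

```python
import numpy as np

def quadratic_grid(r_min, r_max, num):
    """Quadratically spaced radii: dense at small R, where the potential varies fast."""
    s = np.linspace(np.sqrt(r_min), np.sqrt(r_max), num)
    return s**2

n = 160
R = quadratic_grid(500.0, 2 * n**2, 18 * n)   # ~ n x 18 points, as in the text

# Toy stand-in for H(R): a 2x2 model combining the -C4/(2 R^4) ion term with an
# arbitrary R-dependent coupling.  The real calculation diagonalizes H over a
# basis spanning two hydrogenic manifolds at each R.
C4 = 318.8                                    # measured polarizability, a.u.
def toy_H(r):
    ion = -C4 / (2 * r**4)
    coupling = 1e-3 * np.exp(-r / n**2)
    return np.array([[ion, coupling], [coupling, ion + 1e-4]])

curves = np.array([np.linalg.eigh(toy_H(r))[0] for r in R])  # toy curves U_i(R)
```

Since `eigh` returns eigenvalues in ascending order, each column of `curves` traces one adiabatic potential curve across the grid.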
Note that spin-orbit interaction in the electron-atom $p$-wave scattering channel is not included in the Hamiltonian Eq.~\ref{eq:Hamiltonian}. For the spin configuration studied in this work, the main effects of spin-orbit coupling are slight shifts of the divergence associated with the $p$-wave shape resonance and small modifications of the Born-Oppenheimer potential close by. We have checked, for the example of $n=160$, that including spin-orbit coupling for $p$-wave scattering does not significantly affect the simulated excitation spectrum.
\subsection{Monte-Carlo modeling of the Rydberg spectra}
For modeling the lineshape of the Rydberg spectra, we use a Monte-Carlo sampling method, which incorporates the density distribution of the condensate, the intensity profile of the excitation laser beams, and the Fourier-limited Rydberg excitation bandwidth. We start from a random configuration of atoms, reflecting the Thomas-Fermi density distribution of our micro-BEC using measured atom numbers and trap frequencies. Typical Thomas-Fermi radii are about \unit{1.0}{$\upmu$m} and \unit{9.2}{$\upmu$m} in the radial and longitudinal direction, respectively. The atoms are treated as point-like particles with infinite mass, neglecting any excitation dynamics. Furthermore, we assume uncorrelated atoms, as expected for a weakly interacting BEC. One of the atoms is designated to carry the single Rydberg excitation. For each of the remaining atoms, the potential energy $u_i$ is extracted from the interaction potential $U$ according to its distance $R$ to the Rydberg ionic core. The sum over the $u_i$ delivers the energy shift $U_n$ for a single Monte-Carlo configuration. Finally, the spectrum is obtained from the contribution of all $U_n$, weighted by the local excitation probability of the corresponding Rydberg atom and convoluted with a Lorentzian profile reflecting the Rydberg excitation bandwidth. Note that the beam profile of the excitation laser has minor influence on the spectral shape due to the small sample size. The area below the spectrum is finally normalized for comparison to the experimental data.
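The sampling procedure described above can be condensed into a short sketch. The Python code below is our illustration, not the authors' implementation: the pair potential is a toy attractive curve standing in for the computed $U(R)$, all numbers are arbitrary, and the weighting by the local excitation probability (laser beam profile) is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_thomas_fermi(n_atoms, radii):
    """Rejection-sample positions from a Thomas-Fermi density
    n(x) ~ max(0, 1 - sum_i (x_i/R_i)^2) inside the ellipsoid with radii R_i."""
    pts = []
    while len(pts) < n_atoms:
        x = rng.uniform(-1, 1, size=3)
        if rng.uniform() < max(0.0, 1 - np.sum(x**2)):
            pts.append(x * radii)
    return np.array(pts)

def mc_spectrum(U, n_atoms, radii, detunings, gamma, n_samples=200):
    """Monte-Carlo line shape: for each configuration, sum the pair shifts
    u_i = U(R_i) over all perturbers, then convolve the resulting stick
    spectrum with a Lorentzian of half-width gamma (excitation bandwidth)."""
    shifts = []
    for _ in range(n_samples):
        pos = sample_thomas_fermi(n_atoms, radii)
        core = pos[0]                                 # atom carrying the excitation
        dist = np.linalg.norm(pos[1:] - core, axis=1)
        shifts.append(np.sum(U(dist)))                # U_n for this configuration
    shifts = np.array(shifts)
    spec = np.sum(gamma / np.pi / ((detunings[:, None] - shifts)**2 + gamma**2), axis=1)
    spec /= np.sum(spec) * (detunings[1] - detunings[0])   # normalize the area
    return spec

# Toy attractive pair potential standing in for the computed curve U(R):
U_toy = lambda R: -1.0 / (1.0 + R**4)
delta = np.linspace(-30, 5, 300)
spec = mc_spectrum(U_toy, 50, np.array([1.0, 1.0, 9.2]), delta, gamma=0.2)
```

With an attractive toy potential the simulated line acquires the expected red-shifted tail, growing with the sampled density.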
Note that for the high-$n$ Rydberg states studied here, the averaged modification of the Rydberg $S$-orbit by one perturber is very small. For example, at $n=160$ it amounts to $\int (1-p_s)\rho\ d^3r\approx 2\times 10^{-5}$, with $p_s$ being the $S$-character and $\rho$ the normalized BEC density distribution. The $S$-character is obtained from the diagonalization procedure that yields the Born-Oppenheimer potential energy curve. The fact that this modification is small allows us to employ the pairwise interaction potential for our numerical simulations.
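The smallness of this density-weighted average can be illustrated with a quick Monte-Carlo estimate. The functional form of $p_s$ below is a hypothetical placeholder (the actual $S$-character comes from the diagonalization) and the geometry is simplified to an isotropic profile; only the structure of the calculation is meaningful:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical S-character: p_s dips slightly within ~0.5 um of the ionic core.
def p_s(r):
    return 1.0 - 1e-4 * np.exp(-r / 0.5)   # r in micrometres, toy model

# Since rho is normalized, int (1 - p_s) rho d^3r equals the mean of (1 - p_s)
# over positions drawn from rho.  Sample an isotropic Thomas-Fermi profile of
# radius R_tf by rejection sampling (illustrative geometry).
R_tf = 2.0
pts = []
while len(pts) < 5000:
    p = rng.uniform(-1.0, 1.0, size=3)
    u = p @ p
    if u < 1.0 and rng.uniform() < 1.0 - u:
        pts.append(p * R_tf)
r = np.linalg.norm(np.array(pts), axis=1)
estimate = np.mean(1.0 - p_s(r))   # a tiny number, as in the text
```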
\section{Introduction} \label{sec:intro}
This paper, one of a series associated with the 2015 release of data from the
\textit{Planck}\footnote{\textit{Planck}\ (\url{http://www.esa.int/Planck}) is a
project of the European Space Agency (ESA) with instruments provided by two scientific consortia funded by ESA
member states and led by Principal Investigators from France and Italy, telescope reflectors provided through a
collaboration between ESA and a scientific consortium led and funded by Denmark, and additional contributions from
NASA (USA).}
mission \citep{planck2014-a01},
outlines the construction of the first full-sky multi-frequency catalogue of non-thermal (i.e. synchrotron-dominated) sources detected in the \textit{Planck}\ full-mission temperature maps. The main
purpose of this catalogue is to provide a full-sky sample of extragalactic radio sources (ERSs), of which the majority
show a flat ($\alpha\,{\simeq}\,0$) observed spectral emission distribution\footnote{Here we follow the convention $S(\nu)\propto \nu^{\alpha}$, with $S(\nu)$
the observed flux density at frequency $\nu$ and $\alpha$ the spectral index.} up to sub-mm/far-IR wavelengths.
It thus extends to higher frequencies than had been achieved with large-area ground-based surveys (NVSS, FIRST, GB6, Parkes,
AT20G, etc.), which are currently limited to centimetre wavelengths. Unlike previous \textit{Planck}\ catalogues that were
constructed on a frequency-by-frequency basis, this is a fully multi-frequency catalogue of point sources obtained
through a multi-band filtering technique, which uses data from both the
Low Frequency Instrument \cite[LFI,][]{planck2014-a07} and
High Frequency Instrument \citep[HFI,][]{planck2014-a09}.
In addition to the aforementioned ERSs, this catalogue also includes
several thousand candidate compact sources of Galactic origin.
As extensively discussed in previous papers that studied the ERS populations observed at millimetre wavelengths
\citep[e.g.][]{zotti05,zotti10,tucci11,planck2011-6.1,planck2011-6.2,planck2011-6.3a,planck2012-VII,zotti15,planck2016-XLV} the majority of these extragalactic sources are
usually classified as flat-spectrum radio quasars (FSRQ) and BL Lac objects, collectively called
blazars,
and only a few of them are classified as inverted spectrum or
high-frequency peaker (HFP) radio sources.\footnote{It is widely agreed that HFP sources correspond to the early
stages of evolution of powerful radio sources, when the radio emitting region grows and expands in the interstellar
medium of the host galaxy before becoming an extended radio source \citep[see e.g.][]{odea98,zotti05}.}
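In the convention $S(\nu)\propto\nu^{\alpha}$ adopted above, the spectral index follows directly from two flux density measurements; a minimal helper with illustrative values:

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """Spectral index alpha in the convention S(nu) proportional to nu**alpha,
    from flux densities s1, s2 measured at frequencies nu1, nu2."""
    return math.log(s2 / s1) / math.log(nu2 / nu1)

# A source with equal flux density at 30 and 143 GHz is flat (alpha = 0);
# a spectrum rising between the two bands gives alpha > 0 ("inverted").
flat = spectral_index(1.0, 30.0, 1.0, 143.0)
inverted = spectral_index(1.0, 30.0, 2.0, 143.0)
```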
Blazars are a relatively rare class of active galactic nuclei (AGN) characterized by electromagnetic emission
over the entire energy spectrum, from the radio band to the most energetic gamma rays \citep[e.g.][]{planck2011-6.3b}. They are also
characterized by highly variable, non-thermal synchrotron emission with relatively high linear polarization at
GHz frequencies \citep{moore81} in which the beamed component dominates the observed emission
\citep{angel80}. Interestingly, observations of blazars at mm/sub-mm wavelengths often reveal the transition from
optically thick to optically thin radio emission in the most compact regions, i.e. they provide information on the maximum
self-absorption frequency and thus on the physical dimension, $r_{\rm M}$, of the self-absorbed (optically thick) core
region
\citep{konigl81,tucci11}. Their spectral energy distribution (SED) is characterized by two
broad peaks in $\nu L_{\nu}$: the first peak, attributed to Doppler-boosted synchrotron radiation, occurs at a
frequency $\nu_{\rm p}$ varying from $10^{12}$ to $10^{18}\,$Hz
\citep{niep06}; and
the second peak, attributed to inverse Compton scattering, occurs at high gamma ray energies, up to around $10^{25}\,$Hz
\citep[see e.g.][]{planck2011-6.3b}.
Because of this second emission peak, blazars constitute a numerous class of extragalactic gamma ray sources.
About $90\,\%$ of the firmly
identified extragalactic sources, and about $94\,\%$ of the associated
sources (i.e. of sources having a counterpart with a $> 80\,\%$
probability of being the real identification) in the Third Fermi
Large Area Telescope (LAT) catalogue, are blazars \citep{3LAC}.
Thus, knowledge of the blazar population
is also very relevant in high-energy astrophysics. Furthermore, because of their very broad spectral emission,
blazars are expected to contribute a substantial
fraction of the extragalactic background
and its fluctuations at both millimetre wavelengths and
very high energies, i.e. gamma rays.
For all these reasons, this new full-sky multi-frequency catalogue of non-thermal synchrotron ERSs -- aimed
at filling, albeit partially, the gap between samples selected
at mid/near-IR wavelengths
\citep[e.g.][]{WISEp}
and those collected from space observations at
very short wavelengths ({\it Compton}-EGRET, {\it Fermi}-LAT, Swift/BAT, etc.) -- will not only allow the identification of
new blazars to be included in multi-frequency catalogues
\citep[e.g.][]{massaro09,massaro15},
but
will also help future studies of the blazar phenomenon, of the physical processes occurring in the nuclear region of
this class of sources, and of their cosmological evolution.
The outline of the paper is as follows. In Sect.~\ref{sec:cat0} we describe the criteria we follow to select
radio source candidates at 30 and 143\,GHz by using a blind detection on \textit{Planck}\ maps filtered with the Mexican hat wavelet. In Sect.~\ref{sec:multifreq} we briefly review the multi-frequency detection method we use to construct our
catalogue and we give details about the implementation of this technique on the \textit{Planck}\ data. The mathematical description of the method is
deferred to Appendix~\ref{sec:appendix}, while the practicalities of the implementation of the multi-frequency detection technique for \textit{Planck}\ data are
discussed in Appendix~\ref{sec:practical}.
The catalogue is introduced in Sect.~\ref{sec:cat1}, where we also define the Bright Planck Multi-frequency Catalogue of
Non-thermal Sources ({{PCNTb}}) and the high-significance subsample ({{PCNThs}}).
The internal \textit{Planck}\ and the external validation of the
catalogues and a brief description of their statistical properties are discussed in Sect.~\ref{sec:valprop}.
The released catalogue is described in Sect.~\ref{sec:contents}.
Finally, we summarize our main
conclusions in Sect.~\ref{sec:conclusions}.
\section{The input \textit{Planck}\ sample} \label{sec:cat0}
Our list of source candidates is the union of those detected with ${\rm S/N}> 3$ in either the 30- or the 143-GHz
maps in total intensity made by \textit{Planck}. The maps used here are the full-mission ones and
cover the entire sky \citep{planck2014-a01}. We employ two frequencies to detect sources because our
multi-frequency analysis typically yields lower flux density detection limits for the same S/N than
a single-frequency approach (see Sect.~\ref{sec:multifreq}). Thus the completeness is improved, without
compromising reliability. This multi-frequency approach allows us to reduce the single-frequency threshold of ${\rm S/N}> 4$ used for
the PCCS2 \citep{planck2014-a35}, thus allowing additional candidates to enter the catalogue.
Our choice of two \textit{Planck}\ bands at 30 and 143\,GHz was made to combine the power of observations at low radio
frequency (where synchrotron emission is typically dominant in ERS spectra), with the low-noise and
minimum-foreground 143-GHz \textit{Planck}\ band. Note that ERSs still dominate the source counts at 143\,GHz, as displayed
and discussed, e.g. in figure~25 of the PCCS2 paper \citep{planck2014-a35} and also in figure~10 of
\citet{planck2012-VII}. Had we selected sources only at 30\,GHz, we would have risked losing
inverted spectrum sources, whose spectra can be rising well above 30\,GHz, as well as high-frequency peakers \citep[HFPs,
see for example ][]{HFP}, which display a very distinct spectral emission, peaking up to millimetre wavelengths.
Given the main purpose of this paper -- i.e. the selection of a full-sky sample of non-thermal ERSs -- the adopted
selection criteria help to ensure that we are not defining a sample/catalogue that is {\it biased} in origin against some
particular ERS population, detectable in principle by the \textit{Planck}\ full-sky surveys.
The pipeline used to produce the two initial lists of source candidates at 30 and 143\,GHz with ${\rm S/N}>3$ is the
same one used to produce the PCCS \citep{planck2013-p05} and PCCS2 \citep{planck2014-a35}
low frequency catalogues. For convenience, we use \textit{Planck}\ maps upgraded to a common resolution
parameter $N_{\rm side} = 2048$ \citep[using \texttt{HEALPix}, ][]{gorski2005}.
One of the characteristics of this pipeline
is a two-step analysis, first performing a blind detection in the full-sky map, and, second, repeating the analysis
in a non-blind fashion at the positions of the sources detected in the first step. The goal of this multi-step
analysis is to reduce the number of spurious detections introduced by the filtering approach in the borders of the cut-out
patches, and to measure the flux density of sources and background noise in an optimal way. In practice, in the first step
the sky is projected onto a sufficient number of overlapping square flat patches ($7\pdeg33 \times 7\pdeg33$, $128
\times 128$ pixels in size) such that, even after removing the sources detected near the edges of the patches, the full
sky is effectively analysed. Second, at the position of each remaining source candidate, a new patch (with
the same dimensions) is constructed, redoing the filtering and optimization of the Mexican hat wavelet scale, but this
time focusing on the centre of the image, re-assessing the S/N of the source. Sources that in this second step
do not meet the ${\rm S/N}>3$ criterion are rejected. Since the patches overlap in order to smoothly cover the entire sky \citep[see the details in][]{planck2013-p05,planck2014-a35}, multiple entries of the same
source can be obtained in overlapping regions of two or more sky patches.
These multiple entries are removed from the catalogue, keeping only the entry with the highest S/N in each case.
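The removal of multiple entries can be sketched as a greedy selection, brightest first; the flat-sky matching radius and the input arrays below are illustrative:

```python
import numpy as np

def deduplicate(lon, lat, snr, radius_deg):
    """Keep only the highest-S/N entry among detections that lie within
    radius_deg of each other (small-angle, flat-sky approximation)."""
    lon, lat, snr = map(np.asarray, (lon, lat, snr))
    kept = []
    for i in np.argsort(snr)[::-1]:          # brightest first
        near = any(np.hypot((lon[i] - lon[j]) * np.cos(np.radians(lat[j])),
                            lat[i] - lat[j]) < radius_deg for j in kept)
        if not near:
            kept.append(i)
    return sorted(kept)
```

For instance, two entries of the same source separated by much less than the matching radius collapse into the one with the higher S/N, while distinct sources are untouched.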
The \textit{Planck}\ 143-GHz channel has much better angular resolution than the 30-GHz channel. Therefore, it can happen that two or more objects in our \textit{Planck}\ sample, selected at 143\,GHz, are inside the same 30-GHz beam solid angle. We keep such multiple occurrences in our input sample and deal with them in a way that will be described in Sect.~\ref{sec:description} and Appendix~\ref{sec:blend}.
Finally, the main disadvantage of selecting sources at 143\,GHz is the possibility of unintentionally selecting Galactic (or extragalactic) cold thermal sources that are bright enough to pass our selection threshold.
We will try to quantify the impact of these sources in Sect.~\ref{sec:Galactic}.
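The blind step of the pipeline above, filtering a flat patch with a Mexican hat wavelet and thresholding in S/N, can be sketched as follows. We use the second member of the wavelet family, whose Fourier window is $\propto q^4 e^{-q^2R^2/2}$; the patch size, source, and wavelet scale are illustrative:

```python
import numpy as np

def mhw2_filter(patch, scale_pix):
    """Filter a flat-sky patch with the second member of the Mexican hat
    wavelet family, Fourier window q^4 * exp(-(q R)^2 / 2)."""
    n = patch.shape[0]
    q = np.fft.fftfreq(n)
    qx, qy = np.meshgrid(q, q, indexing="ij")
    q2 = (qx**2 + qy**2) * (2.0 * np.pi * scale_pix) ** 2
    window = q2**2 * np.exp(-q2 / 2.0)
    return np.fft.ifft2(np.fft.fft2(patch) * window).real

# Illustrative 128x128 patch with a single beam-sized source at the centre.
n = 128
y, x = np.mgrid[:n, :n]
patch = np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2.0 * 3.0**2))
filtered = mhw2_filter(patch, 3.0)
snr_map = filtered / filtered.std()   # threshold this map at S/N = 3 to detect
```

In the actual pipeline the wavelet scale is optimized patch by patch, and the noise level is estimated from the filtered patch itself.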
\section{Multi-frequency detection} \label{sec:multifreq}
The vast majority of component-separation methods that are typically used in CMB experiments take advantage of the \emph{spectral diversity} of the different astrophysical components (synchrotron, thermal dust, the CMB itself, etc.) as a means to disentangle the various signals that arrive at the detectors \citep[see for example][]{leach2008,planck2014-a11,planck2014-a12}. However, most of the techniques
that are used for the separation of diffuse components are not well suited for the detection of extragalactic compact sources, except for the particular case of galaxy clusters observed through the thermal Sunyaev-Zeldovich (tSZ) effect. Individual galaxies
leave their imprint on the microwave sky through an enormous variety of astrophysical mechanisms -- from active radio lobes to thermal dust emission -- so that, strictly speaking, each individual galaxy has its own unique spectral behaviour, making it impossible for methods that rely on spectral diversity alone to
solve for each individual spectral signature.
New methods, specifically tailored for compact sources, then become necessary. Most of the compact-source detection methods that have been proposed in the literature make use of \emph{scale diversity} instead of spectral diversity (that is, they focus on the sizes of the sources rather than on their colours). In most cases, the catalogues of extragalactic sources are extracted from CMB maps separately, one frequency channel at a time. The previously published \textit{Planck}\ catalogues \citep{planck2011-1.10,planck2013-p05,planck2014-a35} follow this single-frequency approach.
Methods that attempt to combine the spectral- and scale-diversity principles in order to
enhance the detectability of extragalactic compact sources were first introduced in the literature in the context of the study of the tSZ effect \citep{herr02,herr05,melin06,planck2011-5.1a,planck2013-p05a,planck2013-p05a-addendum,planck2014-a36} and were afterwards generalized to generic compact-source populations \citep{lanz10,lanz13}.
The matched matrix filters (MTXFs) approach was introduced in \cite{MTXFa} and further explored in \cite{MTXFb} as a solution to the problem of multi-frequency detection of extragalactic point sources when the frequency dependence of the sources is not known a priori.
The MTXFs have already been used inside the Planck Collaboration to validate the LFI part of the PCCS catalogue \citep{planck2013-p05} and we will use them in this paper. The main aspects of MTXF theory can be found in \cite{MTXFa} and \citet{MTXFb}, and are also reviewed in the general context of point source detection in CMB experiments in \cite{herranz10} and \cite{multireview}; however, for clarity, we summarize the main
mathematical aspects of MTXF theory in Appendix~\ref{sec:appendix}. The practical details of the implementation of the MTXFs for \textit{Planck}\, data are described in Appendix~\ref{sec:practical}. In particular, we discuss how we obtain the photometry of the sources in section~\ref{sec:photometry} of Appendix~\ref{sec:practical}.
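For intuition, the single-frequency matched filter, the building block that the MTXF generalizes to a matrix of filters acting on several channels, can be sketched in Fourier space. The Gaussian beam profile, the white-noise power spectrum, and the patch size below are illustrative assumptions:

```python
import numpy as np

def matched_filter_map(patch, profile, noise_power):
    """Single-frequency matched filter psi(q) proportional to conj(tau(q))/P(q),
    normalized so that the filtered map evaluated at a source position is an
    unbiased estimate of the source amplitude."""
    tau = np.fft.fft2(np.fft.ifftshift(profile))    # profile moved to pixel (0, 0)
    psi = np.conj(tau) / noise_power
    psi /= np.mean(np.abs(tau) ** 2 / noise_power)  # unbiasedness normalization
    return np.fft.ifft2(np.fft.fft2(patch) * psi).real

# Illustrative check: a 5 Jy point source with a Gaussian beam on a noiseless
# patch (white-noise power spectrum assumed in the filter).
n = 64
y, x = np.mgrid[:n, :n]
beam = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2.0 * 3.0**2))
sky = 5.0 * beam
estimate = matched_filter_map(sky, beam, np.ones((n, n)))[32, 32]  # ~ 5.0
```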
\section{The Planck Multi-frequency Catalogue of Non-thermal Sources} \label{sec:cat1}
\subsection{General description} \label{sec:description}
We provide a Planck Multi-frequency Catalogue of Non-thermal Sources ({{PCNT}})
containing the 29\,400 candidates selected in our input catalogue (as described in Sect.~\ref{sec:cat0}); additionally, we flag
the Bright Planck Multi-frequency Catalogue of Non-thermal Sources ({{PCNTb}}),
a subsample comprising 1424 bright compact sources detected with ${\rm S/N}\geq4.0$ in both the 30- and 143-GHz \textit{Planck}\ channels,
described in Sect.~\ref{sec:bright}. Finally, we flag a high-significance sample of 151 objects that are
detected with the MTXF at ${\rm S/N}\geq4.0$ in {\it all\/} nine \textit{Planck}\ frequency channels. We indicate this high-significance subsample with the acronym {{PCNThs}}, and we discuss it in Sect.~\ref{sec:highsig}. Figure~\ref{fig:positions_PMNT} shows the Galactic coordinates of the 29\,400 candidates in our catalogue. The boundary of the \textit{Planck}\ $70\,\%$ Galactic mask, \GAL070,\footnote{\GAL070\ is one of the \textit{Planck}\
2015 Galactic plane masks, with no apodization used for CMB power spectrum estimation. The whole set of masks consists of \GAL020, \GAL040, \GAL060, \GAL070, \GAL080, \GAL090, \GAL097, and \GAL099, where the numbers represent the percentage of the sky that is left unmasked. These masks can be found online at the Planck Legacy Archive, \url{http://pla.esac.esa.int/pla}. See the \textit{Planck}\ Explanatory Supplement for a further description of the 2015 data release \citep{planck2014-ES}.} is superimposed in grey as a visual aid. Apart from an evident overdensity of points around the Galactic plane and in several other regions of the sky, such as the Magellanic Clouds, the distribution of targets looks very uniform across the sky.
Table~\ref{tb:4sigmas} indicates the number of $4\,\sigma$ detections, that is, sources detected with ${\rm S/N}\geq4.0$,
at Galactic latitude $|b| \geq 30^{\circ}$, in the {{PCNT}}, compared to the PCCS2. The $|b| \geq 30^{\circ}$
cut was set for comparison reasons, because the PCCS2 applies different Galactic masking criteria for the
\textit{Planck}\ LFI and HFI. The comparison is only meaningful between 30 and 143\,GHz because, while a $4\,\sigma$
threshold is always applied at the LFI frequencies, the PCCS2 does not always apply one in the HFI channels:
the lower threshold is approximately $4\,\sigma$ at 100 and
143\,GHz, $4.6\,\sigma$ at 217\,GHz, and higher at higher frequencies. For this reason, and also because the
{{PCNT}} is a non-blind catalogue selected at 30 and 143\,GHz, Table~\ref{tb:4sigmas} gives a fair comparison between
the two catalogues only up to 143\,GHz. The {{PCNT}} catalogue presents more $4\,\sigma$ detections than the PCCS2 up
to 143\,GHz. At higher frequencies the PCCS2 has more $4\,\sigma$ detections. This is expected because the
primary selection of the {{PCNT}} catalogue was performed at 30 and 143\,GHz and, therefore, we are missing the
bulk of thermal-spectrum objects that dominate the source number counts at the higher HFI frequencies.
The gain in the
number of detections with respect to the PCCS2 is particularly evident at 30, 44, and 70\,GHz. This is directly
related to the gain in S/N due to the MTXF filtering method,
but it is also the result of the de-blending
discussed in Sect.~\ref{sec:blend}.
Our primary sample has
been constructed by selecting source candidates in two \textit{Planck}\ channels with different angular resolution. The
FWHM at 143\,GHz is $7\parcm3$, whereas it is $32\parcm3$ at 30\,GHz.
As noted in Appendix~\ref{sec:blend}, it is possible that two or more distinct sources detected at 143\,GHz, with an angular
separation smaller than $30^{\scriptstyle\prime}$, could be seen as a single bright spot at 30\,GHz.
However, since the effective angular resolution of our {\textit{Planck}\ input catalogue} is equal to the \textit{Planck}\ FWHM at 143\,GHz,
we are able to separate
sources at low radio frequencies that would not be resolved in a single frequency catalogue.
It is important to emphasize that single-frequency catalogues (such as the PCCS and PCCS2) {\it
cannot\/} resolve sources below the angular resolution limit of each separate channel, whereas the {{PCNT}} is able
to resolve sources at the 143-GHz angular resolution limit, even for the LFI channels.
A more direct comparison between the two catalogues is shown in Table~\ref{tb:sources_1Jy}, where the number of detections above 1\,Jy and with Galactic latitude $|b| \geq 30\ifmmode^\circ\else$^\circ$\fi$ are listed for the two catalogues. Up to 143\,GHz the two catalogues contain essentially the same bright sources (the small discrepancies in the numbers of sources are compatible with the statistical random fluctuations that appear between catalogues with different flux density error levels).
For higher frequencies, the comparison is not meaningful because the {{PCNT}} is a non-blind catalogue that focuses only on the positions of our 30--143\,GHz input sample.
\begin{table}[htbp!]
\newdimen\tblskip \tblskip=5pt
\caption[]{Number of detections at Galactic latitude $|b| \geq 30^{\circ}$ in the {{PCNT}} (a $4\,\sigma$ threshold has been applied) and PCCS2 catalogues (the lower threshold in the PCCS2 is $4\,\sigma$ for the channels between 30 and 143\,GHz, and higher for the channels between 217 and 857\,GHz).}
\label{tb:4sigmas}
\vskip -3mm
\footnotesize
\setbox\tablebox=\vbox{
\newdimen\digitwidth
\setbox0=\hbox{\rm 0}
\digitwidth=\wd0
\catcode`*=\active
\def*{\kern\digitwidth}
\newdimen\signwidth
\setbox0=\hbox{+}
\signwidth=\wd0
\catcode`!=\active
\def!{\kern\signwidth}
\newdimen\pointwidth
\setbox0=\hbox{.}
\pointwidth=\wd0
\catcode`?=\active
\def?{\kern\pointwidth}
\halign{\hbox to 2.0cm{#\leaders\hbox to 5pt{\hss.\hss}\hfil}\tabskip 1em&
\hfil#\hfil& \hfil#\hfil\tabskip 0em\cr
\noalign{\vskip 3pt\hrule \vskip 1.5pt \hrule \vskip 5pt}
\noalign{\vskip -1pt}
\omit\hfil$\nu$ [GHz]\hfil& {{PCNT}}& PCCS2\cr
\noalign{\vskip 3pt\hrule\vskip 5pt}
*30& 1243& *745\cr
*44& *814& *367\cr
*70& *959& *504\cr
100& 1440& *986\cr
143& 1430& 1283\cr
217& 1138& 1591\cr
353& *653& 1195\cr
545& *496& 1482\cr
857& *531& 4253\cr
\noalign{\vskip 3pt\hrule\vskip 5pt}}}
\endPlancktable
\end{table}
\begin{table}[htbp!]
\newdimen\tblskip \tblskip=5pt
\caption[]{Number of detections with flux density $S\geq1\,$Jy, at Galactic latitude $|b| \geq 30^{\circ}$, in the {{PCNT}} and PCCS2 catalogues. }
\label{tb:sources_1Jy}
\vskip -3mm
\footnotesize
\setbox\tablebox=\vbox{
\newdimen\digitwidth
\setbox0=\hbox{\rm 0}
\digitwidth=\wd0
\catcode`*=\active
\def*{\kern\digitwidth}
\newdimen\signwidth
\setbox0=\hbox{+}
\signwidth=\wd0
\catcode`!=\active
\def!{\kern\signwidth}
\newdimen\pointwidth
\setbox0=\hbox{.}
\pointwidth=\wd0
\catcode`?=\active
\def?{\kern\pointwidth}
\halign{\hbox to 2.0cm{#\leaders\hbox to 5pt{\hss.\hss}\hfil}\tabskip 1em&
\hfil#\hfil& \hfil#\hfil\tabskip 0em\cr
\noalign{\vskip 3pt\hrule \vskip 1.5pt \hrule \vskip 5pt}
\noalign{\vskip -1pt}
\omit\hfil$\nu$ [GHz]\hfil& {{PCNT}}& PCCS2\cr
\noalign{\vskip 3pt\hrule\vskip 5pt}
*30& *169& *151\cr
*44& *152& *143\cr
*70& *105& *105\cr
100& **86& **82\cr
143& **76& **71\cr
217& **83& **55\cr
353& *164& **82\cr
545& *304& *570\cr
857& 1069& 2685\cr
\noalign{\vskip 3pt\hrule\vskip 5pt}}}
\endPlancktable
\end{table}
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{Figures/figure1.pdf}
\caption{Position on the sky (Galactic coordinates) of the {{PCNT}} sources. The boundary of the \textit{Planck}\ $70\,\%$ Galactic mask, \GAL070, is superimposed in grey. }
\label{fig:positions_PMNT}
\end{figure*}
\subsection{The Bright Planck Multi-frequency Catalogue of Non-thermal Sources} \label{sec:bright}
The whole {{PCNT}} catalogue contains as many entries (29\,400) as the
input \textit{Planck}\ sample described in Sect.~\ref{sec:cat0}.
Even with the multi-frequency filtering many of these targets are detected with a low S/N in the LFI channels.
We therefore define a Bright Planck Multi-frequency Catalogue of Non-thermal Sources ({{PCNTb}}),
containing
those sources that have ${\rm S/N}\geq 4.0$ at both 30 and 143\,GHz simultaneously. This criterion guarantees that the members of the resulting subsample show strong radio emission. It also favours the selection of flat-spectrum radio sources. There are 1424 sources in the {{PCNTb}}, among which 1146 are located outside the \textit{Planck}\ \GAL070\ Galactic mask.
As will be discussed in Sect.~\ref{sec:BZCAT}, many of the sources outside the Galactic mask in the {{PCNTb}} are identified as bright blazars.
\subsection{The high-significance subsample} \label{sec:highsig}
There are 151 sources in our catalogue that are detected at $\mathrm{S/N}\geq 4$ for all nine \textit{Planck}\ frequencies simultaneously. We call this the high-significance subsample ({{PCNThs}}).
The extragalactic sources are either bright sources with a flat spectrum continuing up to very high frequencies (normally blazars and FSRQs), or local galaxies that show both synchrotron and thermal emission, or sometimes Galactic sources that are bright enough to be detected across the entire \textit{Planck}\ frequency range.
The positions on the sky of the {{PCNThs}} sources are shown in Fig.~\ref{fig:positions_HSS}.
As can be seen from the figure, many {{PCNThs}} sources (76 out of 151) lie inside the \textit{Planck}\ \GAL070\ Galactic mask. This was expected, since the criterion of setting a $4\,\sigma$ threshold at all frequencies selects the brightest sources on the sky, and most of these ultra-bright sources lie in the Galactic plane.
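Both flags amount to simple boolean selections on the per-channel S/N values. A sketch with a hypothetical S/N table (one row per source, columns ordered 30--857\,GHz; the values are invented for illustration):

```python
import numpy as np

# Hypothetical S/N values: one row per source, one column per Planck channel
# (30, 44, 70, 100, 143, 217, 353, 545, 857 GHz).
snr = np.array([[5.1, 3.2, 4.4, 6.0, 7.2, 4.1, 2.0, 1.5, 1.0],
                [4.5, 4.2, 4.8, 5.5, 6.1, 5.0, 4.3, 4.1, 4.6],
                [3.1, 2.0, 2.5, 3.9, 4.2, 3.0, 1.2, 0.8, 0.5]])

CH30, CH143 = 0, 4
pcntb = (snr[:, CH30] >= 4.0) & (snr[:, CH143] >= 4.0)   # bright subsample flag
pcnths = np.all(snr >= 4.0, axis=1)                       # high-significance flag
```

Here only the second (hypothetical) source passes the all-nine-channels criterion, while the first passes only the 30/143-GHz one.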
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{Figures/figure2.pdf}
\caption{Position on the sky (Galactic coordinates) of the {{PCNThs}} sources. The boundary of the \textit{Planck}\ $70\,\%$ Galactic mask, \GAL070, is superimposed in grey. }
\label{fig:positions_HSS}
\end{figure*}
\section{Validation and properties of the catalogues} \label{sec:valprop}
\subsection{Internal \textit{Planck}\ validation} \label{sec:fluxval}
In order to validate the flux densities obtained with the MTXF multi-frequency detection
technique, we have compared the
{{PCNT}} flux densities
with those of the PCCS2 \citep{planck2014-a35}. The PCCS2 lists four different flux density estimates for each of its sources: the native detection method flux estimate (\texttt{DETFLUX}); aperture photometry (\texttt{APERFLUX}); Gaussian fitting (\texttt{GAUFLUX}); and point-spread function fitting (\texttt{PSFFLUX}). Except for the case of very extended sources, the four estimates tend to show good agreement. Here we choose the \texttt{DETFLUX} flux density estimate as a reference.
Our input catalogue contains 29\,400 sources, many of them not included in the PCCS2. For this validation step we select for each frequency only sources outside the \textit{Planck}\ \GAL040\ Galactic mask. This is a more restrictive mask than the one we use in the rest of this paper (GAL070); the rationale for this choice is that we want to be sure to avoid Galactic sources as much as possible for this comparison. There are two reasons for this. Firstly,
Galactic sources will be seen against a brighter background, which could cause errors in the determination of their flux densities; secondly, a significant portion of Galactic objects appear as extended sources, particularly for the HFI channels. As discussed in Sect.~\ref{sec:photometry}, accurate filter photometry
requires that the sources are point-like (i.e. smaller than the beam area).
Figure~\ref{fig:val_all} shows the results of the comparison between MTXF and PCCS2 flux densities for the nine \textit{Planck}\ channels. In order to quantify the degree of agreement between {{PCNT}} and PCCS2 flux density estimates, we have performed a linear fit based on the orthogonal distance regression method \citep{boggs90}, which takes into account uncertainties in both the $x$ and $y$ axes.\footnote{The number of points that satisfy our criteria and are used for the fitting at
30, 44, 70, 100, 143, 217, 353, 545, and 857\,GHz are
354, 232, 217, 198, 159, 122, 96, 34, and 36, respectively.}
The resulting coefficients of the fit
$S_\mathrm{{{PCNT}}} = a \, S_\mathrm{PCCS2}+b$ are shown in Table~\ref{tb:fits}. For this fit only sources with $0.5\,{\rm Jy}<S_\mathrm{PCCS2}<10\,{\rm Jy}$ were used, except for the cases of 545 and 857\,GHz, where the Galactic contamination is stronger. For these two channels, we have considered only sources above 1\,Jy.
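The orthogonal-distance idea can be illustrated with Deming regression, a closed-form special case of the full ODR used here (the general method additionally weights each point by its individual flux density uncertainties, as implemented, e.g., in \texttt{scipy.odr}). The data points below are invented for illustration:

```python
import math

def deming_fit(x, y, delta=1.0):
    """Fit y = a*x + b by minimizing orthogonal (perpendicular) distances,
    assuming the ratio of y to x error variances is delta; a closed-form
    special case of orthogonal distance regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / n
    syy = sum((yi - my) ** 2 for yi in y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    a = (syy - delta * sxx
         + math.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy**2)) / (2.0 * sxy)
    return a, my - a * mx

# Illustrative flux densities (Jy) lying exactly on S_PCNT = 2 * S_PCCS2 + 1:
a, b = deming_fit([0.5, 1.0, 2.0, 4.0, 8.0], [2.0, 3.0, 5.0, 9.0, 17.0])
```

Unlike ordinary least squares, which attributes all scatter to the $y$ axis and biases the slope low when $x$ is noisy, this estimator treats both axes symmetrically.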
{{PCNT}} and PCCS2 flux densities agree very well for the channels up to 217\,GHz. At 353\,GHz and above the {{PCNT}} tends to overestimate flux densities with respect to PCCS2. There are two possible reasons for this disagreement.
The first reason is background contamination: the HFI channels are more contaminated by Galactic and extragalactic infrared emission, and since the MTXF approach picks up the maximum of the filtered image around the position of each target in the input catalogue, flux densities can be overestimated in regions with strong contamination. Galactic emission at HFI frequencies, particularly at 545 and 857\,GHz, is strong even outside the Galactic mask.
For those frequencies, other flux density estimators (such as aperture photometry) can be more robust.
Our choice of the restrictive
\GAL040\ mask, which leaves only $40\,\%$ of the sky unmasked, and the relatively high threshold of 1\,Jy we set for the 545- and 857-GHz channels, should largely reduce the effect of this source of overestimation.
The second reason for the disagreement between {{PCNT}} and PCCS2 flux densities is a fundamental limitation of the multi-frequency filtering technique,
discussed in Appendix~\ref{sec:practical}. The MTXF technique mixes
data from different channels according to Eq.~(\ref{eq:matrix_filters_eqs}). This could lead to intensity leakage between channels, but this is prevented by the orthonormality condition (Eq.~\ref{eq:orthonormal}). The problem is that Eq.~(\ref{eq:orthonormal}) works only if the source profiles $\tau_l (\vec{q})$ are well known for all the frequencies to be filtered. For this paper we have assumed circular Gaussian beams with the \textit{Planck}\ nominal FWHM values. This assumption is good enough if: (a) the sources are point-like; and (b) the instrumental beams are well described by circular Gaussians (i.e. the beam is stable across the image, with small sidelobes and circular symmetry). This assumption is not correct in two relevant cases:
\begin{itemize}
\item when beams are significantly non-Gaussian and non-circular, as discussed in Appendix~\ref{sec:practical};
\item for sources in the high-frequency \textit{Planck}\ channels that are extended, which is a problem, since if a source is not strictly point-like, the orthonormality relation (Eq.~\ref{eq:orthonormal}) is not satisfied and there will be some leakage between channels, affecting the MTXF photometry.
\end{itemize}
In order to check this second point we have repeated the fit but using only sources that are not flagged as extended in PCCS2. Below 217\,GHz the effect of excluding extended sources in the fit is negligible. Table~\ref{tb:fits2} shows the fits for 217, 353, 545, and 857\,GHz after excluding the sources that are flagged as extended in the PCCS2. If we do not consider the fit errors, the fit is slightly better at 217\,GHz, significantly better at 353\,GHz, and only marginally better at 545 and 857\,GHz, where the effect of Galactic contamination combined with pointing inaccuracies dominates the photometric errors.\footnote{A fit at 857\,GHz between PCCS2 and {{PCNT}} flux densities (using the MTXF flux density estimation at the pixel corresponding to
the
exact coordinates in the input sample instead of the local maximum around that position) gives fit parameters $a = 0.94 \pm 0.04$ and $b = (-350 \pm 485)\,$mJy. Excluding sources flagged as extended, we get $a = 0.90 \pm 0.05$ and $b = (27 \pm 1168)\,$mJy. Now most of the bias due to random positive background fluctuations is removed, but we underestimate fluxes because the coordinates in the input sample (obtained at 30 and 143\,GHz) do not necessarily correspond to the true positions at 857\,GHz.} Taking into account the errors in the fits, the only channel for which it is important to remove the sources flagged as extended is 353\,GHz.
\begin{table}[htbp!]
\newdimen\tblskip \tblskip=5pt
\caption[]{Results of the fit $S_\mathrm{MTXF} = a S_\mathrm{PCCS2}+b$ for the nine \textit{Planck}\ frequencies.}
\label{tb:fits}
\vskip -3mm
\footnotesize
\setbox\tablebox=\vbox{
\newdimen\digitwidth
\setbox0=\hbox{\rm 0}
\digitwidth=\wd0
\catcode`*=\active
\def*{\kern\digitwidth}
\newdimen\signwidth
\setbox0=\hbox{+}
\signwidth=\wd0
\catcode`!=\active
\def!{\kern\signwidth}
\newdimen\pointwidth
\setbox0=\hbox{.}
\pointwidth=\wd0
\catcode`?=\active
\def?{\kern\pointwidth}
\halign{\hbox to 2.0cm{#\leaders\hbox to 5pt{\hss.\hss}\hfil}\tabskip 1em&
\hfil#\hfil& \hfil#\hfil\tabskip 0em\cr
\noalign{\vskip 3pt\hrule \vskip 1.5pt \hrule \vskip 5pt}
\noalign{\vskip -1pt}
\omit\hfil$\nu$ [GHz]\hfil& $a$& *!$b$ [mJy]\cr
\noalign{\vskip 3pt\hrule\vskip 5pt}
*30& $0.97\pm0.01$& $*!65.3\pm*12.0$\cr
*44& $0.97\pm0.01$& $*!75.9\pm*25.8$\cr
*70& $0.96\pm0.01$& $**!3.3\pm*10.8$\cr
100& $1.00\pm0.01$& $**-1.4\pm**8.0$\cr
143& $1.02\pm0.01$& $*!26.7\pm**8.2$\cr
217& $1.05\pm0.01$& $**-4.6\pm*13.8$\cr
353& $1.11\pm0.02$& $*-24.7\pm*25.2$\cr
545& $1.13\pm0.02$& $*-40.7\pm150.3$\cr
857& $1.21\pm0.02$& $!295.3\pm234.9$\cr
\noalign{\vskip 3pt\hrule\vskip 5pt}}}
\endPlancktable
\end{table}
\begin{table}[htbp!]
\newdimen\tblskip \tblskip=5pt
\caption[]{Results of the fit $S_\mathrm{MTXF} = a S_\mathrm{PCCS2}+b$ for the four highest \textit{Planck}\ frequencies, excluding sources that are flagged as extended in the PCCS2.}
\label{tb:fits2}
\vskip -3mm
\footnotesize
\setbox\tablebox=\vbox{
\newdimen\digitwidth
\setbox0=\hbox{\rm 0}
\digitwidth=\wd0
\catcode`*=\active
\def*{\kern\digitwidth}
\newdimen\signwidth
\setbox0=\hbox{+}
\signwidth=\wd0
\catcode`!=\active
\def!{\kern\signwidth}
\newdimen\pointwidth
\setbox0=\hbox{.}
\pointwidth=\wd0
\catcode`?=\active
\def?{\kern\pointwidth}
\halign{\hbox to 2.0cm{#\leaders\hbox to 5pt{\hss.\hss}\hfil}\tabskip 1em&
\hfil#\hfil& \hfil#\hfil\tabskip 0em\cr
\noalign{\vskip 3pt\hrule \vskip 1.5pt \hrule \vskip 5pt}
\noalign{\vskip -1pt}
\omit\hfil$\nu$ [GHz]\hfil& $a$& *!$b$ [mJy]\cr
\noalign{\vskip 3pt\hrule\vskip 5pt}
217& $1.04\pm0.04$& $!**4.9\pm*52.1$\cr
353& $0.98\pm0.04$& $!*91.0\pm*94.7$\cr
545& $1.13\pm0.04$& $-268.6\pm324.0$\cr
857& $1.20\pm0.05$& $!391.9\pm849.4$\cr
\noalign{\vskip 3pt\hrule\vskip 5pt}}}
\endPlancktable
\end{table}
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{Figures/figure3.pdf}\\
\caption{{{PCNT}} flux densities versus PCCS2 \texttt{DETFLUX} at the same frequency. The $x=y$ identity is shown with a red dotted line. The dashed line shows the best linear fit to the data points and their error bars for sources with PCCS2 \texttt{DETFLUX} between 0.5 and 10\,Jy up to 353\,GHz and above 1\,Jy at higher frequencies.}
\label{fig:val_all}%
\end{figure*}
\subsection{External validation} \label{sec:ext_val}
The {{PCNT}} contains 29\,400 entries, so it is not practical to perform an individual external validation of all the candidates. Instead of following such an approach, we attempt
statistical external validation by comparing the number counts of our catalogue with existing models. We also try to validate a subset of interesting bright {{PCNT}} sources that are not present in the PCCS2 by looking for matches in existing ground-based observing databases.
Additionally,
we study the reliability and completeness of the catalogue by comparison to
the Combined Radio All-Sky Targeted Eight-GHz Survey catalogue \citep[CRATES,][]{CRATES}
and
we cross-correlate the non-thermal-dominated {{PCNTb}} and {{PCNThs}} catalogues with a catalogue of known blazars.
\subsubsection{Number counts} \label{sec:numbercounts}
One simple approach to check the validity of the {{PCNT}} catalogue is to estimate its number counts and to compare them with well-known models and/or other data.
For this purpose we have considered only extragalactic (outside the \GAL070\ \textit{Planck}\ Galactic mask)
sources detected at the $>4\,\sigma$ level in each channel. Another complementary approach
is to study the distributions of spectral indices of the populations in the catalogue, as well as their
variation with frequency, and to compare these with models and with previous observations.
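The first approach amounts to binning the detected flux densities and normalizing by bin width and surveyed solid angle. A hedged sketch (the fluxes, bin edges, and sky fraction below are synthetic and purely illustrative):

```python
import numpy as np

def differential_counts(flux_jy, bin_edges_jy, area_sr):
    """Differential number counts dN/dS [Jy^-1 sr^-1].

    flux_jy      : detected flux densities [Jy]
    bin_edges_jy : monotonically increasing bin edges [Jy]
    area_sr      : surveyed solid angle [sr]
    Returns dN/dS per bin and the corresponding Poisson errors.
    """
    n, _ = np.histogram(flux_jy, bins=bin_edges_jy)
    widths = np.diff(bin_edges_jy)
    dnds = n / (widths * area_sr)
    err = np.sqrt(n) / (widths * area_sr)
    return dnds, err

# Illustrative: logarithmic bins between 0.3 and 10 Jy over 70% of the sky
edges = np.logspace(np.log10(0.3), 1.0, 8)
area = 0.7 * 4.0 * np.pi
fluxes = np.array([0.4, 0.5, 0.8, 1.2, 2.5, 4.0])
dnds, err = differential_counts(fluxes, edges, area)
```

Multiplying the counts back by the bin widths and area recovers the raw number of detections, a useful consistency check when comparing to models.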
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Figures/figure4.pdf}
\caption{{{PCNT}} differential source number counts for 30--217\,GHz. Only sources detected above the $4\,\sigma$ level at each frequency and outside the \GAL070\ \textit{Planck}\ Galactic mask were used (blue squares). The red
circles correspond to the source counts estimated only for sources with a low-frequency counterpart in the CRATES \citep{CRATES} catalogue. As a comparison, the predicted radio source number counts from the
\cite{tucci11} C2Ex model
are also plotted (continuous black line).
In the case of the 143 and 217 GHz channels, we also plot the number counts predicted by
the \cite{tucci11} C2Co model (dashed green line). \label{fig:counts_all} }
\end{figure}
Figure~\ref{fig:counts_all} displays the {{PCNT}} differential source number counts (blue squares) in the six \textit{Planck}\ channels between 30 and 217\,GHz. Only sources above the $4\,\sigma$ detection threshold and outside the \GAL070\ Galactic mask are considered. The corresponding numbers of sources used at each frequency channel are listed in Table~\ref{tb:summary} (first row). As a check, we also plot the differential number counts (red circles) calculated using the same source sample, but selecting only those sources with a low-frequency counterpart in the CRATES \citep{CRATES} catalogue (see Section~\ref{sec:ext_val}). The two estimates are in very good agreement at each \textit{Planck}\ frequency channel and over the whole flux density interval analysed here, showing that our source selection misses only a negligible fraction of low-frequency radio sources.
As a comparison, in each panel we plot the differential source number counts predicted by the C2Ex model of \cite{tucci11}. This was the most successful of their models for predicting number counts at high radio frequencies (i.e. $\nu > 100$\,GHz), given that it consistently explained various independent data sets
\citep{Vieira10,Marriage11,planck2011-6.1} in the flux density interval $0.05 < S < 10$\,Jy. In the case of the 143- and 217-GHz channels, we also plot the number counts predicted by the C2Co model discussed by \cite{tucci11}.\footnote{Moving to lower \textit{Planck}\ frequencies, the differences between the C2Ex and C2Co models get progressively smaller; in particular, their predictions are very similar at 30 and 44\,GHz. For this reason, we did not plot the C2Co model at lower frequencies.} This latter model assumed a more compact synchrotron-emission region -- in the inner jet of FSRQs -- than C2Ex, while keeping the same parameters for BL Lac objects
\citep[see][their Section 4.2]{tucci11}. This assumption implies that, in the C2Co model, an almost flat emission spectrum can be maintained up to higher radio frequencies, thus increasing the number of sources detected at high frequencies.
Regarding the counts between 353 and 857\,GHz (not shown in the plot), we notice an excess of sources with respect to the model above 300\,mJy. Although there are partial explanations for this (for example, local galaxies showing both radio and infrared emission), the simplest explanation could be contamination by thermal emission appearing at the same positions as (already fainter) radio sources detected at lower \textit{Planck}\ frequencies, or purely thermal sources (either Galactic or extragalactic) that are bright enough to be selected at a frequency as low as 143\,GHz in our input sample.
We discuss this possibility in Sect.~\ref{sec:Galactic}.
\subsubsection{Reliability and completeness} \label{sec:relcomp}
To verify the reliability of our catalogue we cross-match our detected sources with the CRATES catalogue
\citep{CRATES} at 8.4\,GHz. However, by construction, the CRATES catalogue misses the steep-spectrum sources. Since we expect to include several sources of this type, we also complement the cross-match (within a 15\ifmmode {^{\scriptstyle\prime}}\else $^{\scriptstyle\prime}$\fi\ search radius) with the GB6
\citep[Northern hemisphere,][]{GB6}
and PMN
\citep[Southern hemisphere,][]{PMN} catalogues, both at $5\,$GHz. Additionally, we use other low-frequency catalogues and ground-based observing databases, such as the Australia Telescope 20\,GHz (AT20G) survey \citep{AT20G} and the SIMBAD
database.\footnote{\href{http://simbad.u-strasbg.fr/}{http://simbad.u-strasbg.fr/}}
Since the {{PCNT}} extends to IR frequencies, on occasion we also use higher frequency catalogues
such as those constructed using IRAS \citep{IRAS} or the
Planck Catalogue of Galactic Cold Clumps \citep[PGCC,][]{planck2014-a37} in order to check
if particular candidates with thermal-like emission can be matched to already known sources.
However, our main focus in this paper will be the reliability and completeness of the
low-frequency, non-thermal part of the catalogue. Due to the double
{{PCNT}} selection criterion at 30\,GHz and 143\,GHz, we will study the reliability of the catalogue
in consecutive steps, starting from sources detected with ${\rm S/N}\geq 4$ separately in each of the two selection channels and then proceeding to a more restrictive, brighter subsample of
sources detected above the $4\,\sigma$ level simultaneously at all the non-thermal \textit{Planck}\ channels (30, 44, 70, 100, and 143\,GHz). Finally, we will discuss the completeness of the catalogue.
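All the cross-matches used in this section reduce to a search for the nearest catalogue neighbour within a fixed angular radius. A minimal sketch (the coordinates are illustrative; for full catalogues a spherical KD-tree would be preferable to this brute-force scan):

```python
import numpy as np

def angular_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (Vincenty formula; inputs in degrees)."""
    l1, b1, l2, b2 = map(np.radians, (ra1, dec1, ra2, dec2))
    dl = l2 - l1
    num = np.hypot(np.cos(b2) * np.sin(dl),
                   np.cos(b1) * np.sin(b2) - np.sin(b1) * np.cos(b2) * np.cos(dl))
    den = np.sin(b1) * np.sin(b2) + np.cos(b1) * np.cos(b2) * np.cos(dl)
    return np.degrees(np.arctan2(num, den))

def match_within(ra, dec, cat_ra, cat_dec, radius_arcmin):
    """Index of the closest catalogue source within radius_arcmin, or -1."""
    sep = angular_sep_deg(ra, dec, np.asarray(cat_ra), np.asarray(cat_dec))
    i = int(np.argmin(sep))
    return i if sep[i] * 60.0 <= radius_arcmin else -1

# Illustrative: a source at (RA, Dec) = (150.0, 2.0) matched within 15 arcmin
idx = match_within(150.0, 2.0, [150.1, 200.0], [2.05, -30.0], radius_arcmin=15.0)
```

The Vincenty form of the separation stays numerically stable for both very small and near-antipodal separations, which the simpler arccos formula does not.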
\noindent
\textbf{${\rm S/N}\geq4$ at 30\,GHz:} At 30\,GHz we have 1701 detected sources
outside the Galactic mask \GAL070\ that have ${\rm S/N}\geq4$. Of these, 1566 have a counterpart in the low-frequency catalogues, implying that at most $8\,\%$ of the selected sources are spurious. In
Fig.~\ref{fig:reliab}
we can see the number of unmatched sources as a function of the flux density,
while Fig.~\ref{fig:frac_reliab} shows
the fraction of unmatched sources as a function of the flux density.
As expected, most of the unmatched sources appear near the detection limit; they are mostly artefacts produced by nearby bright sources (all of them are flagged as part of a group).
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{Figures/figure5.pdf}
\caption{Number of {{PCNT}} sources with no low-frequency counterpart as a function of the flux density at 30\,GHz. }
\label{fig:reliab}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{Figures/figure6.pdf}
\caption{Fraction of {{PCNT}} sources with no low-frequency counterpart as a function of the flux density at 30\,GHz. }
\label{fig:frac_reliab}
\end{figure}
\noindent
\textbf{${\rm S/N}\geq 4$ at 143\,GHz:} The other interesting channel to check the reliability of our catalogue is at 143\,GHz, the second channel used to obtain the candidates of the initial sample. At this frequency we have 2047 sources outside the \GAL070\ \textit{Planck}\ Galactic mask and with ${\rm S/N}\geq4$. There are 1664 sources with counterparts in the lower frequency radio catalogues and 1720 if we also consider IRAS \citep{IRAS} counterparts. Among the 327
sources not matched to these catalogues, only
14 are in the {{PCNTb}} and are brighter than 200\,mJy at 143\,GHz.
The rest have low flux densities at 143\,GHz. Returning to the 14 relatively bright {{PCNTb}}
sources that are not matched to GB6 or PMN,
nine of them have a counterpart in the CRATES catalogue. Among the remaining five objects, four have cross-identification with the BZCAT5 catalogue of blazars \citep{massaro15}; they were also observed by AT20G, with flux densities at 20\,GHz that are consistent with our 30\,GHz photometry within factors of 0.9--1.2. The last {{PCNTb}} object in this list is 33.7 arcseconds away
from blazar candidate WISE J140610.82$-$070702.4 \citep{WISEp}, whose flux density at 20\,GHz measured by AT20G is also
compatible with the {{PCNT}} 30-GHz flux density.
Of the remaining unmatched sources that are not in the {{PCNTb}}, 16 are brighter than 200\,mJy at 143\,GHz. However, eight of these have counterparts in the PGCC \citep{planck2014-a37}, so they are probably associated with regions of Galactic emission.
Of the remaining eight objects, four are detected only at 143\,GHz, or at 143\,GHz and one other channel, and it is therefore reasonable to consider them spurious detections. The other objects are of uncertain nature, but all of them (except for one) have flux densities below $300\,$mJy at 143\,GHz; they too are probably spurious detections. The last one, {{PCNT}} ID\,4888, has a measured flux density of 482.4\,mJy at 143\,GHz and is detected above the $4\,\sigma$ level between 70 and 353\,GHz in the {{PCNT}}.
\bigskip
\noindent
\textbf{${\rm S/N}\geq 4$ between 30 and 143\,GHz:} The {{PCNT}} contains 1012 sources detected above the $4\,\sigma$ level, simultaneously
at 30, 44, 70, 100, and 143\,GHz. Among these 1012 sources, 19
do not have a counterpart in PCCS2 in any
of the aforementioned frequency bands.
All of these 19 sources have counterparts within a 15\ifmmode {^{\scriptstyle\prime}}\else $^{\scriptstyle\prime}$\fi\ search radius either in the GB6 or the PMN catalogues.
In order to further confirm that these detections are real ERSs with good photometric
measurements in the {{PCNT}},
we looked for
matches in existing ground-based observing databases, which have data
reaching to frequencies closer to the lower bands of the \textit{Planck}-LFI than the GB6 and the PMN. We excluded sources close to the Galactic plane ($|b|\leq
10^{\circ}$) to avoid contamination by Galactic emission. This left
us with 18 of 19 sources.
The AT20G Survey covers the sky south of
declination $0^{\circ}$. We found 10 matches, each within 6\ifmmode {^{\scriptstyle\prime\prime}}\else $^{\scriptstyle\prime\prime}$\fi\ to 93\ifmmode {^{\scriptstyle\prime\prime}}\else $^{\scriptstyle\prime\prime}$\fi\ of the
{{PCNT}} position, and one within 125\ifmmode {^{\scriptstyle\prime\prime}}\else $^{\scriptstyle\prime\prime}$\fi. The 20-GHz flux densities were taken
between 2004 and 2008, the observing dates depending on the declination band. Thus
none of the observations exactly overlap with the \textit{Planck}\ observations, which took
place
between 12 August 2009 and 23 October 2013. The AT20G
20-GHz flux densities agree with the {{PCNT}} 30-GHz flux densities within a factor of 0.6 to 2.7 (average ratio 1.06). This
is in good agreement with what is expected for possibly variable sources when
observations are taken at two different centimetre-domain frequency bands and at two
different observing epochs several years apart \citep{hovatta07,nieppola07}.
For sources with a declination higher than $-20\ifmmode^\circ\else$^\circ$\fi$, we checked the Owens
Valley Radio Observatory \citep[OVRO,][]{OVRO} 40-m telescope database \citep{OVROpaper}.\footnote{\url{http://www.astro.caltech.edu/ovroblazars/}}
Starting in 2008, they
regularly monitored 1800 blazars at 15\,GHz. We found four matches within 86\ifmmode {^{\scriptstyle\prime\prime}}\else $^{\scriptstyle\prime\prime}$\fi\
or less of our new {{PCNT}} source positions; one of them was also in our AT20G source
identification list. All of these sources showed at least some 15-GHz
variability during the \textit{Planck}\ mission. For all of these four sources the
30-GHz {{PCNT}} flux densities are within a factor of 2 or less of their 15-GHz
long-term average flux densities, thus, considering their variable behaviour, the flux densities
are of comparable amplitudes.
Visual inspection at 30 and 143\,GHz of the \textit{Planck}\ maps around the four remaining $4\,\sigma$ candidates (those not matched to either AT20G or OVRO sources) suggests the presence of moderate point-like, positive
temperature fluctuations at the positions of three of the targets (ID numbers $2037$, $14439$, and $2140$), but does not reveal any obvious structure around the remaining
target (ID\,$28805$). These four sources, however, all have
counterparts at 5\,GHz in the GB6 or the PMN, and their
{{PCNT}} flux densities at 30\,GHz are compatible with those of their low-frequency counterparts, strongly suggesting that
these objects are real flat spectrum radio sources.
\bigskip
\noindent
\textbf{Completeness:} From the source number counts (see Sect.~\ref{sec:numbercounts}) we can also derive a rough estimate of the completeness limit of the catalogue. This lies at around 300\,mJy up to 70\,GHz, decreasing to about 150\,mJy out to 217\,GHz. At higher frequencies it continuously increases, with 250, 500 and 1000\,mJy limits for 353, 545, and 857\,GHz, respectively.
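A rough completeness limit of this kind can be read off by scanning the counts from bright to faint and flagging the flux density below which they fall significantly short of the model. A toy sketch with synthetic counts and an illustrative 20\,\% tolerance (not the criterion actually used here):

```python
import numpy as np

def completeness_limit(bin_centres, observed, model, tolerance=0.2):
    """Faintest flux at which observed counts still track the model.

    Scans from bright to faint and returns the last bin centre whose
    fractional deficit (model - observed) / model stays within `tolerance`;
    returns None if even the brightest bin falls short.
    """
    limit = None
    for s, n_obs, n_mod in sorted(zip(bin_centres, observed, model),
                                  reverse=True):
        if n_mod > 0 and (n_mod - n_obs) / n_mod <= tolerance:
            limit = s
        else:
            break
    return limit

# Synthetic example: counts become incomplete below ~0.3 Jy
centres = np.array([0.1, 0.2, 0.3, 0.5, 1.0])     # bin centres [Jy]
model   = np.array([500., 200., 90., 40., 10.])   # model counts per bin
obs     = np.array([100., 120., 80., 38., 10.])   # observed counts per bin
s_lim = completeness_limit(centres, obs, model)
```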
\subsection{Cross-identification of {{PCNTb}} sources with the BZCAT5 catalogue of blazars} \label{sec:BZCAT}
As noted in Sects.~\ref{sec:cat0} and~\ref{sec:cat1}, the criteria we
set for the creation of our input catalogue favour the selection of non-thermal compact sources. This is
particularly true for the {{PCNTb}} and the {{PCNThs}}, which should by construction contain
many bright flat-spectrum sources. In order to check this, we have cross-correlated both sub-catalogues with an external catalogue of known blazars.
Of the 1424 sources in the
{{PCNTb}}, 913 have counterparts within $7^{\prime}$ (the \textit{Planck}\ FWHM at 143\,GHz) in the BZCAT5 catalogue of blazars \citep{massaro15}. Most of them (832) are outside the \textit{Planck}\ \GAL070\ Galactic mask.
The BZCAT5 provides flux density estimates at 143\,GHz, taken from the PCCS catalogue \citep{planck2013-p05}, for 472 of the 913 blazars with a counterpart in the {{PCNTb}}. Remarkably, thanks to our multi-frequency MTXF photometry, the {{PCNTb}} can provide flux density estimates at 143\,GHz, with the corresponding photometric errors, for the other 441 blazars in the list.
Among the 151 sources in the {{PCNThs}}, 72 have a counterpart within a radius of 7\ifmmode {^{\scriptstyle\prime}}\else $^{\scriptstyle\prime}$\fi\ in the BZCAT5 catalogue. Of these objects, 60 are outside the \textit{Planck}\ \GAL070\ Galactic mask, and 49 of them have flux densities at 143\,GHz listed in the BZCAT5. Thanks to our multi-frequency MTXF photometry, we can provide the flux densities at 143\,GHz for the remaining 11 BZCAT5 objects.
Figure~\ref{fig:HSS_massaro_fluxes} shows the {{PCNT}} SEDs of the 49 sources of our high-significance subsample
that have flux densities at 143\,GHz listed in
the BZCAT5.\footnote{The BZCAT5 catalogue provides flux density information at 143\,GHz for only $16\,\%$ of its sources.} Not
surprisingly, almost all these blazar SEDs are essentially flat in the whole frequency range analysed here.
Figure~\ref{fig:average_SEDS} shows the average normalized SED of the same 49 sources. This SED is essentially flat, with an excess at 857\,GHz that is partially due to contamination from Galactic emission and, very occasionally, to random associations with nearby galaxies.
There are at least two sources whose
SEDs rise sharply at high frequency. The first one is the brightest object in the plot, which is
only 1\ifmmode {^{\scriptstyle\prime}}\else $^{\scriptstyle\prime}$\fi\ away from the NASA/IPAC Extragalactic Database (NED\footnote{The NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.}) coordinates of the galaxy
Centaurus~A. The host galaxy of Cen~A has a dust lane,
responsible for powerful FIR emission. The Cen~A \emph{Herschel}
flux densities at 500 and 350\,$\mu$m are roughly 35 and 100\,Jy \citep[as obtained by visual inspection of the right panel of figure~3 in][]{Parkin},
to be compared to our flux densities of 36 and 110\,Jy at 545 and 857\,GHz, respectively. Regarding the second object,\footnote{The BZCAT5 redshift for this source is $z=0.7$.} whose flux density at 857\,GHz rises to become the third highest in the plot (light purple line in Fig.~\ref{fig:HSS_massaro_fluxes}), it is located 5\parcm4 away from the nearby galaxy NGC 6503 \citep[$z = 0.000083$,][]{Epinat08}. This chance association could explain the observed flux excess at high frequencies for this object.
Many of the SEDs in Fig.~\ref{fig:HSS_massaro_fluxes} show a small but noticeable bump around 100\,GHz. A possible explanation for this excess is CO contamination, whose strongest transition line ($J\,{=}\,1\,{\rightarrow}\,0$) lies at 115\,GHz, very close to this frequency channel. This possible CO excess could have an extragalactic origin, since the redshifted $J\,{=}\,1\,{\rightarrow}\,0$ transition should be observable in the \textit{Planck}\ 100-GHz band for sources with $z \lesssim 0.45$, a criterion met by 19 out of the 49 sources plotted in Fig.~\ref{fig:HSS_massaro_fluxes}.
However, the CO emission from blazars is generally considered to be negligible. We have tested this by exploiting the tight, almost linear relationship between the CO luminosity and the total IR luminosity due to dust emission \citep[e.g.][]{Greve14}. It is therefore possible to estimate the total IR luminosity of a source from its observed excess at 100\,GHz, assuming that such an excess is purely due to CO emission, and to compare it with direct observations, where available. When IR observations are not available, it is still possible to compute the star formation rate (SFR) from the IR emission derived from the CO excess, using the calibration of \cite{Kennicutt12}, and to check whether the derived SFR is plausible. This kind of comparison is only approximate, but it can give us an idea of whether the observed excess could be due to extragalactic CO emission. Only in the case of Cen~A is the IR luminosity derived from the CO consistent with direct measurements. In two other cases, {{PCNT}} ID\,721 and ID\,1257, the derived SFRs are plausible (500--600$\,{\rm M}_{\odot}\,{\rm yr}^{-1}$); however, there is no indication that either of these sources has any significant star formation (and low-$z$ blazars do not generally show significant SFRs). The CO excesses of all the other sources would correspond to IR luminosities orders of magnitude higher than observed, or to SFRs far above the Eddington limit (${\rm SFR}\gg10^4\,{\rm M}_{\odot}\,{\rm yr}^{-1}$). We conclude that, with only a few exceptions (most notably Cen~A), the CO emission from the blazars in our sample is unimportant.
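The chain of estimates just described can be sketched as follows. Every calibration in this sketch (the effective bandwidth over which the excess is spread, the slope-unity CO-to-IR normalization, and the SFR conversion) is an illustrative stand-in for the relations of \cite{Greve14} and \cite{Kennicutt12}, not the values actually used in our analysis:

```python
C_KMS = 2.998e5  # speed of light [km/s]

def sfr_from_co_excess(s_excess_jy, z, d_l_mpc, nu_obs_ghz=100.0,
                       band_ghz=30.0):
    """Rough SFR implied by reading a 100-GHz continuum excess as CO(1-0).

    Illustrative calibrations (assumptions, not the paper's exact values):
    - the excess is spread over an effective bandwidth `band_ghz`;
    - L'_CO from the standard line-luminosity formula
      (Solomon & Vanden Bout 2005), in K km/s pc^2;
    - a slope-unity CO-IR relation, L_IR ~ 100 * L'_CO  [L_sun];
    - SFR ~ 1.5e-10 * L_IR [L_sun], a Kennicutt-style calibration.
    """
    dv = C_KMS * band_ghz / nu_obs_ghz        # effective line width [km/s]
    s_dv = s_excess_jy * dv                   # integrated flux [Jy km/s]
    l_co = 3.25e7 * s_dv * nu_obs_ghz**-2 * d_l_mpc**2 / (1.0 + z)**3
    l_ir = 100.0 * l_co                       # total IR luminosity [L_sun]
    return 1.5e-10 * l_ir                     # SFR [M_sun / yr]

# A 100-mJy excess in a hypothetical z = 0.1 source (D_L ~ 460 Mpc) already
# implies a SFR well above the Eddington limit quoted in the text.
sfr = sfr_from_co_excess(0.1, 0.1, 460.0)
```

Even with these rough numbers, a modest 100-GHz excess translates into an implausibly large SFR, which is the essence of the argument above.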
A second possibility is that the observed excess is caused by Galactic CO emission, which is visible even at high Galactic latitudes \citep[cf.][figures 3, 4, and 5, and the corresponding discussion in section~5 of that paper]{Planck2013-p13}. If that were the case, the 100-GHz excess would also be visible in other {{PCNT}} sources at similar Galactic latitudes that are not in the BZCAT5. While it is true that the excess is visible for some of the {{PCNT}} sources not in the BZCAT5, there is no noticeable bump in the averaged SED of the whole {{PCNT}} (restricting the sample to sources above the $4\,\sigma$ detection level at 143\,GHz). Therefore, while we cannot rule out Galactic CO emission as the origin of the observed excesses at 100\,GHz for some sources of the catalogue, there is no evidence that this effect is relevant for the majority of sources in the {{PCNT}}.
Finally, a third possibility is that the 100-GHz bump is a manifestation
of the
natural variation in
blazar SEDs at low and intermediate frequencies. Blazar spectra are the combination of emission from different components of the jets. Perhaps the excesses and deficits seen in Fig.~\ref{fig:HSS_massaro_fluxes} are simply a manifestation of these different components, and have no further relevance. It is also possible that the observed excesses at 100\,GHz in Fig.~\ref{fig:HSS_massaro_fluxes} are due to a combination of the three possibilities above.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{Figures/figure7.pdf}
\caption{Spectral energy distributions of the 49 {{PCNThs}} sources outside the \textit{Planck}\ \GAL070\ mask that are identified as blazars in the BZCAT5 catalogue \citep{massaro15}.}
\label{fig:HSS_massaro_fluxes}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{Figures/figure8.pdf}
\caption{Average normalized spectral energy distributions of the 49 {{PCNThs}} sources outside the \textit{Planck}\ \GAL070\ mask that are identified as blazars in the BZCAT5 catalogue \citep{massaro15}.}
\label{fig:average_SEDS}
\end{figure}
\subsection{Statistical properties of the catalogues} \label{sec:statprop}
The main goals of this paper are to introduce the {{PCNT}}, {{PCNTb}}, and {{PCNThs}} catalogues, to explain how they were obtained using \textit{Planck}\ data, and to make them available to the community.
Table~\ref{tb:summary} summarizes some of the results of our investigations into cross-matching the {{PCNT}} to external catalogues of radio sources, blazars, and Galactic cold cores, discussed in the previous sections for sources outside the \GAL070\ mask.\footnote{The total number of {{PCNT}} sources outside the \GAL070\ Galactic mask is 18\,647.} The table
gives an approximate idea of the number of sources detected by \textit{Planck}\ not covered in lower frequency surveys,\footnote{As seen in the previous sections, this number is not
equivalent to an assessment of the reliability of the catalogue. See Sect.~\ref{sec:relcomp} for a full discussion of reliability.} of the level of contamination by Galactic objects for the different \textit{Planck}\ channels, and of the fraction of bright {{PCNT}} sources previously identified as blazars.
A full statistical study of the catalogue is beyond the scope of this work; however, to illustrate the potential of the catalogues, we outline here their main statistical properties and the science that can be derived from them.
\begin{table*}[htbp!]
\newdimen\tblskip \tblskip=5pt
\caption[]{Brief summary of the statistical results of cross-matching the {PCNT} to external catalogues of radio sources, blazars, and Galactic cold cores. Only sources detected above the $4\,\sigma$ level outside the \GAL070\ Galactic mask are
considered here.}
\label{tb:summary}
\vskip -3mm
\footnotesize
\setbox\tablebox=\vbox{
\newdimen\digitwidth
\setbox0=\hbox{\rm 0}
\digitwidth=\wd0
\catcode`*=\active
\def*{\kern\digitwidth}
\newdimen\signwidth
\setbox0=\hbox{+}
\signwidth=\wd0
\catcode`!=\active
\def!{\kern\signwidth}
\newdimen\pointwidth
\setbox0=\hbox{.}
\pointwidth=\wd0
\catcode`?=\active
\def?{\kern\pointwidth}
\halign{\hbox to 6.0cm{#\leaders\hbox to 5pt{\hss.\hss}\hfil}\tabskip 1em&
\hfil#\hfil& \hfil#\hfil& \hfil#\hfil&
\hfil#\hfil& \hfil#\hfil& \hfil#\hfil&
\hfil#\hfil& \hfil#\hfil& \hfil#\hfil\tabskip 0em\cr
\noalign{\vskip 3pt\hrule \vskip 1.5pt \hrule \vskip 5pt}
\noalign{\vskip -1pt}
\omit\hfil Channel [GHz]\hfil& 30& 44& 70& 100& 143& 217& 353& 545& 857\cr
\noalign{\vskip 3pt\hrule\vskip 5pt}
Number of $\geq4\,\sigma$ detections& 1701& 1123& 1328& 2015& 2047& 1610& 943& 719& 798\cr
$4\,\sigma$ matched to CRATES (\%)& 70.4& 77.4& 80.0& 75.5& 68.9& 67.4& 57.3& 32.3& 22.4\cr
$4\,\sigma$ matched to GB6+PMN (\%)& 88.6& 90.8& 92.4& 91.3& 85.9& 83.8& 75.5& 60.9& 52.5\cr
$4\,\sigma$ matched to PGCC (\%)& 0.9& 0.8& 0.6& 1.6& 1.3& 7.3& 13.9& 18.8& 16.9\cr
$4\,\sigma$ matched to BZCAT5 (\%)& 54.0& 63.4& 64.0& 53.2& 50.0& 51.6& 47.1& 22.7& 11.8\cr
\noalign{\vskip 3pt\hrule\vskip 5pt}}}
\endPlancktablewide
\end{table*}
\subsubsection{Spectral indices and colour-colour plots} \label{sec:spindex}
The distribution of spectral indices in the {{PCNT}} is shown in Fig.~\ref{fig:spec_indx}.
We have estimated the
spectral indices only for those sources detected above the $4\,\sigma$ level in each pair of frequencies. We distinguish
between (mostly) Galactic sources inside the \textit{Planck}\ \GAL070\ Galactic mask and (probably) extragalactic sources outside the
mask. For the extragalactic sources, we again recover the high-frequency synchrotron steepening above 70\,GHz
anticipated by \cite{NEWPSstat} using WMAP-detected sources and confirmed in a series of papers \citep{planck2011-6.1,planck2012-VII,planck2013-p05,planck2014-a35,massardi11,massardi16}. As already found in the
analysis of the Planck Early Release Compact Source Catalogue \citep{planck2011-1.10} and confirmed later
with the PCCS and the PCCS2 \citep{planck2013-p05,planck2014-a35}, extragalactic sources with thermal emission
begin to appear only at 217\,GHz and above, a much higher frequency than expected before the launch of the
\textit{Planck}\ mission.
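The spectral indices shown in Fig.~\ref{fig:spec_indx} follow from the usual two-frequency power-law estimator, $\alpha = \log(S_2/S_1)/\log(\nu_2/\nu_1)$ with $S \propto \nu^{\alpha}$. A minimal sketch with illustrative flux densities:

```python
import numpy as np

def spectral_index(s1_jy, nu1_ghz, s2_jy, nu2_ghz):
    """Two-point spectral index alpha, with the convention S ~ nu**alpha."""
    return np.log(s2_jy / s1_jy) / np.log(nu2_ghz / nu1_ghz)

# Illustrative fluxes mimicking the synchrotron steepening above 70 GHz
alpha_low = spectral_index(1.00, 30.0, 0.98, 70.0)    # close to zero: flat
alpha_high = spectral_index(0.98, 70.0, 0.70, 143.0)  # clearly steeper
```

Applied per source and per adjacent frequency pair (keeping only $>4\,\sigma$ detections at both frequencies), this is the quantity histogrammed in the figure.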
From these analyses, we can conclude that the {{PCNT}} catalogue is an
excellent resource for studying the statistical properties of the non-thermal source population and the changes in
their emission properties with frequency at all the \textit{Planck}\ channels (a frequency range poorly observed in the
past). The possibility of observing the emission of the same source across such a wide frequency range allows us to apply
additional selection criteria in order to identify important types of source. For example,
selecting sources with flat-spectrum behaviour at relatively high frequencies (143 and 217\,GHz) will mainly provide
us with a blazar-AGN subsample, as was seen in Sect.~\ref{sec:BZCAT}.
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{Figures/figure9.pdf}
\caption{Normalized histograms of {{PCNT}} spectral indices for different pairs of \textit{Planck}\ frequencies.
Only sources detected above the $4\,\sigma$ level in each pair of frequencies are considered.
Sources within the \textit{Planck}\ \GAL070\ mask are shown in red, whereas the sources outside the quoted
Galactic mask appear in blue. Instead of the usual fixed-width bins, the histograms use \texttt{optBINS},
an optimal adaptive data-based binning method that adjusts itself to optimally reflect the details of the structure
of the underlying distribution
\citep{knuth}. }
\label{fig:spec_indx}
\end{figure*}
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{Figures/figure10.pdf}
\caption{Colour-colour plot of the {{PCNTb}} catalogue at a common frequency of 217\,GHz.
The red crosses show the 278 {{PCNTb}} sources inside the \textit{Planck}\ \GAL070\ mask, while the blue pluses show the 1146 {{PCNTb}} sources outside the same mask.
We can see the non-thermal and thermal source populations of the {{PCNTb}}; the non-thermal source population is dominant, as is expected from our selection criteria. }
\label{fig:colour-colour}
\end{figure}
Fig.~\ref{fig:colour-colour} shows the colour-colour plot at a common frequency of 217\,GHz for the sources in the {{PCNTb}}. This figure is analogous to the top panel of figure~26 in \cite{planck2014-a35}.
The red crosses show the 278 {{PCNTb}} sources inside the \textit{Planck}\ \GAL070\ mask, whereas the blue pluses show the 1146 {{PCNTb}} sources outside the same mask.
We can see the non-thermal and thermal source populations of the {{PCNTb}}. Sources inside the
\textit{Planck}\ \GAL070\ mask tend to have a thermal spectrum, whereas the sources outside the same mask are mainly non-thermal.
The non-thermal population is much more numerous than the thermal population, thanks to our selection criteria at 30 and 143\,GHz. This indicates that the {{PCNTb}} is a good multi-frequency catalogue of bright non-thermal sources.
At frequencies higher than 353\,GHz, we can see a different behaviour between the whole {{PCNT}} sample and the
{{PCNThs}}. In the case of the {{PCNT}}, most of the sources show clear thermal emission, which probably indicates
some kind of Galactic thermal contamination or the classical Eddington bias. However, this thermal emission
dominates only in roughly half of the {{PCNThs}}, and only for the 857-GHz channel. Taking into account that these
sources are visible in all the \textit{Planck}\ channels, the best interpretation in this particular case is that they
are bright local galaxies, where we are able to observe both the radio and thermal emission.
Within a search radius of $5^{\prime}$ and outside the \GAL070\ mask, there are 23 identifications\footnote{Out of 75 {{PCNThs}} sources outside the \GAL070\ mask.} of {{PCNThs}} sources with IRAS compact sources \citep{IRAS,IRAS_GCAT}.\footnote{Among them are several well known local galaxies, such as Cen~A, M82, M87, and the Sculptor Galaxy.}
\subsubsection{Galactic and thermal sources in the catalogues} \label{sec:Galactic}
Fig.~\ref{fig:spec_indx} shows, using different colours, sources outside and inside the \textit{Planck}\ \GAL070\ mask. This allows us to roughly distinguish between Galactic and extragalactic-dominated subsamples of the corresponding catalogues. In this section we briefly discuss the properties of the sources inside the \GAL070\ mask and quantify the impact of cold thermal sources that were unintentionally selected in our input sample.
The Galactic sources of the {{PCNT}}, as seen in Fig.~\ref{fig:spec_indx}, behave similarly
to the extragalactic ones below 143\,GHz, where sources emitting
thermally first appear. This early appearance
of thermal emission
is probably due to the fact that most of the Galactic sources are much
brighter than the extragalactic ones. Galactic sources that have a smooth transition between non-thermal- and
thermal-dominated channels are probably mostly planetary nebulae \citep[see][]{planck2014-XVIII} or supernova remnants \citep[see][]{planck2014-XXXI}. Furthermore, Galactic sources
showing a variation in their spectral behaviour around 100\,GHz can be potential candidates for studying CO line
emission.
This suggests that the spectral index plot can be used as a
stand-alone tool to distinguish Galactic sources in the catalogue.
In addition, further selection criteria, although less easily interpretable from a physical point of view, can be
devised based on a principal component analysis (PCA) of the SEDs (see Sect.~\ref{sec:PCA}).
As mentioned in Sect.~\ref{sec:cat0}, the use of the 143-GHz band as one of the selection channels for our input \textit{Planck}\ sample carries the risk of including bright thermal sources in the {{PCNT}}.
In some cases, these are extragalactic sources that show both non-thermal synchrotron spectrum at low frequencies and thermal emission at high frequencies; in some other cases, they can be purely thermal sources that are bright enough to be detected above our selection threshold at 143\,GHz.
In order to quantify the impact of thermal emission in the high-frequency channels of the {{PCNT}}, we have
looked for sources detected at ${\rm S/N}\geq 4$ at 143\,GHz that have no counterpart in the CRATES catalogue \citep{CRATES} within a search radius of 32\parcm3 (the \textit{Planck}\ FWHM at 30\,GHz). In order to focus on sources that are likely to be of
extragalactic origin,
we restrict our search to sources outside the \textit{Planck}\ \GAL070\ Galactic mask and that are not in the
PGCC \citep{planck2014-a37}. There are 494 sources in the {{PCNT}} that satisfy these criteria. For these, we use a very simple spectral-index criterion to identify sources with potential thermal-like emission, namely a spectral index between 143 and 217\,GHz of $\alpha_{143}^{217} > 1$. Of these 494 sources, 56 ($11.3\,\%$) have thermal spectra according to this criterion. This suggests that the degree of contamination by thermal sources in the whole {{PCNT}} at high Galactic latitudes should be relatively small. However, the situation changes significantly if we focus on the brightest sources at 857\,GHz. If we repeat the analysis considering only sources with flux density above 1\,Jy at 857\,GHz, we find that almost half of them (47 out of the 97 that have $S_{857} \geq 1\,$Jy and ${\rm S/N}_{143}\geq4$ at 143\,GHz, lie outside the \GAL070\ mask, and are not matched to CRATES) are thermal-like according to the $\alpha_{143}^{217} > 1$ criterion. This implies that the contamination from thermal sources is much more relevant in the bright part of the 857-GHz counts, and that it cannot be neglected even for regions outside the Galactic plane.
To a lesser extent, the same applies to the 545- and 353-GHz \textit{Planck}\ channels.
Source number count plots (not included in this paper for the sake of brevity) at the high-frequency \textit{Planck}\ channels ($\nu \geq 353$\,GHz) show a clear excess of sources above 1\,Jy with respect to the \cite{tucci11} model of radio source number counts.
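The two-point spectral-index criterion used above can be sketched in a few lines. This is only an illustrative snippet, not the actual pipeline code; the function name and the toy flux densities are hypothetical:

```python
import numpy as np

def spectral_index(s1, s2, nu1=143.0, nu2=217.0):
    """Two-point spectral index alpha, defined by S(nu) proportional to nu**alpha."""
    return np.log(s2 / s1) / np.log(nu2 / nu1)

# A thermal-like (rising) SED between 143 and 217 GHz satisfies alpha > 1.
assert spectral_index(1.0, 2.0) > 1.0
# A flat-spectrum radio source has alpha ~ 0 and is not flagged as thermal.
assert abs(spectral_index(1.5, 1.5)) < 1e-12
```

Any source whose flux density more than doubles between 143 and 217\,GHz exceeds the $\alpha_{143}^{217} > 1$ threshold, since $217/143 \approx 1.5$ and $\log 2/\log 1.5 > 1$.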
\subsubsection{Principal component analysis} \label{sec:PCA}
Colour-colour diagrams
and fits to certain spectral laws (a modified blackbody, for example, or a power law with a given spectral index) are useful tools for source
classification. However, the complexity and arbitrariness of the classification rules can grow very quickly when the number of channels (or colours) is large. In that case, statistical procedures such as principal component analysis \citep[PCA, see for example][]{PCA} can help to identify trends and correlations, to reduce the dimensionality of the data and to ease the burden of automatic classification of sources in a catalogue.
PCA is mathematically defined as an orthogonal linear transformation that transforms the data to a new coordinate system, such that the greatest variance through some projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on. In our case, the data for each source are the standardized (that is, mean subtracted and normalized to unit variance) flux densities from 30 to 857\,GHz, and the intuitive meaning of the principal components is that of `generalized colours' that identify the main spectral trends. Operationally, finding the principal components is equivalent to solving the eigen-problem for the covariance matrix of the data (in our case, the covariance matrix is a $9 \times 9$ matrix computed by cross-correlating the 29\,400 spectral energy distributions we have in the {{PCNT}}). If $\mathbf{X}$
is the $29\,400 \times 9$ matrix of standardized (normalized) flux densities of the catalogue, the principal components are given by
\begin{equation}
\mathbf{T} = \mathbf{X} \mathbf{W},
\end{equation}
\noindent
where $\mathbf{W}$ is a $9 \times 9$ matrix whose columns are the eigenvectors of $\mathbf{X}^{\sf T} \mathbf{X}$.
We perform PCA on the standardized (normalized) flux densities from 30 to 857\,GHz for all the 29\,400 targets in the input catalogue. In the standardization pre-processing step, the SED of each source is renormalized to unit variance.
This is because we are interested in the shapes of the SEDs, not in their individual amplitudes. After this pre-processing standardization we calculate the covariance matrix of the samples and obtain the principal components through the eigenvectors of the covariance matrix. We find
that the first two components account for $86\,\%$ of the sample variance. Adding a third principal component, we account for $91\,\%$ of the sample variance. This means that most of the spectral information contained in the nine frequencies of the catalogue can be encoded in just two or three numbers.
It is not always easy to find an intuitive interpretation for these generalized colours. Fig.~\ref{fig:eigenvecs} shows the
components, as a function of frequency, of the nine eigenvectors calculated from the {{PCNT}}. The first two components, which as we have just mentioned account for $86\,\%$ of the spectral information content, could be interpreted
as, roughly, a flat spectrum with some preponderance of radio emission (which was to be expected, considering our selection criteria at 30 and 143\,GHz), plus a component that peaks towards the high frequencies (probably thermal dust contamination).
The third vector suggests some kind of bump around 545\,GHz, which may be associated with a Galactic cold dust component with $T\simeq10\,$K (or warmer sources at $z>0$). The rest of the components are much more difficult to interpret, but carry much less information about the spectral behaviour of the catalogue.
The utility of the PCA can be seen in Fig.~\ref{fig:PCA2D}, which shows the distribution of the first two principal components for the whole catalogue (red dots).
We can now look for particular subsets of sources and see whether they lie in a restricted region of the diagram or not.
As an example, we have selected a group of bright Galactic sources by looking for coincidences between our catalogue and the PGCC \citep{planck2014-a37}.
Galactic cold clumps are expected to have purely thermal spectra.
There are 3150 matches within a $5^{\prime}$ search radius between our catalogue and the PGCC. If we also restrict ourselves to bright sources that are detected with ${\rm S/N}>4$ at 353, 545, and 857\,GHz in our catalogue,
we keep 1882 objects out of 29\,400.
The sources selected in this way are shown as blue dots in Fig.~\ref{fig:PCA2D}, where they appear strongly clustered in a thin, elongated structure centred around $(-2.32,0.14)$ in the principal component plane. This suggests that it should be possible to identify sources with strong thermal emission just by applying an automated classification algorithm in the space of the first few principal components.
The $k$-nearest neighbours algorithm \citep[$k$-NN,][]{knn1,knn2} is a non-parametric pattern recognition and classification method that provides an intuitive way to classify an $n$-dimensional sample of objects into two or more categories, starting from a training subsample such as the one we have described above. An object is classified by a majority vote of its neighbours, with the object being assigned to the class most common among its $k$ nearest neighbours ($k$ is a positive integer, typically small, and must be odd in order to avoid the possibility of a tie in the vote). We have applied the $k$-NN method, with $k=5$, to the whole catalogue, restricting the analysis to only the first two principal components.\footnote{We have also tried other values ($k=3$, 7, and 9) and the results do not change significantly.} According to the $k$-NN classifier, there are 4829 objects that should be classified in the same category of thermal-like sources. Fig.~\ref{fig:knndist} shows the distribution on the sky of these sources; for comparison, the \textit{Planck}\ $70\,\%$ Galactic mask is outlined in grey. As we can see, the automatically selected sources lie around the Galactic plane and the Magellanic Clouds. Fig.~\ref{fig:knnSED} shows the average standardized spectral energy distribution of the selected sources, where we can see an almost purely thermal spectrum with a minor contribution from synchrotron (or maybe free-free) emission at the lower frequencies. This result is in agreement with the physical interpretation we have given above for the first two principal components.
These basic examples should help to demonstrate the potential and the power of the PCA approach. Just by looking at two
principal components, instead of using a complicated set of colour-based rules (as in the standard approach), we
are able to identify -- and, thus, exclude -- sources showing a characteristic thermal dust spectrum. The
same approach could also easily be applied to the first three principal components instead of only the first two.
In general, this technique could be used to flag sources that have
extreme spectral behaviour before performing the statistical analysis of the catalogue of sources of greatest
interest for the purposes of each specific investigation (e.g. non-thermal ERSs, in this case).
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{Figures/figure11.pdf}
\caption{Components, as a function of frequency, of the nine loading vectors (eigenvectors) that map the normalized SEDs into the principal component space.}
\label{fig:eigenvecs}
\end{figure*}
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{Figures/figure12.pdf}
\caption{First two principal components for the whole input catalogue (red points) and a subset of sources selected for their strongly thermal spectra (blue dots).}
\label{fig:PCA2D}
\end{figure}
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{Figures/figure13.pdf}
\caption{Distribution on the sky of sources that were classified as thermal-like by the $k$-NN criterion on the first two principal components. The \textit{Planck}\ $70\,\%$ Galactic mask \GAL070\ is superimposed in grey for comparison. The blob of sources around $l = -79\pdeg5$, $b=-32\pdeg9$ corresponds to the Large Magellanic Cloud.
}
\label{fig:knndist}
\end{figure*}
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{Figures/figure14.pdf}
\caption{Average standardized spectral energy distribution of the thermal-like sources selected by the $k$-NN criterion on the first two principal components. }
\label{fig:knnSED}
\end{figure}
\section{The catalogue: access, content and usage} \label{sec:contents}
The {{PCNT}} catalogue
is available from the Planck Legacy Archive.\footnote{\url{http://pla.esac.esa.int/pla}}
The name of the catalogue file in the Planck Legacy Archive is \texttt{COM\_PCCS\_PCNT\_R2.00.fits}.
Fig.~\ref{fig:screenshot} shows a screenshot of the Planck Legacy Archive web interface to the {PCNT}.
The {{PCNT}}
contains the coordinates, flux densities, flux density errors, and MTXF S/N for 29\,400 sources. Here we summarize the catalogue contents.
\begin{itemize}
\item Source identification: \texttt{NAME} (e.g. {{PCNT}}).
\item Position: \texttt{GLON} and \texttt{GLAT} contain the Galactic coordinates, and \texttt{RA} and \texttt{DEC} give the same information in equatorial coordinates (J2000).
\item Flux density: the estimates of flux density for the nine \textit{Planck}\ frequencies (\texttt{Flux\_xxx}), in mJy, and their associated uncertainties (\texttt{Err\_Flux\_xxx}). The string \texttt{xxx} contains the frequency value.
\item MTXF signal-to-noise ratio for the nine frequencies: \texttt{SNR\_xxxGHz\_MTXF}.
\end{itemize}
To facilitate the usage and scientific exploitation of the catalogue, the {{PCNT}}
additionally provides seven flag columns:
\begin{itemize}
\item \texttt{PCNTb}, indicating sources belonging to the {{PCNTb}};
\item \texttt{PCNThs}, indicating sources belonging to the {{PCNThs}};
\item \texttt{Group\_Flag}, indicating sources that are found inside a 30-GHz beam area as indicated in Sect.~\ref{sec:blend};
\item \texttt{Matched\_to\_PGCC}, indicating sources that are matched, within a $5^{\prime}$ radius, to PGCC sources;
\item \texttt{Matched\_to\_CRATES}, indicating sources that are matched, within a
32\parcm3 radius, to CRATES sources;
\item \texttt{Matched\_to\_BZCAT5}, indicating sources that are matched, within a $7^{\prime}$ radius, to the BZCAT5 catalogue of blazars;
\item \texttt{from\_030\_input}, indicating sources that appear only at 30\,GHz in the input catalogue (see Sect.~\ref{sec:cat0} for further details).
\end{itemize}
It should be noted that there is no column that contains the coordinate uncertainties for each source. The errors in position are
inherited from the same Mexican hat wavelet technique used for the construction
of the PCCS2 and therefore can be computed using the same procedure as described in \cite{planck2014-a35}.
Additional information about the catalogue content and format can be found in the FITS file headers.
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{Figures/figure15.pdf}
\caption{Screenshot from the ESA Planck Legacy Archive {PCNT}\ portal.
}
\label{fig:screenshot}
\end{figure*}
\section{Conclusions} \label{sec:conclusions}
The Planck Multi-frequency Catalogue of Non-thermal (i.e. synchrotron-dominated) Sources observed between
30 and 857\,GHz by the ESA \textit{Planck}\ satellite mission has been produced using the full mission data. This is the
first fully multi-frequency \textit{Planck}\ catalogue of compact sources that covers all nine of the \textit{Planck}\ frequency
bands.
We constructed our catalogue starting from a set of candidates that are blindly detected with Mexican hat filtering
on the \textit{Planck}\ full-sky maps at 30 and 143\,GHz. From this input sample we selected those source
candidates with signal-to-noise ratio ${\rm S/N}>3$. The number of source candidates in this \textit{Planck}\ input sample is
29\,400. We then ran a multi-frequency filtering (the Matrix Multi-filters) on the nine \textit{Planck}\
frequency channels. The resulting all-sky {PCNT}\ catalogue lists the positions, flux densities, and flux density uncertainties of 29\,400 compact
sources at all nine \textit{Planck}\ frequencies. The {{PCNT}} flux densities agree with previous \textit{Planck}\
observations in the PCCS2 \citep{planck2014-a35}, but our
catalogue goes deeper in flux density limits from 30 to 143\,GHz (down to around 100--300\,mJy). Moreover, the
{{PCNT}} is a band-filled multi-frequency catalogue over the whole \textit{Planck}\ frequency range. Therefore,
it is ideal for studying SEDs between 30 and 857\,GHz for thousands of compact
sources and investigating the statistics of their properties. An indication of the reliability of this multi-frequency
catalogue for statistical studies is given by the preliminary number counts calculated with sources in the {{PCNT}}
and detected outside the Galactic \GAL070\ mask;
these are in very good agreement with the number counts predicted by the C2Ex cosmological
evolution model of ERSs described in \cite{tucci11}.
Although the selection criteria chosen for our \textit{Planck}\ input catalogue tend to favour the selection of
non-thermal, i.e. synchrotron-dominated, sources, the {{PCNT}} also selects many bright dusty compact sources,
the majority of them of Galactic origin. In order to provide a purer set of non-thermal sources, we also define a
subsample of the {{PCNT}} whose components are detected with ${\rm S/N}>4$ at both 30 and 143\,GHz. This Bright Planck
Multi-frequency Catalogue of Non-thermal Sources ({{PCNTb}}) contains 1424 sources, of which 1146 are found to
lie outside the \textit{Planck}\ $70\,\%$ \GAL070\ Galactic mask. Remarkably, 913 out of the 1424 sources of the
{{PCNTb}} have counterparts within 7$^\prime$ (the \textit{Planck}\ FWHM at 143\,GHz) in the BZCAT5 catalogue of blazars
\citep{massaro15}.
Of these matches, 832 lie outside the \textit{Planck}\ \GAL070\ Galactic mask.
Thus, the {{PCNTb}} contains not only flat-spectrum ERSs (mainly classified as blazars), but also
a minority of Galactic sources, with spectra dominated by thermal dust emission.
Finally, we also flag the high-significance subsample ({{PCNThs}}), a subset of 151 sources that are
detected with ${\rm S/N}> 4$ in all nine \textit{Planck}\ channels. The {{PCNThs}} contains high S/N SEDs
between 30 and 857\,GHz for 72 known blazars \citep{massaro15}, 60 of which lie outside the \textit{Planck}\ \GAL070\
Galactic mask.
We release for the scientific community the full {{PCNT}} and the bright subsample {{PCNTb}} catalogues, and
also the {{PCNThs}}, with the corresponding sources flagged in the {{PCNTb}}. We also include in the catalogues nine
columns containing the results of a principal component analysis of the SEDs in our catalogue. We suggest that the
principal component analysis will prove useful for developing automated source classification criteria and,
as an example, we show that sources with a characteristic thermal spectrum are easily identified by using
only a simple classifier based on the first two principal components.
\begin{acknowledgements}
The Planck Collaboration acknowledges the support of: ESA; CNES, and CNRS/INSU-IN2P3-INP (France); ASI, CNR,
and INAF (Italy); NASA and DoE (USA); STFC and UKSA (UK); CSIC, MINECO, JA, and RES (Spain); Tekes, AoF, and CSC
(Finland); DLR and MPG (Germany); CSA (Canada); DTU Space (Denmark); SER/SSO (Switzerland); RCN (Norway); SFI
(Ireland); FCT/MCTES (Portugal); and ERC and PRACE (EU). A description of the Planck Collaboration and a list of
its members, indicating which technical or scientific activities they have been involved in, can be found at
\href{http://www.cosmos.esa.int/web/planck/planck-collaboration}{http://www.cosmos.esa.int/web/planck/planck-collaboration}.
This research has made use of data from the OVRO
40-m monitoring program, which is supported in part by NASA grants NNX08AW31G, NNX11A043G, and
NNX14AQ89G and NSF grants AST-0808050 and AST-1109911. We also thank the Spanish MINECO for partial financial
support under project AYA2015-64508-P and funding from the European Union’s Horizon 2020 research and innovation
programme (COMPET-05-2015) under grant agreement number 687312 (RADIOFOREGROUNDS). This work was supported in part
by the ESAC Science Faculty award ESAC-362/2015.
This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
\end{acknowledgements}
\bibliographystyle{aat}
\section{Introduction}
Although plenty of evidence exists for dark matter (DM) in the Universe, direct searches have so far not observed an unambiguous signal \cite{Klasen20151}. In particular, for low-mass DM particles with masses of 1--10\,GeV, large parts of the accessible parameter space of the spin-independent cross-section for scattering on nuclei remain untested, despite many well-motivated theoretical models for light dark matter, such as asymmetric DM \cite{asymmReview}.
The CRESST experiment (Cryogenic Rare Event Search with Superconducting Thermometers) currently provides the best sensitivity for scatterings of DM particles with masses below $\sim$2\,GeV \cite{Angloher:2014myn,Angloher:2015ewa}. The detectors operated in CRESST-II phase 2 reached thresholds down to $\sim$300\,eV. These recent results showed that the nuclear-recoil energy threshold is the main driver for low-mass DM searches. For particles of mass $m_\chi$ scattering on a nucleus of mass number $A$, a maximum energy transfer of
\begin{equation}
E_{max}\approx 130 \left(\frac{m_\chi}{1\,\mathrm{GeV/c}^2}\right)^2\left(\frac{100}{A}\right)\,\mathrm{eV}
\end{equation}
is expected under standard assumptions \cite{Klasen20151}. Hence, for the next phase of the experiment (CRESST-III), which is dedicated to low-mass DM search, detectors with an extremely low energy threshold are necessary. For CRESST-III phase 1, a straightforward approach was chosen: the CRESST-II detectors with a mass of $\sim$300\,g are scaled down in size to $\sim$24\,g. Thereby, an improvement of the signal-to-noise ratio of up to a factor of $\sim$10 is expected from basic phonon physics \cite{Probst:1995fk}. In this paper, the design and performance of a prototype detector is presented, which shows that nuclear-recoil energy thresholds of $\sim$50\,eV are in reach.
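To see why such low thresholds matter, one can evaluate the quoted scaling directly. The following is a simple numerical sketch of the formula above; the function name is illustrative:

```python
def e_max_ev(m_chi_gev, A):
    """Approximate maximum recoil energy (in eV) transferred by a DM
    particle of mass m_chi (in GeV) to a nucleus of mass number A,
    using the scaling quoted in the text (light DM, standard assumptions)."""
    return 130.0 * m_chi_gev**2 * (100.0 / A)

# Tungsten (A = 184) is the heaviest nucleus in CaWO4:
e1 = e_max_ev(1.0, 184)   # roughly 71 eV for a 1 GeV particle
e2 = e_max_ev(0.5, 184)   # roughly 18 eV for a 0.5 GeV particle
assert 70.0 < e1 < 71.0
assert 17.0 < e2 < 18.0
```

With a maximum energy transfer of only tens of eV on tungsten, a sub-GeV particle is invisible to a detector with a few-hundred-eV threshold, which motivates the $\sim$50\,eV design goal.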
\section{Detector design}
The CRESST-III detector module consists of a 24\,g CaWO$_4$ crystal ($20\times20\times10$\,mm$^3$) as target material, called the phonon detector (PD), and a silicon-on-sapphire disc ($20\times20\times0.4$\,mm$^3$) as a light absorber (see Fig. \ref{fig:CRESST3_detector}), called the light detector (LD). This two-channel readout provides active background discrimination: while for electron/gamma events a fraction of $\sim$5\,\% of the deposited particle energy is converted into scintillation light, for nuclear recoils this fraction is reduced due to light quenching \cite{birks1964theory}. Depending on the recoiling nucleus, only 2--12\,\% of the light \cite{Strauss:2014ab} is emitted compared to electron/gamma events of the same energy. Both detectors are equipped with transition-edge sensors (TESs) realized by thin W films which are operated in the transition between the superconducting and the normal-conducting state \cite{Probst:1995fk}. The resistance change caused by a phonon-induced temperature rise in the crystal is measured with a SQUID system \cite{Angloher:2012vn}. While the design of the LD TES is unchanged with respect to earlier CRESST measuring campaigns, the TES of the PD was significantly modified and optimized to meet the requirements of low-mass DM search. The detectors are held by CaWO$_4$ sticks (three each), a design which was already successfully implemented in CRESST-II phase 2 \cite{Angloher:2014myn,strauss:2014part1}. In addition, a novel concept for an instrumented detector holder is introduced: the sticks holding the PD are equipped with TESs which realize a veto against events related to the crystal support.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{detector_scheme.pdf}
\caption{Schematic view of the CRESST-III detector module.}
\label{fig:CRESST3_detector}
\end{figure}
\subsection{CaWO$_4$ cryogenic detector}
The TES of the CaWO$_4$ detector is designed such that the detector is operated in the calorimetric mode \cite{Probst:1995fk}, i.e. the sensor integrates over the impinging non-thermal phonons created by a particle interaction in the crystal. This is achieved by a $2.4\times0.85$\,mm$^2$ W film (thickness 200\,nm) as thermometer, which is weakly coupled to the heat bath (copper detector holder) by a thermal link ($\sim$100\,pW/K at 10\,mK). The thermal coupling is realized by a Au stripe ($1.0\times0.02$\,mm$^2$, thickness 20\,nm) which is sputtered onto the crystal. A scheme of the TES is shown in Fig. \ref{fig:TES}. The collection area of the sensor for phonons can be increased, without increasing the heat capacity of the thermometer, by superconducting Al phonon collectors. Incident phonons break Cooper pairs in the Al; the resulting quasiparticles can diffuse into the W film, couple to its electron system, and thereby increase the temperature signal \cite{Angloher2016}. A separate ohmic heater (Au film) is used to inject artificial heater pulses for calibration and stabilization \cite{Angloher:2012vn}.
The CaWO$_4$ crystal (\textit{TUM56f}) used for the prototype module was grown in-house at the Technische Universit\"at M\"unchen (TUM) \cite{erb}. It is expected to have similar properties in terms of radiopurity and optical quality as the crystal TUM40 operated in CRESST-II phase 2 \cite{strauss:2014part1,strauss:2014part2}, since it was produced under similar conditions. A mean background level of 3.51\,events/[kg\,keV\,day] at 1--40\,keV was observed with TUM40, which is about an order of magnitude better than previously used commercial CaWO$_4$ crystals.
The surfaces of the crystal are roughened in order to increase the scintillation light output. The dips where the sticks touch (see Fig. \ref{fig:CRESST3_detector}) are polished to optical quality to avoid thermal stress at the contact points.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{TES_PD_V1.pdf}
\caption{Schematic view of the TES of the CaWO$_4$ PD for CRESST-III. The W-film is equipped with Al-phonon collectors and a Au-stripe as weak thermal link. A separated ohmic heater is used to apply artificial pulses for calibration. The electrical and thermal contacts to the holder are provided by Al and Au wire bonds (diameter 25\,$\mu$m). }
\label{fig:TES}
\end{figure}
\subsection{Instrumented detector holder}
The CaWO$_4$ sticks (diameter 2.5\,mm, length $\sim$12\,mm) together with the scintillating and reflective foil surrounding the detectors provide a fully scintillating inner detector housing. This is crucial for the rejection of nuclear recoils from surface-alpha events which can mimic DM particle recoils. The corresponding MeV alpha particles produce enough light in the scintillating material to veto these dangerous backgrounds with high efficiency \cite{strauss:2014part1}.
Phonons from energy depositions in the CaWO$_4$ sticks are, to a certain extent, transmitted via the point-like contact to the PD, which results in a degraded signal in its TES. In particular, nuclear recoils from surface-alpha events ($E=\mathcal{O}$(10--100\,keV)) occurring at the outward sides of the sticks might be detected as low-energy depositions of $\lesssim1$\,keV in the PD. For extremely low energy thresholds, as projected for CRESST-III, such events might limit the sensitivity. Furthermore, possible stress-relaxation events related to the holding of the crystal might appear at the lowest energies.
To efficiently reject these event classes, the sticks are instrumented with TESs. Standard LD TESs (W-film area $0.30\times0.08$\,mm$^2$) are evaporated onto Si carriers ($3\times3.5\times0.4$\,mm$^3$) which are glued onto the CaWO$_4$ sticks. Consequently, the full energy deposition is measured by the stick TES with a coincident (degraded) signal in the PD, and vice versa. The ratio of both signals can be used for a discrimination of any type of event occurring in the sticks. To reduce the number of readout channels, the three TESs of the sticks holding the PD are connected in parallel to one SQUID\footnote{Every stick TES has an individual ohmic heater for calibration and stabilization.}.
\section{Results}
The CRESST-III prototype detector was operated at $\sim$10\,mK in a dilution refrigerator at MPI Munich (cryostat I, run127). In a test measurement with a 3-channel readout (PD, LD, and sticks), an exposure of 69.5\,g-days was acquired. A $^{55}$Fe calibration source was placed inside the detector housing for an energy calibration of the PD. The observed overall event rate of about 5\,Hz allows stable operation above ground, but affects the performance of the detectors (see below). In the following we focus on the performance of the CaWO$_4$ PD, in particular its energy threshold, and on the functionality of the novel instrumented detector holder.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{calibration_spectrum_v1.pdf}
\caption{Main frame: Calibration spectrum from a $^{55}$Fe source recorded with the CRESST-III prototype module. Due to a high pile-up rate in this measurement above ground, the resolution at $E\sim$6\,keV is slightly enhanced compared to $E=0$ (see text). Insert: Energy distribution of artificial 5.90\,keV template pulses summed on recorded baseline samples (\textit{empty baselines}) determined by a template fit. The width of the distribution yields a 5$\sigma$ energy threshold of $E_{th}=(190.6\pm5.2)\,$eV.}
\label{fig:calibration spectrum}
\end{figure}
The acquired energy spectrum is shown in Fig. \ref{fig:calibration spectrum} with the clearly identified $^{55}$Mn peaks at 5.9\,keV (K$_\alpha$) and 6.5\,keV (K$_\beta$) from the $^{55}$Fe source. The resolution at 5.9\,keV is $\sigma = (138.2\pm3.5)$\,eV which is expected to be enhanced compared to energies close to the threshold due to the high overall event rate\footnote{The high pile-up rate shifts the baseline of the pulses with time. Due to a non-linear detector response, a varying baseline negatively affects the resolution, increasingly at higher energies. For low event rates, only a minor energy dependence of the resolution is observed \cite{Angloher:2015ewa}. }.
To derive the energy threshold of the prototype detector, the \textit{empty baseline method} is used which is the default method to validate the CRESST analysis \citep{Angloher:2014myn,Angloher:2015ewa}. Empty baselines, i.e. samples without any triggered pulses, are periodically acquired during data-taking. These samples are fitted with template pulses (see Fig. \ref{fig:template}) to derive the relevant noise level of the baseline. For technical reasons the template pulse of a certain pulse height is summed to the empty baselines\footnote{The template fit optimizes pulse height and onset; for empty baselines the onset is not constrained and consequently the fit fails to reproduce the correct noise level. To fix the onset, a template of arbitrary (but fixed) pulse height is summed to the pulses. The choice of the pulse height does not influence the result of the baseline noise. }. The width of the reconstructed pulse height corresponds to the noise at $E=0$. Fig. \ref{fig:calibration spectrum} (inset) shows the results of the empty baseline method fitted by a Gaussian. A baseline noise of $\sigma=(9.79\pm0.26)\,$mV is derived which corresponds to $\sigma_e=(38.1\pm1.0)\,$eV and a 5$\sigma$ energy threshold of $E_{th}=(190.6\pm5.2)\,$eV.
In the much quieter CRESST setup, a baseline-noise level of typically $\sigma=1.5$--$3.0$\,mV is observed. For the CRESST-III prototype detector presented here, this corresponds to an energy threshold of 29--59\,eV.
The template pulse (from 5.9\,keV $^{55}$Fe pulses) is fitted by a two-component pulse model \cite{Probst:1995fk} to validate the calorimetric operation of the detector. The fit in Fig. \ref{fig:template} demonstrates that the template pulse can be well described by a dominant non-thermal phonon component, which decays with the intrinsic time constant of the TES, $\tau_{int}=12.96$\,ms, and a thermal-phonon component, which decays with the thermal decay time $\tau_t=201.2$\,ms, set by the thermal coupling of the crystal to the heat bath. Both constituents rise with the lifetime $\tau_n=1.27\,$ms of the non-thermal phonons in the detector. This is clear proof that the detector works as a calorimeter (for details see \cite{Probst:1995fk}).
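A minimal numerical sketch of this two-component model is given below, using the time constants quoted above. The onset and the amplitudes are arbitrary, and the function name is illustrative; this is not the fit code used for the analysis:

```python
import numpy as np

def pulse_model(t, t0, A_n, A_t, tau_n=1.27, tau_in=12.96, tau_t=201.2):
    """Two-component pulse model (Probst et al.): both components rise with
    the non-thermal phonon lifetime tau_n; the non-thermal part decays with
    the intrinsic TES time constant tau_in, the thermal part with the
    thermal relaxation time tau_t (all times in ms)."""
    dt = np.asarray(t, dtype=float) - t0
    out = np.zeros_like(dt)
    on = dt > 0
    out[on] = (A_n * (np.exp(-dt[on] / tau_in) - np.exp(-dt[on] / tau_n))
               + A_t * (np.exp(-dt[on] / tau_t) - np.exp(-dt[on] / tau_n)))
    return out

# Illustrative pulse: onset at 5 ms, arbitrary amplitudes.
t = np.linspace(0.0, 100.0, 2001)
p = pulse_model(t, t0=5.0, A_n=1.0, A_t=0.1)
assert p.max() > 0.0
assert 5.0 < t[np.argmax(p)] < 15.0   # peak a few tau_n after onset
```

The fast rise and the slow thermal tail of the fitted template in Fig. \ref{fig:template} correspond to the two terms of this model.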
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{fitted_pulse_v1.pdf}
\caption{Pulse template from 5.9\,keV pulses from a $^{55}$Fe source acquired with the CRESST-III detector prototype (solid green). It is fitted (solid black) with the two-component pulse model (dashed and dotted black) developed in \cite{Probst:1995fk}. The resulting rise and decay times verify that the device is operated in the calorimetric mode (see text). The slightly tilted baseline originates from the high event rate in the measurement. }
\label{fig:template}
\end{figure}
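A minimal sketch of the two-component model, assuming the standard difference-of-exponentials form of \cite{Probst:1995fk} with illustrative amplitudes and the time constants quoted above:

```python
import math

# Sketch of the two-component pulse model (difference-of-exponentials form).
# Amplitudes A_n, A_t are illustrative; time constants are the fitted values (ms).
TAU_N, TAU_INT, TAU_T = 1.27, 12.96, 201.2  # rise, non-thermal decay, thermal decay

def pulse(t, A_n=1.0, A_t=0.1):
    """Two-component pulse; both components rise with the phonon lifetime tau_n."""
    if t < 0:
        return 0.0
    nonthermal = A_n * (math.exp(-t / TAU_INT) - math.exp(-t / TAU_N))
    thermal = A_t * (math.exp(-t / TAU_T) - math.exp(-t / TAU_N))
    return nonthermal + thermal

# The pulse peaks between the rise time and the dominant decay time:
t_peak = max(range(0, 400), key=lambda i: pulse(i * 0.1)) * 0.1
print(t_peak)  # a few ms
```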
The CRESST-III prototype module was operated for the first time in an instrumented detector holder. A detailed analysis of this system is beyond the scope of this paper; however, a few general aspects of its functionality are given here. All three CaWO$_4$ sticks equipped with a TES could be operated stably, and an energy threshold of $\sim$1\,keV was achieved\footnote{A variation of about 20\% in the signal-to-noise ratio among the 3 sticks is observed.}. In Fig. \ref{fig:iStick}, a three-fold coincident event is shown: an energy deposition of $(95\pm10)\,$keV in one of the sticks, probably originating from an electron/gamma event. Besides the signal in the stick TES (brown) and the corresponding scintillation light output of the CaWO$_4$ stick measured by the LD (red), a degraded signal of $(2.6\pm0.1)$\,keV is detected in the PD TES. The ratio of the energy deposition in the stick to the degraded energy signal in the PD TES yields a degradation factor $D\sim$37 for signals transmitted through the stick-crystal interface. The corresponding light signal originating from the electron/gamma event in the stick ($\sim$95\,keV) is accordingly higher by $D$ compared to an electron/gamma event of the same PD pulse height in the CaWO$_4$ crystal. Therefore, the light signal alone can be used for an efficient discrimination of electron/gamma events occurring in the sticks. This event rejection was demonstrated in CRESST-II phase 2 \cite{strauss:2014part1}. For the rejection of nuclear-recoil backgrounds (e.g., from surface-alpha reactions) and stress relaxations (no light signal), however, this method fails.
\begin{figure}
\centering
\includegraphics[width=.7\textwidth]{iStick_pulses_v1.pdf}
\caption{Three-fold coincident pulses of a $\sim$95\,keV electron/gamma event occurring in one of the sticks holding the CaWO$_4$ crystal. In the PD, a degraded signal is observed which corresponds to an energy of $\sim$2.6\,keV. The scintillation light output of the stick is recorded with the LD. }
\label{fig:iStick}
\end{figure}
However, using the instrumented detector holder, such events can be rejected independently of the signal in the LD. The lowest energy deposition in one of the sticks that produces a PD signal above threshold (190.6\,eV) is $\sim$7.1\,keV, which is well above the threshold (${>}10\sigma$) of the stick TESs. Hence, an efficient rejection of holder-related events down to the energy threshold of the PD is possible.
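The quoted degradation factor and the visibility of stick events in the PD follow from simple arithmetic on the numbers above; a minimal sketch:

```python
# Consistency check of the quoted degradation factor (values from the text).
E_stick = 95.0        # keV deposited in the stick
E_pd_degraded = 2.6   # keV seen in the PD TES
D = E_stick / E_pd_degraded
print(round(D))       # ~37

# Lowest stick energy that still gives a PD signal above threshold:
E_threshold_pd = 190.6e-3          # keV (5-sigma PD threshold)
E_min_stick = D * E_threshold_pd   # ~7 keV, consistent with the observed ~7.1 keV
print(round(E_min_stick, 1))
```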
\section{Conclusion and Outlook}
A prototype CRESST-III module has successfully been tested in a cryogenic setup above ground. Despite rather high cosmogenic and environmental backgrounds as well as a higher electronic noise level (by a factor of 3-6 compared to typical channels in the CRESST setup), a nuclear-recoil energy threshold of $E_{th}=(190.6\pm5.2)\,$eV was reached with the 24\,g CaWO$_4$ crystal. This is the lowest threshold reported for direct DM search experiments. The result demonstrates that the main design goal of CRESST-III phase 1, namely a threshold of $E_{th}=100$\,eV \cite{Angloher:2015eza}, should be easily achieved when the detectors are operated in the low-noise CRESST setup.
It was verified that the CRESST-III PD with its dedicated TES works in the calorimetric operation mode with a main pulse decay time of $\tau_{int}\sim$13\,ms. Furthermore, a novel instrumented detector holder was implemented and successfully operated. The CaWO$_4$ sticks are equipped with TESs which provide a veto against the different types of events originating from the holding structure of the CaWO$_4$ target crystal. The results demonstrate an efficient rejection capability for these backgrounds down to the PD threshold.
The detector prototype presented in this paper matches all requirements for low-mass DM search with CRESST-III phase 1. A dedicated study \cite{Angloher:2015eza} shows that with 10 such detectors operated for $\sim$1\,year, significant new parameter space in the spin-independent DM particle-nucleus cross section can be probed, extending to masses of $\sim$0.1\,GeV/c$^2$. E.g., at a DM particle mass of 2\,GeV/c$^2$, a sensitivity improvement by $\sim$2 orders of magnitude to a cross section of $6\cdot10^{-6}$\,pb is expected.
At the time of writing, 10 CRESST-III detector modules were assembled and mounted in the CRESST setup at the LNGS (Laboratori Nazionali del Gran Sasso) in Italy. Besides 7 CaWO$_4$ crystals of CRESST-II quality \cite{strauss:2014part2}, 3 CaWO$_4$ crystals of improved quality are installed. The latter were grown in the crystal growth facility at TUM from raw material which was purified by an extensive chemical treatment. First data from CRESST-III phase 1 are expected in summer 2016. The presented detector design is also suited for the second phase of CRESST-III, in which $\sim$100 CaWO$_4$ crystals (24\,g each) of improved radiopurity \cite{Angloher:2015eza} are planned to be used. This future experiment will extend the sensitivity to low-mass DM particles by another 2 orders of magnitude and reach cross sections close to those of coherent neutrino-nucleus scattering \cite{Guetlein2015}.
\section*{Acknowledgments}
\begin{footnotesize}
This research was supported by the DFG cluster of excellence: “Origin and Structure of the Universe”, the DFG “Transregio 27: Neutrinos and Beyond”, the “Helmholtz Alliance for Astroparticle Physics” and the “Maier-Leibnitz-Laboratorium” (Garching).
\end{footnotesize}
\section*{References}
\section{Introduction}
Main-sequence stars with outer convection zones have long displayed
a remarkably universal dependence of their normalized chromospheric
activity on their normalized rotation rate.
This dependence is evident over a broad range of activity indicators
including X-ray, H$\alpha$, and, in particular, the normalized chromospheric
Ca~{\sc ii}~H+K line emission, $R'_{\rm HK}$ \citep[e.g.,][]{Vil84,Noyes84}.
To compare late-type stars of different spectral types, these and
subsequent investigators have normalized the rotation period
$P_{\rm rot}$ by the star's convective turnover time $\tau$,
as determined from conventional mixing length theory.
This step is obviously model-dependent, but different prescriptions for
$\tau$ as a function of $B-V$ all have in common that $\tau$ increases
monotonically with $B-V$.
With this normalization, the rotation--activity relations of stars of
different spectral type collapse onto a universal curve.
Empirically, the most useful prescription for the function $\tau(B-V)$
is one that minimizes the scatter of $R'_{\rm HK}$ as a function of
$\tau/P_{\rm rot}$, i.e., the {\em inverse} Rossby number.
For $\tau/P_{\rm rot}\ll1$ (slow rotation), the activity indicator
$R'_{\rm HK}$
increases approximately linearly with $\tau/P_{\rm rot}$, but
saturates for $\tau/P_{\rm rot}\gg1$.
In this Letter, we focus on a new behavior for values of $\tau/P_{\rm rot}$
that are smaller than what was usually considered in earlier investigations.
In this regime, \cite{GBCSH17} found that $R'_{\rm HK}$ increases
with decreasing values of $\tau/P_{\rm rot}$.
The same trend is reproduced when using the earlier $R'_{\rm HK}$
values of \cite{Giam2006}, taken at somewhat higher spectral resolution,
where the effects of color-dependent contamination from the line wings
are smaller.
Calibration uncertainties were also shown to be small.
The unconventional scaling of $R'_{\rm HK}$ with $\tau/P_{\rm rot}$
can be associated with a theoretically
predicted increase in {\em differential} rotation (DR) at Rossby
numbers somewhat above the solar value, i.e., for slower rotation
in the normalized sense.
This is the regime of antisolar DR (slow equator, fast poles).
The associated increase of magnetic energy with decreasing rotation
rate was first noticed by \cite{KKKBOP15}; see their Figure~12(b).
The sign reversal of DR, however, has a much longer history and goes
back to early work by \cite{Gil77}.
More recently, with the advent of realistic high-resolution simulations
of solar/stellar dynamos, it became evident that dynamo cycles could
only be obtained at rotation rates that are about three times faster
than that of the Sun \citep{BMBBT11}.
Later, \cite{GYMRW14} found hysteresis behavior in the transition from
solar-like to antisolar-like DR as a function of stellar rotation rate.
Solar-like DR could then be obtained for initial conditions with rapid
rotation.
This led \cite{KKB14} to speculate that the Sun might have inherited
its solar-like DR with equatorward acceleration and slow poles from its
youth when it was rotating more rapidly.
However, subsequent models with dynamo-generated magnetic fields by
\cite{FF14} did not confirm the existence of hysteresis behavior.
Thus, at the solar rotation rate, simulations do indeed produce
antisolar DR.
This is a problem of all solar dynamo simulations to date, but it may be
hoped that the qualitative trends found by \cite{KKKBOP15} would still
hold for the Sun, but at slightly rescaled rotation rates.
The present work supports the prediction by \cite{KKKBOP15} of a
reversed trend in the rotation--activity diagram at very low values
of $\tau/P_{\rm rot}$.
The purpose of this Letter is to compare the new data of \cite{GBCSH17}
with those of other stars, notably those of the Mount Wilson HK project
\citep{Bal+95}\footnote{\url{http://www.nso.edu/node/1335}}.
We focus here particularly on the main sequence stars of \cite{BMM17}
(hereafter BMM) and \cite{SB99} (hereafter SB), for which cyclic
dynamo properties have been analyzed in detail.
Many of those stars have two cycle periods, which fall into one of two
classes in diagrams showing the rotation-to-cycle-period-ratio versus
$R'_{\rm HK}$ or age.
These properties give us a perspective on the stars' evolutionary state
in a broader context.
For the stars of the {\em Kepler} sample of \cite{GBCSH17}, the time
series are still too short, so no information about cyclic activity
exists as yet.
However, based on earlier simulations, we suggest that those stars can
exhibit chaotic variability in $R'_{\rm HK}$ by up to 0.35 dex that
might be detectable over longer time spans.
\begin{table}[t!]\caption{
Sample of solar-like {\em Kepler} stars of \cite{GBCSH17}.
}\vspace{12pt}\centerline{\begin{tabular}{lrccrcccc}
\# & S$\;\;$ & $B$--$V$ & $T_{\rm eff}$ & $\tau\;$ & $P_{\rm rot}$ & $P_{\rm rot}^\ast$ &
$\!\!\!\!\!\log\bra{R'_{\rm HK}}\!\!\!$ & age \\
\hline
A & 603 & 0.55 & 6091 & 6.4 & $16.6$ & 17.3 & $-4.74$ & 3.7 \\
B & 785 & 0.66 & 5757 & 12.6 & $25.4$ & 24.8 & $-4.82$ & 4.2 \\
C & 801 & 0.68 & 5692 & 13.7 & $20.8$ & 25.7 & $-4.95$ & 2.8 \\
D & 945 & 0.63 & 5856 & 10.8 & $24.3$ & 23.2 & $-4.80$ & 4.3 \\
E & 958 & 0.62 & 5890 & 10.2 & $23.8$ & 22.6 & $-4.89$ & 4.4 \\
F & 965 & 0.72 & 5564 & 15.9 & $26.3$ & 27.4 & $-4.86$ & 3.7 \\
G & 969 & 0.63 & 5856 & 10.8 & $25.7$ & 23.2 & $-5.06$ & 4.8 \\
H & 991 & 0.64 & 5823 & 11.4 & $21.6$ & 23.7 & $-4.84$ & 3.4 \\
I & 1089 & 0.63 & 5856 & 10.8 & $24.5$ & 23.2 & $-4.97$ & 4.4 \\
J & 1095 & 0.61 & 5923 & 9.7 & $22.6$ & 22.0 & $-4.73$ & 4.2 \\
K & 1096 & 0.62 & 5890 & 10.2 & $19.5$ & 22.6 & $-4.86$ & 3.1 \\
L & 1106 & 0.65 & 5790 & 12.0 & $28.4$ & 24.3 & $-4.93$ & 5.3 \\
M & 1212 & 0.73 & 5530 & 16.4 & $24.7$ & 27.8 & $-4.86$ & 3.3 \\
N & 1218 & 0.64 & 5823 & 11.4 & $19.4$ & 23.7 & $-4.78$ & 2.8 \\
O & 1252 & 0.59 & 5988 & 8.5 & $20.3$ & 20.7 & $-4.72$ & 3.9 \\
P & 1255 & 0.63 & 5856 & 10.8 & $24.2$ & 23.2 & $-4.82$ & 4.3 \\
Q & 1289 & 0.72 & 5564 & 15.9 & $23.8$ & 27.4 & $-4.88$ & 3.1 \\
R & 1307 & 0.77 & 5408 & 18.2 & $22.4$ & 29.2 & $-4.95$ & 2.5 \\
S & 1420 & 0.59 & 5988 & 8.5 & $24.8$ & 20.7 & $-4.79$ & 5.5 \\
$\alpha$ & 724 & 0.63 & 5856 & 10.8 & --- & 23.2 & $-4.79$ & $4^\ast$ \\
$\beta$ & 746 & 0.67 & 5725 & 13.1 & --- & 25.2 & $-4.89$ & $4^\ast$ \\
$\gamma$ & 770 & 0.64 & 5823 & 11.4 & --- & 23.7 & $-4.80$ & $4^\ast$ \\
$\delta$ & 777 & 0.63 & 5856 & 10.8 & --- & 23.2 & $-4.90$ & $4^\ast$ \\
$\epsilon$ & 802 & 0.68 & 5692 & 13.7 & --- & 25.7 & $-4.95$ & $4^\ast$ \\
$\zeta$ & 829 & 0.59 & 5988 & 8.5 & --- & 20.7 & $-4.95$ & $4^\ast$ \\
$\eta$ & 1004 & 0.72 & 5564 & 15.9 & --- & 27.4 & $-5.02$ & $4^\ast$ \\
$\theta$ & 1033 & 0.57 & 6091 & 7.4 & --- & 19.2 & $-4.74$ & $4^\ast$ \\
$\iota$ & 1048 & 0.65 & 5790 & 12.0 & --- & 24.3 & $-5.17$ & $4^\ast$ \\
$\kappa$ & 1078 & 0.62 & 5890 & 10.2 & --- & 22.6 & $-4.95$ & $4^\ast$ \\
$\lambda$ & 1087 & 0.60 & 5957 & 9.1 & --- & 21.4 & $-4.90$ & $4^\ast$ \\
$\mu$ & 1248 & 0.58 & 6025 & 8.0 & --- & 20.0 & $-4.65$ & $4^\ast$ \\
$\nu$ & 1258 & 0.63 & 5856 & 10.8 & --- & 23.2 & $-4.90$ & $4^\ast$ \\
$\xi$ & 1260 & 0.58 & 6025 & 8.0 & --- & 20.0 & $-4.78$ & $4^\ast$ \\
$\pi$ & 1269 & 0.72 & 5564 & 15.9 & --- & 27.4 & $-5.02$ & $4^\ast$ \\
$\rho$ & 1318 & 0.58 & 6022 & 8.0 & --- & 20.0 & $-4.73$ & $4^\ast$ \\
$\sigma$ & 1449 & 0.62 & 5890 & 10.2 & --- & 22.6 & $-5.13$ & $4^\ast$ \\
$\tau$ & 1477 & 0.68 & 5692 & 13.7 & --- & 25.7 & $-4.94$ & $4^\ast$ \\
\label{TSum1}\end{tabular}}
\tablenotemark{$T_{\rm eff}$ is in Kelvin; $\tau$ and $P_{\rm rot}$ are in days;
age is in $\,{\rm Gyr}$. $P_{\rm rot}^\ast$ (in days) is computed from}
\tablenotemark{\Eq{Pcyc_computed} assuming an age of $t=4\,{\rm Gyr}$.}
\end{table}
\begin{table}[t!]\caption{
F and G dwarfs (italics) and K dwarfs (roman) of BMM.
}\vspace{12pt}\centerline{\begin{tabular}{lrccrrcc}
\# & HD/KIC & $B$--$V$ & $T_{\rm eff}$ & $\tau\;$ & $P_{\rm rot}$ &
$\!\!\!\log\bra{R'_{\rm HK}}\!\!\!$ & age \\
\hline
{\em a}& Sun& 0.66 & 5778 & 12.6 & $25.40$ & $-4.90$ & 4.6\\
{\em b}& 1835& 0.66 & 5688 & 12.6 & $ 7.78$ & $-4.43$ & 0.5\\
{\em c}& 17051& 0.57 & 6053 & 7.5 & $ 8.50$ & $-4.60$ & 0.6\\
{\em d}& 20630& 0.66 & 5701 & 12.6 & $ 9.24$ & $-4.42$ & 0.7\\
{\em e}& 30495& 0.63 & 5780 & 10.9 & $11.36$ & $-4.49$ & 1.1\\
{\em f}& 76151& 0.67 & 5675 & 13.2 & $15.00$ & $-4.66$ & 1.6\\
{\em g}& 78366& 0.63 & 5915 & 10.9 & $ 9.67$ & $-4.61$ & 0.8\\
{\em h}& 100180& 0.57 & 5942 & 7.5 & $14.00$ & $-4.92$ & 2.3\\
{\em i}& 103095& 0.75 & 5035 & 17.4 & $31.00$ & $-4.90$ & 4.6\\
{\em j}& 114710& 0.58 & 5970 & 8.0 & $12.35$ & $-4.75$ & 1.7\\
{\em k}& 128620& 0.71 & 5809 & 15.4 & $22.50$ & $-5.00$ & 5.4\\
{\em l}& 146233& 0.65 & 5767 & 12.0 & $22.70$ & $-4.93$ & 4.1\\
{\em m}& 152391& 0.76 & 5420 & 17.8 & $11.43$ & $-4.45$ & 0.8\\
{\em n}& 190406& 0.61 & 5847 & 9.7 & $13.94$ & $-4.80$ & 1.8\\
{\em o}& 8006161& 0.84 & 5488 & 20.6 & $29.79$ & $-5.00$ & 4.6\\
{\em p}&10644253& 0.59 & 6045 & 8.6 & $10.91$ & $-4.69$ & 0.9\\
{\em q}& 186408& 0.64 & 5741 & 11.5 & $23.80$ & $-5.10$ & 7.0\\
{\em r}& 186427& 0.66 & 5701 & 12.6 & $23.20$ & $-5.08$ & 7.0\\
{\rm a}& 3651& 0.84 & 5128 & 20.6 & $44.00$ & $-4.99$ & 7.2\\
{\rm b}& 4628& 0.89 & 5035 & 21.7 & $38.50$ & $-4.85$ & 5.3\\
{\rm c}& 10476& 0.84 & 5188 & 20.6 & $35.20$ & $-4.91$ & 4.9\\
{\rm d}& 16160& 0.98 & 4819 & 22.8 & $48.00$ & $-4.96$ & 6.9\\
{\rm e}& 22049& 0.88 & 5152 & 21.5 & $11.10$ & $-4.46$ & 0.6\\
{\rm f}& 26965& 0.82 & 5284 & 20.1 & $43.00$ & $-4.87$ & 7.2\\
{\rm g}& 32147& 1.06 & 4745 & 23.5 & $48.00$ & $-4.95$ & 6.4\\
{\rm h}& 81809& 0.80 & 5623 & 19.4 & $40.20$ & $-4.92$ & 6.6\\
{\rm i}& 115404& 0.93 & 5081 & 22.3 & $18.47$ & $-4.48$ & 1.4\\
{\rm j}& 128621& 0.88 & 5230 & 21.5 & $36.20$ & $-4.93$ & 4.8\\
{\rm k}& 149661& 0.80 & 5199 & 19.4 & $21.07$ & $-4.58$ & 2.1\\
{\rm l}& 156026& 1.16 & 4600 & 24.2 & $21.00$ & $-4.66$ & 1.3\\
{\rm m}& 160346& 0.96 & 4797 & 22.7 & $36.40$ & $-4.79$ & 4.4\\
{\rm n}& 1653411& 0.78 & 5023 & 18.6 & $19.90$ & $-4.55$ & 2.0\\
{\rm o}& 166620& 0.90 & 5000 & 21.9 & $42.40$ & $-4.96$ & 6.2\\
{\rm p}& 201091& 1.18 & 4400 & 24.4 & $35.37$ & $-4.76$ & 3.3\\
{\rm q}& 201092& 1.37 & 4040 & 25.9 & $37.84$ & $-4.89$ & 3.2\\
{\rm r}& 2198341& 0.80 & 5461 & 19.4 & $42.00$ & $-5.07$ & 7.1\\
{\rm s}& 2198342& 0.91 & 5136 & 22.1 & $43.00$ & $-4.94$ & 6.2\\
1 & 141004 & 0.60 & 5870 & 9.1 & 25.80 & $-5.00$ & 5.6 \\
2 & 161239 & 0.65 & 5640 & 12.0 & 29.20 & $-5.16$ & 5.5 \\
3 & 187013 & 0.47 & 6455 & 3.1 & 8.00 & $-4.79$ & --- \\
4 & 224930 & 0.67 & 5470 & 13.1 & 33.00 & $-4.88$ & 6.4 \\
\label{TSum2}\end{tabular}}
\end{table}
\begin{figure*}[t!]\begin{center}
\includegraphics[width=\textwidth]{pRHK}
\end{center}\caption[]{
$\log\bra{R'_{\rm HK}}$ versus $\log (\tau/P_{\rm rot})$ for the
stars of M67 with known rotation periods as green uppercase letters,
the F and G dwarfs of BMM as blue italics characters, the K dwarfs
of BMM as red roman characters, and the four stars of SB with
$P_{\rm rot}/\tau\ge2.4$ as orange numbers 1--4.
On the upper abscissa, the Rossby number $P_{\rm rot}/\tau$ is given.
The dashed-dotted line shows the fit of BMM, whereas the
solid line represents a fit to the residuals in \Eq{cResidual3} for
the nine stars with $\log\bra{R'_{\rm HK}}\ge-4.85$.
The dashed line is a direct fit to the same nine stars and the dotted line
shows the fit given by \Eq{representation}.
The arrow indicates the anticipated evolution with increasing age $t$.
Some of the symbols have been shifted slightly to avoid overlap.
The Sun corresponds to the blue italics $a$.
The upper inset shows the residual $\log c$ versus
$\log\bra{R'_{\rm HK}}$ for the stars of M67 as green filled
circles, the F and G dwarfs of BMM as blue diamonds, and the K dwarfs
of BMM as red crosses.
The lower inset shows the increasing magnetic field strength for small
values of $4\pi\tau/P_{\rm rot}$ from Figure~12(b) of \cite{KKKBOP15}.
}\label{pRHK}\end{figure*}
\section{Representation of the data}
To be able to discuss individual stars in their rotation--activity
diagrams, we denote the stars of M67 by uppercase roman and lowercase
Greek characters and identify them by their Sanders number S in \Tab{TSum1}.
The F and G dwarfs of BMM, represented by lowercase italics characters,
their K dwarfs, indicated by lowercase roman characters, and the
four stars of SB with $P_{\rm rot}/\tau\ge2.4$, indicated by the numbers
1--4, are identified by their HD or KIC numbers in \Tab{TSum2}.
In addition to $B-V$, $P_{\rm rot}$, and $R'_{\rm HK}$, we also give
in both tables the effective temperature $T_{\rm eff}$ and,
for $B-V>0.495$, the
gyrochronological age $t$ from the relations of \cite{MH08},
\begin{equation}
t=\left\{P_{\rm rot}/ [0.407\,(B-V-0.495)^{0.325}]\right\}^{1.767};
\label{Age}
\end{equation}
see also Equation~(9) of BMM.\footnote{
This relation gives 3\%--14\% smaller ages than the one of \cite{Bar10},
which was also used by \cite{GBCSH17}, taking $\tau$ from \cite{BK10}.
Here we use \Eq{Age} for consistency with BMM.}
\EEq{Age} can be inverted to compute instead $P_{\rm rot}$ under
the reasonable assumption that $t=4\,{\rm Gyr}$ is valid for all stars of M67;
evidence comes from isochrones \citep{Sara2009,Onehag2011}, gyrochronology
\citep{Barnes2016}, and chromospheric activity combined with gyrochronology
\citep{GBCSH17}.
This yields
\begin{equation}
P_{\rm rot}^\ast=0.407\,(B-V-0.495)^{0.325}\;t^{0.565},
\label{Pcyc_computed}
\end{equation}
where the asterisk is used to distinguish the computed value
from the measured one.
Next, using the semi-empirical relationship for $\tau(B-V)$ of
\cite{Noyes84} in the form
\begin{equation}
\log\tau=1.362-0.166x+0.03x^2-5.3x^3,
\label{BV}
\end{equation}
with $x=1-(B-V)$ and for $B-V<1$, we obtain $\tau/P_{\rm rot}^\ast$ as
a monotonically increasing function of $B-V$ in the range from $0.55$
to $0.8$.
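As a sanity check, the relations in \Eq{Age}, \Eq{Pcyc_computed}, and \Eq{BV} can be evaluated numerically; the sketch below reproduces, e.g., the tabulated values for star B of \Tab{TSum1} ($B-V=0.66$). Here the age comes out in Myr, and the inversion uses the exponent $1/1.767\approx0.566$:

```python
import math

# Sketch of the gyrochronology and turnover-time relations used in the text.
def age_myr(P_rot, BV):
    """Gyrochronological age (Myr) from Eq. (Age), MH08 calibration."""
    return (P_rot / (0.407 * (BV - 0.495) ** 0.325)) ** 1.767

def P_rot_star(t_myr, BV):
    """Inverted relation, Eq. (Pcyc_computed), with exponent 1/1.767."""
    return 0.407 * (BV - 0.495) ** 0.325 * t_myr ** (1 / 1.767)

def tau_days(BV):
    """Convective turnover time from the Noyes et al. (1984) fit, Eq. (BV)."""
    x = 1 - BV
    return 10 ** (1.362 - 0.166 * x + 0.03 * x ** 2 - 5.3 * x ** 3)

# Solar-type example (B-V = 0.66, P_rot = 25.4 d), cf. star B in Table 1:
print(age_myr(25.4, 0.66) / 1e3)   # ~4.2 Gyr
print(P_rot_star(4000, 0.66))      # ~24.8 d
print(tau_days(0.66))              # ~12.6 d
```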
Given these relations, we first show in \Fig{pRHK} all stars
with measured rotation periods in the rotation--activity diagram.
Error bars in $\bra{R'_{\rm HK}}$ and $P_{\rm rot}$ are marked by gray boxes.
The stars of BMM follow an approximately linear increase that can be described
by the fit $\log\bra{R'_{\rm HK}}\approx\log(\tau/P_{\rm rot})+\log c$, where
$\log c\approx-4.63$.
However, in spite of significant scatter, there is a clear increase
in activity for most of the stars of the sample of M67 as
$\tau/P_{\rm rot}$ decreases.
HD~187013 and 224930 (orange symbols 3 and 4 with $P_{\rm rot}/\tau=2.6$
and $2.5$, respectively) of the Mount Wilson stars are found to be compatible
with this trend.
We show two separate fits in \Fig{pRHK}, a direct one and one that has
been computed from a fit to the residual between $\log\bra{R'_{\rm HK}}$
and $\log (\tau/P_{\rm rot})$, i.e.,
\begin{equation}
\log\bra{R'_{\rm HK}} - \log (\tau/P_{\rm rot})
=\log c_1 + \rho\log\bra{R'_{\rm HK}}.
\label{cResidual}
\end{equation}
In the upper inset of \Fig{pRHK} we denote this residual by $\log c$,
where $c$ is a function of $\bra{R'_{\rm HK}}$.
\EEq{cResidual} can then be rewritten as an expression for
$\log\bra{R'_{\rm HK}}$ in terms of $\log(\tau/P_{\rm rot})$.
The parameters in \Eq{cResidual} have been computed from the nine out
of 19 stars for which $\log\bra{R'_{\rm HK}}\ge-4.85$.
This yields $\log c_1\approx2.92$ and $\rho\approx1.54$,
which is shown in the upper inset of \Fig{pRHK} as a solid line.\footnote{
\cite{GBCSH17} computed $\log c_1$ and $\rho$ for all 19 stars using
$\tau(B-V)$ from \cite{BK10} instead of \cite{Noyes84}; their values are
therefore somewhat different: $\log c_1\approx1.11$ and $\rho\approx1.25$.}
Solving for $\log\bra{R'_{\rm HK}}$ gives
\begin{equation}
\log\bra{R'_{\rm HK}} = \log c_2 + \mu_2 \log (\tau/P_{\rm rot}),
\label{cResidual3}
\end{equation}
where $\log c_2=\mu_2\log c_1\approx-5.41$ with
$\mu_2=(1-\rho)^{-1}\approx-1.85$.
It is shown in the main part of \Fig{pRHK} as a solid line.
By comparison, the direct fit for the same nine stars gives
$\log c_2^\ast\approx-4.87$ and $\mu_2^\ast=-0.24$ and is shown in
\Fig{pRHK} as a dashed line.
In addition, we combine the fit of BMM with that of \Eq{cResidual3} as
\begin{equation}
\bra{R'_{\rm HK}}=
\left\{\left[c_0\,(\tau/P_{\rm rot})\right]^q+
\left[c_2\,(\tau/P_{\rm rot})^{\mu_2}\right]^q\right\}^{1/q},
\label{representation}
\end{equation}
where $c_0=10^{-4.631}$ is the residual of BMM and $q=5$
is chosen large enough to make the transition between
the two fits sufficiently sharp.
This special representation now applies to the whole range of
$\tau/P_{\rm rot}$ and we return to it in \Sec{Evolution}.
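The inversion leading to \Eq{cResidual3} and the blend in \Eq{representation} can be verified numerically; the sketch below uses the quoted values $\log c_1\approx2.92$, $\rho\approx1.54$, and $c_0=10^{-4.631}$:

```python
# Verify the inversion of the residual fit and evaluate the blended representation.
log_c1, rho = 2.92, 1.54          # fit parameters from Eq. (cResidual)
mu2 = 1 / (1 - rho)               # ~ -1.85
log_c2 = mu2 * log_c1             # ~ -5.41
print(round(mu2, 2), round(log_c2, 2))

# Blended representation, Eq. (representation), with a sharp transition (q = 5):
c0, c2, q = 10 ** (-4.631), 10 ** log_c2, 5

def R_HK(x):
    """<R'_HK> as a function of x = tau/P_rot."""
    return ((c0 * x) ** q + (c2 * x ** mu2) ** q) ** (1 / q)

# For slow rotation (small tau/P_rot) the second, reversed branch dominates:
print(R_HK(0.3) > R_HK(1.0))
```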
To remind the reader of Figure~12(b) of \cite{KKKBOP15}, we show
in the lower inset of \Fig{pRHK} the magnetic field strength versus
$4\pi\tau/P_{\rm rot}$.
The $4\pi$ factor emerges because in those models, rotation is
controlled by the Coriolis force, which is proportional to $2\Omega$,
where $\Omega=2\pi/P_{\rm rot}$ is the angular velocity.
\begin{figure*}[t!]\begin{center}
\includegraphics[width=\textwidth]{pRHK_4Gyr_all}
\end{center}\caption[]{
Similar to \Fig{pRHK}, but now with rotation periods computed from $B-V$
using \Eq{Pcyc_computed} and the assumption that M67 is $4\,{\rm Gyr}$ old.
(The green symbols would end up further to the left
if we assumed instead an age of $5\,{\rm Gyr}$.)
Here all stars are included---not just those for which
$P_{\rm rot}$ would also be available; see \Tab{TSum1}.
The inset shows $\tau/P_{\rm rot}^\ast$ as a function
of $B-V$ using \Eq{BV}.
The data points for the stars of M67 are overplotted to illustrate
the scatter and the range in $B-V$ covered by the data.
The red dotted line without surrounding data points shows the result
using the gyrochronology relation of \cite{Bar10} and \cite{BK10}
for $\tau(B-V)$, denoted by B+BK.
}\label{pRHK_4Gyr_all}\end{figure*}
Next, we compare with the diagram where $\tau/P_{\rm rot}^\ast$ is
estimated just from $B-V$ using gyrochronology; see \Eq{Pcyc_computed}
and \Fig{pRHK_4Gyr_all}.
Now, the direct fit for the 15 stars with
$\log\bra{R'_{\rm HK}}\ge-4.85$ gives
$\log c_2^{\rm dir}\approx-5.12$ and $\mu_2^{\rm dir}=-0.87$
and is shown as a dashed line.
The inset reveals that $\tau/P_{\rm rot}^\ast$ is indeed a monotonically
increasing function of $B-V$ in the range from $0.55$ to $0.8$,
as asserted earlier in this section.
The data points for the stars of M67 scatter around this line.
The corresponding relation obtained using the gyrochronology
relation of \cite{Bar10} is also given.
The difference of about $0.3$ dex results from the fact that the
$\tau(B-V)$ of \cite{BK10} is nearly twice as large as that of
\cite{Noyes84}.
As a function of $\tau/P_{\rm rot}^\ast$, the reversed trend of
$\log\bra{R'_{\rm HK}}$ is even more pronounced.
S1420 (green S) now appears to rotate more rapidly:
$P_{\rm rot}^\ast=20.7\,{\rm d}$ whereas $P_{\rm rot}=24.8\,{\rm d}$;
see \Tab{TSum1}.
Another example is S1106 (green L) where
$P_{\rm rot}^\ast=24.3\,{\rm d}$ whereas $P_{\rm rot}=28.4\,{\rm d}$.
On the other hand, S801 (green C), S1218 (green N), and S1307
(green R) are now predicted to rotate more slowly than measured.
To understand these departures, we need to remind ourselves of the
possibility of measurement errors (notably in $P_{\rm rot}$), of
variability of $\bra{R'_{\rm HK}}$ associated with cyclic changes in
the stars' magnetic fields, and of the intrinsically chaotic nature of
stellar activity.
Also, of course, the gyrochronology relation itself is only an
approximation to empirical findings and not a physical law of nature.
\section{Evolution and relation to reduced braking}
\label{Evolution}
Following \cite{vanSaders2016} and \cite{MvS17}, we would expect that
evolved stars lose their large-scale magnetic field and thereby
undergo reduced magnetic braking.
Their angular velocity should then stay approximately constant until
accelerated expansion occurs at the end of their main-sequence life.
For those stars, it might be difficult or even impossible to ever enter
the regime of antisolar DR.
This could be the case for $\alpha$~Cen~A (HD~128620, blue $k$),
KIC~8006161 (blue $o$), and 16 Cyg A and B (HD~186408 and 186427,
i.e., blue $q$ and $r$ symbols, respectively).
These are stars that rotate faster than expected based on their extremely
low chromospheric activity.
Given the intrinsic variability of stellar magnetic fields, it is
conceivable that the idea of reduced braking may not apply to all stars.
Others would brake sufficiently to enter the regime of antisolar
rotation and then exhibit enhanced activity, as discussed above.
With increasing age, those stars would continue to slow down further
and increase their chromospheric activity, as seen in \Fig{pRHK_4Gyr_all}.
\begin{figure*}[t!]\begin{center}
\includegraphics[width=\textwidth]{pRHK_comp3_inset}
\end{center}\caption[]{
Dependence of the residual $\log\tilde{c}$ on $T_{\rm eff}$, which
corresponds to the dotted lines in \Figs{pRHK}{pRHK_4Gyr_all}.
Again, some of the symbols have been shifted to avoid overlapping.
Average and standard deviation are computed for smaller $T_{\rm eff}$ intervals,
as indicated by horizontal dotted lines and gray boxes, respectively.
The inset shows the residual $\log c$ versus $T_{\rm eff}$.
}\label{pRHK_comp}\end{figure*}
It is in principle possible that stars with different $T_{\rm eff}$ show
a systematic dependence of the residual
\begin{equation}
\log\tilde{c}=\log\bra{R'_{\rm HK}}
-\log\left[\;\mbox{``rhs of \Eq{representation}''}\;\right];
\label{cResidual3_tilde}
\end{equation}
see the dotted lines in \Figs{pRHK}{pRHK_4Gyr_all}.
This is examined in \Fig{pRHK_comp}.
It turns out that this residual is essentially flat, i.e., there is
no systematic dependence on $T_{\rm eff}$, and it is consistent with random
departures which do, however, become stronger toward larger $T_{\rm eff}$,
as indicated by the gray boxes in \Fig{pRHK_comp}.
The work of \cite{KKKBOP15} has demonstrated that in the antisolar regime,
the magnetic activity can indeed be chaotic and intermittent.
Thus, depending on chance, a star in this regime may appear particularly
active (e.g., S1252, green O symbol with
$\log\bra{R'_{\rm HK}}=-4.72$), while others could be
particularly inactive (e.g., S969, green G symbol,
with $\log\bra{R'_{\rm HK}}=-5.06$).
Other examples are S1449 (green $\sigma$ with
$\log\bra{R'_{\rm HK}}=-5.13$) and S1048
(green $\iota$ with $\log\bra{R'_{\rm HK}}=-5.17$).
We must therefore expect that the magnetic activity of some of these
stars could still change significantly later in time, perhaps on
decadal or multi-decadal timescales.
In fact, we note from a comparison of the Ca~{\sc ii} measurements in
\cite{GBCSH17} with those from the initial chromospheric activity survey
of over a decade ago \citep{Giam2006} that the $R'_{\rm HK}$ values for
the specific stars mentioned above, S969 and S1048, are now each lower
by about 20\% while that for S1449 is lower by 23\%.
Given that the more massive stars of M67 are on their way to becoming
subgiants \citep[e.g.][]{Motta16}, we now discuss whether this could
explain their enhanced activity.
Properties important for convection such as luminosity and radius
may increase substantially above the main sequence values before reaching
the turnoff.
To compare with observations, it is convenient to look at the usual
residual $\log c=\log\bra{R'_{\rm HK}}-\log(\tau/P_{\rm rot})$, which
was given in the inset of \Fig{pRHK} as a function of $R'_{\rm HK}$ and
is now presented in the inset of \Fig{pRHK_comp} as a function of $T_{\rm eff}$.
We see that the four hottest stars of the sample, S603 (green A),
S1095 (green J), S1252 (green O), and S1420 (green S) have a slight,
but systematic excess.
Assuming that their values of $R'_{\rm HK}$ and $P_{\rm rot}$
are accurate, this could mean that the estimated values of $\tau$
are too small.
\cite{Gil85} found that for a certain regime of evolution, stars of
the solar mass and above may have $\tau$ significantly larger (up to
$0.4\,{\rm dex}$) than those of main-sequence stars at the same effective
temperature (see their Figure~10).
However, the regime for this behavior occurred only when these stars
cooled to below the solar main-sequence effective temperature.
As can be seen in the color-magnitude diagram in \cite{Giam2006},
our sample does not include stars which have cooled to this degree; on the
contrary, our sample is still very near the main sequence, and therefore
we expect \Eq{BV} to still apply.
This would therefore not alter our suggestion that most of the members
of M67 have antisolar DR.
\section{Conclusions}
The phenomenon of antisolar DR is well known from theoretical models of
solar/stellar convective dynamos in spherical shells.
So far, antisolar DR has only been observed in some K giants
\citep{SKW03,WSW05,Kovari_etal15,Kovari_etal17} and subgiants
\citep{Har16}, but not yet in dwarfs.
Our work is compatible with the interpretation that the
enhanced activity at large Rossby numbers (slow rotation)
is a manifestation of antisolar DR.
Our results are suggestive of a bifurcation into two groups of stars:
those which undergo reduced braking and become inactive at
$P_{\rm rot}/\tau\approx2$ \citep{vanSaders2016}, and those that enter
the regime of antisolar rotation and continue to brake at enhanced
activity, although with chaotic time variability.
Interestingly, \cite{Katsova} have suggested that stars with antisolar
DR may be prone to exhibiting superflares \citep{Maehara,Candelaresi}.
This would indeed be consistent with the anticipated chaotic time
variability of such stars.
The available time series are too short to detect antisolar DR through
changes in the apparent rotation rate that would be associated with spots
at different latitudes; see \cite{RA15} for details of a new technique.
It is therefore important to use future opportunities, possibly still
with {\em Kepler}, to repeat those measurements at later times when the
magnetic activity belts might have changed in position.
\acknowledgements
We thank the referee for their thoughtful comments.
We are indebted to Bengt Gustafsson and Travis Metcalfe for useful
discussions and Dmitry Sokoloff for alerting us to their recent paper.
This work has been supported in part by
the NSF Astronomy and Astrophysics Grants Program (grant 1615100),
the Research Council of Norway under the FRINATEK (grant 231444),
and the University of Colorado through its support of the
George Ellery Hale visiting faculty appointment.
We gratefully acknowledge partial support of this investigation by grants
to AURA/NSO from, respectively, the NASA {\em Kepler}/K2 Guest Observer
program through Agreement No.\ NNX15AV53G and from the NN-EXPLORE program
through JPL RSA 1533727, which is administered by the NASA Exoplanet
Science Institute (NExScI).
The National Solar Observatory is operated by AURA under a cooperative
agreement with the National Science Foundation.
\newpage
\section{Introduction}
Nuclear star clusters with a central \ac{MBH}\ are dense environments where the
interactions between stars play a crucial role. Although they are among the
densest stellar environments in the Universe, their gravitational potential
is still dominated by the central \ac{MBH}\@. As a result, stars move on nearly
stationary Keplerian orbits. The gravitational potential from the stars
themselves only leads to small potential perturbations that modify the purely
Keplerian potential of the \ac{MBH}\@. Nevertheless, these small perturbations, as
well as the corrections from general relativity, are the ones that drive the
long-term evolution of the stellar cluster.
The evolution of a stellar system with a central \ac{MBH}\ is a classical problem
of stellar dynamics. It was first studied in the context of globular clusters
with a central \ac{MBH}\ by~\cite{Peebles1972}
and~\cite{Bahcall+1976,Bahcall+1977}. These seminal works showed, under
various simplifying approximations~\citep{Nelson+1999}, that the two-body
diffusion coefficients for a spherically symmetric and isotropic system can be
calculated from first principles, where the only unknown is the Coulomb
logarithm. This was subsequently generalized by~\citet{Shapiro+1978}
and~\citet{Cohn+1978}, who derived a two-dimensional diffusion equation (in
energy and angular momentum) and calculated the associated diffusion
coefficients~(see, e.g.,~\cite{Bar-Or+2016}). Although the existence of
central black holes in globular clusters is still unknown, many nuclear star
clusters contain a massive black hole in their center~(see~\cite{Graham2016}
for a review).
In addition to the standard two-body relaxation driven by local scatterings,
there exists in galactic nuclei a more efficient mechanism to change the
angular momentum of stars. This process, named \ac{RR}\ by~\citet{Rauch+1996},
results from the coherent motion of the stars along their nearly fixed
Keplerian orbits: a given test star will be subject to residual torques
persisting on long timescales.
\ac{RR}\ can be separated into two different processes, scalar \ac{RR}\
that drives the evolution of the eccentricity, i.e.\ the magnitude of the angular
momentum, and vector \ac{RR}\ that drives the orbital orientation. The
residual torques associated with scalar \ac{RR}\ are randomized by the in-plane
orbital precession. The residual torques associated with vector \ac{RR}\ persist on
longer timescales, as they are randomized by the changes of the
orbital orientations themselves. This implies that the orbital evolution by vector
\ac{RR}\ is much faster than that by scalar \ac{RR}, but can only affect the
direction of the angular momentum vector~\citep{Rauch+1996,
Hopman+2006a}. Extensive studies of vector \ac{RR}\ were presented by~\citet{Kocsis+2011,
Kocsis+2015, Roupas+2017}. Here, our main focus is scalar \ac{RR}\@.
Scalar \ac{RR}\@, which can dominate the angular momentum evolution over
standard two-body relaxation, lacked a formal self-consistent description
for many years. Previous attempts at modeling this process were only
qualitative, and many studies had to use ad-hoc methods to include
it~\citep[e.g.,][]{Hopman+2006a, Madigan+2011, Merritt+2011, Antonini+2013,
Merritt2015a}. Recent advances in $N$-body simulations allowed for the study
of \ac{RR}\ numerically~\citep{Merritt+2011, Hamers+2014}, but were lacking a fully
self-consistent theory.
Only recently did several studies~\citep{Bar-Or+2014, Sridhar+2016,Fouvry+2017a}
independently put forward the foundations for a self-consistent kinetic theory
of \ac{RR}\@. Building upon these works, we show here that in the case of an
isotropic spherical system, scalar RR can be described as a diffusion process,
for which one can derive and calculate the diffusion coefficients\ from first principles.
This paper is organized as follows. In Section~\ref{s:hamiltonian} we write
the orbit-averaged Hamiltonian of a test star, as a Fourier sum over the
orbital angles of both the test star and the field stars. Following the
$\eta$-formalism~\citep{Bar-Or+2014,Fouvry+2017b}, these Fourier components are
the random noise terms that drive the stochastic evolution of the test star's
orbital angular momentum, $J$. In Section~\ref{s:dc} we follow this approach
to write a closed expression for the diffusion coefficient of
scalar \ac{RR}\@. In Section~\ref{s:dcs} we briefly discuss the two-dimensional
($a$, ${J/J_\mathrm{c}}$) structure of the diffusion coefficients\ and compare it to two-body
relaxation. Finally, we summarize our results and discuss future applications
in Section~\ref{s:summary}.
\section{Hamiltonian}
\label{s:hamiltonian}
Let us consider a star of mass $M_{\star}$ moving on a nearly Keplerian orbit of
\ac{sma}\ $a$ around a \ac{MBH}\ of mass $M_\bullet$, embedded in a spherically symmetric
star cluster of density ${\rho(r)}$ with an isotropic velocity distribution.
The orbits of the stars in the
potential of the \ac{MBH}\ can be described in angle-action variables. In this
case, it is convenient to use the Delaunay
variables~\citep{BinneyTremaine2008}, where the three actions are:
${J_\mathrm{c}=\sqrt{GM_\bullet a}}$, the maximal (circular) angular momentum for a given
$a$, ${J=\sqrt{1-e^2}J_\mathrm{c}}$, the specific orbital angular momentum with $e$
the eccentricity, and ${J_z=J\cos(\theta)}$, the $z$ component of the
angular momentum with $\theta$ the inclination angle w.r.t.\ an inertial
reference frame. The corresponding angles are: the mean anomaly $\mathcal{M}$,
the argument of pericenter $\omega$, and the longitude of the ascending node
$\Omega$.
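As a concrete illustration, the three Delaunay actions follow directly from the orbital elements. A minimal Python sketch (assuming cgs units and the fiducial ${M_\bullet=4\times10^{6}M_{\odot}}$ used in later sections):

```python
import math

G = 6.674e-8              # gravitational constant [cgs] (assumed units)
MSUN = 1.989e33           # solar mass [g]
PC = 3.086e18             # parsec [cm]
M_BH = 4e6 * MSUN         # fiducial MBH mass

def delaunay_actions(a, e, theta):
    """Delaunay actions (J_c, J, J_z) for sma a [cm], eccentricity e,
    and inclination theta [rad] w.r.t. an inertial frame."""
    J_c = math.sqrt(G * M_BH * a)       # circular angular momentum
    J = math.sqrt(1.0 - e * e) * J_c    # specific orbital angular momentum
    J_z = J * math.cos(theta)           # z-component
    return J_c, J, J_z

# example: a = 8 mpc, e = 0.5, 60-degree inclination
Jc, J, Jz = delaunay_actions(0.008 * PC, 0.5, math.pi / 3)
```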
Following~\cite{Bar-Or+2014}, we use the addition theorem for spherical
harmonics to write the secular (orbit-averaged) specific Hamiltonian of the test
star as a multipole expansion
\begin{equation}
\label{eq:H}H=H_0(a,J)+\!\!\!\!\sum_{m,n=-\infty}^{\infty}\!\!\!\!\mathrm{e}^{\mathrm{i}(m\Omega+n\omega)}\eta_{nm}(a,J,J_z,t).
\end{equation}
In this equation, the first term is the mean field potential, while the second
term describes the potential fluctuations around the mean field due to the
intricate motion of the finite number of field stars.
The mean field potential reads
\begin{equation}
H_{0}(a,J)=\Phi_\mathrm{MBH}(a)+\Phi_\mathrm{GR}(a,J)+\overline{\Phi}_\star(a,J).\label{eq:H0}
\end{equation}
It is composed of the Keplerian potential of the central \ac{MBH},
\begin{equation}
\label{eq:phi_mbh}\Phi_\mathrm{MBH}(a)=-\frac{1}{2}\,\nu_{\mathrm{r}}(a)J_\mathrm{c},
\end{equation}
where ${\nu_{\mathrm{r}}(a)=\sqrt{GM_\bullet/a^{3}}}$ is the fast orbital frequency
imposed by the central \ac{MBH}\@, an effective correction to the
Keplerian potential,
\begin{equation}
\label{eq:phi_gr}\Phi_\mathrm{GR}(a,J)=-3\frac{r_{\mathrm{g}}}{a}\frac{J_\mathrm{c}}{J}\nu_{\mathrm{r}}(a)J_\mathrm{c},
\end{equation}
which reproduces the orbit-averaged Schwarzschild (in-plane) orbital precession,
where ${r_{\mathrm{g}}=GM_\bullet/c^2}$, and ${\overline{\Phi}_\star(a,J)}$ the mean
field potential due to the stellar cluster around the \ac{MBH}\@.
The last two terms of $H_0$ induce respectively a prograde and retrograde
in-plane orbital precession, and the combined precession is
\begin{equation}
\label{eq:nu_p}\dot{\omega}\equiv\nu_{\mathrm{p}}(a,J)=\dpd{H_0}{J}=\nu_{\mathrm{GR}}(a,J)+\nu_{\mathrm{M}}(a,J),
\end{equation}
where
\begin{equation}
\label{eq:nugr}\nu_{\mathrm{GR}}(a,J)=\dpd{\Phi_\mathrm{GR}}{J}=3\frac{r_{\mathrm{g}}}{a}\frac{J_\mathrm{c}^2}{J^2}\nu_{\mathrm{r}}(a),
\end{equation}
is the precession induced by \ac{GR}\@, and
\begin{equation}
\label{eq:num}\nu_{\mathrm{M}}(a,J)=\dpd{\overline{\Phi}_\star}{J}=\frac{\nu_{\mathrm{r}}(a)}{\pi M_\bullet e}\dint[0][\pi]fM_\mathrm{tot}(r[f])\cos f
\end{equation}
is the mass induced precession~\citep{Kocsis+2015}, where ${M_\mathrm{tot}(r)}$ is the
stellar mass enclosed within a radius $r$, and $f$ is the true anomaly.
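A sketch of how these two precession frequencies can be evaluated numerically is given below, in Python. The enclosed-mass model ${M_\mathrm{tot}(r)=M_\bullet(r/r_\mathrm{h})^{3-\gamma}}$ is an illustrative assumption (a Bahcall-Wolf-like cusp with the fiducial parameters used later); the relation ${r(f)=a(1-e^2)/(1+e\cos f)}$ is the standard Keplerian one.

```python
import math

G, MSUN, PC, C = 6.674e-8, 1.989e33, 3.086e18, 2.998e10   # cgs (assumed)
M_BH = 4e6 * MSUN
R_H = 2.0 * PC        # radius of influence; stellar mass M_BH inside it (assumed)
GAMMA = 1.75          # cusp slope (assumed)

def nu_r(a):
    """orbital frequency sqrt(G M_bh / a^3)"""
    return math.sqrt(G * M_BH / a**3)

def nu_gr(a, j):
    """relativistic precession nu_GR = 3 (r_g/a) nu_r / j^2, with j = J/J_c"""
    r_g = G * M_BH / C**2
    return 3.0 * (r_g / a) * nu_r(a) / j**2

def m_tot(r):
    """toy enclosed stellar mass (illustrative power-law cusp)"""
    return M_BH * (r / R_H)**(3.0 - GAMMA)

def nu_mass(a, e, n=4000):
    """mass precession: orbit average of M_tot(r[f]) cos(f) over the true
    anomaly, by midpoint quadrature on [0, pi]"""
    s = 0.0
    for k in range(n):
        f = (k + 0.5) * math.pi / n
        r = a * (1.0 - e * e) / (1.0 + e * math.cos(f))
        s += m_tot(r) * math.cos(f)
    s *= math.pi / n
    return nu_r(a) / (math.pi * M_BH * e) * s

a = 0.008 * PC
prograde = nu_gr(a, 0.5)      # positive: GR precession is prograde
retrograde = nu_mass(a, 0.5)  # negative: mass precession is retrograde
```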
The last term in \eq~\eqref{eq:H} is due to the discrete nature of the stellar potential
and describes the fluctuations around the mean field potential due to the
motion of the field stars. Note that this is the only time-dependent term in
the Hamiltonian and therefore this term drives the orbital diffusion of the test
star. These terms take the form
\begin{equation}
\label{eq:eta}\eta_{nm}(a,J,J_z,t)=\!\!\sum_{k=1}^N G M_k\!\!\!\!\sum_{n^{\prime}=-\infty}^{\infty}\!\!\!\!\mathrm{e}^{-\mathrm{i}(m\Omega_k(t)+n^{\prime}\omega_k(t))}\psi_{m nn^{\prime}}(\mathbf{I},\mathbf{I}_{k}(t)),
\end{equation}
where the first sum is over the $N$ field stars and $M_k$ is the mass of the
$k$-th field star. In the large $N$ limit, ${\eta_{nm}}$ can be considered as
random Gaussian noise terms, with ${\langle\eta_{nm}\rangle=0}$. In
\eq~\eqref{eq:eta}, we also
introduced the angular Fourier components of the pairwise orbit-averaged
interaction potential given by
\begin{equation}
\label{eq:psi}\psi_{m nn^{\prime}}(\mathbf{I},\mathbf{I}^{\prime})=-\!\!\!\!\sum_{\ell=\ell_{\min}}^{\infty}\!\!\!G^{\ell}_{m nn^{\prime}}(\theta,\theta^{\prime})\,K_{nn^{\prime}}^{\ell}(a,J,a^{\prime},J^{\prime}),
\end{equation}
with
\begin{align}
\label{eq:Kl}K_{nn^{\prime}}^{\ell}(a,J,a^{\prime},J^{\prime})={}&\left\langle K_{\ell}(r,r^{\prime})\,\mathrm{e}^{\mathrm{i}(n f-n^{\prime}f^{\prime})}\right\rangle_{\!\!\circlearrowright}\!\!\nonumber\\={}&\left\langle K_{\ell}(r,r^{\prime})\cos(nf)\cos(n^{\prime}f^{\prime})\right\rangle_{\!\!\circlearrowright},
\end{align}
where ${\ell_{\min}\equiv\max\{1,|m|,|n|,|n^{\prime}|\}}$, ${\mathbf{I}=(J_\mathrm{c},J,J_z)}$ stands for the action vector,
${\langle\,\cdot\,\rangle_\circlearrowright}$ denotes the orbit-average,
and ${K_{\ell}(r,r^{\prime})={\min(r,r^{\prime})}^{\ell}/{\max(r,r^{\prime})}^{\ell+1}}$ is
the usual min-max factor from the Legendre expansion of the Newtonian interaction potential.
Here, we note that the component ${\ell=0}$ does not contribute to the diffusion.
Finally, ${G^{\ell}_{m nn^{\prime}}(\theta,\theta^{\prime})}$ is the geometrical factor
\begin{equation}
\label{eq:G}G^{\ell}_{m nn^{\prime}}(\theta,\theta^{\prime})=\frac{4\pi\,y_{\ell}^{n}\,y_{\ell}^{n^{\prime}*}}{2\ell+1}d_{nm}^{\ell}(\theta)d_{n^{\prime} m}^{\ell}(\theta^{\prime}),
\end{equation}
where
${d_{n m}^{\ell}(\theta)}$, related to
the Wigner's rotation matrices ${D_{nm}^\ell(\alpha,\beta,\gamma)=\mathrm{e}^{-\mathrm{i} n\alpha}d_{n m}^\ell(\beta)\mathrm{e}^{-\mathrm{i} m\gamma}}$~\citep[e.g.,][]{Rose1995}, satisfies
${{\langle|d_{n m}^{\ell}|}^2\rangle_{\theta}=2/(2\ell\!+\!1)}$, and
${y_{\ell}^{n}=Y_{\ell}^{n}(\pi/2,\pi/2)}$ satisfies
${4\pi{|y_{\ell}^{n}|}^2/(2\ell\!+\!1)=[(k_+\!-\!1)!!(k_{-}\!-\!1)!!]/[(k_{+})!!(k_{-})!!]}$ and is non zero only if
${k_\pm=\ell\pm n}$ are even.
Thus, this geometrical factor satisfies
\begin{equation}
\label{eq:G_symmetry}{\langle{|G^{\ell}_{m nn^{\prime}}(\theta,\theta^{\prime})|}^2\rangle}_{\theta,\theta^{\prime}}=\frac{16\pi^2{|y_{\ell}^{n}|}^2{|y_{\ell}^{n^{\prime}}|}^2}{{(2\ell+1)}^{4}},
\end{equation}
where
${\langle\,\cdot\,\rangle_{\theta,\theta^{\prime}}=\!\int\!\mathrm{d}\theta\mathrm{d}\theta^{\prime}\sin(\theta)\sin(\theta^{\prime})\,(\cdot)}$ is the average over the
inclination angles of the field and test stars.
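The closed-form expression for ${4\pi{|y_{\ell}^{n}|}^2/(2\ell+1)}$ can be checked directly against hand-evaluated values of ${|Y_\ell^n(\pi/2,\pi/2)|^2}$; a short Python sketch:

```python
def dfact(n):
    """double factorial, with the conventions (-1)!! = 0!! = 1"""
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

def y2_norm(ell, n):
    """4*pi*|y_l^n|^2 / (2l+1) via the double-factorial formula;
    vanishes unless k_pm = l +/- n are both even."""
    kp, km = ell + n, ell - n
    if kp % 2 or km % 2:
        return 0.0
    return dfact(kp - 1) * dfact(km - 1) / (dfact(kp) * dfact(km))

# e.g. |Y_2^0(pi/2, pi/2)|^2 = 5/(16*pi)  =>  y2_norm(2, 0) = 1/4
```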
\section{The diffusion coefficients}
\label{s:dc}
In this section we connect the stochastic Hamiltonian
in \eq~\eqref{eq:H},
which describes the motion of a test particle for a given set realization of
the field stars, to the diffusion equation describing the evolution of the
angular momentum of test particles undergoing stochastic
perturbations induced by the field stars from the stellar cluster.
Here, we assume a spherically symmetric stellar distribution for the cluster where the
phase-space density of stars ${f(\mathbf{r},\mathbf{v})=f(E,J)}$
depends only on the (positive by convention) orbital energy ${E=GM_\bullet/2a}$ and
$J$. The number of stars per unit $a$, per unit $J$ is given by
${N(a,J)=4\pi^{3}(2J/J_\mathrm{c})f(E,J)GM_\bullet}$. It is convenient to write
${N(a,J)=N(a)f_J(J;a)}$, where ${N(a)}$ is the number of stars per unit
$a$ and ${f_J(J;a)}$ is the \ac{PDF}\ of $J$ for a given $a$. In the simplifying
case where ${f(E,J)\propto{|E|}^p}$ (with ${p<3/2}$), one has
${N(a)=(3-\gamma)\tfrac{N_{0}}{a}(\tfrac{a}{a_{0}})^{2-\gamma}}$,
where ${\gamma=p\!+\!\tfrac{3}{2}}$ and
${N_0=g(\gamma)N(<\!a_0)}$, with
${g(\gamma)=\tfrac{\sqrt{\pi}}{2^\gamma}\tfrac{\Gamma(1+\gamma)}{\Gamma(\gamma-1/2)}}$
and ${N(<\!a_0)}$ is the number of stars within a radius $a_0$.
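For reference, the normalization factor ${g(\gamma)}$ and the resulting ${N(a)}$ are straightforward to evaluate; a Python sketch (the numerical value quoted for the Bahcall-Wolf slope ${\gamma=7/4}$ is our own evaluation of the Gamma-function formula):

```python
import math

def g(gamma):
    """g(gamma) = sqrt(pi)/2^gamma * Gamma(1+gamma)/Gamma(gamma-1/2)"""
    return (math.sqrt(math.pi) / 2.0**gamma
            * math.gamma(1.0 + gamma) / math.gamma(gamma - 0.5))

def N_of_a(a, a0, N_within_a0, gamma):
    """number of stars per unit sma for f(E,J) ∝ |E|^(gamma - 3/2)"""
    N0 = g(gamma) * N_within_a0
    return (3.0 - gamma) * N0 / a * (a / a0)**(2.0 - gamma)

# for a Bahcall-Wolf cusp, gamma = 7/4 and g(7/4) ≈ 0.935,
# so N(a) scales as a^(1-gamma) = a^(-3/4)
```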
The relaxations in energy and angular momentum are usually treated as two
separate one-dimensional relaxation processes, where the system first relaxes in
angular momentum with fixed energy, and then relaxes in
energy. In the absence of a loss-cone, during the first stage the system relaxes to an isotropic
angular momentum distribution ${f_J(J;a)=2J/J_\mathrm{c}^2}$. This is a general
result of maximal entropy and is independent of the exact details of the
relaxation process, i.e.\ different relaxation processes can change the timescale on
which the system relaxes but not the steady state. This steady state is
slightly modified by the existence of a loss-cone, where stars with
${J<J_\mathrm{lc}(a)}$ are lost, e.g.\ by tidal disruption with
${J_\mathrm{lc}\simeq\sqrt{2r_{\mathrm{t}} GM_\bullet}}$, where
${r_{\mathrm{t}}={(M_\bullet/M_{\star})}^{1/3}R_\star}$ is the tidal radius. For compact objects
and stars with tidal disruption radius smaller than $8 GM_\bullet/c^2$, the
loss-cone is given by ${J_\mathrm{lc}\simeq 4GM_\bullet/c}$, for which orbits plunge
directly into the \ac{MBH}\@. Finally, the existence of a loss-cone logarithmically suppresses the distribution of
angular momentum toward $J_\mathrm{lc}$, so that
${f_J(J;a)\propto(2J/J_\mathrm{c})\log(J/J_\mathrm{c})}$~\citep[e.g.,][]{Bar-Or+2016}.
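The two loss-cone regimes above can be combined into a single routine; a Python sketch using the approximate prefactors quoted in the text (solar-type field stars assumed):

```python
import math

G, C = 6.674e-8, 2.998e10            # cgs (assumed)
MSUN, RSUN = 1.989e33, 6.957e10

def j_lc(M_bh, M_star=MSUN, R_star=RSUN):
    """loss-cone angular momentum [cm^2/s]: tidal disruption when the
    tidal radius exceeds 8 GM/c^2, otherwise direct plunge at 4GM/c"""
    r_t = (M_bh / M_star)**(1.0 / 3.0) * R_star
    if r_t > 8.0 * G * M_bh / C**2:
        return math.sqrt(2.0 * r_t * G * M_bh)   # tidal loss cone
    return 4.0 * G * M_bh / C                    # relativistic loss cone
```

For the fiducial ${M_\bullet=4\times10^{6}M_{\odot}}$ the tidal branch applies; for sufficiently massive black holes, solar-type stars are swallowed whole and the relativistic branch takes over.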
Following~\citet{Bar-Or+2014} and~\citet{Fouvry+2017b}, the \ac{PDF}\ of $J$, at a
given \ac{sma}\ $a$, $P(J,t;a)$, evolves according to a diffusion (Fokker-Plank)
equation of the form
\begin{equation}
\label{eq:FP}\dpd{P(J,t;a)}{t}=\frac{1}{2}\dpd{}{J}\left[JD_{JJ}^\mathrm{RR}(a,J)\dpd{}{J}\frac{P(J,t;a)}{J}\right],
\end{equation}
where the diffusion coefficient, ${D_{JJ}^\mathrm{RR}(a,J)}$, is proportional to the power spectrum of the
noise terms $\eta_{nm}$ evaluated at the precession frequency ${\nu_{\mathrm{p}}(a,J)}$, so that
\begin{equation}
\label{eq:DRR_I}D_{JJ}^\mathrm{RR}(a,J)=2\sum_{n=1}^{\infty}n^2\widehat{C}_{n}(a,J,n\nu_{\mathrm{p}}(a,J)),
\end{equation}
where ${\widehat{C}_{n}(a,J,n\nu_{\mathrm{p}}(a,J))}$ is the Fourier transform,
${\hat{f}(\omega)=\!\int_{-\infty}^{\infty}\!\mathrm{d} t f(t)\mathrm{e}^{\mathrm{i}\omega t}}$, of
the correlation function
\begin{equation}
\label{eq:Cft}C_{n}(a,J,t-t^{\prime})=\sum_{\mathclap{m=-\infty}}^{\infty}\;\;\!\!\int_{-J}^{J}\!\!\frac{\rd J_z}{2J}\langle\eta_{nm}(a,J,J_z,t)\eta_{nm}^*(a,J,J_z,t^{\prime})\rangle,
\end{equation}
and we used the fact that ${\widehat{C}_{n}(a,J,n\nu_{\mathrm{p}}(a,J))}$ is invariant under
${n\to-n}$, to sum only over positive $n$, which introduces a factor $2$ w.r.t.~\citet{Fouvry+2017b}.
The correlation function in \eq~\eqref{eq:Cft} depends on time through the
motion of the field stars. As the Keplerian orbits of the field stars
evolve, the cluster's potential changes, and on long timescales, the system is
reshuffled and the potential fluctuations become uncorrelated. Here, the main
source of orbital evolution is the apsidal precession of the orbits due to the
enclosed stellar mass, $M_\mathrm{tot}(r)$, as well as the relativistic in-plane
precessions. Assuming that the field stars are moving on Keplerian orbits
precessing in-plane because of the mean field Hamiltonian $H_0$
(\eq~\eqref{eq:H0}) (and ignoring collective effects~\citep[see][]{Fouvry+2017b}), the diffusion
coefficient from \eq~\eqref{eq:DRR_I} can be written explicitly as
\begin{align}
\label{eq:DRR_II}D_{JJ}^\mathrm{RR}(a,J)\!=&4\pi G^2\sum_{i}M_i^2\sum_{n=1}^{\infty}\!\sum_{n^{\prime}=-\infty}^{\infty}\!\!\!\!n^2\!\dint a^{\prime} N_i(a^{\prime})\!\dint J^{\prime} f_{J,i}(J^{\prime};a^\prime)\nonumber\\&\hspace{-1.8cm}\times{|A_{nn^{\prime}}(a,J,a^{\prime},J^{\prime})|}^2\,\delta(n\nu_{\mathrm{p}}(a,J)-n^{\prime}\nu_{\mathrm{p}}(a^{\prime},J^{\prime})),
\end{align}
where we considered a mass spectrum ${\{M_i\}}$ of field stars and defined a susceptibility
coefficient averaged over the $J_z$ of both the test star and the field star
\begin{align}
\label{eq:A}&{|A_{nn^{\prime}}(a,J,a^{\prime},J^{\prime})|}^2=\!\!\!\!\sum_{\ell=\ell_{\min}}^\infty\!\sum_{m=-\ell}^{\ell}\!\int_{-J}^{J}\!\!\!\frac{\rd J_z}{2 J}\!\!\int_{-J^{\prime}}^{J^{\prime}}\!\!\!\frac{\rd J_{z}^{\prime}}{2J^{\prime}}\,{|\psi_{m nn^{\prime}}(\mathbf{I},\mathbf{I}^{\prime})|}^{2}\nonumber\\&\!\!\!=\!\!\!\sum_{\ell=\ell_{\min}}^\infty\!\!\!\frac{16\pi^2{|y_{\ell}^{n}|}^2{|y_{\ell}^{n^{\prime}}|}^2}{{(2\ell+1)}^{3}}\,{|K_{nn^{\prime}}^{\ell}(a,J,a^{\prime},J^{\prime})|}^{2}.
\end{align}
By solving the resonant condition in \eq~\eqref{eq:DRR_II}, we can carry out
the integral over $J^{\prime}$ to obtain
\begin{align}
\label{eq:DRR_III}D_{JJ}^\mathrm{RR}(a,J)={}&4\pi G^2\sum_i M_i^2\!\!\sum_{n,n^{\prime}=1}^{\infty}\!\!\frac{n^2}{n^{\prime}}\dint a^{\prime} N_i(a^{\prime})\nonumber\\&\times\sum_{J^{\prime}}\frac{f_{J,i}(J^{\prime};a^{\prime}){|A_{nn^{\prime}}(a,J\!,a^{\prime}\!,J^{\prime})|}^2}{|\partial_{J^{\prime}}\nu_{\mathrm{p}}(a^{\prime}\!,J^{\prime})|},
\end{align}
where the sum on $J^{\prime}$ runs over the solutions, $J_{+}$ and $J_{-}$, of the
resonant conditions ${\nu_{\mathrm{p}}(a^{\prime},J_\pm)=\pm(n/n^{\prime})\,\nu_{\mathrm{p}}(a,J)}$,
which depend on $a^{\prime}$, $a$, $J$ and on the ratio, ${n/n^{\prime}}$, of the
resonance numbers.
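Numerically, finding ${J_\pm}$ amounts to root-finding on a monotonic branch of ${\nu_{\mathrm{p}}}$. The Python sketch below uses a toy precession profile (a prograde ${\propto 1/j^2}$ GR-like term minus a constant retrograde mass term; purely illustrative, not the profile used in {\small{\textsc{scRRpy}}}):

```python
import math

def nu_p_toy(j, A=0.2, B=1.0):
    """toy in-plane precession vs j = J/J_c: GR-like prograde term minus a
    constant retrograde mass-precession term (illustration only)"""
    return A / j**2 - B

def bisect(fun, lo, hi, tol=1e-13):
    """simple bisection; assumes fun changes sign on [lo, hi]"""
    flo = fun(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if fun(mid) * flo > 0.0:
            lo, flo = mid, fun(mid)
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def resonant_j(target, A=0.2, B=1.0):
    """solve nu_p_toy(j) = target for j in (0, 1]; for this toy profile the
    analytic answer is j = sqrt(A / (B + target))"""
    return bisect(lambda j: nu_p_toy(j, A, B) - target, 1e-6, 1.0)
```

Both signs of the target frequency, corresponding to the two roots ${J_\pm}$, are reachable here because the toy profile changes sign on ${(0,1]}$.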
The information about the underlying cluster is contained in the angular
momentum distribution function ${f_{J,i}(J;a)}$, in the mass-weighted \ac{sma}\
distribution ${\sum_i M_i^2 N_i(a)}$, and in the stellar contribution to the
precession, which enters the resonant condition ${\delta(n\nu_{\mathrm{p}}(a,J)-n^{\prime}\nu_{\mathrm{p}}(a^{\prime},J^{\prime}))}$ in \eq~\eqref{eq:DRR_II} and scales with mass as ${\sim\sum_i(M_i/M_\bullet)N_i(<a)\nu_{\mathrm{r}}(a)}$ (see
\eq~\eqref{eq:num}). In a multi-mass population, the system will
undergo a strong mass segregation, where heavier masses will develop a steeper
density slope than the lighter ones~\citep{Alexander+2009}. This means that
at small \acp{sma}\ the heavy stars will dominate the diffusion, which in turn increases
the diffusion rate by the heavy to light mass ratio. For simplicity, in the
upcoming applications, we limit ourselves to a single-mass
population.
\Eq~\eqref{eq:DRR_III} is the main result of this work. It shows that for a
spherically symmetric and isotropic stellar distribution the diffusion coefficients\ associated
with \ac{RR}\ can be derived and calculated from first principles. Carrying out the
integral in \eq~\eqref{eq:DRR_III} is conceptually straightforward but can be
technically challenging. It requires solving the resonant condition and
integrating over $a^{\prime}$ and over the two true anomalies $f$ and $f^{\prime}$ in
${|A_{nn^{\prime}}|^{2}}$ (see \eq~\eqref{eq:Kl}). We provide a code,
{\small{\textsc{scRRpy}}}\footnote{Available at \url{https://github.com/benbaror/scrrpy}.}, in which the integration
is carried out using the Vegas Monte-Carlo integration scheme~\citep{Lepage1978}.
The Fokker-Planck equation~\eqref{eq:FP} can be rewritten in the more
traditional form
\begin{equation}
\label{eq:FP_II}\dpd{P(J,t)}{t}=-\dpd{}{J}D_{J}P(J,t)+\frac{1}{2}\dpd{}{{J^2}}D_{JJ}P(J,t),
\end{equation}
where the two-diffusion coefficients satisfy the fluctuation-dissipation
relation
\begin{equation}
D_J=\frac{1}{2 J}\,\frac{\partial(J D_{JJ})}{\partial J},\label{fluctuation_dissipation_relation}
\end{equation}
with ${D_{JJ}=D_{JJ}^\mathrm{RR}}$.
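The equivalence of the two forms, given the fluctuation-dissipation relation, can be verified numerically with finite differences on arbitrary smooth profiles; a Python/NumPy sketch (the profiles ${D_{JJ}(J)}$ and ${P(J)}$ below are arbitrary test functions, not physical ones):

```python
import numpy as np

J = np.linspace(0.1, 0.9, 20001)          # interior grid, arbitrary units
D = J**2 * (1.1 - J)                      # some smooth positive D_JJ(J)
P = np.exp(-(J - 0.5)**2 / 0.02)          # some smooth PDF-like profile

d = lambda y: np.gradient(y, J)           # finite-difference d/dJ

# first form: dP/dt = (1/2) d/dJ [ J D_JJ d/dJ (P/J) ]
rhs_a = 0.5 * d(J * D * d(P / J))

# second form: dP/dt = -d/dJ (D_J P) + (1/2) d^2/dJ^2 (D_JJ P),
# with D_J = (1/2J) d(J D_JJ)/dJ from the fluctuation-dissipation relation
D_J = d(J * D) / (2.0 * J)
rhs_b = -d(D_J * P) + 0.5 * d(d(D * P))

# the two right-hand sides agree up to discretization error (away from edges)
err = np.max(np.abs(rhs_a - rhs_b)[50:-50])
```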
Assuming an isotropic $J$ distribution and a single-mass population,
\eq~\eqref{eq:DRR_III} can be written as
\begin{equation}
\label{eq:DRR_SM}D_{JJ}^\mathrm{RR}(a,J)=2\tilde{\tau}^2T_\mathrm{c}(a)d_\mathrm{RR}(a,J),
\end{equation}
where ${\tilde{\tau}=(M_{\star}/M_\bullet)\sqrt{N(<2 a)}J_\mathrm{c}\nu_{\mathrm{r}}(a)}$ is the
typical strength of the residual torque~\citep{Gurkan+2007},
${T_\mathrm{c}(a)\sim 1/\nu_{\mathrm{M}}(a)}$ is the typical coherence time (see below), ${N(<2a)}$ is the number of
stars with \acp{sma}\ smaller than ${2a}$, and
\begin{align}
\label{eq:dRR_SM}d_\mathrm{RR}(a,J)={}&\!\!\sum_{n,n^{\prime}=1}^{\infty}\!\!\frac{n^2}{n^{\prime}}\dint a^{\prime}\frac{N(a^{\prime})}{N(<2a)}\nonumber\\&\times\sum_{J^{\prime}}\frac{4\pi J^{\prime} a^2{|A_{nn^{\prime}}(a,J\!,a^{\prime}\!,J^{\prime})|}^2}{{J_\mathrm{c}(a^{\prime})}^2|\partial_{J^{\prime}}\nu_{\mathrm{p}}(a^{\prime}\!,J^{\prime})|T_\mathrm{c}(a)},
\end{align}
is dimensionless.
Since we assumed that the cluster is isotropic in velocities, i.e.\ ${\partial f(J,E)/\partial J=0}$, there is no
dynamical friction\footnote{Here we define dynamical friction as the drift term
proportional to the mass of the subject star, as opposed to the parametric
drift proportional to the mass of the field stars~\citep{Chavanis2012}.}, and no
amplification through collective effects~\citep{Fouvry+2017b}. In the absence
of a loss-cone, the zero flux steady state solution reads
${f(J)=2J/J_\mathrm{c}^{2}}$. In practice, ${f(J)}$ is logarithmically suppressed near
the loss-cone, $J_\mathrm{lc}$, and therefore deviates from the isotropic ${f(J)\propto J}$
distribution. As a result, both dynamical friction and collective effects can
become important near the loss-cone. However, as \ac{RR}\ is quenched near the loss
cone (see Figure~\ref{fig:DRR}), these effects are expected to be of no
practical importance for the overall relaxation (which also includes two-body
relaxation).
In Figure~\ref{fig:DRR} we show the \ac{RR}\ diffusion coefficient\ for the normalized angular
momentum ${j=J/J_\mathrm{c}}$, with the notation ${D_{jj}=D_{JJ}^\mathrm{RR}/J_\mathrm{c}^{2}}$, as a function of $j$
and compare it with two-body
relaxation~\citep[][\eq~{(125)}]{Bar-Or+2016} at
${a\simeq 8\,\mathrm{mpc}}$. Here, we assume a stellar population with a \ac{BW}\ cusp
density profile ${\rho(r)\propto r^{-7/4}}$ and stars of one solar mass
${M=M_\odot}$ around a \ac{MBH}\ of mass ${M_\bullet=4\times 10^{6}M_{\odot}}$, with
a total stellar mass $M_\bullet$ within the radius of influence
${r_\mathrm{h}=2\,\mathrm{pc}}$. The various bumps appearing in Figure~\ref{fig:DRR}
are associated with the respective contributions from different resonance pairings ${(n,n^{\prime})}$.
Figure~\ref{fig:DRR} also demonstrates the fast
convergence of $D_{JJ}^\mathrm{RR}$ w.r.t.\ the harmonic number $\ell$. Unlike two-body
relaxation which has a logarithmic divergence (manifested in the Coulomb
logarithm), \ac{RR}\ has no divergences.
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=0.48\textwidth]{Figure1}
\caption{\label{fig:DRR}~Normalized angular momentum ${j=J/J_\mathrm{c}}$
diffusion coefficients as a function of $j$ for two-body (dotted line)
and resonant relaxation (\ac{RR}, dots). The \ac{RR}\ diffusion coefficient is
calculated using~\eq~\eqref{eq:DRR_III}, while the two-body relaxation
time is calculated from \eq~(125) in~\cite{Bar-Or+2016}. The sharp drop
in the \ac{RR}\ diffusion coefficient occurs where the precession frequency
of the test star ${\nu_{\mathrm{p}}(j)}$ is comparable to the coherence frequency
of the system ${1/T_\mathrm{c}(a)}$ (see Figure~\ref{fig:drr}). We also show
the convergence w.r.t.\ the maximal harmonic number $\ell$ in black
lines, from ${\ell=1}$ (thickest line) to ${\ell=5}$ (thinnest
line with dots). In this figure, the diffusion coefficients are evaluated
at ${a\simeq 8\,\mathrm{mpc}}$ for an isotropic \ac{BW}\ cusp
${f(E)\propto E^{1/4}}$ of solar mass stars around a \ac{MBH}\ of
${M_\bullet={4\times 10^6 M_\odot}}$ and a total stellar mass $M_\bullet$
within the radius of influence ${r_\mathrm{h}=2\mathrm{pc}}$, using
{\small{\textsc{scRRpy}}}\@.}
\end{center}
\end{figure}
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=0.48\textwidth]{Figure2}
\caption{\label{fig:drr}~The dimensionless diffusion coefficient $d_\mathrm{RR}$ (\eq~\eqref{eq:dRR_SM})
as a function of $j$ at different \acp{sma}\@. At small eccentricity (${1-j\ll 1}$), $d_\mathrm{RR}$ can be
approximated by ${d_\mathrm{RR}\approx\langle\tau^2\rangle/\tilde{\tau}^2\approx 0.07(1-j)}$ (dashed line). When the relativistic precession approaches ${\sim 0.45/T_\mathrm{c}}$, $d_\mathrm{RR}$ drops as $j$ becomes an adiabatic invariant.
The cusp's parameters are the same as in Figure~\ref{fig:DRR}.
}
\end{center}
\end{figure}
As shown in Figure~\ref{fig:drr}, near ${J=J_\mathrm{c}}$ (i.e.\ ${e\ll 1}$),
$D_{JJ}^\mathrm{RR}$ is well approximated by ${D_{JJ}^\mathrm{RR}\simeq 2T_\mathrm{c}\langle\tau^2\rangle}$,
where ${\langle\tau^2\rangle/\tilde{\tau}^2\simeq 0.07(1-j)}$ is the
averaged residual torque~\citep{Bar-Or+2016}, and $T_\mathrm{c}$ is the typical
correlation time of the system. Here, we also find that the ansatz
${T_\mathrm{c}\simeq\sqrt{\pi/2}\,\nu_{\mathrm{p}}^{-1}(2a,1/\sqrt{2})}$
from~\cite{Bar-Or+2016}, for which the correlation time is proportional to the
median precession time evaluated at ${2a}$ (the median eccentricity is
${e=j=1/\sqrt{2}}$), provides a good approximation
of $D_{JJ}^\mathrm{RR}$, as long as ${\nu_{\mathrm{p}}(2a,1/\sqrt{2})}$ is dominated by mass
precession, i.e.\ most of the field stars at this \ac{sma}\ are non-relativistic.
This implies that for non-relativistic orbits, \ac{RR}\ scales as
${D_{JJ}^\mathrm{RR}/J_\mathrm{c}^2\sim(M/M_\bullet)\nu_{\mathrm{r}}(a)}$ which is independent of the number of
stars~\citep{Rauch+1996, Hopman+2006a},
while two-body relaxation scales like
$D_{JJ}^\mathrm{NR}\sim N(<a)(M^2/M_\bullet^2)\nu_{\mathrm{r}}(a)\log\Lambda$. Therefore, since
$N(<a)\le M_\bullet/M$, \ac{RR}\ can be significantly faster than two-body
relaxation in some regions of orbital space~\citep[e.g.,][]{Bar-Or+2016}.
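The scalings above give a quick order-of-magnitude criterion for where \ac{RR}\ beats two-body relaxation; a Python sketch (all numbers illustrative):

```python
def rr_over_nr(N_enc, mass_ratio, logL):
    """order-of-magnitude ratio D_RR / D_NR ~ 1 / (N(<a) * (M/M_bh) * log Lambda)"""
    return 1.0 / (N_enc * mass_ratio * logL)

# deep in the cusp (illustrative: N(<a) ~ 1e4, M/M_bh = 2.5e-7, log Lambda ~ 15)
deep = rr_over_nr(1e4, 2.5e-7, 15.0)      # > 1: RR dominates
# near the radius of influence (N(<a) ~ M_bh/M = 4e6)
edge = rr_over_nr(4e6, 2.5e-7, 15.0)      # < 1: two-body relaxation dominates
```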
When the precession frequency of the test star, ${\nu_{\mathrm{p}}=\nu_{\mathrm{M}}+\nu_{\mathrm{GR}}}$, approaches
${1/T_\mathrm{c}}$, $D_{JJ}^\mathrm{RR}$ sharply drops as it enters the relativistic regime where
the precession frequency of the test star is higher than the precession of the
bulk of the field stars, and $J$ becomes an adiabatic invariant. In Figure~\ref{fig:drr}
we show that this suppression of \ac{RR}\ occurs at $j_0$ where
${\nu_{\mathrm{GR}}(a,j_0)\simeq 0.45/T_\mathrm{c}}$.
\section{Discussion}
\label{s:dcs}
In this section, we briefly investigate the phase-space structure of $D_{JJ}^\mathrm{RR}$,
compare it to the standard two-body relaxation diffusion coefficients\@, and comment on its
contribution to various physical phenomena in the vicinity of the \ac{MBH}\@. We
use {\small{\textsc{scRRpy}}}\ to calculate $D_{JJ}^\mathrm{RR}$
and for simplicity, we consider as previously a stellar cluster composed of a
single-mass population with a \ac{BW}\ power-law density cusp
${\rho(r)\propto r^{-7/4}}$.
As shown in Figure~\ref{fig:phasespace}, diffusion by \ac{RR}\ can be faster than
two-body relaxation in a limited region of phase space. Interestingly, the
orbits of the young stellar population cluster in the Milky-Way Galactic
center~\citep[the S-stars cluster,][]{Ghez+2003, Schödel+2003,
Gillessen+2009,Gillessen+2017} are within this region.
At low ${j=J/J_\mathrm{c}}$ and low $a$, \ac{RR}\ is quenched by adiabatic invariance. This
is because the relativistic precession $\nu_{\mathrm{GR}}$ increases as ${1/(a j^2)}$
(see \eq~\eqref{eq:nugr}), and when ${\nu_{\mathrm{GR}}(j)}$ is larger than the coherence
frequency $1/T_\mathrm{c}$, the diffusion coefficient\ decays rapidly, as demonstrated in
Figure~\ref{fig:DRR}. In Figure~\ref{fig:phasespace}, this translates to a
line in ${(a,J/J_\mathrm{c})}$ phase space where RR is quenched and two-body
relaxation takes over. This line is associated with the so-called
``Schwarzschild barrier'' that is observed in $N$-body\xspace\
simulations~\citep{Merritt+2011}. At large \ac{sma}, the mass precession time becomes
comparable to the orbital time and two-body relaxation wins over \ac{RR}\@.
Generally, event rates associated with loss-cone dynamics like \acp{TDE}\ and
binary disruptions will be governed either by the dynamics near the boundary
between full- and empty loss-cone or by the dynamics near the radius of
influence $r_\mathrm{h}$, depending on which radius is smaller. Typically, for a \ac{MBH}\ with
a mass of ${10^{5}\!-\!10^{7}\,M_\odot}$, these are of the same order of
magnitude~\citep[e.g.,][]{Alexander+2017}. Near the radius of influence, the
precession time is comparable to the orbital period and thus the dynamics is
governed by two-body relaxation. Therefore, \ac{RR}\ is not expected to have a
significant effect on these rates.
As shown in Figure~\ref{fig:phasespace}, \ac{RR}\ can have an effect on the
dynamics of stars deep in the cusp. \citet{Hopman+2006a} suggested that \ac{RR}\
can significantly increase the rates of \acp{EMRI}\@. Additionally, $N$-body\xspace\
simulations~\citep{Merritt+2011, Brem+2014}, which inherently contain both \ac{RR}\
and two-body relaxation, showed that in practice the rates of \acp{EMRI}\ are
comparable to the ones obtained by considering only two-body relaxation.
\citet{Bar-Or+2016} showed that as long as the region where the \ac{RR}\ diffusion
dominates over two-body relaxation is far from the region where \ac{GW}\ emission
dominates the orbital evolution (see dotted contour in
Figure~\ref{fig:phasespace}), \ac{RR}\ will not contribute significantly to the
\acp{EMRI}\ event rate. This is indeed the case for the cusp considered in
Figure~\ref{fig:phasespace}.
\begin{figure*}[htbp!]
\begin{center}
\includegraphics[width=0.9\textwidth]{Figure3}
\caption{\label{fig:phasespace}~Phase space structure of
${D_{JJ}^\mathrm{RR}/J_\mathrm{c}^2}$, shown as a color map on a logarithmic scale. Diffusion
by \ac{RR}\ is faster than two-body or non-resonant relaxation (NR) in a limited region of
phase-space (solid black contours), which is far from the region where
gravitational wave emission dominates the orbital evolution (dashed contour). The orbits of
S-stars observed in the Galactic center~\citep{Gillessen+2017} (red
circles), lie in the \ac{RR}-dominated region. Orbits beyond the relativistic
loss-cone, ${J_\mathrm{lc}=4GM_\bullet/c}$ (solid red line), are short-lived. Solar
mass stars will tidally disrupt if their orbital pericenter distance is
smaller than
${r_{\mathrm{t}}={(M_\bullet/M_\odot)}^{1/3}R_\odot}$~\citep{Alexander2017} (dashed
red line). The cusp's parameters are the same as in Figure~\ref{fig:DRR}.
}
\end{center}
\end{figure*}
Both \ac{RR}\ and two-body relaxation will drive the stellar distribution toward an
isotropic distribution in angular momentum, i.e.\ ${f(J)=2J/J_\mathrm{c}^{2}}$, or
$f(e)=2e$ in eccentricity, when neglecting loss-cone effects. This will
happen over the relaxation timescale, which is of order of the diffusion
timescale ${T_j(a)=1/D_{jj}^\mathrm{iso}(a)}$, where
${D_{jj}^\mathrm{iso}(a)=\!\int\!\mathrm{d} j\,2 j D_{jj}(a,j)}$ is the
isotropic averaged diffusion coefficient\@.
In Figure~\ref{fig:timescales} we show the relaxation times for \ac{RR}\ and for
two-body relaxation. While the two-body diffusion time scales as
${T_{j}^\mathrm{NR}(a)\sim{(M_\bullet/M_\star)}^2 P(a)/(N(<a)\log\Lambda)}$,
where ${P(a)}$ is the orbital period, the \ac{RR}\ diffusion time scales as
${T_{j}^\mathrm{RR}(a)\sim(M_\bullet/M_\star)P(a)}$ in the region where the
precession is dominated by mass precession. As shown in
Figure~\ref{fig:timescales}, \ac{RR}\ can be significantly faster than two-body
relaxation deep in the cusp until it is quenched by the relativistic
precession. Although faster than two-body relaxation, the \ac{RR}\ diffusion
timescale is longer than the ages of some of the young stars observed in our
Galactic center~\citep{Habibi+2017}. This suggests that these stars did not have
the time to relax by \ac{RR}\ to the current nearly thermal ${f(e)\propto e}$
distribution observed today~\citep{Gillessen+2017}. Let us note however that in
a multi-mass system, one has
${T_{j}^\mathrm{RR}(a)\sim M_\bullet\langle M_{\star}\rangle/\langle{M_{\star}}^2\rangle\, P(a)}$ and the diffusion time can be shorter by an order of
magnitude in regions where stellar black holes dominate the total enclosed
mass, as expected from strong mass segregation~\citep{Alexander+2009}.
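As a back-of-the-envelope illustration of these scalings, the following Python sketch compares $T_j^{\mathrm{NR}}$ and $T_j^{\mathrm{RR}}$ up to their $O(1)$ prefactors. The numerical values (a $4\times10^6\,M_\odot$ \ac{MBH}, solar-mass stars, $N(<a)=10^5$, $P(a)=10^3\,\mathrm{yr}$, $\log\Lambda=13$) are assumed here for illustration only and are not taken from the text:

```python
def t_nr(P, M_bh, M_star, N_enc, log_lambda):
    """Two-body (non-resonant) relaxation time, up to an O(1) prefactor:
    T_j^NR ~ (M_bh/M_star)^2 * P / (N(<a) * log Lambda)."""
    return (M_bh / M_star) ** 2 * P / (N_enc * log_lambda)

def t_rr(P, M_bh, M_star):
    """Scalar resonant relaxation time in the mass-precession regime,
    up to an O(1) prefactor: T_j^RR ~ (M_bh/M_star) * P."""
    return (M_bh / M_star) * P

# Illustrative (assumed) numbers: 4e6 Msun MBH, 1 Msun stars, 1e5 stars
# enclosed, orbital period 1e3 yr, Coulomb logarithm ~ 13.
P, M_bh, M_star, N_enc, logL = 1e3, 4e6, 1.0, 1e5, 13.0
ratio = t_nr(P, M_bh, M_star, N_enc, logL) / t_rr(P, M_bh, M_star)
# Scaling only: T_NR / T_RR ~ (M_bh/M_star) / (N logLambda), >> 1 deep in the cusp.
print(f"T_NR / T_RR ~ {ratio:.1f}")
```

The ratio depends only on ${(M_\bullet/M_\star)/(N\log\Lambda)}$, which is why \ac{RR}\ wins deep in the cusp, where $N(<a)$ is small.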
An additional relaxation mechanism that will randomize the orbital orientation
is the so-called ``vector resonant relaxation''~\citep{Rauch+1996,
Hopman+2006a, Kocsis+2015}. As shown in Figure~\ref{fig:timescales} this
process can randomize the orientations of the orbits (but not their
eccentricities) on a shorter timescale
${T_{j}^{\mathrm{VRR}}\simeq(M_\bullet/M_\star)(P(a)/\sqrt{N(<a)})}$~\citep{Kocsis+2015}.
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=0.48\textwidth]{Figure4}
\caption{\label{fig:timescales}~Angular momentum relaxation timescales
${T_j=1/D_{jj}^\mathrm{iso}}$ by \ac{RR}\ (solid line), and by
two-body relaxation (NR) (dashed line), as a function of semi-major
axis. Relaxation by \ac{RR}\ is faster than two-body relaxation
in a limited range of semi-major axis. For comparison, we show the main
sequence ages of some of the S-stars estimated recently
by~\cite{Habibi+2017} (red circles with error bars) and the vector \ac{RR}\
timescale (dotted line).
The cusp's parameters are the same as in Figure~\ref{fig:DRR}.
}
\end{center}
\end{figure}
\section{Summary}
\label{s:summary}
Relaxation processes in dense stellar systems around a \ac{MBH}\ are a classical
problem of stellar dynamics. Understanding these processes is crucial for
determining the long-term steady-state stellar distribution of nuclear clusters
and the mass segregation therein; short-term transient phenomena such as tidal
disruptions, gravitational wave emission, and hypervelocity stars; and the
distribution of unique source populations such as young stars, X-ray binaries,
and radio pulsars, to name a few~\citep{Alexander2017}.
All these phenomena depend both on the relaxation in energy and in angular
momentum. The relaxation in energy is well described by two-body relaxation,
where the diffusion coefficients\ can be calculated from first principles for an isotropic
distribution function. The only poorly determined quantity is the Coulomb
logarithm, which has only a small effect on the diffusion.
Despite the approximation made in the derivation of these
diffusion coefficients~\citep[e.g.,][]{Nelson+1999}, they are in good agreement with the ones
measured in direct $N$-body\xspace\ simulations~\citep{Bar-Or+2013}. However, relaxation
in angular momentum can be dominated by \ac{RR}~\citep{Rauch+1996,
Hopman+2006a}. While this was demonstrated by~\citet{Eilon+2009} and especially
by~\citet{Merritt+2011} using direct $N$-body\xspace\ simulations, a complete and
self-consistent theory of \ac{RR}\ was still lacking. The foundation for a concrete
kinetic theory of \ac{RR}\ was put forward independently
by~\citet{Bar-Or+2014},~\cite{Sridhar+2016}, and~\cite{Fouvry+2017a}.
In~\citet{Fouvry+2017b}, we generalized the method of~\citet{Bar-Or+2014} to a
general stochastic Hamiltonian with integrable mean field and showed it to be
equivalent to the (degenerate) \ac{BL}\ and Landau equations~\citep{Heyvaerts2010,
Chavanis2012, Chavanis2013}. This means that the different approaches
of~\citet{Bar-Or+2014, Sridhar+2016} and~\citet{Fouvry+2017a}, although
different in details, are essentially equivalent.
Building upon~\citet{Bar-Or+2014} and~\cite{Fouvry+2017b}, we presented here,
for the first time, a calculation of the scalar \ac{RR}\ diffusion coefficients\ from first
principles and without any free parameters. This brings to a close the long
journey, started by~\cite{Rauch+1996}, of bringing the kinetic theory of \ac{RR}\
to the same level of completeness as the standard two-body relaxation
one. Although this treatment is limited to the diffusion of the angular
momentum magnitude in a spherical and isotropic background distribution, for
which collective effects can be ignored~\citep[e.g.,][]{Nelson+1999}, the same
limitations also apply to standard two-body relaxation
(see~\cite{Vasiliev2015} for applications to non-spherically symmetric systems).
Here, we also assumed that the \ac{MBH}\ dominates the potential. This assumption
will break down close to the radius of influence, where the contribution of the
underlying stellar population is comparable to that of the \ac{MBH}\@. This is not
a significant limitation as \ac{RR}\ is negligible compared to two-body relaxation
at this point (see Figure~\ref{fig:timescales}). Some of these limitations
could be mitigated in following studies, and the entire kinetic theory could be
tested against future $N$-body\xspace\ simulations that are already approaching a
realistic number of stars in galactic nuclei~\citep{Panamarev+2018}.
The ability to calculate \ac{RR}\ diffusion coefficients\ provides us with the opportunity to make
more realistic estimates on the effects of \ac{RR}\ on astrophysical phenomena in
galactic nuclei. As shown in Figure~\ref{fig:timescales}, \ac{RR}\ can dramatically
reduce the relaxation time in angular momentum. As a result, even short-lived
populations (like the young S-star cluster) can be relaxed to a thermal
eccentricity distribution. As \ac{RR}\ can efficiently drive the angular momentum
evolution, it may contribute to the supply rate of stellar objects into the
loss-cone. This contribution is significant only if the loss-cone is close to
the region where \ac{RR}\ dominates the diffusion over two-body relaxation and will
depend on the underlying stellar distribution and the specific loss-cone
scenario. We show that for a standard stellar population following a \ac{BW}\ cusp
around a ${4\times 10^6 M_\odot}$ \ac{MBH}, the contribution of \ac{RR}\ to the
\acp{EMRI}\ and \acp{TDE}\ rates is negligible.
\section*{Acknowledgements}
We are grateful to Tal Alexander, Adrian Hamers, Kirill Lezhnin, John
Magorrian, Christophe Pichon and Scott Tremaine for fruitful discussions. BB
acknowledges support from the Schmidt Fellowship. JBF acknowledges support
from Program number HST-HF2--51374 which was provided by NASA through a grant
from the Space Telescope Science Institute, which is operated by the
Association of Universities for Research in Astronomy, Incorporated, under NASA
contract NAS5--26555. This research is carried out in part within the
framework of the Spin(e) collaboration (ANR-13-BS05-0005,
\url{http://cosmicorigin.org}).
\bibliographystyle{apj}
|
{
"timestamp": "2018-02-27T02:07:30",
"yymm": "1802",
"arxiv_id": "1802.08890",
"language": "en",
"url": "https://arxiv.org/abs/1802.08890"
}
|
\section{Introduction}
In the field of geometric function theory, the hyperbolic metric plays an important role. In higher dimensional Euclidean spaces, the hyperbolic metric exists only in balls and half-spaces, and the lack of a hyperbolic metric in general domains has been a primary motivation for introducing the so-called hyperbolic-type metrics in the sense of Gromov. Examples include the $\widetilde{j}$-metric, the Apollonian metric, Seittenranta's metric, the half-Apollonian metric, the scale-invariant Cassinian metric and the M\"{o}bius-invariant Cassinian metric (see \cite{Beardon1,Hasto1,Hasto2,Hasto3,Ibragimov1, Ibragimov3, Ibragimov4,Seittenranta,Vuorinen} and the references therein). All these metrics are defined in terms of distance functions and can be classified into one-point metrics or two-point metrics according to the number of boundary points used in their definitions. Recently, in the paper \cite{AZW}, the authors proposed an approach to construct a metric from the one-point metrics. More precisely, let $(X,d)$ be an arbitrary metric space. For each $p\in X$, they defined a distance function $\tau_p$ on $X \setminus\{p\}$ by
$$
\tau_p(x,y)=\log(1+2\frac{d(x,y)}{\sqrt{d(p,x)}\sqrt{d(p,y)}})
$$
and proved that for each $p\in X$, the space $(X\setminus\{p\},\tau_p)$ is Gromov hyperbolic with $\delta=\log3 +\log2$. In fact, the following more general distance function was first introduced by O. Dovgoshey, P. Hariri and M. Vuorinen in \cite{Vuorinen2}:
$$
h_{D,c}(x,y)=\log(1+c\frac{d(x,y)}{\sqrt{d_D(x)d_D(y)}})
$$
where $D$ is a nonempty open set in a metric space $(X, d)$, $d_D(x)=\mathrm{dist}(x, \partial D)$ and $c\geq 2$. They showed that $h_{D,c}$ is a metric for every $c\geq 2$ and that the constant $2$ is best possible.
Although hyperbolicity yields a very satisfactory theory, for certain analytic purposes hyperbolicity by itself is not enough, and one needs certain enhancements. In the paper \cite{NJ}, the authors introduced the notion of a strongly hyperbolic space and gave such enhancements. They showed that strongly hyperbolic spaces are Gromov hyperbolic spaces that are metrically well-behaved at infinity and that, under weak geodesic assumptions, strongly hyperbolic spaces are strongly bolic as well. They also showed that $\mathrm{CAT}(-1)$ spaces are strongly hyperbolic, and that the Green metric defined by a random walk on a hyperbolic group is strongly hyperbolic. Since strongly hyperbolic spaces have better properties, it is interesting to determine which hyperbolic metrics in geometric function theory are strongly hyperbolic, or to construct a strongly hyperbolic metric on a given metric space. We consider this problem for Ptolemy spaces in this paper.
Firstly, we show that the $\log$-metric of a Ptolemy space is a strongly hyperbolic metric. That is, we show that if $(X,d)$ is a Ptolemy space, then $(X,\log(1+d))$ is a strongly hyperbolic space. Using our result, we can show that the metric space $(X, S_p)$ is also a strongly hyperbolic space. Here $$S_p(x,y)=\log(1+
\frac{d(x,y)}{[1+d(x,p)][1+d(y,p)]})$$
for a fixed point $p\in X$ and $x,y\in X$.
Secondly, motivated by the recent work of A. G. Aksoy, Z. Ibragimov and W. Whiting in \cite{AZW}, we construct a strongly hyperbolic metric on a Ptolemy metric space. To formulate the results of our paper, for each $p\in X$, we define a distance function $\chi_p$ on $X \setminus\{p\}$ by
$$
\chi_p(x,y)=\log(1+\frac{d(x,y)}{d(p,x)d(p,y)}).
$$
We prove that if $(X,d)$ is a Ptolemy space, then for each $p\in X$ the distance function $\chi_p$ is a strongly hyperbolic metric on $X\setminus\{p\}$.
We also consider the distortion of the metric $\chi_p$ under M\"{o}bius maps of a punctured ball in $\mathbb{R}^n$.
\section{Strongly hyperbolic metrics on Ptolemy spaces}
We begin by recalling some basic notions and facts. Let $X$ be a metric space and fix a base point $o\in X$; the Gromov product of $x, x'\in X$ with respect to $o$ is defined as
$$
(x|x')_o:=\frac{1}{2}(|ox|+|ox'|-|xx'|).
$$
Note that $(x|x')_o\geq 0$ by the triangle inequality.
\begin{definition}[Gromov]\label{def-1}
A metric space $X$ is \emph{$\delta$-hyperbolic}, where $\delta \geq 0$, if
$$
(x|y)_o\geq \min\{(x|z)_o,(z|y)_o\}-\delta
$$
for all $x, y, z,o\in X$.
\end{definition}
In the paper \cite{NJ}, the authors gave the following enhancement of hyperbolicity.
\begin{definition}[\cite{NJ}, Definition 4.1]We say that a metric space is \emph{strongly hyperbolic} with parameter $\epsilon>0$ if
$$
\exp(-\epsilon(x|y)_o)\leq \exp(-\epsilon(x|z)_o)+\exp(-\epsilon(z|y)_o)
$$
for all $x, y, z, o\in X$; equivalently, the four-point condition
$$
\exp(\frac{\epsilon}{2}(|xy|+|zt|))\leq \exp(\frac{\epsilon}{2}(|xz|+|yt|))+\exp(\frac{\epsilon}{2}(|xt|+|zy|))
$$
holds for all $x, y, z, t\in X$.
\end{definition}
The motivation for considering this notion of strong hyperbolicity is the following theorem from \cite{NJ}.
\begin{theorem}[\cite{NJ}, Theorem 4.2] \label{NJThereom}
Let $X$ be a strongly hyperbolic space with parameter $\epsilon$. Then $X$ is an $\epsilon$-good, $\log2/\epsilon$-hyperbolic space. Furthermore, $X$ is strongly bolic provided that $X$ is roughly geodesic.
\end{theorem}
Strongly bolic metric spaces were considered by V. Lafforgue in
\cite{Lafforgue} in relation with the Baum--Connes conjecture. For hyperbolic spaces $(X,d)$ which are roughly geodesic, strong bolicity in the sense of Lafforgue \cite{Lafforgue} amounts to the following:
for every $\eta,r>0$, there exists $R>0$ such that $d(x,y)+d(z,t)\leq r$ and $d(x,z)+d(y,t)\geq R$ imply that
$d(x,t)+d(y,z)\leq d(x,z)+d(y,t)+\eta$.
Theorem \ref{NJThereom} shows that strongly hyperbolic spaces have better properties than general hyperbolic spaces.
Thus it is interesting to construct a strongly hyperbolic metric on a given metric space.
\begin{definition} A metric space $(X,d)$ is called a \emph{Ptolemy space} if the Ptolemy inequality
$$
d(x_1,x_2)d(x_3,x_4)\leq d(x_1,x_4)d(x_2,x_3)+d(x_1,x_3)d(x_2,x_4)
$$
holds for all quadruples $x_1,x_2,x_3,x_4\in X$.
\end{definition}
\begin{lemma}\label{lemma-2.1}
Suppose $(X,d)$ is a metric space and $x_i\in X$ for $i=1,2,3,4$. Then
$$
d(x_1,x_2)+d(x_3,x_4)\leq d(x_1,x_3)+d(x_1,x_4)+d(x_2,x_3)+d(x_2,x_4).
$$
\end{lemma}
\begin{prof}
By the triangle inequality, we have
\begin{align}\label{ine-2.1.2} \nonumber
d(x_1,x_2)\leq d(x_1,x_3)+d(x_3,x_2),\\ \nonumber
d(x_1,x_2)\leq d(x_1,x_4)+d(x_4,x_2),\\ \nonumber
d(x_3,x_4)\leq d(x_3,x_1)+d(x_1,x_4),\\ \nonumber
d(x_3,x_4)\leq d(x_3,x_2)+d(x_2,x_4). \nonumber
\end{align}
We sum the above four inequalities and obtain that
$$
d(x_1,x_2)+d(x_3,x_4)\leq d(x_1,x_3)+d(x_1,x_4)+d(x_2,x_3)+d(x_2,x_4).
$$\qed
\end{prof}
\begin{theorem}\label{keytheorem}
Suppose that $(X,d)$ is a Ptolemy space. Then the metric space $(X,\log(1+d))$ is a strongly hyperbolic space with parameter $2$.
\end{theorem}
\begin{prof}
Let $x_1,x_2,x_3,x_4\in X$. We introduce the following notation for convenience:
$\rho_{ij}=\log(1+d(x_i,x_j))$ and $d_{ij}=d(x_i,x_j)$ for all $i,j\in \{1,2,3,4\}$. Thus
$$
\rho_{ij}=\log(1+d_{ij}).
$$
Now, we need to show that
$$
e^{(\rho_{12}+\rho_{34})}\leq e^{(\rho_{13}+\rho_{24})}+e^{(\rho_{14}+\rho_{23})},
$$
which is equivalent to the following inequality,
$$
(1+d_{12})(1+d_{34})\leq (1+d_{13})(1+d_{24})+(1+d_{14})(1+d_{23}).
$$
Since $(X,d)$ is a Ptolemy space, by Lemma \ref{lemma-2.1} we have
\begin{equation*}
\begin{split}
(1+d_{12})(1+d_{34})&\;=1+d_{12}+d_{34}+d_{12}d_{34}\\
&\;\leq 2+d_{13}+d_{24}+d_{14}+d_{23}+d_{14}d_{23}+d_{13}d_{24}\\
&\;=(1+d_{13})(1+d_{24})+(1+d_{14})(1+d_{23}).\\
\end{split}
\end{equation*}
Thus, the metric space $(X, \log(1+d))$ is a strongly hyperbolic space with parameter $\epsilon=2$.
\qed
\end{prof}
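Since Euclidean space is a Ptolemy space, Theorem \ref{keytheorem} can be sanity-checked numerically. The following Python sketch (an illustration only, not part of the proof) tests the strong hyperbolicity four-point condition with $\epsilon=2$ for the metric $\log(1+|x-y|)$ on random quadruples in $\mathbb{R}^3$:

```python
import math
import random

def rho(u, v):
    # The log-metric of Theorem \ref{keytheorem}: rho = log(1 + d), d Euclidean.
    return math.log(1.0 + math.dist(u, v))

def four_point_ok(x, y, z, t, eps=2.0):
    # Strong hyperbolicity four-point condition with parameter eps:
    # e^{(eps/2)(rho(x,y)+rho(z,t))}
    #   <= e^{(eps/2)(rho(x,z)+rho(y,t))} + e^{(eps/2)(rho(x,t)+rho(z,y))}
    lhs = math.exp(eps / 2 * (rho(x, y) + rho(z, t)))
    rhs = (math.exp(eps / 2 * (rho(x, z) + rho(y, t)))
           + math.exp(eps / 2 * (rho(x, t) + rho(z, y))))
    return lhs <= rhs + 1e-9

random.seed(0)
quads = [[tuple(random.uniform(-5, 5) for _ in range(3)) for _ in range(4)]
         for _ in range(5000)]
print(all(four_point_ok(*q) for q in quads))  # True: R^3 is a Ptolemy space
```

With $\epsilon=2$ the condition reduces exactly to the product inequality $(1+d_{12})(1+d_{34})\leq(1+d_{13})(1+d_{24})+(1+d_{14})(1+d_{23})$ verified in the proof.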
Let $(X,d)$ be any metric space and fix a base point $p\in X$. The following distance function $s_p$ was considered in the paper \cite{BHX}:
$$
s_p(x,y)= \frac{d(x,y)}{[1+d(x,p)][1+d(y,p)]}
$$
for $x,y\in X$. In general $s_p$ may fail the triangle inequality, so it need not be a metric. In this paper, we have the following result.
\begin{theorem}Suppose $(X,d)$ is a Ptolemy space and $p\in X$. Then $(X, s_p)$ is also a Ptolemy space.
\end{theorem}
\begin{proof}
Firstly, we prove that $s_p$ is a metric. Obviously, $s_p(x,y)\geq 0$, $s_p(x,y)=s_p(y,x)$ and $s_p(x, y)=0$ if and only
if $x=y$. So it is enough to show that the triangle inequality holds, that is, for all $x, y, z\in X$,
$$
s_p(x, y)\leq s_p(x, z)+s_p(z, y),
$$
which is equivalent to
$$
d(x,y)[1+d(z,p)]\leq d(x,z)[1+d(y,p)]+d(y,z)[1+d(x,p)].
$$
Indeed, combining the triangle inequality $d(x,y)\leq d(x,z)+d(z,y)$ with the Ptolemy inequality $d(x,y)d(z,p)\leq d(x,z)d(y,p)+d(y,z)d(x,p)$ yields the above inequality, which implies that $s_p$ is a metric on $X$.
Now, we show that $(X, s_p)$ is also a Ptolemy space.
Let $x_i\in X$ for $i=1,2,3,4$. Set $p_i=1+d(p,x_i)$ and $d_{ij}=d(x_i,x_j)$,
so that $s_p(x_i,x_j)=d_{ij}/p_ip_j$
for $i,j\in \{1,2,3,4\}$.
Since $(X,d)$ is a Ptolemy space, we have
$$
d_{12}d_{34}\leq d_{13}d_{24}+d_{14}d_{23}.
$$
Dividing by $p_1p_2p_3p_4$ gives
$$
\frac{d_{12}d_{34}}{p_1p_2p_3p_4}\leq \frac{d_{13}d_{24}}{p_1p_2p_3p_4}+\frac{d_{14}d_{23}}{p_1p_2p_3p_4}.
$$
That is
$$
s_p(x_1,x_2)s_p(x_3,x_4)\leq s_p(x_1,x_3)s_p(x_2,x_4)+s_p(x_1,x_4)s_p(x_2,x_3),
$$
which implies that $(X, s_p)$ is also a Ptolemy space.
\end{proof}
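As a quick numerical illustration (not a proof), one can check both the triangle inequality and the Ptolemy inequality for $s_p$ on random points of the Euclidean plane, which is a Ptolemy space:

```python
import math
import random

def s_p(x, y, p):
    # s_p(x,y) = d(x,y) / ([1+d(x,p)][1+d(y,p)]), with d the Euclidean distance.
    return math.dist(x, y) / ((1 + math.dist(x, p)) * (1 + math.dist(y, p)))

random.seed(1)
p = (0.0, 0.0)
ok = True
for _ in range(5000):
    x1, x2, x3, x4 = [tuple(random.uniform(-4, 4) for _ in range(2))
                      for _ in range(4)]
    # Triangle inequality for s_p:
    ok &= s_p(x1, x2, p) <= s_p(x1, x3, p) + s_p(x3, x2, p) + 1e-9
    # Ptolemy inequality for s_p:
    ok &= (s_p(x1, x2, p) * s_p(x3, x4, p)
           <= s_p(x1, x3, p) * s_p(x2, x4, p)
              + s_p(x1, x4, p) * s_p(x2, x3, p) + 1e-9)
print(ok)  # True
```

The small tolerances only absorb floating-point rounding; both inequalities hold exactly by the theorem.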
Using $s_p$, we define the following metric $S_p$ on $X$
by
$$
S_p(x,y)=\log(1+s_p(x,y)).
$$
According to Theorem \ref{keytheorem}, we have the following result.
\begin{theorem}
Suppose $(X,d)$ is a Ptolemy space and $p\in X$. The metric space $(X, S_p)$ is a strongly hyperbolic space with parameter $\epsilon=2$. Thus $(X, S_p)$ is a $\log2/2$-hyperbolic space.
\end{theorem}
Suppose $(X,d)$ is a metric space. For each $p\in X$, A. G. Aksoy, Z. Ibragimov and W. Whiting defined a distance function $\tau_p$ on $X\setminus\{p\}$
in \cite{AZW} by
$$
\tau_p(x,y)=\log(1+2\frac{d(x,y)}{\sqrt{d(p,x)}\sqrt{d(p,y)}}).
$$
They obtained the following result.
\begin{theorem}[\cite{AZW}, Theorem 2.1 and Lemma 4.1]Let $(X, d)$ be a Ptolemy space and let $p\in X$ be an
arbitrary point. Then the distance function $\tau_p$ is a metric on $X\setminus\{p\}$. In
particular, the space $(X\setminus\{p\}, \tau_p)$ is Gromov hyperbolic with $\delta=\log 3+\log2$.
\end{theorem}
Motivated by the definition of $\tau_p$, for each $p\in X$, we define a distance function $\chi_p$ on $X\setminus\{p\}$
by
$$
\chi_p(x,y)=\log(1+\frac{d(x,y)}{d(p,x)d(p,y)}).
$$
In general, $\chi_p$ is not a metric on $X\setminus\{p\}$. However, when $(X,d)$ is a Ptolemy space, we have the following result.
\begin{theorem}Let $(X, d)$ be a Ptolemy metric space and let $p\in X$ be an
arbitrary point. Then the distance function $\chi_p$ is a metric on $X\setminus\{p\}$.
\end{theorem}
\begin{proof}
Obviously, $\chi_p(x,y)\geq 0$, $\chi_p(x,y)=\chi_p(y,x)$ and $\chi_p(x, y)=0$ if and only
if $x=y$. So it is enough to show that the triangle inequality holds, that is, for all $x, y, z\in X\setminus\{p\}$,
$$
\chi_p(x, y)\leq\chi_p(x, z)+\chi_p(z, y),
$$
which is equivalent to
$$
\frac{d(x,y)}{d(x,p)d(y,p)}\leq \frac{d(x,z)}{d(x,p)d(z,p)}
+\frac{d(y,z)}{d(y,p)d(z,p)}
+\frac{d(x,z)d(y,z)}{d(z,p)^2d(x,p)d(y,p)}.
$$
That is
\begin{align}\label{eq0-metric}
d(x,y)d(z,p)\leq d(x,z)d(y,p)+d(y,z)d(x,p)+\frac{d(x,z)d(y,z)}{d(z,p)}.
\end{align}
Since $(X,d)$ is a Ptolemy space, $d(x,y)d(z,p)\leq d(x,z)d(y,p)+d(y,z)d(x,p)$, and the last term on the right-hand side of (\ref{eq0-metric}) is nonnegative, so inequality (\ref{eq0-metric}) holds. This completes the proof.
\end{proof}
\begin{lemma}\label{lemma-2.2}
Suppose $(X,d)$ is a Ptolemy metric space and $x_i\in X$ for $i=0,1,2,3,4$.
Set $p_i=d(x_0,x_i)$ and $d_{ij}=d(x_i,x_j)$ for $i,j\in \{1,2,3,4\}$.
Then
$$
p_3p_4d_{12}+p_1p_2d_{34}\leq p_1p_3d_{24}+p_2p_4d_{13}+p_2p_3d_{14}+p_1p_4d_{23}.
$$
\end{lemma}
\begin{proof}
By the Ptolemy inequality, we have
\begin{align} \nonumber
p_3p_4d_{12}\leq p_3p_1d_{24}+p_3p_2d_{14},\\ \nonumber
p_3p_4d_{12}\leq p_4p_2d_{13}+p_1p_4d_{23},\\ \nonumber
p_1p_2d_{34}\leq p_1p_3d_{24}+p_1p_4d_{23},\\ \nonumber
p_1p_2d_{34}\leq p_2p_4d_{13}+p_2p_3d_{14}. \nonumber
\end{align}
We sum the above four inequalities and obtain that
$$
p_3p_4d_{12}+p_1p_2d_{34}\leq
p_1p_3d_{24}+p_2p_4d_{13}+p_2p_3d_{14}+p_1p_4d_{23}.
$$
\end{proof}
Using Lemma \ref{lemma-2.2}, we obtain the following result.
\begin{theorem}Let $(X, d)$ be a Ptolemy metric space and let $p\in X$ be an
arbitrary point. Then the metric space $(X\setminus\{p\}, \chi_p)$ is a strongly hyperbolic space with parameter $2$.
Thus $(X\setminus\{p\}, \chi_p)$ is a $\log2/2$-hyperbolic space.
\end{theorem}
\begin{proof}Let $x_1,x_2,x_3,x_4\in X\setminus\{p\}$. We introduce the following notation for convenience:
$d_{ij}=d(x_i,x_j)$, $p_i=d(p,x_i)$ and $\rho_{ij}=\chi_p(x_i,x_j)$ for $i,j\in \{1,2,3,4\}$. Thus
$$
\rho_{ij}=\log(1+\frac{d_{ij}}{p_ip_j})
$$
for $i,j\in \{1,2,3,4\}$.
Now, we need to show that
$$
e^{(\rho_{12}+\rho_{34})}\leq e^{(\rho_{13}+\rho_{24})}+e^{(\rho_{14}+\rho_{23})},
$$
which is equivalent to the following inequality
\begin{align}\nonumber
(1+\frac{d_{12}}{p_1p_2})(1+\frac{d_{34}}{p_3p_4})\leq (1+\frac{d_{13}}{p_1p_3})(1+\frac{d_{24}}{p_2p_4})\\ \nonumber
+(1+\frac{d_{14}}{p_1p_4})(1+\frac{d_{23}}{p_2p_3}). \nonumber
\end{align}
That is
\begin{align}
\frac{d_{12}}{p_1p_2}+\frac{d_{34}}{p_3p_4}+\frac{d_{12}}{p_1p_2}\frac{d_{34}}{p_3p_4}
&\leq\frac{d_{13}}{p_1p_3}+\frac{d_{24}}{p_2p_4}+\frac{d_{13}}{p_1p_3}\frac{d_{24}}{p_2p_4}\\ \nonumber
&+\frac{d_{14}}{p_1p_4}+\frac{d_{23}}{p_2p_3}+\frac{d_{14}}{p_1p_4}\frac{d_{23}}{p_2p_3}+1,\nonumber
\end{align}
which is equivalent to the following inequality
\begin{align}\label{strongly}
p_3p_4d_{12}+p_1p_2d_{34}+d_{12}d_{34}&\leq p_2p_4d_{13}+p_1p_3d_{24}+d_{13}d_{24}\\ \nonumber
&+p_2p_3d_{14}+p_1p_4d_{23}+d_{14}d_{23}\\ \nonumber
&+p_1p_2p_3p_4.\nonumber
\end{align}
Since $(X,d)$ is a Ptolemy space, we have
$$
d_{12}d_{34}\leq d_{13}d_{24}+d_{14}d_{23}.
$$
From Lemma \ref{lemma-2.2}, we have
$$
p_3p_4d_{12}+p_1p_2d_{34}\leq p_2p_4d_{13}+p_1p_3d_{24}+p_2p_3d_{14}+p_1p_4d_{23}.
$$
Thus, inequality (\ref{strongly}) holds, which implies that
$(X\setminus\{p\}, \chi_p)$ is a strongly hyperbolic space with parameter $2$. From Theorem \ref{NJThereom}, we conclude that $(X\setminus\{p\}, \chi_p)$ is a $\log2/2$-hyperbolic space. \end{proof}
\section{Distortion property under M\"{o}bius transformations}
In the following, we use the notation $\mathbb{R}^n$, $n\geq 2$, for the $n$-dimensional Euclidean space.
The Euclidean distance between $x, y\in \mathbb{R}^n$ is denoted by $|x-y|$. Given $x\in\mathbb{R}^n$ and $r>0$, the open ball centered at $x$ with radius $r$ is denoted by $B^n(x,r):=\{y\in \mathbb{R}^n: |x-y|< r\}$, and $\mathbb{B}^n := B^n(0,1)$ denotes the unit ball in $\mathbb{R}^n$. One of our objectives in this section is to study the distortion property of our metric under M\"{o}bius maps from a punctured ball onto another punctured ball. Distortion properties of the scale-invariant Cassinian metric of the unit ball under M\"{o}bius maps have been studied in \cite{Ibragimov3}. Recently, in \cite{Msahoo}, M. R. Mohapatra and S. K. Sahoo also considered the distortion of the $\widetilde{\tau}$-metric under M\"{o}bius maps of a punctured ball.
\begin{theorem}Let $a \in \mathbb{B}^n$ and let $f: \mathbb{B}^n \setminus \{0\}\rightarrow \mathbb{B}^n \setminus\{a\}$ be a M\"{o}bius map with $f(0)=a$. Then for $x,y\in\mathbb{B}^n \setminus \{0\}$, we have
$$
\chi_0(x,y)\leq\chi_a(f(x),f(y))\leq\chi_0(x,y)-\log(1-|a|^2).
$$
The equalities hold if and only if $a=0$.
\end{theorem}
\begin{proof}
If $a=0$, the proof is trivial since $f(x)=Ax$ for some orthogonal matrix $A$. Now we assume that $a\neq 0$. Let $\sigma$ be the inversion in the sphere
$\mathbb{S}^{n-1}(a^{*},r)=\{x\in \mathbb{R}^n: |x-a^{*}|=r\}$, where
$$
a^{*}=\frac{a}{|a|^2},r=\sqrt{|a^{*}|^2-1}=\frac{\sqrt{1-|a|^2}}{|a|}.
$$
Note that the sphere $\mathbb{S}^{n-1}(a^{*},r)$ is orthogonal to $\mathbb{S}^{n-1}$ and that $\sigma(a)=0$. In particular, $\sigma$ is a M\"{o}bius map with $\sigma(\mathbb{B}^n \setminus \{a\})= \mathbb{B}^n \setminus \{0\}$. Recall that
$$
\sigma(x)=a^{*}+\big(\frac{r}{|x-a^{*}|}\big)^2(x-a^{*}).
$$
Then $\sigma\circ f$ is an orthogonal matrix (see, for example, [\cite{Beardon}, Theorem 3.5.1(i)]). In particular,
$$
|\sigma(f(x))-\sigma(f(y))|=|x-y|.
$$
By computation, we have
$$
|\sigma(x)-\sigma(y)|=\frac{r^2|x-y|}{|x-a^{*}||y-a^{*}|}.
$$
Thus
$$
|\sigma(f(x))-\sigma(f(y))|=\frac{r^2|f(x)-f(y)|}{|f(x)-a^{*}||f(y)-a^{*}|}=|x-y|,
$$
which implies that
$$
|f(x)-f(y)|=\frac{|x-y|}{r^2}|f(x)-a^{*}||f(y)-a^{*}|.
$$
Since $f(0)=a$, we have
$$
|f(x)-a|=\frac{|f(x)-a^{*}||a-a^{*}|}{|a^{*}|^2-1}|x|\quad\text{and}\quad|f(y)-a|=\frac{|f(y)-a^{*}||a-a^{*}|}{|a^{*}|^2-1}|y|.
$$
Notice that
$$
\chi_0(x,y)=\log(1+\frac{|x-y|}{|x||y|})
$$
and
$$
\chi_a(f(x),f(y))=\log(1+\frac{|f(x)-f(y)|}{|f(x)-a||f(y)-a|}).
$$
We have
\begin{align}\nonumber
\chi_a(f(x),f(y))&=\log(1+\frac{|f(x)-f(y)|}{|f(x)-a||f(y)-a|})\\ \nonumber
&=\log(1+\frac{|x-y|}{|x||y|}\frac{|a^{*}|^2-1}{|a-a^{*}|^2})\\ \nonumber
&=\log(1+\frac{1}{1-|a|^2}\frac{|x-y|}{|x||y|}).\nonumber
\end{align}
Since $|a|<1$, we have $1\leq\frac{1}{1-|a|^2}$. Thus
$$
1+\frac{|x-y|}{|x||y|}\leq1+\frac{1}{1-|a|^2}\frac{|x-y|}{|x||y|}\leq\frac{1}{1-|a|^2}+\frac{1}{1-|a|^2}\frac{|x-y|}{|x||y|}.
$$
So
$$
\chi_0(x,y)\leq\chi_a(f(x),f(y))\leq\chi_0(x,y)-\log(1-|a|^2).
$$
Obviously, the equalities hold if and only if $a=0$.
\end{proof}
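The two-sided bound can be verified numerically for a concrete M\"{o}bius map: since the inversion $\sigma$ above is an involution with $\sigma(a)=0$ and $\sigma(0)=a$, taking $f=\sigma$ gives a M\"{o}bius map of $\mathbb{B}^n\setminus\{0\}$ onto $\mathbb{B}^n\setminus\{a\}$ with $f(0)=a$. The following Python sketch (illustrative only; the choice $a=(0.5,0,0)$ is arbitrary) checks the inequality on random points of the unit ball:

```python
import math
import random

def chi(x, y, c):
    # chi_c(x,y) = log(1 + |x-y| / (|x-c| |y-c|)), Euclidean distances.
    return math.log(1 + math.dist(x, y) / (math.dist(x, c) * math.dist(y, c)))

def inversion(x, center, r2):
    # Inversion in the sphere with the given center and squared radius r2.
    d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return tuple(ci + (r2 / d2) * (xi - ci) for xi, ci in zip(x, center))

a = (0.5, 0.0, 0.0)                        # an arbitrary point of B^3
a2 = sum(t * t for t in a)                 # |a|^2
a_star = tuple(t / a2 for t in a)          # a* = a/|a|^2
r2 = sum(t * t for t in a_star) - 1.0      # r^2 = |a*|^2 - 1
f = lambda x: inversion(x, a_star, r2)     # Mobius involution with f(0)=a

origin = (0.0, 0.0, 0.0)
slack = -math.log(1 - a2)                  # the additive term -log(1-|a|^2)
random.seed(3)
ok = True
for _ in range(5000):
    x, y = [tuple(random.uniform(-0.5, 0.5) for _ in range(3)) for _ in range(2)]
    if min(math.dist(x, origin), math.dist(y, origin)) < 1e-3:
        continue                           # chi_0 requires x, y != 0
    lo = chi(x, y, origin)
    mid = chi(f(x), f(y), a)
    ok &= lo <= mid + 1e-9 <= lo + slack + 2e-9
print(ok)  # True
```

The check mirrors the identity $\chi_a(f(x),f(y))=\log\bigl(1+\tfrac{1}{1-|a|^2}\tfrac{|x-y|}{|x||y|}\bigr)$ derived in the proof.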
\noindent\textbf{Acknowledgements.} This work was supported by the National Natural Science Foundation of China under grant Nos.\,11301165,11571099.
|
{
"timestamp": "2018-03-06T02:10:20",
"yymm": "1802",
"arxiv_id": "1802.08829",
"language": "en",
"url": "https://arxiv.org/abs/1802.08829"
}
|
\subsection*{Abstract}
This paper develops a technique to detect whether the cross traffic competing with a flow is elastic or not, and shows how to use the elasticity detector to improve congestion control. If the cross traffic is elastic, i.e., made up of flows like Cubic or NewReno that increase their rate when they perceive available bandwidth, then one should use a scheme that competes well with such traffic. Such a scheme will not be able to control delays because the cross traffic will not cooperate to maintain low delays. If, however, cross traffic is inelastic, then one can use a suitable delay-controlled algorithm.
Our elasticity detector uses an asymmetric sinusoidal pulse pattern and estimates elasticity by computing the frequency response (FFT) of the cross traffic estimate; we have measured its accuracy to be over 90\%. We present the design and evaluation of Nimbus\xspace, a congestion control protocol that uses the elasticity detector to switch between delay-control and TCP-competitive modes. Our results on emulated and real-world paths show that Nimbus achieves throughput comparable to or better than Cubic always, but with delays that are much lower when cross traffic is inelastic. Unlike BBR, Nimbus\xspace is fair to Cubic, and has significantly lower delay by 40-50 ms. Compared to Copa, which also switches between a delay-controlling and a TCP-competitive mode, Nimbus\xspace is more robust at correctly detecting the nature of cross traffic, and unlike Copa, it is usable by a variety of delay-based and TCP-competitive methods.
\section{Does Mode Switching Help Cross-Traffic?}
\label{app:cross-traffic-impact}
\begin{figure}[t]
\centering
\includegraphics[width=0.43\textwidth]{images/fct3.pdf}
\caption{\small{Using Nimbus\xspace reduces the p95 FCT of cross-flows relative to BBR at all flow sizes, and relative to Cubic for short flows. Vegas provides low cross-flow FCT, but its own rate is low.}}
\label{fig:emp:fct}
\end{figure}
Using the same setup as \Sec{rw-wl}, we measure the flow completion time (FCT) of cross-flows.
\Fig{emp:fct} compares the 95th percentile (p95) cross-flow FCT for cross-flows of different sizes. The FCTs are normalized by the corresponding value for Nimbus\xspace at each flow size.
BBR exhibits much higher cross-flow FCT at all sizes compared to all the other protocols, consistent with the observation of unfairness ($\S$\ref{s:big-experiment}).
For small cross-flows ($\leq$ 15 KB), the p95 FCT with Nimbus\xspace and Copa are comparable to Vegas and lower than Cubic.
With Nimbus\xspace, the p95 FCTs of larger cross-flows are slightly lower than with Cubic because of small delays in switching to TCP-competitive mode.
At all flow sizes, Vegas provides the best cross-flow FCTs, but its own flow rate is dismal; Copa is more aggressive than Vegas but less aggressive than Nimbus\xspace, at the expense of its own throughput (\Sec{rw-wl}).
\section{Is Cross Traffic Ever Inelastic?}
\label{app:delay-control-motivation}
Our experiments on more than 25 Internet paths show that scenarios where cross traffic is predominantly inelastic are common. Figure~\ref{fig:dsmotivation} shows the average throughput and delay for 100 runs of a loss-based scheme (Cubic) compared to a delay-based scheme (Nimbus\xspace delay, described in $\S$\ref{s:nimbus-protocol}) on one of these paths. The delay-based\xspace scheme generally achieves much lower delays than Cubic, with similar throughput. This shows that there is an opportunity to significantly improve delays using delay-based\xspace algorithms, provided we can detect loss-based TCPs and compete with them fairly when needed.
\begin{figure}[tbh]
\includegraphics[width=\columnwidth]{images/dsmotivation2.pdf}
\caption{\small \textbf{Loss-based vs. delay-based congestion control.} The plot shows the average throughput and delay for 100 experiments with Cubic and the Nimbus delay-control algorithm (\S\ref{s:nimbus-protocol}). The experiments were run between a residential client (location redacted for anonymity) and an EC2 server in California. Each experiment lasted one minute.}
\label{fig:dsmotivation}
\vspace*{-3pt}
\end{figure}
\section{How Well Does Nimbus\xspace Compete with BBR?}
\label{app:bbr-comparison}
\begin{figure}[tbh]
\includegraphics[width=\columnwidth]{images/NimbusvsBBR.png}
\vspace{-8mm}
\caption{\textbf{ Nimbus\xspace's performance against BBR is similar to that of Cubic.} Both Nimbus\xspace and Cubic compete against 1 BBR flow on a 96 Mbit/s link. For various buffer sizes, Nimbus\xspace achieves the same throughput as Cubic. }
\label{fig:nimbus_vs_bbr}
\end{figure}
We now evaluate how well a Nimbus\xspace flow competes with a BBR flow.
In this experiment, the cross traffic is 1 BBR flow and the bottleneck link bandwidth is 96 Mbit/s. We vary the buffer size from 0.5 BDP to 4 BDP. \Fig{nimbus_vs_bbr} shows the average throughput of Nimbus\xspace and Cubic flows while competing with BBR over a 2-minute experiment. Nimbus\xspace achieves the same throughput as Cubic for all buffer sizes.
When the buffer size is $\leq$ 1 BDP, BBR is not ACK-clocked, and Nimbus\xspace classifies it as inelastic traffic. As a result, Nimbus\xspace gets a relatively small fraction of the link bandwidth. In this scenario, Cubic also gets a small fraction of the link, because BBR sends traffic at its estimate of the bottleneck link rate and is too aggressive.
When the buffer size is $>$ 1 BDP, BBR becomes ACK-clocked because of the cap on its congestion window. Nimbus\xspace now classifies BBR as elastic traffic, stays in competitive mode, and behaves like Cubic, achieving the same throughput as Cubic.
\section{Understanding Scenarios where Copa's Mode Switching Makes Errors}
\label{app:copa-compare}
We explore the dynamics of Nimbus\xspace and Copa in a few experiments from the scenarios described in $\S$\ref{s:copa-compare}. Recall that the link capacity is 96 Mbit/s, the propagation RTT is 50~ms, and the buffer size is 2~BDPs.
\subsection{Constant Bit Rate Cross Traffic}
\Fig{appendix-cbr} shows the throughput and delay profiles of Copa and Nimbus\xspace while competing against inelastic Constant Bit Rate (CBR) traffic. We consider two scenarios: (i) CBR occupies a small fraction of the link (24 Mbit/s, 25\%), and (ii) CBR occupies the majority of the link (80 Mbit/s, 83\%).
When the CBR traffic is low (\Fig{appendix-cbr} \subref{fig:appendix-cbr:copa-24} and \subref{fig:appendix-cbr:nimbus-24}), both Copa and Nimbus\xspace correctly identify it (as non-buffer-filling and inelastic, respectively) and achieve low queuing delays.
When the CBR's share of the link is high (\Fig{appendix-cbr} \subref{fig:appendix-cbr:copa-80}), Copa incorrectly classifies the cross traffic as buffer-filling and as a result stays in competitive mode, leading to high queuing delays. Copa relies on a pattern of emptying queues to detect whether the cross traffic is buffer-filling or not. However, when the rate of cross traffic is $z$, the fastest possible rate at which the queue can drain is $\mu - z$, even if Copa reduces its rate to zero. For 80~Mbit/s of cross traffic, this implies:
\begin{align}
\max\left(-\frac{dQ}{dt}\right) &= \mu - z \nonumber\\
&= 0.17\,\mu \nonumber\\
&= 0.17 \times \frac{\text{BDP}}{\text{RTT}}.
\end{align}
Therefore, if the queue size ever exceeds $5 \times 0.17\,\text{BDP}$, then Copa won't be able to drain the queue in 5 RTTs, and it will misclassify the cross traffic as buffer-filling. The queue size can grow this large due to a transient burst or if Copa incorrectly switches to competitive mode. Once Copa is in competitive mode, it will drive the queues even higher and can therefore get stuck in competitive mode.
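This drain-rate bound can be checked with a short sketch. The constants below follow this experiment (96 Mbit/s link, 80 Mbit/s CBR, 50 ms RTT); the code is purely illustrative arithmetic, not Copa's implementation:

```python
# Illustrative check of the queue-drain bound; not Copa's code.
MU = 96e6 / 8        # bottleneck rate in bytes/s (96 Mbit/s)
Z = 80e6 / 8         # CBR cross-traffic rate in bytes/s (80 Mbit/s)
RTT = 0.05           # propagation RTT in seconds
BDP = MU * RTT       # bandwidth-delay product in bytes

max_drain_rate = MU - Z          # fastest possible drain, even at zero send rate
frac = max_drain_rate / MU       # fraction of mu available for draining (~0.17)

# Largest queue, in BDPs, that can still be emptied within 5 RTTs:
drainable_bdp = max_drain_rate * 5 * RTT / BDP   # 5 * 0.17 ~ 0.83 BDP
print(round(frac, 2), round(drainable_bdp, 2))
```

Any queue above roughly $0.83$ BDP therefore cannot be drained within Copa's 5-RTT window, matching the misclassification condition above.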
In contrast (\Fig{appendix-cbr} \subref{fig:appendix-cbr:nimbus-80}), Nimbus\xspace doesn't rely on emptying queues and correctly classifies cross traffic as inelastic, achieving low queuing delays.
\begin{figure*}
\begin{center}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[height=1.5in]{images/copa-CBR-24.pdf}
\subcaption{Copa: 24 Mbit/s CBR}
\label{fig:appendix-cbr:copa-24}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[height=1.5in]{images/nimbus-CBR-24.pdf}
\subcaption{Nimbus\xspace: 24 Mbit/s CBR}
\label{fig:appendix-cbr:nimbus-24}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[height=1.5in]{images/copa-CBR-80.pdf}
\subcaption{Copa: 80 Mbit/s CBR}
\label{fig:appendix-cbr:copa-80}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[height=1.5in]{images/nimbus-CBR-80.pdf}
\subcaption{Nimbus\xspace: 80 Mbit/s CBR}
\label{fig:appendix-cbr:nimbus-80}
\end{subfigure}
\caption{\textbf{Queuing delay and throughput dynamics for inelastic CBR cross traffic.} When the CBR traffic is low (\subref{fig:appendix-cbr:copa-24}), Copa classifies the traffic as non-buffer-filling and is able to achieve low queuing delays. But when the CBR traffic occupies a high fraction of the link (\subref{fig:appendix-cbr:copa-80}), Copa incorrectly classifies the traffic as buffer-filling, resulting in higher queuing delays. In both situations (\subref{fig:appendix-cbr:nimbus-24} and \subref{fig:appendix-cbr:nimbus-80}), Nimbus\xspace correctly classifies the traffic as inelastic and achieves low queuing delays.}
\label{fig:appendix-cbr}
\end{center}
\end{figure*}
\subsection{Elastic Cross Traffic}
\Fig{appendix-cross} shows throughput and delay over time for Copa and Nimbus\xspace while competing against an elastic NewReno flow. We consider two scenarios: (1) both flows have the same propagation RTT, and (2) the cross traffic's propagation RTT is $4\times$ higher than the Copa or Nimbus\xspace flow. When the RTTs are the same (\Fig{appendix-cross} \subref{fig:appendix-cross:copa-1} and \subref{fig:appendix-cross:nimbus-1}), both Copa and Nimbus\xspace correctly classify the cross traffic, achieving a fair share of throughput.
When the cross traffic RTT is higher (\Fig{appendix-cross} \subref{fig:appendix-cross:copa-4}), the NewReno flow ramps up its rate slowly, causing Copa to misclassify the traffic and achieve less than its fair share of the throughput. Here, Copa achieves 27 Mbit/s but its fair share is at least 48 Mbit/s (and 77 Mbit/s if one considers the RTT bias). In contrast (\Fig{appendix-cross} \subref{fig:appendix-cross:nimbus-4}), Nimbus\xspace correctly classifies the cross traffic as elastic, achieving its RTT-biased share of throughput.
\begin{figure*}[tbh]
\begin{center}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[height=1.5in]{images/copa-crossrtt-1.pdf}
\subcaption{Copa: Cross Traffic RTT = 1 $\times$ Flow RTT}
\label{fig:appendix-cross:copa-1}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[height=1.5in]{images/nimbus-crossrtt-1.pdf}
\subcaption{Nimbus\xspace: Cross Traffic RTT = 1 $\times$ Flow RTT}
\label{fig:appendix-cross:nimbus-1}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[height=1.5in]{images/copa-crossrtt-4.pdf}
\subcaption{Copa: Cross Traffic RTT = 4 $\times$ Flow RTT}
\label{fig:appendix-cross:copa-4}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[height=1.5in]{images/nimbus-crossrtt-4.pdf}
\subcaption{Nimbus\xspace: Cross Traffic RTT = 4 $\times$ Flow RTT}
\label{fig:appendix-cross:nimbus-4}
\end{subfigure}
\caption{\textbf{Queuing delay and throughput dynamics for elastic cross traffic.} When the elastic cross traffic increases fast enough (\subref{fig:appendix-cross:copa-1}), Copa classifies it as buffer-filling and is able to achieve its fair share. But when the elastic cross traffic increases slowly (\subref{fig:appendix-cross:copa-4}), Copa incorrectly classifies the traffic as non-buffer-filling, achieving less throughput than its fair share. In both situations (\subref{fig:appendix-cross:nimbus-1} and \subref{fig:appendix-cross:nimbus-4}), Nimbus\xspace correctly classifies the traffic as elastic and is able to achieve its fair share of throughput.}
\label{fig:appendix-cross}
\end{center}
\end{figure*}
\section{Conclusion}
\label{s:concl}
This paper presented a method for detecting the elasticity of cross traffic and showed that it is a useful building block for congestion control. The detection technique uses a carefully constructed asymmetric sinusoidal pulse and observes the frequency response of the cross-traffic rate at the sender. We presented several controlled experiments demonstrating its robustness and accuracy. Elasticity detection enables protocols to combine the best aspects of delay-control methods with TCP-competitiveness. We found that our proposed methods are beneficial not only under a variety of emulated conditions that model realistic workloads, but also on a collection of 25 real-world Internet paths.
\section{Delay-Control Mode}
\label{s:delaymode}
\vspace{-5pt}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{images/delay_control}
\caption{\small{{\bf Achieving low delays---}all algorithms achieve close to the spare capacity in throughput, while Nimbus and Vegas achieve low delays as well. The delay axis (vertical) is inverted; better performance is up and to the right.
Nimbus can be configured for even lower delays at the expense of slightly reduced throughput.
}}
\label{fig:delaymode-inelastic}
\end{figure}
The goal of the delay-control rule is to
achieve high throughput while
maintaining a specified threshold queueing delay, $d_t > 0$. We target a small non-zero delay because it allows Nimbus\xspace to achieve high utilization and also helps estimate the cross traffic, $z(t)$, accurately. For ease of explanation,
we assume that the path from sender to receiver has only one bottleneck link; the scheme works even when this assumption does not hold.
In general, the bottleneck link rate $\mu$ could change with time (e.g., a wireless link),
but we assume here that it is constant (i.e., a typical wired link).
The delay-control rule has two terms. The first term uses the estimate of $z(t)$ to infer the {\em spare capacity} and transmits at a rate proportional to this quantity. The second term adjusts the transmission rate in proportion to the difference between the measured delay and the specified delay threshold.
\Para{Spare capacity term.} The sender, currently sending at rate $R(t)$, estimates the spare capacity as
\begin{equation}
s(t) =
\mu
- z(t)
- R(t)
= \mu \left(1 - \frac{R(t)}{\Rout(t)}\right).
\label{eq:s}
\end{equation}
Then, the first term in the delay-control rule is a rate update at time $t+\delta$, where $\delta \geq \mbox{RTT}$:
\begin{equation}
R(t + \delta) = R(t) + \alpha s(t),
\label{eq:spare}
\end{equation}
where $\alpha$ is a step parameter that determines the swiftness with which to move toward $s(t)$.
An equivalent interpretation of the update rule in Eq.~(\ref{eq:spare}) is
as an exponentially weighted moving average (EWMA) filter because it may be rewritten as
\begin{equation}
R(t + \delta) = (1 - \alpha) R(t) + \alpha (\mu - z(t)).
\label{eq:ewmaspare}
\end{equation}
The intuition is that we would like $R(t)$ to converge toward $\mu - z$.
An EWMA is a standard approach used in many similar network estimation problems, and is a natural method in our context.
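The equivalence of the two forms, and the convergence of the EWMA toward $\mu - z$, can be sketched in a few lines (the rates in Mbit/s are assumed, illustrative values):

```python
# Illustrative sketch: the additive form (Eq. spare) and the EWMA form
# (Eq. ewmaspare) are the same update, and the rate converges to mu - z.
mu, z = 96.0, 36.0   # link rate and cross-traffic rate, Mbit/s (assumed values)
alpha = 0.25         # step parameter

def step_additive(R):
    s = mu - z - R            # spare capacity, Eq. (s)
    return R + alpha * s

def step_ewma(R):
    return (1 - alpha) * R + alpha * (mu - z)

R = 10.0
for _ in range(50):
    assert abs(step_additive(R) - step_ewma(R)) < 1e-9   # identical updates
    R = step_ewma(R)
print(round(R, 2))   # close to the spare-capacity target mu - z = 60
```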
\Para{Threshold delay term.} Eq.~(\ref{eq:ewmaspare}) is insufficient because with stochastic traffic, a sender pushing the total offered load to $\mu$ would cause buffers to fill. Moreover, if the measured delays are lower than the threshold, the cross-traffic estimator may not produce correct results. In both cases, a term that forces the delay toward the threshold addresses the problems.
Denoting the minimum RTT by $x_p$ and the current RTT by $x(t)$, the simplest such term is proportional to $d_t + x_p - x(t)$. This gives the Nimbus\xspace{} delay-control rule:
\begin{equation}
R(t+\delta) =
R(t)
+ \alpha s(t)
+ \beta\frac{\mu}{\delta}
(d_t + x_p - x(t)).
\label{eq:delayrule}
\end{equation}
The second term is scaled by $\mu/\delta$ so that the gain $\beta$ is dimensionless.
Previous work (XCP~\cite{xcp} and RCP~\cite{rcpstable}) has established stability bounds on $\alpha$ and $\beta$ for nearly identical control laws.
Note that this scheme provides a good estimate of $s(t)$ only when the
bottleneck queue is non-empty, so a non-zero threshold delay is important
for high utilization.
This delay-control rule causes the RTT ($x(t)$) to hover around
$d_t+x_p$. Because $d_t > 0$, this rule ensures high
bottleneck link utilization.
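As a sanity check, the rule can be simulated in a minimal discrete-time fluid model. Everything below is an illustrative sketch — the constants, the queue model, and the absence of feedback delay and stochastic traffic are all assumptions — but it shows the rate settling at $\mu - z$ with the RTT near $x_p + d_t$:

```python
# Minimal fluid-model sketch of Eq. (delayrule); not the Nimbus implementation.
mu = 12.0e6          # bottleneck rate, bytes/s (96 Mbit/s)
z = 4.0e6            # inelastic cross-traffic rate, bytes/s
x_p = 0.05           # propagation (minimum) RTT, s
d_t = 0.005          # target queueing delay, s
alpha, beta = 0.8, 0.5   # illustrative gains
delta = x_p          # update interval (>= RTT)

R, q = 1.0e6, 0.0    # sender rate (bytes/s) and bottleneck queue (bytes)
for _ in range(2000):
    x = x_p + q / mu                              # current RTT
    s = mu - z - R                                # spare capacity
    R = max(R + alpha * s + beta * (mu / delta) * (d_t + x_p - x), 0.0)
    q = max(q + (R + z - mu) * delta, 0.0)        # fluid queue update

print(round(R * 8 / 1e6, 1), round((x_p + q / mu) * 1e3, 1))  # Mbit/s, RTT in ms
```

In this model the rate settles at $\mu - z$ (64 Mbit/s here) and the RTT hovers at $x_p + d_t$ (55 ms), as the text predicts.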
\Fig{delaymode-inelastic} shows this method's ability to hold delays across three cross-traffic rates on a 96 Mbit/s path with 50 ms RTT and a bottleneck buffer of twice the bandwidth-delay product (BDP).
We show Nimbus\xspace with two different target queueing delay settings, $d_{t}$ = 5 ms and $d_{t}$ = 12.5 ms, with cross traffic occupying 25\%, 50\%, and 75\% of the bottleneck rate. Nimbus and Vegas outperform the other protocols in these experiments.
BBR kept the queue size between 1 and 1.5 BDP throughout the experiment; we believe that with cross traffic, BBR over-estimated the available capacity and the BDP, which caused it to build up queues. We observed that with no cross traffic, BBR did not have this problem, consistent with the findings of a recent study~\cite{hockexperimental}.
\cut{For this delay-control rule to work the sender need not have a precise
estimate of $\mu$. An imprecise estimate $\hat \mu$ is equivalent to
scaling down the $\alpha$ and $\beta$ factors by $\frac{\hat
\mu}{\mu}$. We can estimate $\hat \mu$ by using the packet train
method~\cite{packettrain} or even by crudely using $\hat \mu = R(t)$.
}
\cut{
\subsection{Estimating \textit{x}}
We present here a simple new method to estimate $x$ (or more
precisely, $x_p - x$), required by Eq. (\ref{eq:delayrule}). All prior
techniques used in protocols that require the one-way delay end up
using the {\em round-trip time}, because they cannot assume that the
sender and receiver clocks are synchronized on the
Internet. Unfortunately, using the RTT implies that the protocol is
now sensitive to traffic or congestion on the reverse path. Similarly, calculating $R_{out}$ based on received time of packets at the sender makes our estimation sensitive to congestion on the reverse paths.
Suppose that the sender and receiver clocks are not synchronized and
the clock times differ by $\eta$ seconds. The sender includes a
timestamp $s_{i}$ whenever a packet is sent, and the receiver echoes
this timestamp to the sender in its ACK for packet $i$. The receiver
also adds its own timestamp, $r_{i}$, to the ACK corresponding the
received packet. Let $\hat{x}_{i}$ be the calculated one-way delay of packet $i$, i.e.,
\begin{equation}
\hat{x}_{i}
\label{eq:x} = r_{i} - s_{i}
\end{equation}
Note that $x_p$ is estimated as $\min_{i} \hat {x}_i$. Hence, both the
estimate of $x_p$ and our estimate of $x$ are off by the same constant
amount, $\eta$, which implies that $d_t + x_p - x$ may be correctly
estimated using $\hat{x}$ using Eq. (\ref{eq:x}).
}
\section{Cross-Traffic Estimation}
\label{s:switching}
We first show how to estimate the total rate of cross traffic (\S\ref{s:zrate}) from measurements at the sender. Then, we show how to detect whether elastic flows are a significant contributor to the cross traffic, describing the key ideas (\S\ref{s:zelasticity}) and a practical method (\S\ref{sec:pracelas}).
Figure~\ref{fig:sysmod} shows the network model and introduces some notation. A sender communicates with a receiver over a single bottleneck link of rate $\mu$. The bottleneck link is shared with cross traffic, consisting of an unknown number of flows, each of which is either elastic or inelastic. $S(t)$ and $R(t)$ denote the time-varying sending and receiving rates, respectively, while $z(t)$ is the total rate of the cross traffic. We assume that the sender knows $\mu$, and rely on prior work to estimate it ($\S$\ref{s:implementation}).
Our cross traffic estimation technique requires some degree of traffic persistence. The sender must be able to create sufficient traffic variations
and observe the impact on cross traffic over a period of time. Hence it is best suited for control of large flows, such as large file downloads, data backups to the cloud, etc. Fortunately, it is precisely for such transfers that delay-control congestion control can provide the most benefit, since short flows are unlikely to cause bufferbloat~\cite{bufferbloat}.
\subsection{Estimating the Rate of Cross Traffic}
\label{s:zrate}
We estimate $z(t)$ using the estimator
\begin{equation}
\hat{z}(t) = \mu \frac{S(t)}{R(t)} - S(t).
\label{eq:zfromR}
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{images/sys_model2.pdf}
\caption{\small \textbf{Network model.}
The time-varying total rate of cross traffic is $z(t)$. The bottleneck link rate is $\mu$. The sender's transmission rate is $S(t)$, and the rate of traffic received by the receiver is $R(t)$.}
\label{fig:sysmod}
\end{figure}
To understand why this estimator works, see Figure~\ref{fig:sysmod}. The total traffic into the bottleneck queue is $S(t) + z(t)$, of which the receiver sees $R(t)$. As long as the bottleneck link is busy (i.e., its queue is not empty) and the router treats all traffic the same way, the ratio of $R(t)$ to $\mu$ must equal the ratio of $S(t)$ to the total incoming traffic, $S(t)+z(t)$. Thus, any protocol that keeps the bottleneck link always busy can estimate $z(t)$ using Eq.~\eqref{eq:zfromR}.
In practice, we can estimate $S(t)$ and $R(t)$ by considering $n$ packets at a time:
\begin{equation}
\centering
S_{i,i+n} = \frac{n} {s_{i+n} - s_i}, \qquad R_{i,i+n} = \frac{n} {r_{i+n} - r_i}
\label{eq:SR}
\end{equation}
where $s_k$ is the time at which the sender sends packet $k$, $r_k$ is the time at which the sender receives the ACK for packet $k$, and the units of the rate are packets per second. Note that measurements of $S(t)$ and $R(t)$ must be performed over the {\em same} $n$ packets.
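The estimator can be sketched end-to-end on synthetic timestamps. The pacing model below (a busy bottleneck serving the sender's packets at its proportional share of $\mu$) and all constants are assumptions for illustration:

```python
# Sketch of Eqs. (zfromR) and (SR) on synthetic timestamps; rates in packets/s.
mu = 8000.0          # bottleneck rate (assumed known, as in the text)
z_true = 3000.0      # cross-traffic rate the estimator should recover
S_rate = 4000.0      # sender's transmission rate
n = 100              # packets per measurement window

# With a busy bottleneck, the sender's packets drain at its share of mu:
ack_rate = mu * S_rate / (S_rate + z_true)
send_times = [k / S_rate for k in range(n + 1)]     # s_k
ack_times = [k / ack_rate for k in range(n + 1)]    # r_k

S_est = n / (send_times[n] - send_times[0])   # Eq. (SR), over the same n packets
R_est = n / (ack_times[n] - ack_times[0])
z_hat = mu * S_est / R_est - S_est            # Eq. (zfromR)
print(round(z_hat))   # recovers z_true = 3000
```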
We have conducted several tests with various patterns of cross traffic to evaluate the effectiveness of this $z(t)$ estimator. The overall error is small: the 50th and 95th percentiles of the relative error are 1.3\% and 7.5\%, respectively.
\subsection{Elasticity Detection: Principles}
\label{s:zelasticity}
We seek an online estimator to determine if the cross traffic includes elastic flows using only measurements at the sender.\footnote{Receiver modifications might improve accuracy by avoiding the need to estimate $R(t)$ from ACKs at the sender, but would be a little harder to deploy.}
A strawman approach might attempt to detect elastic flows by estimating the contribution of the cross traffic to queuing delay. For example, the sender can estimate its own contribution to the queuing delay---i.e., the ``self-inflicted'' delay---and if the total queuing delay is significantly higher than the self-inflicted delay, conclude that the cross traffic is elastic.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\columnwidth]{images/overview-cubic2.pdf}
\caption{\small Delay measurements at a single point in time do not reveal elasticity. The bottom plot shows the total queuing delay (orange line) and the self-inflicted delay (green line). The experiment setup is the same as Figure~\ref{fig:overview:cubic}.}
\label{fig:si-delay}
\end{figure}
This simple scheme does not work. To see why, consider again the experiment in Figure~\ref{fig:overview:cubic}, where a Cubic flow shares a link with both elastic and inelastic traffic in two separate time periods. Figure~\ref{fig:si-delay} plots the self-inflicted queuing delay for the Cubic flow in the same experiment. The self-inflicted delay looks nearly identical in the elastic and inelastic phases of the experiment. In this experiment, the Cubic flow gets roughly 50\% of the bottleneck link; therefore, its self-inflicted delay is roughly half of the total queuing delay at all times. The reason is that a flow's share of the queue occupancy is proportional to its throughput, independent of the elasticity of the cross traffic. This example suggests that no measurement at a single point in time can reliably distinguish between elastic and inelastic cross traffic.
\smallskip
\noindent{\bf To detect elasticity, tickle the cross traffic!}
Our method detects elasticity by monitoring how the cross traffic responds to induced traffic variation at the bottleneck link over a period of time. The key observation is that elastic flows react in a predictable way to rate fluctuations at the bottleneck. Because elastic flows are ACK-clocked, if a new ACK is delayed by $\delta$ seconds, then the next packet transmission will also be delayed by $\delta$. The sending rate depends on this delay: if the mean inter-arrival time between ACKs is $d$, adding an extra delay of $\delta$ to each ACK would reduce the flow's sending rate from $1/d$ to $1/(d+\delta)$ packets per second. By contrast, inelastic traffic does not respond to such mild increases in delay.
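A back-of-the-envelope sketch of this effect, with illustrative (assumed) delays:

```python
# Illustrative ACK-clocking arithmetic: delaying each ACK by delta
# slows an ACK-clocked flow from 1/d to 1/(d + delta) packets/s.
d = 0.001        # mean ACK inter-arrival time, s (a 1000 packets/s flow)
delta = 0.00025  # extra delay per ACK induced by our pulse, s

rate_before = 1 / d
rate_after = 1 / (d + delta)
print(round(rate_before), round(rate_after))   # 1000 -> 800 packets/s
```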
We use this observation to detect elasticity by inducing changes in the inter-packet spacing of cross traffic at the bottleneck link. To achieve this, we transmit data in {\em pulses}, taking the desired sending rate, $S(t)$, and sending at a rate first higher, then lower than $S(t)$, while ensuring that the mean rate remains $S(t)$. Sending in such pulses (e.g., modulated on a sinusoid) modulates the inter-packet spacing of the cross traffic in the queue in a controlled way. If enough of the cross-traffic flows are elastic, then because of the explicitly induced changes in the ACK clocks of those flows, they will react to the changed inter-packet time. In particular, when we increase our rate and transmit a burst, the elastic cross traffic will reduce its rate in the next RTT; the opposite will happen when we decrease our rate.
\Fig{ts:elastic} and \Fig{ts:inelastic} show the responses of elastic and inelastic cross-traffic flows ($z$) when the sender transmits data in sinusoidal pulses ($S(t)$) at frequency $f_p=5$~Hz. The elastic flow's sending rate, after one round-trip time of delay, is inversely correlated with the pulses in the sending rate, while the inelastic flow's sending rate is unaffected.
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\columnwidth}
\includegraphics[width=\columnwidth]{images/TimeSeries_TCP}
\caption{Elastic traffic}
\label{fig:ts:elastic}
\end{subfigure}
\begin{subfigure}[b]{0.45\columnwidth}
\includegraphics[width=\columnwidth]{images/TimeSeries_Poisson}
\caption{Inelastic traffic}
\label{fig:ts:inelastic}
\end{subfigure}
\caption{\small{\textbf{Cross traffic's reaction to sinusoidal pulses.} The pulses cause changes in the inter-packet spacing of the cross traffic. Elastic traffic reacts to these changes after an RTT. Inelastic cross traffic is agnostic to these changes.} }
\label{fig:ts}
\end{figure}
\subsection{Elasticity Detection: Practice}
\label{sec:pracelas}
\label{s:pracelas}
To produce a practical method to detect cross traffic using this idea, we must address three challenges. First, pulses in our sending rate must induce a measurable change in $z$, but must not be so large as to congest the bottleneck link. Second, because there is natural variation in cross-traffic, as well as noise in the estimator of $z$, it is not easy to perform a robust comparison between the predicted change in $z$ and the measured $z$. Third, because the sender does not know the RTTs of cross-traffic flows, it does not know when to look for the predicted response in the cross-traffic rate.
The first method we developed measured the {\em cross-correlation} between $S(t)$ and $z(t)$. If the cross-correlation was close to zero, then the traffic would be considered inelastic, but a significant non-zero value would indicate elastic cross traffic.
We found that this approach works well (with square-wave pulses) if the cross traffic is substantially elastic and has a similar RTT to the flow trying to detect elasticity, but not otherwise. The trouble is that cross traffic will react after its RTT, and thus we must align $S(t)$ and $z(t)$ using the cross traffic's RTT, which is not easy to infer. Moreover, the elastic flows in the cross traffic may have different RTTs, making the alignment even more challenging, and rendering the method impractical.
\smallskip
\noindent {\bf From time to frequency domain.} We have developed a method that overcomes the three challenges mentioned above. It uses two ideas. First, the sender modulates its packet transmissions using {\em sinusoidal pulses} at a known frequency $f_p$, with amplitude equal to a modest fraction (e.g., 25\%) of the bottleneck link rate.
These pulses induce a noticeable change in inter-packet times at the link without causing congestion, because the queues created in one part of the pulse are drained in the subsequent part, and the period of the pulses is short (e.g., $f_p = 5$~Hz). By using short pulses, we ensure that the total burst of data sent in a pulse is a small fraction of the bottleneck's queue size.
Second, the sender looks for periodicity in the cross-traffic rate at frequency $f_p$, using a frequency domain representation of the cross-traffic rates. We use the Fast Fourier Transform (FFT) of the time series of the cross traffic estimate $z(t)$ over a short time interval (e.g., 5 seconds). Observing the cross-traffic's response at a known frequency, $f_p$, yields a method that is robust to different RTTs in cross traffic.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{images/FFT2}
\caption{\small{\textbf{Cross traffic FFT for elastic and inelastic traffic.} Only the FFT of elastic cross traffic has a pronounced peak at $f_p$ (5 Hz).}}
\label{fig:fft}
\end{figure}
\Fig{fft} shows the FFT of the $z(t)$ time-series estimate produced using Eq.~(\ref{eq:zfromR}) for examples of elastic and inelastic cross traffic, respectively. Elastic cross traffic exhibits a pronounced peak at $f_p$ compared to the neighboring frequencies, while for inelastic traffic the FFT magnitude is spread across many frequencies. The magnitude of the peak depends on how elastic the cross traffic is; for example, the more elastic the cross traffic, the sharper the peak at $f_p$.
Because the peak magnitude depends on the proportion of elastic flows in cross traffic, we found that a more robust indicator of elasticity is to compare the magnitude of the $f_p$ peak to the next-best peak at nearby frequencies, rather than use a pre-determined absolute magnitude threshold. We define the {\em elasticity metric}, $\eta$, as follows:
\begin{equation}
\centering
\eta = \frac{|FFT_{z}(f_{p})|}{\max_{f \in (f_p,2f_{p})} |FFT_{z}(f)|}
\label{eq:elasticity}
\end{equation}
Eq.~\eqref{eq:elasticity} compares the magnitude of the FFT at frequency $f_{p}$ to the peak magnitude in the range from just above $f_{p}$ to just below $2f_{p}$. In \Fig{fft}, $\eta$ for elastic traffic is 10, whereas for inelastic traffic it is close to 1.
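The metric can be sketched on synthetic cross-traffic traces. The traces, sample rate, noise model, and the plain DFT (standing in for an FFT) below are all assumptions for illustration; the real metric operates on the $\hat z(t)$ estimate:

```python
import math, random

fs, dur, f_p = 100.0, 5.0, 5.0     # sample rate (Hz), window (s), pulse freq (Hz)
n = int(fs * dur)
rng = random.Random(0)             # fixed seed for a reproducible sketch

def dft_mag(z, f):
    """|DFT| of the mean-removed series z at frequency f (Hz)."""
    m = sum(z) / len(z)
    re = sum((v - m) * math.cos(2 * math.pi * f * k / fs) for k, v in enumerate(z))
    im = sum((v - m) * math.sin(2 * math.pi * f * k / fs) for k, v in enumerate(z))
    return math.hypot(re, im)

def elasticity(z):
    # Eq. (elasticity): f_p peak vs. best peak strictly inside (f_p, 2*f_p).
    band = [f_p + j / dur for j in range(1, int(f_p * dur))]   # one bin apart
    return dft_mag(z, f_p) / max(dft_mag(z, f) for f in band)

# Elastic cross traffic reacts at f_p; inelastic traffic is noise about a mean.
z_elastic = [40 - 10 * math.sin(2 * math.pi * f_p * k / fs) + rng.gauss(0, 1)
             for k in range(n)]
z_inelastic = [40 + rng.gauss(0, 3) for _ in range(n)]

print(elasticity(z_elastic) > 2, elasticity(z_inelastic) > 2)
```

The sinusoidal trace produces a pronounced peak at $f_p$ and a large $\eta$, while the noise trace yields $\eta$ near 1, mirroring \Fig{fft}.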
In practice, the cross traffic is less likely to be either purely elastic or purely inelastic, and will more often be a mix. \Fig{elasticity:cdf} shows the elasticity metric when we vary the percentage of bytes belonging to elastic flows in the cross traffic. Based on this data, we propose a hard-decision rule: if $\eta \leq 2$, then the cross traffic is considered inelastic; otherwise, it is elastic.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{images/CDF}
\caption{ \small{Distribution of elasticity with varying elastic fraction of cross traffic. Completely inelastic cross traffic has elasticity values close to zero, while completely elastic cross traffic exhibits high elasticity values. Cross traffic with some elastic fraction also exhibits high elasticity ($\eta>2$).}}
\label{fig:elasticity:cdf}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{images/Semi-sine-2.png}
\caption{ \small{\textbf{Example of an asymmetric sinusoidal pulse.} The pulse has period $T=1/f_p$. The positive half-sine lasts for $T/4$ with amplitude $\mu/4$, and the negative half-sine lasts for the remaining duration, with amplitude $\mu/12$. The two half-sines are designed to cancel out each other over one period.}}
\label{fig:semi-sine}
\end{figure}
\smallskip
\noindent {\bf Pulse shaping.} Rather than a pure sinusoid, we use an {\em asymmetric sinusoidal pulse}, as shown in \Fig{semi-sine}. In the first one-quarter of the pulse cycle, the sender adds a half-sine of a certain amplitude (e.g., $\mu/4$) to $S(t)$; in the remaining three-quarters of the cycle, it subtracts a half-sine with one-third of the amplitude used in the first quarter of the cycle (e.g., $\mu/12$). The reason for this asymmetric pulse is that it enables senders with low sending rates, $S(t)$, to generate pulses. For example, for a peak amplitude of $\mu/4$, a sender with $S(t)$ as low as $\mu/12$ can generate the asymmetric pulse shown in \Fig{semi-sine}; a symmetric pulse with the same peak rate would require $S(t) > \mu/4$. A peak pulse rate of $\mu/4$ causes a noticeable change to inter-packet times by transmitting a fraction of BDP worth of packets over a short time period (less than an RTT).
What should the duration, $T$, of the pulse be? The answer depends on two factors: first, the duration over which $S$ and $R$ are measured (with which the sender estimates $z$), and second, the amount of data we are able to send in excess of the mean rate without causing excessive congestion. If $T$ were smaller than the measurement interval of $S$ and $R$, then the pulse would have no measurable effect, because the excess in the high part and low part would cancel out over the measurement interval. But $T$ cannot be too large because the sender sends in excess of the mean rate $S(t)$ for $T/4$.
Based on these considerations, we set $T$ so that $T/4$ is on the order of the current RTT or a little less than that to avoid packet losses (e.g., $T/4$ could be the minimum observed RTT), and measure $S$ and $R$ (and hence, $z$) using Eq. (\ref{eq:SR}) over the duration of the current RTT (i.e., over exactly one window's worth of packets). As a concrete example, a flow with minimum RTT 50 ms would use $T/4 = 50$ ms, giving a pulse frequency of $1/0.2 = 5$ Hz.
Our pulses are designed to produce an observable pattern in the FFT when the cross traffic is elastic. Using asymmetric sinusoidal pulses creates extra harmonics at multiples of the pulse frequency $f_p$. However, these harmonics do not affect the elasticity metric in Eq.~\eqref{eq:elasticity}, which only considers the FFT in the frequency band $[f_p, 2f_p)$.
\subsection{Estimating bottleneck bandwidth}
\label{s:btl-bw-est-tech}
\vspace{-5pt}
We show how Eq.~(\ref{eq:zfromR}) can be used to estimate $\mu$. This equation has two unknowns: $\mu$ (which we assume for simplicity doesn't change with time) and $z(t)$. If we assume that the cross traffic does not change much in a small time interval $\Delta t$, then $z(t) \approx z(t+\Delta t)$, from which we can estimate $\mu$:
\begin{equation}
\mu \approx \frac{R(t) - R(t + \Delta t)}{\frac{R(t)}{R_{out}(t)} - \frac{R(t + \Delta t)}{R_{out}(t + \Delta t)}}
\label{eq:muest}
\end{equation}
However, there is one challenge to using Eq.~(\ref{eq:muest}):
the assumption that $z(t) \approx z(t+\Delta t)$ might not hold when Nimbus\xspace continuously generates pulses. Recall that elastic cross traffic may react to these pulses, creating fluctuations in $z(t)$ over short timescales, which can throw Eq.~(\ref{eq:muest}) off.
To solve this problem, we introduce a separate {\em bandwidth estimation phase} in Nimbus\xspace to estimate $\mu$. In this phase, which runs periodically (e.g., every 60 seconds), Nimbus\xspace stops transmitting sinusoidal pulses and waits one RTT for fluctuations in $z(t)$ from previous pulses to settle. It then transmits a single square pulse at time $t=t_0$, and samples $z(t)$ at $t=t_0$ and $t=t_0 + \Delta t$. If $\Delta t <$ RTT, the cross traffic will not yet have reacted to this pulse, and hence $z(t_0) \approx z(t_0+\Delta t)$ while $R(t_0) \neq R(t_0 + \Delta t)$. Therefore, we can use Eq.~(\ref{eq:muest}) to estimate $\mu$. We use $\Delta t = 0.5$RTT in our implementation.
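Eq.~(\ref{eq:muest}) reduces to a one-line computation over the two samples. The sketch below assumes $R$ and $R_{out}$ carry the meanings given earlier for Eq.~(\ref{eq:zfromR}); the synthetic check assumes one plausible form of that equation, $z = \mu R / R_{out} - R$, which is consistent with Eq.~(\ref{eq:muest}):

```python
def estimate_mu(R1, Rout1, R2, Rout2):
    # Eq. (muest): assuming z(t0) ~ z(t0 + dt) across the probe pulse,
    #   mu ~ (R(t0) - R(t0+dt)) / (R(t0)/Rout(t0) - R(t0+dt)/Rout(t0+dt)).
    return (R1 - R2) / (R1 / Rout1 - R2 / Rout2)
```

Sampling once before and once during the square pulse (with $\Delta t = 0.5\,$RTT) yields the two $(R, R_{out})$ pairs.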
\input{eval-bw-estimate}
\subsection{Benefits to Bundled Flows}
In \Sec{other-bundle-transports}, we presented the overall throughput and delay benefits of running a Nimbus\xspace bundle.
Here, we show that Nimbus\xspace also benefits the component flows.
We measure the impact of \bundling on flows using the metric of flow completion time (FCT).
Without \bundling, the bottleneck queue's scheduling discipline (e.g., FIFO) and TCP sending rates would together determine an inflexible ordering of flows.
If some flows pushed too many packets into the bottleneck queue (e.g., because of a high congestion window), they might increase the completion time for short flows.
In contrast, \bundling can help achieve smaller FCT by intelligently scheduling the packets of the \possbundle flows.
\Para{Experiment setup.} We set up a realistic WAN cross-traffic workload over an emulated Mahimahi\xspace bottleneck link similar to the one in \Sec{other-bundle-transports}. Crucially, to measure FCT benefits for the \bundle, we generate {\em non-backlogged} flows within the \bundle using the same flow size distribution as the cross-traffic. The \bundle contributes an average load of 60 Mbit/s, while the cross-traffic contributes 20 Mbit/s.
First, we measure the throughput (over 10 ms intervals) and delays (per-packet samples) obtained by the \bundle using a canonical scheduling discipline, namely FIFO (over flow arrivals).
We also record the arrival time and demand of each flow.
Unfortunately, a FIFO schedule results in high tail FCT for short flows, since the WAN flow size distribution is heavy-tailed.
Our \bundle implementation currently only supports FIFO scheduling.
Therefore, to understand the potential benefit of employing other scheduling disciplines
\footnote{We believe that implementing simple scheduling disciplines (e.g., round robin) for high-rate \bundles is feasible.}
within the \bundle, we \emph{simulate} the scheduling disciplines over the rate and delay traces that we collect.
To verify the fidelity of our simulator, in Appendix~\ref{app:simulator-correct} we successfully compare a simulated FIFO policy to FIFO FCT achieved in emulation.
While the \possbundle scheduling discipline affects the ordering of packet transmissions within the \bundle, replacing FIFO with any other {\em work-conserving} scheduling discipline affects neither the overall rate nor the delays obtained in emulations by either the \bundle or the cross-traffic.
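As an illustration (not the paper's simulator), such a trace-driven replay of a work-conserving scheduler can be sketched as follows: the bundle's recorded rate trace fixes the bytes serviceable per tick, and only the order in which active flows are served changes with the policy.

```python
def simulate_fcts(arrivals, capacity_per_tick, policy="srpt"):
    # Replay a work-conserving scheduler over a recorded rate trace.
    # arrivals: list of (arrival_tick, size_bytes), sorted by arrival;
    # the flow id is the position in the list.
    # capacity_per_tick: bytes the bundle can send in each tick, taken
    # from the recorded throughput trace.
    # Returns {flow_id: completion_tick}.
    remaining, done, i = {}, {}, 0
    for tick, cap in enumerate(capacity_per_tick):
        while i < len(arrivals) and arrivals[i][0] <= tick:
            remaining[i] = arrivals[i][1]      # flow becomes active
            i += 1
        while cap > 0 and remaining:
            if policy == "srpt":               # fewest remaining bytes first
                fid = min(remaining, key=remaining.get)
            else:                              # "fifo": earliest arrival first
                fid = min(remaining)
            sent = min(cap, remaining[fid])
            remaining[fid] -= sent
            cap -= sent
            if remaining[fid] == 0:
                del remaining[fid]
                done[fid] = tick
    return done
```

Under SRPT, a short flow arriving behind a long one finishes almost immediately, while FIFO makes it wait for the long flow, mirroring the high tail FCT that FIFO imposes on short flows.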
In the experiments below, we measure FCTs of the \bundled flows and the cross-flows under different scenarios.
Unless specified otherwise, the \bundle implements per-packet round-robin scheduling (RR) across flows, with \bundle congestion control algorithms Nimbus\xspace, Cubic, Vegas, and BBR.
We compare the benefits of \bundling to the scenario when both the \bundled flows and the cross-traffic transmit independently using Cubic, as is the norm in the Internet today.
We call this baseline {\em NoBundle\xspace.}
We choose the \bundle parameters to avoid negatively impacting the cross-traffic.
As an approximation, for a link load $\rho$ (as a fraction of link rate), we use the average number of jobs in a processor-sharing~\cite{processor-sharing} queue,
$\frac{\rho}{1-\rho}$, as an estimate of the total number of active long-lived flows (\bundle and cross-traffic).
We then choose $n_b$, the number of long-lived \bundle flows, in proportion to the load the \bundle offers to the link.
In our experiments, the link utilization is $\frac{80}{96}$, so the processor-sharing estimate gives $\frac{80}{16} = 5$ active flows, of which the \bundle offers a fraction $\frac{60}{60+20}$.
We thus choose $n_b = \frac{80}{16}\times\frac{60}{80} \approx 4$.
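The flow-count arithmetic above can be reproduced directly; the helper below is illustrative, not part of the implementation.

```python
def bundle_flow_count(link_rate, total_load, bundle_load):
    # Processor-sharing estimate: at utilization rho = total_load/link_rate,
    # the expected number of active long-lived flows is rho / (1 - rho).
    # The bundle gets a share proportional to the load it offers.
    rho = total_load / link_rate
    n_total = rho / (1.0 - rho)
    return round(n_total * bundle_load / total_load)
```

With the experiment's numbers (96 Mbit/s link, 80 Mbit/s total load, 60 Mbit/s offered by the \bundle), this gives $5 \times \frac{3}{4} \approx 4$ flows.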
\Para{FCT benefits for \bundled flows.}
\Fig{emp:fct:fg} shows the median (bar) and 95th percentile (whisker) FCT for \bundled flows across the different \bundle congestion control algorithms, normalized by the median FCT from NoBundle\xspace in each flow size range.
Nimbus\xspace consistently improves on NoBundle\xspace, achieving between 30--75\% of the median FCT across flow sizes ($\approx$70\% for 15 KB and 150 MB flows).
Short flows benefit from Nimbus's\xspace reduction of packet delays.
BBR and Cubic have higher median FCT for short flows (15 KB and 150 KB) relative to Nimbus\xspace, but their high throughput leads to median FCT comparable to Nimbus\xspace for longer flows (1.5 MB and beyond).
For Nimbus\xspace, BBR, and Cubic, \bundling provides a high initial sending rate for medium to long flows (150 KB--150 MB), resulting in their lower FCT.
Running a Vegas \bundle generally decreases the median FCT relative to NoBundle\xspace, but the tail FCT is much higher: when Vegas backs off in the presence of elastic cross-traffic, flows in the \bundle suffer from lower throughput and much higher FCT.
We find that Nimbus\xspace, Cubic, and BBR have tail FCTs comparable to NoBundle\xspace for short flows, since they achieve similar delays in the tail.
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.9\columnwidth}
\includegraphics[width=\columnwidth]{images/emp-rr-20-60-fgrnd-median}
\caption{FCT of \emph{\bundle} flows}
\label{fig:emp:fct:fg}
\end{subfigure}
%
\begin{subfigure}[b]{0.9\columnwidth}
\includegraphics[width=\columnwidth]{images/emp-rr-20-60-bgrnd-median}
\caption{FCT of \emph{cross-flows}}
\label{fig:emp:fct:bg}
\end{subfigure}
\caption{\small{\textbf{Comparison of FCT---} Nimbus improves bundle FCT without affecting the FCTs of cross-traffic flows on a non-backlogged Web workload.}}
\label{fig:emp:fct}
\end{figure}
\label{s:background-fct}
\Para{Impact on cross-traffic FCT.} \Bundling improves FCT for the \bundled flows, but does the improvement come at the cost of higher FCT for the cross-traffic?
\Fig{emp:fct:bg} shows the FCT of cross-traffic with different \bundle congestion control algorithms. Nimbus\xspace and Vegas actually {\em improve} both the median and tail FCT of short (15 KB) and long (150 MB) cross-flows, by reducing the overall delays experienced by all traffic at the bottleneck.
The Cubic \bundle's impact is closest to NoBundle\xspace, since both the \bundle and the cross-traffic run TCP Cubic.
In contrast, BBR's aggressive bandwidth-probing significantly increases the median and tail FCT for cross-traffic across all flow sizes.
The average FCT with Nimbus\xspace and Cubic is comparable to NoBundle\xspace, while Vegas yields 50--60\% of, and BBR 1.2--2.4$\times$, the average FCT of NoBundle\xspace.
\label{s:scheduling}
\Para{FCT benefits with advanced scheduling disciplines.} Using scheduling disciplines better than RR can improve the FCTs compared to the results above.
In \Fig{bundle:foreground}, we show the median FCT (bar) and $95^{th}$ percentile (whisker) of Nimbus\xspace (relative to NoBundle\xspace) when using shortest remaining processing time (SRPT) scheduling.
Though it requires knowledge of flow sizes ahead of time, SRPT achieves a median FCT between 20--70\% of NoBundle\xspace, and improves on RR across flow sizes.
SRPT also improves the $95^{th}$ percentile FCT over NoBundle\xspace for all but the longest flows.
The improvements over RR in tail FCT are most significant for medium-sized flows (150 KB--15 MB) which last sufficiently long to benefit from scheduling decisions, gaining at the expense of the longest flows.
The choice of a good \bundle scheduling policy is crucial: FIFO scheduling (not shown) results in median FCTs higher than NoBundle\xspace by 5--35$\times$.
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{images/emp-scheduling-20-60-fgrnd-2.pdf}
\caption{\small{\textbf{Scheduling flows within bundles---}
Across flow sizes, Nimbus\xspace improves median FCT compared to not \bundling; using SRPT scheduling instead of round-robin provides additional improvement.
}}
\label{fig:bundle:foreground}
\end{figure}
In summary, Nimbus\xspace, with its explicit mode switching, is the only congestion control algorithm that simultaneously improves the \possbundle FCT while not impacting the cross-traffic flows' FCT.
Furthermore, Nimbus\xspace enables advanced scheduling disciplines for the \bundle to further improve FCT.
\subsection{Is Nimbus More Accurate than Copa?}
\label{s:copa-compare}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{images/copa-comparison-combined.pdf}
\caption{\small{Nimbus\xspace achieves higher classification accuracy than Copa when (i) inelastic cross traffic occupies a large fraction of the link (left), and (ii) elastic cross traffic has a higher RTT than the Nimbus\xspace/Copa flow (right).}}
\label{fig:copa-compare}
\end{figure}
The experiment in $\S$\ref{s:big-experiment} showed that Nimbus\xspace makes fewer mistakes in selecting its mode compared to Copa. We compare the classification accuracy of Nimbus\xspace to Copa in two scenarios. First, we generate inelastic cross traffic of different rates and measure the fraction of the time a backlogged Nimbus\xspace and Copa flow operate in the correct mode. This experiment uses a 96 Mbit/s bottleneck link with a 50 ms propagation RTT and a 100 ms drop-tail buffer (2 BDP). We consider both constant-bit-rate (CBR) and Poisson cross traffic, generated as described in $\S$\ref{s:big-experiment}.
\Fig{copa-compare} (left) shows that Nimbus\xspace has high accuracy in all cases, but Copa's accuracy drops sharply when the cross traffic occupies over 80\% of the link.
In the second scenario, a backlogged Nimbus\xspace or Copa flow competes against a backlogged NewReno flow, and we vary the RTT of the NewReno flow between $1-4\times$ the RTT of the Nimbus\xspace/Copa flow. \Fig{copa-compare} (right) shows that Copa's accuracy degrades as the RTT of the cross traffic increases; Nimbus\xspace's accuracy is much higher, dropping only slightly when the cross-traffic RTT is $4\times$ that of Nimbus\xspace.
These two results highlight the problems with Copa's approach of setting the operating mode based on an expected pattern of queuing delays. We investigated the reasons for Copa's errors in these experiments. In the first case, with inelastic cross traffic, Copa is unable to drain the queue quickly enough (every 5 RTTs); this throws off Copa's detection rule and it switches to competitive mode. In the second case, because a NewReno cross-traffic flow with a high RTT increases its rate slowly, it has a small impact on the queuing delay during Copa's 5-RTT period. Therefore, Copa can drain the queue as it expects and concludes that there is no non-Copa cross traffic. This error persists until the rate of the cross traffic grows large enough to interfere with Copa's expected queuing delay behavior.
By contrast, Nimbus\xspace directly estimates the volume and elasticity of cross traffic and is more robust. Appendix~\ref{app:copa-compare} provides examples of the throughput and queuing delay dynamics of Copa and Nimbus\xspace in the two scenarios.
\subsection{Does Mode Switching Help Cross-Traffic?}
\label{s:cross-traffic-impact}
\begin{figure}[t]
\centering
\includegraphics[width=0.43\textwidth]{images/fct3.pdf}
\caption{\small{Using Nimbus\xspace reduces the p95 FCT of cross-flows relative to BBR at all flow sizes, and relative to Cubic for short flows. Vegas provides low cross-flow FCT, but its own rate is low.}}
\label{fig:emp:fct:cross}
\end{figure}
Using the same setup as \Sec{rw-wl}, we measure the flow completion time (FCT) of cross-flows.
\Fig{emp:fct:cross} compares the 95th-percentile (p95) FCT for cross-flows of different sizes. The FCTs are normalized by the corresponding value for Nimbus\xspace at each flow size.
BBR exhibits much higher cross-flow FCT at all sizes compared to all the other protocols, consistent with the observation of unfairness ($\S$\ref{s:big-experiment}).
For small cross-flows ($\leq$ 15 KB), the p95 FCT with Nimbus\xspace and Copa are comparable to Vegas and lower than Cubic.
With Nimbus\xspace, the p95 FCT of larger cross-flows is slightly lower than with Cubic because of Nimbus's\xspace small delays in switching to TCP-competitive mode.
At all flow sizes, Vegas provides the best cross-flow FCTs, but its own rate is dismal; Copa is more aggressive than Vegas but less so than Nimbus\xspace, again at the expense of its own throughput (\Sec{rw-wl}).
\section{Delay-Control Deep-Dive}
\label{s:delay-control-eval}
\subsection{Does $\eta$ Track the True Elastic Fraction?}
\label{s:elasticity-in-real-cross-traffic}
\begin{figure}[t]
\centering
\includegraphics[width=0.43\textwidth]{images/emp-switch3}
\caption{\small{The elasticity metric, and hence Nimbus's\xspace mode, closely tracks the prevalence of elastic cross-traffic (ground truth measured independently using the volume of ACK-clocked flows). Green-shaded regions indicate inelastic periods.}}
\label{fig:emp:switch}
\end{figure}
We use the setup of $\S$\ref{s:rw-wl} to present a mix of elastic and inelastic cross traffic, and evaluate how well the detector's decision correlates with the true proportion of elastic traffic.
We define a cross-flow generated from the CAIDA trace as ``elastic'' if it is guaranteed to have ACK-clocked packet transmissions over its lifetime, i.e., flows with sizes larger than TCP's default initial congestion window (10 packets in Linux 4.10, our transmitter's kernel).
The top chart in \Fig{emp:switch} shows the fraction of bytes belonging to elastic flows as a function of time. The bottom chart shows the output of the elasticity detector, along with the dashed threshold line at $\eta=2$. The green-shaded regions are times when Nimbus\xspace was in delay-control mode. These correlate well with the times when the elastic fraction was low: when the elastic fraction is $<0.3$, the detector correctly outputs ``inelastic'' over 90\% of the time after accounting for the 5-second estimation delay, and over 80\% of the time even counting that delay.
\subsection{Can Multiple Nimbus\xspace Flows Co-Exist?}
\label{s:multiple-nimbus-fair-sharing}
Does mode switching permit low delays in the presence of multiple mode switching flows (and possibly other cross-traffic)? Can multiple such flows share a bottleneck link fairly with each other and with cross-traffic? We run Nimbus\xspace with Vegas as its delay-mode algorithm and supply it with the correct link rate. (We use Vegas because Nimbus\xspace's rule in its present form is not fair to other flows with the same rule.)
\Fig{mul_staircase} demonstrates how Nimbus\xspace flows react as other Nimbus\xspace flows arrive and leave (there is no other cross-traffic). Four flows arrive at a link with rate 96 Mbit/s and round-trip time\xspace 50 ms. Each flow begins 120 s after the last one began, and lasts for 480 s. The top half shows the rates achieved by the four flows over time. Each new flow begins as a watcher. If the new flow detects a pulser ($t = 120, 240, 360$ s), it remains a watcher. If the pulser goes away or a new flow fails to detect a pulser, one of the watchers becomes a pulser ($t = 480, 720$ s). The pulser can be identified visually by its rate variations.
The flows share the link rate equally. The bottom half of the figure shows the achieved delays with red background shading to indicate when one of the flows is (incorrectly) in competitive-mode. The flows maintain low RTTs and stay in delay-mode for most of the time.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{images/mul_rate.pdf}
\includegraphics[width=0.8\columnwidth]{images/mul_rtt.pdf}
\caption{\small \textbf{Multiple competing Nimbus\xspace flows.} Multiple Nimbus\xspace flows achieve fair sharing of a bottleneck link (top graph). There is at most one pulser\xspace flow at any time, which can be identified by its rate variations. Together, the flows achieve low delays by staying in delay mode for most of the duration (bottom graph). The red background shading shows when a Nimbus\xspace flow was (incorrectly) in competitive mode.}
\label{fig:mul_staircase}
\end{figure}
\Fig{mul_nimbus_switching} demonstrates multiple Nimbus\xspace flows switching in the presence of cross-traffic. We run three Nimbus\xspace flows on an emulated 192 Mbit/s link with a propagation delay of 50 ms. The cross traffic is synthetic. In the first 90 s, the cross-traffic is elastic (three Cubic flows), and for the rest of the experiment, the cross-traffic is inelastic (96 Mbit/s constant bit-rate). The top graph shows the total rate of the three Nimbus\xspace flows, along with a reference line for the fair-share rate of the aggregate. The graph at the bottom shows the measured queuing delays.
Nimbus\xspace shares the link fairly with other cross-traffic, and achieves low delays by staying in the delay mode in the absence of elastic cross-traffic.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{images/overview-nimbus-mul.pdf}
\caption{\small{\textbf{Multiple Nimbus\xspace flows and other cross-traffic.} There are 3 Nimbus\xspace flows throughout. Cross traffic in 30--90~s is elastic (three Cubic flows); cross traffic in 90--150~s is inelastic (a 96 Mbit/s constant-bit-rate stream). Multiple Nimbus\xspace flows achieve their fair-share rate (top) while maintaining low delays in the absence of elastic cross traffic (bottom).}}
\label{fig:mul_nimbus_switching}
\end{figure}
\subsection{Switching \xspace{\em vs.} Other Transports}
\label{s:big-experiment}
We illustrate Nimbus\xspace on a synthetic workload with time-varying cross traffic.
We emulate a bottleneck link using the Mahimahi\xspace network emulator~\cite{mahimahi}.
The network has a bottleneck link rate of 96 Mbit/s, a minimum RTT of 50 ms, and 100 ms (2 BDP) of router buffering. Nimbus\xspace sets a target queuing delay threshold of 12.5 ms in delay-control mode. We compare Nimbus\xspace with the Linux implementations of Cubic, BBR, and Vegas, our empirically-validated implementation of Compound atop CCP, and implementations of Copa and PCC provided by their respective authors.
The cross-traffic varies over time between elastic, inelastic, and a mix of the two.
We generate inelastic cross-traffic using Poisson packet arrivals at a specified mean rate.
Elastic cross-traffic uses Cubic.
We generate all elastic traffic (Nimbus\xspace and cross-traffic) using {\small \tt iperf}~\cite{iperf}.
\Fig{big} shows the throughput and queuing delays for the various algorithms, as well as the correct fair-share rate over time.
Throughout the experiment, Nimbus\xspace
achieves both its fair share rate and low ($\leq 20$ ms) queuing delays in the presence of inelastic cross-traffic.
With elastic cross-traffic,
Nimbus\xspace switches to competitive mode within 5 s and achieves close to its fair-share rate.
The delays during this period approach the buffer size because the competing traffic is buffer-filling; the delays return to their previous low value (20 ms) within 5 s after the elastic cross-flows complete. Nimbus\xspace stays in the correct mode throughout the experiment, except for one interval in the competitive period.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{images/big-exp2.pdf}
\caption{\small Performance on a 96 Mbit/s Mahimahi\xspace link with 50 ms delay and 2 BDP of buffering while varying the rate and type of cross traffic as denoted at the top of the graph.
$x$M denotes $x$ Mbit/s of inelastic Poisson cross-traffic. $y$T denotes $y$ long-running Cubic cross-flows.
The solid black line indicates the correct time-varying fair-share rate that the protocol should achieve given the cross-traffic. For each scheme, the solid line shows throughput and the dashed line shows queuing delay. For Nimbus\xspace and Copa, the shaded regions indicate TCP-competitive periods.}
\label{fig:big}
\end{figure}
Cubic experiences high delays close to the buffer size (100 ms) throughout the experiment.
BBR's throughput is often {\em significantly higher} than its fair share, and BBR suffers from high delays even with inelastic cross-traffic; this is consistent with a prior result~\cite{copa}. Setting the sending rate to the estimated link rate is problematic in the presence of cross-traffic. Furthermore, BBR's use of the maximum achieved delivery rate as the ``right'' sending rate has been shown~\cite{bbr-evaluation} to cause BBR to unnecessarily inflate queuing delays.
Vegas suffers from low throughput in the presence of elastic cross-traffic, as it reduces its sending rate in the presence of large delays. Compound ramps up its rate quickly whenever it detects low delays, but behaves like TCP Reno otherwise. Consequently, it attains lower than its fair-share rate in the presence of Cubic flows, and suffers from high delays even with inelastic cross-traffic.
Copa mostly uses the correct mode but it has frequent incorrect mode switches. This increases variability and causes Copa to lose throughput (55 Mbit/s versus 68 Mbit/s for Nimbus) and occasionally suffer high delay fluctuations in the inelastic periods. Against Cubic flows, Copa achieves lower throughput than its fair share mainly because it emulates Reno.
PCC optimizes delivery rates using an online objective function, but this local optimization results in significant unfairness to TCP cross-traffic as well as high queuing delays.
\subsection{Real-World Internet Paths}
\label{s:realworld}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{images/california-harihome2.pdf}
\caption{EC2 California to Host A}
\label{fig:realworld:1}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{images/ireland-nghome2.pdf}
\caption{EC2 Ireland to Host B}
\label{fig:realworld:2}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{images/london-shaileshh2.pdf}
\caption{EC2 Frankfurt to Host C}
\label{fig:realworld:3}
\end{subfigure}
\caption{\small {\bf Performance on three example Internet paths.} The $x$ axis is inverted; better performance is up and to the right. On paths with buffering and no drops ((a) and (b)), Nimbus\xspace achieves the same throughput as BBR and Cubic but reduces delays significantly. On paths with significant packet drops (c), Cubic suffers but Nimbus\xspace achieves high throughput.}
\label{fig:realworld}
\end{figure*}
We ran Nimbus\xspace on the public Internet with a test-bed of five servers and five clients, 25 paths in all. The servers are Amazon EC2 instances located in California, London, Frankfurt, Ireland, and Paris, and are rated for 10 Gbit/s.
We verified that the bottleneck in each case was not the server's Internet link.
We use five residential hosts in different ASes as clients.
To understand the nature of cross traffic on these paths, we ran experiments with Nimbus\xspace's delay-control algorithm (without mode switching) and Cubic each performing bulk transfers over a three-day period. The results showed that scenarios where cross traffic is predominantly inelastic and thus delay-control algorithms can be effective are common; see Appendix~\ref{app:delay-control-motivation} for details.
Next, on each path, we initiated bulk data transfers using Nimbus\xspace, Cubic, BBR, and Vegas.
We ran one minute experiments over five hours on each path, and measured the achieved average throughput and mean delay.
\Fig{realworld} shows throughput and delays over three of the paths. The $x$ (delay) axis is inverted; better performance is up and to the right. We find that Nimbus\xspace achieves high throughput comparable to BBR in all cases, at noticeably lower delays.
Cubic attains high throughput on paths with deep buffers (\Fig{realworld:1} and \Fig{realworld:2}), but not on paths with packet drops or policers (\Fig{realworld:3}).
Vegas attains poor throughput on these paths due to its inability to compete with elastic cross-traffic.
These trends illustrate the utility of mode switching on Internet paths: it is possible to achieve high throughput and low delays over the Internet using delay-control algorithms with the ability to switch to a different competitive mode when required.
\Fig{rw_cdf} summarizes the results on the paths with queuing. On these paths, Nimbus\xspace obtained throughput similar to Cubic and 10\% lower than BBR, but at significantly lower delay (40--50 ms lower than BBR).
\begin{figure}
\centering
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/real_delay.pdf}
\label{fig:rw_cdf:delay}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/real_rate.pdf}
\label{fig:rw_cdf:rate}
\end{subfigure}
\vspace{-5mm}
\caption{\small \textbf{Paths with queuing.} Nimbus\xspace{} reduces the RTT compared to Cubic and BBR (40--50 ms lower), at similar throughput.}
\label{fig:rw_cdf}
\end{figure}
\subsection{How Robust is Elasticity Detection?}
\label{s:robustness-synthetic}
We now evaluate the robustness of our elasticity detection method to cross-traffic and Nimbus's\xspace RTTs, bottleneck link rate, the share of the bottleneck controlled by Nimbus\xspace, bottleneck buffer size, and Nimbus's\xspace pulse size.
Unless specified otherwise, we run Nimbus\xspace as a backlogged flow on a 96 Mbit/s bottleneck link with a 50 ms propagation delay and a 100 ms drop-tail buffer (2 BDP).
We supply Nimbus\xspace with the correct link rate for these experiments, allowing us to study the robustness of elasticity detection with respect to the properties mentioned above.
We consider three categories of synthetic cross-traffic sharing the link with Nimbus\xspace: (i) fully inelastic traffic (Poisson); (ii) fully elastic traffic (backlogged NewReno flows); and (iii) an equal mix of inelastic and elastic. The duration of each experiment is 120 seconds. The main performance metric is {\em accuracy}: the fraction of time that Nimbus\xspace correctly detects the presence of elastic cross-traffic.
\Para{Impact of cross-traffic RTT\xspace.}
We vary the cross-traffic's propagation round-trip time\xspace from 10 ms (0.2$\times$ that of Nimbus\xspace) to 200 ms (4$\times$ that of Nimbus\xspace). \Fig{cross_rtt} shows the mean accuracy across 5 runs for each category of cross-traffic. We find that varying the cross-traffic RTT\xspace does not impact detection accuracy. For purely inelastic and purely elastic traffic, Nimbus\xspace achieves an average accuracy of more than 98\% in all cases, while for mixed traffic, Nimbus\xspace achieves a mean accuracy of 85\% in all cases (a random guess would achieve only 50\%).
\begin{figure}
\includegraphics[width=\columnwidth]{images/cross_accuracy.pdf}
\vspace{-8mm}
\caption{\small Nimbus\xspace classifies purely elastic and inelastic traffic with accuracy greater than 98\%. For a mix of elastic and inelastic traffic, the average accuracy is greater than 80\% in all cases.}
\label{fig:cross_rtt}
\end{figure}
\Para{Impact of pulse size, link rate, and link share controlled by Nimbus\xspace.}
We perform a multi-factor experiment varying Nimbus's\xspace pulse size from 0.0625$\times$ to 0.5$\times$ the link rate, Nimbus's\xspace fair share of the bottleneck link rate from 12.5\% to 75\%, and the bottleneck link rate over 96, 192, and 384 Mbit/s.
The accuracy for purely elastic cross-traffic was always higher than 95\%.
\Fig{heatmap} shows the average detection accuracy over five runs of the other two categories of cross-traffic.
Nimbus\xspace achieves an accuracy of more than 90\% averaged over all the points.
In general, increasing the pulse sizes improves accuracy because Nimbus\xspace can create a more easily observable change in the cross-traffic sending rates.
An increase in the link rate results in higher accuracy for a given pulse size and Nimbus\xspace link share because the variance in the rates of inelastic Poisson cross-traffic reduces with increasing cross-traffic sending rate, reducing the number of false peaks in the cross-traffic FFT.
For the same reason, decreasing Nimbus's\xspace share of the link also results in higher accuracy in general.
However, at low link rates, Nimbus\xspace has low accuracy ($\sim$60\%) when it uses high pulse sizes and controls a low fraction of the link rate. We believe that this is due to a quirk in the way the Linux networking stack reports round-trip time measurements under sudden sending rate changes.
\begin{figure}
\includegraphics[width=\columnwidth]{images/heat_map.pdf}
\caption{\small Nimbus\xspace is robust to variations in link bandwidth and fraction of traffic controlled by it. Increasing pulse size increases robustness.}
\label{fig:heatmap}
\end{figure}
\Para{Impact of buffer size and RTT\xspace.} We vary the buffer size from 0.25 BDP to 4 BDP for each category of cross traffic, at propagation round-trip times\xspace of 25 ms, 50 ms, and 75 ms.
With purely elastic or inelastic traffic, Nimbus\xspace has an average accuracy (across 5 runs) of 98\% or more in all cases but one (see below), while with mixed traffic, the accuracy is always 80\% or more.
With shallow buffers, when the buffer size is less than the product of the delay threshold $x_{t}$ and the bottleneck link rate (i.e., 0.25 BDP when the round-trip time\xspace is 50 ms), Nimbus\xspace classifies all traffic as elastic. However, this low accuracy does not impact the performance of Nimbus\xspace, as Nimbus\xspace achieves its fair-share throughput and low delays (bounded by the small buffer size).
Further, accuracy decreases when Nimbus's\xspace RTT exceeds its pulse period. Since Nimbus's\xspace measurements of rates are over one RTT, any oscillations over a smaller period are hard to observe.
\subsection{Does Nimbus\xspace Need to Control a Large\\Link Share?}
\label{s:robustness-wan-traces}
\begin{figure}
\begin{center}
\begin{subfigure}[b]{0.8\columnwidth}
\includegraphics[width=\columnwidth]{images/pulse_load_30_2.pdf}
\label{fig:pulse_load:30}
\vspace{-5mm}
\subcaption{Cross-Traffic Load: 30\%}
\end{subfigure}
\begin{subfigure}[b]{0.8\columnwidth}
\includegraphics[width=\columnwidth]{images/pulse_load_50_2.pdf}
\label{fig:pulse_load:50}
\vspace{-5mm}
\subcaption{Cross-Traffic Load: 50\%}
\end{subfigure}
\begin{subfigure}[b]{0.8\columnwidth}
\includegraphics[width=\columnwidth]{images/pulse_load90_2.pdf}
\label{fig:pulse_load:90}
\vspace{-5mm}
\subcaption{Cross-Traffic Load: 90\%}
\end{subfigure}
\caption{\small At low cross-traffic loads, Nimbus's\xspace queuing delay approaches that of Vegas while its throughput approaches that of Cubic. At high loads, Nimbus\xspace behaves like Cubic. Increasing pulse size improves switching accuracy and performance.}
\label{fig:pulse_load}
\end{center}
\end{figure}
Must Nimbus\xspace control a significant fraction of the link rate to reap the benefits of mode switching? We evaluate Nimbus's\xspace delay and throughput relative to other algorithms when Nimbus\xspace controls varying shares of the link rate. We generate cross-traffic as described in \Sec{other-bundle-transports} but vary the offered load of the cross-traffic at three levels (30\%, 50\%, and 90\% of the link rate). We measure throughput and delay for two pulse sizes: $0.125\mu$ and $0.25\mu$.
\Fig{pulse_load} shows our findings. First, in all cases, Nimbus\xspace lowers delay without hurting throughput, with the delay benefits most pronounced when cross traffic is low. Second, as cross traffic increases, Nimbus's\xspace delay improvements decrease, because it must stay in TCP-competitive mode more often. Third, Nimbus's\xspace behavior is better at the larger pulse size, but its benefits are generally robust even at $0.125\mu$.
Elasticity detection is less accurate with the smaller pulse amplitude ($0.125\mu$), causing more errors in mode switching. With this pulse size, Nimbus\xspace is less effective at lowering its delays and maintaining its throughput at medium load (50\%), where correct switching matters most for good performance.
\section{Evaluation}
\label{s:eval}
We answer the following questions in this section.
\begin{CompactEnumerate}
\item[\Sec{rw-wl}:] Does Nimbus\xspace achieve low delay and high throughput?
\item[\Sec{copa-compare}:] Is Nimbus's\xspace mode switching more accurate than Copa's?
\item[\Sec{robustness-wan-traces}:] Does Nimbus\xspace need to control a large link share?
\item[\Sec{elasticity-in-real-cross-traffic}:] Does the elasticity detector track the elastic fraction?
\item[\Sec{robustness-synthetic}:] How robust is Nimbus's\xspace elasticity detection?
\item[\Sec{multiple-nimbus-fair-sharing}:] Can multiple Nimbus\xspace flows co-exist well?
\item[\Sec{realworld}:] Does Nimbus\xspace perform well on real Internet paths?
\item[\Sec{versatility}:] Can we use other delay-based\xspace and buffer-filling algorithms?
\end{CompactEnumerate}
We evaluate the elasticity detection method and Nimbus\xspace using the Mahimahi emulator with realistic workloads, and on Internet paths.
Our Internet experiments are over paths between Amazon's EC2 machines around the world, well-connected university networks, and residential hosts.
\input{eval-wan-distribution}
\input{eval-copa-compare.tex}
\input{eval-switching-wan-traces.tex}
\input{eval-empirical-elasticity.tex}
\input{eval-switching-synthetic.tex}
\input{eval-multiple-nimbus.tex}
\input{eval-real-world}
\input{eval-versatility.tex}
\input{limitations}
\subsection{How General Is Mode Switching?}
\label{s:versatility}
We hypothesize that mode switching is a useful building block: a mode switching algorithm can use a variety of congestion control algorithms for its delay-based\xspace and TCP-competitive modes, switching using our elasticity detection method.
We have implemented Cubic, Reno, and MulTCP~\cite{multcp} as competitive-mode algorithms, and Nimbus\xspace delay (\S\ref{s:delay-control}), Vegas, FAST~\cite{fasttcp}, and COPA~\cite{copa} as delay-mode algorithms. In \Fig{vs}, we illustrate two combinations of delay and competitive mode algorithms sharing a bottleneck link with synthetic elastic and inelastic cross-traffic active at different periods during the experiment. The fair-share rate over time is shown as a reference.
Both Reno$+$Nimbus-delay-mode (\Fig{vs:reno}) and Cubic$+$COPA (\Fig{vs:COPA}) achieve
their fair-share rate while keeping the delays low in the absence of elastic cross-traffic.
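The building-block claim above can be made concrete with a small skeleton. The class and the two toy rate rules below are illustrative stand-ins, not the actual update rules of any of the algorithms named above; they only show how interchangeable delay-mode and competitive-mode algorithms hang off a single elasticity verdict.

```python
class ModeSwitcher:
    """Dual-mode controller skeleton: pick which congestion-control
    algorithm computes the next rate based on the elasticity verdict.
    The sub-algorithms are plain callables, so any delay-based or
    TCP-competitive scheme can be plugged in."""

    def __init__(self, delay_algo, competitive_algo, detector):
        self.delay_algo = delay_algo              # vs. inelastic traffic
        self.competitive_algo = competitive_algo  # vs. elastic traffic
        self.detector = detector                  # elasticity detector

    def next_rate(self, measurement):
        if self.detector(measurement):            # elastic cross traffic?
            return self.competitive_algo(measurement)
        return self.delay_algo(measurement)

# Toy stand-ins: a delay rule that backs off above a queuing-delay
# threshold, and a competitive rule that always probes upward.
delay_rule = lambda m: m["rate"] - 4.0 if m["qdelay_ms"] > 12.5 else m["rate"] + 1.0
competitive_rule = lambda m: m["rate"] + 1.0
switcher = ModeSwitcher(delay_rule, competitive_rule,
                        detector=lambda m: m["elastic"])

print(switcher.next_rate({"rate": 40.0, "qdelay_ms": 30.0, "elastic": False}))  # 36.0
print(switcher.next_rate({"rate": 40.0, "qdelay_ms": 30.0, "elastic": True}))   # 41.0
```

Swapping in a different pair of algorithms requires changing only the two callables, which is why combinations like Reno$+$Nimbus-delay or Cubic$+$COPA drop in without restructuring the controller.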
\begin{figure}
\centering
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/overview-nimbus-reno-delay.pdf}
\caption{Reno + Nimbus delay}
\label{fig:vs:reno}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/overview-nimbus-cubic-COPA.pdf}
\caption{Cubic + COPA}
\label{fig:vs:COPA}
\end{subfigure}
\caption{\small{ \textbf{Nimbus\xspace's versatility.} Mode switching with different combinations of delay-based\xspace and TCP-competitive\xspace algorithms.}}
\label{fig:vs}
\end{figure}
\subsection{Does Nimbus\xspace Achieve Low Delays\\and High Throughput?}
\label{s:other-bundle-transports}
\label{s:rw-wl}
We evaluate the delay and throughput benefits of mode switching using trace-driven emulation. We generate cross-traffic from an empirical distribution of flow sizes derived from a wide-area packet trace from CAIDA~\cite{caida-dataset}.
This packet trace was collected at an Internet backbone router on January 21, 2016 and contains over 30 million packets recorded over 60 seconds.
The maximum rate over any 100 ms period is 2.2 Gbit/s.
We generate Cubic cross-flows with flow sizes drawn from this data, with flow arrival times generated by a Poisson process to offer a fixed average load.
One backlogged flow running a fixed algorithm (Nimbus\xspace, Cubic, Vegas, Copa, or BBR) and the WAN-like cross-flows share a 96 Mbit/s Mahimahi\xspace bottleneck link with a propagation RTT of 50 ms and a bottleneck buffer of 100 ms.
We generate cross-traffic to fill 50\% of the link (48 Mbit/s) on average.
Nimbus\xspace uses a queuing delay threshold of 12.5 ms in delay-mode and emulates Cubic in competitive-mode.
\Fig{emp} shows throughput (over 1-second intervals) and packet delays
of Nimbus\xspace, Vegas, Cubic, Copa, and BBR.
Nimbus\xspace achieves a throughput distribution comparable to Cubic and BBR. Unlike Cubic and BBR, however, Nimbus\xspace also achieves low RTTs, with a median only $10$ ms higher than Vegas and $>50$ ms lower than Cubic and BBR. Vegas suffers from low throughput.
Because of mode switching, Nimbus\xspace and Copa both achieve low delays while maintaining an overall high throughput. However, Copa has lower throughput than Nimbus\xspace about 20\% of the time (lowest 20th percentile in the rate CDF). The reason is that Copa makes some incorrect mode switches in periods where the cross traffic includes large elastic flows.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{images/emp-combined2.pdf}
\caption{\small{Nimbus\xspace reduces delay relative to Cubic and BBR while achieving comparable throughput on a cross-traffic workload derived from a packet trace collected at a WAN router. Vegas and Copa also reduce delay but lose throughput.}}
\label{fig:emp}
\end{figure}
We also measured the flow completion time (FCT) of cross-flows and found that Nimbus\xspace benefits cross traffic: it reduces 95th-percentile completion times for cross-traffic flows by 3--4$\times$ compared to BBR, and by 1.25$\times$ compared to Cubic for short ($\leq$ 15 KB) flows. Appendix~\ref{app:cross-traffic-impact} provides details.
\section{Implementation}
\label{s:impl}
\vspace{-5pt}
We implemented Nimbus\xspace as two components: a Linux kernel module and an associated user-space component, which communicate using Linux netlink sockets.
Nimbus\xspace's congestion control algorithm, including the switching logic, delay control, and TCP-competitive mode, is $687$ lines of Go in the user-space program.
The kernel module integrates with the Linux pluggable TCP~\cite{linux-pluggable} API to enforce rates and to collect measurements (the sending and receiving rates and the round-trip time), which it reports to the user-space component.
Our implementation is compatible with existing applications through the socket API. We have achieved transfer rates up to 10 Gbit/s in our testbed and observed rates up to 2 Gbit/s on intercontinental links in the WAN. Unless otherwise specified, our implementation uses $f_S = 5$ and $n_b = 2$.
Because our algorithm runs in user-space, it is possible to implement Nimbus\xspace on DPDK by replacing the kernel module with a similar component which interfaces with DPDK. We plan to explore this in the future.
\section{Introduction}
\label{s:intro}
Achieving high throughput and low delay has been a primary motivation
for congestion control research for decades. Congestion control
algorithms can be broadly classified as {\em loss-based} or {\em
delay-based}. Loss-based
methods like Cubic~\cite{cubic}, NewReno~\cite{newreno}, and Compound~\cite{compound, compound2} reduce their window
only in response to packet loss or explicit congestion notification (ECN) signals, whereas delay-based algorithms like Vegas~\cite{vegas}, FAST~\cite{fasttcp}, LEDBAT~\cite{ledbat},
Sprout~\cite{sprout}, and Copa~\cite{copa} reduce their rates as delays
increase to control packet delays and avoid ``bufferbloat''~\cite{bufferbloat}.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{images/overview-cubic}
\caption{Cubic: High delay}
\label{fig:overview:cubic}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{images/overview-nimbus-delay.pdf}
\caption{Delay-control (\S\ref{s:delay-control}): Throughput drops}
\label{fig:overview:vegas}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{images/overview-nimbus-cubic-delay.pdf}
\caption{Nimbus\xspace: Lower delay}
\label{fig:overview:nimbus}
\end{subfigure}
\caption{\small {\bf Example of mode switching.} In this experiment, we compare Cubic, a delay-control algorithm (\S\ref{s:delay-control}), and Nimbus, which uses mode switching. In each experiment, the flow shares a 48 Mbit/s bottleneck link with one elastic long-running Cubic flow for 60 seconds (starting at $t=30$ sec.), followed by 60 seconds of inelastic traffic sending at 24 Mbit/s. Cubic has a large queuing delay throughout. The delay-control scheme achieves low delay when the cross traffic is inelastic but suffers significant throughput loss when it competes with Cubic. Using mode switching, Nimbus achieves the fair throughput against elastic traffic {\em and} low queuing delay when the cross traffic is inelastic.}
\label{fig:overview}
\end{figure*}
There is, however, a major obstacle to deploying delay-based
algorithms on the Internet: their throughput is dismal when competing
against loss-based senders at a shared bottleneck. The reason
is that loss-based senders increase their rates until they observe
losses, which causes queuing delays to increase; in response to
increasing delays, a competing delay-based flow will reduce its rate, hoping
to reduce delay. The loss-based flow then uses this
freed-up bandwidth. The throughput of the delay-based flow
plummets, but delays do not decrease. Because most traffic on the Internet today uses loss-based algorithms, it is hard
to justify deploying a delay-based scheme.
Is it possible to achieve the low delay of delay-based\xspace algorithms whenever possible, while ensuring that their throughput does not degrade in the presence of schemes like Cubic or NewReno? This question has received some attention recently, with Copa~\cite{copa} proposing an algorithm with two modes---a ``default'' mode that controls delay but which competes poorly against schemes like Cubic, and a ``TCP-competitive'' mode that has no delay control but that competes more aggressively. Copa uses round-trip time (RTT) observations to determine if the RTT time-series is consistent with only other Copa flows sharing the bottleneck (Copa periodically empties network queues); if so, the sender uses the delay-controlled mode, but if not, the sender switches to a TCP-competitive mode.
We introduce a new, more robust, approach to set the best operating mode. Our method explicitly characterizes whether the competing cross traffic is \emph{elastic} or not. An elastic sender (or flow) is one that uses feedback from the receiver to increase its rate
if it perceives that there is more available bandwidth, and slows down if it perceives an increase in cross traffic (e.g., by
being ``ACK clocked'', by reacting to packet losses, etc.). Examples include backlogged Cubic, NewReno, Compound, and BBR~\cite{bbr} flows. When the cross traffic is elastic, delay-based control is susceptible to poor throughput. To cope, it is important to detect this situation and switch to a mode that mimics the behavior of the prevailing cross traffic.
By contrast, inelastic senders do not attempt to extract all available bandwidth, and run in open-loop fashion independent of cross traffic; examples include short TCP connections, application-limited flows, constant bit-rate streams, and flows that are rate-limited by an upstream bottleneck. When the cross traffic is inelastic, delay-based control can achieve high throughput while controlling delays.
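The elastic/inelastic distinction above can be captured in a few lines of simulation. The update rule, starting rate, and step count below are hypothetical; they only illustrate that a closed-loop (elastic) sender tracks the available capacity while an open-loop (inelastic) sender ignores feedback entirely.

```python
def elastic_sender(rate, spare):
    """Closed loop: speed up when the bottleneck has spare capacity,
    back off when it is oversubscribed (spare < 0)."""
    return max(1.0, rate + 0.5 * spare)

def inelastic_sender(rate, spare):
    """Open loop: ignore feedback entirely (e.g., a CBR stream)."""
    return rate

def run(sender, capacity, steps=200):
    """Drive a sender with feedback about spare bottleneck capacity."""
    rate = 10.0
    for _ in range(steps):
        rate = sender(rate, capacity - rate)
    return rate

print(run(elastic_sender, 48.0))    # converges to the 48 Mbit/s capacity
print(run(inelastic_sender, 48.0))  # stays at 10 Mbit/s regardless
```

It is exactly this difference in reaction to bottleneck conditions that the elasticity detector exploits: only the closed-loop sender's rate responds to induced fluctuations.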
\smallskip
\noindent
{\bf Approach.} We have developed an elasticity detector that uses only end-to-end observations to monitor the cross traffic. The sender continuously modulates its rate to create small traffic fluctuations at the bottleneck at a specific frequency (e.g., 5 Hz). It concurrently estimates the cross-traffic rate using RTT samples and its own send and receive rates, and observes whether the cross traffic fluctuates at the same frequency. If it does, the sender concludes that the cross traffic is elastic; otherwise, it concludes that it is inelastic.
This technique works well because elastic and inelastic flows react differently to short-timescale traffic variations at the bottleneck. In particular, most elastic TCP flows are ACK clocked; therefore, fluctuations in the inter-packet arrival times at the receiver, reflected in ACKs, cause similar fluctuations in subsequent packet transmissions. By contrast, the rate of an inelastic flow is not sensitive to traffic variations at the bottleneck.
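The detection loop can be sketched as follows. This is a minimal stand-in that assumes the cross-traffic rate series has already been estimated (the real detector derives it from RTT samples and the flow's own send and receive rates); the sample rate, threshold ratio, and synthetic traces are illustrative.

```python
import cmath
import math
import random

def dft_mag(x, k):
    """Magnitude of the k-th DFT coefficient of the real series x."""
    n = len(x)
    return abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                   for t in range(n)))

def looks_elastic(cross_rate, fs, pulse_hz, ratio=2.0):
    """Declare the cross traffic elastic if its rate series has a
    dominant spectral peak at the sender's pulsing frequency."""
    n = len(cross_rate)
    mean = sum(cross_rate) / n
    x = [v - mean for v in cross_rate]      # drop the DC component
    k_pulse = round(pulse_hz * n / fs)      # DFT bin of the pulse frequency
    peak = dft_mag(x, k_pulse)
    rest = max(dft_mag(x, k) for k in range(1, n // 2) if k != k_pulse)
    return peak > ratio * rest

# Synthetic check: elastic cross traffic echoes the 5 Hz pulse in its
# rate; inelastic (Poisson-like) cross traffic is just noise.
fs, f_p, n = 50.0, 5.0, 250                 # sample rate (Hz), pulse (Hz), samples
rng = random.Random(0)
elastic = [48 + 6 * math.sin(2 * math.pi * f_p * t / fs) + rng.gauss(0, 0.5)
           for t in range(n)]
inelastic = [48 + rng.gauss(0, 2.0) for t in range(n)]
print(looks_elastic(elastic, fs, f_p))      # True: peak at the pulse bin
print(looks_elastic(inelastic, fs, f_p))
```

The key property is that ACK-clocked cross traffic reproduces the induced oscillation at the pulsing frequency, so its spectrum shows a peak there, while open-loop traffic yields a flat noise spectrum.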
We present the design and evaluation of {\em Nimbus\xspace}, a congestion control protocol that uses our elasticity detector to switch between delay-control and TCP-competitive modes. Unlike Copa, Nimbus\xspace supports different algorithms in each mode. We report results with Vegas~\cite{vegas}, Copa's ``default mode''~\cite{copa}, FAST~\cite{fasttcp}, and our Nimbus\xspace delay-control method as delay-based\xspace algorithms, and Cubic, Reno~\cite{newreno}, and MulTCP~\cite{multcp} as TCP-competitive algorithms.
\Fig{overview} shows an example result with Nimbus\xspace.
We have implemented the elasticity detector and Nimbus in Linux using the Congestion Control Plane (CCP)~\cite{ccp-sigcomm18}. Our experimental results show that:
\begin{CompactEnumerate}
\item On an emulated bottleneck link (96 Mbit/s, 50 ms delay, 100 ms buffering) with a WAN cross-traffic trace, Nimbus\xspace achieves throughput comparable to BBR and Cubic but with a median delay roughly 50 ms lower.
\item Nimbus's\xspace reduction in overall delays benefits the cross-traffic significantly: tail flow completion times for cross-traffic sharing the link with Nimbus\xspace are 3--4$\times$ smaller than with BBR both for short ($<$ 15 KB) and long ($>$ 150 MB) flows, and 1.25$\times$ smaller than Cubic for short flows.
\item Compared to Copa, Nimbus\xspace accurately classifies inelastic cross traffic, whereas Copa's accuracy is always lower than 80\%; Copa's classifier also fails when the inelastic cross-traffic volume exceeds 80\%. Moreover, with elastic cross traffic, Copa's accuracy degrades from 85\% to 15\% as the RTT ratio between the cross traffic and the reference flow increases from 1 to 4. By contrast, Nimbus's\xspace accuracy is close to 100\%, degrading to 90\% only when the RTT ratio is 4:1.
\item Our elasticity detection method detects the presence of elastic cross-traffic correctly more than 90\% of the time across a wide range of network characteristics such as cross-traffic RTTs, buffer size, Nimbus\xspace RTTs, bottleneck link rates, and the share of the bottleneck link rate controlled by Nimbus\xspace.
\item On Internet paths, Nimbus\xspace achieves throughput comparable to or better than Cubic on most of the 25 real-world paths we tested, with lower delays on 60\% of paths and similar delays on the other 40\%. Compared to BBR, Nimbus\xspace achieves 10\% lower mean throughput, but at 40--50 ms lower packet delay.
\end{CompactEnumerate}
\section{Limitations}
\label{s:limit}
The detector does not reliably characterize BBR as elastic because BBR responds on time scales longer than an RTT; here, Nimbus uses delay-based control more often than desired with BBR cross traffic. However, we find that Nimbus achieves similar throughput to Cubic in practice: with more than 1 BDP of router buffering, BBR becomes window-limited and is detected as elastic, but with shallow buffers, BBR obtains a disproportionate share of the link compared to both Cubic and Nimbus (Appendix $\S$\ref{app:bbr-comparison}).
The detector correctly characterizes delay-based\xspace schemes like Vegas as elastic, but in response runs a Cubic-like method, causing delays to grow when high throughput could have been achieved with lower delays. Determining if elastic cross traffic is also delay-controlled is an open question.
The detector assumes that the flow has a single bottleneck. Multiple bottlenecks can add noise to Nimbus\xspace's rate measurements, preventing accurate cross-traffic estimation. The challenge is that the spacing of packets at one bottleneck is not preserved when traversing the second bottleneck.
\section{Impact of varying cross traffic load}
\label{app:load-var}
\Fig{load-var1} shows that Nimbus\xspace's RTT increases as the cross-traffic load increases.
Table~\ref{tab:var-load2} shows the impact of cross-traffic load on Nimbus\xspace's switching mechanism. When the cross-traffic load is low ($<20\%$) or high ($>80\%$), Nimbus\xspace's classification error is low, and the ground truth (the percentage of elastic cross traffic relative to link capacity) remains largely constant throughout. For loads between 20\% and 80\%, switching happens often; under these conditions, Nimbus\xspace makes classification errors 25\% of the time.
\begin{table}
\begin{center}
\begin{tabular}{||c c||}
\hline
Cross-Traffic Load (\%) & Misclassification (\%)\\
\hline
10 & 9.6\\
30 & 28.4\\
50 & 24.1\\
70 & 26.3\\
90 & 14.9\\
\hline
\end{tabular}
\end{center}
\caption{\small {\bf Elasticity misclassification vs. cross-traffic load---}The elasticity estimation becomes more accurate at both low and high cross-traffic loads.}
\label{tab:var-load2}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{images/emp-var-load.pdf}
\caption{\small {\bf Average RTT vs. cross traffic load---}Nimbus\xspace reduces delays more effectively at low cross traffic load.}
\label{fig:load-var1}
\end{figure}
\section{Motivation and Key Ideas}
\label{s:motivation}
Congestion control is done at the level of individual TCP connections.
\section{Multiple Nimbus Flows}
\label{s:multinimbus}
What happens when a bottleneck is shared by multiple Nimbus\xspace flows running on different senders?
The goal is for all the Nimbus\xspace flows to remain in delay-control mode when there is no other elastic cross traffic, and compete well with elastic cross traffic otherwise. The problem is that pulsing may create adverse interactions that confuse the different Nimbus\xspace instances.
One approach is for the Nimbus\xspace flows to all pulse at the same frequency. However, in this case, they will all detect a peak in the FFT at the oscillation frequency. They will all then stay in TCP-competitive mode and won't be able to maintain low delays, even when there is no elastic cross traffic. A second approach is for different Nimbus\xspace flows to pulse at different frequencies. The problem with this approach is that it cannot scale to more than a few flows, because the set of possible frequencies is too small (recall that we require $T/4 \approx \mbox{RTT}$).
\smallskip
\noindent{\bf Watchers, meet Pulser.}
We propose a third approach. One of the Nimbus\xspace{} flows assumes the role of the {\em pulser}, while the others are {\em watchers}. The coordination between them involves no explicit communication; in fact, each Nimbus\xspace flow is unaware of the identities, or even existence, of the others.
The pulser\xspace sends data by modulating its transmissions on the asymmetric sinusoid. The pulser\xspace uses two different frequencies: $f_{pc}$ in TCP-competitive mode and $f_{pd}$ in delay-control mode, e.g., 5 Hz and 6 Hz. The values of these frequencies are fixed and agreed upon beforehand. The variation in queuing due to pulsing causes a watcher\xspace flow's receive rate to pulse at the same frequency as the pulser\xspace. A watcher\xspace therefore computes the FFT of its receive rate at the two possible pulsing frequencies and picks the mode corresponding to the larger value, matching the pulser\xspace's mode. With this method, a watcher\xspace flow can determine the mode without estimating the bottleneck link rate or controlling a significant portion of the link rate.
For multiple Nimbus\xspace{} flows to maintain low delays when there is no elastic cross traffic on the link, the pulser\xspace flow must classify watcher\xspace traffic as inelastic. From the pulser\xspace's perspective, the watcher\xspace flows are part of the cross traffic; thus, to avoid confusing the pulser\xspace, the rate of the watchers\xspace must not react to the pulser\xspace's pulses. To achieve this, a watcher\xspace applies a low-pass filter to its transmission rate before sending data. The low-pass filter cuts off all frequencies in the sending rate that exceed $\min(f_{pc}, f_{pd})$.
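One simple way to realize the watcher\xspace's filter is to zero out all FFT bins at or above the cutoff. A minimal sketch under that assumption follows; the function name and sampling parameters are hypothetical, and a production filter would likely operate online rather than on a whole window.

```python
import numpy as np

def smooth_watcher_rate(rate_samples, sample_hz, f_pc=5.0, f_pd=6.0):
    """Low-pass filter a watcher's planned sending rate so it carries no
    energy at the pulsing frequencies, keeping the pulser's view clean."""
    cutoff = min(f_pc, f_pd)
    spectrum = np.fft.rfft(np.asarray(rate_samples, dtype=float))
    freqs = np.fft.rfftfreq(len(rate_samples), d=1.0 / sample_hz)
    spectrum[freqs >= cutoff] = 0.0  # drop everything at or above the cutoff
    return np.fft.irfft(spectrum, n=len(rate_samples))
```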
\smallskip
\noindent
{\bf Pulser election.}
A decentralized and randomized election process decides which flow is the pulser\xspace and which are watchers\xspace. If a Nimbus\xspace flow determines that there is no pulser\xspace (by seeing that there is no peak in the FFT at the two potential pulsing frequencies), then it decides to become a pulser with a probability proportional to its transmission rate:
\begin{equation}
p_{i} = \frac{\kappa\tau}{\textnormal{FFT Duration}} \times \frac{R_{i}}{\hat{\mu}_{i}}.
\label{eq:pi}
\end{equation}
Each flow makes decisions periodically, e.g., every $\tau = 10$ ms, and $\kappa$ is a constant. Because the FFT duration is 5 seconds, each $p_i$ is small (note that $\sum_i R_i \leq \mu$), but since flows make decisions every $\tau$ seconds, one of them will eventually become a pulser.
If the estimates $\hat{\mu}_{i}$ are equal to the true bottleneck rate $\mu$, then the expected number of flows that become pulsers over the FFT duration is at most $\kappa$. To see why, note that the expected number of pulsers is equal to the sum of the probabilities in Eq.~\eqref{eq:pi} over all the decisions made by all flows in the FFT duration. Since $\sum_i R_i \leq \mu$ and each flow makes ($\textnormal{FFT Duration} / \tau$) decisions, these probabilities sum up to at most $\kappa$.
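This bound can be checked numerically. The sketch below (function name and default parameters are ours) sums the election probabilities of Eq.~\eqref{eq:pi} over all decisions in one FFT duration, assuming perfect estimates $\hat{\mu}_i = \mu$.

```python
def expected_pulsers(rates, mu, kappa=1.0, tau=0.01, fft_duration=5.0):
    """Expected number of pulsers elected in one FFT duration: each flow i
    decides every tau seconds with probability
        p_i = (kappa * tau / fft_duration) * (R_i / mu_hat_i),
    and here every flow's estimate mu_hat_i equals the true rate mu."""
    decisions_per_flow = fft_duration / tau
    p = [(kappa * tau / fft_duration) * (r / mu) for r in rates]
    return decisions_per_flow * sum(p)
```

When the flow rates sum to the full bottleneck rate, the expectation equals $\kappa$; when they sum to less, it is strictly smaller.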
It is also not difficult to show that the number of pulsers within an FFT duration has approximately a Poisson distribution with mean $\kappa$~\cite{probabilitybook}. Thus the probability that, after one flow becomes a pulser, a second flow also becomes a pulser before it can detect the pulses of the first in its FFT measurements is $1 - e^{-\kappa}$. Therefore, $\kappa$ controls the tradeoff between fewer conflicts and a longer time to elect a pulser.
For any value of $\kappa$, there is a non-zero probability of more than one concurrent pulser. If there are multiple pulsers, then each pulser will observe that the cross traffic has more variation than the variation it creates with its own pulses. This can be detected by comparing the magnitude of the FFT of the cross traffic $z(t)$ at $f_p$ with that of the pulser's receive rate $R(t)$ at $f_p$. If the cross traffic's FFT has the larger magnitude at $f_p$, Nimbus\xspace concludes that there must be multiple pulsers and becomes a watcher with a fixed probability.
\smallskip
\noindent{\bf Remark.}
The multiple-Nimbus\xspace scheme for coordinating pulsers bears resemblance to receiver-driven layered multicast (RLM) congestion control~\cite{rlm}. In RLM, a sender announces to the multicast group that it is conducting a probe experiment at a higher rate, so any losses incurred during the experiment should not be heeded by the other senders. By contrast, in Nimbus\xspace, there is no explicit coordination channel, and the pulsers and watchers coordinate via their independent observations of cross traffic patterns.
\if 0
Using a similar argument and assuming $\tau$ to be small, it will take another
$$X_{2} = \sum_{i=0}^{n}\frac{p_{i}}{\sum_{j=0}^{n}p_{j}}(\frac{\mu^{*}}{\sum_{j=0,j\ne i}^{n}R_{j}}) * \textnormal{Duration of FFT}$$
for a second Pulser to be elected. Since $X_{2}$ is greater than Duration of FFT, this rule ensures that in expectation atmost 1 pulser will be elected. Not only that this rule gives higher preference to bigger watcher flows over smaller ones.
But, what if multiple flows become Pulser at the same time?
\pg{Talk about cases where pulser and watcher election is not required. Say that FFT used by watcher is smaller to increase switching speed}
\fi
\section{Nimbus Protocol}
\label{s:nimbus-protocol}
This section describes a protocol that uses mode switching. It has a TCP-competitive mode, in which the sender transmits using Cubic's congestion avoidance phase, and a delay-control mode, which uses the method described below. The sender switches between the two modes using the elasticity detector described in the previous section, transmitting data at the time-varying rate dictated by the congestion control and modulating its transmissions on the asymmetric sinusoid of \Fig{semi-sine}.
\subsection{Delay-Control Rule}
\label{s:delay-control}
\if 0
\Para{Competitive mode:} Nimbus\xspace{} emulates TCP Cubic for TCP-competitive mode. The choice of Cubic is guided by the fact that Cubic is the default congestion control algorithm on linux. This suggests that most of elastic cross traffic is composed of Cubic flows and in order to compete with the elastic flows fairly, Nimbus\xspace{} must emulate Cubic. Nimbus\xspace calculates the congestion window using Cubic's congestion window update rule and sets the pre-pulsing rate to (congestion window)/$RTT$. All the Cubic congestion control parameters were chosen to match the default linux implementation.
\fi
Nimbus\xspace uses a delay-control rule inspired by ideas in XCP~\cite{xcp}, TIMELY~\cite{timely}, and PIE~\cite{pie}. It seeks to achieve high throughput while maintaining a specified threshold queuing delay, $d_t > 0$. A positive $d_t$ ensures that the link is rarely under-utilized, and allows us to estimate $z(t)$. The protocol seeks to deliver an ideal rate of $\mu - z(t)$. The reason we designed our own method rather than use an existing one like Vegas~\cite{vegas} is that our ability to estimate $z$ yields tighter control of delay than prior protocols.
The control rule has two terms. The first seeks to achieve the ideal rate, $\mu - z$. The second seeks to maintain a specified threshold queuing delay, $d_t$, to prevent the queue from both emptying and growing too large.
Denote the minimum RTT by $x_{\min}$ and the current RTT by $x_t$. The Nimbus\xspace delay-control rule is
\begin{equation}
S(t+\delta) =
(1-\alpha) S(t)
+ \alpha (\mu - z(t))
+ \beta\frac{\mu}{x_t}
(x_{\min} + d_t - x_t).
\label{eq:delayrule}
\end{equation}
Prior work (XCP~\cite{xcp} and RCP~\cite{rcpstable}) has established stability bounds on $\alpha$ and $\beta$ for nearly identical control laws. Our implementation uses $\alpha=0.8$, and $\beta=0.5$.
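As a sketch, one update step of Eq.~\eqref{eq:delayrule} can be written as follows; the function and variable names are ours, not from the implementation.

```python
def delay_control_update(S, mu, z, x_t, x_min, d_t, alpha=0.8, beta=0.5):
    """One step of the Nimbus delay-control rule: an EWMA pull toward the
    ideal rate mu - z, plus a term steering the RTT toward x_min + d_t."""
    return ((1 - alpha) * S
            + alpha * (mu - z)
            + beta * (mu / x_t) * (x_min + d_t - x_t))
```

At the intended operating point, $S = \mu - z$ and $x_t = x_{\min} + d_t$, the rule is a fixed point; when the RTT exceeds the target, the second term pulls the rate down.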
\subsection{Mode Switching}
Nimbus\xspace uses the pulsing parameters described in $\S$\ref{s:pracelas}, calculating $S$ and $R$ over one window's worth of packets and setting $T/4$ to the minimum RTT. It computes the FFT over multiple pulses and uses the $z$ measurements reported in the last 5 seconds to calculate elasticity ($\eta$) using Eq. \eqref{eq:elasticity}.
We found earlier (\Fig{elasticity:cdf}) that a good threshold for $\eta$ is 2. To prevent frequent mode switches, Nimbus\xspace applies a hysteresis to this threshold before switching modes. When in delay-control mode, $\eta$ must exceed 2.25 to switch to TCP-competitive mode, and when in TCP-competitive mode, $\eta$ must be lower than 2 to switch to delay-control mode.
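The hysteresis logic amounts to a three-line state machine. A minimal sketch with the thresholds from the text follows; the function name and mode labels are assumptions.

```python
def next_mode(mode, eta, to_competitive=2.25, to_delay=2.0):
    """Hysteresis on the elasticity metric: require eta > 2.25 to leave
    delay-control mode and eta < 2.0 to leave TCP-competitive mode, so
    small fluctuations around 2 do not cause mode flapping."""
    if mode == "delay" and eta > to_competitive:
        return "competitive"
    if mode == "competitive" and eta < to_delay:
        return "delay"
    return mode
```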
It is important that the rate be initialized carefully on a mode switch. When switching to TCP-competitive mode, Nimbus\xspace sets the rate (and equivalent window) after switching to the rate that was being used five seconds ago (five seconds is the duration over which we calculate the FFT). The reason is that the elasticity detector takes five seconds to detect the presence of elastic cross traffic, and the arrival of elastic traffic over the past five seconds would have reduced the delay mode's rate. We set the new congestion window to the inflection point of the Cubic function, so the rate over the next few RTTs will rise faster in each successive RTT, and we reset \texttt{ssthresh} to this congestion window to avoid entering slow start.
When switching to delay mode, Nimbus\xspace resets the delay threshold to the current value of $x_t - x_{\min}$, rather than to the desired threshold $d_t$, and linearly decreases the threshold used in the control rule to the desired $d_t$ over several RTTs. The reason is that $x_t - x_{\min}$ is likely to have grown much larger than $d_t$ in TCP-competitive mode, and using $d_t$ immediately would cause the sender to cut its rate sharply and lose throughput. Our approach instead drains the queues gradually without an abrupt drop in throughput, and also provides a safeguard against incorrect switches to delay mode.
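The threshold ramp can be sketched as a linear interpolation. The names and the ramp length are illustrative; the text says only "several RTTs."

```python
def annealed_threshold(d_start, d_t, rtts_elapsed, ramp_rtts):
    """Delay threshold used by the control rule after switching to delay
    mode: start at the current queuing delay d_start = x_t - x_min and
    decrease linearly to the desired threshold d_t over ramp_rtts RTTs."""
    if rtts_elapsed >= ramp_rtts:
        return d_t
    fraction = rtts_elapsed / ramp_rtts
    return d_start + fraction * (d_t - d_start)
```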
\if 0
There are some special cases to handle. First, if there is no observable peak in FFT of $z$, the sender has no good estimate of the elasticity, so Nimbus\xspace defaults to competitive mode. Second, when the current rate is less than 10\% of the bottleneck link, Nimbus\xspace switches to TCP-competitive mode as it has too little of the bottleneck for delay mode to be effective. Third, when the current rate is greater than 90\% of bottleneck rate switch to delay mode, because Nimbus\xspace is the dominant contributor to that network path.
\fi
\subsection{Implementation}
\label{s:implementation}
We implemented Nimbus\xspace using the congestion control plane (CCP)~\cite{ccp-sigcomm18}, which provides a convenient way to express the signal processing operations in user-space code while achieving high rates. The implementation runs at up to 40 Gbit/s using the Linux TCP datapath. All the Nimbus\xspace software is in user-space. The Linux TCP datapath uses a CCP library to report to our Nimbus\xspace implementation the estimates of $S$, $R$, the RTT, and packet losses every 10 ms. $S$ and $R$ are measured using the methods built in TCP BBR over one window's worth of packets. Our implementation of Nimbus\xspace is rate-based, and sets a cap on the congestion window to prevent uncontrolled ``open-loop'' behavior when ACKs stop arriving.
\if 0
Unlike traditional congestion control algorithms, Nimbus\xspace{} doesn't make per-ACK congestion control decisions and is in fact a rate based protocol. Nimbus\xspace{} leverages the Congestion Control Plane (CCP) ~\cite{ccp-hotnets} platform, to collect $S$, $R$, $RTT$ and loss measurements and uses them for setting the sending rate. CCP reports these measurements every 10ms\footnote{This is the minimum inter-measurement report time for CCP} to our user space program running Nimbus\xspace.
\fi
On every measurement report, Nimbus\xspace (1) updates the congestion control variables of the current mode and calculates a pre-pulsing sending rate, (2) superimposes the asymmetric pulse in \Fig{semi-sine} to obtain the actual sending rate, and (3) generates the FFT of $z$ and makes a decision to switch modes using a 5-second measurement period for the FFT.
We note that calculating $z$ requires an estimate of the bottleneck link rate ($\mu$). There has been much prior work~\cite{packettrain, dovrolis2001packet, downey1999using, lai2000measuring, jacobson1997pathchar, lai2001nettimer, mar2000pchar} in estimating $\mu$, any of which could be incorporated in Nimbus\xspace{}. Like BBR, our current implementation uses the maximum received rate, taking care to avoid incorrect estimates due to ACK compression and dilation.
\subsection{Visualizing Nimbus and Other Schemes}
\label{s:switching-illustration}
\input{eval-overall-perf}
\section{System Model and Goals}
\label{s:model}
\label{s:overview}
A persistent traffic source, unlike one that comes and goes, can continuously learn the congestion state of the network, enabling it to estimate and react to cross traffic more accurately.
\begin{figure}[t]
\includegraphics[width=0.9\columnwidth]{images/sys-model2}
\caption{\small {\bf Nimbus\xspace system model---}An aggregate sender sends packets to an aggregate receiver. The network has cross traffic from flows outside of the aggregate running a variety of protocols. The time-varying total rate of this cross traffic is $z(t)$, of which $z_i(t)$ is ``inelastic''; the rest is transmitted by $n_x(t)$ long-running TCP flows. The bottleneck link rate is $\mu$. The control algorithm's goal is to compute a time-varying transmission rate, $R(t)$ for the aggregate, such that it behaves like $n_b$ long-running TCP flows. We denote the rate of traffic received by the Nimbus\xspace{} receiver by $R_{\mbox{out}}$.}
\label{fig:sysmod}
\end{figure}
\Fig{sysmod} shows the system model for Nimbus\xspace's design and introduces some notation. The Nimbus\xspace sender transmits packets aggregated from many flows at a time-varying total rate, $R(t)$. The sender chooses which packet to transmit according to a scheduling policy across aggregated flows, such as round-robin, shortest-flow-first (SRPT), etc. Although we focus on determining the aggregate rate, we also evaluate the impact of a few bundle scheduling policies in $\S$\ref{s:scheduling}.
\smallskip
\noindent{\bf What is the correct rate, $R(t)$ for the aggregate?} One answer could be to estimate the difference between the bottleneck link capacity and the bandwidth used by the cross traffic: $\mu - z(t)$. This is wrong because when cross traffic is elastic, this quantity will be close to 0, because elastic flows will tend to grab all available bandwidth. In the presence of elastic flows, Nimbus\xspace must compete for its fair share of the bottleneck bandwidth, not just settle for the left-over bandwidth.
We define the desired value for $R(t)$ as the average rate that $n_b$ long-running TCPs would achieve under the same cross traffic conditions. The choice of $n_b$ is a local policy decision and affects how aggressively the bundle behaves; one setting could be the number of TCP flows longer than some length in the bundle.
If the inelastic portion of the cross traffic is $z_i(t)$, and the cross traffic also contains $n_x$ long-running elastic flows, then we define the desired rate for the bundle to be
\vspace{5pt}
\begin{equation}
R(t) = (\mu - z_i(t))\frac{n_b}{n_b + n_x}.
\label{eqn:nimbusgoal}
\end{equation}
\vspace{5pt}
The intuition for this equation is that inelastic traffic does not adapt its rate to congestion; hence, elastic flows in the cross traffic and in Nimbus\xspace should divide the remaining capacity, $\mu - z_i(t)$, in the ratio $n_x$ to $n_b$.\footnote{Nimbus\xspace may also include inelastic flows, but we can account for these by setting $n_b$ higher to ensure that Nimbus\xspace does not put itself at a disadvantage relative to cross traffic.}
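A small worked example of Eq.~(\ref{eqn:nimbusgoal}) (the numbers are illustrative):

```python
def target_rate(mu, z_i, n_b, n_x):
    """Desired bundle rate: the bundle's n_b emulated flows share the
    capacity left after inelastic traffic with the n_x elastic cross flows."""
    return (mu - z_i) * n_b / (n_b + n_x)
```

For instance, with a 100 Mbit/s bottleneck, 40 Mbit/s of inelastic cross traffic, $n_b = 3$, and $n_x = 1$, the bundle should average $60 \times 3/4 = 45$ Mbit/s.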
Of course, Nimbus\xspace does not know the value of $z(t)$, nor whether the cross traffic includes elastic flows. Nonetheless, achieving the rate in Eq.~(\ref{eqn:nimbusgoal}) by itself is not difficult. Any scheme that emulates TCP's congestion control rules for $n_b$ flows will achieve this rate on average. However, as explained in $\S$\ref{s:intro}, Nimbus\xspace also aims to minimize self-inflicted delay. Whenever cross traffic is inelastic, we can potentially achieve the same rate with much lower delay than traditional loss-based control algorithms (especially in networks with deep buffers). This makes the problem much more challenging and requires new techniques to infer the elasticity of cross traffic using observations by the Nimbus\xspace sender and receiver.
\section{Real Internet Paths}
\label{app:realworld}
We tested Nimbus\xspace on $18$ paths in total; we report the mean throughput and delay each protocol achieved on each path in \Fig{full-realworld}.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{images/realworld-appendix/ec2london-ng-home}
\caption{EC2 London -- Host A}
\label{fig:realworld:london:A}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{images/realworld-appendix/ec2london-akshayhome}
\caption{EC2 London -- Host B}
\label{fig:realworld:london:B}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{images/realworld-appendix/ec2london-deepti-home}
\caption{EC2 London -- Host C}
\label{fig:realworld:london:C}
\end{subfigure}
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{images/realworld-appendix/ec2brazil-ng-home}
\caption{EC2 Brazil -- Host A}
\label{fig:realworld:brazil:A}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{images/realworld-appendix/ec2brazil-akshayhome}
\caption{EC2 Brazil -- Host B}
\label{fig:realworld:brazil:B}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{images/realworld-appendix/ec2brazil-deepti-home}
\caption{EC2 Brazil -- Host C}
\label{fig:realworld:brazil:C}
\end{subfigure}
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{images/realworld-appendix/ec2sydney-ng-home}
\caption{EC2 Sydney -- Host A}
\label{fig:realworld:sydney:A}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{images/realworld-appendix/ec2sydney-akshayhome}
\caption{EC2 Sydney -- Host B}
\label{fig:realworld:sydney:B}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{images/realworld-appendix/ec2sydney-deepti-home}
\caption{EC2 Sydney -- Host C}
\label{fig:realworld:sydney:C}
\end{subfigure}
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{images/realworld-appendix/ec2california-ng-home}
\caption{EC2 California -- Host A}
\label{fig:realworld:california:A}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{images/realworld-appendix/ec2california-akshayhome}
\caption{EC2 California -- Host B}
\label{fig:realworld:california:B}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{images/realworld-appendix/ec2california-deepti-home}
\caption{EC2 California -- Host C}
\label{fig:realworld:california:C}
\end{subfigure}
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{images/realworld-appendix/nimbus2001-ng-home}
\caption{EC2 University -- Host A}
\label{fig:realworld:univ:A}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{images/realworld-appendix/nimbus2001-akshayhome}
\caption{EC2 University -- Host B}
\label{fig:realworld:univ:B}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{images/realworld-appendix/nimbus2001-deepti-home}
\caption{EC2 University -- Host C}
\label{fig:realworld:univ:C}
\end{subfigure}
\caption{\small {\bf Evaluation on Internet paths}---Throughput and latency performance for BBR, Cubic, Nimbus\xspace, and Vegas on 18 Internet paths spanning 6 EC2 instances across 5 continents.}
\label{fig:full-realworld}
\end{figure*}
\section{Related Work}
\label{app:related}
Vegas~\cite{vegas} aims to maintain between $\alpha$ and $\beta$ packets in the bottleneck queue using AIAD on its window.
FAST TCP~\cite{fasttcp} exploits delay information to set its congestion window to maintain a fixed number of packets in the bottleneck queue. While this goal is similar to Vegas, FAST is tuned for higher rates.
TCP Nice~\cite{tcp-nice} and LEDBAT~\cite{rfc6817} are delay-based algorithms that utilize spare
bandwidth without hurting ``foreground'' transfers like web traffic. These schemes don't perform well when competing with buffer-filling algorithms.
\if 0
Nice
maintains two estimates of RTTs, a min RTT and a max RTT, through which it
tracks an estimate of the maximum queueing delay possible. Along with Vegas-like
increase/decrease rules, Nice multiplicatively decreases its congestion window
whenever a fraction $f$ of its packets experience queueing delay more than a
fraction $t$ of the maximum queueing delay possible. LEDBAT~\cite{rfc6817} is
another delay-based algorithm with a set target delay. It increases its
congestion window proportional to the difference between the target and the
current delay, i.e., target - current.
FAST, Nice and LEDBAT can achieve high throughput and low delays when there is a
low volume of cross-traffic. But they are not designed to be
throughput-competitive with loss-based algorithms that probe aggressively for
bandwidth. Nimbus\xspace detects the presence of aggressive cross-traffic, and competes
fairly by switching to its TCP-competitive mode.
\fi
Compound TCP~\cite{compound} maintains both a loss-based window and
a delay-based window, and transmits data based on the sum of the two
windows. The delay-based window is updated with rules similar to Vegas. Compound
achieves high throughput as soon as the link utilization drops, because of its
delay window, but does not control delays because of its loss window.
\if 0
Equation-based congestion control
(EBCC~\cite{ebcc}) sends traffic at a rate computed to match TCP under similar
RTT and drop probabilities in the network. EBCC maintains a rate of `loss
events,' where a loss event may include multiple packets lost within an RTT. The
loss event rate and the RTT are used to compute the desired transmission rate
using the TCP throughput equation. EBCC uses packet loss as a signal for
congestion control, and hence cannot achieve a low queueing delay. In contrast,
Nimbus\xspace controls delays directly.
MulTCP~\cite{multcp} emulates the behavior of an aggregate of $N$ TCP loss-based
algorithms. It evolves its congestion window as follows. It increases its
congestion window by $N/cwnd$ for each ACK, and decreases it to $cwnd *
(N-0.5)/N$ after each drop. However, as our experiments show (\S\ref{s:eval}),
MulTCP sending rates are very bursty and variable, and MulTCP competes unfairly
with other loss-based TCPs. Nimbus\xspace's TCP-competitive mode is smoother and fairer
to other elastic traffic than MulTCP.
\fi
BBR~\cite{bbr} maintains estimates of the bottleneck bandwidth ($b$) and minimum
propagation delay ($d$). The bandwidth estimate tracks the maximum packet
delivery rate to the receiver, while the delay estimate tracks the minimum delay
achieved. BBR paces traffic at a rate $b$ keeping $b * d$ packets in flight.
Unfortunately, the presence of other loss-based algorithms sharing the
bottleneck buffer inflates BBR's delay estimate $d$. Depending on the bottleneck
buffer size, either the BBR-induced losses limit the throughput of the
other TCP flows, or BBR's delay probing reduces its in-flight data, resulting in
lower throughput for BBR itself~\cite{bbr-reno-graph}.
\if 0
We believe both
scenarios are undesirable. In contrast, Nimbus\xspace competes fairly by detecting the
presence of other elastic traffic through a novel elasticity detector, and
switches to TCP-competitive mode.
\fi
\if 0
Switch-based algorithms like XCP~\cite{xcp}, RCP~\cite{rcp} and PIE~\cite{pie}
employ feedback loops much similar to Nimbus\xspace's delay mode, using spare bandwidth
and queueing delay to control endpoint sending rates. In contrast, Nimbus\xspace delay
mode applies this control end-to-end by estimating the spare bandwidth through
the derivative of the queueing delay. This estimation requires us to retain
nonzero queueing delays; this is anyway beneficial to maintain high utilization.
\fi
\section{Related Work}
\label{s:related}
\if 0
Vegas aims to maintain between $\alpha$ and $\beta$ packets in the bottleneck queue using AIAD on its window. Similar to Vegas, FAST~\cite{fasttcp} exploits delay information to set its congestion window to maintain a fixed number of packets in the bottleneck queue. TCP Nice~\cite{tcp-nice} and LEDBAT~\cite{rfc6817} are delay-based algorithms to utilize spare bandwidth without hurting ``foreground'' transfers like web traffic. Timely~\cite{timely} is a delay-based algorithm designed to work in data centers over RDMA. These schemes don't perform well when competing with buffer-filling algorithms.
\fi
\if 0
TCP Nice maintains two estimates of RTTs, a min RTT and a max RTT, through which it
tracks an estimate of the maximum queueing delay possible. Along with Vegas-like
increase/decrease rules, Nice multiplicatively decreases its congestion window
whenever a fraction $f$ of its packets experience queueing delay more than a
fraction $t$ of the maximum queueing delay possible. LEDBAT~\cite{rfc6817} is
another delay-based algorithm with a set target delay. It increases its
congestion window proportional to the difference between the target and the
current delay, i.e., target - current.
\fi
\if 0
FAST, Nice and LEDBAT can achieve high throughput and low delays when there is a
low volume of cross-traffic. But they are not designed to be
throughput-competitive with loss-based algorithms that probe aggressively for
bandwidth. Nimbus\xspace detects the presence of aggressive cross-traffic, and competes
fairly by switching to its TCP-competitive mode.
\fi
\if 0
Compound TCP~\cite{compound} maintains both a loss-based window and
a delay-based window, and transmits data based on the sum of the two
windows. Compound achieves high throughput as soon as the link utilization drops because of its delay window, but does not control self-inflicted delays because of its loss window.
\fi
\if 0
Equation-based congestion control
(EBCC~\cite{ebcc}) sends traffic at a rate computed to match TCP under similar
RTT and drop probabilities in the network. EBCC maintains a rate of `loss
events,' where a loss event may include multiple packets lost within an RTT. The
loss event rate and the RTT are used to compute the desired transmission rate
using the TCP throughput equation. EBCC uses packet loss as a signal for
congestion control, and hence cannot achieve a low queuing delay. In contrast,
Nimbus\xspace controls delays directly.
MulTCP~\cite{multcp} emulates the behavior of an aggregate of $N$ TCP loss-based
algorithms. It evolves its congestion window as follows. It increases its
congestion window by $N/cwnd$ for each ACK, and decreases it to $cwnd *
(N-0.5)/N$ after each drop. However, as our experiments show (\S\ref{s:eval}),
MulTCP sending rates are very bursty and variable, and MulTCP competes unfairly
with other loss-based TCPs. Nimbus\xspace's TCP-competitive mode is smoother and fairer
to other elastic traffic than MulTCP.
\fi
BBR~\cite{bbr} maintains estimates of the bottleneck bandwidth ($b$) and minimum propagation delay ($d$). It paces traffic at a rate $b$ while keeping $b\times d$ packets in flight. The presence of other loss-based algorithms sharing the bottleneck buffer inflates BBR's delay estimate, $d$. Depending on the bottleneck buffer size, either the BBR-induced losses limit the throughput of the other TCP flows, or BBR's delay probing reduces its in-flight data, resulting in lower throughput for BBR itself~\cite{bbr-reno-graph}.
Vegas~\cite{vegas}, FAST~\cite{fasttcp}, and Copa~\cite{copa} all aim to maintain a bounded number of packets in the bottleneck queue using different control rules. TCP Nice~\cite{tcp-nice} and LEDBAT~\cite{rfc6817} are delay-based algorithms that use spare bandwidth without hurting ``foreground'' transfers. Timely~\cite{timely} is a delay-based algorithm designed for RDMA in datacenters. Other than Copa, these schemes perform poorly when competing with buffer-filling algorithms. Copa has a mechanism to switch into a TCP-competitive mode, but it relies on low-level properties of Copa's dynamics and does not generalize.
\if 0
PCC~\cite{pcc} adapts the sending rate based on ``micro-experiments'' that evaluate how changing the sending rate (multiplicatively) impacts performance according to a specified utility function. PCC defines two utility functions: one targeting high throughput and low loss, and the other targeting low delay. PCC's behavior depends on the utility function; unlike Nimbus\xspace, it does not achieve both low delay and compete with elastic loss-based TCPs.
\fi
\if 0
Prior work on estimating cross-traffic rate~\cite{strauss2003measurement,jain2002pathload,hu2003evaluation} use probe packets, and are slow in estimating cross traffic. For example pathload~\cite{jain2002pathload} takes 10-30 s to generate a single cross traffic estimate. In contrast, our method is in-band and it relies on the property that the sender is backlogged.
\fi
The multiple-Nimbus\xspace scheme for coordinating pulsers bears resemblance to receiver-driven layered multicast (RLM) congestion control~\cite{rlm}. In RLM, a sender announces to the multicast group that it is conducting a probe experiment at a higher rate, so any losses incurred during the experiment should not be heeded by the other senders. By contrast, in Nimbus\xspace, there is no explicit coordination channel, and the pulsers and watchers coordinate via their independent observations of cross traffic patterns.
\if 0
The idea of sharing congestion information among connections was explored in Congestion Manager~\cite{CM} for traffic between a single pair of hosts. Nimbus\xspace can provide this benefit for larger traffic aggregates, such as all traffic between two organizations.
\fi
\section{Related Work}
\label{s:related}
The closest previous schemes to Nimbus\xspace are Copa~\cite{copa} and BBR~\cite{bbr}. These schemes also periodically modulate their sending rates, but they do not infer the elasticity of cross traffic.
\smallskip
\noindent{\bf Copa.} Copa aims to maintain a bounded number of packets in the bottleneck queue. Copa's control dynamics induces a periodic pattern of sending rate that nearly empties the queue once every 5 RTTs. This helps Copa flows obtain an accurate estimate of the minimum RTT and the queuing delay. In addition, Copa uses this pattern to detect the presence of non-Copa flows: Copa expects the queue to be nearly empty at least once every 5 RTTs if only Copa flows with similar RTTs share the bottleneck link.\footnote{Copa estimates if a queue is ``nearly empty'' using the observed short-term RTT variation (see $\S$2.2 of~\cite{copa}).} If the estimated queuing delay does not drop below a threshold in 5 RTTs, Copa switches to a TCP-competitive mode.
Unlike Copa, Nimbus\xspace does not look for the expected queue dynamics caused by its transmission pattern. Instead, it estimates the rate of the cross traffic using end-to-end rate and delay measurements, and it observes how the cross traffic reacts to the rate fluctuations it induces over a period of time. This enables Nimbus\xspace to directly estimate the elasticity of cross traffic. Although elasticity detection takes longer (e.g., a few seconds), our experiments show that it is significantly more robust than Copa's method. For example, we find that Copa misclassifies cross traffic when the inelastic volume is high, or when it is elastic but slowly increasing ($\S$\ref{s:copa-compare}). Moreover, since Nimbus\xspace's cross traffic estimation technique does not rely on low-level properties of the method's dynamics, it can be applied to any combination of delay-control and TCP-competitive control rules.
\smallskip
\noindent{\bf BBR.}
BBR maintains estimates of the bottleneck bandwidth ($b$) and minimum RTT ($d$). It paces traffic at a rate $b$ while capping the number of in-flight packets to $2\times b\times d$. BBR periodically increases its rate over $b$ for about one RTT and then reduces it for the following RTT. BBR uses this sending-rate pattern to obtain accurate estimates of $b$ using the maximum achieved delivery rate; specifically, it tests for higher $b$ in the rate-increase phase and subsequently drains the extra queuing this causes in the rate-decrease phase.
BBR does not compete fairly with other elastic TCP flows (e.g., Cubic). In particular, BBR's method to set $b$ can be overly aggressive in the presence of other TCP flows. Depending on the bottleneck buffer size, either the BBR-induced losses limit the throughput of the other TCP flows (with shallow buffers), or BBR's hard cap on its in-flight data based on $d$ causes it to get lower throughput than its fair share (with deep buffers).
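As a concrete illustration of the mechanics described above, the sketch below encodes BBRv1's ProbeBW pacing-gain cycle and the $2\times b\times d$ in-flight cap. This is our simplified sketch, not BBR's actual implementation; the gain schedule shown is the one used by BBRv1, and all parameter names are illustrative.

```python
# Simplified sketch of BBRv1's ProbeBW behavior (illustration only).
# One pacing-gain phase lasts roughly one RTT; gains cycle as below.
PROBE_BW_GAINS = [1.25, 0.75, 1, 1, 1, 1, 1, 1]

def pacing_rate(btlbw, phase):
    """Sending rate during ProbeBW phase `phase`, given the bottleneck
    bandwidth estimate `btlbw` (bytes/s)."""
    return PROBE_BW_GAINS[phase % 8] * btlbw

def inflight_cap(btlbw, rtprop):
    """Hard cap on in-flight data: 2 * b * d (bytes), with `rtprop`
    the minimum-RTT estimate in seconds."""
    return 2.0 * btlbw * rtprop
```

With a deep buffer, competing loss-based flows inflate the measured RTT while the in-flight budget stays pinned to the (stale) minimum-RTT estimate, so this cap binds and BBR's throughput falls below its fair share, as described above.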
\smallskip
\noindent{\bf Other related schemes.}
PCC~\cite{pcc} adapts the sending rate based on ``micro-experiments'' that evaluate how changing the sending rate (multiplicatively) impacts performance according to a specified utility function. PCC defines two utility functions: one targeting high throughput and low loss, and the other targeting low delay. Recently, PCC-Vivace~\cite{vivace} improved on PCC with a utility function framework and a rate control algorithm based on regret minimization. The behavior of these schemes depends on the utility function; unlike Nimbus\xspace, they do not achieve both low delay and compete well with elastic loss-based TCPs simultaneously. Vegas~\cite{vegas} and FAST~\cite{fasttcp} are delay-based algorithms that aim to maintain small queues at the bottleneck using different control rules. Other delay-based algorithms include TCP Nice~\cite{tcp-nice} and LEDBAT~\cite{rfc6817}, which aim to use spare bandwidth without hurting ``foreground'' transfers. Timely~\cite{timely} is designed for RDMA in datacenters. These schemes generally perform poorly when competing with loss-based elastic algorithms. Compound TCP~\cite{compound} maintains both a loss-based window and a delay-based window, and transmits data based on the sum of the two windows. Compound achieves high throughput as soon as the link utilization drops because of its delay window, but does not control self-inflicted delays because of its loss window.
\section{Bundle FCT Simulator Correctness}
\label{app:simulator-correct}
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{images/emp-sim-em-20-60-fgrnd.pdf}
\caption{{\bf Validation of bundle FCT simulator---}FCT from simulation and emulation match for the FIFO scheduling discipline.}
\label{fig:bundle:verify}
\end{figure}
We validate the fidelity of our simulator by comparing the FCT from the simulated FIFO \bundle to our emulation setup (which implements FIFO).
\Fig{bundle:verify} compares the FCT of flows of varying size ranges, normalized by the median FCT obtained from emulation for each flow size.
The box shows
the $25^{\text{th}}$, $50^{\text{th}}$, and $75^{\text{th}}$ percentiles,
while the whiskers show the $10^{\text{th}}$ and $90^{\text{th}}$ percentiles.
There is a close match between the simulated and emulated results in all the flow size ranges, providing confidence in the results of our simulation.
\section{Robustness of Switching}
\label{app:switch-robust}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{images/robust_bandwidth}
\caption{Bandwidth}
\label{fig::robustness:reno}
\end{subfigure}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{images/robust_buffer_rtt}
\caption{Buffer Size and RTT}
\label{fig:robustness:buffer_rtt}
\end{subfigure}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{images/pulse_size}
\caption{Pulse Size}
\label{fig:robustness:pulse_size}
\end{subfigure}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\columnwidth]{images/aqm.pdf}
\caption{Switching with shallow buffers and AQM.}
\label{fig:aqm-robustness}
\end{subfigure}
\caption{\small {\bf Robustness of switching---}across different values of bandwidth, buffer size and RTTs, pulse size, and active queue management schemes.}
\label{fig:robustness}
\end{figure}
\Fig{robustness} shows the fraction of time spent in the incorrect mode, or the mis-classification rate, for various network conditions.
\Fig{aqm-robustness} shows the same metric when using small buffers and a number of active queue management schemes.
Nimbus\xspace works well for the Elastic and Inelastic classes but makes some classification errors for Elastic+Inelastic.
Switching works better when the pulse size is larger than the standard deviation of the background traffic.
When the pulse size is small, this condition may not hold, and Nimbus\xspace can make classification errors.
On low-bandwidth links, the standard deviation of the background traffic becomes comparable to the pulse size, leading to mis-classification.
\section{Introduction}
Gravitational waves (GWs) emitted from the coalescence of binary black holes (BBHs) have been observed \citep{2016PhRvL.116f1102A,
2016PhRvL.116x1103A, 2016PhRvX...6d1015A, 2017PhRvL.118v1101A, 2017PhRvL.119n1101A},
but how and where these BBHs formed are still open questions.
Several formation channels and environments have been proposed, including dense stellar clusters \citep{2000ApJ...528L..17P, 2010MNRAS.402..371B,
2013MNRAS.435.1358T, 2014MNRAS.440.2714B, 2015PhRvL.115e1101R, 2016PhRvD..93h4029R, 2016ApJ...824L...8R,
2017MNRAS.464L..36A, 2017MNRAS.469.4665P}, isolated field binaries \citep{2012ApJ...759...52D, 2013ApJ...779...72D, 2015ApJ...806..263D, 2016ApJ...819..108B, 2016Natur.534..512B},
captures between primordial black holes \citep{2016PhRvL.116t1301B,
2016PhRvD..94h4013C, 2016PhRvL.117f1101S, 2016PhRvD..94h3504C},
active galactic nuclei discs \citep{2017ApJ...835..165B, 2017MNRAS.464..946S, 2017arXiv170207818M},
and galactic nuclei \citep{2009MNRAS.395.2127O, 2015MNRAS.448..754H, 2016ApJ...828...77V, 2016ApJ...831..187A, 2017arXiv170609896H},
but observationally distinguishing these possible pathways from each other is
difficult. Recent studies have pointed out that the BH spin orientations are likely to differ between channels \citep{2016ApJ...832L...2R};
however, several of the BBH mergers observed so far show surprisingly little, or anti-aligned, spin.
Another possibility is to consider BBH eccentricities, which have been shown to be non-negligible for a relatively large fraction of BBH mergers forming in both classical
globular cluster (GC) systems \citep{2014ApJ...784...71S, 2017ApJ...840L..14S, 2017arXiv171107452S, 2017arXiv171204937R, 2018ApJ...853..140S, 2017arXiv171206186S},
as well as in galactic nuclei \citep[\textit{e.g.},][]{2009MNRAS.395.2127O, 2012PhRvD..85l3005K, 2017arXiv171109989G}.
For example, \cite{2017arXiv171107452S} recently showed, using simple analytical arguments,
that $\sim 5\%$ of all BBH mergers forming in GCs will have an eccentricity $>0.1$ at $10$ Hz, compared to $\sim 0\%$ for field mergers.
This surprisingly high fraction of eccentric mergers originates from GW capture mergers that form during resonating three-body
interactions \citep[\textit{e.g.},][]{2006ApJ...640..156G, 2014ApJ...784...71S}; a population that can only be probed when General Relativistic (GR) effects
are included in the $N$-body equation-of-motion (EOM). This strongly motivates the current development of
eccentric GW templates \citep[\textit{e.g.},][]{2017PhRvD..95b4038H, 2017arXiv170510781G, 2018PhRvD..97b4031H}.
Multi-band GW observations provide other interesting possibilities for constraining
the formation of merging BBH systems \citep{2017arXiv171011187M, 2017arXiv170200786A, 2017ogw..book...43C, 2017ApJ...842L...2C}. For example, as pointed
out by \cite{2016PhRvL.116w1102S, Seto:2016}, the first BBH merger observed (GW150914) could have been seen by a GW instrument similar to the proposed
`Laser Interferometer Space Antenna' (LISA) a few years before entering the band of the
`Laser Interferometer Gravitational-Wave Observatory' (LIGO). A LISA type mission would therefore make it possible to `prepare' for LIGO events,
opening the prospect for detailed studies of precursors to GW sources \citep{2016PhRvL.116w1102S, 2017ApJ...842L...2C}. Other possibilities include measuring the
BBH eccentricity distribution in the LISA band, which has been shown to differ between BBH progenitor channels \citep[\textit{e.g.},][]{2016PhRvD..94f4020N, 2017MNRAS.465.4375N}.
However, despite these encouraging possibilities, we note that no detailed work has been performed so far on
how BBH mergers dynamically formed in stellar clusters distribute when post-Newtonian (PN) terms \citep[\textit{e.g.},][]{2014LRR....17....2B} in the $N$-body EOM are taken into account.
That is, the newly resolved populations of GW mergers forming during resonating few-body interactions \citep{2017arXiv171206186S, 2017arXiv171204937R} and
the possibility for second generation BBH mergers \citep{2017arXiv171204937R}, have not yet been properly discussed in relation to multi-band GW astrophysics.
We take the first step in this paper.
In this paper we expand upon the work by \cite{2017ApJ...842L...2C}, who studied how BBH mergers are likely to distribute
as a function of their GW frequency and eccentricity, depending on their formation mechanism. They correctly concluded that
the majority of BBH mergers that form in resonating binary-single interactions are likely to elude the LISA band. However,
no detailed calculations or simulations were performed; only order-of-magnitude estimates were made, based on the work by \cite{2014ApJ...784...71S}.
To improve on their analysis, we present here a study on how BBH mergers assembled in GCs distribute as a function of their GW frequency at formation
when PN terms are included in the $N$-body EOM, using both detailed numerical and analytical methods, together with GC Monte-Carlo (MC) simulations
performed by the \texttt{MOCCA} (MOnte Carlo Cluster simulAtor) code \citep{Giersz2013}.
As described above, and initially pointed out by \cite{2014ApJ...784...71S}, the largest effect of including PN terms is the formation of BBH
mergers that form during resonating binary-single interactions; a population we refer to in this paper as {\it binary-single GW mergers}.
In agreement with \cite{2017ApJ...842L...2C}, we indeed find that the majority of these binary-single BBH mergers form with a GW frequency
that is above the frequency range of LISA, but within or just below the range of LIGO. By considering the full distribution of GC BBH mergers,
we are further able to estimate that, with the PN terms, about $5-10\%$ of all BBHs from GCs will not drift through the LISA band before entering the LIGO band (excluding the fraction
that naturally eludes the LISA band because of low signal-to-noise (S/N) caused by their high eccentricity \citep[\textit{e.g.},][]{2017ApJ...842L...2C}), compared to $\approx 0\%$ in the Newtonian case.
As discussed in \cite{2017ApJ...842L...2C}, a population that only appears in the LIGO band is not expected in the standard isolated field binary scenario, suggesting that the
fraction of all BBH mergers forming in clusters can be estimated by simply measuring the fraction that appears only in the LIGO band.
Our results are described in the following sections.
\section{BBH Mergers in LISA/LIGO}
In this section we study how BBH mergers assembled through three-body interactions in GCs distribute as a function of their
GW frequency and chirp mass at their time of formation, when GW emission at the 2.5 PN level is included in the EOM. Using both
numerical (Section \ref{sec:Post-Newtonian $N$-body Scatterings}) and analytical methods (Section \ref{sec:Analytical Estimate}), we find that
$5-10\%$ of all the BBH mergers observable by LIGO will never appear in the LISA band.
This result complements the recent study by \cite{2017ApJ...842L...2C}, and
naturally opens up a wealth of new possibilities for constraining how and where
BBH mergers form using multi-band GW detections.
We note that to estimate the actual observable fraction of BBH mergers that will elude the LISA band, but appear in the LIGO band,
one needs to fold in detailed models for both the design of the instruments \citep[\textit{e.g.},][]{2016PhRvL.116w1102S}, as well as the
mass, distance and orbital distributions of the merging BBHs; quantities that unfortunately are poorly constrained at the moment.
To keep our results clear, we therefore only discuss the part of the distribution that is directly shaped by the dynamics, and the inclusion
of PN terms.
\subsection{Post-Newtonian $N$-body Scatterings}\label{sec:Post-Newtonian $N$-body Scatterings}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{MOCCATEST_case1A_LISALIGO}
\caption{Distribution of BBH mergers, as a function of their GW frequency at the time of formation $f_{fm}$ (x-axis) and their source frame chirp mass $\mathscr{M}_{c}$ (y-axis),
dynamically assembled through binary-single interactions simulated with an $N$-body code that includes GW emission at the 2.5PN
level \citep{2017arXiv171206186S}. As described in Section \ref{sec:Post-Newtonian $N$-body Scatterings}, the initial conditions for the interactions were
extracted from a large set of GC models evolved by the MC cluster code \texttt{MOCCA}, as part of the `MOCCA-Survey Database I' project.
The {\it black points} show the distribution of BBHs that merge outside their host cluster after being dynamically ejected (ejected GW mergers),
while the {\it red points} show the BBHs that merge during a binary-single interaction inside the cluster (binary-single GW mergers).
As seen, the majority of the binary-single GW mergers form with a GW frequency that is above the LISA
band, but within or just below the LIGO band, implying that they will enter the LIGO band without having drifted through the LISA band first (see also \citealt{2017ApJ...842L...2C}).
This is in contrast to the classical ejected mergers, that all drift through both the LISA and the LIGO band. As pointed out by \cite{2017ApJ...842L...2C},
measuring the fraction of sources observed in LISA and LIGO can be used to constrain the number of BBH mergers dynamically assembled in clusters.
We note that PN terms are crucial for such a test.}
\label{fig:BBHdist}
\end{figure}
To numerically study the GW frequency distribution of BBHs and the effect from PN terms in the EOM, we used data from
about $2000$ star cluster models evolved by the \texttt{MOCCA} code \citep[][and references therein]{Giersz2013} as part of the
`MOCCA-Survey Database I' project \citep{2017MNRAS.464L..36A}.
From this set of models, we extracted all the strong binary-single interactions between three BHs each with a mass $<100 M_{\odot}$.
These binary-single interactions were originally evolved in \texttt{MOCCA} simulations using the Newtonian code \texttt{fewbody} \citep{Fregeau2004}.
To study the effect from PN corrections, we therefore re-simulated all of the interactions with our own few-body code that includes the 2.5 PN term
accounting for GW emission \citep[\textit{e.g.},][]{2014LRR....17....2B}. To achieve better statistics, we simulated each interaction $5$ times, which led to a total of $\sim 2.5\times10^{6}$
binary-single simulations. Due to computational limitations, we limited each interaction to a maximum of $2500$ orbital times of the initial target BBH, which led to a $\approx 98\%$ completion fraction.
The results presented below are based on this completed set of PN binary-single interactions. We refer the reader to \cite{2017arXiv171206186S} for a
more detailed explanation of this re-simulation procedure.
The distribution of GW frequencies at formation, $f_{fm}$, and source frame chirp masses, $\mathscr{M}_{c}$, for the binary-single assembled BBH mergers
derived from the \texttt{MOCCA} dataset, as described above, is shown in Figure \ref{fig:BBHdist}. Resolving the distribution evaluated at the present time
would require orders of magnitude more simulations than we could perform. The presented distribution therefore includes
all BBHs that merge after $1$ Gyr and before a Hubble time (by considering the time-resolved BBH merger history
derived in \citet{2017arXiv171206186S}, we do expect the distribution shown in Figure \ref{fig:BBHdist} to be similar to the present-day distribution).
To derive the GW frequency at formation of a given BBH, $f_{fm}$, we used the approximation presented in \cite{Wen:2003bu}, where we took the time of formation to be
the moment the BBH in question can be treated as an `isolated' binary free from significant perturbations by the unbound (ejected BBH merger) or bound (binary-single GW merger) single BH.
To quantify if a BBH can be treated as `isolated', we used a tidal threshold of $0.1$ as described in \cite{2014ApJ...784...71S}.
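The formation frequency $f_{fm}$ used above can be evaluated with the peak-harmonic fitting formula of \cite{Wen:2003bu}; the sketch below is our own minimal implementation, and the constants and any example values are illustrative assumptions rather than the paper's pipeline.

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
MSUN = 1.989e30    # solar mass [kg]

def f_peak_gw(m1, m2, a, e):
    """Peak GW frequency [Hz] of a binary with component masses m1, m2
    [kg], semi-major axis a [m], and eccentricity e, using the fitting
    formula of Wen (2003). For e = 0 this reduces to twice the orbital
    frequency, i.e., the dominant n = 2 harmonic."""
    mtot = m1 + m2
    return math.sqrt(G * mtot) * (1.0 + e)**1.1954 \
        / (math.pi * (a * (1.0 - e * e))**1.5)
```

At fixed semi-major axis, the peak frequency grows steeply with eccentricity, which is why the highly eccentric binary-single captures form at high $f_{fm}$.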
As described in the caption of Figure \ref{fig:BBHdist}, the {\it black points} denote the distribution of BBHs that
merge after being dynamically ejected through a binary-single interaction from their GC (ejected BBH mergers), while the {\it red points}
show the BBHs that merge during a binary-single interaction through the emission of GWs (binary-single GW mergers). The red points therefore only appear when
PN terms are included in the EOM.
From considering the distribution shown in Figure \ref{fig:BBHdist}, one concludes that the majority of the BBHs that merge after being dynamically ejected from their GC (black points)
form with $f_{fm} \lesssim10^{-2}$ Hz, implying they will pass through the LISA band before entering the LIGO band. This is in contrast to the
binary-single GW mergers (red points), which in the majority of cases form with $f_{fm} \gtrsim 10^{-1}$ Hz, and therefore will
enter the LIGO band without having appeared in the LISA band first. By simply counting, we find
that only $\sim 0.1\%$ of the classically ejected BBH mergers will enter the LIGO band without having appeared in the LISA band first,
whereas if one includes the binary-single GW mergers the fraction is instead $5-10\%$. As also noted by \cite{2017ApJ...842L...2C}, the binary-single GW mergers form directly
in the most sensitive region of a detector similar to the proposed DECIGO \citep[\textit{e.g.},][]{2011CQGra..28i4011K, 2018arXiv180206977I}, a result that
undoubtedly will be an interesting science case in addition to those discussed in \cite{2017arXiv171011187M}.
\subsection{Analytical Estimate}\label{sec:Analytical Estimate}
The population of BBH mergers that will appear in the LIGO band without having drifted through the LISA band first,
is greatly dominated by binary-single GW mergers (see Figure \ref{fig:BBHdist}). This trend originates from the fact that
the BBH mergers that form during binary-single interactions must inspiral and merge on a timescale that is comparable to the orbital timescale of the initial target BBH.
This is only possible if the BBH pericenter distance is small \citep[\textit{e.g.},][]{2014ApJ...784...71S}, implying that the corresponding GW frequency will be relatively high.
This is in contrast to the ejected BBH mergers, which form with no restriction on their merger time. Because the binary-single GW mergers dominate
the population that enters LIGO and not LISA, we may estimate their fraction analytically, as we will demonstrate below.
For this, we follow the work by \cite{2017arXiv171107452S}, in which the probability for eccentric GW mergers forming in binary-single interactions was
derived for a dense stellar system.
We start by considering two BHs each with mass $m$ in a binary with initial semi-major axis (SMA) $a_{\rm in}$ and eccentricity $e$, that form inside a
dense stellar cluster characterized by an escape velocity $v_{\rm esc}$. As described in \cite{2017arXiv171107452S}, this newly formed BBH will undergo continuous BH
binary-single hardening interactions in the cluster core if $a_{\rm in}$ is less than the local hard binary value \citep[\textit{e.g.},][]{Hut:1983js}.
Each of these interactions leads to an average decrease in the BBH SMA from $a$ to $\delta a$, where the average value of
$\delta$ is $7/9$, assuming the binary energy distribution derived in \cite{Heggie:1975uy}. This hardening process will continue until the interacting
BBH receives a recoil velocity through one of its interactions that is $>v_{\rm esc}$, which is possible if its SMA $a < a_{\rm ej}$, where \citep{2017arXiv171107452S}
\begin{equation}
a_{\rm ej} \approx \frac{1}{6} \left(\frac{1}{\delta} - 1\right) \frac{Gm}{v_{\rm esc}^2},
\end{equation}
after which the BBH is kicked out of the cluster and subsequently merges in isolation through the emission
of GWs. This is the dynamical channel for the classical ejected BBH mergers. However, as described above, when PN terms are included in the $N$-body EOM, GW mergers can also form
during the hardening binary-single interactions before ejection is possible \citep{2017arXiv171107452S}.
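For concreteness, the ejection threshold $a_{\rm ej}$ can be evaluated directly. The sketch below uses illustrative GC-like values of $m$ and $v_{\rm esc}$ that are our own assumptions, not values taken from the paper.

```python
G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
MSUN = 1.989e30    # solar mass [kg]
AU = 1.496e11      # astronomical unit [m]

def a_ejection(m, v_esc, delta=7.0/9.0):
    """SMA below which a binary-single recoil can eject an equal-mass
    BBH (component mass m [kg]) from a cluster with escape velocity
    v_esc [m/s]: a_ej = (1/6)(1/delta - 1) G m / v_esc**2."""
    return (1.0 / 6.0) * (1.0 / delta - 1.0) * G * m / v_esc**2

# Illustrative values: m = 20 Msun, v_esc = 50 km/s gives a few tenths of an AU.
a_ej_example = a_ejection(20 * MSUN, 50e3)
```

Note that $a_{\rm ej}$ scales linearly with the BH mass and inversely with $v_{\rm esc}^{2}$, so denser clusters eject only much tighter binaries.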
Considering both of these merger types, one can now argue that the fraction of BBH mergers that only show up in the LIGO band is approximately given
by $P_{\rm bs}/P_{\rm ej}$, where $P_{\rm ej}$ is the probability that an ejected BBH
will merge in isolation within a Hubble time, and $P_{\rm bs}$ is the probability that the BBH instead undergoes a binary-single GW merger during hardening before
ejection is possible \citep{2017arXiv171107452S}. In the following we calculate these two probabilities by integrating over the dynamical hardening history of a typical BBH, assuming the hardening
is dominated by equal mass BH binary-single interactions. We further assume that $P_{\rm bs} \ll 1$, which allows us to treat $P_{\rm ej}$ and $P_{\rm bs}$ as uncorrelated variables.
This is an excellent approximation for classical GC systems, but might break down for dense nuclear star clusters \citep[\textit{e.g.},][]{2016ApJ...831..187A}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Pcap_over_PHM}
\caption{The {\it black solid contours} show our analytical estimate of the fraction $P_{\rm bs}/P_{\rm ej}$, where $P_{\rm bs}$ is the probability that a BBH
undergoes a binary-single GW merger during hardening until ejection is possible, and $P_{\rm ej}$ is the probability that, if the BBH is ejected, it will merge
within a Hubble time. This fraction is shown as a function of the host cluster escape velocity $v_{\rm esc}$ (x-axis) and BH mass $m$ (y-axis). As argued
in Section \ref{sec:Analytical Estimate}, the fraction approximately equals the fraction of BBH mergers that will show up in the LIGO band without having drifted through the LISA band.
The {\it blue dotted contours} show the probability that the BBH in question merges in between its binary-single interactions before ejection is possible,
referred to as $P_{\rm IM}$, where IM is short for `Isolated Merger', as further described in \citet{2017arXiv171107452S}. For our analytical estimate
we have assumed that $P_{\rm IM} \ll 1$ and $P_{\rm bs} \ll 1$. The {\it dark shaded region} shows where our analytical approach is believed to break down (assuming a constant
density of single BHs of $n_{\rm s} = 10^{6}$ pc$^{-3}$). As seen, the fraction of BBH mergers that never will appear in the LISA band is $\sim 5-10\%$ for classical GC systems, which is
in good agreement with our numerical results presented in Section \ref{sec:Post-Newtonian $N$-body Scatterings}.
Our analytical approach is described in Section \ref{sec:Analytical Estimate}, which is based on the recent work by \cite{2017arXiv171107452S}.
}
\label{fig:Pfrac}
\end{figure}
The probability that the interaction between a BBH with initial SMA $a$ and a single BH results in a binary-single GW merger during the interaction is, to leading order \citep{2017arXiv171107452S},
\begin{equation}
P_{\rm bs}(a) \approx \frac{2r_{\rm cap}}{a} \times N_{\rm IMS},
\end{equation}
where $N_{\rm IMS}$ is the average number of temporary BBHs formed during the binary-single interaction ($N_{\rm IMS} \approx 20$),
referred to as intermediate state (IMS) binaries \citep{2014ApJ...784...71S}, and $r_{\rm cap}$ is the `characteristic' three-body GW capture pericenter distance. That is, the (maximum) pericenter
distance two BHs temporarily bound in a resonating three-body state generally need to have for them to undergo a successful GW inspiral merger,
without being disrupted by the bound single BH. The distance $r_{\rm cap}$ changes in principle between each IMS
BBH \citep{2014ApJ...784...71S, 2018ApJ...853..140S}, however, its characteristic value is approximately equal to the
pericenter distance at which the loss of orbital energy through GW emission over one passage is about the total energy of the initial three-body system \citep{2017ApJ...846...36S}. From
comparing the initial three-body energy, which is dominated by the orbital energy of the initial target binary in the hard-binary limit \citep[\textit{e.g.},][]{Hut:1983js}, and the GW energy loss integrated over
one pericenter passage \citep[\textit{e.g.},][]{Hansen:1972il}, it now follows that \citep{2017arXiv171107452S},
\begin{equation}
r_{\rm cap} \approx {\mathscr{R}_{\rm m}} \times \left({a}/{{\mathscr{R}_{\rm m}}}\right)^{2/7},
\end{equation}
where ${\mathscr{R}_{\rm m}}$ denotes the Schwarzschild radius of a BH with mass $m$.
By now integrating the probability $P_{\rm bs}(a)$ over the series of binary-single interactions that hardens the BBH from $a_{\rm in}$ to $a_{\rm ej}$,
one finds that the total probability that the BBH undergoes a binary-single GW merger before the possibility for ejection, here denoted by $P_{\rm bs}(a_{\rm in}, a_{\rm ej})$, is given by,
\begin{equation}
P_{\rm bs}(a_{\rm in}, a_{\rm ej}) \approx \int_{a_{\rm ej}}^{a_{\rm in}} \frac{P_{\rm bs}(a)}{a(1-\delta)} da \approx \frac{7}{5}\frac{P_{\rm bs}(a_{\rm ej})}{1-\delta},
\end{equation}
where for the last term we have assumed that $a_{\rm in} \gg a_{\rm ej}$. Note here that we have used that the
differential change in SMA, $da$, per binary-single interaction, $dN_{\rm bs}$, is given by $da = -a(1-\delta)dN_{\rm bs}$.
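The $7/5$ prefactor can be checked numerically: the sketch below integrates $P_{\rm bs}(a)/(a(1-\delta))$ on a logarithmic grid and compares the result with the closed form. The chosen values of $\mathscr{R}_{\rm m}$, $a_{\rm ej}$, and $a_{\rm in}$ are arbitrary illustrations of ours.

```python
import math

def p_bs(a, r_schw, n_ims=20):
    """Per-interaction GW capture probability: 2 r_cap / a * N_IMS,
    with r_cap = R_m (a / R_m)**(2/7), i.e. 2 N_IMS (R_m / a)**(5/7)."""
    return 2.0 * n_ims * (r_schw / a)**(5.0 / 7.0)

def p_bs_total_numeric(a_in, a_ej, r_schw, delta=7.0/9.0, n=50000):
    """Midpoint-rule integral of P_bs(a) / (a (1 - delta)) da over
    [a_ej, a_in], on a logarithmic grid (u = ln a, so da = a du)."""
    lo, hi = math.log(a_ej), math.log(a_in)
    du = (hi - lo) / n
    total = 0.0
    for i in range(n):
        a = math.exp(lo + (i + 0.5) * du)
        total += p_bs(a, r_schw) / (1.0 - delta) * du  # the 1/a cancels da = a du
    return total

def p_bs_total_closed(a_ej, r_schw, delta=7.0/9.0):
    """Closed-form result, valid in the limit a_in >> a_ej."""
    return (7.0 / 5.0) * p_bs(a_ej, r_schw) / (1.0 - delta)
```

For $a_{\rm in}/a_{\rm ej} = 10^{3}$ the two agree to better than $1\%$, the residual being the neglected $(a_{\rm ej}/a_{\rm in})^{5/7}$ term.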
The probability that the BBH merges within a Hubble time, $t_{\rm H}$, after being ejected from the cluster, ${P}_{\rm ej}(a_{\rm ej})$,
can be found by combining the GW inspiral life time derived in \cite{Peters:1964bc}, and that the eccentricities of the ejected BBHs tend to
follow a so-called thermal distribution $P(e) = 2e$ \citep{Heggie:1975uy}, as further
described in \citet{2017arXiv171107452S, 2018ApJ...853..140S}. From this it follows that,
\begin{equation}
{P}_{\rm ej}(a_{\rm ej}) \approx
\begin{cases}
(t_{\rm H}/t_{\rm life}(a_{\rm ej}))^{2/7}, & \ t_{\rm life}(a_{\rm ej}) > t_{\rm H} \\[2ex]
1, & \ t_{\rm life}(a_{\rm ej}) \leq t_{\rm H},
\end{cases}
\end{equation}
where $t_{\rm life}(a_{\rm ej})$ denotes the GW inspiral `circular life time' of the BBH given its SMA $ = a_{\rm ej}$, and its eccentricity $= 0$.
Figure \ref{fig:Pfrac} shows our analytically derived ratio $P_{\rm bs}/P_{\rm ej}$, as a function of $v_{\rm esc}$ and $m$, which to leading order
equates to the expected fraction of BBH mergers that enter the LIGO band without drifting through the LISA band first. The dark shaded region shows where
our analytical estimate is expected to break down \citep[\textit{e.g.},][]{2017arXiv171107452S}. From the figure, we see that for classical GC systems $P_{\rm bs}/P_{\rm ej} \sim 5-10\%$,
which agrees very well with our full PN scattering results presented in Section \ref{sec:Post-Newtonian $N$-body Scatterings}, although we do note
that our analytical solution is only expected to be valid to within a factor of $\sim 2$, due to the assumptions made to make the problem analytically tractable.
However, the apparent excellent agreement is encouraging, and future work will extend our formalism to the nuclear star cluster regime.
One also notices that the ratio $P_{\rm bs}/P_{\rm ej}$ depends notably on the BH mass $m$: a greater mass leads to a higher fraction
of mergers that form above the LISA band. Once a large sample of BBH mergers has been observed in multiple GW bands,
this will undoubtedly serve as a useful piece of information when, e.g., extracting the initial mass function of BHs forming in clusters \citep[\textit{e.g.},][]{Kruijssen:2009, Webb+2017}.
We now discuss our findings and conclude.
\section{Conclusions}\label{sec:Conclusions}
We have studied the GW frequency distribution of BBH mergers assembled through
binary-single interactions in GCs, when GW emission at the 2.5 PN level is
included in the EOM. From performing $\sim 2.5\times10^{6}$ PN binary-single
interactions based on GC data extracted from the `MOCCA-Survey Database I'
project \citep{2017MNRAS.464L..36A}, and by the use of the analytical model
presented in \cite{2017arXiv171107452S}, we have illustrated that $5-10\%$ of
the mergers will never drift through the LISA band before entering the LIGO
band. This fraction in the purely Newtonian case is instead $\approx 0\%$,
which clearly illustrates the need for PN cluster simulations; a field that is
just beginning \citep[\textit{e.g.},][]{2017arXiv171204937R, 2017arXiv171206186S}. As
likewise pointed out by \cite{2017ApJ...842L...2C}, the BBH mergers we find
that elude the LISA band when PN terms are taken into account originate from
BBHs that merge during binary-single interactions through the emission of GWs
\citep{2014ApJ...784...71S}. We now broadly discuss aspects and implications
of our study.
Firstly, the detection of a BBH population that merges in the LIGO band, but
does not pass through the LISA band, would provide indirect evidence for a
dynamical origin of at least a subset of BBH mergers
\citep[\textit{e.g.},][]{2017ApJ...842L...2C}, including the binary-single BBH mergers
resolved in this paper. Not just the existence, but also the fraction, of BBHs
formed outside the LISA band, first computed here for typical GCs, will
provide stronger evidence for their origin. Because this work predicts the
fraction only of BBH mergers assembled in GCs that will not drift through the
LISA band, comparison of this prediction with a measurement of the observed
fraction with future LISA and LIGO observations will allow a determination of
the relative number of BBHs formed through other channels, including isolated
field binary mergers. Dense stellar systems, such as nuclear star clusters,
with and without super massive BHs, would also leave unique imprints across GW
frequency and BBH eccentricity, but no PN work has been done on such systems
yet, and will therefore be the topic of future studies.
A deci-Hz GW detector operating in the frequency range $\sim 0.1-10$~Hz, such as
DECIGO \citep{2011CQGra..28i4011K, 2018arXiv180206977I} or Tian Qin
\citep{TianQin}, would be able to detect GWs in the sensitivity gap between
LISA and LIGO, and thereby probe the formation of not only the dynamically
assembled BBH GW sources studied in this work, but other dynamical channels as
well \citep{2017ApJ...842L...2C}. However, as clearly shown in our work,
precise predictions for their distribution require a PN treatment.
The orbital parameter distribution of BBHs at formation may also have
implications for the relative GW background in the LISA and LIGO detectors
\citep[\textit{e.g.},][]{ThraneRomano:2013, GW150914_GWB:2016, 2016PhRvL.116w1102S,
Cholis:2017}. Any formation scenario that forms BBHs in the $0.1-10$~Hz band
(or with very high eccentricities at lower frequencies \citep{2017ApJ...842L...2C})
will decrease the level of the high-frequency LISA background that would
otherwise be predicted from assuming that all LIGO mergers pass through the
LISA band \citep[see, \textit{e.g.},][]{2016PhRvL.116w1102S}. We further note that in
addition to the highly eccentric and dynamically assembled systems, there is
another possible BBH formation channel that LISA would miss. For example, the
models of \cite{Loeb:2016, 2017arXiv170604211D} posit the formation of BBHs at
a close separation within the core of a collapsing, hypermassive, rapidly
rotating star. This mechanism may be differentiated from other LISA eluding
scenarios by a preference for high mass, equal mass-ratio binaries and the
prospect of an EM counterpart.
Finally, recent work by \cite{2017ApJ...842L...2C} has shown that highly eccentric BBHs will
also elude detection by LISA. This is because GW emission at the second
harmonic of the orbital period is spread out to higher frequencies, highly
suppressing GW emission until the binary circularizes at closer separations,
in the LIGO-LISA gap. The disentanglement of a highly eccentric vs.\ a
GW-capture population, such as the one resolved in this work, must be carried out in
future work that tracks the binary eccentricity distribution of the different formation scenarios.
Because a DECIGO-like instrument could directly probe the
region where circularization and GW-capture formation occur, future work
should contrast GW signatures of the two in the $0.1-10$~Hz band.
We do note that if the eccentricity distribution of the classical
ejected BBH merger population is assumed thermal \citep{Heggie:1975uy}, one would be able to
`easily' predict the ratio between BBHs that elude the LISA band due to their eccentricity
and BBHs that simply form at higher frequencies. This suggests that to leading order
source separation is possible. We will follow up on this in future work.
\acknowledgments
JS acknowledges support from the Lyman Spitzer Fellowship.
JS thanks the Niels Bohr Institute, the Kavli Foundation and the DNRF for
supporting the 2017 Kavli Summer Program, and the Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences.
JS and AA also thank the Flatiron Institute's Center for Computational Astrophysics for their generous support during the CCA Numerical Scattering Workshop.
DJD acknowledges financial
support from NASA through Einstein Postdoctoral Fellowship award
number PF6-170151.
AA and MG were partially supported by the Polish National Science Center (NCN), Poland, through the grant UMO-2016/23/B/ST9/02732. AA is also supported by NCN through the grant UMO-2015/17/N/ST9/02573.
\bibliographystyle{h-physrev}
\section{Introduction}
The discovery of superconductivity (SC) at 26\,K in LaFeAsO$_{1-x}$F$_x$ \cite{YKamihara2008jacs} has led to the discovery of many new superconductors containing the same basic FeX (X = As, Se) units that are responsible for the high temperature SC \cite{YLSun2012jacs,AIyo2016jacs,ZCWang2016jacs,ZCWang2017jpcm,GRStewart2011rmp}. Among many other topics, tremendous progress has been made in understanding the phenomenon of SC and its interplay with magnetism \cite{AJDrew2009nm,PDai2012np,MDLumsden2010jpcm,DJScalapino2012rmp}. One of the most promising scenarios for the observed SC is the spin-fluctuation-mediated pairing mechanism, which is the common thread linking a broad class of superconductors including not only the Fe and Cu based materials but also the heavy fermion superconductors \cite{DJScalapino2012rmp}. However, the debate on this issue has never stopped, and further investigation of other new families of superconductors is highly encouraged.
As for the Fe based superconductors, SC can be induced either by chemical doping or by external pressure \cite{GRStewart2011rmp}, and in both situations SC was found to be closely related to the magnetic properties \cite{PDai2012np,MDLumsden2010jpcm,DJScalapino2012rmp}. Interestingly, a third route towards SC, namely the self-doping effect, was realized in Sr$_2$VFeAsO$_3$\ \cite{GHCao2010prb}, Ba$_2$Ti$_{2}$Fe$_{2}$As$_{4}$O\ and RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$\ \cite{YLSun2012jacs,ZCWang2017jpcm,JZMa2014prl}. Rich magnetism has been found to coexist with SC in freshly prepared Sr$_2$VFeAsO$_3$\ \cite{XMMa2014epl}. However, the magnetism disappeared after long-term storage of the sample, indicating that it is meta-stable in nature and might be induced by defects or residual stress \cite{XMMa2014epl}. For the Ba$_2$Ti$_{2}$Fe$_{2}$As$_{4}$O\ superconductor, neutron scattering measurements have been performed and no noticeable magnetism has been found. Additionally, nonmagnetic first-principles calculations reproduced the measured phonon spectra well, indicating again the nonmagnetic character of superconducting Ba$_2$Ti$_{2}$Fe$_{2}$As$_{4}$O\ \cite{MZbiri2017prb}.
On the other hand, the Gd$^{3+}$ moments in RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$\ were suggested to order at low temperatures within the superconducting ground state \cite{ZCWang2017jpcm}. Although the Gd$^{3+}$ moments order in layers separate from the superconducting FeAs layers, a local-probe study of whether any magnetic fluctuations are transferred from the Gd$^{3+}$ moments to the Fe site is very interesting. In addition, investigation of the local electronic structure around the Fe nucleus should help provide a better understanding of the self-doping-induced SC.
Fruitful results regarding the local electronic structure and magnetic properties of the Fe based superconductors have already been obtained by M\"ossbauer spectroscopy in the past \cite{SIShylin2015epl, IPresniakov2013jpcm,SLBudko2016prb,ABlachowski2011prb,AOlariu2012njp,TNakamura2012jpsj,MAMcguire2009njp,ZLi2011prb}.
Therefore, in this work, we performed detailed $^{57}$Fe M\"ossbauer spectroscopy measurements on RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$\ in the temperature range from 5.9\,K to 300\,K. Our results reveal an electronic structure of the Fe site intermediate between those in RbFe$_{2}$As$_{2}$\ and GdFeAsO, suggesting that the self hole-doping effect really extends to the Fe site. More interestingly, we found evidence of magnetic fluctuations at the Fe site, not only transferred from the Gd$^{3+}$ moments but also originating from the Fe moments themselves. These results provide a new system in which to further study the interplay of magnetism with SC.
\section{Experiments}
Polycrystalline RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$\ was synthesized by a solid-state reaction method. The detailed preparation procedure and its physical properties have been reported earlier \cite{ZCWang2017jpcm}. Two-phase Rietveld analysis of the X-ray diffraction pattern showed that the mass fraction of the main phase RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$\ amounts to $94\%$ and that the only detectable impurity phase is unreacted Gd$_2$O$_3$ \cite{ZCWang2017jpcm}. As shown later, this impurity phase is transparent to our $^{57}$Fe M\"ossbauer measurements since it does not contain Fe, and it thus has no influence on the data analyses. A RbFe$_{2}$As$_{2}$\ single crystal was grown by the self-flux method and measured at room temperature for comparison. Detailed information with further physical characterization will be reported separately.
Transmission M\"ossbauer spectra (MS) at temperatures between 5.9\,K and 300\,K were recorded using a conventional spectrometer working in constant acceleration mode with a $\gamma$-ray source of 25\,mCi $^{57}$Co(Rh) vibrating at room temperature. The drive velocity was calibrated using sodium nitroprusside (SNP) powder and the isomer shifts (IS) quoted in this work are relative to that of the $\alpha$-Fe foil at room temperature. The full linewidth (LW) at half maximum of the SNP spectrum was 0.244(2)\,mm/s and this value can be regarded as the resolution of the spectrometer. The absorber was prepared with a surface density of $\sim$10\,mg/cm$^2$ of natural iron. All the MS were analyzed with MossWinn 4.0 \cite{mosswinn} programe.
\section{Results and discussion}
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth,clip=true]{Fig_Chi}
\caption{
(color online)
Temperature dependence of the $4\pi\chi$ data of RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$\ measured in the FC mode with 10\,Oe magnetic field, revealing the ordering temperature of the Gd$^{3+}$ moments, T$_N\sim$3\,K. Inset shows both the FC and ZFC data, showing the superconducting transition temperature, T$_C\sim$35\,K.
}
\label{FigChi}
\end{figure}
In Fig.\ref{FigChi}, we present the temperature dependence of the susceptibility data of RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$\ below about 5\,K measured in the field cooling (FC) mode with 10\,Oe magnetic field. Inset of Fig.\ref{FigChi} is a plot of the susceptibility data measured in both zero field cooling (ZFC) and FC modes, showing the superconducting transition temperature, T$_C\sim$35\,K \cite{ZCWang2017jpcm}.
The increase of the FC data with decreasing temperature below T$_{mag}\sim$3\,K signals the onset of magnetic ordering of the Gd$^{3+}$ moments. The upturn is not likely due to the small amount of Gd$_2$O$_3$ impurity phase, since cubic Gd$_2$O$_3$ exhibits an antiferromagnetic transition at 1.6\,K and monoclinic Gd$_2$O$_3$ at 3.4\,K \cite{prb.58.3212}. This conclusion is consistent with the low temperature upturn of the specific heat data \cite{ZCWang2017jpcm}, which suggests that this transition is a bulk effect. As for the nature of this magnetic transition, it is most likely canted antiferromagnetism. This is consistent with the negative Curie-Weiss temperature obtained from the high temperature magnetic susceptibility data (not shown). Note that in the similar compound RbDy$_{2}$Fe$_{4}$As$_{4}$O$_{2}$, the Dy$^{3+}$ moments become antiferromagnetically ordered below about 10\,K \cite{cm.29.1805}. Of course, future neutron diffraction measurements are needed to clarify this point.
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth,clip=true]{Fig_Compare}
\caption{
(color online)
The 300\,K $^{57}$Fe M\"ossbauer spectra of RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$\ (black dots) and RbFe$_{2}$As$_{2}$\ (blue diamond) together with singlet fits. The green dashed line is calculated curve of GdFeAsO\ using hyperfine parameters taken from ref. \cite{PWang2010jpcm}. Note that these spectra were shifted and normalized together for better view.
}
\label{Figcompare}
\end{figure}
One interesting feature of the compound RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$\ is self-doping-induced superconductivity with a transition temperature as high as 35\,K \cite{ZCWang2017jpcm}. As shown by the structural characterization, actual self doping was suggested by the charge redistribution reflected in the structural reconstruction: the 'GdFeAsO' building block becomes slender while the 'RbFe$_{2}$As$_{2}$' block does the opposite when compared with the two separate compounds GdFeAsO\ and RbFe$_{2}$As$_{2}$\ \cite{ZCWang2017jpcm}. In order to see if this self-doping effect really happens locally, namely at the Fe site, we compare the 300\,K MS of the RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$\ superconductor with those of RbFe$_{2}$As$_{2}$\ and GdFeAsO\ in Fig.\ref{Figcompare}. The MS of GdFeAsO\ shown in Fig.\ref{Figcompare} was calculated using hyperfine parameters taken from reference \cite{PWang2010jpcm}. One can see that the center, or IS, of the MS shifts in the positive direction from RbFe$_{2}$As$_{2}$\ to RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$\ and GdFeAsO, reflecting a decrease of the $s$-electron density at the Fe nucleus. To be quantitative, the fitted IS value for RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$\ is IS\,=\,0.364(6)\,mm/s, which is much closer to the value of IS\,=\,0.352(2)\,mm/s for RbFe$_{2}$As$_{2}$\ than to the value of IS\,=\,0.415(4)\,mm/s for GdFeAsO\ \cite{PWang2010jpcm}. This differs from the case of the intergrown sample CaKFe$_4$As$_4$, whose IS\,=\,0.372(2)\,mm/s lies between those of KFe$_{2}$As$_{2}$\ (IS\,=\,0.311(1)\,mm/s) and CaFe$_2$As$_2$\ (IS\,=\,0.430(2)\,mm/s) \cite{SLBudko2017arXiv}.
This could be due to the different chemical environments of the layers neighboring the FeAs layer, namely the Rb and GdO layers, or alternatively to different lattice dynamics for RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$\ compared with RbFe$_{2}$As$_{2}$\ and GdFeAsO. To resolve this issue, we made low temperature measurements and found that, at 5.9\,K, the IS value of RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$\ (0.495(6)\,mm/s) is right in the middle of those for RbFe$_{2}$As$_{2}$\ (0.451(4)\,mm/s) and GdFeAsO\ (0.542(5)\,mm/s at 1.9\,K) \cite{PWang2010jpcm}, indicating that the lattice dynamics play a more important role. These results provide direct evidence of the local electronic change, confirming that the self-doping effect does happen on the local scale at the iron nucleus, similar to the systematic change of IS with hole doping in the Ba$_{1-x}$K$_x$Fe$_2$As$_2$\ system \cite{DJohrendt2009pc}.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth,clip=true]{Fig_Dist}
\caption{
(color online)
$^{57}$Fe M\"ossbauer spectra of RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$\ and the corresponding IS distribution measured at 5.9\,K for (a) and (b) and at 300\,K for (c) and (d).
}
\label{Figdist}
\end{figure}
Another important result is that the MS of RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$\ exhibit a singlet pattern that persists down to the lowest temperature of 5.9\,K in the current experiment, indicating the absence of static magnetic order on the Fe site, consistent with the superconducting ground state \cite{ZCWang2017jpcm}. Attempts to fit our MS with a distribution of IS have been made, as shown in Fig.\ref{Figdist} for the 5.9\,K and 300\,K cases. Clearly, the IS distribution can be modeled with one symmetric Gaussian profile, consistent with the fact that there is only one crystallographic site for the Fe ion \cite{ZCWang2017jpcm}. Additionally, trying to fit the spectra with a doublet profile always yields zero quadrupole splitting values. Therefore, we fitted our MS with only one singlet profile, as shown in Fig.\ref{Figmoss}, from which reasonable fits can be seen. Clearly, this proves again that our sample is free of impurity phases that contain Fe, or that their amounts are too small to be detected by M\"ossbauer spectroscopy \cite{INowik2008jsnm}. The fitted IS and LW are listed in Table \ref{table} together with those of RbFe$_{2}$As$_{2}$\ and GdFeAsO\ for comparison.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth,clip=true]{Fig_Moss}
\caption{
(color online)
$^{57}$Fe M\"ossbauer spectra (black dots) of RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$\ measured at indicated temperatures. The red solid lines are singlet fits to the experimental data.
}
\label{Figmoss}
\end{figure}
\begin{table}[ht]
\centering
\caption{Hyperfine parameters of RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$. IS, isomer shift; LW, spectral full linewidth at half maximum. The 300\,K data for sample RbFe$_{2}$As$_{2}$\ and GdFeAsO\ \cite{PWang2010jpcm} were also shown for comparison.}
\label{table} {
\begin{tabular}{c c c c}
\hline \hline
Sample & Temp. (K) & IS (mm/s) & LW (mm/s) \\
\hline
& 5.9 & 0.495(6) & 0.65(2) \\
& 8.3 & 0.496(6) & 0.57(2) \\
& 10 & 0.495(5) & 0.52(2) \\
& 15 & 0.500(5) & 0.44(2) \\
& 20 & 0.497(4) & 0.46(1) \\
& 25 & 0.499(5) & 0.46(1) \\
& 36 & 0.500(5) & 0.51(2) \\
& 42 & 0.496(5) & 0.50(2) \\
RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$ & 50 & 0.495(5) & 0.57(2) \\
& 75 & 0.490(5) & 0.53(2) \\
& 100 & 0.481(5) & 0.45(2) \\
& 125 & 0.467(4) & 0.40(1) \\
& 150 & 0.461(5) & 0.46(2) \\
& 200 & 0.446(5) & 0.45(2) \\
& 225 & 0.432(5) & 0.43(1) \\
& 250 & 0.396(7) & 0.46(2) \\
& 300 & 0.364(6) & 0.40(2) \\
RbFe$_{2}$As$_{2}$ & 300 & 0.352(2) & 0.327(7) \\
& 5.9 & 0.451(4) & 0.40(1) \\
GdFeAsO\ \cite{PWang2010jpcm} & 300 & 0.415(4) & - \\
& 1.9 & 0.542(5) & - \\
\hline \hline
\end{tabular}}
\end{table}
The temperature dependence of IS, determined from the fits of the MS shown in Fig.\ref{Figmoss}, is presented in Fig.\ref{FigIS} together with a theoretical fit using equation (\ref{eqIS}). One can see that IS increases gradually with decreasing temperature and saturates at low temperatures, which is the typical behavior of the Debye model. In the Debye approximation of the lattice vibrations, IS can be fitted by the following equation \cite{Mossbook}
\begin{equation}
\begin{split}
IS(T) = IS(0) - \frac{9}{2}\frac{k_BT}{Mc}(\frac{T}{\Theta_D})^3\int_0^{\Theta_D/T}\frac{x^3dx}{e^x-1}
\end{split}
\label{eqIS}
\end{equation}
where IS(0)\,=\,0.497(4)\,mm/s is the temperature independent chemical shift, and the second term is the temperature dependent second-order Doppler shift. $k_B$ is the Boltzmann constant, $M$ is the mass of the M\"ossbauer nucleus, $c$ is the speed of light and $\Theta_D$ is the corresponding Debye temperature. A Debye temperature of $\Theta_D=415\pm47$\,K was obtained from the above fit. Similar values of $\Theta_D=409\pm4$\,K \cite{PWang2010jpcm} for GdFeAsO\ and $\Theta_D=474\pm20$\,K \cite{SLBudko2017arXiv} for KFe$_{2}$As$_{2}$\ were found by the same method. We note that the higher uncertainty in our fit mainly comes from the deviation of the experimental data from the theory above $\sim$150\,K, coinciding with the broad hump in the resistivity measurement, which might be related to an incoherent-to-coherent crossover associated with an emergent Kondo lattice effect \cite{ZCWang2017jpcm,YPWu2016prl}. Certainly, in order to confirm this point, more measurements on similar compounds with different rare earth elements are under way.
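Equation (\ref{eqIS}) can be evaluated directly. A minimal numerical sketch using the fitted values IS(0)\,=\,0.497\,mm/s and $\Theta_D=415$\,K, with the recoil mass taken as that of the bare $^{57}$Fe nucleus (an approximation; the quadrature scheme and grid size are implementation choices, not from the paper):

```python
import math

K_B = 1.380649e-23          # J/K
C_LIGHT = 2.998e8           # m/s
M_FE57 = 57.0 * 1.6605e-27  # kg, approximate mass of the 57Fe nucleus

def debye_integral(x_max, n=20000):
    """Midpoint-rule evaluation of int_0^x_max x^3 / (e^x - 1) dx."""
    h = x_max / n
    return sum(((i + 0.5) * h) ** 3 / math.expm1((i + 0.5) * h) * h
               for i in range(n))

def isomer_shift(T, IS0=0.497, theta_D=415.0):
    """IS(T) in mm/s from eq. (1): chemical shift minus the
    second-order Doppler shift in the Debye approximation."""
    sod = (4.5 * K_B * T / (M_FE57 * C_LIGHT)
           * (T / theta_D) ** 3 * debye_integral(theta_D / T))  # m/s
    return IS0 - sod * 1e3  # convert m/s -> mm/s

print(isomer_shift(5.9))    # ~0.497 mm/s: the shift has saturated
print(isomer_shift(300.0))  # ~0.37 mm/s, near the measured 0.364(6) mm/s
```

The evaluated curve reproduces the saturation at low temperature and the roughly linear decrease above $\sim$150\,K visible in Table \ref{table}.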
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth,clip=true]{Fig_IS}
\caption{
(color online)
Temperature dependence of the IS of RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$\ determined from the fits shown in Fig.\ref{Figmoss}. The solid line is theoretical fit to the Debye model as explained in the text.
}
\label{FigIS}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth,clip=true]{Fig_LW}
\caption{
(color online)
Temperature dependence of the spectral LW of RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$. The solid lines are guides to the eye.
}
\label{FigLW}
\end{figure}
Another interesting feature of the RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$\ compound is the possible interplay of magnetism with superconductivity at low temperatures. Note the magnetic transition temperature of the Gd$^{3+}$ moments, T$_N\sim$3\,K, as shown in Fig.\ref{FigChi}. To see if any magnetic fluctuations are transferred to the Fe site from the Gd$^{3+}$ moments, we plot the LW as a function of temperature in Fig.\ref{FigLW}. From the measurement shown in Fig.\ref{FigLW}, we can see that above 100\,K a featureless LW with values in the range of 0.4\,mm/s - 0.46\,mm/s was observed. With decreasing temperature, two LW broadenings can be seen. Broadening of the LW usually reflects a distribution of the surrounding electromagnetic environments experienced by the $^{57}$Fe nucleus.
First, LW broadening usually comes from defects or doping-induced inhomogeneities. However, such inhomogeneities usually do not change with temperature at the relatively low temperatures of our measurements. Second, chemical alterations at the sample surface may also give such LW broadenings. However, one would then expect a monotonic temperature dependence of the LW broadening, which is inconsistent with our observation that the spectral LW first increases, then decreases and increases again with decreasing temperature. Therefore, these mechanisms are likely not responsible for the LW broadenings observed here.
Alternatively, LW broadening can be induced by magnetic fluctuations, as observed in other iron based superconductors \cite{XMMa2014epl,XMMa2013jpcm}. Therefore, we attribute the broadening below about 15\,K to magnetic fluctuations transferred from the Gd$^{3+}$ moments, which order below about 3\,K as discussed earlier. More interestingly, the broadening below 100\,K, considerably lower than the spin density wave ordering temperature of GdFeAsO\ ($\sim$128\,K \cite{YLuo2009prb}), might be attributed to magnetic fluctuations on the Fe site. This is consistent with the fact that this LW broadening is completely suppressed within the superconducting state, which can be understood as a result of the competition between magnetism and superconductivity in the compound \cite{MDLumsden2010jpcm,JMunevar2011arXiv,DKPratt2009prl}. This suggests that the LW broadenings observed here are induced by intrinsic electronic inhomogeneities arising from the competition between multiple order parameters, rather than by chemical disorder as discussed above.
The interplay between magnetism and superconductivity in the iron based superconductors has been widely discussed in the past \cite{PDai2012np,MDLumsden2010jpcm}. Even after almost ten years of intense study, a final understanding of this issue is still far from being reached. Our work provides a new direction in which to further investigate this problem. In particular, an inelastic neutron scattering study of the dynamical magnetic properties would be very important.
\section{Summary}
In summary, $^{57}$Fe M\"ossbauer spectroscopy measurements on RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$\ have been performed. The singlet pattern of the spectra indicates that the sample does not exhibit static magnetic order on the Fe sublattice down to 5.9\,K. The intermediate value of the isomer shift of RbGd$_{2}$Fe$_{4}$As$_{4}$O$_{2}$\ compared with those of RbFe$_{2}$As$_{2}$\ and GdFeAsO\ suggests an effective self hole doping at the Fe site. A Debye temperature of $415\pm47$\,K was determined from the temperature dependence of the isomer shift. Most importantly, at the Fe site we found magnetic fluctuations transferred from the Gd$^{3+}$ moments below 15\,K and magnetic fluctuations of the Fe moments themselves below 100\,K. Our results call for future investigations of the possible interplay between magnetism and superconductivity in this iron-based superconductor.
\section{Acknowledgements}
This work was supported by the Fundamental Research Funds for the Central Universities under Grant number 223000/862616 and the National Key Research and Development Program of China under Grant number 2016YFA0300202.
\section{Introduction}
We model a network of interacting qubits in the continuous-time quantum walk XY model by a simple graph $G$ with adjacency matrix $A$, which encodes the pairs of qubits that interact. We are interested in the single-excitation subspace, that is, the case when the system is initialized with one qubit in state $\ket 1$ and all others in state $\ket 0$. This model with a time-independent Hamiltonian evolves according to the Schrödinger equation, and the main problem we address is what happens to the state $\ket 1$ as time passes. For example, if this state is observed at another vertex with probability $1$ after a certain time, we say that \textit{perfect state transfer} has occurred. This is equivalent to having a pair of vertices $a$ and $b$ and a time $\tau \geq 0$ such that
\[|\exp(\ii \tau A)_{a,b}| = 1.\]
When $a = b$ in the equation above, we say that vertex $a$ is periodic. Vertices involved in perfect state transfer are periodic at double the time, but the converse does not hold in general.
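These definitions are easy to verify numerically. A minimal sketch, computing $\exp(\ii \tau A)$ from the eigendecomposition of $A$ for the paths on two and three vertices at their well-known transfer times $\pi/2$ and $\pi/\sqrt{2}$:

```python
import numpy as np

def transition_matrix(A, t):
    """U(t) = exp(i t A), computed from the eigendecomposition of symmetric A."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(1j * t * w)) @ V.conj().T

# Path on 2 vertices: perfect state transfer between the ends at time pi/2.
P2 = np.array([[0, 1], [1, 0]], dtype=float)
print(abs(transition_matrix(P2, np.pi / 2)[0, 1]))  # -> 1.0 (up to rounding)

# Path on 3 vertices: perfect state transfer between the end
# vertices at time pi/sqrt(2).
P3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(abs(transition_matrix(P3, np.pi / np.sqrt(2))[0, 2]))  # -> 1.0
```

Evaluating $|U(2\tau)_{a,a}|$ at double these times likewise returns $1$, illustrating the remark that vertices involved in perfect state transfer are periodic at double the time.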
Perfect state transfer was first considered by Bose \cite{BoseQuantumComPaths} and has since been studied in a good number of papers and in different contexts, spanning the fields of physics, mathematics and computer science. Surveys are found in Kendon and Tamon \cite{KendonTamon} and Godsil \cite{GodsilStateTransfer12}. The thesis \cite{CoutinhoPhD} contains a detailed introduction and a more recent compilation of known results.
Christandl et al.~\cite{ChristandlPSTQuantumSpinNet2} showed that the paths on two and three vertices admit perfect state transfer between the end vertices, and that if a graph admits perfect state transfer, then any Cartesian power of this graph also admits perfect state transfer at the same time. The pictures below depict the Cartesian squares and cubes of paths on 2 and 3 vertices, and the vertices in black are involved in perfect state transfer.
\begin{center}
\includegraphics[scale=0.4]{paths}
\end{center}
Suppose a graph $G$ admits perfect state transfer between vertices at distance $d$. In this paper, we are interested in determining the cost of this in terms of the number of vertices or edges. To this day, the best known trade-off is achieved by the Cartesian powers of $P_3$, in which perfect state transfer between vertices at even distance $d$ happens in a graph with $3^{d/2}$ vertices and $d \cdot 3^{(d/2) - 1}$ edges. However, there is no known lower bound on the number of edges in terms of the diameter alone other than a trivial linear bound. See \cite[section E]{KayPerfectcommunquantumnetworks} for some discussion.
As we mentioned, vertices involved in perfect state transfer must be periodic. This turns out to be the key concept needed to bound the size of the graph. More specifically, we will derive bounds that depend on the eccentricity of a periodic vertex, and also on the period.
\section{The eccentricity of periodic vertices}
In our considerations below, $M$ is a symmetric integer matrix whose rows and columns are indexed by the vertices of a graph, and whose off-diagonal entries are non-zero if and only if the corresponding pair of vertices is adjacent. Moreover, we assume the sign of all off-diagonal entries is the same. For example, $M$ can be the adjacency, the Laplacian, or the signless Laplacian matrix, as well as weighted versions of these matrices, provided all weights are integers with the same sign.
We say that vertex $a$ is periodic in $G$ according to $M$ if there is a positive real time $\tau$ such that $\exp(\ii \tau M)_{a,a}$ has absolute value equal to one.
Let $\theta_0 > ... > \theta_t$ be the distinct eigenvalues of $M$. If $a$ is a vertex of $G$, we denote by $\ee_a$ the $01$-vector in $\R^n$ that is $0$ everywhere except for the position corresponding to $a$. Because $M$ is symmetric, it has a spectral decomposition into orthogonal idempotents as follows.
\[M = \sum_{r = 0}^t \theta_r E_r.\]
Let $\Phi_a$ denote the set of eigenvalues of $M$ such that $\theta_r \in \Phi_a$ if and only if $E_r \ee_a \neq 0$. This is the eigenvalue support of $a$.
\begin{theorem}[Godsil \cite{GodsilPerfectStateTransfer12}, Theorem 6.1]
If vertex $a$ is periodic in $G$ according to $M$, then the non-zero elements in $\Phi_a$ are either all integers or all quadratic integers. Moreover, there is a square-free positive integer $\Delta$, an integer $\alpha$ and integers $\beta_r$ such that
\[\theta_r \in \Phi_a \implies \theta_r = \frac{1}{2} (\alpha+\beta_r\sqrt{\Delta}).\]
We consider $\Delta = 1$ for the cases where all eigenvalues are integers.
\end{theorem}
If $M = L(G)$, note that $0$ is always an eigenvalue in the eigenvalue support of any vertex. Moreover, $L(G) \succeq 0$, and so no eigenvalue of $L(G)$ can be of the form $b \sqrt{\Delta}$ with $b>0$ and $\Delta > 1$, as this would imply that $-b\sqrt{\Delta}$ is also an eigenvalue. As a consequence, we have the following corollary.
\begin{corollary}
If $M = L(G)$ and vertex $a$ is periodic, then the elements in $\Phi_a$ are all integers.
\end{corollary}
We will also need the following result.
\begin{theorem}[Coutinho \cite{CoutinhoPhD}, Theorem 2.4.4] \label{thm:1}
Suppose vertex $a$ is periodic in $G$ according to $M$ at time $\tau$. Let
\[ g = \gcd\left( \left\{ \frac{\theta_0 - \theta_r}{\sqrt{\Delta}} \right\}_{\theta_r \in \Phi_a} \right).\]
Then $\tau$ must be an odd multiple of $\dfrac{2 \pi}{g \sqrt{\Delta}}$.
\end{theorem}
As a consequence of the theorem above, the minimum time at which periodicity occurs is at most $2\pi$. We also obtain the following lemma.
\begin{lemma} \label{lem:1}
Suppose vertex $a$ is periodic in $G$ according to $M$ at minimum time $\tau$, and let $\theta > \theta'$ be distinct eigenvalues in $\Phi_a$. Then
\[\theta - \theta' \ \geq\ \frac{2\pi}{\tau}.\]
\end{lemma}
\begin{proof}
Suppose $\theta$ and $\theta'$ are two elements of $\Phi_a$. Then, by Theorem \ref{thm:1}, there is an integer $k$ such that
\[\frac{2\pi}{\tau} \, k = \theta - \theta'.\]
Since $\theta$ and $\theta'$ are distinct, $k$ is a non-zero integer, and the result follows.
\end{proof}
Let $\varepsilon_a$ denote the eccentricity of vertex $a$ in the graph $G$, that is, the maximum distance between any vertex of $G$ and vertex $a$. A standard argument in algebraic graph theory leads to a relation between $\varepsilon_a$ and $|\Phi_a|$.
\begin{lemma} \label{lem:2}
Let $M$ be as defined in the beginning of this section. Let $\Phi_a$ be the eigenvalue support of vertex $a$ according to $M$. Then $\varepsilon_a + 1 \leq |\Phi_a|$.
\end{lemma}
\begin{proof}
Consider the subspace of $\R^n$ defined as
\[W_a = \langle \{ M^i \ee_a\}_{i \geq 0} \rangle.\]
It follows that
\[W_a = \langle \{ E_r \ee_a \}_{\theta_r \in \Phi_a} \rangle,\]
hence $\dim W_a = |\Phi_a|$. Because the non-zero off-diagonal entries of $M$ correspond to adjacent vertices and all have the same sign, it follows that the vectors $\{M^i \ee_a\}_{i = 0}^{\varepsilon_a}$ are linearly independent, and thus
\[\varepsilon_a +1\leq |\Phi_a|.\]
\end{proof}
We will now proceed to establish bounds on the size of the graph that depend solely on the eccentricity of a periodic vertex.
\begin{theorem}
Let $G$ be a graph with $m$ edges that, according to the quantum walk model defined by the adjacency matrix, contains a periodic vertex $a$ with period $\tau$. Let $\varepsilon_a$ be the eccentricity of $a$. Then
\[\left(\frac{\varepsilon_a}{3}\right)^3 < 2m.\]
\end{theorem}
\begin{proof}
Let $\theta_0,...,\theta_{n-1}$ denote the eigenvalues of $A$, possibly with repetition, and assume they are ordered in such a way that $\theta_0^2 \geq \theta_1^2 \geq ... \geq \theta_{n-1}^2$. Because the diagonal entries of $A^2$ contain the degrees of each vertex, it follows that
\[\tr A^2 = 2m = \sum_{j = 0}^{n-1} \theta_j^2.\]
As a consequence
\[\theta_j^2 \leq \frac{2m}{j+1}.\]
Let $k = \lfloor \sqrt[3]{2m} \rfloor$. Note that $k \leq n-1$. From Lemma \ref{lem:1}, the separation between any two distinct eigenvalues in $\Phi_a$ is at least $2\pi / \tau$. Thus, in the worst case where all eigenvalues $\theta_0,...,\theta_{k-1}$ belong to $\Phi_a$, we can still say that
\[\frac{2 \pi}{\tau} (|\Phi_a| - k - 1) \leq 2|\theta_k|.\]
Hence
\begin{align*}
|\Phi_a| & \leq \frac{\tau}{\pi} \sqrt{\frac{2m}{k+1}} + k + 1 \\ &
< \sqrt[3]{2m}\left( \frac{\tau}{\pi} + 1\right) + 1 \\ &
\leq 3 \sqrt[3]{2m} + 1,
\end{align*}
where the last inequality follows from $\tau \leq 2 \pi$ (Theorem \ref{thm:1}). We know from Lemma \ref{lem:2} that $\varepsilon_a + 1 \leq |\Phi_a|$, thus
\[\left(\frac{\varepsilon_a}{3}\right)^3 < 2m.\]
\end{proof}
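As a sanity check, the bound can be verified numerically on hypercubes $Q_d$, which are a standard family of periodic graphs. The sketch below (helper name is ours) uses the facts that $Q_d$ has $d\,2^{d-1}$ edges and that every vertex of $Q_d$ has eccentricity $d$:

```python
import numpy as np
from itertools import product

def hypercube_adjacency(d):
    """Adjacency matrix of the d-cube Q_d."""
    verts = list(product([0, 1], repeat=d))
    n = len(verts)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            # adjacent iff the binary labels differ in exactly one coordinate
            if sum(x != y for x, y in zip(verts[i], verts[j])) == 1:
                A[i, j] = A[j, i] = 1
    return A

for d in range(1, 8):
    A = hypercube_adjacency(d)
    m = int(A.sum()) // 2     # number of edges: d * 2**(d-1)
    ecc = d                   # every vertex of Q_d has eccentricity d
    assert (ecc / 3) ** 3 < 2 * m
```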
It is immediate from the theorem above that the number of vertices cannot be bounded above by a linear function of $\varepsilon_a$. Adapting the proof to matrices $M$ as defined in the beginning of this section can lead to similar bounds in other quantum walk models. However, the specificities of the matrix could make the bound stronger or weaker. For instance, with the Laplacian matrix, the same technique allows only for a quadratic bound.
In particular, we know that all eigenvalues $0 = \lambda_{n-1} \leq ... \leq \lambda_0$ are non-negative and that
\[\sum_{j = 0}^{n-1} \lambda_j = 2m.\]
So we have the following.
\begin{theorem}
Let $G$ be a graph with $m$ edges that, according to the quantum walk model defined by the Laplacian matrix, contains a periodic vertex $a$ with period $\tau$. Let $\varepsilon_a$ be the eccentricity of $a$. Then
\[\left(\frac{\varepsilon_a}{3}\right)^2 < m.\]
\end{theorem}
\begin{proof}
We have
\[\lambda_j \leq \frac{2m}{j+1}.\]
Let $k = \lfloor \sqrt{m} \rfloor$. From Lemma \ref{lem:1}, the separation between any two distinct eigenvalues in $\Phi_a$ is at least $2\pi / \tau$. Thus, in the worst case where all eigenvalues $\lambda_0,...,\lambda_{k-1}$ belong to $\Phi_a$, we can still say that
\[\frac{2 \pi}{\tau} (|\Phi_a| - k - 1) \leq \lambda_k.\]
Hence
\begin{align*}
|\Phi_a| & \leq \frac{\tau}{\pi} \cdot \frac{m}{k+1} + k + 1 \\ &
< \sqrt{m}\left( \frac{\tau}{\pi} + 1\right) + 1 \\ &
\leq 3 \sqrt{m} + 1,
\end{align*}
where the last inequality follows from $\tau \leq 2 \pi$ (Theorem \ref{thm:1}). We know from Lemma \ref{lem:2} that $\varepsilon_a + 1\leq |\Phi_a|$, thus
\[\left(\frac{\varepsilon_a}{3}\right)^2 < m.\]
\end{proof}
\section{Discussion and questions}
The task of finding vertices admitting perfect state transfer at arbitrarily large distances has been studied now for at least 10 years. As far as I know, this paper provides the first non-trivial lower bounds on the size of an unweighted graph that would admit perfect state transfer relative to its diameter. This is because any vertex involved in perfect state transfer must be periodic, and the eccentricity of any vertex is at least half the diameter of the graph. One of the reasons that make this work relevant is that the size of the graph can be seen as the cost to construct the quantum system.
Our cubic (or quadratic, depending on the model) lower bound frustrates the expectation that small variations on arbitrarily long paths could provide families of graphs admitting perfect state transfer. It also shows that some trees do not admit perfect state transfer, providing some support to the conjecture that, in the adjacency matrix model, no tree other than $P_2$ or $P_3$ does (see \cite{CoutinhoLiu2}).
The dual line of investigation to what we expose here is to find families of relatively small graphs admitting perfect state transfer at arbitrarily large distances. As we pointed out in the introduction, the best trade-off is achieved by Cartesian powers of $P_3$. In \cite[Section 5]{CoutinhoSpectrallyExtremal2}, some strategies on how to perturb these powers to reduce the size of the graph are briefly discussed, but we were not able to obtain a relevant asymptotic improvement.
We end with a list of questions related to our work.
\begin{enumerate}
\item If there is a periodic vertex at time $2\pi/\lambda$, provide a lower bound on the size of the graph that depends on $\lambda$ and is better than the cubic bound we have. I am guessing this could be exponential in $\lambda$. Note that the only examples we know of perfect state transfer happening at arbitrarily small times come from graphs which are very large (see \cite{AdaChanComplexHadamardIUMPST}).
\item Improve our polynomial bounds to exponential, or find a family of graphs admitting perfect state transfer at arbitrarily large distances and size bounded above by a polynomial in their diameter.
\end{enumerate}
\section{Acknowledgements}
I thank Chris Godsil and Alastair Kay for some discussion on the topic of this paper. I also acknowledge a travel grant from the Dept. of Computer Science at UFMG.
\subsection{Restriction to Acyclic Topologies}
\label{sec:levelize}
\input{levelize}
\subsection{Adding fault injection results to SAT problem}
We now extend SAT attacks to account for the attacker's ability to inject faults.
To incorporate fault injection results into the SAT problem, the attacker can add a new multiplexer to select between faulty or normal values for all nodes (except primary inputs) in his model, and then allow the SAT solver to guide his fault injection trials as will be shown. This new structure is shown with thick lines in Fig. \ref{fig:faultModel}. For a node $n_x$ in the circuit, $injectFault_x$ is a primary input to the model that selects whether or not a fault should be injected on it. Primary input signal $FaultVal$ determines the value that is forced onto the selected node. Node $n_x\_fe$ is the fault-enabled version of the node, which is either the value computed by the circuit for $n_x$, or the value forced onto the node by fault injection.
Discriminating inputs produced by the solver now provide the reverse engineer with an input vector to apply to the circuit, as well as a node to fault, and a faulty value to inject on that node. The attacker applies these conditions to the oracle circuit, finds the resulting output vector, and feeds the conditions back into the SAT solver as constraints. The ability to have additional discriminating information through fault injection can allow an attacker to better distinguish between circuit implementations that are overall functionally equivalent.
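The added multiplexer structure can be sketched in a few lines; this is an illustrative model of the mechanism, with names of our choosing rather than from the paper:

```python
def fault_enabled(n_x, inject_fault_x, fault_val):
    # The added multiplexer: when injectFault_x is asserted, the node takes
    # the forced value FaultVal; otherwise it keeps the value computed by
    # the circuit for n_x.
    return fault_val if inject_fault_x else n_x

# Example: node C computes A AND B = 0, but an injected fault forces it to 1.
A, B = 1, 0
C_fe = fault_enabled(A & B, inject_fault_x=True, fault_val=1)
assert C_fe == 1
```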
\subsection{Adding Voltage Probing to SAT problem}
\label{sec:sat-lvp}
We now extend SAT attacks to also incorporate voltage probing. Voltage probing has the effect of selectively making internal nodes of the circuit visible to the attacker, which in the attacker's model is equivalent to selectively making internal nodes appear as part of the observable output vector. We create an input signal in the model that selectively activates the observability of a node so that, when the SAT solver finds a discriminating input, that input now indicates to the attacker whether any internal signals should be probed when the vector is applied to the oracle. The corresponding output vector and probed value are fed back into the SAT attack as the additional constraints on the circuit configuration variables.
Probing is illustrated in Fig.~\ref{fig:faultModel} using a model of a circuit with a single internal node $C$. The newly added input signal $Probe_C$ selects whether the value of node $C$ can be considered when finding a discriminating input vector. Whenever the solver decides to assert the input signal $Probe_C$, then a discriminating vector is one that can produce different values on the primary outputs or on the now-observable signal $C$. Whenever the solver does not assert $Probe_C$, then the approach reverts to the standard SAT attack that discriminates between possible configurations based on the primary outputs only.
\begin{figure*}[tbh!]
\centering
\begin{minipage}[c]{0.22\textwidth}
\centering
\subfloat{
\includegraphics[width=1\textwidth]{circuit}
}
\end{minipage}
\begin{minipage}[c]{0.75\textwidth}
\centering
\subfloat{
\includegraphics[width=1\textwidth]{modelWithFault}
}
\end{minipage}
\caption{The circuit shown on the left would be modeled as the one shown on the right. The multiplexers in thick lines are used to incorporate fault injection and the AND gate controlled with $Probe_C$ signal is used to add voltage probing into the SAT problem. }
\label{fig:faultModel}
\end{figure*}
\section{Related Work and Background}
Imaging-based invasive reverse engineering works by decapsulating the chip, imaging and removing each layer in succession, and then using the images to reconstruct the circuit schematic.
A countermeasure against imaging-based reverse engineering is the use of various camouflaged gates or camouflaged interconnects. Camouflaged components are ones in which different functions are implemented by features that are indistinguishable to the reverse engineer, so that function cannot be inferred from appearance. Camouflaged gate libraries use hard-to-observe structural techniques to differentiate the gate functions~\cite{syphermedia-library,cocchi-14}, use functionality that can be controlled without structural differences via transistor doping~\cite{becker-13,shiozaki-14,malik-obfusgate,iyengar-15,collantes-16}, use conducting and non-conducting interconnects, or use secret key inputs that control the design functionality with a structure \cite{KeshavarzSRAM} that cannot be distinguished by the attacker once the chip is delayered~\cite{chen-2015-dummyWire}. Though camouflaging is a promising hardware security enhancement technique, it comes with additional overheads in the chip area, power consumption, and fabrication cost, and there is always a trade-off between the security and overheads~\cite{chakraborty-09,rajendran-13}.
\subsection{SAT Attacks}
SAT attacks are based on the principle of finding discriminating input vectors, which are input vectors that can eliminate at least one additional circuit function hypothesis once the corresponding output vector is known. Once no further discriminating vectors can be found, it means that no further circuit functions can be ruled out by any tests, and therefore the current set of discriminating inputs is sufficient to uniquely identify the circuit function. Techniques from oracle-guided synthesis~\cite{jha2010oracle} are used in SAT-based attacks to reverse engineer gate camouflaging or logic encryption~\cite{elmassad-15,subramanyan-15}.
It is important to note that a circuit reverse engineered by oracle-guided synthesis is only guaranteed to be functionally equivalent to the obfuscated circuit, and there is no assurance that it will match the obfuscated circuit on a gate-by-gate basis. Ensuring gate-by-gate equivalence to the obfuscated circuit is generally impossible because the attack only has information about the inputs and outputs. Designs recovered through oracle-guided synthesis may therefore be unsuitable for certain classes of side-channel or fault injection attacks that require knowing the states of all combinational circuit nets. In this paper, we propose a SAT-based de-obfuscation technique that assumes very little knowledge about the obfuscated circuit connections or gates, yet still attempts to reconstruct the exact gate-level schematic of the obfuscated circuit.
\subsection{Attacker Model}
The attacker model we consider in this work represents an adversary that is trying to reverse engineer a circuit from the backside. This scenario may arise in chips with anti-tamper mechanisms that prevent delayering to learn the interconnections of each metal layer. From the backside, the adversary has a very limited knowledge of the circuit as listed below:
\textbf{Connections:} All connections in the circuit are unknown.
This means that any gate input in the circuit could be connected to the output of any other gate in the circuit.
\textbf{Gate inputs/outputs:} Each gate has a single output, and the output pin of the gate can be identified, yet the adversary cannot see what the gate output connects to. The adversary can know how many inputs each gate has, but cannot know which signals (primary inputs or outputs of other gates) are driving them. If the number of inputs to each gate cannot be determined, the attacker can be conservative and overestimate the number of inputs to each gate.
\textbf{Gate functions:} Our model considers that the attack may know nothing about the gate functions. That is, a gate with $n$ inputs can implement any of $2^{2^n}$ possible functions.
\vspace{5pt}
The assumed attacker capabilities in this work are as described below:
\textbf{Circuit inputs/outputs:} Attacker has a working circuit instance, and can apply the desired inputs to the circuit and observe the outputs.
The circuit instance used to correctly map input vectors to output vectors is called the ``oracle''.
In case the circuit is part of an encryption hardware, the
attacker can control the input to the encryption hardware and knows (or is able to
set) an internal secret key. This enables calculating any intermediate
values that might occur during computation (the primary outputs of our target circuit).
\textbf{Probes: } At some points in the work, the attacker is allowed to probe the value of arbitrary gate outputs. In this setting, the attacker still has no knowledge of connectivity and hence doesn't know what else is being driven by the node that is probed.
Due to the nature of probing, it is not possible to probe the value of gate inputs.
\textbf{Fault Injection: } At some points in the work, the attacker is allowed to inject faults using a laser.
\subsection{Enforcing Levelization in SAT}
{\bfseries Encoding Constraints:}
For each gate $g_j$ in a circuit with $n$ levels, we define a bit-vector of auxiliary variables $(l_0(g_j), l_1(g_j), \dots, l_{n}(g_j))$ to encode the level of the gate. The level is encoded in thermometer code style, as a run of 0 values followed by a run of 1 values. If bit $l_i(g_j)$ is 0, then gate $g_j$ exceeds level $i$. If bit $l_i(g_j)$ is 1, then the level of gate $g_j$ is less than or equal to $i$. The level of the gate is therefore the left-most bit position in which the value is 1. For example in Fig.~\ref{fig:loopPrevention}, the level of $g_2$ is 2 because $l_2(g_2)$ is the left-most bit position with value 1. In any legal thermometer-coded value, every 0 bit in the vector other than the first must be preceded by another 0 bit, and this is enforced by the encoding invariant shown in eq.~\ref{eq:encodingconst}. The first and last bit of the level encoding vector must be 0 and 1, respectively, for all gates.
\vspace{-10pt}
\begin{equation}
\label{eq:encodingconst}
\begin{small}
\begin{aligned}
\forall{{\scriptstyle i>0}, g_j}:
\overbrace{\neg l_i(g_j)}^{level > i} \Rightarrow \overbrace{\neg l_{i-1}(g_j)}^{level > i-1},\qquad\forall{g_j}: \neg l_0(g_j) \wedge l_n(g_j)
\end{aligned}
\end{small}
\end{equation}
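The encoding invariant translates directly into CNF clauses over signed literals. A minimal sketch, in which the clause representation and the variable numbering `var(gate, i)` are our own hypothetical choices:

```python
def encoding_clauses(gate, n, var):
    """CNF clauses (lists of signed literals) enforcing the thermometer
    invariant on the level vector l_0(g),...,l_n(g) of one gate.
    var(gate, i) maps bit l_i of the gate to a positive variable id."""
    clauses = []
    for i in range(1, n + 1):
        # (not l_i  =>  not l_{i-1})  is the clause  (l_i OR not l_{i-1})
        clauses.append([var(gate, i), -var(gate, i - 1)])
    clauses.append([-var(gate, 0)])   # first bit must be 0
    clauses.append([var(gate, n)])    # last bit must be 1
    return clauses

# Hypothetical numbering: bit i of gate g gets id g*(n+1) + i + 1.
n = 3
cl = encoding_clauses(0, n, lambda g, i: g * (n + 1) + i + 1)
assert cl == [[2, -1], [3, -2], [4, -3], [-1], [4]]
```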
{\bfseries Ordering of Levels:}
For the circuit to be levelized, each gate $g_j$ at level $i$ or greater must get its inputs from gates at level $i+1$ or greater. Using transition predicate $R(a,b)$ (see Sec.~\ref{sec:sat-formulation}) to denote a connection from output of gate $a$ to an input of gate $b$, any legal level-ordering between nodes $a$ and $b$ must obey the ordering constraint in eq.~\ref{eq:orderinconst}. For example in Fig.~\ref{fig:loopPrevention}, $l_1(g_2)=0$ (meaning that gate $g_2$ is level 2 or higher) and we have $R(g_0, g_2)=1$ and $R(g_1,g_2)=1$, indicating that both $g_0$ and $g_1$ fan out to $g_2$. Therefore, the ordering constraint enforces that $l_2(g_0)$ and $l_2(g_1)$ must both be 0, meaning that gates $g_0$ and $g_1$ are at level 3 or higher.
\begin{equation}
\label{eq:orderinconst}
\begin{small}
\begin{aligned}
\forall{{\scriptstyle i>0},g_j,g_k}: \!\!
\left( \overbrace{\neg l_{i-1}(g_j)}^{level \geq i} \wedge R(g_k, g_j)\right)
\Rightarrow \overbrace{\neg l_{i}(g_k)}^{fanin\ level \geq i+1}
\end{aligned}
\end{small}
\end{equation}
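The ordering constraint likewise expands into one clause per level and per ordered gate pair. The sketch below is illustrative; the variable maps `var_l` and `var_R` are hypothetical stand-ins for however the level bits and transition predicates are numbered in the actual SAT instance:

```python
def ordering_clauses(n_levels, n_gates, var_l, var_R):
    """CNF clauses for the level-ordering constraint: if gate g_j is at
    level >= i (l_{i-1}(g_j) = 0) and R(g_k, g_j) holds, then its fan-in
    g_k must be at level >= i+1 (l_i(g_k) = 0)."""
    clauses = []
    for i in range(1, n_levels + 1):
        for j in range(n_gates):
            for k in range(n_gates):
                if k != j:
                    # (not l_{i-1}(g_j) AND R(g_k,g_j)) => not l_i(g_k)
                    clauses.append([var_l(j, i - 1), -var_R(k, j), -var_l(k, i)])
    return clauses
```

One clause is emitted for each of the $n$ levels and each ordered pair of distinct gates, so the levelization constraints dominate the size of the SAT instance.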
\vspace{-8pt}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.65\columnwidth]{loopPrevention}
\caption{Sample levelization encoding in a circuit}
\label{fig:loopPrevention}
\end{figure}
\section{Introduction}
\label{sec:introduction}
\input{introduction}
\section{SAT Formulation for Unknown Gates and Connections}
\label{sec:sat-formulation}
\input{sat-formulation}
\section{Learning from Voltage Probing}
\label{sec:probing}
\input{probing}
\section{Learning from Fault Injection}
\label{sec:fault-injection}
\input{fault-injection}
\section{Extended SAT Formulation}
\label{sec:extended-sat-formulation}
\input{extended-sat-formulation}
\section{Results}
\label{sec:results}
\input{results}
\section{Conclusions}
\label{sec:conclusions}
\input{conclusions}
\thispagestyle{empty}
\bibliographystyle{acm}
\subsection{Refining feasible inputs of gates}
Having access to the value of internal signals can also help the SAT solver to make inferences about the possible connections between gates. Even when gate functions are unknown, it is known due to the nature of circuits that each gate instance must implement a deterministic Boolean function; in other words, any gate must always map the same gate input value to the same gate output value.
\begin{table}%
\centering
\subfloat[][Probed node values.\label{tab:probed}]{
\scalebox{0.8}{
\begin{tabular}{l|l|l|l|l}
\multicolumn{1}{l|}{input} & \multicolumn{4}{c}{circuit nodes} \\
vector & A & B & C & X \\ \hline \hline
0000 & 1 & 1 & 0 & 0 \\ \hline
0001 & 0 & 1 & 0 & 1 \\ \hline
0010 & 1 & 1 & 1 & 0 \\ \hline
0011 & 0 & 0 & 1 & 0 \\ \hline
0100 & 0 & 1 & 0 & 1 \\
\end{tabular}
}
}%
\quad
\subfloat[][Gate truth table for different connections.\label{tab:tt}]{
\scalebox{0.82}{
\begin{tabular}{l|l c l|l c l|l}
AB & X && AC & X && BC & X \\ \cline{1-2} \cline{4-5} \cline{7-8} \cline{1-2} \cline{4-5} \cline{7-8}
00 & 0 && 00 & 1,1 && 00 & \\ \cline{1-2} \cline{4-5} \cline{7-8}
01 & 1,1 && 01 & 0 && 01 & 0 \\ \cline{1-2} \cline{4-5} \cline{7-8}
10 & && 10 & 0 && 10 & 0,1,1 \\ \cline{1-2} \cline{4-5} \cline{7-8}
11 & 0,0 && 11 & 0 && 11 & 0 \\
\end{tabular}
}
}
\caption{Example showing that probed values can rule out certain connections. One of the three possible node pairings is non-deterministic and will be ruled out by the SAT solver.}
\vspace{-15pt}
\label{tab:probing_example}
\end{table}
As a demonstration of how probing can eliminate some candidate connections, consider the example of Tab.~\ref{tab:probing_example} that shows the values of selected nodes when five different primary input values are applied to a circuit. Assume in this case that the attacker knows that node $X$ is the output of a 2-input gate and nodes $A$, $B$, and $C$ are other nodes in the circuit. Without knowing the connections of the circuit, the attacker knows only that the inputs to the 2-input gate that produces $X$ are either ($A,B$), ($A,C$), or ($B,C$).
For each primary input that is applied to the circuit, probing the values of node X and each of the possible gate input connection pairs make up different cases as shown in Tab.~\ref{tab:tt}.
Looking at these truth tables, we can see that it is impossible for the gate input connections to be ($B,C$), because $X$ takes different values in the three vectors that induced ($B,C$) to have the values (1,0). Input combinations ($A,B$) and ($A,C$) both imply a consistent (deterministic) function for the gate, so neither of these can be ruled out. Note that our pair notation is not ordered; in other words, ($n_i, n_j$) = ($n_j, n_i$).
Using probed values to rule out infeasible input combinations leads to, for each gate, a set of feasible input pairs. If the set of nodes in the circuit is denoted as $N$, for each node $n_x \in N$ that is the output of a 2-input gate a set of feasible input pairings ($F(n_x)$) can be calculated as shown below, where $n_i^j$ is the value of node $n_i \in N$ when the $j^{th}$ input vector is applied to the circuit.
\vspace{-10pt}
\begin{equation}
\label{eq:infeasibleset}
\begin{small}
\begin{aligned}
F(n_x) := \left\{ (n_y, n_w) \in N^2 \;\middle|\; \forall i,j: \left( (n_y^i,n_w^i) = (n_y^j,n_w^j) \right) \Rightarrow \left( n_x^i = n_x^j \right) \right\}
\end{aligned}
\end{small}
\end{equation}
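The determinism test above can be run directly on the probed values of Tab.~\ref{tab:probing_example}. The following sketch (data transcribed from the table; function name is ours) recovers the feasible pairs:

```python
from itertools import combinations

# Probed values from the table: columns A, B, C, X for five input vectors.
obs = {
    'A': [1, 0, 1, 0, 0],
    'B': [1, 1, 1, 0, 1],
    'C': [0, 0, 1, 1, 0],
}
X = [0, 1, 0, 0, 1]

def feasible_pairs(obs, X):
    """Keep a candidate input pair iff equal pair-values always map to the
    same output value, i.e. the gate would be deterministic."""
    feasible = []
    for (a, va), (b, vb) in combinations(obs.items(), 2):
        table, ok = {}, True
        for i, x in enumerate(X):
            key = (va[i], vb[i])
            if table.setdefault(key, x) != x:
                ok = False    # same inputs, different output: rule out
                break
        if ok:
            feasible.append((a, b))
    return feasible

print(feasible_pairs(obs, X))   # (B, C) is ruled out
```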
\subsection{Distribution of SAT variables}
In all cases, the levelization constraints are found to be necessary for the SAT attack algorithm to terminate successfully (see Sec.~\ref{sec:levelize} for details). The majority of the variables and clauses in the SAT problem are used to implement the constraints that enforce levelization.
For circuit c17, Fig.~\ref{fig:varRatio} shows the proportion of SAT variables that are used in each of the following aspects of the formulation: The function variables that help with solving gate functions, the connection variables that are created to solve the gate connections, the levelization variables that are created to enforce levelization in the circuit, the fault injection variables that are used to add fault injection capabilities to the model, and the variables related to circuit's primary inputs and outputs and constraints thereof.
\subsection{Effectiveness of fault injection and probing}
As can be seen in Tab. \ref{tab:res}, the problem is not solved within hours if probing and fault injection are not used. When probing is enabled, but fault injection is not, the algorithm converges to a solution in a timely manner. However, the solution is not unique, and in the case of the S-Box the recovered circuit does not match the structure of the target circuit that is being reverse engineered.
Therefore, probing alone doesn't fulfill our objective of finding solutions that are structurally equivalent to the original netlist on a gate-by-gate basis.
Using fault injection along with voltage probing in reverse engineering makes it possible to find a unique solution for both circuits. In the case of the S-Box circuit with fault injection and probing, approximately 800K variables and 5M clauses were generated to solve the problem. This solution is identical to the target circuit in all connections and all gate functions. Note that the formulation that includes both probing and fault injection requires more iterations (more discriminating inputs). This occurs because of the large space of fault injection tests and the need to rule out every possible circuit configuration that is not exactly the same as the target, instead of merely ruling out the configurations that are not functionally identical to the target.
\begin{table*}[htb]
\centering
\caption{Results for c17 and 4-bit S-Box circuits}
\label{tab:res}
\begin{threeparttable}
\begin{tabular}{c c ||c c c c | c c c c}
\multicolumn{2}{c||}{\multirow{2}{*}{\textbf{conditions}}} & \multicolumn{8}{c}{\textbf{results}} \\ \clineB{3-10}{3}
\multicolumn{2}{c||}{} & \multicolumn{4}{c}{\textbf{c17}} & \multicolumn{4}{c}{\textbf{S-Box}} \\ \clineB{1-10}{3}
fault injection & probed nodes & CPU time(s) & iterations & unique? & \begin{tabular}[c]{@{}l@{}}CPU time(s)\\(limited functions)\end{tabular} & CPU time(s) & iterations & unique? & \begin{tabular}[c]{@{}l@{}}CPU time(s)\\(limited functions)\end{tabular} \ \\ \hhline{--||----|----}
\xmark & \xmark & timeout\tnote{*} & - & - & timeout\tnote{*} & timeout\tnote{*} & - & - & timeout\tnote{*} \\
\xmark & \cmark & 5.990 & 12 & Yes & 6.425 & 1029 & 16 & No &485\\
\cmark & \xmark & 874.894 & 34 & Yes & 328.623 &timeout\tnote{*} & - & - & timeout\tnote{*} \\
\cmark & \cmark & 9.178 & 24 & Yes & 8.086 & 611 & 54 & Yes &438 \\ \hhline{--||----|----}
\end{tabular}
\begin{tablenotes}\footnotesize
\item[*] Timeout is considered after 16 hours.
\end{tablenotes}
\end{threeparttable}
\end{table*}
\subsection{Configuration Variables for Unknown Connections}
Since nothing about the connections of the gates is known to the attacker, multiplexers are added that are responsible for selecting which node in the circuit is connected to each input of each gate. For example, in Fig.~\ref{fig:ckt_and_model}, since the gate has output $C$, the connection multiplexers choose from the other four nodes of the circuit ($A$, $B$, $D$ and $E$) to determine which is connected to each of the gate's inputs. In a circuit with $N$ nodes, the connection multiplexers are therefore $(N-1)$-to-1 multiplexers, as they can select any other node in the circuit except for that gate's own output (node $C$). In some cases, as will be shown later, certain connections can be ruled out and the number of multiplexer inputs is reduced accordingly.
To keep track of the connectivity between gates, as will be needed later to ensure that the solver only considers acyclic networks, we define transition relation predicates for all pairs of gates. If there is a connection from output of gate $A$ (node $A$) to one of the inputs of gate $C$ (that has output node $C$), the predicate $R(A,C)$ will be 1 and otherwise it will be 0.
In Fig. \ref{fig:ckt_and_model}, predicate $R(A,C)$ is true if and only if the configuration variables for the connection multiplexer connect the output of gate $A$ to an input of gate $C$; therefore, $R(A,C)$ is true whenever $sel_1sel_0=00$ or $sel_3sel_2=00$.
The second type of multiplexer employed is for choosing the function of the gate based on the selected inputs from the connection multiplexers. The function multiplexer can be regarded as implementing the truth table of the gate function, choosing which combination of input values should result in which binary value on the gate's output. For a gate of $n$ inputs, the function multiplexer would be a $2^n$ to 1 multiplexer.
Note that our model puts no restrictions on the function of the gates.
However, if the attacker has knowledge of the gate library used, he can put restrictions on the configuration variables that determine the gate's function. For example, in Fig.~\ref{fig:ckt_and_model}, if the attacker knows that the 2-input gate could only be $NAND$ or $NOR$, then he can restrict the multiplexer's input values to ``1110'' (for a $NAND$ gate) and ``1000'' (for a $NOR$ gate) by adding clauses to the SAT problem that disallow all other combinations.
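The two multiplexer types can be sketched functionally: the connection multiplexers pick which nodes drive the gate, and the function multiplexer is the gate's truth table, indexed by the selected input values. This is an illustrative model with names of our choosing:

```python
def model_gate(nodes, sel_bits, func_bits):
    """Configurable gate model.
    nodes:     current values of all candidate driver nodes
    sel_bits:  connection-multiplexer settings (node index per gate input)
    func_bits: function-multiplexer settings, i.e. the truth table
               (2**n entries for an n-input gate)."""
    # Connection multiplexers: pick the value driving each gate input.
    vals = [nodes[s] for s in sel_bits]
    # Function multiplexer: index the truth table by the input combination.
    idx = int(''.join(str(v) for v in vals), 2)
    return func_bits[idx]

# Nodes A, B, D, E = 1, 0, 1, 1; gate reads (A, B) and implements
# NAND, i.e. truth table ``1110''.
nodes = [1, 0, 1, 1]
assert model_gate(nodes, sel_bits=[0, 1], func_bits=[1, 1, 1, 0]) == 1
```

With $2^{2^n}$ possible settings of `func_bits`, the model indeed places no restriction on the gate's function.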
\vspace{-5pt}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.78\columnwidth]{model.pdf}
\label{camcircuit}
\caption{An example of the proposed gate model. Depending on the values of the configuration variables, this model allows each gate input to be driven from any node, and allows the gate to implement any possible logic function over its inputs.}
\vspace{-10pt}
\label{fig:ckt_and_model}
\end{figure}
\section{Introduction}
The electric power grid is undergoing a period of unprecedented change. A major transition is the replacement of bulk generation based on synchronous machines by renewable generation interfaced via power electronics. This gives rise to scenarios in which either parts of the transmission grid or an islanded distribution grid may operate without conventional synchronous generation. In either case, the loss of synchronous machines poses a great challenge because today's power system operation heavily relies on their self-synchronizing dynamics, rotational inertia, and resilient controls.
The problem of synchronization of grid-forming power inverters has been widely studied in the recent literature. A grid-forming inverter is not limited to power tracking but acts as a controlled voltage source that can change its power output (thanks to storage or curtailment), and is controlled to contribute to the stability of the grid. Most of the common approaches to grid-forming control focus on droop control~\cite{MCC-DMD-RA:93,Guerrero2015,simpson2017voltage}. Other popular approaches are based on mimicking the physical characteristics and controls of synchronous machines~\cite{zhong2011synchronverters,JAD18,SDA-SJA:13} or controlling inverters to behave like virtual Li\'enard-type oscillators~\cite{johnson2014synchronization,MS-FD-BJ-SD:14b,LABT-JPH-JM:13,johnson2016synthesizing}. While strategies based on machine emulation are compatible with the legacy power system, they use a system (the inverter) with fast actuation but almost no inherent energy storage to mimic a system (the generator) with slow actuation but significant energy storage (in the rotating mass), and may also be ineffective due to the deteriorating effects of time-delays in the control loops and current limits of inverters \cite{ENTSOE16}. While some form of power curtailment or energy storage is essential to maintain grid stability, it is not clear that emulating a machine behavior is the preferable option. Virtual oscillator control (VOC) is a promising approach because it can globally synchronize an inverter-based power system. However, {the nominal power injection cannot be specified in the original VOC approach~\cite{johnson2014synchronization,MS-FD-BJ-SD:14b,LABT-JPH-JM:13,johnson2016synthesizing}, i.e., it cannot be dispatched.} Likewise, all theoretic investigations are limited to synchronization with identical angles and voltage magnitudes.
{For passive loads it can be shown that power is delivered to the loads \cite{LABT-JPH-JM:13}, but the power sharing by the inverters and their voltage magnitudes are determined by the load and network parameters.}
Finally, the authors recently proposed a dispatchable virtual oscillator control (dVOC) strategy~\cite{colombino2017global,colombino2017global2}, which relies on synchronizing harmonic oscillators through the transmission network and ensures almost global asymptotic stability of an inverter-based AC power system with respect to a desired solution of the AC power-flow equations{, and has been experimentally validated in \cite{GC+19}}.
In order to simplify the analysis, the dynamic nature of transmission lines is typically neglected in the study of power system transient stability and synchronization. In most of the aforementioned studies, an algebraic model of the transmission network is used, i.e., the relationship between currents and voltages is modeled by the admittance matrix of the network.
This approximation is justified in a traditional power network, where bulk generation is provided by synchronous machines with very slow time constants (seconds), and can be made rigorous using time-scale separation arguments~\cite{curi2017control}. However, power inverters can be controlled at much faster time-scales (milliseconds); as more and more synchronous machines are replaced by inverter-based generation, the transmission line dynamics, which are typically not accounted for in the theoretical analysis, can compromise the stability of the power network. For droop-controlled microgrids this phenomenon has been noted in~\cite{Guerrero2015,vorobev2017high}, and it can be verified experimentally for all control methods listed above. Moreover, in \cite{vorobev2017high,vorobev2017cdc} explicit and insightful bounds on the control gains are obtained via small-signal stability analysis for a steady state with zero relative angle.
In this work, we study the dVOC proposed in~\cite{colombino2017global}, which renders power inverters interconnected through an algebraic network model almost globally asymptotically stable \cite{colombino2017global2}. We provide a Lyapunov characterization of almost global stability. This technical contribution is combined with ideas inspired by singular perturbation theory \cite{PKC82,K02} to construct a Lyapunov function and an explicit analytic stability condition that guarantees almost global asymptotic stability of the full power-system including transmission line dynamics. {Broadly speaking, this stability condition makes the time-scale separation argument rigorous by quantifying how large the time-scale separation between the inverters and the network needs to be to ensure stability.
In addition to capturing the influence of the network dynamics, our stability condition is not restricted to operating points with zero relative angles, accounts for the network topology, and provides explicit bounds on the control gains and the set-points for the power injections.} In particular, the stability condition links the achievable power transfer and the maximum control gain to the connectivity of the graph of the transmission network. Moreover, we show that the only undesirable steady state of the closed loop, which corresponds to voltage collapse, is exponentially unstable. By guaranteeing almost global stability of dVOC, our results provide a theoretical foundation for using dVOC in non-standard operating conditions such as black starts and islanded operation, and alleviate the challenging problem of characterizing the region of attraction of networks of droop-controlled inverters. We use a simplified three-bus transmission system to show that our stability condition is not overly conservative and, to illustrate the results, we provide simulations of the IEEE 9-bus system.
In Section~\ref{sec.model} we introduce the problem setup. Section~\ref{sec.lyap} provides definitions and preliminary technical results. The main result is given in Section~\ref{sec.sing.pert}, Section~\ref{sec.example} presents numerical examples, and Section~\ref{sec.conclusion} provides the conclusions.
\subsection*{Notation}
The set of real numbers is denoted by ${\mathbb R}$, ${\mathbb R}_{\geq 0}$ denotes the set of nonnegative reals, and $\mathbb N$ denotes the natural numbers. Given ${\theta\in[-\pi,\pi]}$ the 2D rotation matrix is given by
\begin{align*}
R(\theta) \coloneqq \begin{bmatrix}\cos(\theta) & -\sin(\theta)\\ \sin(\theta) & \cos(\theta) \end{bmatrix} \in {\mathbb R}^{2 \times 2}.
\end{align*}
Moreover, we define the $90^\circ$ rotation matrix $J \coloneqq R(\pi/2)$ that can be interpreted as an embedding of the complex imaginary unit $\sqrt{-1}$ into ${\mathbb R}^2$. Given a matrix $A$, $A^\mathsf{T}$ denotes its transpose. We use $\|A\|$ to indicate the induced 2-norm of $A$. We write $A\succcurlyeq0$ $(A\succ0)$ to denote that $A$ is symmetric and positive semidefinite (definite). For column vectors $x\in{\mathbb R}^n$ and $y\in{\mathbb R}^m$ we use $(x,y) = [x^\mathsf{T}, y^\mathsf{T}]^\mathsf{T} \in {\mathbb R}^{n+m}$ to denote a stacked vector and $\norm{x}$ denotes the Euclidean norm. {The absolute value of a scalar $y \in {\mathbb R}$ is denoted by $|y|$.} Furthermore, $I_n$ denotes the identity matrix of dimension $n$, and $\otimes$ denotes the Kronecker product. Matrices of zeros of dimension $n \times m$ are denoted by $\mathbbl{0}_{n\times m}$ and $\mathbbl{0}_{n}$ denotes the column vector of zeros of length $n$. We use $\norm{x}_{\mathcal C} \coloneqq \min_{z \in \mathcal{C}} \norm{z-x}$ to denote the distance of a point $x$ to a set $\mathcal{C}$. {Moreover, the cardinality of a set $\mathcal E$ is denoted by $|\mathcal E|$.} We use $\varphi_f(t,x_0)$ to denote the solution of $\frac{\diff}{\diff t} x = f(x)$ at time $t \geq 0$ starting from the initial condition $x(0)=x_0$ at time $t_0=0$.
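The rotation-matrix conventions above are easy to sanity-check; the following NumPy snippet (illustrative only) confirms that $J = R(\pi/2)$ squares to $-I_2$, mirroring $\sqrt{-1}$, and that rotations compose additively:

```python
import numpy as np

def R(theta):
    """2D rotation matrix as defined above."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

J = R(np.pi / 2)                              # 90-degree rotation
print(np.allclose(J @ J, -np.eye(2)))         # True: J^2 = -I, like sqrt(-1)
print(np.allclose(R(0.3) @ R(0.5), R(0.8)))   # True: R(a) R(b) = R(a + b)
```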
\section{Modeling and control of an inverter-based power network}\label{sec.model}
In this section, we introduce the model of the inverter-based power grid that will be studied throughout the paper.
\subsection{Inverter-based power grid}
We study the control of $N$ three-phase inverters interconnected by a resistive-inductive network. All electrical quantities in the network are assumed to be balanced. This allows us to work in $\alpha\beta$ coordinates obtained by applying the well-known Clarke transformation to the three-phase variables~\cite{clarke1943circuit}. We model each inverter as a controllable voltage source. We model the transmission network as a simple graph (i.e., an undirected graph containing no self-loops or multiple edges) denoted by $\mathcal G~=~(\mathcal N, \mathcal E,\mathcal W)$, where $\mathcal N = \{1,..., N\}$ is the set of nodes corresponding to the inverters, $\mathcal W$ is the set of edge weights, and $\mathcal E \subseteq (\mathcal N \times \mathcal N) \setminus \cup_{i \in \mathcal N} (i,i)$, with $|\mathcal E|=M$, is the set of edges corresponding to the transmission lines. To each inverter we associate a terminal voltage $\underline{v}_k\in{\mathbb R}^2$, that can be fully controlled, and an output current $\underline{i}_{o,k}\in{\mathbb R}^2$ flowing out of the inverter and into the network. To each transmission line, we associate a current $\underline{i}_l \in {\mathbb R}^2$, an impedance matrix $Z_l \coloneqq I_2 r_{l} + \omega_0 J \ell_{l}$, and an admittance matrix $Y_l \coloneqq Z^{-1}_l$ with
\begin{align}\label{eq.weights}
\norm{Y_l} = \frac{1}{\sqrt{r_l^2+\omega_0^2\ell_l^2}},
\end{align}
where $r_l \in \mathbb{R}_{>0}$ and $\ell_l \in \mathbb{R}_{>0}$ are the resistance and inductance of the line $l \in \{1,\ldots,M\}$ respectively, and $\omega_0 \in \mathbb{R}_{\geq 0}$ is the nominal operating frequency of the power system. Moreover, we define the edge weights $\mathcal W = \{\norm{Y_l}\}_{l=1}^M$ of the graph. The oriented incidence matrix of the graph is denoted by $B \in \{-1,0,1\}^{N \times M}$ and, duplicating each edge for the $\alpha$ and $\beta$ components, we define $\mathcal B \coloneqq B\otimes I_2$. The dynamics of the transmission line currents $\underline{i} \coloneqq (\underline{i}_1,\ldots,\underline{i}_M) \in {\mathbb R}^{2M}$ are given by
\begin{align}\label{eq.network.equations}
L_T \frac{\diff}{\diff t} \underline{i} & = -R_T \underline{i} + \mathcal B^\mathsf{T} \underline{v},
\end{align}
where $R_T\coloneqq \diag(\{r_l\}_{l=1}^M) \otimes I_2$ is the matrix of line resistances $r_l \in \mathbb{R}_{>0}$, $L_T \coloneqq \diag(\{\ell_l\}_{l=1}^M) \otimes I_2$ is the matrix of line inductances $\ell_l \in \mathbb{R}_{>0}$, and $\underline{v} \coloneqq (\underline{v}_1,\ldots,\underline{v}_N) \in {\mathbb R}^{2N}$ collects the terminal voltages of all inverters. We define the vector of inverter output currents as $\underline{i}_o \coloneqq (\underline{i}_{o,1},\ldots,\underline{i}_{o,N}) \in {\mathbb R}^{2N}$ given by $\underline{i}_o = \mathcal B \, \underline{i}$. For convenience we use $\ell_{jk}$, $r_{jk}$, and $Y_{jk}$ to denote the inductance, resistance, and admittance of the line connecting node $j$ and node $k$.
\subsection{Quasi-steady-state network model}
To obtain the \emph{quasi-steady-state} approximation (also known as \emph{phasor approximation}) of~\eqref{eq.network.equations} we perform the following change of variables to a rotating reference frame:
\begin{align}\label{eq.rotframe}
v = \mathcal R(\omega_0 t) \underline{v}, \quad i_o = \mathcal R(\omega_0 t) \underline{i}_o, \quad i = I_M \otimes R(\omega_0 t) \underline{i},
\end{align}
where $\mathcal R(\theta) \coloneqq I_N \otimes R(\theta)$. Using $\mathcal J_M \coloneqq I_M \otimes J$ and the impedance matrix $Z_T \coloneqq R_T + \omega_0 \mathcal J_M L_T$, the dynamics~\eqref{eq.network.equations} in the rotating frame become
\begin{align}\label{eq.network.rot}
L_T \frac{\diff}{\diff t} i = -Z_T i + \mathcal B^\mathsf{T} v.
\end{align}
The quasi-steady-state model is obtained by considering the steady-state map of the exponentially stable dynamics \eqref{eq.network.rot}.
\begin{definition}{\bf(Quasi-steady-state network model)}\label{def:qs}\\
The quasi-steady-state model of the transmission line currents $i$ and output currents $i_o$ is given by the steady-state map $i^s(v) \coloneqq Z^{-1}_T \mathcal B^\mathsf{T} v$ of \eqref{eq.network.rot} and $i^s_o(v) \coloneqq
\mathcal Y v$ with $\mathcal Y \coloneqq \mathcal B Z^{-1}_T \mathcal B^\mathsf{T}$.
\end{definition}
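Definition~\ref{def:qs} can be illustrated numerically. The sketch below (NumPy; a hypothetical three-bus ring with assumed line parameters, not taken from the paper) integrates the rotating-frame line dynamics~\eqref{eq.network.rot} for a fixed voltage vector and confirms that the current settles at the steady-state map $i^s(v) = Z_T^{-1}\mathcal B^\mathsf{T} v$:

```python
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]])
omega0 = 2 * np.pi * 50                       # nominal frequency (rad/s), assumed

# Hypothetical 3-bus ring: edges (1,2), (2,3), (3,1), arbitrary orientation.
B = np.array([[ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0],
              [ 0.0, -1.0,  1.0]])            # N x M incidence matrix
calB = np.kron(B, np.eye(2))                  # duplicate edges for alpha/beta

r = np.array([0.1, 0.1, 0.1])                 # line resistances (Ohm), assumed
ell = np.array([1e-3, 1e-3, 1e-3])            # line inductances (H), assumed
L_T = np.kron(np.diag(ell), np.eye(2))
Z_T = np.kron(np.diag(r), np.eye(2)) + omega0 * np.kron(np.diag(ell), J)

# Fixed voltages in the rotating frame (one 2-vector per node), assumed values.
v = np.array([1.0, 0.0, 0.98, 0.05, 1.01, -0.03])

i_s = np.linalg.solve(Z_T, calB.T @ v)        # steady-state map i^s(v)

# Forward-Euler integration of L_T di/dt = -Z_T i + B^T v from zero current.
i = np.zeros(6)
A = np.linalg.solve(L_T, -Z_T)                # = -L_T^{-1} Z_T
b = np.linalg.solve(L_T, calB.T @ v)
dt = 5e-4
for _ in range(2000):
    i = i + dt * (A @ i + b)

print(np.linalg.norm(i - i_s))                # ~0: the trajectory settles at i^s(v)
```

The exponential stability of \eqref{eq.network.rot} is what makes the quasi-steady-state substitution $i = i^s(v)$ plausible; the rest of the paper quantifies when it is actually admissible.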
In conventional power systems the approximation $i = i^s(v)$ is typically justified due to the pronounced time-scale separation between the dynamics of the transmission lines and the synchronous machine dynamics. {Therefore, the dynamic nature of the transmission lines is typically neglected a priori in transient stability analysis of power systems.} However, for inverter-based power systems the electromagnetic transients of the lines have a significant influence on the stability boundaries, and the approximation is no longer valid \cite{vorobev2017high,vorobev2017cdc}. {In this work, we make the time-scale separation argument rigorous and explicit by quantifying how large the time-scale separation needs to be to ensure stability. In order to prove the main result of this manuscript, we require the following standing assumption.}
\begin{assumption}{\bf(Uniform inductance-resistance ratio)}\label{ass.constant.ratio}\\
The ratio between the inductance and resistance of every transmission line is constant, i.e., for all $(j,k) \in\mathcal E$ it holds that $\frac{\ell_{jk}}{r_{jk}} = \rho \in {\mathbb R}_{>0}$.
\end{assumption}
{Assumption~\ref{ass.constant.ratio} is typically approximately satisfied for transmission lines at the same voltage level, but not satisfied across different voltage levels. We note that the main salient features uncovered by our theoretical analysis can still be observed even when Assumption~\ref{ass.constant.ratio} only holds approximately (see Section \ref{sec:example:ieee9}).}
Finally, let us define the angle $\kappa \coloneqq \tan^{-1}( \rho \omega_0)$, the graph Laplacian $L \coloneqq B\, \diag(\{\norm{Y_l}\}_{l=1}^M) B^\mathsf{T}$, and the \emph{extended} Laplacian $\mathcal L \coloneqq L \otimes I_2$. Note that under Assumption \ref{ass.constant.ratio}, it can be verified that $\mathcal R(\kappa) i^s_o(v) = \mathcal R(\kappa) \mathcal Y v = \mathcal L v$.
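The identity $\mathcal R(\kappa)\,\mathcal Y = \mathcal L$ under Assumption~\ref{ass.constant.ratio} can be verified numerically; the following sketch (hypothetical three-bus ring with assumed line parameters and a uniform ratio $\rho$) builds both sides explicitly:

```python
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]])
omega0 = 2 * np.pi * 50

# Hypothetical 3-bus ring; uniform ratio ell/r = rho (Assumption 1).
B = np.array([[ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0],
              [ 0.0, -1.0,  1.0]])
r = np.array([0.1, 0.2, 0.15])                # assumed resistances (Ohm)
rho = 0.01                                    # uniform ell/r ratio (s), assumed
ell = rho * r

calB = np.kron(B, np.eye(2))
Z_T = np.kron(np.diag(r), np.eye(2)) + omega0 * np.kron(np.diag(ell), J)
calY = calB @ np.linalg.solve(Z_T, calB.T)    # Y = B Z_T^{-1} B^T

kappa = np.arctan(rho * omega0)
R_kappa = np.array([[np.cos(kappa), -np.sin(kappa)],
                    [np.sin(kappa),  np.cos(kappa)]])
calR_kappa = np.kron(np.eye(3), R_kappa)

w = 1.0 / np.sqrt(r**2 + (omega0 * ell)**2)   # edge weights ||Y_l||
L = B @ np.diag(w) @ B.T                      # graph Laplacian
calL = np.kron(L, np.eye(2))

print(np.abs(calR_kappa @ calY - calL).max())  # ~0: R(kappa) Y = L (x) I_2
```

The key mechanism is that, for each line, $R(\kappa) Z_l^{-1} = \norm{Y_l} I_2$ when all lines share the same angle $\kappa$; a non-uniform ratio breaks this alignment, which is why Assumption~\ref{ass.constant.ratio} is needed.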
\subsection{Control objectives}\label{sec.control.obj}
In this section we formally specify the \emph{system-level} control objectives for an inverter-based AC power system that need to be achieved with \emph{local} controllers. We begin by defining instantaneous active and reactive power.
\begin{definition}{\bf(Instantaneous Power)}\label{def:IP}\\
Given the voltage $\underline{v}_k$ at node $k \in \mathcal N$ and the output current $\underline{i}_{o,k}$, we define the \emph{instantaneous active power} $p_k \coloneqq \underline{v}_{k}^\mathsf{T} \underline{i}_{o,k} \in \mathbb{R}$ and the \emph{instantaneous reactive power} $q_k \coloneqq \underline{v}_k^\mathsf{T} J \underline{i}_{o,k} \in \mathbb{R}$. Moreover, for all $(j,k) \in \mathcal E$, we define the \emph{instantaneous} active and reactive \emph{branch powers} $p_{jk} \!\coloneqq \underline{v}^\mathsf{T}_k \underline{i}_{jk}$ and $q_{jk} \!\coloneqq \underline{v}^\mathsf{T}_k J \underline{i}_{jk}$.
\end{definition}
Active and reactive power injections cannot be prescribed arbitrarily to each inverter in a network but need to be consistent with the power-flow equations~\cite{kundur1994power}. To this end, we introduce steady-state voltage angles $\theta^\star_{k1}$ relative to the node with index $k=1$, and we define the relative steady-state angles $\theta^\star_{jk} \coloneqq \theta^\star_{j1} - \theta^\star_{k1}$ for all $(j,k) \in \mathcal N \times \mathcal N$.
\begin{condition}\textbf{(Consistent set-points)}\label{cond.consistent}\\
The set-points $p_k^\star \in \mathbb{R}$, $q_k^\star \in \mathbb{R}$, $v_k^\star \in \mathbb{R}_{>0}$ for active power, reactive power, and voltage magnitude respectively, are consistent with the power flow equations, i.e., for all $(j,k) \in \mathcal E$ there exist relative angles ${\theta^\star_{jk}} \in {(-\pi,\pi]}$ and steady-state \emph{branch powers} $p^\star_{jk} \in \mathbb{R}$ and $q^\star_{jk} \in \mathbb{R}$ given by
\begin{align*}
p^\star_{jk} \!\!&\coloneqq\! \norm{Y_{jk}}^2 \!\big(v^{\star2}_k r_{jk} - v^{\star}_k v^{\star}_j
( r_{jk}\cos(\theta_{jk}^\star )\!+\omega_0\ell_{jk} \sin(\theta_{jk}^\star ))\big),\\
q^\star_{jk} \!\!&\coloneqq\! \norm{Y_{jk}}^2 \!\big( v_k^{\star2} \omega_0\ell_{jk} \!-\! v_k^{\star}v_j^{\star}( \omega_0\ell_{jk}\!\cos(\theta_{jk}^\star )\!-r_{jk} \sin(\theta_{jk}^\star ))\big),
\end{align*}
such that $p^\star_k = \sum_{(j,k)\in\mathcal E} p^\star_{jk}$ and $q^\star_k = \sum_{(j,k)\in\mathcal E} q^\star_{jk}$ holds for all $k \in \mathcal N$.
\end{condition}
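The closed-form branch powers in Condition~\ref{cond.consistent} can be cross-checked against Definition~\ref{def:IP} for a single line; in the sketch below (assumed line parameters and relative angle, with the branch current oriented out of node $k$) the closed-form expressions agree with $\underline v_k^\mathsf{T}\underline i$ and $\underline v_k^\mathsf{T} J \underline i$:

```python
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]])
omega0 = 2 * np.pi * 50

# One line (j,k) with assumed parameters and an assumed relative angle.
r_jk, ell_jk = 0.1, 1e-3
vs_j, vs_k = 1.02, 0.98           # voltage magnitude set-points
theta_jk = 0.2                    # relative steady-state angle theta*_j - theta*_k

Y2 = 1.0 / (r_jk**2 + (omega0 * ell_jk)**2)          # ||Y_jk||^2

# Closed-form branch powers from Condition 1.
p_jk = Y2 * (vs_k**2 * r_jk
             - vs_k * vs_j * (r_jk * np.cos(theta_jk)
                              + omega0 * ell_jk * np.sin(theta_jk)))
q_jk = Y2 * (vs_k**2 * omega0 * ell_jk
             - vs_k * vs_j * (omega0 * ell_jk * np.cos(theta_jk)
                              - r_jk * np.sin(theta_jk)))

# Direct computation from the voltage phasors and the line current.
v_j = vs_j * np.array([np.cos(theta_jk), np.sin(theta_jk)])
v_k = vs_k * np.array([1.0, 0.0])
Z = r_jk * np.eye(2) + omega0 * ell_jk * J
i_line = np.linalg.solve(Z, v_k - v_j)               # current out of node k

print(abs(v_k @ i_line - p_jk), abs(v_k @ J @ i_line - q_jk))   # ~0, ~0
```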
Given a set of \emph{consistent} power, voltage, and frequency set-points (according to Condition~\ref{cond.consistent}), the objective of this paper is to design a decentralized controller of the form
\begin{align}\label{eq.voltage.dyn.ol}
\frac{\diff}{\diff t} \underline{v}_k & =u_k(\underline{v}_k,\underline{i}_{o,k}),
\end{align}
that induces the following steady-state behavior:
\begin{itemize}
\item \textbf{Synchronous frequency:} Given a desired synchronous frequency $\omega_0 \in {\mathbb R}_{\ge0}$, at steady state it holds that:
\begin{align}\label{eq.obj.sync}
\frac{\diff}{\diff t} {\underline{v}_k} = \omega_0J\,\underline{v}_k,\!\quad {k}~\!\in~\mathcal N;
\end{align}
\item \textbf{Voltage magnitude:} Given voltage magnitude set-points $v^\star_k \in {\mathbb R}_{>0}$, at steady state it holds that:
\begin{align}\label{eq.obj.norm}
\norm{\underline{v}_k}= v^\star_k, \quad {k}~\!\in~\mathcal N;
\end{align}
\item \textbf{Steady-state currents:} it holds that $\underline{i} = i^s(\underline{v})$;
\item \textbf{Power injection:} At steady state, each inverter injects the prescribed active and reactive power, i.e.,
\begin{align}\label{eq.obj.power}
\left\{\begin{array}{l}
\underline{v}_k^\mathsf{T} \underline{i}_{o,k} =p_k^\star\\
\underline{v}_k^\mathsf{T} J \underline{i}_{o,k}=q_k^\star
\end{array},\right.
\quad {k}~\!\in~\mathcal N.
\end{align}
\end{itemize}
Equation~\eqref{eq.obj.sync} specifies that, at steady state, all voltages in the power network evolve as sinusoidal signals with frequency $\omega_0$. Equations~\eqref{eq.obj.norm} and~\eqref{eq.obj.power} specify the voltage magnitude and the power injection at every node. The \emph{local nonlinear} specification \eqref{eq.obj.power} on the steady-state power injection can be equivalently expressed as a \emph{non-local linear} specification on the voltages as follows {(cf.~\eqref{eq.powers.ref} and \eqref{eq.powang})}:
\begin{itemize}
\item \textbf{Phase locking:} Given relative angles {$\theta^\star_{k1} \in [-\frac{\pi}{2},\frac{\pi}{2}]$}, at steady state it holds that:
\begin{align}\label{eq.obj.theta}
\frac{\underline{v}_k}{v^\star_k} - R({\theta_{k1}^\star})\, \frac{\underline{v}_1}{v^\star_1} = 0, \quad k~\!\in~\mathcal N\backslash\{1\}.
\end{align}
\end{itemize}
\subsection{Dispatchable virtual oscillator control}\label{sec.control.cl.design}
For every inverter $k \in \mathcal N$, we define the dispatchable virtual oscillator control (dVOC)
\begin{align}\label{eq.control.law}
u_k \coloneqq \omega_0 J \underline{v}_k + \eta\left[K_k \underline{v}_k - R(\kappa)\underline{i}_{o,k} + \alpha \Phi_k(\underline{v}_k)\underline{v}_k\right],
\end{align}
with gains $\eta \in \mathbb{R}_{>0}$, $\alpha \in \mathbb{R}_{>0}$, and
\begin{align*}
K_k = \frac{1}{v_k^{\star2}} R(\kappa) \begin{bmatrix} p_k^\star & q_k^\star\\ -q_k^\star & p_k^\star
\end{bmatrix}, \quad
\Phi_k(\underline{v}_k) \coloneqq \frac{v_k^{\star2}-\norm{\underline{v}_k}^2}{v_k^{\star2}} I_2.
\end{align*}
Note that the term $\omega_0 J \underline{v}_k$ induces a harmonic oscillation of $\underline{v}_k$ at the nominal frequency $\omega_0$. Moreover, $\Phi_k(\underline{v}_k)\underline{v}_k$ can be interpreted as a voltage regulator, i.e.,
depending on the sign of the normalized quadratic voltage error $(v_k^{\star2}-\norm{\underline{v}_k}^2)/v_k^{\star2}$ the voltage vector $\underline{v}_k$ is scaled up or down. Finally, the term $K_k \underline{v}_k - R(\kappa)\underline{i}_{o,k}$ can be interpreted either in terms of {tracking power set-points (i.e., \eqref{eq.obj.power}), or in terms of phase synchronization (i.e., \eqref{eq.obj.theta}).} To establish the connection to {tracking power set-points}, let $e_{p,k}$ and $e_{q,k}$ denote the weighted difference between the actual and desired power injections defined by ${v^{\star2}_k} e_{p,k} \coloneqq \norm{\underline{v}_k}^2 p^\star_k - {v^{\star2}_k} p_k$ and ${v^{\star2}_k} e_{q,k} \coloneqq \norm{\underline{v}_k}^2 q^\star_k - {v^{\star2}_k} q_k$. A straightforward algebraic manipulation reveals that
\begin{align}\label{eq.pspecs.rot}
\frac{1}{\norm{\underline{v}_k}^2} \begin{bmatrix} \underline{v}_k & J \underline{v}_k \end{bmatrix}
R(\kappa) \begin{bmatrix} e_{p,k} \\ -e_{q,k} \end{bmatrix}\! =
K_k \underline{v}_k - R(\kappa)\underline{i}_{o,k},
\end{align}
i.e., normalizing the vector $(e_{p,k},-e_{q,k})$ and transforming it into a rotating frame attached to $\underline{v}_k$ results in the synchronizing dVOC term. In particular, in a purely inductive grid it holds that $R(\kappa) = J$ and $e_{p,k}$ corresponds to a component that is tangential to $\underline{v}_k$, i.e., its rotational speed, and $e_{q,k}$ corresponds to a component that is radial to $\underline{v}_k$, i.e., its change in magnitude. In other words, in this case, dVOC resembles standard droop control. For networks which are not purely inductive, the rotation $R(\kappa)$ results in a mixed nonlinear droop behavior.
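The identity \eqref{eq.pspecs.rot} is easy to confirm numerically for arbitrary $\underline{v}_k$ and $\underline{i}_{o,k}$; the following sketch (hypothetical set-points and angle $\kappa$) evaluates both sides:

```python
import numpy as np

rng = np.random.default_rng(0)
J = np.array([[0.0, -1.0], [1.0, 0.0]])

# Assumed set-points and network angle kappa (hypothetical values).
v_star, p_star, q_star, kappa = 1.0, 0.6, -0.2, 0.4
R_kappa = np.array([[np.cos(kappa), -np.sin(kappa)],
                    [np.sin(kappa),  np.cos(kappa)]])
K = R_kappa @ np.array([[p_star, q_star], [-q_star, p_star]]) / v_star**2

# Random voltage and output current for one inverter.
v, i_o = rng.normal(size=2), rng.normal(size=2)
p, q = v @ i_o, v @ J @ i_o
e_p = (v @ v) * p_star / v_star**2 - p        # weighted active-power error
e_q = (v @ v) * q_star / v_star**2 - q        # weighted reactive-power error

# [v, Jv] has columns v and Jv; both sides of (eq.pspecs.rot):
lhs = np.column_stack([v, J @ v]) @ R_kappa @ np.array([e_p, -e_q]) / (v @ v)
rhs = K @ v - R_kappa @ i_o
print(np.abs(lhs - rhs).max())                # ~0: the identity holds
```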
By construction of the control law~\eqref{eq.control.law}, if $p_k=p^\star_k$, $q_k = q_k^\star$, and $\|\underline v_k\|=v_k^\star$, then $u_k = \omega_0 J \underline{v}_k$, and all voltage vectors rotate synchronously at frequency $\omega_0$.
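A minimal implementation of the local control law \eqref{eq.control.law} makes this explicit; the sketch below (hypothetical parameter values) evaluates $u_k$ for one inverter and confirms that the bracket vanishes at a consistent operating point:

```python
import numpy as np

def dvoc_input(v, i_o, p_star, q_star, v_star, kappa, eta, alpha, omega0):
    """One evaluation of the dVOC law (eq.control.law) for a single inverter;
    v and i_o are the local voltage and output current as 2-vectors."""
    J = np.array([[0.0, -1.0], [1.0, 0.0]])
    R_kappa = np.array([[np.cos(kappa), -np.sin(kappa)],
                        [np.sin(kappa),  np.cos(kappa)]])
    K = R_kappa @ np.array([[p_star, q_star], [-q_star, p_star]]) / v_star**2
    Phi = (v_star**2 - v @ v) / v_star**2 * np.eye(2)
    return omega0 * (J @ v) + eta * (K @ v - R_kappa @ i_o + alpha * Phi @ v)

# At a point where p = p*, q = q*, and ||v|| = v*, the bracket vanishes and
# the input reduces to the harmonic term omega0 * J v.
omega0 = 2 * np.pi * 50
v_star, p_star, q_star = 1.0, 0.5, 0.1
v = np.array([v_star, 0.0])
i_o = np.array([p_star / v_star, -q_star / v_star])   # yields p = p*, q = q*
u = dvoc_input(v, i_o, p_star, q_star, v_star,
               kappa=0.3, eta=10.0, alpha=1.0, omega0=omega0)
print(u - omega0 * np.array([0.0, 1.0]))              # ~[0, 0]: u = omega0 J v
```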
{To establish the connection to phase synchronization (i.e., \eqref{eq.obj.theta}), let $e_\theta(\underline{v}) \coloneqq (e_{\theta,1}(\underline{v}),\ldots,e_{\theta,N}(\underline{v}))$ denote the vector of weighted phase errors defined by
\begin{align}\label{eq.distrib.feedback}
e_{\theta,k}(\underline{v}) \coloneqq \sum\nolimits_{(j,k)\in\mathcal E} \norm{Y_{jk}} (\underline{v}_j - \tfrac{v^\star_j}{v^\star_k}R(\theta_{jk}^\star) \underline{v}_k).
\end{align}
Under the \emph{quasi-steady-state} approximation $\underline{i}_{o,k} = \underline{i}^s_o(\underline{v}) = i^s_o(\underline{v})$ the \emph{local} feedback in~\eqref{eq.control.law} is identical to a \emph{distributed} synchronizing feedback of the weighted phase errors~\eqref{eq.distrib.feedback}. This is made precise in the following proposition obtained by combining Proposition 1 and Proposition 2 of \cite{colombino2017global}.
\begin{proposition}{\bf(Synchronizing feedback)}\label{prop.sync}\\
Consider set-points $p^\star_k$, $q^\star_k$, $v^\star_k$, and steady-state angles $\theta^\star_{jk}$ that satisfy Condition \ref{cond.consistent}. It holds that $K_k\underline{v}_k - R(\kappa)\underline{i}^s_{o,k} = e_{\theta,k}(\underline{v})$.
\end{proposition}
The proof is given in the Appendix.
Proposition \ref{prop.sync} highlights how dVOC infers information on the phase difference of the voltages $\underline{v}_k$ from the currents $i_o(\underline{v})$.} In~\cite{colombino2017global2} the authors use this fact to prove stability of the controller~\eqref{eq.control.law} under the quasi-steady-state approximation $\underline{i}_o = i^s_o(\underline{v})$. However, the line dynamics~\eqref{eq.network.equations} ``delay" the propagation of this information and give rise to stability concerns that are the main motivation for this work.
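Proposition~\ref{prop.sync} can also be checked numerically by constructing consistent set-points directly from a target voltage profile (so that Condition~\ref{cond.consistent} holds by construction); the sketch below (hypothetical three-bus ring satisfying Assumption~\ref{ass.constant.ratio}) evaluates both sides of the identity at a randomly chosen voltage vector:

```python
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]])
def R(t): return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

omega0 = 2 * np.pi * 50
B = np.array([[ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0],
              [ 0.0, -1.0,  1.0]])            # 3-bus ring
r = np.array([0.1, 0.2, 0.15])                # assumed line resistances
rho = 0.01                                    # uniform ell/r ratio (Assumption 1)
ell = rho * r
calB = np.kron(B, np.eye(2))
Z_T = np.kron(np.diag(r), np.eye(2)) + omega0 * np.kron(np.diag(ell), J)
calY = calB @ np.linalg.solve(Z_T, calB.T)
kappa = np.arctan(rho * omega0)
w = 1.0 / np.sqrt(r**2 + (omega0 * ell)**2)   # edge weights ||Y_l||

# Consistent set-points by construction: fix a target voltage profile and
# read p*_k, q*_k off the resulting steady-state output currents.
v_star = np.array([1.0, 1.02, 0.98])
theta = np.array([0.0, 0.1, -0.15])           # angles theta*_{k1}
v_hat = np.concatenate([v_star[k] * R(theta[k]) @ np.array([1.0, 0.0])
                        for k in range(3)])
i_o_hat = calY @ v_hat
K = []
for k in range(3):
    vk, ik = v_hat[2*k:2*k+2], i_o_hat[2*k:2*k+2]
    p_s, q_s = vk @ ik, vk @ J @ ik
    K.append(R(kappa) @ np.array([[p_s, q_s], [-q_s, p_s]]) / v_star[k]**2)

# Proposition 1 at an arbitrary (non-equilibrium) voltage v.
edge_w = {(0, 1): w[0], (1, 2): w[1], (2, 0): w[2]}
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
wjk = lambda j, k: edge_w.get((j, k), edge_w.get((k, j)))

v = np.random.default_rng(1).normal(size=6)
err = 0.0
for k in range(3):
    lhs = K[k] @ v[2*k:2*k+2] - R(kappa) @ (calY @ v)[2*k:2*k+2]
    e_th = sum(wjk(j, k) * (v[2*j:2*j+2]
               - v_star[j] / v_star[k] * R(theta[j] - theta[k]) @ v[2*k:2*k+2])
               for j in nbrs[k])
    err = max(err, np.abs(lhs - e_th).max())
print(err)   # ~0: K_k v_k - R(kappa) i_o^s equals the phase-error feedback
```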
\section{Almost global asymptotic stability of set-valued control specifications}\label{sec.lyap}
In order to state the main result of the paper, we require the following technical definitions and preliminary results, which are used in Section~\ref{sec.sing.pert} to analyze the stability properties of the inverter-based power system.
We begin by defining almost global asymptotic stability (see \cite{A04}) with respect to a set $\mathcal C$.
\begin{definition}{\bf(Almost global asymptotic stability)}\label{def:ags}
A dynamic system $\frac{\diff}{\diff t} x = f(x)$ is called almost globally asymptotically stable with respect to a compact set $\mathcal C$ if
\begin{enumerate}[label=\color{blue}(\roman*)]
\item it is almost globally attractive with respect to $\mathcal C$, i.e.,
\begin{align}
\lim_{t\to\infty} \norm{\varphi_f(t,x_0)}_{\mathcal C} = 0
\end{align}
holds for all $x_0 \notin \mathcal Z$, where $\mathcal Z$ is a set of zero Lebesgue measure,
\item it is Lyapunov stable with respect to $\mathcal C$, i.e., for every $\varepsilon \in \mathbb{R}_{>0}$ there exists $\delta \in \mathbb{R}_{>0}$ such that
\begin{align}\label{eq:lyapstab}
\norm{x_0}_{\mathcal C} < \delta \implies \norm{\varphi_f(t,x_0)}_{\mathcal C} < \varepsilon, \qquad \forall t \geq 0.
\end{align}
\end{enumerate}
\end{definition}
Moreover, we define two classes of comparison functions which are used to establish stability properties of the system.
\begin{definition}{\bf(Comparison functions)}
A function $\chi_c:\mathbb{R}_{\geq0} \to \mathbb{R}_{\geq0}$ is of class $\mathscr{K}$ if it is continuous,
strictly increasing and $\chi_c(0)=0$; it is
of class $\mathscr{K}_{\infty}$ if it is a $\mathscr{K}$-function and $\chi_c(s)\to\infty$ as $s\to\infty$.
\end{definition}
The following theorem provides a Lyapunov function characterization of almost global asymptotic stability.
\begin{theorem}{\bf(Lyapunov function)}\label{thm.AGAS}
Consider a Lipschitz continuous function $f: \mathbb{R}^n \to \mathbb{R}^n$, a compact set $\mathcal C \subset \mathbb{R}^n$, and an invariant set $\mathcal U \subset \mathbb{R}^n$ {(}i.e., $\varphi_f(t,x_0) \in \mathcal U$ for all $t \in {\mathbb R}_{>0}$ and all $x_0 \in \mathcal U${)}, such that $\mathcal C \cap \mathcal U = \emptyset$. Moreover, consider a continuously differentiable function $\mathcal V: \mathbb{R}^n \to \mathbb{R}_{\geq 0}$ and comparison functions $\chi_1, \chi_2 \in \mathscr{K}_\infty$ and $\chi_3 \in \mathscr{K}$ such that
\begin{align*}
\chi_1(\norm{x}_{\mathcal{C}}) \leq \mathcal V(x) &\leq \chi_2(\norm{x}_{\mathcal{C}})\\
\frac{\diff}{\diff t} \mathcal V(x) \coloneqq \frac{\partial \mathcal V}{\partial x} f(x) &\leq -\chi_3(\norm{x}_{\mathcal C \cup \, {\mathcal U}})
\end{align*}
holds for all $x \in \mathbb{R}^n$. Moreover, let
\begin{align*}
\mathcal Z_{\mathcal U} \coloneqq \{ x_0 \in \mathbb{R}^n \vert \lim\nolimits_{t\to\infty} \norm{\varphi_{f}(t,x_0)}_{\mathcal U} = 0 \}
\end{align*}
denote the region of attraction of $\mathcal U$. If $\mathcal Z_{\mathcal U}$ has zero Lebesgue measure, the dynamics $\frac{\diff}{\diff t} x = f(x)$ are almost globally asymptotically stable with respect to $\mathcal{C}$.
\end{theorem}
The proof is given in the Appendix.
\section{Stability analysis of the inverter-based AC power system}\label{sec.sing.pert}
In Section~\ref{sec.model} we introduced the control objective and proposed a control law that admits a fully decentralized implementation. We will now analyze the closed-loop system and provide sufficient conditions for stability. To do so, we will use ideas from singular perturbation analysis to verify the assumptions of Theorem~\ref{thm.AGAS} for the multi-inverter power system. This allows us to prove the main result of the paper: an explicit bound for the control gains and set-points that ensures that the steady-state behavior given by~\eqref{eq.obj.sync}-\eqref{eq.obj.power} is almost globally asymptotically stable according to Definition~\ref{def:ags}.
\subsection{Dynamics and control objectives in a rotating frame}
In order to simplify the analysis, it is convenient to perform the change of variables \eqref{eq.rotframe} to a rotating reference frame. Defining $\mathcal K \coloneqq\diag(\{K_k\}_{k=1}^N)$ and $\Phi(\underline{v}) \coloneqq\diag\left(\{\Phi_k(\underline{v}_k)\}_{k=1}^N\right)$ the combination of \eqref{eq.voltage.dyn.ol}, dVOC~\eqref{eq.control.law}, and the line dynamics~\eqref{eq.network.equations} in the new coordinates becomes
\begin{subequations}\label{eq.closed.loop.rot}
\begin{align}
\!\!\frac{\diff}{\diff t} v & = \eta(\mathcal K v - \mathcal R(\kappa) \mathcal B\,i + \alpha \Phi(v)v)=: f_v(v,i), \label{eq.voltage.cl.rot}\\
\!\!\frac{\diff}{\diff t} i & = L_T^{-1}\left(-Z_T i + \mathcal B^\mathsf{T} v\right) =: f_i(v,i). \label{eq.lines.cl.rot}\!
\end{align}
\end{subequations}
Moreover, we let $x \coloneqq (v,i) \in {\mathbb R}^n$ with $n = 2N+2M$, $f(x) \coloneqq (f_v(v,i),f_i(v,i))$, and denote the dynamics of the overall system by $\frac{\diff}{\diff t} x = f(x)$.
To formalize the control objectives \eqref{eq.obj.sync}-\eqref{eq.obj.theta}, we define the sets
\begin{align*}
\mathcal S &= \left\{v \in {\mathbb R}^{2N} \left\vert\; \frac{v_k}{v^\star_k}= R(\theta_{k1}^\star)\,\frac{v_1}{v^\star_1},\; \forall k \in \mathcal N \setminus \{1\} \right.\right\},\\
\mathcal A &= \left\{v \in {\mathbb R}^{2N} \left\vert\; \norm{v_k} = v_k^\star,\; \forall k \in \mathcal N \right.\right\},
\end{align*}
as well as the target set
\begin{align}\label{eq.T}
\mathcal T\coloneqq \left\{x \in {\mathbb R}^{n} \left\vert\; x = (v,i),~ v \in \mathcal S\cap\mathcal A,~ i = i^s(v) \right.\right\}.
\end{align}
Note that $i^s(v)$ is the steady-state map of~\eqref{eq.lines.cl.rot} and that all elements of $\mathcal T$ are equilibria of the dynamics in the rotating reference frame~\eqref{eq.closed.loop.rot}. Therefore, in the static frame, they correspond to synchronous sinusoidal trajectories with frequency $\omega_0$. By definition of the sets $\mathcal S$, $\mathcal A$, and $\mathcal T$, they also satisfy all control objectives introduced in Section~\ref{sec.control.obj}. Moreover, we define the union of $\mathcal T$ and the origin as $\mathcal T_0 \coloneqq \mathcal T \cup \{\mathbbl{0}_{n}\}$.
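The fact that every element of $\mathcal T$ is an equilibrium of \eqref{eq.closed.loop.rot} can be verified directly; the sketch below (hypothetical two-bus system, with set-points made consistent by construction) evaluates $f_v$ and $f_i$ at a point of $\mathcal T$:

```python
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]])
def R(t): return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

# Hypothetical two-bus example with a single line (assumed parameters).
omega0 = 2 * np.pi * 50
r, rho = 0.1, 0.003
ell = rho * r
B = np.array([[1.0], [-1.0]])
calB = np.kron(B, np.eye(2))
Z_T = r * np.eye(2) + omega0 * ell * J
kappa = np.arctan(rho * omega0)

# A point of T: v in S ∩ A (target profile) and i = i^s(v).
v_star = np.array([1.0, 0.98]); theta_star = 0.1
v_hat = np.concatenate([v_star[0] * np.array([1.0, 0.0]),
                        v_star[1] * R(theta_star) @ np.array([1.0, 0.0])])
i_hat = np.linalg.solve(Z_T, calB.T @ v_hat)
i_o_hat = calB @ i_hat

# Set-points p*, q* read off the target point (Condition 1 by construction).
Ks, eta, alpha = [], 0.01, 1.0
for k in range(2):
    vk, ik = v_hat[2*k:2*k+2], i_o_hat[2*k:2*k+2]
    p_s, q_s = vk @ ik, vk @ J @ ik
    Ks.append(R(kappa) @ np.array([[p_s, q_s], [-q_s, p_s]]) / v_star[k]**2)

# f_v and f_i from (eq.closed.loop.rot), evaluated on T.
f_v = []
for k in range(2):
    vk = v_hat[2*k:2*k+2]
    phi = (v_star[k]**2 - vk @ vk) / v_star[k]**2
    f_v.append(eta * (Ks[k] @ vk - R(kappa) @ i_o_hat[2*k:2*k+2]
                      + alpha * phi * vk))
f_v = np.concatenate(f_v)
f_i = np.linalg.solve(ell * np.eye(2), -Z_T @ i_hat + calB.T @ v_hat)

print(np.abs(f_v).max(), np.abs(f_i).max())   # ~0, ~0: x in T is an equilibrium
```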
\subsection{Main result}
We require the following condition to establish almost global asymptotic stability of the power system dynamics~\eqref{eq.closed.loop.rot}.
\begin{condition}{\bf(Stability Condition)}\label{cond.stab}
The set-points $p^\star_k$, $q^\star_k$, $v_k^\star$ and the steady-state angles $\theta^\star_{jk}$ satisfy Condition~\ref{cond.consistent}. There exists a maximal steady-state angle $\bar{\theta}^\star \in [0,\tfrac{\pi}{2}]$ such that $|\theta^\star_{jk}| \! \leq \! \bar{\theta}^\star$ holds for all $(j,k) \! \in \! \mathcal N \!\times\! \mathcal N$. For all $k \! \in \! \mathcal N$, the line admittances $\norm{Y_{jk}}$, the stability margin $c \in \mathbb{R}_{>0}$, the set-points $p^\star_k$, $q^\star_k$, $v_k^\star$, and the gains $\eta \in \mathbb{R}_{>0}$ and $\alpha \in \mathbb{R}_{>0}$ satisfy
\begin{multline*}
\!\!\!\!\!\!\sum_{j : (j,k) \in \mathcal E} \!\!\!\!\norm{Y_{jk}}\!\left|1\!-\!\frac{v^\star_j}{v^\star_k}\cos(\theta^\star_{jk})\right|+ \alpha {<} \frac{{1\!+\!\cos(\bar{\theta}^\star)}\!}{2} \frac{v^{\star2}_{\min}}{v^{\star2}_{\max}}\lambda_2( L)-c,\\
\eta < \frac{c }{\rho \norm{\mathcal Y} (c + 5 \|\mathcal K -\mathcal L\|)},
\end{multline*}
where $v^{\star}_{\min} \coloneqq \min_{k \in \mathcal N} v^\star_k$ and $v^{\star}_{\max} \coloneqq \max_{k \in \mathcal N} v^\star_k$ are the smallest and largest magnitude set-points, and $\lambda_2(L)$ is the second smallest eigenvalue of the graph Laplacian $L$.
\end{condition}
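Condition~\ref{cond.stab} is straightforward to evaluate for a concrete network; the sketch below (hypothetical two-bus system with assumed parameters, set-points consistent by construction) checks the first inequality and computes the resulting admissible bound on $\eta$:

```python
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]])
def R(t): return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

# Hypothetical two-bus system: one line, uniform ratio rho = ell/r.
omega0 = 2 * np.pi * 50
r, rho = 0.1, 0.003
ell = rho * r
w = 1.0 / np.sqrt(r**2 + (omega0 * ell)**2)   # ||Y_01||
B = np.array([[1.0], [-1.0]])
calB = np.kron(B, np.eye(2))
Z_T = r * np.eye(2) + omega0 * ell * J
calY = calB @ np.linalg.solve(Z_T, calB.T)
L = w * np.array([[1.0, -1.0], [-1.0, 1.0]])
lam2 = np.sort(np.linalg.eigvalsh(L))[1]      # algebraic connectivity (= 2w here)
calL = np.kron(L, np.eye(2))

# Set-points consistent by construction: read p*, q* off a target profile.
v_star = np.array([1.0, 1.0])
theta_star = 0.02                             # relative angle theta*_{21}
theta_bar = abs(theta_star)
kappa = np.arctan(rho * omega0)
v_hat = np.concatenate([v_star[0] * np.array([1.0, 0.0]),
                        v_star[1] * R(theta_star) @ np.array([1.0, 0.0])])
i_o_hat = calY @ v_hat
Ks = []
for k in range(2):
    vk, ik = v_hat[2*k:2*k+2], i_o_hat[2*k:2*k+2]
    p_s, q_s = vk @ ik, vk @ J @ ik
    Ks.append(R(kappa) @ np.array([[p_s, q_s], [-q_s, p_s]]) / v_star[k]**2)
calK = np.block([[Ks[0], np.zeros((2, 2))], [np.zeros((2, 2)), Ks[1]]])

# Condition 2: the first inequality limits alpha and the loading, the second
# bounds eta (time-scale separation between inverters and lines).
c, alpha = 0.05, 0.1
lhs1 = max(w * abs(1 - v_star[1 - k] / v_star[k] * np.cos(theta_star)) + alpha
           for k in range(2))
rhs1 = (1 + np.cos(theta_bar)) / 2 * lam2 - c   # v*_min = v*_max here
eta_max = c / (rho * np.linalg.norm(calY, 2)
               * (c + 5 * np.linalg.norm(calK - calL, 2)))
print(lhs1 < rhs1, eta_max)    # condition satisfied; any eta < eta_max works
```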
If the graph $\mathcal G$ is connected (i.e., $\lambda_2(L)>0$), Condition \ref{cond.stab} can always be satisfied by a suitable choice of control gains and set-points. We refer to Proposition \ref{cond.stab.power} for a more detailed discussion.
We can now state the main result of the manuscript.
\begin{theorem}{\bf(Almost global stability of $\boldsymbol{\mathcal T}$)}\label{thm.main}
Consider set-points $p^\star_k$, $q^\star_k$, $v^\star_k$, steady-state angles $\theta^\star_{jk}$, a stability margin $c$, and control gains $\alpha$ and $\eta$, such that Condition~\ref{cond.stab} holds. Then, the dynamics \eqref{eq.closed.loop.rot} are almost globally asymptotically stable with respect to $\mathcal{T}$, and the origin $\mathbbl{0}_{n}$ is an exponentially unstable equilibrium.
\end{theorem}
Theorem~\ref{thm.main} guarantees almost global asymptotic stability of the set $\mathcal{T}$ of desired equilibria in the rotating frame (corresponding to a harmonic solution with the desired power flows).
In the literature on virtual oscillator control for power inverters~{\cite{johnson2014synchronization,johnson2016synthesizing,MS-FD-BJ-SD:14b,LABT-JPH-JM:13}} it is shown that, for connected graphs, global synchronization can be obtained for the trivial power-flow solution with {identical angles and voltages}, i.e., $\theta_{jk}^\star=0$, $v^\star_k=v^\star$. Moreover, in the context of droop-controlled microgrids, \cite{vorobev2017high,vorobev2017cdc} provide stability conditions which account for the electromagnetic transients of the transmission lines and are valid locally around the trivial power-flow solution. Condition~\ref{cond.stab} (together with Theorem~\ref{thm.main}) ensures that a larger set of power-flow solutions (with nonzero power flows) is almost globally stable and accounts for the destabilizing effects of the electromagnetic transients of the transmission lines. We remark that Condition \ref{cond.stab} is not overly conservative (see Section \ref{sec.threebusboundary}). Let us now provide a power-system interpretation of Condition~\ref{cond.stab}.
\begin{proposition}{\bf(Interpretation of the stability condition)}\label{cond.stab.power}
Condition~\ref{cond.stab} is satisfied if the steady-state angles $\theta^\star_{jk}$, the set-points $p^\star_k$, $q^\star_k$, $v_k^\star$, and the steady-state branch powers $p^\star_{jk}$, $q^\star_{jk}$ satisfy Condition~\ref{cond.consistent}, $|\theta^\star_{jk}| \leq \tfrac{\pi}{2}$ holds for all $(j,k) \in \mathcal N \times \mathcal N$, and for all $k \in \mathcal N$, the stability margin $c \in \mathbb{R}_{>0}$, the network parameters $\norm{Y_{jk}}$, and the gains $\eta \in \mathbb{R}_{>0}$, $\alpha \in \mathbb{R}_{>0}$ satisfy
\begin{align*}
\sum_{j : (j,k) \in \mathcal E} \left(\frac{\cos(\kappa)}{v^{\star2}_k} \left|p^\star_{jk}\right|\!+\!\frac{\sin(\kappa)}{v^{\star2}_k} \left| q^\star_{jk} \right|\right)\!+\! \alpha \leq
\frac{v^{\star2}_{\min}}{2 v^{\star2}_{\max}}\lambda_2( L)-c,\\
\eta < \frac{c}{2 \rho d_{\max} (c + 5 \max_{k \in \mathcal N} s^\star_k {v^\star_k}^{-2} + 10 d_{\max})},
\end{align*}
where $s^\star_k \!\coloneqq\! \sqrt{p^{\star2}_k+q^{\star2}_k}$ is the apparent steady-state power injection, and $d_{\max} \!\coloneqq\! \max_{k \in \mathcal N} \sum\nolimits_{j:(j,k) \in \mathcal E} \norm{Y_{jk}}$ is the maximum weighted node degree of the transmission network graph.
\end{proposition}
{The proof is given in the Appendix. The stability conditions confirm and quantify well-known engineering insights. Broadly speaking, they require that the network is not too heavily loaded and that there is a sufficient time-scale separation between the inverter dynamics and the line dynamics.} The first inequality in Proposition~\ref{cond.stab.power} bounds the achievable steady-state power transfer in terms of the connectivity $\lambda_2(L)$ of the graph of the transmission network, the stability margin $c$ that accounts for the effect of the line dynamics, the control gain $\alpha$, and the voltage set-points $v^\star_{k}$. It shows that the achievable power transfer can be increased by increasing all voltage set-points $v^\star_{k}$, by reducing the gain $\alpha$ of the voltage regulator, or by upgrading transmission lines, i.e., increasing $\norm{Y_{jk}}$ and hence $\lambda_2(L)$. The second inequality in Proposition~\ref{cond.stab.power} provides a bound on the control gain $\eta$ which ensures that the line dynamics do not destabilize the system. We note that, for operating points that satisfy Proposition~\ref{cond.stab.power}, the term $\max_{k \in \mathcal N} s^\star_k {v^\star_k}^{-2}$ is much smaller than $d_{\max}$ and can be neglected for the purpose of this discussion.
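To make the two inequalities of Proposition~\ref{cond.stab.power} concrete, the following sketch evaluates both conditions for a single node; all numerical values are illustrative placeholders, not parameters from the test cases in Section~\ref{sec.example}.

```python
import math

def power_transfer_ok(branch_p, branch_q, v_k, kappa, alpha, c, lam2, v_min, v_max):
    """First inequality: steady-state branch loading at node k vs. connectivity."""
    lhs = sum((math.cos(kappa) * abs(p) + math.sin(kappa) * abs(q)) / v_k**2
              for p, q in zip(branch_p, branch_q)) + alpha
    rhs = (v_min**2 / (2.0 * v_max**2)) * lam2 - c
    return lhs <= rhs

def eta_bound(c, rho, d_max, s_over_v2):
    """Second inequality: upper bound on the synchronization gain eta."""
    return c / (2.0 * rho * d_max * (c + 5.0 * s_over_v2 + 10.0 * d_max))
```

For a lightly loaded node the first condition holds with ample margin, while the bound on $\eta$ shrinks as the line time constant $\rho$ or the node degree $d_{\max}$ grows.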
The controller \eqref{eq.control.law} achieves synchronization by inferring information about the voltage angle differences through the local measurements of the currents $\underline{i}_{o,k}$ (see Section \ref{sec.control.cl.design}). Thus, the time constant $\rho$ of the transmission lines can be interpreted as the propagation delay of the information on the phase angles.
Loosely speaking, the controller cannot act faster than it can observe information through the network. Therefore, longer time-constants $\rho$ require a lower gain $\eta$.
Moreover, the overall system gain is a combination of the gain of the transmission network, corresponding to the admittances $\norm{Y_{jk}}$, and the controller gain $\eta$. Upgrading or adding a line can increase $d_{\max}$ more than $\lambda_2(L)$. Thus, to keep the closed-loop gain constant, the controller gain $\eta$ needs to decrease. In other words, {the sufficient stability conditions of Theorem \ref{thm.main} and Proposition \ref{cond.stab.power} indicate that the system may become unstable if transmission lines are added, shortened, or upgraded. This observation is verified in Section \ref{sec:three:adm} using a numerical example.} The same effect was observed in \cite{vorobev2017cdc} for droop-controlled microgrids. In the remainder of this section, we prove Theorem~\ref{thm.main}.
\subsection{Singular perturbation theory}\label{subsec.sing.pert}
In the following we apply tools from singular perturbation theory (see \cite[{Ch. 11.5}]{K02}) to explicitly construct a Lyapunov function that establishes convergence of the dynamics~\eqref{eq.closed.loop.rot} to the set $\mathcal T_0$ and allows us to show that the origin $\mathbbl{0}_{n}$ is an unstable equilibrium. By replacing the dynamics \eqref{eq.lines.cl.rot} of the transmission lines with its steady-state map $i^s(v)$, we obtain the \emph{reduced-order system}
\begin{align}\label{eq.voltage.control.kh}
\begin{split}
\frac{\diff}{\diff t} v = f_v(v,i^s(v)) = {\eta}\left[( \mathcal K - \mathcal L)v + \alpha \Phi(v){v}\right ] ,
\end{split}
\end{align}
which describes the voltage dynamics under the assumption that the line currents are at their quasi-steady-state, i.e., $i_o = i^s_o(v)$. This results in $ \mathcal R(\kappa) i_o = \mathcal R(\kappa) \mathcal Y v = \mathcal L v$. Moreover, note that $(\mathcal K - \mathcal L) v = e_\theta(v)$ which implies $(\mathcal K - \mathcal L) v = 0$ for all $v \in \mathcal S$. Finally, we denote the difference between the line currents and their steady-state value as $y = i - i^s(v)$ and define the \emph{boundary-layer system}
\begin{align}
\frac{\diff y}{\diff t} \bigg\vert_{\frac{\diff}{\diff t} v = \mathbbl{0}_{2N}} = f_i(v,y+i^s(v)) = {-L^{-1}_T Z_T y} \label{eq.current.control.kh}
\end{align}
where $v$ is treated as a constant. In the remainder, we follow a similar approach to~\cite[Sec. 11.5]{K02} and obtain stability conditions for the full system~\eqref{eq.closed.loop.rot} based on the properties of~\eqref{eq.voltage.control.kh} and~\eqref{eq.current.control.kh}. In contrast to~\cite{K02} we address stability with respect to the set $\mathcal T_0$ and not a single equilibrium.
\subsection{Lyapunov function for the reduced-order system}\label{sec.power.systems.steady}
In this section, we provide a Lyapunov function for the reduced-order system~\eqref{eq.voltage.control.kh}. Given the voltage set-points $v^\star_k$ and relative steady-state angles $\theta^\star_{k1}$ for all $k \in \mathcal N$, we define the matrix $S\coloneqq[v_1^\star R(\theta_{11}^\star)^\mathsf{T} \hdots v_N^\star R(\theta_{1N}^\star)^\mathsf{T}]^\mathsf{T}$ and the projector $P_S \coloneqq (I_{2N} - \frac{1}{\sum v_i^{\star 2}}S S^\mathsf{T})$ onto the orthogonal complement $\mathcal S^\perp$ of ${\mathcal S}$. We now define the Lyapunov function candidate $V:{\mathbb R}^{2N}\to{\mathbb R}_{\ge0}$ for the reduced-order system as
\begin{align}\label{eq.v}
V(v) \coloneqq \frac{1}{2} v^\mathsf{T}P_Sv + {\frac{1}{2}}\eta\alpha\alpha_1\sum_{k=1}^N \left( \frac{{v_k^\star}^2 - \norm{v_k}^2}{v_k^\star}\right)^2,
\end{align}
where $\alpha \in \mathbb{R}_{>0}$ is the voltage controller gain and, given $c \in \mathbb{R}_{>0}$ such that Condition \ref{cond.stab} holds, the constant $\alpha_1 $ is given by
\begin{align} \label{thm1.strictbound}
\alpha_1 \coloneqq \frac{ c }{ 5\eta \norm{\mathcal K -\mathcal L}^2}.
\end{align}
Moreover, we define the function $\psi: {\mathbb R}^{2N} \to {\mathbb R}_{\geq 0}$ as
\begin{align}\label{eq.voltage.psi.def}
\psi(v) &\coloneqq \eta \left(\|\mathcal K-\mathcal L\| \|v\|_{\mathcal S} + \alpha \|\Phi(v)v\| \right).
\end{align}
We require the following preliminary results.
\begin{lemma} \label{lem.positivsatz.1}
The following inequality holds for all $v\in{\mathbb R}^{2N}$
\begin{align}\label{eq.claim1}
v^\mathsf{T} P_S \Phi(v) v \leq v^\mathsf{T} P_S v = \|v\|_{\mathcal S}^2.
\end{align}
\end{lemma}
\begin{lemma} \label{lem.redlfdecr}
Consider steady-state angles $\theta^\star_{jk}$, $\alpha$, and $c$ such that Condition~\ref{cond.stab} is satisfied. For all $v\in{\mathbb R}^{2N}$, it holds that
\begin{align}\label{eq.ass.reformulation}
v^\mathsf{T} P_S ( \mathcal K - \mathcal L + \alpha I_{2N}) v \leq - c \,\|v\|_{\mathcal S}^2.
\end{align}
\end{lemma}
The proofs are given in the appendix and use the same technique as in~\cite[Prop. 6, Prop. 7]{colombino2017global2}. In the following {proposition}, we show {that} the function $V$ is a Lyapunov function for the reduced-order system~\eqref{eq.voltage.control.kh}.
\begin{proposition}{\bf(Lyapunov function for the reduced-order system)}\label{thm.lyap.red.system}
Consider $V(v)$ defined in \eqref{eq.v} and set-points $p^\star_k$, $q^\star_k$, $v^\star_k$, steady-state angles $\theta^\star_{jk}$, $\alpha$, and $c$ such that Condition~\ref{cond.stab} holds. For any $\eta \in \mathbb{R}_{>0}$, there exist $\chi^{\scriptscriptstyle{V}}_1, \chi^{\scriptscriptstyle{V}}_2 \in \mathscr{K}_{\infty}$ such that
\begin{align}\label{eq.Vbound}
\chi^{\scriptscriptstyle{V}}_1(\norm{v}_{\mathcal S \cap \mathcal A}) \leq V(v) \leq \chi^{\scriptscriptstyle{V}}_2(\norm{v}_{\mathcal S \cap \mathcal A})
\end{align}
holds for all $v \in {\mathbb R}^{2N}$. Moreover, for all $v \in {\mathbb R}^{2N}$ the derivative of $V$ along the trajectories of the reduced-order system~\eqref{eq.voltage.control.kh} satisfies
\begin{align} \label{eq.voltage.ss.Vdot.ideal}
\frac{\diff}{\diff t} V \coloneqq \frac{\partial V}{\partial v} f_v(v,i^s(v)) &\leq -\alpha_1 \psi(v)^2.
\end{align}
\end{proposition}
\begin{IEEEproof}
By construction, the function $V(v)$ is positive definite with respect to $\mathcal S \cap \mathcal A$, i.e., $V(v)=0$ for all $v \in \mathcal S \cap \mathcal A$ and $V(v)>0$ otherwise, and radially unbounded with respect to $\mathcal S \cap \mathcal A$, i.e., $V(v) \to \infty$ for $\norm{v}_{\mathcal{A}} \to \infty$ and $V(v) \to \infty$ for $\norm{v}_{\mathcal{S}} \to \infty$. Moreover, $\mathcal S \cap \mathcal A$ is a compact set. Using these properties and replacing $\norm{\cdot}$ with the point-to-set distance $\norm{\cdot}_{\mathcal S \cap \mathcal A}$ in the steps outlined in \cite[p. 98]{H67}, it directly follows that there exist $\mathscr{K}_\infty$-functions $\chi^{\scriptscriptstyle{V}}_1$ and $\chi^{\scriptscriptstyle{V}}_2$ such that \eqref{eq.Vbound} holds for all $v \in {\mathbb R}^{2N}$.
Next, we can write the derivative of $V(\cdot)$ along the trajectories of~\eqref{eq.voltage.control.kh} as
\begin{align}\label{eq.lyap.decrease.one}
\begin{split}
\frac{\diff}{\diff t} V & = \eta v^\mathsf{T} P_S \left( ( \mathcal K - \mathcal L)v + \alpha \Phi(v)v\right)\\
& - 2 \eta^2 \alpha\,\alpha_1\,v^\mathsf{T} \Phi(v) \left(( \mathcal K - \mathcal L)v + \alpha \Phi(v)v\right).
\end{split}
\end{align}
Using Lemma~\ref{lem.positivsatz.1}, we can bound~\eqref{eq.lyap.decrease.one} as
\begin{align}\label{eq.lyap.decrease.lemm1}
\begin{split}
\frac{\diff}{\diff t} V \le &\eta v^\mathsf{T} P_S ( \mathcal K - \mathcal L + \alpha I_{2N}) v \\
& - 2 \eta^2 \alpha\,\alpha_1\,v^\mathsf{T} \Phi(v) \left(( \mathcal K - \mathcal L)v + \alpha \Phi(v)v\right).
\end{split}
\end{align}
Using Lemma~\ref{lem.redlfdecr} we obtain
\begin{align*}\label{eq.lyap.decrease.two}
\frac{\diff}{\diff t} V \le -\eta c \norm{v}^2_{\mathcal S} - 2\eta^2 \alpha \alpha_1 v^{\mathsf{T}} \Phi(v)\! \left(( \mathcal K - \mathcal L)v + \alpha \Phi(v)v\right).
\end{align*}
Because $(\mathcal K - \mathcal L) v =0$ for all $v\in\mathcal S$ and $P_S$ is the projector onto $\mathcal S^\perp$ it holds that $\mathcal K-\mathcal L = (\mathcal K-\mathcal L)P_{S}$, and we obtain
\begin{align*}\label{eq.lyap.decrease.three}
\frac{\diff}{\diff t} V \le & -\eta c\, \|v\|_{\mathcal S}^2 -2 \eta^2 \alpha^2 \alpha_1 \| \Phi(v)v\|^2\\
& + 2 \eta^2 \alpha \alpha_1\norm{\Phi(v) v} \norm{\mathcal K - \mathcal L} \norm{v}_{\mathcal S}.
\end{align*}
Next, to show that \eqref{eq.voltage.ss.Vdot.ideal} holds, we show that for $\alpha_1$ according to \eqref{thm1.strictbound} and for all $v \in \mathbb{R}^{2N}$, the inequality
\begin{align}\label{eq.inequality}
\begin{split}
&-\eta c\, \|v\|_{\mathcal S}^2 - 2\eta ^2\alpha^2 \alpha_1 \| \Phi(v)v\|^2 \\
&+ 2\, \eta^2 \alpha \alpha_1\norm{\Phi(v) v} \norm{\mathcal K - \mathcal L} \norm{v}_{\mathcal S}\\
&\le - \alpha_1 \eta^2(\|\mathcal K-\mathcal L\| \|v\|_\mathcal S +\alpha\| \Phi(v) v\| )^2
\end{split}
\end{align}
holds. By matching the coefficients of the right-hand side and the left-hand side, it can be seen that~\eqref{eq.inequality} holds for all $v \in \mathbb{R}^{2N}$ if %
\begin{align}\label{eq.condition}
\begin{bmatrix}
\|v\|_{\mathcal S}\\
\| \Phi(v)v\|
\end{bmatrix}^\mathsf{T}
Q
\begin{bmatrix}
\|v\|_{\mathcal S}\\
\| \Phi(v)v\|
\end{bmatrix}\ge0,\quad \forall v\in {\mathbb R}^{2N},
\end{align}
holds for the matrix $Q$ defined as
\begin{align}\label{eq.cond.Q}
Q\coloneqq \begin{bmatrix}
\eta c - \alpha_1\eta^2\|\mathcal K-\mathcal L\|^2 & -2\alpha_1\alpha\eta^2\|\mathcal K-\mathcal L\|\\
\star & \eta^2\alpha^2\alpha_1
\end{bmatrix}.
\end{align}
Using the Schur complement and $\alpha_1 >0$, it follows that $Q\succeq 0$ if $\eta c - 5\alpha_1\eta^2\|\mathcal K-\mathcal L\|^2 \ge0$. Thus, \eqref{eq.condition} is satisfied for $\alpha_1$ defined in \eqref{thm1.strictbound} and the proposition directly follows.
\end{IEEEproof}
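The Schur-complement step at the end of the proof can be spot-checked numerically: with $\alpha_1$ chosen as in \eqref{thm1.strictbound}, the determinant of $Q$ vanishes, so $Q$ is positive semidefinite but singular. A minimal sketch with arbitrary placeholder parameters, where \verb|K| stands for $\norm{\mathcal K - \mathcal L}$:

```python
import numpy as np

# Placeholder parameters; K stands for ||K - L||.
eta, c, alpha, K = 0.01, 0.1, 2.0, 3.0
alpha1 = c / (5.0 * eta * K**2)          # the choice (thm1.strictbound)

# The matrix Q from (eq.cond.Q).
Q = np.array([
    [eta * c - alpha1 * eta**2 * K**2, -2.0 * alpha1 * alpha * eta**2 * K],
    [-2.0 * alpha1 * alpha * eta**2 * K, eta**2 * alpha**2 * alpha1],
])
eigs = np.linalg.eigvalsh(Q)             # sorted ascending
```

The smallest eigenvalue is (numerically) zero, i.e., the choice \eqref{thm1.strictbound} is the largest $\alpha_1$ for which \eqref{eq.condition} still holds.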
\subsection{Lyapunov function for the boundary-layer system}\label{sec.power.systems.boundary}
In this section, we provide a Lyapunov function for the boundary layer system~\eqref{eq.current.control.kh}. To this end, we define $y_o \coloneqq i_o - i^s_o(v) = \mathcal B y$, the matrix $B_n \in {\mathbb R}^{M \times M_0}$ whose columns span the nullspace of $B$, $\mathcal B_n \coloneqq B_n \otimes I_2$, and $y_n = \mathcal B^\mathsf{T}_n L_T y$. The fact that $\mathcal G$ is connected implies $\rank(B)=N-1$ and it follows from the rank-nullity theorem that $M_0 = M-N+1$. We define the Lyapunov function $W: {\mathbb R}^{2M} \to {\mathbb R}_{\geq 0}$
\begin{align} \label{eq.w}
W(y) \coloneqq \frac{\rho}{2} (y^\mathsf{T} \mathcal B^\mathsf{T} \mathcal B y + y^\mathsf{T} L_T \mathcal B_n \mathcal B^\mathsf{T}_n L_T y).
\end{align}
\begin{proposition}{\bf(Lyapunov function for the boundary layer system)} \label{prop.deriv.W}
There exist $\chi^{\scriptscriptstyle{W}}_1, \chi^{\scriptscriptstyle{W}}_2 \in \mathscr{K}_{\infty}$ such that $ \chi^{\scriptscriptstyle{W}}_1(\norm{y}) \leq W(y) \leq \chi^{\scriptscriptstyle{W}}_2(\norm{y})$ holds for all $y \in {\mathbb R}^{2M}$. Moreover, for every $v \in {\mathbb R}^{2N}$, and all $y \in {\mathbb R}^{2M}$, the derivative of $W(y)$ along the trajectories of~\eqref{eq.current.control.kh} satisfies
\begin{align*}
\frac{\diff W}{\diff t} \big\vert_{\frac{\diff}{\diff t} v = \mathbbl{0}_{2N}} = \frac{\partial W}{\partial y} f_i(v,y+i^s(v)) \le -\norm{y_o}^2 - \norm{y_n}^2.
\end{align*}
\end{proposition}
\begin{IEEEproof}
Because $\mathcal B_n$ has full column rank, it holds that $\mathcal B^\mathsf{T}_n L_T \mathcal B_n \succ 0$. Next, we can parametrize all $y$ such that $\mathcal B y \!=\!\mathbbl{0}_{2N}$ using $\xi \in {\mathbb R}^{2M_0}$ and $y = \mathcal B_n \xi$, and it follows that $\xi^\mathsf{T} (\mathcal B^\mathsf{T}_n L_T \mathcal B_n)^\mathsf{T}(\mathcal B^\mathsf{T}_n L_T \mathcal B_n) \xi >0$ for all $\xi \neq \mathbbl{0}_{2M_0}$.
Thus, $W(y)$ is positive definite and the first statement directly follows. Using $L^{-1}_T Z_T = \tfrac{1}{\rho}I_{2M}+\omega_0 \mathcal J_M$, one obtains
\begin{align*}
\frac{\diff W}{\diff t} \big\vert_{\frac{\diff}{\diff t} v = \mathbbl{0}_{2N}} =& -y^\mathsf{T}_o y^{\phantom{\mathsf{T}}}_o - \rho y^\mathsf{T} \mathcal B^\mathsf{T} \mathcal B \omega_0 \mathcal J_M y\\
&- y^\mathsf{T}_n y^{\phantom{\mathsf{T}}}_n - \rho y^\mathsf{T} L_T \mathcal B_n \mathcal B^\mathsf{T}_n L_T \omega_0 \mathcal J_M y.
\end{align*}
Using the mixed-product property of the Kronecker product and $J\!=\!-J^\mathsf{T}\!$, it can be shown that $\mathcal B^\mathsf{T}\! \mathcal B \mathcal J_M \!=\! ((B^\mathsf{T}\! B) \otimes I_2)(I_M \otimes J) \!=\! (B^\mathsf{T}\! B) \otimes J \!=\! -((B^\mathsf{T}\! B) \otimes J)^\mathsf{T}\!$ is skew-symmetric. The same approach can be used to show that $L_T \mathcal B_n \mathcal B^\mathsf{T}_n L_T \mathcal J_M$ is skew-symmetric and the proposition follows.
\end{IEEEproof}
\subsection{Proof of the main result}
Proposition~\ref{thm.lyap.red.system} establishes that the Lyapunov function $V(v)$ of the reduced-order system~\eqref{eq.voltage.control.kh}, which describes the voltage dynamics assuming the transmission line currents are in steady state, is decreasing. Moreover, Proposition~\ref{prop.deriv.W} shows that the Lyapunov function $W(y)$ of the boundary-layer system~\eqref{eq.current.control.kh}, which describes the difference between the actual transmission line currents and their steady state for constant voltages (in the rotating reference frame), is decreasing.
In this section, we use these results to construct a Lyapunov function candidate for the overall system~\eqref{eq.closed.loop.rot} as a convex combination of the functions $V(v)$ and $W(y)$ introduced in~\eqref{eq.v} and~\eqref{eq.w}. We recall that $x \coloneqq (v,i) \in {\mathbb R}^n$, introduce a constant $d \in \mathbb{R}_{(0,1)}$, and define the Lyapunov function candidate $\nu: {\mathbb R}^n \to {\mathbb R}_{\geq0}$ for the full-order system~\eqref{eq.closed.loop.rot} as
\begin{align}\label{eq.lyap.full}
\nu(x)\coloneqq d W (i-i^s(v))+ (1-d)V(v).
\end{align}
The following propositions bound the terms which arise from taking the time-derivative of $\nu(x)$ for the full-order system~\eqref{eq.closed.loop.rot} instead of the reduced-order system~\eqref{eq.voltage.control.kh} and boundary layer system~\eqref{eq.current.control.kh}. We will use these bounds to define a constant $d \in \mathbb{R}_{(0,1)}$ that ensures that $\nu(x)$ is decreasing.
\begin{proposition} \label{prop.beta1}
Let $\beta_1 \coloneqq \norm{\mathcal K -\mathcal L}^{-1}$. For all $v\in{\mathbb R}^{2N}$ and all $y \in {\mathbb R}^{2M}$ it holds that
\begin{align*}
\frac{\partial V}{\partial v}[f_v(v,y+i^s(v))-f_v(v,i^s(v))]\le \beta_1\psi(v)\|y_o\|.
\end{align*}
\end{proposition}
\begin{proposition} \label{prop.beta2}
Consider $\beta_2 \coloneqq \rho \norm{\mathcal Y}$ and $\gamma \coloneqq \eta \rho \norm{\mathcal Y}$. For all $v\in{\mathbb R}^{2N}$ and $y \in {\mathbb R}^{2M}$ it holds that
\begin{align*}
-\frac{\partial W}{\partial y}\frac{\partial i^s}{\partial v} f_v(v,y+i^s(v))\leq
\beta_2 \psi(v) \|y_o\| + \gamma \|y_o\|^2 .
\end{align*}
\end{proposition}
The proofs are provided in the Appendix.
The next result bounds the decrease of $\nu$ along the trajectories of the \emph{full-order system}~\eqref{eq.closed.loop.rot}.
\begin{proposition}\label{propo.nudecr}
Let $d \coloneqq \frac{\beta_1}{\beta_1+\beta_2} \in \mathbb{R}_{(0,1)}$. Under Condition~\ref{cond.stab}, there exists a function $\chi_3\in \mathscr{K}$ such that
\begin{align}\label{eq.bound.decrease.nu}
\frac{\diff}{\diff t} \nu \coloneqq \frac{\partial \nu}{\partial v} f_v(v,i) + \frac{\partial \nu}{\partial i} f_i(v,i) \le - \chi_3(\|x\|_{\mathcal T_0}).
\end{align}
\end{proposition}
\begin{IEEEproof}
It follows from Propositions~\ref{thm.lyap.red.system}--\ref{prop.beta2}
\begin{align*}
&\frac{\diff}{\diff t} \nu = d \bigg(\! \frac{\partial W}{\partial y} f_i(v,y+i^s(v)) -
\frac{\partial W}{\partial y}\frac{\partial i^s}{\partial v} f_v(v,y+i^s(v))\!\!\bigg)+\\
&(1\!-\!d)\bigg(\!\frac{\partial V}{\partial v} \!f_v(v,i^s(v)) \!+\! \frac{\partial V}{\partial v}[f_v(v,y\!+\!i^s(v))\!-\!f_v(v,i^s(v))]\!\bigg)\!,
\end{align*}
$y_o = i_o-i^s_o(v)$, and $y_n = \mathcal B^\mathsf{T}_n L_T y$, that
\begin{align*}
&\frac{\diff}{\diff t} \nu \leq - d \| y_o \|^2 - d \| y_n \|^2 + d \beta_2 \psi(v) \| y_o\| + d\gamma \| y_o\|^2\\
&\hphantom{\frac{\diff}{\diff t} \nu \leq} -(1-d)\alpha_1\psi(v)^2 +(1-d)\beta_1\psi(v) \| y_o\| \\
\leq & -\!\begin{bmatrix}
\psi(v)\\
\| y_o\|\\
\| y_n \|
\end{bmatrix}^{\!\mathsf{T}}\!\!
\underbrace{\begin{bmatrix}
(1-d)\alpha_1\!\! & -\frac{(1-d)\beta_1+ d\beta_2}{2} & 0\\
\star& (1-\gamma) d & 0 \\
0 & 0 & 1
\end{bmatrix}}_{\eqqcolon \mathcal M}\!
\begin{bmatrix}
\psi(v)\\
\| y_o\|\\
\| y_n \|
\end{bmatrix}
\end{align*}
Following similar steps to~\cite[p. 452]{K02}, we let $d = \frac{\beta_1}{\beta_1+\beta_2}$ and obtain the following condition for $\mathcal M$ to be positive definite
\begin{align}\label{eq.singper.cond}
1 < \frac{{\alpha_1} }{\alpha_1 \gamma + \beta_1 \beta_2}\implies \mathcal M\succ 0.
\end{align}
Using the expressions for $\alpha_1$, $\beta_1$, $\beta_2$, and $\gamma$ derived in Propositions~\ref{thm.lyap.red.system}--\ref{prop.beta2}, condition~\eqref{eq.singper.cond} is satisfied if
\begin{align} \label{eq.eta.def}
\eta < \frac{c }{\rho \norm{\mathcal Y} (c + 5 \|\mathcal K -\mathcal L\|)}.
\end{align}
Therefore, $\frac{\diff}{\diff t} \nu(x)<0$ for all $(v,i)\notin \mathcal T_0$. Moreover, because the map $x \mapsto (\psi(v),\|y_o\|,\|y_n\|)$ is both positive definite and radially unbounded with respect to $\mathcal T_0$, and $\mathcal T_0$ is compact, the steps in~\cite[p. 98]{H67} can be used to show that there exists a function $\chi_3\in \mathscr{K}$ such that~\eqref{eq.bound.decrease.nu} holds.
\end{IEEEproof}
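The algebra leading from condition \eqref{eq.singper.cond} to the explicit gain bound \eqref{eq.eta.def} can be checked numerically; the sketch below uses arbitrary placeholder parameters, with \verb|K| standing for $\norm{\mathcal K - \mathcal L}$:

```python
# Placeholder parameters.
c, K, rho, normY = 0.1, 3.0, 0.03, 2.0
eta_max = c / (rho * normY * (c + 5.0 * K))   # bound (eq.eta.def)
eta = 0.99 * eta_max                          # any gain strictly below the bound
alpha1 = c / (5.0 * eta * K**2)               # (thm1.strictbound)
beta1, beta2, gamma = 1.0 / K, rho * normY, eta * rho * normY
ratio = alpha1 / (alpha1 * gamma + beta1 * beta2)   # l.h.s. of (eq.singper.cond)
```

Any $\eta$ strictly below the bound yields a ratio larger than one, and at $\eta = \eta_{\max}$ the condition holds with equality.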
We are now ready to prove Theorem~\ref{thm.main}.\\
\emph{Proof of Theorem~\ref{thm.main}:}
We apply Theorem \ref{thm.AGAS} with $\mathcal C = \mathcal T$ and $\mathcal U = \{\mathbbl{0}_n\}$. According to Propositions~\ref{thm.lyap.red.system} and \ref{prop.deriv.W} there exist $\mathscr{K}_{\infty}$-functions $\chi^{\scriptscriptstyle{V}}_1$, $\chi^{\scriptscriptstyle{V}}_2$, $\chi^{\scriptscriptstyle{W}}_1$, and $\chi^{\scriptscriptstyle{W}}_2$, such that { the functions $\tilde\chi_i = (1-d) \chi^{\scriptscriptstyle{V}}_i + d \chi^{\scriptscriptstyle{W}}_i$, $i\in\{1,2\}$, are positive definite and radially unbounded w.r.t.\ the compact set $\mathcal T$ and satisfy $\tilde\chi_1(\norm{v}_{\mathcal S\cap\mathcal A },\norm{y})\le \nu(x)\le \tilde\chi_2(\norm{v}_{\mathcal S\cap\mathcal A },\norm{y})$; therefore, from~\cite[p. 98]{H67} there exist $\chi_1,\chi_2\in\mathscr{K}_{\infty}$ such that}
$\chi_1(\norm{x}_{\mathcal T})\le \nu(x)\le \chi_2(\norm{x}_{\mathcal T})$ holds for all $x\in \mathbb{R}^{n}$. Moreover, Proposition~\ref{propo.nudecr} guarantees the existence of a function $\chi_3 \in \mathscr{K}$ such that
$\frac{\diff}{\diff t} \nu \le - \chi_3(\|x\|_{\mathcal T_0})$ holds for all $x\in \mathbb{R}^{n}$. It remains to show that the region of attraction $\mathcal Z_{\mathcal U} = \mathcal Z_{\{\mathbbl{0}_n\}}$ has zero Lebesgue measure.
{To this end, consider the linearized dynamics $\frac{\diff}{\diff t} x \!=\! A_{\mathbbl{0}} x$ with $A_{\mathbbl{0}} \coloneqq \tfrac{\partial}{\partial x} f(x) \vert_{x = \mathbbl{0}_{n}}$ and the reduced-order linearized dynamics $\frac{\diff}{\diff t} v = A_{\mathbbl{0},v} v$ with $A_{\mathbbl{0},v} \coloneqq \tfrac{\partial}{\partial v}f_v(v,i^s(v))\vert_{v = \mathbbl{0}_{2N}}$. Moreover, we define $\psi_{\mathbbl{0}}(v) \coloneqq \eta ( \norm{\mathcal K - \mathcal L} \norm{v}_{\mathcal S} + \alpha \norm{v})$ as well as the quadratic approximation $\nu_{\mathbbl{0}}(x) \coloneqq \frac{1}{2}x^\mathsf{T} \tfrac{\partial^2}{\partial x^2} \nu(x)\vert_{x=\mathbbl{0}_n}\, x$ of the Lyapunov function $\nu$ and the corresponding quadratic approximation $V_{\mathbbl{0}}$ of $V$. Following analogous steps as in the proofs of Propositions \ref{thm.lyap.red.system} and \ref{propo.nudecr}, it can be shown that $\frac{\partial V_{\mathbbl{0}}}{\partial v} A_{\mathbbl{0},v} v \leq -\alpha_1 \psi_{\mathbbl{0}}(v)^2$ holds, and that there exist positive constants $d \in \mathbb{R}_{(0,1)}$ and $\chi_{\mathbbl{0}} \in \mathbb{R}_{>0}$ such that $\frac{\diff}{\diff t} \nu_{\mathbbl{0}} \coloneqq \frac{\partial \nu_{\mathbbl{0}}}{\partial x} A_{\mathbbl{0}} x \leq - \chi_{\mathbbl{0}} (\psi_{\mathbbl{0}}^2 + \norm{y_o}^2 + \norm{y_n}^2)$ holds. Next, we show that $A_{\mathbbl{0}}$ cannot have eigenvalues with zero real part. If $A_{\mathbbl{0}}$ has eigenvalues with zero real part, there exists $x_0 \neq \mathbbl{0}_n$ such that the solution $\varphi_{A_{\mathbbl{0}}}(t,x_0)$ of $\frac{\diff}{\diff t} x_\delta = A_{\mathbbl{0}} x_\delta$
remains bounded for all $t \in \mathbb{R}_{\geq 0}$ and does not converge to the origin. However, $\nu_{\mathbbl{0}}(\varphi_{A_{\mathbbl{0}}}(t,x_0))$ is strictly decreasing in $t$ for all $x_0 \neq \mathbbl{0}_n$, and $\nu_{\mathbbl{0}}$ is quadratic and not bounded from below. Thus, it either holds that $\norm{\varphi_{A_{\mathbbl{0}}}(t,x_0)} \to 0$ as $t \to \infty$, or $\nu_{\mathbbl{0}}(\varphi_{A_{\mathbbl{0}}}(t,x_0)) \to -\infty$ and $\norm{\varphi_{A_{\mathbbl{0}}}(t,x_0)} \to \infty$ as $t \to \infty$, i.e., there exists no initial condition $x_0 \neq \mathbbl{0}_n$ for which $\varphi_{A_{\mathbbl{0}}}(t,x_0)$ remains bounded and does not converge to the origin. Therefore, $A_{\mathbbl{0}}$ cannot have eigenvalues with zero real part. Moreover, for all $x_0=(v_0,i^s(v_0))$ with $v_0 \in \mathcal S \setminus \{\mathbbl{0}_{2N}\}$ it holds that $\nu_{\mathbbl{0}}(\mathbbl{0}_n) > \nu_{\mathbbl{0}}(x_0)$, and it follows that $\nu_{\mathbbl{0}}(\varphi_{A_{\mathbbl{0}}}(t,x_0)) \to -\infty$ and $\norm{\varphi_{A_{\mathbbl{0}}}(t,x_0)} \to \infty$ as $t \to \infty$, i.e., the origin is an unstable equilibrium of $\frac{\diff}{\diff t} x = A_{\mathbbl{0}} x$, and at least one eigenvalue of $A_{\mathbbl{0}}$ has positive real part. Because the right-hand side of \eqref{eq.closed.loop.rot} is continuously differentiable, the region of attraction $\mathcal Z_{\mathcal U} = \mathcal Z_{\{\mathbbl{0}_n\}}$ of the origin has zero Lebesgue measure (see \cite[Prop. 11]{Monzon2006}). Moreover, it holds that $\mathbbl{0}_n \notin \mathcal C$ and the theorem directly follows from Theorem~\ref{thm.AGAS}.
\hfill\IEEEQED }
\section{Power systems implications and test-cases}\label{sec.example}
In this section, two power systems test-cases are used to illustrate the results. We use a three-bus system to {illustrate the impact of the line parameters and to} discuss the stability boundaries obtained from Theorem \ref{thm.main}. Then, we consider a more realistic test-case based on the IEEE 9-bus system, and we show that the system behaves as expected even when some of the technical assumptions do not hold.
{
\subsection{The impact of line admittances on stability}\label{sec:three:adm}
We now investigate the role line admittances play in the stability of an inverter-based transmission system using the system depicted in Figure~\ref{fig:threebus}, consisting of three inverters and three transmission lines. The base power is $1$ GW, the base voltage is $320$ kV, the transmission line connecting inverter 1 to inverter 2 is $125$ km long, and the transmission lines connecting inverter 1 and inverter 3, as well as inverter 2 and inverter 3, are $25$ km long. The line resistance is $0.03$ Ohm/km and the line reactance is $0.3$ Ohm/km (at $\omega_0 = 50$ Hz), i.e., the $\ell/r$ ratio of the transmission lines is $\omega_0\rho=10$. We use dVOC to control the inverters with the set-points (in p.u.) $v_k^\star=1$, $p^\star_1=-0.52$, $p^\star_2= -0.19$, $p^\star_3= 0.71$, $q^\star_1=0.06$, $q^\star_2=0.021$, and $q^\star_3=-0.06$, and the control gains $\eta = 3\cdot 10^{-3}$ and $\alpha=5$.
Theorem~\ref{thm.main} and Proposition~\ref{cond.stab.power} indicate that the system may become unstable if the admittance of individual lines is increased by, e.g., adding or upgrading transmission lines. To validate this insight, we vary the admittance of individual lines (keeping $\ell/r$ constant), recompute the steady-state given by $\theta^\star_{jk}$ and $v^\star_k=1$ that corresponds to the power injections specified above, linearize the system at this steady-state, and compute the minimum damping ratio $\zeta_{\min}$, defined by
\begin{align*}
\zeta_{\min} \coloneqq \min_{k \in \{2,\ldots,n\}} \frac{-\operatorname{Re}(\lambda_k)}{\sqrt{\operatorname{Re}(\lambda_k)^2 + \operatorname{Im}(\lambda_k)^2}},
\end{align*}
where $\operatorname{Re}(\lambda_k)$ and $\operatorname{Im}(\lambda_k)$ denote the real and imaginary parts of the $k$-th eigenvalue of the linearized system, and we exclude the eigenvalue $\lambda_1=0$ that corresponds to the rotational invariance of the system. A larger damping ratio corresponds to a well-damped system, and the damping ratio is negative if the system is unstable.
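The computation of $\zeta_{\min}$ can be sketched as follows; this is a generic routine acting on any linearized system matrix, not the actual model of the test case, and the sign convention is such that stable modes have positive damping:

```python
import numpy as np

def min_damping_ratio(A, tol=1e-9):
    """Minimum damping ratio over the eigenvalues of A, excluding
    the zero eigenvalue caused by rotational invariance."""
    lam = np.linalg.eigvals(np.asarray(A, dtype=float))
    lam = lam[np.abs(lam) > tol]          # drop lambda_1 = 0
    return float(np.min(-lam.real / np.abs(lam)))
```

A negative return value indicates an unstable linearization.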
Figure \ref{fig:adm} shows the minimum damping ratio $\zeta_{\min}$ as a function of the admittance of the line connecting inverter 1 to inverter 2 (blue), inverter 2 and 3 (red), and inverter 1 and 3 (orange). The black dashed line indicates the minimum damping ratio for the system in the original configuration described in Section \ref{sec.threebusboundary}. In the original configuration, the admittance of the line from inverter 1 to inverter 2 is $0.0265$ S and the admittance of the lines connected to inverter 3 is $0.1327$ S. It can be seen that the system becomes increasingly underdamped and eventually unstable if any of the individual line admittances is increased beyond a threshold. This result is in line with the analytical insights obtained from Theorem \ref{thm.main} and confirms that adding, upgrading, or shortening transmission lines can make the system unstable.
Moreover, the conditions of Theorem~\ref{thm.main} and Proposition~\ref{cond.stab.power} depend on the maximum weighted node degree of the transmission network graph $d_{\max} = \max_{k \in \mathcal N} \sum\nolimits_{j:(j,k) \in \mathcal E} \norm{Y_{jk}}$, i.e., the maximum of the sum of the admittances of lines connected to individual converters. This dependence can also be observed in Figure \ref{fig:adm}. Notably, the minimum damping ratio $\zeta_{\min}$ is insensitive to the admittance of the line connecting inverter 1 to inverter 2 (i.e., $\norm{Y_{12}}$) for small values of $\norm{Y_{12}}$, and becomes sensitive to $\norm{Y_{12}}$ only once $\norm{Y_{12}}$ becomes large enough to affect $d_{\max}$.
}
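As an illustration of this dependence, the maximum weighted node degree of the original configuration follows directly from the reported admittance magnitudes (a minimal sketch; the edge set matches Figure~\ref{fig:threebus}):

```python
# Admittance magnitudes of the original configuration (in S):
# the 125 km line (1,2) and the two 25 km lines (1,3), (2,3).
Y = {(1, 2): 0.0265, (1, 3): 0.1327, (2, 3): 0.1327}
deg = {k: sum(y for (i, j), y in Y.items() if k in (i, j)) for k in (1, 2, 3)}
d_max = max(deg.values())                # attained at inverter 3
```

Since inverter 3 carries both short lines, moderate changes of $\norm{Y_{12}}$ leave $d_{\max}$ unchanged, consistent with the insensitivity observed in Figure~\ref{fig:adm}.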
\begin{figure}[t!!!]
\definecolor{mycolor1}{rgb}{0.00000,0.44700,0.74100}%
\definecolor{mycolor2}{rgb}{0.85000,0.32500,0.09800}%
\definecolor{mycolor3}{rgb}{0.92900,0.69400,0.12500}%
\newlength\figH
\newlength\figW
\setlength{\figH}{4cm}
\setlength{\figW}{7.5cm}
\begin{center}
\begin{circuitikz}[american voltages]
\ctikzset{bipoles/resistor/height=0.15}
\ctikzset{bipoles/resistor/width=0.4}
\ctikzset{bipoles/generic/height=0.15}
\ctikzset{bipoles/generic/width=0.4}
\ctikzset{bipoles/length=.6cm}
\coordinate (c) at (0.5,\figH+2cm);
\coordinate (I1) at ($ (c) + (0,0) $);
\coordinate (I2) at ($ (c) + (6,0.5) $);
\coordinate (I3) at ($ (c) + (2,-0.5) $);
\draw ($ (I1) + (0,-0.8) $) node [ground] (g1) {};
\draw (I1) to[sV, l=$v_2$,*-] (g1);
\draw ($ (I2) + (0,-0.8) $) node [ground] (g2) {};
\draw (g2) to[sV, l=$v_1$,-*] (I2);
\draw ($ (I3) + (0,-0.8) $) node [ground] (g3) {};
\draw (g3) to[sV, l=$v_3$,-*] (I3);
\node (I12) at ($(I1)!0.5!(I2)$) [label={[xshift=0cm, yshift=0cm,color=mycolor1]$125~\text{km}$},color=mycolor1]{};
\draw (I1)
to[R,color=mycolor1] (I12)
to [L,color=mycolor1] (I2);
\node (I13) at ($(I1)!0.5!(I3)$)[label={[xshift=1.2cm, yshift=-0.2cm,color=mycolor2]$25~\text{km}$},color=mycolor2]{};
\draw (I1)
to[R,color=mycolor2] (I13)
to [L,-,color=mycolor2] (I3);
\node (I32) at ($(I3)!0.5!(I2) $) [label={[xshift=0.3cm, yshift=-0.6cm,color=mycolor3]$25~\text{km}$},color=mycolor3]{};
\draw (I3)
to[R,color=mycolor3] (I32)
to [L,color=mycolor3] (I2);
\end{circuitikz}%
\end{center}
\vspace{-1em}
\caption{{Three-bus transmission system}. \label{fig:threebus}}
\vspace{1em}
\input{lineadm.tex}
\vspace{-2em}
\caption{{Damping and stability of a three-bus transmission system as a function of the line admittances. The black dashed line indicates the minimum damping ratio for the system in the original configuration described in Section \ref{sec.threebusboundary}. The minimum damping ratio of the system decreases if the line admittances are increased and the system eventually becomes unstable. \label{fig:adm}}}
\end{figure}
\subsection{Stability boundaries of a three-bus transmission system}\label{sec.threebusboundary}
{We now consider the three-inverter transmission system shown in Figure \ref{fig:threebus} with the nominal line parameters and set-points given in Section~\ref{sec:three:adm}. We validate Theorem~\ref{thm.main} numerically by investigating the stability properties as a function of the control gains $\alpha$ and $\eta$.} The results are shown in Figure~\ref{fig.bound}. For control gains in region (a), Theorem~\ref{thm.main} guarantees almost global asymptotic stability, whereas for control gains in region (b) instability can be verified both via simulation and via linearization. In region (c), the system remains stable in simulations of black starts and changes in load, but the magnitude $\norm{i_{o,k}}$ of the converter output currents exhibits overshoots of more than $20 \%$. Due to tight limits on the maximum output current of power inverters, this is not desirable and, in practice, would require oversizing the inverters. Finally, for control gains in region (d), local asymptotic stability can be verified via linearization, and we observe that simulations of black starts and changes in load converge to $\mathcal T$. However, we cannot rule out the existence of unstable solutions in region (d), i.e., the union of (a) and (d) is an outer approximation of the range of parameters for which the system is almost globally asymptotically stable and satisfies the current limits of power inverters. Moreover, the lines in (d) indicate the minimum damping ratio {$\zeta_{\min}$} of the linearized system. In this example, minimum damping ratios below $5\cdot10^{-2}$ can result in significant oscillations and should be avoided.
\begin{figure}[t!!!]
\begin{center}
\input{stabboundary_log}
\vspace{-0.5em}
\caption{Stability regions in parameter space. For control gains in region (a) Theorem \ref{thm.main} guarantees almost global asymptotic stability, in region (b) the system is unstable, in region (c) operational limits of power inverters are exceeded, and in region (d) the system is locally asymptotically stable. The lines in (d) indicate the minimum damping ratios of the linearized system.}
\label{fig.bound}
\end{center}
\end{figure}
It should be noted that Condition~\ref{cond.stab} ensures exponential phase stability of the reduced-order system~\eqref{eq.voltage.control.kh}, i.e., using the same steps as in Prop.~\ref{thm.lyap.red.system} it can be shown that $\frac{\diff}{\diff t} \tfrac{1}{2}\norm{v}^2_{\mathcal S} \leq -\eta c \norm{v}^2_{\mathcal S}$. Thus, Theorem~\ref{thm.main} excludes region (c), and more generally, regions of the parameter space that result in poor damping. Hence, although the bound given by Theorem~\ref{thm.main} is conservative by an order of magnitude, the test case confirms that the controller gain must be limited to maintain stability, and it must be further limited to avoid oscillations and satisfy the constraints of power inverters. In fact, within these operational constraints, the explicit bound given by Theorem~\ref{thm.main} is fairly accurate. We stress that Theorem~\ref{thm.main} is not restricted to operating points with zero power flow and gives almost global guarantees. Because of this, it is expected that the resulting bounds are conservative.
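By Grönwall's lemma, the Lyapunov inequality above implies the exponential envelope $\norm{v(t)}_{\mathcal S} \leq e^{-\eta c t}\norm{v(0)}_{\mathcal S}$. A minimal numeric illustration with a scalar surrogate system (the values of $\eta$ and $c$ below are placeholders, not those of the test system):

```python
import math

def decay_envelope(v0, eta, c, t):
    """Bound implied by d/dt (1/2)||v||^2 <= -eta*c*||v||^2."""
    return v0 * math.exp(-eta * c * t)

# Scalar surrogate dv/dt = -eta*c*v attains the inequality with equality;
# forward-Euler integration stays at (or just below) the envelope.
eta, c, v0, dt, steps = 1e-3, 50.0, 1.0, 1e-3, 1000
v = v0
for _ in range(steps):          # integrate up to t = steps*dt = 1 s
    v += dt * (-eta * c * v)
print(v, decay_envelope(v0, eta, c, steps * dt))
```

The surrogate makes the role of the product $\eta c$ explicit: halving the gain $\eta$ halves the guaranteed decay rate.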
\definecolor{mycolor1}{rgb}{0.00000,0.44700,0.74100}%
\definecolor{mycolor2}{rgb}{0.85000,0.32500,0.09800}%
\definecolor{mycolor3}{rgb}{0.92900,0.69400,0.12500}%
\subsection{Illustrative example: IEEE 9-bus system}\label{sec:example:ieee9}
In this section, we use the IEEE 9-bus system and replace generators by power inverters with the same rating. We use a structure preserving model with passive loads shown in Figure \ref{fig:ieee9} and do not modify the test-case parameters, i.e., Assumption~\ref{ass.constant.ratio} does not hold.
\begin{figure}[t!!!]
\vspace{-1em}
\begin{center}
\begin{circuitikz}[american voltages]
\ctikzset{bipoles/resistor/height=0.15}
\ctikzset{bipoles/resistor/width=0.4}
\ctikzset{bipoles/generic/height=0.15}
\ctikzset{bipoles/generic/width=0.4}
\ctikzset{bipoles/length=.6cm}
\ctikzset{bipoles/thickness=2}
\node[label=1] at (0,0.1) (I1) {};
\node[label=2] (I2) at (7.5,0.6) {};
\node[label=3] (I3) at (5,-1.5) {};
\draw ($ (I1) + (0,-0.8) $) node [ground] (g1) {};
\draw (I1) to[sV, l=$v_1$, color=mycolor1] (g1);
\draw ($ (I2) + (0,-0.8) $) node [ground] (g2) {};
\draw (g2) to[sV, l=$v_2$, color=mycolor2] (I2);
\draw ($ (I3) + (0,-0.8) $) node [ground] (g3) {};
\draw (g3) to[sV, l=$v_3$, color=mycolor3] (I3);
\node[label=4] (4) at ($ (I1) + (1,-0.1) $) {};
\node[label=8] (8) at ($ (I2) + (-1,-0.1) $) {};
\node[label=6] (6) at ($ (I3) + (-1,0.2) $) {};
\draw ($ (4) + (0,-0.8) $) node [ground] (g4) {};
\draw (4) to[C] (g4);
\draw ($ (8) + (0,-0.8) $) node [ground] (g8) {};
\draw (8) to[C] (g8);
\draw ($ (6) + (0,-0.8) $) node [ground] (g6) {};
\draw (6) to[C] (g6);
\node[label=5] (5) at ($(4)!0.5!(6)$) {};
\node[label=9] (9) at ($(4)!0.5!(8)$) {};
\node[label=7] (7) at ($(6)!0.5!(8)$) {};
\draw ($ (5) + (0,-0.8) $) node [ground] (g5) {};
\draw (5) to[C] (g5);
\draw ($ (9) + (0,-0.8) $) node [ground] (g9) {};
\draw (9) to[C] (g9);
\draw ($ (7) + (0,-0.8) $) node [ground] (g7) {};
\draw (7) to[C] (g7);
\draw ($ (9) + (0.4,-0.8) $) node [ground] (g991) {};
\draw (g991) to[R] ($ (9) + (0.4,-0.3) $);
\draw (9) to[short] ($ (9) + (0.4,-0.3) $);
\draw ($ (9) + (-0.4,-0.8) $) node [ground] (g992) {};
\draw (g992) to[L] ($ (9) + (-0.4,-0.3) $);
\draw (9) to[short] ($ (9) + (0-0.4,-0.3) $);
\draw ($ (5) + (-0.4,-0.8) $) node [ground] (g551) {};
\draw (g551) to[R] ($ (5) + (-0.4,-0.3) $);
\draw (5) to[short] ($ (5) + (-0.4,-0.3) $);
\draw ($ (5) + (-0.8,-0.8) $) node [ground] (g552) {};
\draw (g552) to[L] ($ (5) + (-0.8,-0.3) $);
\draw (5) to[short] ($ (5) + (-0.8,-0.3) $);
\draw ($ (7) + (0.4,-0.8) $) node [ground] (g771) {};
\draw (g771) to[R] ($ (7) + (0.4,-0.3) $);
\draw (7) to[short] ($ (7) + (0.4,-0.3) $);
\draw ($ (7) + (0.8,-0.8) $) node [ground] (g772) {};
\draw (g772) to[L] ($ (7) + (0.8,-0.3) $);
\draw (7) to[short] ($ (7) + (0.8,-0.3) $);
\draw (I1)
to[L,*-] (4);
\draw (I2)
to[L,*-] (8);
\draw (I3)
to[L,*-] (6);
\node (49) at ($(4)!0.5!(9)$) [label={[xshift=0cm, yshift=0.3cm]}]{};
\draw (4)
to[R,*-] (49)
to [L,-*] (9);
\node (98) at ($(8)!0.5!(9)$) [label={[xshift=0cm, yshift=0.3cm]}]{};
\draw (9)
to[R,*-] (98)
to [L,-*] (8);
\node (45) at ($(4)!0.5!(5)$) [label={[xshift=0cm, yshift=0.3cm]}]{};
\draw (4)
to[R,*-] (45)
to [L,-*] (5);
\node (56) at ($(6)!0.5!(5)$) [label={[xshift=0cm, yshift=0.3cm]}]{};
\draw (5)
to[R,*-] (56)
to [L,-*] (6);
\node (67) at ($(6)!0.5!(7)$) [label={[xshift=0cm, yshift=0.3cm]}]{};
\draw (6)
to[R,*-] (67)
to [L,-*] (7);
\node (78) at ($(8)!0.5!(7)$) [label={[xshift=0cm, yshift=0.3cm]}]{};
\draw (8)
to[R,*-] (78)
to [L,-*] (7);
\end{circuitikz}
\caption{{IEEE 9-bus system. All generators have been replaced with inverters of the same rating that implement the control law~\eqref{eq.control.law}. The load dynamics are given by passive RLC circuits.}}
\label{fig:ieee9}
\end{center}
\end{figure}
We can therefore no longer apply Theorem~\ref{thm.main} directly. However, we illustrate how the dVOC~\eqref{eq.control.law} indeed synchronizes the grid to the desired power flows under nominal conditions and behaves well during contingencies. Furthermore, we illustrate {numerically} that, even if the assumptions of Theorem~\ref{thm.main} do not hold, the controller gain $\eta$ must remain small in order to ensure stability of the power network.
We use $\eta = 10^{-3}$ p.u., $\alpha = 10$ p.u., and the $\ell / r$ ratio of the lines is approximated by $\omega_0 \rho = 10$. We simulate the following events:
\begin{itemize}
\item $t = 0\,s$: black start with $\|v_k(0)\|\approx 10^{-4}$ p.u.,
\item $t = 5\,s$: $20\,\%$ active power increase at load $5$,
\item $t = 10\,s$: loss of inverter $1$.
\end{itemize}
The results are illustrated in Figure~\ref{fig.IEEE9}. The controllers are capable of black-starting the grid and converge to a synchronous solution with the desired power injections. When the load is increased ($t=5$\,s), we observe a droop-like behavior: the inverters maintain synchrony and share the power needed to supply the loads. Finally, at $t=10$\,s we simulate a large contingency (the loss of Inverter 1). Inverters 2 and 3 do not lose synchrony and step up their power injection to supply the loads.
\begin{figure}[b!!!]
\vspace{-1em}
\input{ieee9_sym1.tex}
\vspace{-1em}
\caption{Simulation of the IEEE 9-bus system. We illustrate a grid black start at $t=0$ s, a $20\%$ load increase at bus 5 at $t=5$ s, and the loss of inverter $1$ at $t=10$ s. The system is stable for a sufficiently small synchronization gain ($\eta = 10^{-3}$ p.u.).}
\label{fig.IEEE9}
\end{figure}
Note that the current transients are particularly well behaved and do not exhibit the undesirable overshoots typical of other control strategies (e.g., synchronous machine emulation).
In Figure~\ref{fig.IEEE9unstable} we show that, if the gain $\eta$ is increased to $10^{-2}$ p.u., the system becomes unstable. The fact that high-gain control in conjunction with the line dynamics {results in instability is in agreement with the predictions made by Theorem~\ref{thm.main}}.
\begin{figure}[h!!!]
\begin{center}
\vspace{-0.5em}
\input{ieee9unstabe1.tex}
\vspace{-0.5em}
\caption{Simulation of the IEEE 9-bus system for a large gain ($\eta = 10^{-2}$ p.u.). Instability is caused by high-gain control interfering with the line dynamics.}
\label{fig.IEEE9unstable}
\end{center}
\end{figure}
\section{Conclusion and outlook}\label{sec.conclusion}
In this paper, we considered the effect of the transmission-line dynamics on the stability of dVOC for grid-forming power inverters. A detailed stability analysis {for transmission lines with constant inductance to resistance ratio} was provided which shows that the transmission line dynamics have a destabilizing effect on the {multi-inverter} system, and that the gains of the inverter control need to be chosen appropriately. These instabilities cannot be detected using the standard quasi-steady-state approximation that is commonly used in power system stability analysis. Using tools from singular perturbation theory, we obtained explicit bounds on the controller gains and set-points that guarantee (almost) global asymptotic stability of the inverter-based AC power system with respect to a synchronous steady state with the prescribed power-flows. {Broadly speaking, our conditions require that the network is not too heavily loaded and that there is a sufficient time-scale separation between the inverter dynamics and line dynamics. Although the theoretical bounds are only sufficient and only apply for transmission lines with constant inductance to resistance ratio, we used realistic test cases to illustrate that the main salient features uncovered by our theoretical analysis translate to realistic scenarios, i.e., the power system becomes unstable when our sufficient stability conditions are violated.}
Similar instability phenomena induced by the dynamics of the transmission lines were recently observed in~\cite{vorobev2017high} for standard droop control. In view of these recent results, we believe that there is a need for more detailed studies to understand the fundamental limitations for the control of power inverters that arise from the dynamics of {transmission lines with heterogeneous inductance to resistance ratios, transformers, and other network dynamics that are typically not considered in power system stability analysis.}
\section{Introduction}
Many important planning domains naturally occur in continuous spaces involving complex constraints among variables.
Consider planning for an 11 degree-of-freedom (DOF) robot tasked with organizing several blocks in a room.
The robot must find a sequence of {\em pick}, {\em place}, and {\em move} actions involving continuous robot configurations, robot trajectories, block poses, and block grasps.
These variables must satisfy complicated kinematic, collision, and motion constraints which affect the feasibility of the actions.
Often, special purpose procedures that efficiently evaluate and produce satisfying values for these constraints such as inverse kinematic solvers, collision checkers, and motion planners are known.
We propose STRIPStream, a description language that introduces {\em streams} as an interface for incorporating these procedures in the Planning Domain Definition Language (PDDL).
To the best of our knowledge, STRIPStream is the first domain-independent planning language that incorporates sampling procedures.
Each stream has both a procedural and declarative component.
The procedural component is a {\em conditional generator}, a function from a tuple of input values to a finite or infinite sequence of tuples of output values.
Conditional generators can construct new values that depend on existing values, such as new robot configurations that satisfy a kinematic constraint with existing poses and grasps.
The declarative component is a specification of the atoms that these input and output values satisfy.
Streams allow a planner to reason about conditions on the inputs and outputs of a conditional generator while treating its implementation as a blackbox.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{images/rovers2}
\caption{A 3D rovers domain requiring a team of rovers (blue/green robots) to analyze rocks/soil (black/brown objects), image objectives (blue blocks), and communicate to a lander (grey robot).} \label{fig:rovers}
\end{figure}
Inspired by the work of Garrett et al.~\shortcite{GarrettRSS17}, which gave sampling-based approaches for planning in hybrid transition systems, we present several algorithms for solving STRIPStream problems.
Each algorithm constructs and solves a sequence of finite-domain STRIPS~\cite{Fikes71} problems.
This enables state-of-the-art PDDL planners~\cite{helmert2006fast} to be used as search subroutines.
Some of these approaches avoid evaluating many unnecessary streams by only evaluating streams that {\em optimistically} could produce values supporting a plan.
This is advantageous when only a few stream evaluations are required, and stream evaluation is computationally expensive.
We also describe extensions incorporating external functions, action costs, and derived predicates.
Finally, we model several domains (figures~\ref{fig:rovers},~\ref{fig:cooking},~\ref{fig:optimal}) and evaluate the performance of our algorithms.
\section{Related Work}
Many approaches to robotic task and motion planning have developed strategies for handling continuous spaces that go beyond {\it a priori} discretization.
Several of these approaches~\cite{HPN,Erdem,Srivastava14,GarrettIROS15,dantam2016tmp,garrettIJRR2017} have been suggested for
integrating sampling-based motion planning with symbolic planning.
However, those able to plan in realistic robot domains have typically been specialized to that class of problems.
Several PDDL extensions such as PDDL2.1~\cite{Fox03pddl2.1:an} and PDDL+~\cite{fox2006modelling} support planning with numeric variables that evolve as a function of time.
Most planners are restricted to problems with linear or polynomial dynamics governing these variables~\cite{hoffmann2003metric,bryce2015smt,cashmore2016compilation}; however, some planners can handle non-polynomial dynamics by discretizing time~\cite{della2009upmurphi,piotrowski2016heuristic}.
While it may be technically possible to analytically model, for example, collision constraints among three-dimensional meshes using PDDL+, the resulting encoding would be enormous, far exceeding the demonstrated capabilities of current numeric planners.
Semantic attachments~\cite{dornhege09icaps,dornhege09ssrr}, functions computed by an external module, are an existing way of integrating blackbox procedures and PDDL planners.
Condition-checker modules are used to test operator preconditions, and effect-applicator modules modify numeric state variables.
Operators must be parameterized by variables with finite types, restricting the technique to domains with finite branching factors.
Semantic attachments can only produce a single tuple of numeric outputs.
Planning with semantic attachments requires modifying existing PDDL planners to evaluate the attachments during the search.
This also results in many unneeded procedure calls, leading to poor planner performance when the attachments are expensive.
Planning Modulo Theories (PMT)~\cite{gregory2012planning} generalizes semantic attachments by supporting the composition of several modules through custom first-order theories.
\section{STRIPStream}
We build STRIPStream on the STRIPS planning language~\cite{Fikes71}.
A {\em predicate} $p$ is a boolean function.
An {\em atom} $p(\bar{x})$ is an evaluation of predicate $p$ on an {\em object} tuple $\bar{x} = (x_1, ..., x_k)$.
A {\em literal} is an atom or a negated atom.
A {\em state} ${\cal I}$ is a set of atoms.
We make the closed world assumption that atoms not present within a state are false.
An {\em action} $a$ is given by a {\em parameter} tuple $\bar{X} = (X_1, ..., X_k)$, a set of literal {\em preconditions} $\pre{a}$ on $\bar{X}$, and a set of literal {\em effects} $\eff{a}$ on $\bar{X}$.
An {\em action instance} $a(\bar{x})$ is an action $a$ with its parameters $\bar{X}$ replaced with objects $\bar{x}$.
An action instance $a(\bar{x})$ is {\em applicable} in a state ${\cal I}$ if $(\id{pre}^+(a(\bar{x})) \subseteq {\cal I}) \wedge (\id{pre}^-(a(\bar{x})) \cap {\cal I} = \emptyset)$ where the $+$ and $-$ superscripts designate the positive and negative literals respectively.
The result of {\em applying} an action instance $a(\bar{x})$ to state ${\cal I}$ is a new state $({\cal I} \cup \id{eff}^+(a(\bar{x}))) \setminus \id{eff}^-(a(\bar{x}))$.
A STRIPS {\em problem} $({\cal A}, {\cal I}, {\cal G})$ is given by a set of actions ${\cal A}$, an initial state ${\cal I}$, and a goal set of literals ${\cal G}$.
A legal {\em plan} $\pi = \langle a_1(\bar{x}_1), ..., a_k(\bar{x}_k)\rangle$ is a finite sequence of $k$ action instances such that each $a_i(\bar{x}_i)$ is applicable in the $(i-1)$th state resulting from their application.
The {\em preimage} of a plan $\pi$ is the set of literals that must hold in ${\cal I}$ to execute $\pi$:
\begin{equation*}
\proc{preimage}(\pi) = \bigcup_{i =1}^k \Big(\pre{a_i(\bar{x}_i)} - \bigcup_{j <i} \eff{a_j(\bar{x}_j)} \Big).
\end{equation*}
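These set-theoretic definitions translate directly into code. A minimal Python sketch, with atoms as hashable tuples and an action instance as a 4-tuple of sets $(\id{pre}^+, \id{pre}^-, \id{eff}^+, \id{eff}^-)$ (the \proc{Pick} example below follows Section~\ref{sec:example}; the preimage is restricted to the positive fragment for brevity):

```python
def applicable(state, action):
    """pre+ ⊆ I and pre- ∩ I = ∅."""
    pre_pos, pre_neg, _, _ = action
    return pre_pos <= state and not (pre_neg & state)

def apply_action(state, action):
    """(I ∪ eff+) minus eff-."""
    _, _, eff_pos, eff_neg = action
    return (state | eff_pos) - eff_neg

def preimage(plan):
    """∪_i (pre(a_i) − ∪_{j<i} eff(a_j)); positive literals only."""
    achieved, result = set(), set()
    for pre_pos, _, eff_pos, _ in plan:
        result |= pre_pos - achieved
        achieved |= eff_pos
    return result

# Pick-style action: requires Empty(), removes it, adds Holding(b).
pick = ({('Empty',)}, set(), {('Holding', 'b')}, {('Empty',)})
state = {('Empty',)}
assert applicable(state, pick)
print(apply_action(state, pick), preimage([pick]))
```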
\subsection{Streams}
A {\em generator} $g = \langle \bar{y}_1, \bar{y}_2, ... \rangle$ is a finite or infinite, enumerable sequence of object tuples $\bar{y}_i$.
A {\em conditional generator} $f(\bar{X})$ is a function from an object tuple $\bar{x}$ to a generator $f(\bar{x}) = g_{\bar{x}}$.
Let $\evaluate{f(\bar{x})}$ {\em evaluate} the generator and return the subsequent $\bar{y}_i$ in the sequence $g_{\bar{x}}$ if it is defined.
Otherwise, let $\evaluate{f(\bar{x})}$ return $\kw{None}$.
Conditional generators are implemented as blackbox procedures in a host programming language.
A conditional generator is a programmatic specification of a binary {\em relation} on input object tuples $\bar{x}$ and output object tuples $\bar{y}$.
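In a host language such as Python, a conditional generator is simply a generator function. A hedged sketch (the \proc{ik} sampler below is a random stand-in, not a real inverse kinematics solver):

```python
import random

def grasps(b):
    """Finite conditional generator: grasp output tuples for block b."""
    for g in ('top', 'side'):
        yield (g,)

def ik(b, p, g):
    """Infinite conditional generator; a stand-in for an IK sampler
    that would yield 11-DOF configurations satisfying Kin(b, p, g, q)."""
    while True:
        yield (tuple(random.uniform(-3.14, 3.14) for _ in range(11)),)

def evaluate(gen):
    """next(f(x)): next output tuple in the sequence, or None if exhausted."""
    return next(gen, None)

g = grasps('b')
print(evaluate(g), evaluate(g), evaluate(g))   # ('top',) ('side',) None
q = evaluate(ik('b', 'p0', 'g1'))
print(len(q[0]))                               # 11
```

The `None` sentinel matches the semantics of $\evaluate{f(\bar{x})}$ for a finite generator that has been exhausted.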
A {\em stream} $s$ is a conditional generator $s(\bar{X})$ endowed with a declarative specification of the {\em domain}, {\em range}, and {\em graph} of its corresponding relation.
Let $\domain{s}$ be a set of atoms on input parameters $\inp{s} = \bar{X}$ that specify a possibly infinite set of object tuples $\bar{x}$ for which $s(\bar{X})$ is defined.
Let $\graph{s}$ be a set of {\em certified{}} atoms on potentially both $\inp{s}$ and output parameters $\out{s} = \bar{Y}$ representing the range $\{\bar{y};\; \bar{y} \in s(\bar{x})\}$ and graph $\{(\bar{x}, \bar{y});\; \bar{y} \in s(\bar{x})\}$ of $s$.
A {\em stream instance} $s(\bar{x})$ is a stream $s$ with its input parameters $\inp{s}$ replaced by an object tuple $\bar{x}$.
A STRIPStream {\em problem} $({\cal A}, {\cal S}, {\cal I}, {\cal G})$ is given by a set of actions ${\cal A}$, a set of streams ${\cal S}$, an initial state ${\cal I}$, and a goal state ${\cal G}$.
We restrict ourselves to problems in which all stream domain and certified{} predicates are {\em static predicates}, predicates not included within action effects.
Additionally, we require that stream-certified{} atoms are never negated within action preconditions.
As a result, stream-certified{} atoms are interpreted as constant facts that restrict the parameter values for each action.
The set of streams ${\cal S}$ augments the initial state ${\cal I}$, recursively defining a potentially infinite set of atoms ${\cal I}^*$ that hold initially and cannot be changed:
\begin{align*}
{\cal I}^* = {\cal I} \cup \{p(\bar{x}, \bar{y}); \;& s \in {\cal S}, |\bar{x}| = |\inp{s}|, \domain{s(\bar{x})} \subseteq {\cal I}^*, \\
&\bar{y} \in s(\bar{x}), p \in \graph{s}\}.
\end{align*}
A {\em solution} $\pi$ for STRIPStream problem $({\cal A}, {\cal S}, {\cal I}, {\cal G})$ is a legal plan such that $\proc{preimage}(\langle\pi, {\cal G}\rangle) \subseteq {\cal I}^*$.
In supplementary material\footnote{\label{note}https://sites.google.com/view/stripstream-ijcai-2018/},
we prove that STRIPStream planning is {\em undecidable}, but our algorithms are {\em complete} over feasible instances.
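When every generator is finite, ${\cal I}^*$ can be computed as a least fixpoint. A minimal Python sketch restricted to single-input, single-output streams (a simplification of the general definition above; predicate and stream names are illustrative):

```python
def reachable_atoms(init, streams):
    """Least-fixpoint computation of I* for finite, single-input streams.

    streams: (domain_pred, certified_pred, generator_fn) triples; each
    stream takes one input object and certifies atoms over (input, output).
    """
    atoms = set(init)
    changed = True
    while changed:
        changed = False
        for dom, cert, f in streams:
            for atom in list(atoms):
                if len(atom) == 2 and atom[0] == dom:   # domain satisfied
                    for (y,) in f(atom[1]):
                        new = (cert, atom[1], y)
                        if new not in atoms:
                            atoms.add(new)
                            changed = True
    return atoms

# A grasps-style stream certifies Grasp(b, g) for every block b in I.
streams = [('Block', 'Grasp', lambda b: [('top',), ('side',)])]
print(reachable_atoms({('Block', 'b')}, streams))
```

With infinite generators, ${\cal I}^*$ is only recursively enumerable, which is the source of the undecidability result.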
\section{Example Domain} \label{sec:example}
As an illustrative example, we apply STRIPStream to model a robotic manipulation domain with a single manipulator and a finite set of movable blocks.
Our model of the domain uses the following parameters:
$B$ is the name of a block;
$P$ is a 6 DOF block pose placed stably on a fixed surface;
$G$ is a 6 DOF block grasp transform relative to the robot gripper;
$Q$ is an 11 DOF robot configuration; and
$T$ is a trajectory composed of a finite sequence of waypoint robot configurations.
The fluent predicates $\id{AtConf}$, $\id{AtPose}$, $\id{Holding}$, $\id{Empty}$ model the changing robot configuration, object poses, and gripper status over time.
The static predicates $\id{Pose}$, $\id{Grasp}$, $\id{Kin}$, $\id{Motion}$ are constant facts involving parameter values.
\id{Pose} and \id{Grasp} indicate a pose $P$ or grasp $G$ can be used for block $B$.
\id{Kin} is a kinematic constraint between configuration $Q$ and gripper transform $P*G^{-1}$.
\id{Motion} is a constraint that $Q, Q'$ are the start and end configuration for trajectory $T$, and $T$ respects joint limits, self-collisions, and collisions with the fixed environment.
Collisions between blocks and trajectories are handled using a derived predicate \id{Unsafe} described in section~\ref{sec:derived}.
Actions ${\cal A} = \{\proc{Move}, \proc{Pick}, \proc{Place}\}$ are:
\begin{footnotesize}
\begin{codebox}
\Procname{\proc{Move}$(Q, T, Q')$:}
\zi \id{pre}: $\{\id{Motion}(Q, T, Q'), \id{AtConf}(Q), \neg \id{Unsafe}(T)\}$
\zi \id{eff}: $\{\id{AtConf}(Q'), \neg \id{AtConf}(Q)\}$
\end{codebox}
\begin{codebox}
\Procname{\proc{Pick}$(B, P, G, Q)$:}
\zi \id{pre}: $\{\id{Kin(B, P, G, Q)}, \id{AtPose}(B, P), \id{Empty}(),\id{AtConf}(Q)\}$
\zi \id{eff}: $\{\id{Holding}(B, G), \neg \id{AtPose}(B, P), \neg \id{Empty}()\}$
\end{codebox}
\begin{codebox}
\Procname{\proc{Place}$(B, P, G, Q)$:}
\zi \id{pre}: $\{\id{Kin(B, P, G, Q)}, \id{Holding}(B, G), \id{AtConf}(Q)\}$
\zi \id{eff}: $\{\id{AtPose}(B, P), \id{Empty}(), \neg \id{Holding}(B, G)\}$.
\end{codebox}
\end{footnotesize}
Let $\proc{stream}(\bar{X}) \to \bar{Y}$ indicate a stream with input parameters $\bar{X}$ and output parameters $\bar{Y}$.
The streams are ${\cal S} = \{\proc{surface}, \proc{grasps}, \proc{ik}, \proc{motion}\}$.
\proc{surface} deterministically or randomly samples an infinite sequence of stable placements $P$ for block $B$.
\proc{grasps} enumerates a sequence of force-closure grasps $G$ for block $B$.
\proc{ik} calls an inverse kinematics solver to sample configurations $Q$ from a 4D manifold of values (due to manipulator redundancy) that enable the robot to manipulate a block $B$ at pose $P$ with grasp $G$.
\proc{motion} repeatedly calls a motion planner to generate safe trajectories $T$ between pairs of configurations $Q, Q'$.
\setlength\columnsep{-50pt}
\begin{footnotesize}
\begin{multicols}{2}
\begin{codebox}
\Procname{\proc{surface}$(B) \to (P)$:}
\zi \id{dom}: $\{\id{Block}(B)\}$
\zi \id{cert}{}: $\{\id{Pose}(B, P)\}$
\end{codebox}
\begin{codebox}
\Procname{\proc{grasps}$(B) \to (G)$:}
\zi \id{dom}: $\{\id{Block}(B)\}$
\zi \id{cert}{}: $\{\id{Grasp}(B, G)\}$
\end{codebox}
\begin{codebox}
\Procname{\proc{ik}$(B, P, G) \to (Q)$:}
\zi \id{dom}: $\{\id{Pose}(B, P), \id{Grasp}(B, G)\}$
\zi \id{cert}{}: $\{\id{Kin}(B, P, G, Q), \id{Conf}(Q)\}$
\end{codebox}
\begin{codebox}
\Procname{\proc{motion}$(Q, Q') \to (T)$:}
\zi \id{dom}: $\{\id{Conf}(Q), \id{Conf}(Q')\}$
\zi \id{cert}{}: $\{\id{Motion}(Q, T, Q')\}$
\end{codebox}
\end{multicols}
\end{footnotesize}
\section{Algorithms}
We present several algorithms for solving STRIPStream problems.
These algorithms alternate between evaluating stream instances and solving induced STRIPS problems.
Let $\proc{search}({\cal A}, {\cal I}, {\cal G})$ be any sound and complete search algorithm for STRIPS problems.
\proc{search} can be implemented using an off-the-shelf STRIPS planner without modification, to take advantage of existing, efficient search algorithms.
\subsection{Incremental} \label{sec:incremental}
The first algorithm directly adapts the incremental approach of Garrett et al.~\shortcite{GarrettRSS17} to STRIPStream.
The {\em incremental} algorithm iteratively constructs ${\cal I}^*$.
On each iteration, the current set of atoms $U$ becomes the initial state in a STRIPS problem $({\cal A}, U, {\cal G})$ that is solved using \proc{search}.
If \proc{search} finds a plan $\pi$, it is returned as a solution.
Otherwise, the procedure $\proc{instances}({\cal S}, U)$ constructs all stream instances $s(\bar{x})$ with domain atoms satisfied by $U$.
For each stream instance $s(\bar{x})$, the next output object tuple $\evaluate{s(\bar{x})} = \bar{y}$ is queried.
When $\bar{y} \neq \kw{None}$, the procedure $\proc{\cer}(s(\bar{x}), \bar{y})$ returns the certified{} atoms corresponding to $s(\bar{x})$ and $\bar{y}$.
These new atoms are added to $U$ and the next iteration begins.
\begin{footnotesize}
\begin{codebox}
\Procname{$\proc{incremental}({\cal A}, {\cal S}, {\cal I}, {\cal G}):$}
\li $U = \kw{copy}({\cal I})$
\li \While \kw{True}: \Then
\li $\pi = \proc{search}({\cal A}, U, {\cal G})$
\li \If $\pi \neq \kw{None}$: \Return $\pi$
\li \For $s(\bar{x}) \in \proc{instances}({\cal S}, U)$: \Then
\li $U \mathrel{+}= \proc{\cer}(s(\bar{x}), \kw{next}(s(\bar{x})))$
\End\End
\end{codebox}
\end{footnotesize}
\begin{footnotesize}
\begin{equation*}
\proc{instances}({\cal S}, U) = \{s(\bar{x});\; s \in {\cal S}, |\bar{x}| = |\inp{s}|, \domain{s(\bar{x})} \subseteq U\}
\end{equation*}
\begin{equation*}
\proc{\cer}(s(\bar{x}), \bar{y}) = \{p(\bar{x}, \bar{y}) \mid p \in \graph{s}\} \kw{ if } \bar{y} \neq \kw{None} \kw{ else } \emptyset
\end{equation*}
\end{footnotesize}
The incremental algorithm blindly evaluates all stream instances, which can result in significant overhead when stream evaluations are computationally expensive.
Additionally, it can produce large STRIPS problems when $U$ grows quickly.
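The incremental loop can be sketched in a few lines of Python (single-input, single-output streams as before; the `search` stub below merely illustrates the STRIPS-planner interface and is not a real planner):

```python
def incremental(streams, init, goal, search):
    """Alternate STRIPS search with exhaustive stream evaluation.

    streams: dict name -> (domain_pred, certified_pred, conditional_generator).
    """
    U = set(init)
    gens = {}                       # one live generator per stream instance
    while True:
        plan = search(U, goal)
        if plan is not None:
            return plan
        progress = False
        for name, (dom, cert, f) in streams.items():
            for atom in list(U):    # instances(S, U): domain atoms in U
                if len(atom) == 2 and atom[0] == dom:
                    key = (name, atom[1])
                    if key not in gens:
                        gens[key] = f(atom[1])
                    y = next(gens[key], None)
                    if y is not None:               # certify(s(x), y)
                        U.add((cert, atom[1], y[0]))
                        progress = True
        if not progress:
            return None             # all finite streams exhausted

# Degenerate "planner": returns the empty plan once the goal atoms hold.
search = lambda U, goal: [] if goal <= U else None
streams = {'grasps': ('Block', 'Grasp', lambda b: iter([('top',)]))}
print(incremental(streams, {('Block', 'b')}, {('Grasp', 'b', 'top')}, search))
```

The sketch makes the weakness concrete: every stream instance whose domain atoms hold is queried on every iteration, regardless of whether its outputs could support a plan.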
\subsection{Focused} \label{sec:focused}
Our other algorithms are inspired by and improve on the focused approach of Garrett et al.~\shortcite{GarrettRSS17}.
We present these algorithms in a unified manner using the shared meta-procedure \proc{focused}.
Each subroutine of \proc{focused} admits several possible implementations that result in different algorithms.
The focused algorithms each selectively choose which stream instances to evaluate based on whether they could support a solution.
The key idea is to plan using {\em optimistic objects} that represent hypothetical stream outputs before evaluating actual stream outputs.
These values are optimistic in the sense that their corresponding stream instance may not actually be able to produce a satisfying value.
For instance, a stream instance with a particular pose and grasp pair as inputs may not admit any collision-free inverse kinematic solutions.
Upon failing to produce a desired value, the focused algorithms will replan without that stream instance.
\begin{footnotesize}
\begin{codebox}
\Procname{$\proc{focused}({\cal A}, {\cal S}, {\cal I}, {\cal G}):$}
\li $U = \kw{copy}({\cal I}); D = \emptyset$
\li \While \kw{True}: \Then
\li $S = \proc{optimistic-streams}({\cal S}, D, U)$
\li $\psi, \pi = \proc{plan-streams}({\cal A}, S, U, {\cal G})$
\li \kw{if} $\pi = \kw{None}$: $D = \emptyset$ \Comment Infeasible $\implies$ reset $D$
\li \kw{else if} $\psi = \langle\rangle$: \Return $\pi$ \Comment Empty $\psi$ $\implies$ $\pi$ is a solution
\li \kw{else} \Then:
\li \For $s(\bar{x}) \in \psi$: \If $\domain{s(\bar{x})} \subseteq U$: \Then
\li $U \mathrel{+}= \proc{\cer}(s(\bar{x}), \kw{next}(s(\bar{x})))$
\li $D \mathrel{+}= \{s(\bar{x})\}$
\End\End
\End
\End
\end{codebox}
\end{footnotesize}
\proc{focused} maintains a set of certified atoms $U$ and a set of {\em disabled} stream instances $D$, which is used to limit the stream instances that can be optimistically evaluated.
On each iteration, \proc{focused} calls \proc{optimistic-streams} to generate a sequence of {\em optimistic stream instances} $S$, stream instances producing optimistic objects.
It calls \proc{plan-streams} to identify both plan $\pi$ and a {\em stream plan} $\psi$ of stream instances that, presuming successful evaluations, support $\pi$.
If $\pi = \kw{None}$, the current set of optimistic stream instances $S$ is insufficient.
In this case, the disabled set is reset by setting $D = \emptyset$ to allow each $s(\bar{x}) \in D$ to be used again within $S$ on the next iteration.
If $\psi = \langle\rangle$, the plan $\pi$ is executable without any additional stream evaluations and is returned as a solution.
Otherwise, \proc{focused} evaluates the stream instances on $\psi$ with domain atoms satisfied by $U$.
New atoms are added to $U$, and evaluated instances are added to the disabled set $D$.
\subsection{Optimistic Evaluation} \label{sec:optimistic-object}
The procedure \proc{optimistic-streams} optimistically evaluates each stream instance absent from $D$ in order to characterize the set of possibly achievable atoms.
It returns a sequence of optimistic stream instances $S$ to be used within \proc{plan-streams}.
Rather than evaluate each stream instance using $\evaluate{s(\bar{x})}$, a single {\em optimistic object} tuple is constructed using the procedure $\proc{opt-eval}(s(\bar{x}))$.
All certified{} atoms resulting from $\proc{opt-eval}(s(\bar{x}))$ are added to $U^*$, an optimistic version of $U$, and $U^*$ is used to instantiate all new stream instances resulting from the new optimistic atoms.
\begin{footnotesize}
\begin{codebox}
\Procname{$\proc{optimistic-streams}({\cal S}, D, U):$}
\li $U^* = \kw{copy}(U); S = \big(\proc{instances}({\cal S}, U) - D\big)$
\li \For $s(\bar{x}) \in S$: \Then
\li $U^* \mathrel{+}= \proc{\cer}(s(\bar{x}), \proc{opt-eval}(s(\bar{x})))$
\li $S \mathrel{+}= \big(\proc{instances}({\cal S}, U^*) - S\big)$
\End
\li \Return $S$
\end{codebox}
\end{footnotesize}
\subsubsection{Unique Optimistic Objects}
Optimistic output objects can be implemented by \proc{opt-eval} in at least two ways.
The first creates a {\em unique} optimistic object for each {\em stream instance} $s(\bar{x})$ and output object parameter $Y \in \out{s}$.
Each optimistic object encodes a history of stream evaluations that produces it.
As an example, consider a STRIPStream problem in the robotics domain (section~\ref{sec:example}) requiring that block $b$ be moved from initial pose $p_0$ to goal pose $p_*$.
The objects $q_0, p_0, p_*, g_1, q_1, ...$ are real-valued vectors ({\it i.e.} $q_0 = [1.71, -2.44, ...]$).
The initial state is
\begin{align*}
{\cal I} = \{&\id{Conf}(q_0), \id{Block}(b), \id{Pose}(b, p_0), \id{Pose}(b, p_*) \\
&\id{AtConf}(q_0), \id{AtPose}(b, p_0), \id{Empty}()\}.
\end{align*}
The goal is ${\cal G} = \{\id{AtPose}(b, p_*)\}$.
On the first iteration, the optimistic stream instances produced are
\begin{footnotesize}
\begin{align*}
S = \langle& \proc{grasps}(b) \to \pmb{\gamma}_1, \proc{surface}(b) \to \pmb{\rho}_1, \proc{ik}(b, p_0, \pmb{\gamma}_1) \to \pmb{\zeta}_1, \\
&\proc{ik}(b, p_*, \pmb{\gamma}_1) \to \pmb{\zeta}_2, \proc{ik}(b, \pmb{\rho}_1, \pmb{\gamma}_1) \to \pmb{\zeta}_3, \\
&\proc{motion}(q_0, q_0) \to \pmb{\tau}_1, \proc{motion}(q_0, \pmb{\zeta}_1) \to \pmb{\tau}_2, \\
&\proc{motion}(\pmb{\zeta}_1, q_0) \to \pmb{\tau}_3, \proc{motion}(\pmb{\zeta}_1, \pmb{\zeta}_2) \to \pmb{\tau}_4, ...\rangle.
\end{align*}
\end{footnotesize}
Each $\pmb{\gamma}_i, \pmb{\rho}_i, \pmb{\zeta}_i, \pmb{\tau}_i$ indicates a unique optimistic output.
In total, 16 \proc{motion} stream instances are created.
A possible plan $\pi_1$ and stream plan $\psi_1$ produced by \proc{plan-streams} is
\begin{footnotesize}
\begin{align*}
\pi_1 = \langle& \proc{Move}(q_0, \pmb{\tau}_2, \pmb{\zeta}_1), \proc{Pick}(b, p_0, \pmb{\gamma}_1, \pmb{\zeta}_1), \\
&\proc{Move}(\pmb{\zeta}_1, \pmb{\tau}_4, \pmb{\zeta}_2), \proc{Place}(b, p_*, \pmb{\gamma}_1, \pmb{\zeta}_2) \rangle \\
\psi_1 = \langle& \proc{grasps}(b) \to \pmb{\gamma}_1, \proc{ik}(b, p_0, \pmb{\gamma}_1) \to \pmb{\zeta}_1, \proc{ik}(b, p_*, \pmb{\gamma}_1) \to \pmb{\zeta}_2, \\
&\proc{motion}(q_0, \pmb{\zeta}_1) \to \pmb{\tau}_2, \proc{motion}(\pmb{\zeta}_1, \pmb{\zeta}_2) \to \pmb{\tau}_4 \rangle.
\end{align*}
\end{footnotesize}
Notice that $\psi_1$ is substantially shorter than $S$ as many stream instances are not needed.
On this iteration, \proc{focused} will evaluate just $\proc{grasps}(b)$, producing a new grasp $g_1$, because the other streams have optimistic objects as inputs.
This process repeats for three iterations before finding a solution, resulting in the following stream instances and objects:
\begin{align*} \label{eqn:sequence}
&1)\; \evaluate{\proc{grasps}(b)} = g_1 \\
&2)\; \evaluate{\proc{ik}(b, p_0, g_1)} = q_1, \evaluate{\proc{ik}(b, p_*, g_1)} = q_2 \\
&3)\; \evaluate{\proc{motion}(q_0, q_1)} = t_1, \evaluate{\proc{motion}(q_1, q_2)} = t_2
\end{align*}
\subsubsection{Shared Optimistic Objects}
A downside of using unique optimistic objects is that they can result in large STRIPS problems.
In the previous example, 21 optimistic objects are created when only 5 are needed.
An alternative implementation of \proc{opt-eval} creates a {\em shared} optimistic object for each {\em stream} $s$ (rather than stream instance) and output object parameter $Y \in \out{s}$.
This strategy is overly optimistic because it assumes the same object will result from two different stream instances, which is typically unlikely for streams involving continuous variables.
But as a result, the number of optimistic objects is much smaller.
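The two \proc{opt-eval} strategies can be contrasted in a few lines of Python. This is an illustrative sketch, not the actual STRIPStream implementation; the class and function names are hypothetical.

```python
import itertools

class OptimisticObject:
    """Placeholder value standing in for a not-yet-computed stream output."""
    _counter = itertools.count(1)

    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return self.name

def opt_eval_unique(stream, inputs, out_params):
    # Unique strategy: a fresh optimistic object per stream *instance*
    # and output parameter (e.g. #Q1, #Q2, ... for successive ik calls).
    return [OptimisticObject('#%s%d' % (p, next(OptimisticObject._counter)))
            for p in out_params]

_shared_objects = {}

def opt_eval_shared(stream, inputs, out_params):
    # Shared strategy: one optimistic object per *stream* and output
    # parameter, reused by every instance of that stream.
    outs = []
    for p in out_params:
        key = (stream, p)
        if key not in _shared_objects:
            _shared_objects[key] = OptimisticObject('#%s-%s' % (stream, p))
        outs.append(_shared_objects[key])
    return outs
```

Calling the unique version for two different \proc{ik} instances yields distinct placeholders, while the shared version returns the same object for both, which is what keeps the resulting STRIPS problem small.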
Revisiting our example, the optimistic stream instances are
\begin{footnotesize}
\begin{align*}
S = \langle& \proc{grasps}(b) \to \pmb{\gamma}, \proc{surface}(b) \to \pmb{\rho}, \proc{ik}(b, p_0, \pmb{\gamma}) \to \pmb{\zeta}, \\
&\proc{ik}(b, p_*, \pmb{\gamma}) \to \pmb{\zeta}, \proc{ik}(b, \pmb{\rho}, \pmb{\gamma}) \to \pmb{\zeta}, \\
&\proc{motion}(q_0, q_0) \to \pmb{\tau}, \proc{motion}(q_0, \pmb{\zeta}) \to \pmb{\tau}, \\
&\proc{motion}(\pmb{\zeta}, q_0) \to \pmb{\tau}, \proc{motion}(\pmb{\zeta}, \pmb{\zeta}) \to \pmb{\tau} \rangle.
\end{align*}
\end{footnotesize}
Here, each $\pmb{\gamma}, \pmb{\rho}, \pmb{\zeta}, \pmb{\tau}$ indicates a shared optimistic output.
The resulting plan and stream plans differ from before.
\begin{footnotesize}
\begin{align*}
\pi_1 = \langle& \proc{Move}(q_0, \pmb{\tau}, \pmb{\zeta}), \proc{Pick}(b, p_0, \pmb{\gamma}, \pmb{\zeta}), \proc{Place}(b, p_*, \pmb{\gamma}, \pmb{\zeta}) \rangle \\
\psi_1 = \langle& \proc{grasps}(b) \to \pmb{\gamma}, \proc{ik}(b, p_0, \pmb{\gamma}) \to \pmb{\zeta}, \\
&\proc{ik}(b, p_*, \pmb{\gamma}) \to \pmb{\zeta}, \proc{motion}(q_0, \pmb{\zeta}) \to \pmb{\tau} \rangle
\end{align*}
\end{footnotesize}
No replacement of $\pmb{\gamma}, \pmb{\zeta}, \pmb{\tau}$ in $\pi_1$ will result in a solution because the plan lacks a \proc{Move} between \proc{Pick} and \proc{Place}.
However, $\pi_1$ still leads to a stream plan $\psi_1$ that identifies useful stream instances.
In this example, both shared objects and unique objects lead to the same sequence of evaluated stream instances.
But the sequence of plans using shared objects is:
\begin{footnotesize}
\begin{align*}
\pi_2 = \langle& \proc{Move}(q_0, \pmb{\tau}, \pmb{\zeta}), \proc{Pick}(b, p_0, g_1, \pmb{\zeta}), \proc{Place}(b, p_*, g_1, \pmb{\zeta}) \rangle \\
\pi_3 = \langle& \proc{Move}(q_0, \pmb{\tau}, q_1), \proc{Pick}(b, p_0, g_1, q_1), \proc{Move}(q_1, \pmb{\tau}, q_2), \\
&\proc{Place}(b, p_*, g_1, q_2) \rangle \\
\pi_4 = \langle& \proc{Move}(q_0, t_1, q_1), \proc{Pick}(b, p_0, g_1, q_1), \proc{Move}(q_1, t_2, q_2), \\
&\proc{Place}(b, p_*, g_1, q_2) \rangle.
\end{align*}
\end{footnotesize}
On the second iteration, both $\proc{ik}(b, p_0, g_1)$ and $\proc{ik}(b, p_*, g_1)$ are evaluated, resulting in distinct configurations $q_1, q_2$.
This forces $\pi_3$ and $\pi_4$ to use two \proc{Move} actions instead of just one.
\subsection{Planning Streams and Actions} \label{sec:optimistic-plan}
The $\proc{plan-streams}$ subroutine is tasked with identifying both action plans $\pi$ as well as stream plans $\psi$.
\subsubsection{Simultaneous}
The first implementation treats stream instances as actions and simultaneously plans $\pi$ and $\psi$.
The combined plan $\pi_S$ interleaves executing actions and evaluating streams.
The procedure $\proc{stream-actions}(S)$ transforms each stream instance into an action instance.
For each stream instance $s(\bar{x}) \in S$, it creates an action instance $a_s()$ by assigning $\pre{a_s} = \domain{s(\bar{x})}$
and $\eff{a_s} = \proc{\cer}(s(\bar{x}), \proc{opt-eval}(s(\bar{x})))$.
For example, the stream action corresponding to stream instance $\proc{ik}(b, p_0, G) \to Q$ has $\pre{a_s} = \{\id{Pose}(b, p_0), \id{Grasp}(b, G)\}$ and $\eff{a_s} = \{\id{Kin}(b, p_0, G, Q), \id{Conf}(Q)\}$.
Its preconditions require that the stream action for $\proc{grasps}(b) \to G$ be applied in the search before $\proc{ik}(b, p_0, G) \to Q$ can be applied.
As a result, the subsequence $\psi$ will automatically be ordered correctly.
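A minimal Python sketch of the \proc{stream-actions} transformation (the data layout and names here are hypothetical, chosen only to illustrate the mapping from domain atoms to preconditions and certified atoms to effects):

```python
def stream_actions(stream_instances):
    """Turn each optimistic stream instance into a STRIPS-style action:
    preconditions are the instance's domain atoms, effects are its
    certified atoms over the optimistic output objects."""
    actions = []
    for inst in stream_instances:
        actions.append({
            'name': 'eval-' + inst['name'],
            'pre': frozenset(inst['domain']),
            'eff': frozenset(inst['certified']),
        })
    return actions

# The ik example from the text: ik(b, p0, G) -> Q
ik_instance = {
    'name': 'ik(b,p0,G)',
    'domain': [('Pose', 'b', 'p0'), ('Grasp', 'b', 'G')],
    'certified': [('Kin', 'b', 'p0', 'G', 'Q'), ('Conf', 'Q')],
}
```

Because the $\id{Grasp}(b, G)$ precondition can only be achieved by the stream action for $\proc{grasps}(b) \to G$, the search orders the two stream actions correctly for free.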
\begin{footnotesize}
\begin{codebox}
\Procname{$\proc{simultaneous-plan-streams}({\cal A}, S, U, {\cal G}):$}
\li $A_S = \proc{stream-actions}(S)$
\li $\pi_S = \proc{search}({\cal A} \cup A_S, U, {\cal G})$
\li \If $\pi_S = \kw{None}$: \Return $\langle\rangle, \kw{None}$
\li \Return $(\pi_S \cap A_S)$, $(\pi_S - A_S)$
\end{codebox}
\end{footnotesize}
Even for cost-insensitive planning, stream actions can be given user-specified costs that reflect the estimated computational {\em effort} required to produce satisfying values.
Effort can incorporate the evaluation overhead, likelihood of producing any output objects, and likelihood of producing output objects that satisfy plan-wide conditions.
Then, a cost-sensitive implementation of \proc{search} can bias the search to find combined plans $\pi_S$ leading to low-effort stream plans $\psi$.
In our experiments, we use unit costs to generically bias towards short stream plans.
A disadvantage to simultaneously planning $\pi$ and $\psi$ is that both the planning horizon and branching factor increase, making each call to \proc{search} more expensive (section~\ref{sec:results}).
\subsubsection{Sequential}
Alternatively, \proc{plan-streams} can be implemented by first planning for $\pi$ using all optimistic atoms and then planning $\psi$ by identifying which stream instances $\pi$ requires.
First, the set of optimistic atoms $U^*$ is recreated and used to augment $U$ within \proc{search}.
When $\pi$ is found, the difference between the plan preimage $\proc{preimage}(\langle\pi, {\cal G}\rangle)$ and $U$ determines which optimistic atoms are required.
Then, the procedure \proc{retrace-streams} is used to identify a subsequence $\psi$ of $S$ that produces these atoms.
By separately planning $\pi$ and $\psi$, both search problems are smaller than the combined problem.
Stream instance effort cannot be as faithfully incorporated when just planning for $\pi$.
However, the effort required to achieve the certified atoms in action instance preconditions can be approximated by applying the $h_\id{max}$ or $h_\id{add}$ heuristics~\cite{bonet2001planning} to stream instances.
\begin{footnotesize}
\begin{codebox}
\Procname{$\proc{sequential-plan-streams}({\cal A}, S, U, {\cal G}):$}
\li $U^* = \bigcup_{s(\bar{x}) \in S} \proc{\cer}(s(\bar{x}), \proc{opt-eval}(s(\bar{x})))$
\li $\pi = \proc{search}({\cal A}, U \cup U^*, {\cal G})$
\li \If $\pi = \kw{None}$: \Return $\langle\rangle, \kw{None}$
\End
\li \Return $\proc{retrace-streams}(S, \proc{preimage}(\langle\pi, {\cal G}\rangle) - U), \pi$
\end{codebox}
\end{footnotesize}
The implementation of \proc{retrace-streams} affects how many stream instances are returned and later evaluated.
A trivial but sound implementation would be to simply return $S$.
However, $S$ likely contains many stream instances that are not necessary for the execution of $\pi$.
\proc{retrace-streams} could instead be implemented by optimizing for the lowest effort $\psi$ that satisfies the preimage.
This optimization is identical to solving for the optimal delete-relaxation plan~\cite{HoffmannN01}.
Suboptimal $\psi$ can be identified in polynomial time by greedily linearizing stream instances as done when computing the FF heuristic~\cite{HoffmannN01}.
Alternatively, as we do in the experiments, an effort-optimal $\psi$ for $\pi$ can be recovered by a second call to \proc{search}:
\begin{equation*}
\begin{footnotesize}
\psi = \proc{search}(\proc{stream-actions}(S), U, \proc{preimage}(\langle\pi, {\cal G}\rangle)).
\end{footnotesize}
\end{equation*}
We found that this explicit optimization results in surprisingly low overhead and helps prune redundant stream instances resulting from planning using shared objects.
\section{External Functions} \label{sec:func}
STRIPStream can be extended to support planning with procedurally-specified {\em functions} for cost-sensitive planning.
An {\em external function} $f(\bar{X}) \to [0, \infty)$ is a nonnegative function on parameter tuple $\bar{X}$.
Like streams, $f$ is specified in a host programming language, and the domain of $f$ is declared by a set of atoms $\domain{f}$ on $\bar{X}$.
In cost-sensitive planning, each action $a$ may have an effect that increases the total plan cost by $f(\bar{X})$.
Because functions may be defined on infinitely large domains, the set of solution costs may have an {\em infimum} but not a minimum.
For example, the sampled set of trajectories between two configurations might converge in cost to a lower bound without actually reaching it.
Because of this, we will only consider the feasibility problem of producing a solution with cost below a specified cost threshold $c$.
The incremental algorithm can be extended to support functions by evaluating all external functions before running \proc{search} and using an implementation of \proc{search} supporting the specification of a maximum cost $c$.
The focused algorithms require additional care because external functions might be grounded using optimistic objects.
We cannot generally evaluate external functions on optimistic objects; however, we can use lower bounds on their evaluations, such as zero, to ensure the resulting STRIPS problem is optimistic.
The focused algorithms can avoid evaluating stream instances supporting plans exceeding the cost threshold $c$.
Because of this, their performance improves when given tighter lower bounds on external functions, as these prune plans from consideration.
Thus, we allow the user to specify a lower-bound procedure for each external function.
The lower-bound procedure can simply return a constant or can be a function defined on optimistic objects.
For example, the distance along an optimistic robot trajectory $\tau$ from $q$ to $q'$ can be lower-bounded by the straight-line distance between $q$ and $q'$.
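Such a straight-line bound can be implemented directly (a sketch; the function name is illustrative). It is a valid lower bound because no trajectory between two configurations can be shorter than the straight line connecting them:

```python
import math

def straight_line_lower_bound(q, q_prime):
    """Euclidean distance between configurations q and q': a lower bound
    on the length of any trajectory connecting them, usable as the cost
    bound for an optimistic trajectory object before it is sampled."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(q, q_prime)))
```

For an optimistic trajectory between $q = (0, 0)$ and $q' = (3, 4)$, the bound evaluates to $5$.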
\subsubsection{Axioms} \label{sec:derived}
An {\em axiom} is a rule that automatically updates the value of a {\em derived predicate} at each state.
Axioms are useful for factoring complex preconditions in order to obtain more compact grounded representations~\cite{thiebaux2005defense}.
Derived predicates can easily be incorporated within STRIPStream when either each atom is updated by a bounded number of axioms per state or derived atoms are never negated in preconditions.
The incremental algorithm requires no modification, and the focused algorithms only require modification of \proc{preimage} to extract a sequence of axioms.
In our robotics domain, we use \proc{UnsafeAxiom} (below) to update the $\id{Unsafe}$ derived predicate preventing the use of a currently colliding trajectory.
We use a similar \proc{UnsafeHoldingAxiom} (omitted) to update $\id{Unsafe}$ when holding a block $B'$ with grasp $G$.
\proc{UnsafeAxiom} uses a {\em boolean} external predicate $\id{Collision}$, which calls a collision checker to test whether trajectory $T$ is in collision when block $B$ is placed at pose $P$.
In the focused algorithms, \id{Collision} is lower-bounded by \kw{False}, which optimistically assumes trajectories are collision-free.
\begin{footnotesize}
\begin{codebox}
\Procname{\proc{UnsafeAxiom}$(T, B, P)$:}
\zi \id{pre}: $\{\id{Collision}(T, B, P), \id{AtPose}(B, P), \id{Empty}()\}$
\zi \id{eff}: $\{\id{Unsafe}(T)\}$
\end{codebox}
\end{footnotesize}
\section{Results} \label{sec:results}
We experimented using the incremental algorithm (\id{Incr}) and four focused algorithms (\id{Foc-\{U,S\}\{1,2\}}) on 25 randomly generated problems of 3 sizes within 3 domains.
We enforced a 5 minute timeout that includes stream evaluation time.
We use \id{\{U,S\}} to specify unique (\id{U}) vs shared (\id{S}) optimistic objects (section~\ref{sec:optimistic-object}) and \id{\{1,2\}} to specify simultaneous (\id{1}) vs sequential (\id{2}) optimistic planning (section~\ref{sec:optimistic-plan}).
STRIPStream was implemented in Python.
A link to the implementation will be provided in the camera-ready copy of this paper.
We use the Fast Downward~\cite{helmert2006fast} planning system to implement \proc{search}.
The stream conditional generators were implemented using the OpenRAVE robotics framework~\cite{openrave}.
Domain 1 (figure~\ref{fig:rovers}) extends the classic PDDL domain {\em rovers}~\cite{long20033rd} by incorporating 3D visibility, distance, and collision constraints.
Although two rovers (blue/green robots) are available, the left rover cannot reach any of the rocks (black objects), soil (brown objects), or configurations near objectives (blue objects).
The right rover alone must analyze the rocks and soil as well as photograph the objectives.
However, the left rover blocks line-of-sight to the lander (grey robot), requiring it to be moved so that the right rover can communicate its data.
The top of table~\ref{table:feasible} shows the results when the number of objectives is varied among \{1,3,5\}.
This domain is the most symbolically difficult of the three domains.
As a result, the focused algorithms, although initially outperforming \id{Incr}, are more affected by the increase in problem size due to their repeated calls to \proc{search}.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{images/cooking2}
\caption{The robot must clean the blue blocks using the sink (blue surface) and cook the green blocks using the stove (green surface).} \label{fig:cooking}
\end{figure}
\begin{table}[ht]
\centering
\begin{footnotesize}
\begin{tabular}{|c|g|c|g|c|g|c|g|c|g|c|}
\hline
D-\# & \multicolumn{2}{|c|}{\id{Incr}}&\multicolumn{2}{|c|}{\id{Foc-U1}}&\multicolumn{2}{|c|}{\id{Foc-S1}}&\multicolumn{2}{|c|}{\id{Foc-U2}}&\multicolumn{2}{|c|}{\id{Foc-S2}}\\
\hline
& c & t & c & t & c & t & c & t & c & t \\ \hline
1-1 & 25 & 59 & 25 & 27 & 25 & 33 & 25 & 33 & 25 & 23
\\ \hline
1-3 & 25 & 63 & 25 & 40 & 25 & 44 & 25 & 47 & 25 & 38
\\ \hline
1-5 & 25 & 65 & 23 & 67 & 20 & 70 & 25 & 95 & 24 & 41
\\ \hline \hline
2-0 & 23 & 75 & 23 & 5 & 23 & 8 & 23 & 5 & 23 & 6
\\ \hline
2-20 & 0 & - & 22 & 51 & 22 & 75 & 23 & 45 & 23 & 21
\\ \hline
2-40 & 0 & - & 19 & 89 & 20 & 83 & 21 & 108 & 21 & 53
\\ \hline
\end{tabular}
\end{footnotesize}
\caption{The number of successes (c) and mean success runtime in seconds (t) over 25 generated problems per size for domains 1 \& 2.}
\label{table:feasible}
\end{table}
Domain 2 (figure~\ref{fig:cooking}) is a ``cooking" task and motion planning domain where blocks can be cleaned and, once cleaned, cooked.
Blocks can be cleaned when placed in the sink (blue surface) and cooked when placed on the stove (red surface).
The goal is to clean 2 blue blocks, cook 2 green blocks, and return blocks of both colors to their initial poses.
Red blocks obstruct the cleaning and cooking process and may need to be moved.
The bottom of table~\ref{table:feasible} shows the results when the number of red blocks is varied among \{0,20,40\}.
As shown, shared objects (\id{S}) outperform unique objects (\id{U}) and sequential search (\id{2}) outperforms simultaneous search (\id{1}).
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{images/optimal_smaller}
\caption{Subject to a maximum plan cost constraint, the robot must pick up {\em some} green block and return to its initial configuration.} \label{fig:optimal}
\end{figure}
Domain 3 (figure~\ref{fig:optimal}) is a cost-sensitive task where the goal conditions are that the robot is holding {\em any} green block in its initial configuration and the plan cost is below a fixed threshold.
The robot must navigate around a transparent wall to move between rooms.
As a result, plans that instead manipulate the left red block to reach the left green block are the only ones that meet the cost threshold.
Table~\ref{table:optimal} shows the results when the number of green blocks on the far right table is varied among \{0,2,4\}.
Instead of \id{Foc-U1} and \id{Foc-S1}, we experiment with \id{Foc-U2-Bd}, a version of \id{Foc-U2} endowed with a straight-line lower bound (section~\ref{sec:func}) on base motion costs.
The lower bound is able to prune optimistic plans manipulating the far right green blocks from consideration as they are proven to exceed the cost threshold.
However, it does not prune plans manipulating the center object which would be below the cost threshold in the absence of the transparent wall.
Using the custom lower bound results in improved performance over using the default lower bound of zero in \id{Foc-U2} and \id{Foc-S2}.
\begin{table}[ht]
\centering
\begin{footnotesize}
\begin{tabular}{|c|g|c|g|c|g|c|g|c|}
\hline
D-\# & \multicolumn{2}{|c|}{\id{Incr}}&\multicolumn{2}{|c|}{\id{Foc-U2}}&\multicolumn{2}{|c|}{\id{Foc-S2}}&\multicolumn{2}{|c|}{\id{Foc-U2-Bd}}\\
\hline
&
c & t &
c & t &
c & t &
c & t \\
\hline
3-0 & 19 & 126 & 25 & 26 & 25 & 38 & 25 & 12
\\ \hline
3-2 & 10 & 222 & 19 & 98 & 25 & 102 & 25 & 19
\\ \hline
3-4 & 0 & - & 17 & 100 & 19 & 100 & 25 & 17
\\ \hline
\end{tabular}
\end{footnotesize}
\caption{The number of successes (c) and mean success runtime in seconds (t) over 25 generated problems for domain 3.}
\label{table:optimal}
\end{table}
The focused algorithms outperform the incremental algorithm due to their ability to selectively evaluate streams.
The focused algorithm that uses shared objects and sequential search (\id{Foc-S2}) scales best as problem difficulty increases because it produces smaller STRIPS problems.
\section{Conclusion}
STRIPStream is a general-purpose framework for incorporating sampling procedures in a planning language.
We gave several algorithms for STRIPStream that reduce planning to a series of STRIPS problems.
Future work involves further extending STRIPStream to incorporate numeric preconditions.
\newpage
\bibliographystyle{named}
\section{Introduction}
Phase transitions are ubiquitous phenomena in nature. They play an important role in very diverse contexts: from the boiling of water for your spaghetti, to the large-scale structure of the universe, to DNA dynamics. During a phase transition the properties of a system can change abruptly, often discontinuously, upon dialing an external parameter such as temperature, pressure, or others. In contrast to \textit{thermal} phase transitions, \textit{quantum} phase transitions (QPT) \cite{sachdev2011,2003RPPh...66.2069V} occur at zero temperature $T=0$ and at a precise \textit{critical} value of an external parameter $g\equiv g_c$, \textit{i.e.} the quantum critical point. Although \textit{absolute} zero temperature is clearly not physically realizable, the consequences of the QPT extend also to a finite temperature region near the critical point, known as the \textit{quantum critical region} (QCR) (see fig.~\ref{fig1}). More interestingly, inside the QCR, the critical nature of the system manifests itself in unconventional but universal physical behaviors such as the linear-in-$T$ scaling of the resistivity in \textit{strange metals} \cite{Bruin804}. In recent years QPTs have attracted the interest of many theorists and experimentalists, especially in the condensed matter community, where more and more examples are being discovered: from exotic magnetism, to high-$T_c$ superconductivity and metal-insulator transitions. Quantum phase transitions still represent a big scientific challenge and robust and controllable theoretical models are still under investigation \cite{2007arXiv0709.0964V}.
The presence of impurities or disorder is unavoidable in realistic situations and it can have a deep impact on the physical features of a system and its behaviour across possible phase transitions. The effects of disorder on quantum phase transitions are much stronger than on their classical counterparts and they are subject of intense study in the present days \cite{2006JPhA...39R.143V,2013AIPC.1550..188V}.
The common lore to assess whether a phase transition is stable with respect to a small amount of \textit{randomness} relies on the so-called Harris criterion \cite{0022-3719-7-9-009}. According to that criterion, the phase transition will not be affected by the presence of disorder as far as the correlation length critical exponent $\nu$ satisfies the inequality $d\,\nu>2$, with $d$ the number of spatial dimensions in which randomness is present.
The Harris criterion refers to the behaviour of the system at large length scales and it is not able to capture possible existing features at finite length scale. It thus represents a necessary condition for the stability of a clean critical point, but not a sufficient one. In other words disorder is defined as \textit{Harris-irrelevant} if its effects become less and less important at larger scale. It is nowadays clear that such a criterion has to be completed and new effects, \textit{i.e.} rare regions effects, play a fundamental role in disordered QPTs \cite{2013arXiv1309.0753V,2004PSSBR.241.2118V}. In particular the generic consequence of disorder is to smear the sharp quantum phase transition (see fig.~\ref{fig1}) \cite{2008PhRvL.100x0601H,PhysRevLett.90.107202}.
The smearing or rounding of the sharp QPTs is believed to be a direct manifestation of the so-called rare regions or Griffiths effects \cite{PhysRevLett.23.17,PhysRevLett.54.1321,2010JLTP..161..299V}. First experimentally observed in quantum magnets \cite{PhysRevLett.90.107202}, these phenomena appear in a variety of systems, ranging from classical Ising magnets with planar defects \cite{0305-4470-36-43-017} to
nonequilibrium spreading transitions in the contact process \cite{PhysRevE.70.026108}. Furthermore, disorder correlations are of special relevance for QPTs; even short-range correlations can qualitatively modify quantum smeared phase transitions. Concretely, positive correlations (as in the case of impurity atoms which attract each other) have been shown to enhance the smearing of the transition \cite{PhysRevLett.108.185701,2013AIPC.1550..263N,0295-5075-97-2-20007}. The understanding of the physics of disordered quantum phase transitions is just at a preliminary stage and a complete classification of the various scenarios is still lacking.\\[0.1cm]
\begin{figure}[htp]
\centering
\includegraphics[width=\textwidth]{1}
\caption{\textbf{Left:} Sketch of the phase diagram as a function of the temperature $T$ and the external non-thermal parameter $g$. The quantum phase transition between the two phases happens at $g\equiv g_c$ and $T=0$ but it affects the physics of the system on a larger region extending towards the finite temperature regime, \textit{i.e.} the QCR (orange area). \textbf{Right: }The evolution of the order parameter $\mathcal{P}$ across the quantum phase transition at $T=0$. The black dashed line is the clean case where the QPT is sharp while the orange line is the smeared disordered QPT.}
\label{fig1}
\end{figure}\\
Holography, also known as \textit{gauge-gravity duality}, has emerged in the last decade as a powerful and effective framework to tackle condensed matter questions dealing with \textit{quantum criticality} and strongly correlated systems \cite{Hartnoll:2016apf,Ammon:2015wua,zaanen2015holographic}.
Quantum phase transitions have been already realized in holography in various forms \cite{DHoker:2009mmn,DHoker:2010onp,Iqbal:2010eh,Landsteiner:2015pdh,Gubankova:2014iha,Iqbal:2011aj,Donos:2012js,Baggioli:2016oqk,Baggioli:2016oju}. The idea is that, dialing an external parameter $g$ on top of an extremal $T=0$ gravitational setup, a specific order parameter $\mathcal{P}$\footnote{Note that in our case the order parameter will be the anomalous Hall conductivity and therefore given by two point function of a current operator. An analogous situation appears within the realm of metal-insulator transitions \cite{ 2011arXiv1112.6166D} which have been studied in holography for example in \cite{Donos:2012js,Baggioli:2016oqk,Baggioli:2016oju}. This is in contrast with the ``Landau-Ginzburg-Wilson'' paradigm of phase transitions where the order parameter is the vacuum expectation value of a local operator.} can emerge at a critical value $g\equiv g_c$ and induce a quantum transition between two different phases, \textit{i.e.} two qualitatively different solutions of the bulk equations of motion. As a benchmark example we will consider in this paper the holographic quantum phase transition from a topologically nontrivial Weyl semimetal to a trivial one introduced in \cite{Landsteiner:2015lsa,Landsteiner:2015pdh}.\\[0.1cm]
Weyl semimetals are materials featuring crossing of bands at isolated, non degenerate points, \textit{i.e.} Weyl nodes, in the Brillouin zone at the Fermi level \cite{Xu613,Liu864,Hosur:2013kxa,2015Sci...349..622L,2015PhRvX...5c1013L}. Crucially, the effective description of the degrees of freedom close to these points is that of relativistic fermions. This implies, among others, that anomaly effects related to chiral fermions are present in these materials.
Anomalies play a fundamental role in high energy physics \cite{Kharzeev:2015znc} and lead to concrete physical manifestations such as the decay of the neutral pion to two photons. The observation that anomalies can also have a leading impact on condensed matter is somehow more surprising and recent. It is now well established that anomalies can be responsible for new and unexpected transport phenomena in real materials, referred to as \textit{anomalous transport} (see \cite{Landsteiner:2016led} for a review on the subject). Anomaly induced transport has been experimentally tested in Weyl and Dirac semimetals \cite{Gooth:2017mbd, Li:2014bha, PhysRevX.5.031023, nature1}.\\
An effective low energy description of a Weyl semimetal can be realized in terms of fermions satisfying a Lorentz-breaking massive Dirac equation with a time-reversal-breaking parameter $b$ \cite{Grushin:2012mt}:
\begin{equation}
\left(i\,\slashed{\partial}\,-\,M\,+\,\gamma_5\,\gamma_z b\right)\,\Psi\,=\,0\,.
\end{equation}
The parameter $b$ induces a shift in the momenta of the left and right handed Weyl fermions\footnote{As a consequence of the Nielsen-Ninomiya theorem \cite{1983PhLB..130..389N} left- and
right-handed Weyl nodes in a lattice appear always in pairs.}. This model features a topological phase transition from a Weyl semimetal to a topologically trivial (massive fermions) phase.
For $b^2>M^2$ there are two Weyl nodes in the band structure which are separated in momentum space by the effective parameter $b_{\text{eff}}=\sqrt{b^2-M^2}$. In this case the system lies in a topologically nontrivial phase, \textit{i.e.} the \textit{Weyl semimetal} phase. On the contrary for
$b^2<M^2$ the theory becomes gapped with the gap being $\Delta=\sqrt{M^2-b^2}$ and the system exhibits a topologically trivial insulating state. In other words a topological quantum phase transition appears at $b/M=1$ and exactly zero temperature. The order parameter for the QPT is the so-called anomalous Hall conductivity (AHC) defined as the Hall conductivity at zero magnetic field
\begin{equation}\label{QPT}
\sigma_{xy}(B=0)\,=\,\frac{b^2-M^2}{2\,\pi^2}\,\Theta\left(|b|\,-\,|M|\right) \, .
\end{equation}
This can be computed with field theory methods \cite{Jackiw:1999qq} and represents a clear manifestation of the axial anomaly. In fig.~\ref{fig:weyl} we provide a simple sketch of these concepts. For a more detailed explanation we refer to \cite{Landsteiner:2016led} and references therein\footnote{See also \cite{Lucas:2016omy} for a hydrodynamic description of Weyl semimetals and their transport properties.}.
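For reference, the band structure of the weakly coupled Dirac model above is known in closed form (a standard result, see \textit{e.g.} \cite{Grushin:2012mt}); we quote it here since it makes the origin of $b_{\text{eff}}$ explicit:
\begin{equation}
E_{\pm}^2(\vec{k})\,=\,k_x^2\,+\,k_y^2\,+\,\left(\sqrt{k_z^2\,+\,M^2}\,\pm\,b\right)^2\,.
\end{equation}
The lower band $E_-$ vanishes only at $k_x=k_y=0$ with $k_z^2=b^2-M^2$, which admits real solutions $k_z=\pm\,b_{\text{eff}}$ precisely when $b^2>M^2$, in agreement with the location of the two Weyl nodes described above.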
The question whether and how disorder and impurities affect the physics of the Weyl semimetals has received particular attention recently. The effective field theory for disordered Weyl semimetals has been built in \cite{PhysRevLett.114.257201,PhysRevB.93.075113} and several studies regarding the phase diagram and the topological phase transition in presence of randomness have been performed \cite{2015PhRvL.115x6603C,2015PhRvL.114t6602Z,Roy:2016amv,2017PhRvB..95a4204L,PhysRevB.94.115137,2014PhRvL.113b6602S, PhysRevB.93.075113}. Despite all the effort a robust and definitive verdict is still absent.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.31\textwidth]{2.pdf}%
\quad
\includegraphics[width=0.31\textwidth]{3.pdf}%
\quad
\includegraphics[width=0.31\textwidth]{4.png}
\caption{Sketch of the Weyl semimetal quantum phase transition. \textbf{Left:} Spectrum of the theory for $b^2>M^2$ with the two Weyl nodes. \textbf{Center:} Gapped spectrum of the theory for $b^2<M^2$. \textbf{Right:} Anomalous Hall conductivity across the QPT from the EFT result eq.\eqref{QPT}.}
\label{fig:weyl}
\end{center}
\end{figure}
A simple model for Weyl semimetals was recently introduced into the holographic framework in a bottom-up fashion \cite{Landsteiner:2015lsa,Landsteiner:2015pdh}. The system indeed exhibits a sharp quantum phase transition between a trivial phase and a nontrivial one upon dialing a non-thermal external coupling encoded in the asymptotics of specific bulk fields. The order parameter for the QPT is given by the anomalous Hall conductivity. The presence of a finite anomalous Hall conductivity is a direct consequence of the breaking of time reversal symmetry and an evident manifestation of anomalous transport \cite{Landsteiner:2016led}. Several properties of the model have been investigated in various directions \cite{Landsteiner:2017lwm,Grignani:2016wyz,Landsteiner:2016stv,Copetti:2016ewq,Rogatko:2017svr,Jacobs:2015fiv,Ammon:2016mwa}. See \cite{Liu:2018bye} for a study on other topological semimetals in holography.
On the other hand, in recent years particular attention has been paid to the idea of incorporating disorder and its effects in the holographic picture. Throughout our work by \textit{disorder} we will mean \textit{quenched disorder}. From the (strongly coupled) quantum field theory side \cite{Aharony:2015aea} this amounts to considering ``random couplings'' whose statistical distribution does not depend on the quantum fields. More precisely, we have in mind a deformation of the theory:
\begin{equation}
\mathcal{S}\,=\,\mathcal{S}_0\,+\,\int d^dx\,g_i(x)\,\mathcal{O}_i\left(x\right)
\end{equation}
where the couplings $g_i(x)$ are random. From the bulk point of view this corresponds to assuming asymptotic boundary conditions for the bulk fields $\phi_i$ dual to the operators $\mathcal{O}_i$ of the type:
\begin{equation}
\lim_{\rho \rightarrow 0}\phi_i\left(\rho,x\right)\,=\,\rho^{\#}\,g_i(x)\,+\,\dots
\end{equation}
where $\#$ is a specific number depending on the conformal dimensions $\Delta_i$ of the operators $\mathcal{O}_i$ and $\dots$ are subleading corrections at the boundary $\rho=0$. Quenched disorder has been already considered in various holographic models\footnote{The way we here consider disorder differs substantially from the ``homogeneous'' models introduced in holography for example in \cite{Vegh:2013sk,Andrade:2013gsa,Baggioli:2014roa,Alberte:2015isw,Baggioli:2015gsa}. In those cases a relaxation time for the momentum operator is effectively introduced into the dual CFT but no explicitly disordered sources appear anywhere.} with particular emphasis on its effects on transport properties \cite{Lucas:2014sba,Lucas:2014zea,Lucas:2015lna,Donos:2014yya,Garcia-Garcia:2015crx,Andrade:2017lsc}. In addition disordered geometries and fixed points have been recently studied in \cite{Hartnoll:2014cua,Hartnoll:2014gaa}.
Furthermore, discrete scale invariance (DSI) is believed to be intimately linked to the presence of disorder \cite{SORNETTE1998239}; it manifests itself in the peculiar log-oscillatory behaviour of the thermodynamic and transport observables due to the appearance of a complex critical exponent in the vicinity of a (quantum) phase transition. The signatures of emergent (discrete) scale invariance have already been observed in disordered holographic models in \cite{Hartnoll:2015faa,Hartnoll:2015rza,Balasubramanian:2013ux,Flory:2017mal}\footnote{Discrete scale invariance has also been studied in holographic systems \textit{without} disorder in \cite{Balasubramanian:2013ux,Flory:2017mal}.}.
Finally, the effects of disorder on thermal, but not quantum, phase transitions have already been analyzed in holography in \cite{Ryu:2011vq,Fujita:2008rs,Arean:2015sqa,Arean:2014oaa,Arean:2013mta}. Concretely, in \cite{Arean:2015sqa} the appearance and effects of ``rare'' regions were studied in the context of superconducting phase transitions.
Despite all the recent efforts, to the best of our knowledge, the effects of quenched disorder on \underline{quantum} phase transitions have not been investigated so far. In this direction, we aim to provide a first study of a holographic QPT and its quantum critical region in the presence of disorder. In particular, we focus on one-dimensional quenched disorder within the holographic Weyl semimetal QPT and tackle the following questions:\\[0.35cm]
\centerline{\textit{Will the quantum phase transition remain sharp in the presence of quenched disorder?}}\\
\centerline{\textit{If not, how will its nature get modified?}}\\[0.035cm]
\noindent The major result of this work is that indeed the holographic sharp quantum phase transition gets smeared due to the presence of quenched disorder in accordance with the condensed matter expectations \cite{2010JLTP..161..299V}.
The latter phenomenon is linked to the appearance of localized regions at the horizon where the local order parameter is non-zero.
Moreover, the effects of disorder correlation on the smearing of the QPT are consistent with the weakly coupled arguments given for example in \cite{PhysRevLett.108.185701}.
In addition, inside the quantum critical region, we find a log-oscillatory behaviour in the order parameter which we take as evidence of the emergence of a disordered fixed point displaying discrete scale invariance.
The structure of the paper is as follows: in section \ref{sec1} we introduce the model we consider; in section
\ref{sec2} we present our main results and in section \ref{sec3} we conclude with the discussion. Appendices \ref{app2} and \ref{app:numerical} are devoted to technical details about the computations and the numerical routines used to obtain the results presented in the main text.
\section{A holographic dirty quantum phase transition}\label{sec1}
In this section we present the holographic Weyl semimetal model \cite{Landsteiner:2015lsa} and the one-dimensional Gaussian quenched disorder we use to probe the quantum phase transition.
\subsection{The holographic (clean) Weyl semimetal}
The five-dimensional holographic model consists of two Abelian gauge fields, $V$ (``vector'') and $A$ (``axial'') coupled via a Chern-Simons term and a scalar field $\Phi$ charged under the axial field. The Lagrangian density reads as follows:
\begin{align}
\mathcal{L}=&-\frac{1}{4}H^{ab}H_{ab}-\frac{1}{4}F^{ab}F_{ab}+(D_a\Phi)^*(D^a\Phi)-\mathcal{V}(\Phi)\nonumber\\
&+\frac{\kappa}{3}\epsilon^{abcde}A_a\left(F_{bc}F_{de}+3 H_{bc}H_{de} \right)\,,
\end{align}
where $D_a$ is the covariant derivative specified by $D_a\equiv \partial_a-i q A_a$, $F=dA$, $H=dV$, and $\mathcal{V}(\Phi)$ is a potential which we choose to be $\mathcal{V}(\Phi)=m^2|\Phi|^2$ (see \cite{Copetti:2016ewq} for a study on more general potentials).
We consider backgrounds of the form:
\begin{align}
d s^2 &= \frac{1}{\rho^2} \left( - f(\rho) d t^2 + \frac{d \rho^2}{f(\rho)} + d x^2 + d y^2 + d z^2\right), \nonumber\\
A &= A_z(x,\rho) d z, \nonumber\\
\Phi &= \phi(x,\rho).\label{ansatz}
\end{align}
Here, $f(\rho)=1-\rho^4$ is the emblackening factor. The horizon is located at $\rho=1$ while the boundary of the asymptotically AdS geometry is set at $\rho=0$. For simplicity, we restrict ourselves to the probe limit which implies that the background metric is homogeneous and fixed. Note that this limits our study to the critical region induced by the zero temperature QCP. The corresponding temperature is given by:
\begin{equation}
T = \frac{- f'(1)}{4\pi} = \frac{1}{\pi} \, .
\end{equation}
The equations of motion that follow from the inhomogeneous ansatz \eqref{ansatz} read:
\begin{align}\label{eq:eqs}
0 &= -\phi \left(q^2 \rho ^2 A_z^2+m^2\right)+\left(\rho ^2 f'(\rho )-3 \rho f(\rho )\right) \phi^{(0,1)} +\rho ^2 f(\rho)\phi^{(0,2)}+\rho^2 \phi ^{(2,0)} \,,\\
0 &= \left(\rho ^2 f'(\rho )-\rho f(\rho )\right) A_z^{(0,1)}+\rho ^2 f(\rho ) A_z^{(0,2)} -2 q^2 \phi^2 A_z +\rho ^2 A_z^{(2,0)}\,,\label{eq:eqs2}
\end{align}
where the partial derivatives with respect to $x$ and $\rho$ are denoted by $F^{(k,l)}=\frac{\partial^{k+l}}{\partial x^k \partial \rho^l} F$. The spatial $x$ direction is compactified to a circle $S^1$ for numerical convenience.
For concreteness, throughout the paper we choose the parameters $q=1$ and $m^2=-3$ (see \cite{Copetti:2016ewq} for a discussion of the effect of other possible choices). The expansion of the bulk fields close to the conformal boundary $\rho=0$ reads:
\begin{equation}\label{eq:boundaryconditions}
A_z(x, \rho)\sim b(x) \, \rho^{0}\,+\,\dots\,, \qquad\phi(x,\rho)\sim M(x) \, \rho^1 +\dots\,
\end{equation}
where the dots refer to subleading terms in the limit $\rho \rightarrow 0$. The functions $b(x)$ and $M(x)$ are to be identified as the \textit{inhomogeneous} sources for the operators dual to the bulk fields $A_z$ and $\phi$. Let us remark that even in the ``homogeneous'' case $b(x)=b$, $M(x)=M$, the introduction of a source for the $z$ component of the gauge field $A_\mu$ accounts for the explicit breaking of the $SO(3)$ symmetry of the boundary theory and it is qualitatively related to the separation of the Weyl nodes in momentum space. At finite temperature this choice of fields allows us to define two dimensionless parameters in the system, which we choose to be $\bar{b}=b/M$ and $\bar{T}=T/M$.
In the $x$-independent situation, i.e. $b(x)=b$ and $M(x)=M$, the holographic model exhibits a topological phase transition at a certain critical value of $\bar{b}$, which can be thought of as the onset of the Weyl node separation. This was first shown in the probe limit at low temperature in \cite{Landsteiner:2015lsa} and then analyzed at zero temperature in \cite{Landsteiner:2015pdh}. The order parameter is identified with the anomalous Hall conductivity (AHC), \textit{i.e.} the Hall response at zero magnetic field. For simplicity, throughout the paper we will use the simple notation $-\sigma_{xy}(B=0)\equiv \sigma$, where $\sigma_{xy}$ is obtained from the consistent-covariant currents correlator, see appendix \ref{app2}. Concretely, it was found that the onset of the order parameter happens at $\bar{b}\approx 1.4$, as reproduced here in fig.~\ref{fig:homogeneous}. The topological phase is characterized by a finite Hall conductivity, which in the homogeneous case can be read off from the behavior of the gauge field at the horizon as \cite{Landsteiner:2015lsa}:
\begin{equation}
\label{eq:AHC_hol}
\sigma=8\,\kappa \,A_z\big|_{\rho=1}\,.
\end{equation}
The behaviour of the order parameter $\sigma$ close to the QPT is of the form \cite{Landsteiner:2015pdh}:
\begin{equation}
\sigma\,\sim\,\left(\bar{b}\,-\,\bar{b}_c\right)^\Psi\,,\qquad \Psi\,\approx\,0.211
\end{equation}
which is in contrast with the ``mean field'' result $\Psi=1/2$ coming from the field theory model \cite{Landsteiner:2016led}.
The inhomogeneous case, however, adds some difficulty to this analysis. As shown in appendix \ref{app2}, only the averaged Hall conductivity $\tilde{\sigma}$ can be obtained directly from the behaviour of the gauge field at the horizon. The concrete formula reads:
\begin{equation}
\label{eq:AHC_inhom}
\tilde{\sigma}\,=\,\int_{S^1}\text{dx}\,\sigma(x)\,=\,8\,\kappa\,\int_{S^1}\text{dx}\,A_z(x)\big|_{\rho=1}\,,
\end{equation}
where $S^1$ is the circle of the compact dimension $x$ with periodicity $L$. For simplicity, we have chosen $\kappa=1/8$ throughout this paper.
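As a minimal numerical sketch (not the routine used in this paper; the function name and grid values are illustrative), the averaged conductivity \eqref{eq:AHC_inhom} amounts to a periodic quadrature of the horizon value of the gauge field:

```python
import numpy as np

def averaged_ahc(A_z_horizon, L, kappa=1.0 / 8.0):
    """Averaged AHC: 8*kappa times the integral of A_z(x, rho=1) over S^1.

    `A_z_horizon` holds samples of the horizon gauge field on a uniform
    grid over [0, L); for a periodic integrand the trapezoid rule reduces
    to the mean value times L.
    """
    A = np.asarray(A_z_horizon, dtype=float)
    return 8.0 * kappa * L * A.mean()

# Sanity check against the homogeneous formula sigma = 8*kappa*A_z|_{rho=1}:
# a constant horizon profile must give sigma_tilde = sigma * L.
L = 20.0 * np.pi
sigma_tilde = averaged_ahc(np.full(101, 0.5), L)
```

For a constant profile this reduces, as it must, to $L$ times the homogeneous result \eqref{eq:AHC_hol}.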
\subsection{Let's get dirty}
In order to study the effects of one-dimensional quenched disorder in the Weyl node separation, we introduce 1D Gaussian noise in the source of the gauge field, $b(x)$, keeping the source for the scalar field $\phi$ constant, $M(x)\equiv M$. Concretely, we implement our disorder distribution (see fig.~\ref{figex}) using the spectral representation \cite{Shinozuka1991}:
\begin{equation}\label{eq:form}
b(x) = b_0 + 2 \,\gamma \sum\limits_{i=1}^{N-1} \sqrt{S(k_i)} \,\sqrt{\Delta k} \, \cos\left( k_i \, x +\delta_i \right) \, ,
\end{equation}
with equally spaced momenta $k_i=i\,\Delta k$, separation $\Delta k=k_0/N$, and random phases $\delta_i$ uniformly distributed in the interval $[0, 2\pi]$.
The parameter $\gamma$ measures the disorder strength.
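A single-realization sketch of the spectral representation \eqref{eq:form} is given below; the function name and parameter defaults are our own illustrative choices, with $\Delta k = 2\pi/L$ fixed by the periodicity of $x$ (as discussed below):

```python
import numpy as np

def disorder_source(x, b0, gamma, N, L, alpha=0.0, rng=None):
    """One realization of the disordered source b(x) in the spectral representation.

    Momenta k_i = i * dk with dk = 2*pi/L (so b is L-periodic), spectral
    density S(k) = k**(-2*alpha), and i.i.d. phases uniform in [0, 2*pi).
    """
    rng = np.random.default_rng() if rng is None else rng
    dk = 2.0 * np.pi / L
    k = dk * np.arange(1, N)                          # k_1, ..., k_{N-1}
    delta = rng.uniform(0.0, 2.0 * np.pi, size=N - 1)
    S = k ** (-2.0 * alpha)
    modes = np.sqrt(S * dk) * np.cos(np.outer(np.atleast_1d(x), k) + delta)
    return b0 + 2.0 * gamma * modes.sum(axis=1)

# Example realization on a uniform grid over one period.
L = 20.0 * np.pi
x = np.linspace(0.0, L, 101, endpoint=False)
b = disorder_source(x, b0=25.0, gamma=0.1, N=30, L=L)
```

Since every momentum is an integer multiple of $2\pi/L$, each realization is automatically $L$-periodic, which is what makes the compactified $x$ direction consistent with the noise.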
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.45\textwidth]{5.pdf}
\quad
\includegraphics[width=0.45\textwidth]{6.pdf}
\caption{\textbf{Left: }Example of specific disorder realizations \eqref{eq:form} with $b_0=25$, $N=30$ and $\alpha=0,0.25,1$ (red, blue, green). \textbf{Right: }Corresponding correlation functions $\mathcal{C}(x)$.}
\label{figex}
\end{center}
\end{figure}
Since $x$ is a periodic coordinate with periodicity $L$, we may represent the disordered source using the spectral representation in terms of Fourier modes\footnote{In the plots, we have used $\tilde{x} = 2\pi\,x /L $ such that $\tilde{x}$ is $2\pi$-periodic.}; this amounts to choosing $\Delta k=2\pi/L$. The quantity $k_{UV}\equiv k_0= 2\pi N/L$ provides a UV cutoff for our disorder. On the contrary, the quantity $k_{IR}\equiv k_0/N=2\pi/L$ sets the infrared scale. In order for our setup to be physically meaningful we need to satisfy the inequalities:
\begin{equation}\label{eq:inequalities}
k_{IR}\,\ll \,T\, \ll \,k_{UV}\,.
\end{equation}
Note that this can be achieved by a large enough size $L$ and a large enough number of modes $N$. The disorder average $\left\langle f \right\rangle_{dis}$ is defined as
\begin{equation}
\left\langle f \right\rangle_{dis} = \int \prod\limits_{i=1}^{N-1} \frac{d \delta_i}{2\pi} f
\end{equation}
and it is implemented numerically by averaging over a large number of different disorder realizations of the type \eqref{eq:form}.
We set the \textit{spectral density} $S$ of our signal to have the simple form $S(\xi)=\xi^{-2\alpha}$. The power $\alpha$ controls the correlation of our disordered distribution: larger positive $\alpha$ corresponds to stronger positive correlation (see fig.~\ref{figex}). We set $\alpha=0$ unless explicitly stated otherwise.
The power $P$ of a signal is defined as the autocorrelation function at $x=0$:
\begin{equation}\label{eq:power}
P\equiv\langle \hat{b}(0)\hat{b}(0)\rangle=4\gamma^2\Delta k\sum_{i=1}^{N-1}\frac{1}{k_i^{2\alpha}} \,=\,4\gamma^2\left(\Delta k\right)^{1\,-\,2\,\alpha}\,\mathcal{H}_{N-1}(2\,\alpha)\,, \end{equation}
where $\hat{b}(x) = b(x) - b_0$ and $\mathcal{H}_n$ is the $n$-th generalized harmonic number. At finite correlation, $\alpha \neq 0$, the power $P$ is a better-suited measure of the disorder strength than the simple amplitude $\gamma$.
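The closed form in \eqref{eq:power} is a purely algebraic rewriting of the mode sum. As a quick check (function names and parameter values are our own, purely illustrative):

```python
import numpy as np

def power_direct(gamma, dk, N, alpha):
    """P = 4 gamma^2 dk * sum_{i=1}^{N-1} k_i^(-2 alpha), with k_i = i * dk."""
    k = dk * np.arange(1, N)
    return 4.0 * gamma**2 * dk * np.sum(k ** (-2.0 * alpha))

def power_closed(gamma, dk, N, alpha):
    """Closed form 4 gamma^2 (dk)^(1 - 2 alpha) H_{N-1}(2 alpha)."""
    harmonic = np.sum(1.0 / np.arange(1, N) ** (2.0 * alpha))  # generalized harmonic number
    return 4.0 * gamma**2 * dk ** (1.0 - 2.0 * alpha) * harmonic
```

Factoring $k_i^{-2\alpha} = (i\,\Delta k)^{-2\alpha}$ out of the sum makes the two expressions manifestly identical for any $\alpha$.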
In the case $\alpha=0$, i.e. for $S(k_i)=1$, and in the limit $N \rightarrow \infty$, the distribution \eqref{eq:form} describes local, isotropic, Gaussian disorder:
\begin{equation}
\left\langle \hat{b}(x) \right\rangle_{dis} = 0 \, , \qquad\quad \left\langle \hat{b}(x) \,\hat{b}(y) \right\rangle_{dis} = \gamma^2 \delta(x-y) \, ,
\end{equation}
with all the higher cumulants vanishing.
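These statistical properties can be verified by brute-force averaging over phase realizations. The following sketch (all numerical values are our own illustrative choices, not those of the production runs) checks that the mean of $\hat{b}$ vanishes and that widely separated points are essentially uncorrelated relative to the local variance:

```python
import numpy as np

# Brute-force check of the white-noise statistics for alpha = 0, i.e. S(k) = 1.
rng = np.random.default_rng(1)
L, N, gamma = 20.0 * np.pi, 30, 0.1
dk = 2.0 * np.pi / L
k = dk * np.arange(1, N)

def bhat(x, delta):
    # Fluctuating part of the source, b(x) - b_0, for one phase realization.
    return 2.0 * gamma * np.sum(np.sqrt(dk) * np.cos(k * x + delta))

phases = [rng.uniform(0.0, 2.0 * np.pi, N - 1) for _ in range(4000)]
mean0 = np.mean([bhat(0.0, d) for d in phases])                    # ~ 0
var0 = np.mean([bhat(0.0, d) ** 2 for d in phases])                # local variance
cov = np.mean([bhat(0.0, d) * bhat(L / 2.0, d) for d in phases])   # distant points
```

At finite $N$ the two-point function is of course a smeared delta; the sharp $\gamma^2\delta(x-y)$ behaviour emerges only as $N \rightarrow \infty$.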
The goal of this work is to investigate the fate of the quantum Weyl phase transition in the presence of this type of one-dimensional Gaussian disorder. This implies that the dimension of our disorder strength is $[\gamma]=[b]-1/2=1/2$, where $b$ is the source that couples to the current operator dual to the bulk gauge field $A$. This is a relevant deformation and in this sense it is said not to satisfy the Harris criterion. Since the relevant dimensionless quantity governing the phase transition is the ratio $b/M$, we expect similar results to hold for the case of an inhomogeneous source for the scalar field.\footnote{However, let us remark that this second option allows for the study of marginal or irrelevant disorder, by appropriately setting the mass of the associated scalar field in the bulk. Another, numerically more challenging, option to achieve this is to increase the number of dimensions in which the fields are disordered.}
\section{Results: heating it up and making it dirty}\label{sec2}
In this section we discuss the effects of finite temperature and quenched Gaussian disorder on the holographic Weyl semimetal topological quantum phase transition.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.48\textwidth]{7.pdf}%
\quad
\includegraphics[width=0.48\textwidth]{8.pdf}
\caption{\textbf{Left: }AHC $\sigma$ against $\bar{b}$ for zero disorder and at finite temperature $T/M=1/\pi-1/(10\pi)$ (red-blue). At low enough temperatures (blue) a quantum phase transition occurs \label{fig:homogeneous}. \textbf{Right: } Semilogarithmic plot of the AHC $\sigma$ vs $M/T$ for fixed $\bar{b}={0.9-1.39}$ (green-red). At high enough $M/T$ we fit the curves to a function of the type $e^{-c\,x^{\alpha}}$, where $x\equiv M/T$, finding $\alpha=1.05$. Close to the phase transition, at $\bar{b}=1.39$ (red line), a new scaling appears at low $M/T$. This is an effect of the quantum critical region (see fig.~\ref{figpower}). }
\end{center}
\end{figure}
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.48\textwidth]{9.pdf}%
\quad
\includegraphics[width=0.48\textwidth]{10.pdf}
\caption{\textbf{Left: }Log-Log plot of the AHC $\sigma$ vs $M/T$ very close to the critical point $\bar{b}={1.4023}$. For small enough $M/T \lessapprox 20$ the data fit very well a power law $(M/T)^{-\nu}$ with $\nu=0.673$. At low enough temperature we leave the critical region and the behaviour is no longer a power law, as depicted in the inset. \textbf{Right:} Log-log zoom on the region $M/T \lessapprox 20$ shown in the left panel and fit (black line). In the inset we show a log plot for $M/T \gtrapprox 20$, where an exponential decay is recovered. The black line corresponds to the fit to $c_1e^{c_2\,M/T}$. }
\label{figpower}
\end{center}
\end{figure}
First, we study the effects of temperature on the clean QPT, which have already been partially analyzed in \cite{Landsteiner:2015pdh}. Finite temperature generically smoothens the sharp quantum phase transition into a crossover (see left panel of fig.~\ref{fig:homogeneous}). Eventually, for high enough temperature, the AHC is non-zero everywhere and the topologically trivial phase becomes inaccessible in the whole $\bar b$ range. We have analyzed the decay of the anomalous Hall conductivity $\sigma$ as a function of $T/M$ for several values of $\bar{b}$, moving towards the quantum critical point $\bar{b}_c$. Away from the quantum critical point, or in other words outside the \textit{quantum critical region}, we find an exponential decay of the form (see right panel of fig.~\ref{fig:homogeneous}):
\begin{equation}
\sigma\,\sim\,e^{-\,c\,M/T}\label{Teffects}
\end{equation}
where the parameter $c$ is not constant but depends on $\bar{b}$, as shown in fig.~\ref{fig:homogeneous}. The exponential fall-off is consistent with the presence of a mass gap outside the quantum critical region, which breaks scale invariance. Moving closer to the quantum critical region, i.e. into the vicinity of $\bar{b}_c$, this behaviour is modified. As shown in fig.~\ref{figpower}, at large enough $T/M$ the decay follows a power law:
\begin{equation}
\sigma\,\sim\,\left(\frac{M}{T}\right)^{-\nu}\label{T2effects}
\end{equation}
where the ``critical exponent'' is found to be $\nu=0.673$. This is a clear signature of the presence of scale invariance inside the quantum critical region.
Decreasing the $T/M$ parameter further, we get back to an exponential decay. This can be understood from the inset in the left panel of fig.~\ref{figpower}: when getting close to zero temperature the critical region becomes thinner and thinner; at some point the system is again outside the quantum critical region and the exponential behaviour due to the mass gap is restored. Following the critical region down to $T/M=0$ requires high numerical accuracy. We leave this for a future investigation including backreaction, which will allow us to directly probe the QCP at zero temperature.\\
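The two decay laws \eqref{Teffects} and \eqref{T2effects} can be extracted by linear regression in semilogarithmic and log-log coordinates, respectively. A minimal sketch on mock data (the parameter values below are illustrative, not our fitted ones):

```python
import numpy as np

# Distinguishing an exponential decay from a power law by linear fits
# in the appropriate coordinates, on exactly generated mock data.
x = np.linspace(5.0, 40.0, 50)                 # x = M/T
sigma_gap = 3.0 * np.exp(-0.7 * x)             # mock gapped data, sigma ~ e^{-c x}
sigma_crit = 2.0 * x ** (-0.673)               # mock critical data, sigma ~ x^{-nu}

c_fit = -np.polyfit(x, np.log(sigma_gap), 1)[0]            # slope in semilog coords
nu_fit = -np.polyfit(np.log(x), np.log(sigma_crit), 1)[0]  # slope in log-log coords
```

On real data the two regimes show up as straight lines in the corresponding plots only over a finite window of $M/T$, which is exactly what figs.~\ref{fig:homogeneous} and \ref{figpower} display.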
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.45\textwidth]{11.pdf}%
\qquad
\includegraphics[width=0.45\textwidth]{12.pdf}
\caption{ \label{fig:configuration} A concrete configuration of the gauge field along the bulk for $T/M=1/(10\pi)$, $\bar{b}_0=1.35$ and $\bar{\gamma}=0,0.002$ (left, right).}
\end{center}
\end{figure}
Now let us add some disorder into the system! We have computed the anomalous Hall conductivity in the presence of Gaussian disorder \eqref{eq:form} in a wide range of temperatures $\bar{T}$ and $b_0/M\equiv\bar{b}_0\in[1,1.6]$, increasing the disorder strength\footnote{Let us remark that we have restricted $\gamma$ to values that do not generate negative Weyl node distances.} $\gamma$. To do this consistently we define another dimensionless ratio, $\bar{\gamma} \equiv \gamma\,\sqrt{\Delta k}/M$, which from now on we refer to simply as the disorder strength.
In order to fulfill the requirement \eqref{eq:inequalities} we have chosen $L=20\pi$ and $N=30$. We explicitly checked that the qualitative results are independent of the cutoffs. A large-$N$ scaling analysis would be necessary to make further and more robust statements about the concrete quantitative results, such as the values of the critical exponents. In order to have enough resolution to render grid-size independent results we used grids of $51\times101$ points ($\rho$, $x$ directions respectively). Moreover, we compute $\sim 50$ different random realizations for each point in parameter space in order to obtain a small enough variance of the mean of the AHC. See fig.~\ref{figex} for a concrete realization of the random source for the gauge field $A_z$.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.45\textwidth]{13.pdf}
\qquad
\includegraphics[width=0.45\textwidth]{14.pdf}%
\caption{ \label{fig:rarerare} The profile of the gauge field $A_z$ at the horizon and the appearance of the ``rare'' regions. \textbf{Left: }At fixed disorder strength $\bar\gamma=0.002$ approaching the critical point. We take $\bar{b}_0=1.25,\,1.3,\,1.35,\,1.4$ (purple, red, orange, blue). \textbf{Right :} At fixed $\bar{b}_0=1.35$ and increasing strength $\bar\gamma=0.01,\, 0.02,\, 0.025$ (red, yellow, green).}
\end{center}
\end{figure}%
In fig.~\ref{fig:configuration} we show the bulk profile of the gauge field $A_z(\rho)$ at $M/T=10\pi$ and $\bar{b}_0=1.35$ for a concrete realization of the random disorder described previously. We compare the results with the clean and homogeneous system. In the absence of disorder (left panel of fig.~\ref{fig:configuration}) the gauge field vanishes everywhere along the horizon, giving rise to a zero anomalous Hall conductivity. On the other hand, as shown in the right panel of fig.~\ref{fig:configuration}, the presence of a disordered source triggers the appearance of localized areas at the horizon where the gauge field does not vanish. As a consequence the integrated AHC acquires a finite value which was not present in the clean system.
In fig.~\ref{fig:rarerare} we show several realizations of the profile of the gauge field at the horizon, as a function of the spatial direction $x$. In the left panel of the figure we show the profile for fixed disorder strength $\bar\gamma$ and increasing $\bar{b}_0$. The appearance of localized areas which have locally undergone the phase transition is apparent. As the system is tuned closer to the critical $\bar{b}_{0}$ these ``rare'' regions become broader and less rare. Similar behavior is found for fixed $\bar{b}_0$ and increasing disorder strength $\bar\gamma$, as shown in the right panel of fig.~\ref{fig:rarerare}. Our results suggest interpreting these areas as the ``rare'' regions discussed in the condensed matter literature \cite{2006JPhA...39R.143V}. \\
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.46\textwidth]{17.pdf}
\includegraphics[width=0.47\textwidth]{18.pdf}
\caption{\textbf{Left:} Anomalous Hall conductivity as a function of $\bar{b}_0$ for $\bar{\gamma}\in[0.003,0.035]$ (blue-red). \textbf{Right:} Semilogarithmic plot of the AHC against $\bar{b}_0$ for $\bar{\gamma}\in[0.001,0.035]$ (green-red). Black lines correspond to the exponential fit \eqref{diseffects}.}
\label{fig:figdis}
\end{center}
\end{figure}
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.47\textwidth]{21.pdf}
\quad \quad
\includegraphics[width=0.47\textwidth]{22.pdf}
\caption{\label{fig:correlation} \textbf{Left:} AHC close to the critical $\bar{b}_0$ for fixed $P=0.15$ and $M/T=10/\pi$ and increasing correlation $\alpha=0,\,0.5\,,2$ (red, yellow, blue). The black dashed line shows the AHC at zero disorder. \textbf{Right:} Evolution of the rare regions with the correlation exponent $\alpha=0,\,0.5,\,2$ (red, yellow, blue) at fixed $P=0.15$. Localized regions become broader, as expected for increasingly correlated disorder.
}
\end{center}
\end{figure}
We now turn to the study of the quantum phase transition at finite disorder. In fig.~\ref{fig:figdis} we show the anomalous Hall conductivity at low temperature $T/M=1/(10\pi)$ as a function of $\bar{b}_0$ and for increasing disorder strength $\bar\gamma$. The sharp QPT gets smeared out by the presence of disorder, as a direct consequence of the localized regions. Close to the critical value $\bar{b}_{0}\sim1.4023$ the smearing is well approximated by an exponential tail of the form:
\begin{equation}
\sigma_{xy} \sim\,c_1\,e^{c_2(1.4023 -\bar{b}_0)^{c_3}}\label{diseffects}\,,
\end{equation}
as depicted in the right panel of fig.~\ref{fig:figdis}. We have found numerically that the parameter $c_3$ is independent of the disorder strength and in all cases compatible with the value $c_3\sim1.28$. On the contrary, the other coefficients $c_1,\,c_2$ appear to depend strongly on the disorder strength $\bar{\gamma}$.
This functional dependence, as well as the independence of the power $c_3$ from the details of the randomness, is in agreement with the results obtained with Optimal Fluctuation Theory for composition-tuned smeared quantum phase transitions, where $c_3=2-d/\phi$, with $\phi$ the so-called \textit{finite-size shift exponent}, \textit{i.e.} dependent only on the details of the clean QPT (see for example \cite{2011PhRvB..83v4402H}).\\
Additionally we can study the role of the disorder correlation on the quantum phase transition and its smearing. In order to do that we fix the power $P$ \eqref{eq:power} of our disorder signal and the number of modes $N$ and we consider several realizations with different $\alpha$, ranging from the uncorrelated case $\alpha=0$ to highly correlated disorder, \textit{i.e.} $\alpha=2$ (see fig.~\ref{figex}).
As shown in fig.~\ref{fig:correlation}, we find that the disorder correlation indeed plays a role in the QPT. Concretely, we find that positive correlation increases the smearing of the order parameter, in agreement with \cite{PhysRevLett.108.185701,2013AIPC.1550..263N,0295-5075-97-2-20007}. In the right panel of fig.~\ref{fig:correlation} we show a concrete realization of the gauge field at the horizon for fixed $P$ and increasing correlation length. We see indeed how increasing correlation gives rise to broader, less rare, regions that have undergone the phase transition.
Finally we study the fate of the finite temperature scaling at finite disorder. We have found that the power law decay in the critical region is modified in a very interesting way. In particular, we notice a ``wiggling'' of the decay which is consistent with a log-oscillatory form (see fig.~\ref{Tdowncritical}). To be precise, we find a behaviour consistent with the form:
\begin{equation}
\sigma\left(M/T\right)\,\sim\,a_1\,\left(M/T\right)^{a_2} \left(1+a_3 \sin{\left[a_4 \log (M/T)+a_5\right]}\right)
\end{equation}
inside the quantum critical region.
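The log-periodic form above encodes discrete scale invariance explicitly: under $M/T \rightarrow \lambda\, M/T$ with $\lambda = e^{2\pi/a_4}$ the oscillatory factor is invariant, so the whole expression rescales by $\lambda^{a_2}$. A quick numerical check of this property (the parameter values are purely illustrative):

```python
import numpy as np

def t(x, a1, a2, a3, a4, a5):
    """Log-periodic fit form for sigma(M/T) inside the quantum critical region."""
    return a1 * x**a2 * (1.0 + a3 * np.sin(a4 * np.log(x) + a5))

# Illustrative parameters; lam = exp(2*pi/a4) is the discrete rescaling factor.
a1, a2, a3, a4, a5 = 1.0, -0.5, 0.2, 3.0, 0.4
lam = np.exp(2.0 * np.pi / a4)
x = np.linspace(1.0, 10.0, 200)
ratio = t(lam * x, a1, a2, a3, a4, a5) / (lam**a2 * t(x, a1, a2, a3, a4, a5))
```

The ratio is identically one, which is the defining property of a function covariant under a \textit{discrete}, rather than continuous, family of scale transformations.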
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.47\textwidth]{25.pdf}
\includegraphics[width=0.48\textwidth]{26.pdf}
\caption{ \label{Tdowncritical}\textbf{Left:} Log-Log plot at the critical point $\bar{b}_0=1.4023$ with increasing disorder strength $\bar\gamma=(0.003-0.6)$ (green-red). For high enough disorder the data fit well a log-oscillatory function of the type $t(x)=a_1\, x^{a_2}(1+a_3 \sin{[a_4 \log (x)+a_5]})$ (black lines). At low disorder (green line) the data approximate the power-law decay found at zero disorder, shown as a blue dashed line. \textbf{Right:} Data at $\bar\gamma=0.4$ (orange line in the left panel), divided by $(M/T)^{a_2}$ with the exponent $a_2$ given by the fit. Oscillations become apparent.}
\end{center}
\end{figure}
This behaviour is known to be connected with the appearance of discrete scale invariance at the disordered fixed point \cite{SORNETTE1998239}\footnote{See \cite{Flory:2017mal} for some recent discussions of discrete scale invariance in the context of holography.} and has already been observed in holography in \cite{Hartnoll:2015rza}. Unfortunately, at very low temperature and high disorder our probe approximation and the numerical methods we used are no longer reliable. Given our preliminary indications, it would be very interesting to investigate this point further, including full backreaction.
\section{Conclusions}\label{sec3}
In this paper we have studied the effects of temperature and quenched disorder on a quantum phase transition in holography. In particular, we focused on the probe-limit analysis of the holographic Weyl semimetal quantum phase transition (QPT) \cite{Landsteiner:2015lsa,Landsteiner:2015pdh} in the presence of 1D Gaussian disorder.
First, we have investigated the effects of temperature on the clean QPT appearing at $\bar{b}_{c}\approx 1.4023$. Finite temperature enhances the anomalous Hall conductivity (AHC) and tends to destroy the trivial phase where the AHC vanishes. The fall-off of the AHC as $T\rightarrow 0$ appears to be:
\begin{itemize}
\item exponential $\sim e^{-\Delta /T}$ for values of the external parameter far from the critical point $\bar{b}<\bar{b}_c$ or more precisely outside the \textit{quantum critical region} (see sketch in fig.~\ref{fig1}). This is consistent with the presence of a finite mass gap $\Delta$.
\item power law $\sim T^{-\nu}$ around the critical point $\bar{b}\approx \bar{b}_c$, \textit{i.e.} inside the \textit{quantum critical region}. This is just a manifestation of the presence of scale invariance at criticality.
\end{itemize}
Moreover, we have introduced disorder into the system and studied the fate of the QPT. Our results show that the presence of disorder induces the appearance of localized \textit{rare} regions in the spatial profile of the vector field at the horizon (see figs.~\ref{fig:configuration} and \ref{fig:rarerare}), which account for a finite integrated AHC $\tilde{\sigma}$, absent at zero randomness. The smearing of the sharp QPT (see fig.~\ref{fig:figdis}) is the main consequence of the presence of disorder, and its functional form appears consistent with the expectations from condensed matter theory \cite{2006JPhA...39R.143V} and optimal fluctuation arguments \cite{2011PhRvB..83v4402H}.
In addition, by modifying the correlation of our disorder distribution, we showed that correlation indeed plays a role in the smearing of the quantum phase transition, as expected from condensed matter studies \cite{PhysRevLett.108.185701,2013AIPC.1550..263N,0295-5075-97-2-20007}. Our findings, in qualitative agreement with the results therein, show that positive correlation enhances the order parameter in the tail and therefore the smearing of the QPT (see fig.~\ref{fig:correlation}).
Finally, we have investigated the temperature fall-off of the integrated AHC $\tilde{\sigma}$ in the presence of disorder. We notice that at strong enough disorder, and within the quantum critical region, the AHC exhibits a log-oscillatory behaviour as a function of $T$, which seems consistent with the emergence of a disordered fixed point enjoying discrete scale invariance \cite{SORNETTE1998239}.
The results of the present paper open several questions and future directions. The main, numerically demanding, but very valuable direction would be to study the same setup with full backreaction. That would allow for several improvements: full control of the bulk solution down to zero temperature, a complete analysis of the disordered fixed points, and the possibility of computing more observables such as entropy, heat capacity, and longitudinal conductivities and viscosities \cite{Landsteiner:2015pdh,Landsteiner:2016stv} in the presence of disorder. On the same grounds, it would be extremely interesting to analyze in more detail the presence of the log-oscillatory structures.
It could also be possible to apply the same techniques we used to introduce disorder into other holographic QPTs, for example \cite{DHoker:2010onp,DHoker:2012rlj}, to test to which extent our results are universal.
We hope to come back to some of these questions in the near future.
\section*{Acknowledgements}
We thank Panagiotis Betzios, Danny Brattan, Carlos Hoyos and Tomas Vojta for useful discussions and comments about this work and the topics considered. We are specially grateful to Daniel Arean, Sean Hartnoll and Karl Landsteiner for reading a preliminary version of the manuscript and providing several insightful comments and suggestions. We are grateful to Andre Sternbeck for his support with the local cluster.\\
MB is supported in part by the Advanced ERC grant SM-grav, No 669288. AJ and SM acknowledges financial support by Deutsche Forschungsgemeinschaft (DFG) GRK 1523/2.\\
MB would like to thank the Adolfo Iba\~nez University of Vi\~na del Mar (Chile), the organizers of the workshop ``Holography and Supergravity 2018'', the ICTP South American Institute for Fundamental Research (ICTP-SAIFR) and the University of Sao Paulo (USP) for the warm hospitality during the completion of this work. MB would like to thank Marianna Siouti for the unconditional support \panda.
\section{Introduction}
In recent years, the out-of-equilibrium behavior of closed and open physical systems has attracted a lot of interest, motivated in particular by new experimental results, see e.g. \cite{Greiner2002,Kinoshita2006,Hofferberth2007,Bloch2008,Trotzky2012,Schneider2012,Fukuhara2013,Gogolin2015,Ron2013}. Microscopic models able to describe such situations are thus in general specified not only by their bulk Hamiltonian but also by appropriate boundary conditions. This eventually leads to rather complicated dynamical properties, with possible deformations of the bulk symmetries. In the context of strongly coupled systems, integrable models in low dimension, with boundary conditions preserving integrability properties, can be used to gain insight into the non-perturbative behavior of such out-of-equilibrium dynamics, see e.g. \cite{CEM2016} and references therein. In particular, they can also describe classical stochastic relaxation processes, like the ASEP \cite{Derrida1998,Schutz2000,Alcaraz1994,Bajnok2006,deGier2005}, or transport properties in one-dimensional quantum systems, see e.g. \cite{SirPA09,Pro11}.
The algebraic description of quantum integrable models with non-trivial boundary conditions (namely, going beyond periodic boundary conditions) goes back to Cherednik \cite{OpenCyChe84} and Sklyanin \cite{OpenCySkly88}. Such models already have a long history, which started with spin chains and the Bethe ansatz \cite{OpenCyH28,OpenCyBe31,OpenCyYY661,OpenCyGa83,OpenCyGau71,Bariev1979,Bariev1980,Schulz1985,OpenCyAlca87}, and continued using its modern developments, see e.g. \cite{OpenCySkly88,OpenCyKulS91,OpenCyMazNR90,OpenCyMazN91,OpenCyPS,ADMFBMNR,OpenCy,OpenCydeVegaG-R93,deVegaR-1994,OpenCyGhosZ94, OpenCyJimKKKM95-1, OpenCyJimKKMW95-2, OpenCyBBOY95,OpenCyKKMNST07,OpenCyDoi03, OpenCyFrapNR07, OpenCyRag2+1, OpenCyRag2+2, OpenCyBK05, OpenCyN02, OpenCyNR03, OpenCyMNS06, OpenCyGal08, OpenCyYNZ06, ADMFCaoYSW13, Xu2016, OpenCyDerKM03-2, OpenCyFSW08, OpenCyFGSW11, OpenCyFram+2, OpenCyFram+1, OpenCyGN12-open, OpenCyFalN14, OpenCyFalKN14, OpenCyKitMN14, OpenCyFanHSY96, OpenCyCao03, OpenCyZhang07, OpenCyACDFR05, OpenCyRag1+, OpenCyRag2+,Baseilhac2013,Belliard2013a,Belliard2013b,Belliard2015a,Belliard2015b,Belliard2015c, OpenCy-H1a, OpenCy-H1b, OpenCy-H1c, OpenCy-H1d, OpenCy-ShiW97,Doikou-2006}. The key point of the algebraic approaches is an extension of the standard Quantum Inverse Scattering Method, see e.g. \cite{OpenCySF78,OpenCyFT79,OpenCyS79,OpenCyKS79,OpenCyFST80,OpenCyF80,OpenCyTh81,OpenCyS82,OpenCyF82,OpenCyKS82,OpenCyIK82,OpenCyBaxBook,OpenCyF95,OpenCyJ90,OpenCyLM66,OpenCySh85}, and of its associated Yang-Baxter algebra; it takes the form of the so-called reflection equations \cite{OpenCyChe84,OpenCySkly88} satisfied by the boundary version of the quantum monodromy matrix. The integrable structure of the model with boundaries can be described in terms of the corresponding bulk quantities supplemented with boundary conditions encoded in some $K$-matrices. 
To preserve integrability, these $K$-matrices should satisfy reflection equations driven by the $R$-matrix of the model in the bulk, which solves the usual Yang-Baxter equation. As shown by Cherednik in \cite{OpenCyChe84}, these reflection equations are just consequences of the factorization property of the scattering of particles on a segment having reflecting ends described by the boundary $K$-matrices. This leads to compatibility properties between the scattering in the bulk, described by the $R$-matrix, and the reflection properties of the ends, encoded in the $K$-matrices. These are such that there still exists a full series of commuting conserved quantities for the model with boundaries, generated by the boundary transfer matrix \cite{OpenCySkly88}. Its expression is quadratic in the bulk monodromy matrix entries and depends on the right and left boundary $K$-matrices. Then, as in the periodic case, the local Hamiltonian for the boundary model can be obtained from this boundary transfer matrix. This is then the standard framework in which to address the resolution of the common spectral problem for the transfer matrix and its associated local Hamiltonian. 
There have been quite a number of works devoted to boundary integrable models using, as in the periodic situation, various versions of the Bethe ansatz \cite{OpenCyKulS91,OpenCyMazNR90,OpenCyMazN91,OpenCyPS,ADMFBMNR,OpenCy,OpenCydeVegaG-R93,deVegaR-1994,OpenCyGhosZ94, OpenCyJimKKKM95-1, OpenCyJimKKMW95-2, OpenCyBBOY95, OpenCyKKMNST07,OpenCyDoi03, OpenCyFrapNR07, OpenCyRag2+1, OpenCyRag2+2, OpenCyBK05, OpenCyN02, OpenCyNR03, OpenCyMNS06, OpenCyGal08, OpenCyYNZ06, ADMFCaoYSW13, Xu2016, OpenCyDerKM03-2, OpenCyFSW08, OpenCyFGSW11, OpenCyFram+2, OpenCyFram+1, OpenCyGN12-open, OpenCyFalN14, OpenCyFalKN14, OpenCyKitMN14, OpenCyFanHSY96, OpenCyCao03, OpenCyZhang07, OpenCyACDFR05, OpenCyRag1+, OpenCyRag2+,Baseilhac2013,Belliard2013a,Belliard2013b,Belliard2015a,Belliard2015b,Belliard2015c, OpenCy-H1a, OpenCy-H1b, OpenCy-H1c, OpenCy-H1d, OpenCy-ShiW97,Doikou-2006}. It appeared, however, that while for special models and boundary $K$-matrices a method very similar to the standard algebraic Bethe ansatz (ABA), here based on the reflection equations, can be applied, the case of the most general boundary conditions (and associated $K$-matrices) preserving integrability turns out to be out of reach of these methods. This motivated the use of different approaches, in particular $q$-Onsager algebras, see e.g. \cite{OpenCyBK05,Baseilhac2013}, modifications of the Bethe ansatz \cite{OpenCyRag2+1,OpenCyRag2+2,Belliard2013a,Belliard2013b,Belliard2015a,Belliard2015b,Belliard2015c,ADMFCaoYSW13} and the implementation for this case of the separation of variables (SoV) method \cite{OpenCySk1,Skl1985,OpenCySk2,OpenCySk3,OpenCyBBS96,OpenCySm98,DerKM03,OpenCyBT06,OpenCyGIPS06,OpenCyGIPST07,OpenCyNT-10,OpenCyN-10,OpenCyGN12,OpenCyGMN12-SG,OpenCyNWF09,OpenCyN12-0,OpenCyN12-1,OpenCyNicT15-2,OpenCyNicT15,OpenCyLevNT15,OpenCyKitMNT15,KitMNT16}. 
For an extensive discussion and comparison of these various methods in the case of boundary integrable models, we refer to the general discussion given in the introduction of our first article \cite{MNP2017} and to the references therein.
In our article \cite{MNP2017} we started the implementation of the SoV method for cyclic representations of the 6-vertex reflection equation associated to the most general Bazhanov-Stroganov quantum Lax operator \cite{RocheA-1989,JimboDMM1990,JimboDMM1991,Tarasov-1992,Tarasov-1993,OpenCyBS90}. Let us recall that the case of periodic boundary conditions (spectrum and form factors) was considered in previous works \cite{OpenCyGIPS06,OpenCyGIPST07,OpenCyNT-10,OpenCyN-10,OpenCyGN12,OpenCyGMN12-SG}, generalizing in particular \cite{OpenCyKitMT99,OpenCyMaiT00}. The interest in such a problem is due to the fact that special cases include the Sine-Gordon lattice model at roots of unity and the Chiral Potts model \cite{OpenCyauYMcPTY,OpenCyMcPTS,OpenCyauYMcPT,OpenCyBaPauY,OpenCyBBP90,OpenCyBa89,OpenCyAMcP0,OpenCyTarasovSChP,OpenCyBa04,OpenCy-Au-YP08}. In \cite{MNP2017} we started the analysis by considering the special case where one of the boundary $K$-matrices has triangular form (which is equivalent to one constraint on the boundary parameters). For that situation we have been able to apply the SoV method successfully by identifying the separate basis as the eigenstate basis of a special diagonalizable $B$-operator with simple spectrum, which can be constructed from the boundary monodromy matrix entries. Then, using this separate basis, the spectrum (eigenvalues and eigenstates) of the boundary transfer matrix was completely characterized in terms of the set of solutions to a discrete system of polynomial equations in a given class of functions.
The purpose of the present article is to address this spectral problem for the most general boundary conditions preserving integrability, namely for the most general $K$-matrices solution of the reflection equation. The method to reach this goal is to design a gauge transformation that enables us to put this general situation into correspondence with the previous one, namely with a model having one triangular $K$-matrix. For that purpose, the standard idea of Baxter's gauge transformations, see e.g. \cite{OpenCyBaxBook,OpenCyFT79} and references therein, has to be adapted in a way similar to \cite{OpenCyFanHSY96,OpenCyFalKN14} and generalized to these cyclic representations of the 6-vertex Yang-Baxter algebra. Then, using this correspondence, the method and tools obtained in our first paper \cite{MNP2017} can be used, leading to the complete characterization of the spectrum (again, eigenvalues and eigenstates) of the general boundary transfer matrix. We also give determinant formulae for the scalar products of the separate states. Further, we show that the spectrum characterization admits a representation in terms of functional equations of Baxter T-Q type. Let us note that an analogous inhomogeneous Baxter-like equation has already been proposed in \cite{Xu2016} for this model on the basis of purely functional arguments on the fusion of transfer matrices. Thanks to our SoV construction, we prove in the present article that our inhomogeneous Baxter-like equation does characterize the full transfer matrix spectrum. Let us further remark that the inhomogeneous term in the proposal of \cite{Xu2016} is presented in terms of the averages of the entries of the monodromy matrix. For general cyclic representations these quantities are defined only through (unresolved) recursion formulae in \cite{Xu2016}. 
Hence in \cite{Xu2016} this inhomogeneous term is in fact not given explicitly, which makes a direct comparison with our explicit functional equation impossible. Moreover, we would like to stress that in our formulation the Q-functions are Laurent polynomials of smaller degree compared to those in \cite{Xu2016}. This is due to the fact that the inhomogeneous term is computed explicitly in our SoV derivation, which allows us to remove $4p$ ($p$ being an integer characteristic value defining the cyclic representation, see \eqref{def-p} in section 2) irrelevant zeros from the T-Q functional equation, as they can be factored out from each term of this equation (see section 5).
This article is organized as follows. In section 2 we recall the basics of the cyclic representations associated to the Bazhanov-Stroganov quantum Lax operator. In section 3 we define the gauge transformed reflection algebra that puts the most general boundary $K$-matrix into correspondence with a triangular one. It enables us to adapt the SoV method that we already described in our first article \cite{MNP2017} to this more general context, leading in section 4 to the complete characterization of the transfer matrix spectrum in this SoV basis. There we also present the scalar product formulae for the so-called separate states, which contain the transfer matrix eigenstates.
In section 5 we show that the spectrum characterization admits a representation in terms of functional equations of Baxter T-Q type. Details of the construction of the gauge transformation are given in Appendices A and B, together with the determinant identities used in the spectrum characterization in Appendix C.
\section{Cyclic representations of 6-vertex reflection algebra.}
Following Sklyanin's paper \cite{OpenCySkly88}, we consider the most general cyclic
solutions of the 6-vertex reflection equation associated to the Bazhanov-Stroganov Lax operator \cite{OpenCyBS90}:%
\begin{equation}
R_{12}(\lambda /\mu )\,\mathcal{U}_{1,-}(\lambda )\,R_{21}(\lambda \mu /q)\,%
\mathcal{U}_{2,-}(\mu )=\mathcal{U}_{2,-}(\mu )\,R_{21}(\lambda \mu /q)\,%
\mathcal{U}_{1,-}(\lambda )\,R_{12}(\lambda /\mu ) \label{bYB}
\end{equation}%
where the two sides of the equation belong to $\text{End}(V_{1}\otimes
V_{2}\otimes \mathcal{H})$ and are defined by the following boundary monodromy
matrices:%
\begin{equation}
\mathcal{U}_{a,-}(\lambda )=M_{a}(\lambda )K_{a,-}(\lambda )\hat{M}%
_{a}(\lambda )=\left(
\begin{array}{cc}
\mathcal{A}_{-}(\lambda ) & \mathcal{B}_{-}(\lambda ) \\
\mathcal{C}_{-}(\lambda ) & \mathcal{D}_{-}(\lambda )%
\end{array}%
\right) _{a}\in \text{End}(V_{a}\otimes \mathcal{H}), \label{BMM}
\end{equation}%
where:%
\begin{equation}
\hat{M}_{a}(\lambda )=(-1)^{\mathsf{N}}\,\sigma
_{a}^{y}\,M_{a}^{t_{a}}(1/\lambda )\,\sigma _{a}^{y}, \label{Inverse-M}
\end{equation}%
and $V_{a}\simeq \mathbb{C}^{2}$ is the so-called auxiliary space. Here,
\begin{equation}
M_{a}(\lambda )=\left(
\begin{array}{cc}
A(\lambda ) & B(\lambda ) \\
C(\lambda ) & D(\lambda )%
\end{array}%
\right) _{a}\equiv L_{a,\mathsf{N}}(\lambda q^{-1/2})\cdots L_{a,1}(\lambda
q^{-1/2})\in \text{End}(V_{a}\otimes \mathcal{H}),
\label{24}
\end{equation}%
is the cyclic solution of the 6-vertex Yang-Baxter equation:%
\begin{equation}
R_{12}(\lambda /\mu )M_{1}(\lambda )M_{2}(\mu )=M_{2}(\mu )M_{1}(\lambda
)R_{12}(\lambda /\mu )\in \text{End}(V_{1}\otimes V_{2}\otimes \mathcal{H}),
\end{equation}%
associated to the R-matrix%
\begin{equation}
R_{ab}(\lambda )=\left(
\begin{array}{cccc}
q\lambda -q^{-1}\lambda ^{-1} & 0 & 0 & 0 \\[-1mm]
0 & \lambda -\lambda ^{-1} & q-q^{-1} & 0 \\[-1mm]
0 & q-q^{-1} & \lambda -\lambda ^{-1} & 0 \\[-1mm]
0 & 0 & 0 & q\lambda -q^{-1}\lambda ^{-1}%
\end{array}%
\right) \in \text{End}(V_{a}\otimes V_{b}), \label{Rlsg}
\end{equation}%
and defined in terms of the
Bazhanov-Stroganov Lax operators \cite{OpenCyBS90}:%
\begin{equation}
L_{a,n}(\lambda )\equiv \left(
\begin{array}{cc}
\lambda \alpha _{n}v_{n}-\beta _{n}\lambda ^{-1}v_{n}^{-1} & u_{n}\left(
q^{-1/2}a_{n}v_{n}+q^{1/2}b_{n}v_{n}^{-1}\right) \\
u_{n}^{-1}\left( q^{1/2}c_{n}v_{n}+q^{-1/2}d_{n}v_{n}^{-1}\right) & \gamma
_{n}v_{n}/\lambda -\delta _{n}\lambda /v_{n}%
\end{array}%
\right) _{a}\in \text{End}(V_{a}\otimes \mathcal{R}_{n}),
\label{laxopppp}
\end{equation}%
where:%
\begin{equation}
\gamma _{n}=a_{n}c_{n}/\alpha _{n},\text{ \ \ \ \ }\delta
_{n}=b_{n}d_{n}/\beta _{n}.
\end{equation}%
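As a quick numerical sanity check (not part of the original derivation), one can verify that the R-matrix \eqref{Rlsg} satisfies the Yang-Baxter equation in the multiplicative form $R_{12}(\lambda /\mu )R_{13}(\lambda /\nu )R_{23}(\mu /\nu )=R_{23}(\mu /\nu )R_{13}(\lambda /\nu )R_{12}(\lambda /\mu )$; the parameter values below are arbitrary illustrative choices:

```python
import numpy as np

q = np.exp(0.3 + 0.1j)  # generic deformation parameter (illustrative value)

def R(lam):
    # R-matrix of eq. (Rlsg), multiplicative spectral parameter
    a, b, c = q*lam - 1/(q*lam), lam - 1/lam, q - 1/q
    return np.array([[a, 0, 0, 0],
                     [0, b, c, 0],
                     [0, c, b, 0],
                     [0, 0, 0, a]])

I2 = np.eye(2)
P = np.array([[1,0,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]])  # swap on C^2 x C^2

lam, mu, nu = 1.3+0.4j, 0.7-0.2j, 0.5+0.6j
R12 = np.kron(R(lam/mu), I2)                # acts on spaces 1,2
R23 = np.kron(I2, R(mu/nu))                 # acts on spaces 2,3
P23 = np.kron(I2, P)
R13 = P23 @ np.kron(R(lam/nu), I2) @ P23    # acts on spaces 1,3

lhs = R12 @ R13 @ R23
rhs = R23 @ R13 @ R12
assert np.allclose(lhs, rhs)
```

Since the matrix is symmetric in its two entries $b$ and $c$, one also has $R_{21}=R_{12}$, a fact used below in the reflection equation.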
The $u_{n}\in $End$(\mathcal{R}_{n})$ and $v_{m}\in $End$(\mathcal{R}_{m})$
are unitary Weyl algebra generators:%
\begin{equation}
u_{n}v_{m}=q^{\delta _{n,m}}v_{m}u_{n}\text{ \ \ with \ }%
u_{n}^{p}=v_{m}^{p}=1\text{\ }\forall n,m\in \{1,...,\mathsf{N}\},
\end{equation}%
and:
\begin{equation}
q=e^{-i\pi \beta ^{2}}\text{, }\beta ^{2}=p^{\prime }/p\text{ with }%
p^{\prime }\text{ even and }p=2l+1\text{ odd, } l \in \mathbb{N}.
\label{def-p}
\end{equation}%
The local quantum spaces $\mathcal{R}_{n}$ are $p$-dimensional Hilbert
spaces and the full representation space of the cyclic Yang-Baxter and
reflection algebra is defined by the tensor product of the local quantum
spaces, i.e. $\mathcal{H}=\otimes _{n=1}^{\mathsf{N}}\mathcal{R}_{n}$.
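A concrete realization of the Weyl pair on a single site $\mathcal{R}_{n}\simeq \mathbb{C}^{p}$ is given by the standard clock and shift matrices; a minimal numerical sketch (the choice $p=5$, $p^{\prime }=2$ is an arbitrary admissible one):

```python
import numpy as np

p, p_prime = 5, 2                      # p odd, p' even, as in (def-p)
q = np.exp(-1j*np.pi*p_prime/p)        # then q^p = exp(-i*pi*p') = 1

u = np.diag(q**np.arange(p))           # clock matrix
v = np.roll(np.eye(p), 1, axis=0)      # shift matrix: v|k> = |k+1 mod p>

assert np.allclose(u @ v, q * (v @ u))                       # u v = q v u
assert np.allclose(np.linalg.matrix_power(u, p), np.eye(p))  # u^p = 1
assert np.allclose(np.linalg.matrix_power(v, p), np.eye(p))  # v^p = 1
```

The cyclicity $u^{p}=v^{p}=1$ only holds because $q^{p}=1$, which is exactly what the parity conditions on $p$ and $p^{\prime }$ in \eqref{def-p} guarantee.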
Moreover, we consider here the most general boundary matrices defined as:%
\begin{equation}
K_{a,\pm }(\lambda )=\left(
\begin{array}{cc}
a_{\pm }\left( \lambda \right) & b_{\pm }\left( \lambda \right) \\
c_{\pm }\left( \lambda \right) & d_{\pm }\left( \lambda \right)%
\end{array}%
\right) _{a}\equiv K_{a}(\lambda q^{(1\pm 1)/2};\zeta _{\pm },\kappa _{\pm
},\tau _{\pm }),
\label{Kmatuno}
\end{equation}%
where:%
\begin{equation}
K_{a}(\lambda ;\zeta ,\kappa ,\tau )=\frac{1}{\zeta -\frac{1}{\zeta }}\left(
\begin{array}{cc}
\frac{\lambda \zeta }{q^{1/2}}-\frac{q^{1/2}}{\lambda \zeta } & \kappa
e^{\tau }\left( \frac{\lambda ^{2}}{q}-\frac{q}{\lambda ^{2}}\right) \\
\kappa e^{-\tau }\left( \frac{\lambda ^{2}}{q}-\frac{q}{\lambda ^{2}}\right)
& \frac{q^{1/2}\zeta }{\lambda }-\frac{\lambda }{\zeta q^{1/2}}%
\end{array}%
\right) _{a}\in \text{End}(V_{a}).
\label{Kmatuno2}
\end{equation}%
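For $\mathsf{N}=0$ the boundary monodromy matrix reduces to $\mathcal{U}_{a,-}(\lambda )=K_{a,-}(\lambda )$, so the scalar $K$-matrix \eqref{Kmatuno2} must itself satisfy the reflection equation \eqref{bYB} (with $R_{21}=R_{12}$ for this symmetric $R$-matrix); a numerical sketch with arbitrary generic parameter values:

```python
import numpy as np

q = np.exp(0.37)                       # generic parameters (illustrative values)
zeta, kappa, tau = np.exp(0.61), 0.83, 0.29

def R(lam):
    a, b, c = q*lam - 1/(q*lam), lam - 1/lam, q - 1/q
    return np.array([[a,0,0,0],[0,b,c,0],[0,c,b,0],[0,0,0,a]])

def K(lam):
    # K-matrix of eq. (Kmatuno2)
    s = np.sqrt(q)
    w = lam**2/q - q/lam**2
    return np.array([[lam*zeta/s - s/(lam*zeta), kappa*np.exp(tau)*w],
                     [kappa*np.exp(-tau)*w, s*zeta/lam - lam/(zeta*s)]]) / (zeta - 1/zeta)

I2 = np.eye(2)
lam, mu = 1.3, 0.7
K1, K2 = np.kron(K(lam), I2), np.kron(I2, K(mu))
# reflection equation (bYB) for the scalar case U_- = K_-
lhs = R(lam/mu) @ K1 @ R(lam*mu/q) @ K2
rhs = K2 @ R(lam*mu/q) @ K1 @ R(lam/mu)
assert np.allclose(lhs, rhs)
```

The shifted argument $\lambda \mu /q$ in \eqref{bYB} is compensated by the half-unit shift built into \eqref{Kmatuno2}, which is why the check passes without any extra shift in $K$.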
We introduce the functions:%
\begin{equation}
\mathsf{A}_{-}(\lambda )\equiv g_{-}(\lambda )a(\lambda
q^{-1/2})d(1/(q^{1/2}\lambda )),\text{ \ }\mathsf{D}_{-}(\lambda )=k(\lambda
)\mathsf{A}_{-}(q/\lambda ),
\label{premierA}
\end{equation}%
where:%
\begin{eqnarray}
a(\lambda ) &\equiv &a_{0}\prod_{n=1}^{\mathsf{N}}(\frac{\beta _{n}}{\lambda
}+q^{-1}\frac{b_{n}\alpha _{n}}{a_{n}}\lambda ),\text{ \ }k(\lambda )=\frac{%
\left( \lambda ^{2}-1/\lambda ^{2}\right) }{(\lambda
^{2}/q^{2}-q^{2}/\lambda ^{2})}, \\
d(\lambda ) &\equiv &\frac{(-1)^{\mathsf{N}}}{a_{0}}\prod_{n=1}^{\mathsf{N}}%
\frac{a_{n}c_{n}}{\alpha _{n}}(\frac{1}{\lambda }+q\frac{d_{n}\alpha _{n}}{%
c_{n}\beta _{n}}\lambda ),
\end{eqnarray}%
$a_{0}$ is a free nonzero parameter and%
\begin{equation}
g_{\epsilon }(\lambda )\equiv \frac{(\lambda \alpha _{\epsilon
}/q^{1/2}-q^{1/2}/(\lambda \alpha _{\epsilon }))(\lambda \beta _{\epsilon
}^{-\epsilon }/q^{1/2}+q^{1/2}/(\lambda \beta _{\epsilon }^{-\epsilon }))}{%
\left( \alpha _{\epsilon }-1/\alpha _{\epsilon }\right) \left( \beta
_{\epsilon }+1/\beta _{\epsilon }\right) },
\end{equation}%
where\ $\epsilon =\pm 1$ and we have defined:%
\begin{equation}
\left( \alpha _{\epsilon }-1/\alpha _{\epsilon }\right) \left( \beta
_{\epsilon }+1/\beta _{\epsilon }\right) \equiv \frac{\zeta _{\epsilon
}-1/\zeta _{\epsilon }}{\kappa _{\epsilon }},\text{ \ }\left( \alpha
_{\epsilon }+1/\alpha _{\epsilon }\right) \left( \beta _{\epsilon }-1/\beta
_{\epsilon }\right) \equiv \frac{\zeta _{\epsilon }+1/\zeta _{\epsilon }}{%
\kappa _{\epsilon }}, \label{Def-alfa-beta}
\end{equation}%
Moreover, later we will use:
\begin{equation}
\mu _{n,h}\equiv \left\{
\begin{array}{c}
iq^{1/2}\left( a_{n}\beta _{n}/\alpha _{n}b_{n}\right) ^{1/2}\text{ \ \ }h=+,
\\
iq^{1/2}\left( c_{n}\beta _{n}/\alpha _{n}d_{n}\right) ^{1/2}\text{ \ \ }h=-.%
\end{array}%
\right.
\end{equation}%
Following Sklyanin's paper \cite{OpenCySkly88}, the next proposition
holds:
\begin{proposition}
The most general boundary transfer matrix associated to the Bazhanov-Stroganov Lax operator in the cyclic
representations of the reflection algebra is defined by:%
\begin{eqnarray}
\mathcal{T}(\lambda ) &\equiv &\text{tr}_{a}\{K_{a,+}(\lambda )\mathcal{U}%
_{a,-}(\lambda )\} \\
&=&a_{+}(\lambda )\mathcal{A}_{-}(\lambda )+d_{+}(\lambda )\mathcal{D}%
_{-}(\lambda )+b_{+}(\lambda )\mathcal{C}_{-}(\lambda )+c_{+}(\lambda )%
\mathcal{B}_{-}(\lambda ).
\end{eqnarray}%
It is a one-parameter family of commuting operators satisfying the following
symmetry properties:%
\begin{equation}
\mathcal{T}(\lambda )=\mathcal{T}(1/\lambda ),\text{ \ \ }\mathcal{T}%
(-\lambda )=\mathcal{T}(\lambda ). \label{symmetry-transfer}
\end{equation}%
The boundary quantum determinant:%
\begin{eqnarray}
\text{det}_{q}\ \mathcal{U}_{a,-}(\lambda ) &\equiv &(\left( \lambda /q\right)
^{2}-\left( q/\lambda \right) ^{2})[\mathcal{A}_{-}(\lambda q^{1/2})\mathcal{%
A}_{-}(q^{1/2}/\lambda )+\mathcal{B}_{-}(\lambda q^{1/2})\mathcal{C}%
_{-}(q^{1/2}/\lambda )] \label{Bound-q-detU_1} \\
&=&(\left( \lambda /q\right) ^{2}-\left( q/\lambda \right) ^{2})[\mathcal{D}%
_{-}(\lambda q^{1/2})\mathcal{D}_{-}(q^{1/2}/\lambda )+\mathcal{C}%
_{-}(\lambda q^{1/2})\mathcal{B}_{-}(q^{1/2}/\lambda )],
\label{Bound-q-detU_2}
\end{eqnarray}%
is a central element in the reflection algebra, i.e.%
\begin{equation}
\lbrack \text{det}_{q}\ \mathcal{U}_{a,-}(\lambda ),\mathcal{U}_{a,-}(\mu )]=0,
\end{equation}%
and its explicit expression reads:%
\begin{equation}
\text{det}_{q}\ \mathcal{U}_{a,-}(\lambda )=(\lambda ^{2}/q^{2}-q^{2}/\lambda
^{2})\mathsf{A}_{-}(\lambda q^{1/2})\mathsf{A}_{-}(q^{1/2}/\lambda ).
\label{B-q-detU_-exp}
\end{equation}
\end{proposition}
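Both the symmetries \eqref{symmetry-transfer} and the agreement of the two expressions \eqref{Bound-q-detU_1}-\eqref{Bound-q-detU_2} for the quantum determinant can be tested directly in the degenerate case $\mathsf{N}=0$, where $\mathcal{U}_{a,-}(\lambda )=K_{a,-}(\lambda )$ and the transfer matrix reduces to the scalar function $\text{tr}_{a}\{K_{a,+}(\lambda )K_{a,-}(\lambda )\}$; a sketch with arbitrary parameter values:

```python
import numpy as np

q = np.exp(0.37)                       # generic parameters (illustrative values)

def K(lam, zeta, kappa, tau):
    # K-matrix of eq. (Kmatuno2)
    s = np.sqrt(q)
    w = lam**2/q - q/lam**2
    return np.array([[lam*zeta/s - s/(lam*zeta), kappa*np.exp(tau)*w],
                     [kappa*np.exp(-tau)*w, s*zeta/lam - lam/(zeta*s)]]) / (zeta - 1/zeta)

def Kminus(lam): return K(lam,   np.exp(0.61), 0.83,  0.29)  # K_-(lam) = K(lam)
def Kplus(lam):  return K(lam*q, np.exp(0.45), 0.57, -0.40)  # K_+(lam) = K(lam*q)

def T(lam):
    # N = 0 transfer matrix: tr{K_+(lam) K_-(lam)}
    return np.trace(Kplus(lam) @ Kminus(lam))

lam = 1.3
assert np.isclose(T(lam), T(1/lam))    # T(lam) = T(1/lam)
assert np.isclose(T(lam), T(-lam))     # T(-lam) = T(lam)

# the two expressions (Bound-q-detU_1)-(Bound-q-detU_2) agree (here U_- = K_-)
s = np.sqrt(q)
A, B = Kminus(lam*s), Kminus(s/lam)
expr1 = A[0, 0]*B[0, 0] + A[0, 1]*B[1, 0]
expr2 = A[1, 1]*B[1, 1] + A[1, 0]*B[0, 1]
assert np.isclose(expr1, expr2)
```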
\section{Gauged cyclic reflection algebra and SoV representations}
In our previous paper \cite{MNP2017} we solved the spectral problem associated to the
transfer matrix of the cyclic representations under the requirement that one
of the boundary matrices is triangular, i.e. $b_{+}(\lambda )\equiv 0$. In
this paper we want to solve the same type of spectral problem but for the
most general boundary conditions. In order to do so we can follow the same
approach used in the case of the transfer matrix associated to the spin-1/2
reflection algebra \cite{OpenCyFalKN14}. That is, we introduce the following linear
combinations of the original reflection algebra generators:%
\begin{align}
\mathcal{A}_{-}(\lambda |\beta )& =\left[ -\left( \lambda q^{3/2}/\beta
\right) \mathcal{A}_{-}(\lambda )-\alpha q\mathcal{B}_{-}(\lambda )+\mathcal{%
C}_{-}(\lambda )/(\alpha q)+\beta \mathcal{D}_{-}(\lambda )/\left( \lambda
q^{3/2}\right) \right] /(\beta /q^{2}-q^{2}/\beta ) \label{A-gauge-def} \\
\mathcal{B}_{-}(\lambda |\beta )& =\left[ -\left( \lambda \beta
/q^{1/2}\right) \mathcal{A}_{-}(\lambda )-\alpha q\mathcal{B}_{-}(\lambda
)+\left( \beta ^{2}/\alpha q\right) \mathcal{C}_{-}(\lambda )+\left( \beta
q^{1/2}/\lambda \right) \mathcal{D}_{-}(\lambda )\right] /(\beta -1/\beta )
\\
\mathcal{C}_{-}(\lambda |\beta )& =\left[ \left( \lambda q^{3/2}/\beta
\right) \mathcal{A}_{-}(\lambda )+\alpha q\mathcal{B}_{-}(\lambda )-\left(
q^{3}/\alpha \beta ^{2}\right) \mathcal{C}_{-}(\lambda )-\left(
q^{5/2}/\lambda \beta \right) \mathcal{D}_{-}(\lambda )\right] /(\beta
/q^{2}-q^{2}/\beta ) \\
\mathcal{D}_{-}(\lambda |\beta )& =\left[ \left( \lambda \beta
/q^{1/2}\right) \mathcal{A}_{-}(\lambda )+\alpha q\mathcal{B}_{-}(\lambda )-%
\mathcal{C}_{-}(\lambda )/\alpha q-\left( q^{1/2}/\lambda \beta \right)
\mathcal{D}_{-}(\lambda )\right] /(\beta -1/\beta ),
\end{align}%
where $\beta \neq \pm 1,\pm q^{2}$ and $\alpha $ are arbitrary complex
values; to simplify the notation, we will not write the dependence on $%
\alpha $ explicitly. As discussed in appendices A and B, these operator families
still satisfy a set of commutation relations which are gauged versions of
the reflection algebra commutation relations. In the
following we will refer to these families as the gauge transformed
reflection algebra generators. In the same appendices, we prove the following
theorem, characterizing the representation of these generators:
\begin{theorem}
\label{B-Pseudo-diago}For almost all the values of the boundary-bulk-gauge
parameters there exist a left $\langle \Omega _{\beta }|$ and a right $%
|\Omega _{\beta }\rangle $ pseudo-eigenstate of $\mathcal{B}_{-}(\lambda
|\beta )$:%
\begin{equation}
\langle \Omega _{\beta }|B_{-}(\lambda |\beta )=\text{\textsc{b}}_{\text{%
\textbf{0}}}(\lambda |\beta )\langle \Omega _{\beta /q^{2}}|,\text{ \ }%
B_{-}(\lambda |\beta )|\Omega _{\beta }\rangle =|\Omega _{q^{2}\beta
}\rangle \text{\textsc{b}}_{\text{\textbf{0}}}(\lambda |\beta ),
\label{L/R-Pseudo-eigenstate}
\end{equation}%
where:%
\begin{equation}
\text{\textsc{b}}_{\text{\textbf{h}}}(\lambda |\beta )=\text{\textsc{b}}%
_{-}(\beta )(\frac{\lambda ^{2}}{q}-\frac{q}{\lambda ^{2}})\prod_{a=1}^{%
\mathsf{N}}(\frac{\lambda }{\text{\textsc{b}}_{-,a}(\beta )q^{h_{a}}}-\frac{%
\text{\textsc{b}}_{-,a}(\beta )q^{h_{a}}}{\lambda })(\lambda q^{h_{a}}\text{%
\textsc{b}}_{-,a}(\beta )-\frac{1}{\lambda q^{h_{a}}\text{\textsc{b}}%
_{-,a}(\beta )}), \label{pseudo-eigenvalue}
\end{equation}%
for \textbf{h}$=(h_{1},...,h_{\mathsf{N}})\in \left\{ 0,...,p-1\right\} ^{%
\mathsf{N}}$ with%
\begin{eqnarray}
\text{\textsc{b}}_{-,n}^{2}(\beta ) &\neq &q^{1-2h}\alpha _{-}^{2\epsilon },%
\text{\ \textsc{b}}_{-,n}^{2}(\beta )\neq -q^{1-2h}\beta _{-}^{2\epsilon },%
\text{\ } \label{SOV-cond1} \\
\text{\textsc{b}}_{-,n}^{2}(\beta ) &\neq &q^{1-2h}\mu _{m,+}^{2\epsilon },%
\text{\ \textsc{b}}_{-,n}^{2}(\beta )\neq q^{1-2h}\mu _{m,-}^{2\epsilon },
\label{SOV-cond2}
\end{eqnarray}%
for any $\epsilon =\pm 1,$ $n,m\in \{1,...,\mathsf{N}\}$ and $h\in \left\{
1,...,p-1\right\} $. Then, the following set of states:%
\begin{eqnarray}
\langle \beta ,h_{1},...,h_{\mathsf{N}}| &=&\frac{1}{\text{\textsc{n}}%
_{\beta }}\langle \Omega _{\beta }|\prod_{n=1}^{\mathsf{N}%
}\prod_{k_{n}=1}^{h_{n}}\frac{\mathcal{A}_{-}(q^{1-k_{n}}/\text{\textsc{b}}%
_{-,n}(\beta )|\beta q^{2})}{\mathsf{A}_{-}(q^{1-k_{n}}/\text{\textsc{b}}%
_{-,n}(\beta ))}, \label{Def-left-P-B-basis} \\
|\beta ,h_{1},...,h_{\mathsf{N}}\rangle &\equiv &\frac{1}{\text{\textsc{n}}%
_{\beta /q^{2}}}\prod_{n=1}^{\mathsf{N}}\prod_{k_{n}=1}^{h_{n}}\frac{%
\mathcal{D}_{-}(q^{1-k_{n}}/\text{\textsc{b}}_{-,n}(\beta )|\beta )}{\mathsf{%
D}_{-}(q^{1-k_{n}}/\text{\textsc{b}}_{-,n}(\beta ))}|\Omega _{\beta }\rangle
, \label{Def-right-P-B-basis}
\end{eqnarray}%
form a left and a right basis of the representation space defining the
following decomposition of the identity:%
\begin{equation}
\mathbb{I}\equiv \sum_{h_{1},...,h_{\mathsf{N}}=0}^{p-1}\prod_{1\leq b<a\leq
\mathsf{N}}(X_{a}^{(h_{a})}-X_{b}^{(h_{b})})|\beta q^{2},h_{1},...,h_{%
\mathsf{N}}\rangle \langle \beta ,h_{1},...,h_{\mathsf{N}}|,
\end{equation}
with%
\begin{equation}
\langle \beta ,h_{1},...,h_{\mathsf{N}}|\beta q^{2},k_{1},...,k_{\mathsf{N}%
}\rangle =\prod_{1\leq a\leq \mathsf{N}}\delta _{h_{a},k_{a}}\prod_{1\leq
b<a\leq \mathsf{N}}\frac{1}{X_{a}^{(h_{a})}-X_{b}^{(h_{b})}},
\end{equation}%
where%
\begin{equation}
X_{b}^{(h_{b})}=(\text{\textsc{b}}_{-,b}(\beta )q^{h_{b}})^{2}+1/(\text{%
\textsc{b}}_{-,b}(\beta )q^{h_{b}})^{2}
\label{313}
\end{equation}%
and the non-zero normalization is fixed by
\begin{equation}
\text{\textsc{n}}_{\beta }=\left( \prod_{1\leq b<a\leq \mathsf{N}}\left(
X_{a}^{\left( p-1\right) }-X_{b}^{\left( p-1\right) }\right) \langle \Omega
_{\beta }|\Omega _{\beta q^{2}}\rangle \right) ^{1/2}. \label{Normaliz-def}
\end{equation}%
In this basis the operator family $\mathcal{B}_{-}(\lambda |\beta )$ is
pseudo-diagonalized:%
\begin{eqnarray}
\langle \beta ,h_{1},...,h_{\mathsf{N}}|\mathcal{B}_{-}(\lambda |\beta ) &=&%
\text{\textsc{b}}_{\text{\textbf{h}}}(\lambda |\beta )\langle \beta
/q^{2},h_{1},...,h_{\mathsf{N}}|, \\
\mathcal{B}_{-}(\lambda |\beta )|\beta ,h_{1},...,h_{\mathsf{N}}\rangle
&=&|q^{2}\beta ,h_{1},...,h_{\mathsf{N}}\rangle \text{\textsc{b}}_{\text{%
\textbf{h}}}(\lambda |\beta ),
\end{eqnarray}%
with simple pseudo-spectrum%
\begin{equation}
\text{\textsc{b}}_{-,n}^{2p}(\beta )\neq \pm 1,\text{ \textsc{b}}%
_{-,m}^{p}(\beta )\neq \text{\textsc{b}}_{-,n}^{p}(\beta ),\text{ \ }\forall
n\neq m\in \{1,...,\mathsf{N}\},
\end{equation}%
and the operator families $\mathcal{A}_{-}(\lambda |\beta )$ and $\mathcal{D}%
_{-}(\lambda |\beta )$ in the zeros $\zeta_{a}^{(h_{a})}$ of $\mathcal{B}_{-}(\lambda |\beta )$
act as simple shift operators:%
\begin{align}
\langle \beta ,h_{1},...,h_{\mathsf{N}}|\mathcal{A}_{-}(\zeta
_{a}^{(h_{a})}|\beta q^{2})& =\mathsf{A}_{-}(\zeta _{a}^{(h_{a})})\langle
\beta ,h_{1},...,h_{\mathsf{N}}|T_{a}^{-\varphi _{a}}, \label{Left-A-action}
\\
\mathcal{D}_{-}(\zeta _{a}^{(h_{a})}|\beta )|\beta ,h_{1},...,h_{\mathsf{N}%
}\rangle & =T_{a}^{-\varphi _{a}}|\beta ,h_{1},...,h_{\mathsf{N}}\rangle
\mathsf{D}_{-}(\zeta _{a}^{(h_{a})}), \label{Right-D-action}
\end{align}%
where%
\begin{eqnarray}
\langle \beta ,h_{1},...,h_{a},...,h_{\mathsf{N}}|T_{a}^{\pm } &=&\langle
\beta ,h_{1},...,h_{a}\pm 1,...,h_{\mathsf{N}}|, \\
T_{a}^{\pm }|\beta ,h_{1},...,h_{a},...,h_{\mathsf{N}}\rangle &=&|\beta
,h_{1},...,h_{a}\pm 1,...,h_{\mathsf{N}}\rangle ,
\end{eqnarray}%
\smallskip and:%
\begin{eqnarray}
\zeta _{n}^{(h)} &=&\left( \text{\textsc{b}}_{-,n}(\beta )q^{h}\right)
^{\varphi _{n}}\text{\ \ for \ }h\in \{0,...,p-1\}\text{\ \ and\ \ }\forall
n\in \{1,...,2\mathsf{N}\}, \\
\varphi _{a} &=&1-2\theta (a-\mathsf{N})\text{ \ \ with \ }\theta (x)=\{0%
\text{ for }x\leq 0,\text{ }1\text{ for }x>0\}.
\end{eqnarray}
\end{theorem}
Let us comment that the existence of the states $\langle \Omega _{\beta }|$
and $|\Omega _{\beta }\rangle $ can be proven by a general argument which we
present in Appendix B. For general representations, the pseudo-spectrum of $%
\mathcal{B}_{-}(\lambda |\beta )$, i.e. the values of $\text{\textsc{b}}%
_{-,n}(\beta )$ and $\text{\textsc{b}}_{-}(\beta )$, must be computed by
recursion on the number of sites. However, in Appendix B we present the
explicit expressions for $\text{\textsc{b}}_{-,n}(\beta )$ and $\text{\textsc{%
b}}_{-}(\beta )$ for some particular representations.
The interest in these gauge transformed boundary generators lies in the
possibility of using them to rewrite the transfer matrix associated to the
most general cyclic 6-vertex reflection algebra representations in a simple
form, as presented in the following proposition:
\begin{proposition}
The quantum determinant can be written in terms of the gauge transformed
boundary generators as:%
\begin{align}
\frac{\text{det}_{q}\ \mathcal{U}_{-}(\lambda )}{(\lambda
^{2}/q^{2}-q^{2}/\lambda ^{2})}& =\mathcal{A}_{-}(q^{1/2}\lambda ^{\epsilon
}|\beta q^{2})\mathcal{A}_{-}(q^{1/2}/\lambda ^{\epsilon }|\beta q^{2})+%
\mathcal{B}_{-}(q^{1/2}\lambda ^{\epsilon }|\beta )\mathcal{C}%
_{-}(q^{1/2}/\lambda ^{\epsilon }|\beta q^{2}) \\
& =\mathcal{D}_{-}(q^{1/2}\lambda ^{\epsilon }|\beta )\mathcal{D}%
_{-}(q^{1/2}/\lambda ^{\epsilon }|\beta )+\mathcal{C}_{-}(q^{1/2}\lambda
^{\epsilon }|\beta q^{2})\mathcal{B}_{-}(q^{1/2}/\lambda ^{\epsilon }|\beta
),
\end{align}%
$\epsilon =\pm 1$. Moreover, if we set the gauge parameter $\alpha $ to:%
\begin{equation}
\alpha =-\beta \beta _{+}/q^{2}\alpha _{+}e^{\tau _{+}}, \label{guage-alpha}
\end{equation}%
($\alpha_+$ and $\beta_+$ are defined in \eqref{Def-alfa-beta}; they are linked to the boundary parameters $\zeta_+,\kappa_+$ and $\tau_+$, see \eqref{Kmatuno}-\eqref{Kmatuno2}), then the transfer matrix can be written as
\begin{align}
\mathcal{T}(\lambda )& = \mathsf{a}_{+}(\lambda )\mathcal{A}_{-}(\lambda
|\beta )+\mathsf{a}_{+}(1/\lambda )\mathcal{A}_{-}(1/\lambda |\beta )+q%
\mathsf{c}_{+}(\lambda |\beta )\mathcal{B}_{-}(\lambda |\beta /q^{2})
\label{T-triangu-L} \\
\mathcal{T}(\lambda )& = \mathsf{d}_{+}(\lambda )\mathcal{D}_{-}(\lambda
|\beta )+\mathsf{d}_{+}(1/\lambda )\mathcal{D}_{-}(1/\lambda |\beta )+%
\mathsf{c}_{+}(\lambda |\beta )\mathcal{B}_{-}(\lambda |\beta )/q,
\label{T-triangu-R}
\end{align}%
where we have defined:
\begin{align}
\mathsf{a}_{+}(\lambda )& =-\frac{\lambda ^{2}q-1/q\lambda ^{2}}{\lambda
^{2}-1/\lambda ^{2}}g_{+}(\lambda ),\text{ \ }\mathsf{d}_{+}(\lambda )=\frac{%
\lambda ^{2}q-1/q\lambda ^{2}}{\lambda ^{2}-1/\lambda ^{2}}g_{+}(q/\lambda ), \label{deuxiemeA}
\\
\mathsf{c}_{+}(\lambda |\beta )& =\frac{-q(\lambda ^{2}q-1/q\lambda
^{2})\left( \beta \beta _{+}/q\alpha _{+}-q\alpha _{+}/\beta \beta
_{+}\right) }{\beta \left( \alpha _{+}-1/\alpha _{+}\right) \left( \beta
_{+}+1/\beta _{+}\right) }.
\end{align}
\end{proposition}
\begin{proof}
The proof of this statement coincides with the one given in \cite{OpenCyFalKN14} for
the XXZ spin 1/2 quantum chain with general integrable boundaries; in fact,
this statement is representation independent. The only difference is that
here we have used a Laurent polynomial form while in the XXZ case it was a
trigonometric form.
\end{proof}
The simple representations $(\ref{T-triangu-L})$-$(\ref{T-triangu-R})$ of
the transfer matrix in terms of the gauge transformed boundary generators
and the known actions $(\ref{Left-A-action})$-$(\ref{Right-D-action})$ of
these operators imply that the transfer matrix spectral problem is separated
in the pseudo-eigenbasis of $\mathcal{B}_{-}(\lambda |\beta )$.
\section{$\mathcal{T}$-spectrum characterization in SoV basis and scalar products}
In this section we present the complete characterization of the spectrum of
the transfer matrix $\mathcal{T}(\lambda )$ associated to the cyclic
representations of the 6-vertex reflection algebra. We first present some
preliminary properties satisfied by all the eigenvalue functions of the
transfer matrix $\mathcal{T}(\lambda )$:
\begin{lemma}
Denote by $\Sigma _{\mathcal{T}}$ the transfer matrix spectrum; then any $%
\tau (\lambda )\in \Sigma _{\mathcal{T}}$ is an even function of $\lambda $,
symmetric under the transformation $\lambda \rightarrow 1/\lambda $, which
admits the following interpolation formula:%
\begin{align}
\tau (\lambda )=& \sum_{a=1}^{\mathsf{N}}\frac{\Lambda ^{2}-X^{2}}{%
(X_{a}^{(0)})^{2}-X^{2}}\prod_{\substack{ b=1 \\ b\neq a}}^{\mathsf{N}}%
\frac{\Lambda -X_{b}^{(0)}}{X_{a}^{(0)}-X_{b}^{(0)}}\tau (\zeta
_{a}^{(0)})+(-1)^{\mathsf{N}}\frac{(\Lambda +X)}{2}\prod_{b=1}^{\mathsf{N}}%
\frac{\Lambda -X_{b}^{(0)}}{X-X_{b}^{(0)}}\text{det}_{q}M(1) \notag \\
& -(-1)^{\mathsf{N}}\frac{(\Lambda -X)}{2}\prod_{b=1}^{\mathsf{N}}\frac{%
\Lambda -X_{b}^{(0)}}{X+X_{b}^{(0)}}\frac{(\zeta _{+}+1/\zeta _{+})}{(\zeta
_{+}-1/\zeta _{+})}\frac{(\zeta _{-}+1/\zeta _{-})}{(\zeta _{-}-1/\zeta _{-})%
}\text{det}_{q}M(i) \notag \\
& +(\Lambda ^{2}-X^{2})\tau _{\infty }\prod_{b=1}^{\mathsf{N}}(\Lambda
-X_{b}^{(0)}), \label{set-tau}
\end{align}%
where:
\begin{equation}
\Lambda\equiv(\lambda ^{2}+1/\lambda ^{2}) \ \ \text{ and \ \ \ }X\equiv q+1/q
\label{defX}
\end{equation}
and
\begin{equation}
\tau _{\infty }\equiv \frac{\kappa _{+}\kappa _{-}(e^{\tau _{+}-\tau
_{-}}\prod_{b=1}^{\mathsf{N}}\delta _{b}\gamma _{b}+e^{\tau _{-}-\tau
_{+}}\prod_{b=1}^{\mathsf{N}}\alpha _{b}\beta _{b})}{\left( \zeta
_{+}-1/\zeta _{+}\right) \left( \zeta _{-}-1/\zeta _{-}\right) }\text{ .\ \
\ }
\end{equation}
\end{lemma}
We recall that $\zeta_-,\kappa_-,\tau_-,\zeta_+,\kappa_+$ and $\tau_+$ are the boundary parameters, see \eqref{Kmatuno}-\eqref{Kmatuno2}, and $\alpha_n,\beta_n,\gamma_n$ and $\delta_n$ are the bulk parameters, see \eqref{laxopppp}.
\begin{proof}
This lemma coincides with Lemma 5.1 of our previous paper \cite{MNP2017}.
\end{proof}
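As a purely illustrative aside, the structure of the interpolation formula \eqref{set-tau} can be checked numerically on a generic polynomial: a polynomial of degree $\mathsf{N}+2$ in $\Lambda $ is fixed by its values at $\mathsf{N}$ nodes, its values at $\Lambda =\pm X$, and its leading coefficient. In the Python sketch below, the nodes $x_a$, the value $X$ and the polynomial are arbitrary placeholders, not the actual model data.

```python
import numpy as np

# Generic interpolation identity underlying (set-tau): reconstruct a random
# polynomial P of degree N+2 from its values at N nodes x_a, its values at
# +-X, and its leading coefficient. All data below are placeholders.
rng = np.random.default_rng(1)
N, X = 4, 2.5
coeffs = rng.normal(size=N + 3)             # degree N+2 polynomial
P = np.polynomial.Polynomial(coeffs)
x = rng.normal(size=N)                      # interpolation nodes x_a

def interp(L):
    out = 0.0
    for a in range(N):
        prod = np.prod([(L - x[b]) / (x[a] - x[b]) for b in range(N) if b != a])
        out += P(x[a]) * (L**2 - X**2) / (x[a] ** 2 - X**2) * prod
    out += P(X) * (L + X) / (2 * X) * np.prod((L - x) / (X - x))
    out += P(-X) * (L - X) / (-2 * X) * np.prod((L - x) / (-X - x))
    out += coeffs[-1] * (L**2 - X**2) * np.prod(L - x)   # leading coefficient
    return out

print(np.isclose(interp(0.8), P(0.8)))      # True
```

In \eqref{set-tau} the roles of the values at $\Lambda =\pm X$ are played by the quantum determinant terms, and that of the leading coefficient by $\tau _{\infty }$.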
We introduce the following one-parameter family $D_{\tau }(\lambda )$ of $%
p\times p$ matrices:%
\begin{equation}
D_{\tau }(\lambda )\equiv
\begin{pmatrix}
\tau (\lambda ) & -\text{\textsc{a}}{}(1/\lambda ) & 0 & \cdots & 0 & -\text{%
\textsc{a}}(\lambda ) \\
-\text{\textsc{a}}(q\lambda ) & \tau (q\lambda ) & -\text{\textsc{a}}%
(1/\left( q\lambda \right) ) & 0 & \cdots & 0 \\
0 & {\quad }\ddots & & & & \vdots \\
\vdots & & \cdots & & & \vdots \\
\vdots & & & \cdots & & \vdots \\
\vdots & & & & \ddots {\qquad } & 0 \\
0 & \ldots & 0 & -\text{\textsc{a}}(q^{2l-1}\lambda ) & \tau
(q^{2l-1}\lambda ) & -\text{\textsc{a}}(1/\left( q^{2l-1}\lambda \right) )
\\
-\text{\textsc{a}}(1/\left( q^{2l}\lambda \right) ) & 0 & \ldots & 0 & -%
\text{\textsc{a}}(q^{2l}\lambda ) & \tau (q^{2l}\lambda )%
\end{pmatrix}%
, \label{D-matrix}
\end{equation}%
where for now $\tau (\lambda )$ is a generic function and we have defined:%
\begin{equation}
\text{\textsc{a}}(\lambda )=\mathsf{a}_{+}(\lambda )\mathsf{A}_{-}(\lambda ),
\label{45}
\end{equation}%
(from \eqref{premierA} and \eqref{deuxiemeA}) where the coefficient \textsc{a}$(\lambda )$ satisfies the quantum
determinant condition:%
\begin{equation}
\text{\textsc{a}}(\lambda q^{1/2})\text{\textsc{a}}(q^{1/2}/\lambda )=\frac{%
\mathsf{a}_{+}(\lambda q^{1/2})\mathsf{a}_{+}(q^{1/2}/\lambda )\text{det}_{q}%
\mathcal{U}_{-}(\lambda )}{\left( \lambda /q\right) ^{2}-\left( q/\lambda
\right) ^{2}}.
\end{equation}%
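To make the cyclic tridiagonal structure of \eqref{D-matrix} concrete, the following minimal Python sketch builds a matrix with the same pattern; the functions tau and a_coef are generic placeholders, not the actual model coefficients.

```python
import numpy as np

def build_D_tau(lam, tau, a_coef, p, q):
    """Cyclic tridiagonal p x p matrix mimicking (D-matrix): row k carries
    tau(q^k lam) on the diagonal, -a(q^k lam) on the (cyclic) sub-diagonal
    and -a(1/(q^k lam)) on the (cyclic) super-diagonal, with the corner
    entries closing the cycle."""
    D = np.zeros((p, p), dtype=complex)
    for k in range(p):
        x = q**k * lam
        D[k, k] = tau(x)
        D[k, (k - 1) % p] = -a_coef(x)        # sub-diagonal / corner
        D[k, (k + 1) % p] = -a_coef(1.0 / x)  # super-diagonal / corner
    return D

# Sanity check: with constant entries tau = 2 and a = 1 every row sums to
# zero, so the determinant vanishes.
D = build_D_tau(1.3, lambda x: 2.0, lambda x: 1.0, 5, np.exp(2j * np.pi / 5))
print(abs(np.linalg.det(D)) < 1e-10)  # True
```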
The separation of variables leads to the following discrete characterization of the transfer matrix spectrum.
\begin{theorem}
\label{discrete-SoV-ch} For almost all values of the boundary-bulk parameters, $\mathcal{T}(\lambda )$ is diagonalizable with simple spectrum, and $\Sigma _{\mathcal{T}}$ coincides with the set of polynomials $\tau (\lambda )$ of the form $(\ref{set-tau})$ which satisfy the following discrete system of equations:%
\begin{equation}
\text{det}\ \text{$D$}_{\tau }(\zeta _{a}^{(0)})=0,\text{ }\forall a\in
\{1,...,\mathsf{N}\}. \label{OpenCyI-Functional-eq}
\end{equation}
\begin{itemize}
\item[\textsf{I)}] The right $\mathcal{T}$-eigenstate corresponding to $\tau
(\lambda )\in \Sigma _{\mathcal{T}}$ is defined by the following
decomposition in the right SoV-basis:%
\begin{equation}
|\tau \rangle =\sum_{h_{1},...,h_{\mathsf{N}}=0}^{p-1}\prod_{a=1}^{\mathsf{N}%
}q_{\tau ,a}^{(h_{a})}\prod_{1\leq b<a\leq \mathsf{N}%
}(X_{a}^{(h_{a})}-X_{b}^{(h_{b})})|\beta ,h_{1},...,h_{\mathsf{N}}\rangle ,
\label{OpenCyeigenT-r-D}
\end{equation}%
where the gauge parameters $\alpha $ and $\beta $ satisfy the condition $%
\left( \ref{guage-alpha}\right) $ and the $q_{\tau ,a}^{(h_{a})}$ are the
unique nontrivial solutions up to normalization of the linear homogeneous
system:%
\begin{equation}
\text{$D$}_{\tau }(\zeta _{a}^{(0)})\left(
\begin{array}{c}
q_{\tau ,a}^{(0)} \\
\vdots \\
q_{\tau ,a}^{(p-1)}%
\end{array}%
\right) =\left(
\begin{array}{c}
0 \\
\vdots \\
0%
\end{array}%
\right) . \label{OpenCyt-Q-relation}
\end{equation}
\item[\textsf{II)}] The left $\mathcal{T}$-eigenstate corresponding to $\tau
(\lambda )\in \Sigma _{\mathcal{T}}$ is defined by the following
decomposition in the left SoV-basis:%
\begin{equation}
\langle \tau |=\sum_{h_{1},...,h_{\mathsf{N}}=0}^{p-1}\prod_{a=1}^{\mathsf{N}%
}\hat{q}_{\tau ,a}^{(h_{a})}\prod_{1\leq b<a\leq \mathsf{N}%
}(X_{a}^{(h_{a})}-X_{b}^{(h_{b})})\langle h_{1},...,h_{\mathsf{N}},\beta
/q^{2}|, \label{OpenCyeigenT-l-D}
\end{equation}%
where the gauge parameters $\alpha $ and $\beta $ satisfy the condition $%
\left( \ref{guage-alpha}\right) $ and the $\hat{q}_{\tau ,a}^{(h_{a})}$ are
the unique nontrivial solutions up to normalization of the linear
homogeneous system:%
\begin{equation}
\left(
\begin{array}{ccc}
\hat{q}_{\tau ,a}^{(0)} & \ldots & \hat{q}_{\tau ,a}^{(p-1)}%
\end{array}%
\right) \left( \text{$\hat{D}$}_{\tau }(\zeta _{a}^{(0)})\right)
^{t_{0}}=\left(
\begin{array}{ccc}
0 & \ldots & 0%
\end{array}%
\right) ,
\end{equation}%
and $\hat{D}$$_{\tau }(\lambda )$ is the family of $p\times p$ matrices
defined substituting in $D_{\tau }(\lambda )$ the coefficient \textsc{a}$%
(\lambda )$ with
\begin{equation}
\text{\textsc{d}}(\lambda )=\mathsf{d}_{+}(\lambda )\mathsf{D}_{-}(\lambda ),
\label{412}
\end{equation}
defined from \eqref{premierA} and \eqref{deuxiemeA}.
\end{itemize}
\end{theorem}
\begin{proof}
Theorem \ref{B-Pseudo-diago} implies that for almost all values of the gauge-boundary-bulk parameters the conditions $\left( \ref{SOV-cond1}\right) $-$\left( \ref{SOV-cond2}\right) $ hold. Here, we also need to prove that for almost all values of the boundary-bulk parameters we have,%
\begin{equation}
\text{\textsc{b}}_{-,n}^{2}(\beta )\neq q^{1-2h}\alpha _{+}^{\pm 2},\text{\
\textsc{b}}_{-,n}^{2}(\beta )\neq -q^{1-2h}\beta _{+}^{\pm 2},\text{ \ }%
\forall h\in \{1,...,p-1\},n\in \{1,...,\mathsf{N}\}, \label{SoV-T-simple}
\end{equation}%
once we set the ratio $\alpha /\beta $ as in $\left( \ref{guage-alpha}\right) $. Let us first observe that $\mathcal{B}_{-}(\lambda |\beta )$ is a Laurent polynomial in $\alpha $, $\beta $, the inner boundary parameters and the bulk parameters. Hence, by $\left( \ref{guage-alpha}\right) $, the one-parameter family $\mathcal{B}_{-}(\lambda |\beta )$ becomes a Laurent polynomial in the outer boundary parameters too. Consequently, to prove that $\left( \ref{SoV-T-simple}\right) $ is satisfied for almost all values of the boundary-bulk parameters, it is enough to exhibit some values of these parameters for which $\left( \ref{SoV-T-simple}\right) $ is satisfied. Indeed, we can choose arbitrary boundary-bulk parameters satisfying the following inequalities:%
\begin{equation}
\mu _{+,n}^{p}\neq \alpha _{+}^{\pm p}\text{ and }\mu _{+,n}^{p}\neq -\beta
_{+}^{\pm p},\text{ \ }\forall n\in \{1,...,\mathsf{N}\},
\end{equation}%
together with those in $\left( \ref{Special-B-simple}\right) $ and $\left( %
\ref{Special-B-simple2}\right) $ and impose the $\mathsf{N}$ conditions $%
\left( \ref{Condi-bulk-quasi-L}\right) $. Under these conditions, Theorem %
\ref{B-Pseudo-diago} implies the pseudo-diagonalizability of $\mathcal{B}%
_{-}(\lambda |\beta )$ and fixes the spectrum of its zeros \textsc{b}$%
_{-,n}(\beta )$ by $\left( \ref{Spectrum-Zeros-B}\right) $; so that the
inequality $\left( \ref{SoV-T-simple}\right) $ is satisfied.
As we have proven that for almost all values of the boundary-bulk parameters the inequalities $\left( \ref{SoV-T-simple}\right) $, $\left( \ref{Special-B-simple}\right) $ and $\left( \ref{Special-B-simple2}\right) $ hold, to prove this theorem we just have to follow the same proof given in the non-gauged case, i.e. the proof of Theorem 5.1 of our previous paper \cite{MNP2017}.
Let us comment that, with respect to this last theorem, here we are also stating the diagonalizability of the transfer matrix for almost any value of the parameters of the representation. This last statement can be proven as follows.
Let us consider the following special representation, where the bulk parameters satisfy:
\begin{equation}
c_n=-b_n^* \ \ ; \ \ d_n=-a_n^* \ \ \text{and} \ \ \ \alpha_n^* \beta_n = a_n^* b_n
\end{equation}
and where the boundary matrices are diagonal, $K_{a,-}(\lambda)=K_a(\lambda;\zeta_-,0,0)$ and \\
$K_{a,+}(\lambda)=K_a(q \lambda;\zeta_+,0,0)$ (see \eqref{Kmatuno}-\eqref{Kmatuno2}), with the associated boundary parameters moreover satisfying $|\zeta_-|=|\zeta_+|=1$. Here the $*$ operation is the complex conjugation. A simple direct calculation, made for example in \cite{OpenCyGN12}, leads to the following Hermitian conjugation property of the monodromy matrix \eqref{24}:
\begin{equation}
M_{a}^{\dag}(\lambda) = \sigma_a^y M_{a}(\lambda^*)\sigma_a^y
\end{equation}
where $\sigma_a^y$ denotes the Pauli matrix $\sigma^y$ acting in the auxiliary space.
From this relation, and using the specific inner boundary matrix introduced, one can compute the Hermitian conjugate of the boundary monodromy matrix \eqref{BMM}:
\begin{equation}
{\cal{U}}_{a,-}^{\dag}(\lambda) = {\cal{U}}_{a,-}^{t_a}(1/\lambda^*)
\end{equation}
Then, from the definition of the boundary transfer matrix, and for the special representation chosen here, we can show:
\begin{equation}
{\cal{T}}^{\dag}(\lambda) = {\cal{T}}(1/\lambda^*)
\end{equation}
Thus, for this special representation, the boundary transfer matrix is normal and hence diagonalizable. It then follows that the determinant of the $p^\mathsf{N}\times p^\mathsf{N}$ matrix with elements $\langle e_{i}|\tau_{j}\rangle $, where $\bra{e_{i}}$ is the generic element of a given basis of covectors and $\ket{\tau_{j}}$ is the generic transfer matrix eigenvector, is non-zero.
Since this determinant is a rational function of the bulk and boundary parameters, non-zero for the special choice of the parameters defined above, it follows that it is non-zero for almost every choice of the parameters, which concludes the proof.
\end{proof}
It is also interesting to remark that we can obtain the coefficients of a left transfer matrix eigenstate in terms of those of the right one. The following lemma gives this characterization and can be proven as in the standard case \cite{MNP2017}:
\begin{lemma}
Let $\tau (\lambda )\in \Sigma _{\mathcal{T}}$ then it holds:%
\begin{equation}
\frac{\hat{q}_{\tau ,a}^{(h)}}{\hat{q}_{\tau ,a}^{(h-1)}}=\frac{\text{%
\textsc{a}}(1/\zeta _{a}^{(h-1)})}{\text{\textsc{d}}(1/\zeta _{a}^{(h-1)})}%
\frac{q_{\tau ,a}^{(h)}}{q_{\tau ,a}^{(h-1)}}.
\label{419}
\end{equation}
\end{lemma}
Let us introduce a class of left and right states, the so-called separate states, characterised by the following type of decomposition in the left and right separate bases:
\begin{align}
\langle \alpha |& \equiv \sum_{h_{1},...,h_{\mathsf{N}}=0}^{p-1}\prod_{a=1}^{%
\mathsf{N}}\alpha _{a}^{(h_{a})}\prod_{1\leq b<a\leq \mathsf{N}%
}(X_{a}^{(h_{a})}-X_{b}^{(h_{b})})\langle \beta/q^2, h_{1},...,h_{\mathsf{N}}|
\label{Separate-left-SoV} \\
|\beta \rangle & =\sum_{h_{1},...,h_{\mathsf{N}}=0}^{p-1}\prod_{a=1}^{%
\mathsf{N}}\beta _{a}^{(h_{a})}\prod_{1\leq b<a\leq \mathsf{N}%
}(X_{a}^{(h_{a})}-X_{b}^{(h_{b})})|\beta,h_{1},...,h_{\mathsf{N}}\rangle
\label{Separate-right-SoV}
\end{align}%
where the coefficients $\alpha _{a}^{(h_{a})}$ and $\beta _{a}^{(h_{a})}$ are arbitrary complex numbers, meaning that the coefficients of these separate states have a factorised form in these bases ($X_{a}^{(h_{a})}$ is defined in \eqref{313}).
These separate states are interesting for at least two reasons: the eigenstates of the boundary transfer matrix are special separate states, and they admit a simple determinant representation of their scalar products, as stated in the next proposition:
\begin{proposition}
Let us take an arbitrary separate left state $\langle \alpha |$ and an arbitrary separate right state $|\beta \rangle $. Then it holds:
\begin{equation}
\langle \alpha |\beta \rangle =\det \mathcal{M}^{\left(\alpha ,\beta \right) }
\label{T2-Sov-Sc-p00}
\end{equation}
where the elements of the $\mathsf{N}\times \mathsf{N}$ matrix $\mathcal{M}^{\left(\alpha ,\beta \right) }$ are given by:
\begin{equation}
\forall (a,b) \in \{1,...,\mathsf{N}\}^{2},\ \ \mathcal{M}_{a,b}^{\left( \alpha,\beta \right) }\equiv \sum_{h=0}^{p-1}\alpha _{a}^{(h)}\beta _{a}^{(h)}(X_{a}^{(h)})^{(b-1)} \label{T2-Sov-Sc-p1}
\end{equation}
\end{proposition}
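The determinant formula \eqref{T2-Sov-Sc-p00}-\eqref{T2-Sov-Sc-p1} can be illustrated numerically: expanding $\det \mathcal{M}^{(\alpha ,\beta )}$ by multilinearity in its rows reproduces the multiple sum over $(h_{1},...,h_{\mathsf{N}})$ weighted by the Vandermonde factor. A minimal Python sketch, with random placeholder coefficients rather than actual SoV data:

```python
import itertools
import numpy as np

def separate_scalar_product(alpha, beta, X):
    """Scalar product of two separate states via (T2-Sov-Sc-p00):
    <alpha|beta> = det M, with M[a,b] = sum_h alpha[a,h] beta[a,h] X[a,h]**b.
    alpha, beta, X are N x p arrays of toy data."""
    N = alpha.shape[0]
    M = np.array([[np.sum(alpha[a] * beta[a] * X[a] ** b)
                   for b in range(N)] for a in range(N)])
    return np.linalg.det(M)

# Cross-check against the defining multiple sum with the Vandermonde factor.
rng = np.random.default_rng(0)
N, p = 2, 3
alpha, beta, X = (rng.normal(size=(N, p)) for _ in range(3))
brute = sum(
    np.prod([alpha[a, h[a]] * beta[a, h[a]] for a in range(N)])
    * np.prod([X[a, h[a]] - X[b, h[b]] for a in range(N) for b in range(a)])
    for h in itertools.product(range(p), repeat=N)
)
print(np.isclose(separate_scalar_product(alpha, beta, X), brute))  # True
```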
The proof is quite straightforward: it is based on the fact that a Vandermonde determinant appears when computing the scalar product. One of the main corollaries is the orthogonality of two eigenstates $\langle \tau |$ and $|\tau^{\prime }\rangle $ of the boundary transfer matrix associated to two different eigenvalues $\tau (\lambda )$ and $\tau ^{\prime }(\lambda )$:
\begin{equation}
\langle \tau |\tau^{\prime }\rangle=0
\end{equation}
The computation of such scalar products is the very first step towards the dynamics; several further steps are required to reach this characterization for the models associated to cyclic representations of the 6-vertex reflection algebra: the reconstruction of the local operators in terms of the separate variables, the identification of the ground state, and the homogeneous and thermodynamic limits. For example, a rewriting of the determinant representations for the form factors obtained from the separation of variables will be necessary to overcome the standard problems related to the homogeneous limit. This problem has been addressed and solved for the XXX spin-1/2 chain, linking the separation of variables type determinants with Izergin's, Slavnov's and Gaudin's type determinants \cite{OpenCyKitMNT15,KitMNT16}. \\
\section{Functional equation characterizing the $\mathcal{T}$-spectrum}
The purpose of this section is to characterize the spectrum by functional relations analogous to Baxter's T-Q equation. To this end, we first need the following property.
\begin{lemma}
Let $\tau (\lambda )$ be a function of $\lambda $\ invariant under the
transformation $\lambda \rightarrow 1/\lambda $ and $\lambda \rightarrow
-\lambda $ then det$_{p}D_{\tau }(\lambda )$ (from \eqref{D-matrix}) is a function of
\begin{equation}
Z=\lambda ^{2p}+\frac{1}{\lambda ^{2p}},
\label{defZ}
\end{equation}%
i.e. it is a function of $\lambda ^{p}$ invariant under the transformations $\lambda ^{p}\rightarrow
1/\lambda ^{p}$ and $\lambda \rightarrow -\lambda $. Moreover, if $\tau
(\lambda )$ is a Laurent polynomial of degree $\mathsf{N}+2$ in $\Lambda =\lambda ^{2}+\frac{1}{\lambda ^{2}}$
then det$_{p} D_{\tau }(\lambda )$ is a Laurent polynomial of degree $\mathsf{%
N}+2$ in $Z$.
\end{lemma}
\begin{proof}
The first part of this lemma, about the dependence on $Z$ of det$_{p}D_{\tau }(\lambda )$, has already been proven in Lemma 5.2 of our previous paper \cite{MNP2017}, while the second part can be proven following the proof given in Proposition 6.1 of the same paper. To adapt this proof here, let us observe that the matrix $D_{\tau }(i^{a}q^{h+1/2})$ for $a\in \{0,1\}$ and $h\in \{0,...,p-1\}$ contains one row with two divergent elements, i.e. $-$\textsc{a}$(\pm 1)$ and $-$\textsc{a}$(\pm i)$, respectively for $a=0$ and $a=1$. Nevertheless, the determinants det$_{p}D_{\tau }(i^{a}q^{h+1/2})$ are all finite for any $a\in \{0,1\}$ and $h\in \{0,...,p-1\}$ provided $\tau (i^{b}q^{k+1/2})$ is finite for any $b\in \{0,1\}$ and $k\in \{0,...,p-1\}$. Indeed, by the symmetries $\lambda ^{p}\rightarrow 1/\lambda ^{p}$ and $\lambda \rightarrow -\lambda $, all the determinants det$_{p}D_{\tau }(q^{h+1/2})$ coincide, as do all the determinants det$_{p}D_{\tau }(iq^{h+1/2})$. Hence, it is enough to prove our statement for one value of $q^{h+1/2}$ and one value of $iq^{h+1/2}$. Now, we can use the expansion of the determinant w.r.t. the central row:
\begin{align}
\text{det}_{p} D_{\tau }(\lambda q^{1/2})& =\tau (\lambda )\text{det}%
_{p-1}D_{\tau ,(p+1)/2,(p+1)/2}(\lambda q^{1/2})+\frac{\text{\textsc{x}}%
(\lambda )\text{det}_{p-1}D_{\tau ,(p+1)/2,(p+1)/2-1}(\lambda q^{1/2})}{%
\lambda ^{2}-1/\lambda ^{2}} \notag \\
& -\frac{\text{\textsc{x}}(1/\lambda )\text{det}_{p-1}D_{\tau
,(p+1)/2,(p+1)/2+1}(\lambda q^{1/2})}{\lambda ^{2}-1/\lambda ^{2}},
\end{align}%
where%
\begin{equation}
\text{\textsc{x}}(\lambda )=\left( \lambda ^{2}-\frac{1}{\lambda ^{2}}%
\right) \text{\textsc{a}}(\lambda ),
\end{equation}%
and $D_{\tau ,i,j}(\lambda )$ denotes the $(p-1)\times (p-1)$ matrix
obtained from $D_{\tau }(\lambda )$ removing the row $i$ and the column $j$.
From the identity:%
\begin{equation}
\text{det}_{p-1}D_{\tau ,(p+1)/2,(p+1)/2+1}(\lambda q^{1/2})=\text{det}%
_{p-1}D_{\tau ,(p+1)/2,(p+1)/2-1}(q^{1/2}/\lambda ),
\end{equation}%
and the regularity of these two determinants for $\lambda \rightarrow \pm 1$ and $\lambda \rightarrow \pm i$, it follows that the determinants det$_{p}D_{\tau }(i^{a}q^{1/2})$ are finite too for $a\in \{0,1\}$. Now, our statement about the Laurent polynomiality of degree $\mathsf{N}+2$ of det$_{p}D_{\tau }(\lambda )$ w.r.t. $Z$ follows from the symmetries and from the fact that $\tau (\lambda )$ and \textsc{x}$(\lambda )$ are Laurent polynomials in $\lambda $ of degree $2\mathsf{N}+4$.
\end{proof}
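The symmetries used in this proof can be observed in a small numerical toy, assuming $q$ a primitive $p$-th root of unity, as in the cyclic representation, and placeholder functions $\tau $ and \textsc{a} depending on $\Lambda $ only: the determinant of the cyclic tridiagonal matrix is then invariant under $\lambda \rightarrow q\lambda $, $\lambda \rightarrow 1/\lambda $ and $\lambda \rightarrow -\lambda $, hence a function of $Z$.

```python
import numpy as np

# Toy check (all functions and values are placeholder assumptions): with
# q a primitive p-th root of unity and tau, a functions of
# Lambda = lambda^2 + lambda^-2, det D_tau(lambda) is invariant under
# lambda -> q lambda, lambda -> 1/lambda and lambda -> -lambda,
# i.e. it only depends on Z = lambda^(2p) + lambda^(-2p).
p = 3
q = np.exp(2j * np.pi / p)

def a_coef(mu):                        # toy coefficient, function of Lambda
    return 1.0 + 0.3 * (mu**2 + mu**-2)

def tau(mu):                           # toy eigenvalue candidate, even and
    return 2.0 - 0.5 * (mu**2 + mu**-2)   # invariant under mu -> 1/mu

def det_D(lam):
    D = np.zeros((p, p), dtype=complex)
    for k in range(p):
        x = q**k * lam
        D[k, k] = tau(x)
        D[k, (k - 1) % p] = -a_coef(x)
        D[k, (k + 1) % p] = -a_coef(1 / x)
    return np.linalg.det(D)

lam0 = 0.7 + 0.2j
print(np.isclose(det_D(lam0), det_D(q * lam0)))   # True: lambda -> q lambda
print(np.isclose(det_D(lam0), det_D(1 / lam0)))   # True: lambda -> 1/lambda
print(np.isclose(det_D(lam0), det_D(-lam0)))      # True: lambda -> -lambda
```

The shift $\lambda \rightarrow q\lambda $ amounts to a simultaneous cyclic permutation of rows and columns, and $\lambda \rightarrow 1/\lambda $ to an index reversal, which leave the determinant unchanged.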
Let us introduce the following notations:%
\begin{eqnarray}
\text{\textsc{a}}_{\infty } &=&\lim_{\lambda \rightarrow +\infty }\lambda
^{-2(\mathsf{N}+2)}\text{\textsc{a}}(\lambda )=\frac{(-1)^{\mathsf{N}%
+1}\kappa _{+}\kappa _{-}\alpha _{-}\beta _{-}\alpha _{+}\prod_{n=1}^{%
\mathsf{N}}b_{n}c_{n}}{q^{3+\mathsf{N}}\beta _{+}\left( \zeta _{+}-1/\zeta
_{+}\right) \left( \zeta _{-}-1/\zeta _{-}\right) }, \label{55a}\\
\text{\textsc{a}}_{0} &=&\lim_{\lambda \rightarrow 0}\lambda ^{2(\mathsf{N}%
+2)}\text{\textsc{a}}(\lambda )=\frac{(-1)^{\mathsf{N}+1}q^{3+\mathsf{N}%
}\kappa _{+}\kappa _{-}\beta _{+}\prod_{n=1}^{\mathsf{N}}a_{n}d_{n}}{\alpha
_{-}\beta _{-}\alpha _{+}\left( \zeta _{+}-1/\zeta _{+}\right) \left( \zeta
_{-}-1/\zeta _{-}\right) },
\end{eqnarray}%
and%
\begin{equation}
F(\lambda )=\prod_{b=1}^{2\mathsf{N}}\left( \frac{\lambda ^{p}}{\left( \zeta
_{b}^{(0)}\right) ^{p}}-\frac{\left( \zeta _{b}^{(0)}\right) ^{p}}{\lambda
^{p}}\right) ,
\label{55c}
\end{equation}
where we recall that $\zeta_-,\kappa_-,\tau_-,\alpha_-,\beta_-,\zeta_+,\kappa_+,\tau_+,\alpha_+$ and $\beta_+$ are the boundary parameters (see \eqref{Kmatuno}, \eqref{Kmatuno2} and \eqref{Def-alfa-beta}), while $a_n,b_n,c_n$ and $d_n$ are the bulk parameters, see \eqref{laxopppp}. Then the following result holds:
\begin{proposition}
\label{General F-Eq}For almost all the values of the boundary-bulk
parameters, $\mathcal{T}(\lambda )$ has simple spectrum and $\tau (\lambda )$
of the form $(\ref{set-tau})$ is an element of $\Sigma _{\mathcal{T}}$ (the set of the eigenvalues of $\mathcal{T}(\lambda )$) if
and only if det$_{p}D_{\tau }(\lambda )$ is a Laurent polynomial of degree $%
\mathsf{N}+2$ in the variable $Z$ (see \eqref{defZ}) which satisfies the following functional
equation:%
\begin{equation}
\text{det}_{p}D_{\tau }(\lambda )=F(\lambda )\left( \lambda ^{2p}-\frac{1}{%
\lambda ^{2p}}\right) ^{2}\prod_{k=0}^{p-1}(\tau _{\infty }-(q^{k}\text{%
\textsc{a}}_{\infty }+q^{-k}\text{\textsc{a}}_{0})). \label{Func-EQ-1}
\end{equation}
\end{proposition}
\begin{proof}
The SoV characterization of the spectrum implies that $\tau (\lambda )\in
\Sigma _{\mathcal{T}}$ if and only if it holds:%
\begin{equation}
\text{det}_{p}D_{\tau }(\zeta _{a}^{(0)})=0,\text{ }\forall a\in \{1,...,%
\mathsf{N}\},
\end{equation}%
and $\tau (\lambda )$ has the form $(\ref{set-tau})$. In the previous lemma we have shown that det$_{p}D_{\tau }(\lambda )$ is a Laurent polynomial of degree $\mathsf{N}+2$ in $Z$; here we show that, for $\tau (\lambda )$ of the form $(\ref{set-tau})$, the following identities hold:%
\begin{equation}
\lim_{\lambda \rightarrow \pm 1,\pm i}\text{det}_{p}D_{\tau }(\lambda
q^{1/2+h})=0\text{ \ \ }\forall h\in \{0,...,p-1\}.
\end{equation}
By the symmetry, it is enough to consider the above limit in the case $h=0$. Let us denote by $\bar{D}_{\tau }(\lambda q^{1/2})$ the matrix whose first row is the sum of the first and the last row of $D_{\tau }(\lambda q^{1/2})$ divided by $(\lambda ^{2}-1/\lambda ^{2})$, and whose row $\left( p+1\right) /2$ is the row $\left( p+1\right) /2$ of $D_{\tau }(\lambda q^{1/2})$ multiplied by $(\lambda ^{2}-1/\lambda ^{2})$, while all the other rows of $\bar{D}_{\tau }(\lambda q^{1/2})$ and $D_{\tau }(\lambda q^{1/2})$ coincide. Clearly it holds:%
\begin{equation}
\text{det}_{p}\bar{D}_{\tau }(\lambda q^{1/2})=\text{det}_{p}D_{\tau
}(\lambda q^{1/2}),
\end{equation}%
so that we can compute the limits directly for det$_{p}\bar{D}_{\tau }(\lambda q^{1/2})$. The interesting point is that now all the rows of the matrix $\bar{D}_{\tau }(\lambda q^{1/2})$ are finite in the limits $\lambda \rightarrow \pm 1,\pm i$; this is a consequence of the identities:%
\begin{equation}
\tau (\pm i^{a}q^{1/2})=\text{\textsc{a}}(\pm i^{a}q^{1/2}),\text{ \ \
\textsc{a}}(\pm i^{a}q^{-1/2})=0,\text{ }\forall a\in \{0,1\}.
\end{equation}%
Explicitly, we have that the nonzero elements of the rows $1$, $\left(
p+1\right) /2$ and $p$ are:%
\begin{eqnarray}
&&\left[ \bar{D}_{\tau }(\pm i^{a}q^{1/2})\right] _{1,1}\left. =\right.
r_{a,\pm },\text{ }\left[ \bar{D}_{\tau }(\pm i^{a}q^{1/2})\right]
_{1,2}\left. =\right. s_{a,\pm }, \\
&&\left[ \bar{D}_{\tau }(\pm i^{a}q^{1/2})\right] _{1,p-1}\left. =\right.
-s_{a,\pm },\text{ }\left[ \bar{D}_{\tau }(\pm i^{a}q^{1/2})\right]
_{1,p}\left. =\right. -r_{a,\pm }, \\
&&\left[ \bar{D}_{\tau }(\pm i^{a}q^{1/2})\right] _{(p+1)/2,(p+1)/2-1}\left.
=\right. -\text{\textsc{x}}(\pm i^{a}),\text{ } \\
&&\left[ \bar{D}_{\tau }(\pm i^{a}q^{1/2})\right] _{(p+1)/2,(p+1)/2+1}\left.
=\right. \text{\textsc{x}}(\pm i^{a}), \\
&&\left[ \bar{D}_{\tau }(\pm i^{a}q^{1/2})\right] _{p,1}\left. =\right.
-\tau (\pm i^{a}q^{1/2}),\text{ }\left[ \bar{D}_{\tau }(\pm i^{a}q^{1/2})%
\right] _{p,p}\left. =\right. \tau (\pm i^{a}q^{1/2}),
\end{eqnarray}%
where we have defined:%
\begin{equation}
r_{a,\pm }=(-1)^{a}\lim_{\lambda \rightarrow \pm 1}\frac{\tau
(i^{a}q^{1/2}\lambda )-\text{\textsc{a}}(i^{a}q^{1/2}/\lambda )}{\lambda
^{2}-1/\lambda ^{2}},\text{ \ }s_{a,\pm }=(-1)^{a}\lim_{\lambda \rightarrow
\pm 1}\frac{\text{\textsc{a}}(i^{-a}q^{-1/2}/\lambda )}{\lambda
^{2}-1/\lambda ^{2}}.
\end{equation}%
The remaining rows of $\bar{D}_{\tau }(\pm i^{a}q^{1/2})$ produce the tridiagonal part of this matrix. Then, it is possible to prove that this matrix has linearly dependent rows, so that det$_{p}\bar{D}_{\tau }(\pm i^{a}q^{1/2})=0$. Finally, we can compute the following asymptotic formulae:%
\begin{align}
\Delta _{\infty }& \equiv \lim_{\lambda \rightarrow \infty }\lambda ^{-2p(%
\mathsf{N}+2)}\text{det}_{p}D_{\tau }(\lambda )=\text{det}_{p}\left[
\lim_{\lambda \rightarrow \infty }\lambda ^{-2(\mathsf{N}+2)}D_{\tau
}(\lambda )\right] \\
& =\lim_{\lambda \rightarrow 0}\lambda ^{2p(\mathsf{N}+2)}\text{det}%
_{p}D_{\tau }(\lambda )=\text{det}_{p}\left[ \lim_{\lambda \rightarrow
0}\lambda ^{2(\mathsf{N}+2)}D_{\tau }(\lambda )\right] ^{t} \\
& =\text{det}_{p}%
\begin{pmatrix}
\tau _{\infty } & -\text{\textsc{a}}_{0} & 0 & \cdots & 0 & -\text{\textsc{a}%
}_{\infty } \\
-x\text{\textsc{a}}_{\infty } & x\tau _{\infty } & -x\text{\textsc{a}}_{0} &
0 & \cdots & 0 \\
0 & -x^{2}\text{\textsc{a}}_{\infty } & x^{2}\tau _{\infty } & -x^{2}\text{%
\textsc{a}}_{0} & \ddots & \vdots \\
\vdots & & \ddots & \ddots & 0 & 0 \\
0 & \ldots & 0 & -x^{2l-1}\text{\textsc{a}}_{\infty } & x^{2l-1}\tau
_{\infty } & -x^{2l-1}\text{\textsc{a}}_{0} \\
-x^{2l}\text{\textsc{a}}_{0} & 0 & \ldots & 0 & -x^{2l}\text{\textsc{a}}%
_{\infty } & x^{2l}\tau _{\infty }%
\end{pmatrix}%
,
\end{align}%
where we have denoted by $^{t}$ the transpose of the matrix and $x=q^{2(\mathsf{N}+2)}$. We have that $\Delta _{\infty }$ is a degree $p$ polynomial in $\tau _{\infty }$ whose zeros are known from the identities:%
\begin{equation}
\left. \Delta _{\infty }\right\vert _{\tau _{\infty }=q^{k}\text{\textsc{a}}%
_{\infty }+q^{-k}\text{\textsc{a}}_{0}}=0\text{ \ }\forall k\in
\{0,...,p-1\},
\end{equation}%
so that we get:%
\begin{equation}
\Delta _{\infty }=\prod_{k=0}^{p-1}(\tau _{\infty }-(q^{k}\text{\textsc{a}}%
_{\infty }+q^{-k}\text{\textsc{a}}_{0})).
\end{equation}%
This means that we have determined det$_{p}D_{\tau }(\lambda )$ at $\mathsf{N}+2$ different values of $Z$, together with its asymptotics for $Z\rightarrow \infty $, from which the characterization $(\ref{Func-EQ-1})$ follows.
\end{proof}
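The factorized form of $\Delta _{\infty }$ obtained at the end of the proof is, at its core, a circulant determinant identity. The Python toy below checks it for the simplified case $x=1$, with arbitrary placeholder values for $\tau _{\infty }$, \textsc{a}$_{\infty }$ and \textsc{a}$_{0}$; assuming $q^{p}=1$ as in the cyclic case, the powers of $x$ drop out of the determinant since they only rescale the rows by an overall factor $x^{p(p-1)/2}=1$.

```python
import numpy as np

# Toy check of the asymptotic factorization: for the cyclic tridiagonal
# matrix with diagonal tau_inf and off-diagonals -a_inf, -a_0 (the x -> 1
# version of the asymptotic matrix), the determinant equals the
# circulant-type product prod_k (tau_inf - (q^k a_inf + q^-k a_0)).
p = 5
q = np.exp(2j * np.pi / p)
tau_inf, a_inf, a_0 = 1.7, 0.4 + 0.1j, -0.8   # placeholder values

C = np.zeros((p, p), dtype=complex)
for k in range(p):
    C[k, k] = tau_inf
    C[k, (k - 1) % p] = -a_inf
    C[k, (k + 1) % p] = -a_0

prod = np.prod([tau_inf - (q**k * a_inf + q**-k * a_0) for k in range(p)])
print(np.isclose(np.linalg.det(C), prod))   # True
```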
The discrete characterization of the spectrum given in Theorem \ref{discrete-SoV-ch} can be reformulated in terms of Baxter's type T-Q functional equations, and the eigenstates admit an algebraic Bethe ansatz like reformulation, as we show in the next theorem. These types of reformulations of the spectrum hold for several models once they admit an SoV description, see for example \cite{OpenCyNT-10,OpenCyN-10,OpenCyGN12,OpenCyKitMN14,OpenCyNicT15-2,OpenCyLevNT15,OpenCyNicT15}.
In the following we denote with $Q(\lambda )$ a polynomial in $\Lambda=\lambda^2+\frac{1}{\lambda^2}$ of
degree $\mathsf{N}_{Q}$ of the form:%
\begin{equation}
Q(\lambda )=\prod_{b=1}^{\mathsf{N}_{Q}}\left( \Lambda -\Lambda _{b}\right) .
\label{Q-form}
\end{equation}
\begin{theorem}
For almost all the values of the boundary-bulk parameters such that:%
\begin{equation}
\tau _{\infty }\neq q^{-k}\textsc{a}_{\infty }+q^{k}\textsc{a}_{0}\text{ \ }%
\forall k\in \{0,...,p-1\}, \label{Most-general.case}
\end{equation}%
$\tau (\lambda )\in \Sigma _{\mathcal{T}}$ (the set of the eigenvalues of $\mathcal{T}(\lambda)$) if and only if $\tau (\lambda )$ is an entire function and there exists a unique polynomial $Q(\lambda )$ of the form $\left( \ref{Q-form}\right) $ with $\mathsf{N}_{Q}=\left( p-1\right) \mathsf{N}$, satisfying the following functional equation:%
\begin{equation}
\tau (\lambda )Q(\lambda )=\text{\textsc{a}}(\lambda )Q(\lambda /q)+\text{%
\textsc{a}}(1/\lambda )Q(\lambda q)+\left[ \tau _{\infty }-(q^{-\mathsf{N}%
_{Q}}\text{\textsc{a}}_{\infty }+q^{\mathsf{N}_{Q}}\text{\textsc{a}}_{0})%
\right] \left( \Lambda ^{2}-X^{2}\right) F(\lambda ), \label{Inho-Baxter-EQ}
\end{equation}%
and the conditions:%
\begin{equation}
(Q(\zeta _{a}^{\left( 0\right) }),...,Q(\zeta _{a}^{\left( p-1\right)
}))\neq (0,...,0)\text{ \ \ }\forall a\in \{1,...,\mathsf{N}\}\text{.}
\label{Q-condition}
\end{equation}
We recall that $\textsc{a}_{\infty },\textsc{a}_{0}$ and $F(\lambda)$ are defined in \eqref{55a}-\eqref{55c} and that $X=q+1/q$ \eqref{defX}.
\end{theorem}
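Before turning to the proof, the link between the functional equation \eqref{Inho-Baxter-EQ} and the matrix $D_{\tau }$ can be previewed numerically: evaluating the TQ equation at the points $\lambda ,q\lambda ,...,q^{p-1}\lambda $ produces a linear system of the type \eqref{Sys-eq-cyclic} for the vector $(Q(\lambda ),Q(q\lambda ),...)$. In the Python toy below all functions and values are illustrative placeholders, not the actual model coefficients; in particular, $\tau $ is defined on the lattice directly from the TQ equation, so the check only illustrates the mechanism of the proof.

```python
import numpy as np

p = 3
q = np.exp(2j * np.pi / p)           # q a primitive p-th root of unity
X = q + 1 / q
lam = 0.6 + 0.3j
c = 0.7                              # stands for tau_inf - (q^-NQ a_inf + q^NQ a_0)

def a_coef(mu):                      # toy coefficient a(mu)
    return 1.0 + 0.4 * (mu**2 + mu**-2)

def Qf(mu):                          # toy Q, a function of Lambda
    return 2.0 + mu**2 + mu**-2

F = lam**p + lam**-p                 # toy F, a function of lambda^p only

x = np.array([q**k * lam for k in range(p)])
Lam = x**2 + x**-2
v = np.array([Qf(xi) for xi in x])   # (Q(lambda), Q(q lambda), ...)

# tau on the lattice, defined so that the TQ equation holds at each q^k lam:
tau = (np.array([a_coef(xi) * Qf(xi / q) + a_coef(1 / xi) * Qf(xi * q)
                 for xi in x]) + c * (Lam**2 - X**2) * F) / v

D = np.zeros((p, p), dtype=complex)
for k in range(p):
    D[k, k] = tau[k]
    D[k, (k - 1) % p] = -a_coef(x[k])
    D[k, (k + 1) % p] = -a_coef(1 / x[k])

# The TQ equation on the q-lattice is exactly the system (Sys-eq-cyclic):
print(np.allclose(D @ v, c * (Lam**2 - X**2) * F))   # True
```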
\begin{proof}
Let us first prove that if there exists a $Q(\lambda )$ of the form $\left( \ref{Q-form}\right) $ with $\mathsf{N}_{Q}=\left( p-1\right) \mathsf{N}$ satisfying $\left( \ref{Q-condition}\right) $ and $\left( \ref{Inho-Baxter-EQ}\right) $, with $\tau (\lambda )$ an entire function, then $\tau (\lambda )\in \Sigma _{\mathcal{T}}$. The r.h.s. of the equation $\left( \ref{Inho-Baxter-EQ}\right) $ is a Laurent polynomial in $\lambda $ as we have:%
\left( \ref{Inho-Baxter-EQ}\right) $ is a Laurent polynomial in $\lambda $
as we have:%
\begin{equation}
\text{\textsc{a}}(\lambda )Q(\lambda /q)+\text{\textsc{a}}(1/\lambda
)Q(\lambda q)=\frac{\text{\textsc{x}}(\lambda )Q(q/\lambda )-\text{\textsc{x}%
}(1/\lambda )Q(\lambda q)}{\lambda ^{2}-1/\lambda ^{2}}
\end{equation}%
which is finite in the limits $\lambda \rightarrow \pm 1$, $\lambda \rightarrow \pm i$. Hence, the r.h.s. of $\left( \ref{Inho-Baxter-EQ}\right) $ is a polynomial of degree $p\mathsf{N}+2$ in $\Lambda $, as it is invariant w.r.t. the transformations $\lambda \rightarrow -\lambda $ and $\lambda \rightarrow 1/\lambda $. Then, the assumption that $\tau (\lambda )$
is entire in $\lambda $ implies by the equation $\left( \ref{Inho-Baxter-EQ}%
\right) $ that $\tau (\lambda )$ is a polynomial in $\Lambda $ of the form $(%
\ref{set-tau})$ and that it satisfies the equations:%
\begin{equation}
\text{det}_{p}D_{\tau }(\zeta_{a}^{(0)})=0,\text{ }\forall a\in \{1,...,%
\mathsf{N}\},
\end{equation}%
thanks to $\left( \ref{Inho-Baxter-EQ}\right) $ and $\left( \ref{Q-condition}%
\right) $, so that we obtain by SoV characterization $\tau (\lambda )\in
\Sigma _{\mathcal{T}}$.
Let us now prove the reverse statement, i.e. we assume $\tau (\lambda )\in
\Sigma _{\mathcal{T}}$\ and we prove that there exists $Q(\lambda )$ of the
form $\left( \ref{Q-form}\right) $ with degree $\mathsf{N}_{Q}=\left(
p-1\right) \mathsf{N}$ satisfying $\left( \ref{Q-condition}\right) $ and $%
\left( \ref{Inho-Baxter-EQ}\right) $. Let us consider the system of
equations:%
\begin{equation}
D_{\tau }(\lambda )\left(
\begin{array}{l}
X_{0}(\lambda ) \\
X_{1}(\lambda ) \\
\vdots \\
\vdots \\
X_{p-1}(\lambda )%
\end{array}%
\right) _{p\times 1}=\left[ \tau _{\infty }-(q^{-\mathsf{N}_{Q}}\text{%
\textsc{a}}_{\infty }+q^{\mathsf{N}_{Q}}\text{\textsc{a}}_{0})\right]
F(\lambda )\left(
\begin{array}{l}
\Lambda _{0}^{2}-X^{2} \\
\Lambda _{1}^{2}-X^{2} \\
\vdots \\
\vdots \\
\Lambda _{p-1}^{2}-X^{2}%
\end{array}%
\right) _{p\times 1}, \label{Sys-eq-cyclic}
\end{equation}%
where we have used the notations:%
\begin{equation}
\Lambda _{i}=q^{2i}\lambda ^{2}+\frac{1}{q^{2i}\lambda ^{2}}.
\end{equation}%
From the condition $\tau (\lambda )\in \Sigma _{\mathcal{T}}$ and the
assumption of general values of the boundary-bulk parameters $\left( \ref%
{Most-general.case}\right) $, we know that det$_{p}D_{\tau }(\lambda )$ is a
non-zero polynomial, so defining:%
\begin{equation}
Z_{det_{p}D_{\tau }}=\left\{ \pm i^{a}q^{h+1/2},\pm \zeta _{n}^{\left(
h\right) }\text{ }\forall a\in \left\{ 0,1\right\} ,n\in \{1,...,2\mathsf{N}%
\},h\in \left\{ 0,...,p-1\right\} \right\} ,
\end{equation}%
we can solve the previous system of equations for any value of $\lambda \in \mathbb{C}\backslash Z_{det_{p}D_{\tau }}$ by Cramer's rule:%
\begin{equation}
X_{i}(\lambda )=\frac{\tau _{\infty }-(q^{-\mathsf{N}_{Q}}\text{\textsc{a}}%
_{\infty }+q^{\mathsf{N}_{Q}}\text{\textsc{a}}_{0})}{\left( Z^{2}-4\right)
\prod_{k=0}^{p-1}\left[ \tau _{\infty }-(q^{k}\text{\textsc{a}}_{\infty
}+q^{-k}\text{\textsc{a}}_{0})\right] }\text{det}_{p}D_{\tau
}^{(i+1)}(\lambda ), \label{Def-X_i}
\end{equation}%
where $D_{\tau }^{(i)}(\lambda )$ is the $p\times p$ matrix obtained
replacing the column $i$ by the column at the r.h.s. of $\left( \ref%
{Sys-eq-cyclic}\right) $. Let us now rewrite the system of equations $\left( \ref{Sys-eq-cyclic}\right) $, moving the first entry to the last position in both column vectors:%
\begin{equation}
\tilde{D}_{\tau }(\lambda )\left(
\begin{array}{l}
X_{1}(\lambda ) \\
X_{2}(\lambda ) \\
\vdots \\
X_{p-1}(\lambda ) \\
X_{0}(\lambda )%
\end{array}%
\right) _{p\times 1}=\left[ \tau _{\infty }-(q^{-\mathsf{N}_{Q}}\text{%
\textsc{a}}_{\infty }+q^{\mathsf{N}_{Q}}\text{\textsc{a}}_{0})\right]
F(\lambda )\left(
\begin{array}{l}
\Lambda _{1}^{2}-X^{2} \\
\Lambda _{2}^{2}-X^{2} \\
\vdots \\
\Lambda _{p-1}^{2}-X^{2} \\
\Lambda _{0}^{2}-X^{2}%
\end{array}%
\right) _{p\times 1},
\end{equation}%
where it is easy to see that $\tilde{D}_{\tau }(\lambda )=D_{\tau }(\lambda q)$. Rescaling now the argument of the functions, we can rewrite it as follows:%
\begin{equation}
D_{\tau }(\lambda )\left(
\begin{array}{l}
X_{1}(\lambda /q) \\
X_{2}(\lambda /q) \\
\vdots \\
X_{p-1}(\lambda /q) \\
X_{0}(\lambda /q)%
\end{array}%
\right) _{p\times 1}=\left[ \tau _{\infty }-(q^{-\mathsf{N}_{Q}}\text{%
\textsc{a}}_{\infty }+q^{\mathsf{N}_{Q}}\text{\textsc{a}}_{0})\right]
F(\lambda )\left(
\begin{array}{l}
\Lambda _{0}^{2}-X^{2} \\
\Lambda _{1}^{2}-X^{2} \\
\vdots \\
\Lambda _{p-2}^{2}-X^{2} \\
\Lambda _{p-1}^{2}-X^{2}%
\end{array}%
\right) _{p\times 1},
\end{equation}%
so that it must hold:%
\begin{equation}
X_{i+1}(\lambda /q)=X_{i}(\lambda )\text{ \ }\forall \lambda \in \mathbb{C}%
\backslash Z_{det_{p}D_{\tau }}\text{, }i\in \left\{ 0,...,p-1\right\}
\end{equation}%
where we have used the notation $X_{p}(\lambda )\equiv X_{0}(\lambda )$, or
equivalently:%
\begin{equation}
X_{a}(\lambda )=X_{0}(\lambda q^{a})\text{ \ }\forall \lambda \in \mathbb{C}%
\backslash Z_{det_{p}D_{\tau }}\text{, }a\in \left\{ 1,...,p-1\right\} .
\end{equation}%
Let us now observe that, from their definition, the $X_{a}(\lambda )$ are continuous functions of $\lambda $, so the above equation must indeed be satisfied for any value of $\lambda \in \mathbb{C}$. Moreover, from the identity:%
\begin{equation}
\text{det}_{p}D_{\tau }^{(1)}(\lambda )=\text{det}_{p}D_{\tau
}^{(1)}(1/\lambda ),
\end{equation}%
which we can prove by some simple exchange of rows and columns, and from the
fact that:
\begin{equation}
\forall i\in \{0...p-1\},\ \lambda \rightarrow 1/\lambda \ \Rightarrow \
\Lambda _{i}\rightarrow \Lambda _{p-i}
\end{equation}%
we get the symmetry:%
\begin{equation}
X_{0}(\lambda )=X_{0}(1/\lambda ),
\end{equation}%
which together with the symmetry $X_{0}(\lambda )=X_{0}(-\lambda )$ implies
that $X_{0}(\lambda )$ is a function of $\Lambda $.
By using this last result, we can rewrite the first equation of the system $\left( \ref{Sys-eq-cyclic}\right) $ as follows, $\forall \lambda \in \mathbb{C}$:%
\begin{equation}
\tau (\lambda )X_{0}(\lambda )-\text{%
\textsc{a}}(\lambda )X_{0}(\lambda /q)-\text{\textsc{a}}(1/\lambda
)X_{0}(\lambda q)=\left[ \tau _{\infty }-(q^{-\mathsf{N}_{Q}}\text{\textsc{a}%
}_{\infty }+q^{\mathsf{N}_{Q}}\text{\textsc{a}}_{0})\right] \left( \Lambda
^{2}-X^{2}\right) F(\lambda ).
\end{equation}
Let us now prove that det$_{p}D_{\tau }^{(1)}(\lambda )$ is indeed a
polynomial of degree $(p-1)\mathsf{N}+2p$ in $\Lambda $. Note that in the
following, when we refer to a row $k\in \mathbb{Z}$, we mean the row $%
k^{\prime }\in \left\{ 1,...,p\right\} $ with $k^{\prime }=k$ mod$\,p$. In
the row $\bar{h}=(p+1)/2+h$ of $D_{\tau }^{(1)}(\pm i^{a}q^{1/2-h}\lambda )$
at least one of the three non-zero elements diverges in the limit $%
\lambda \rightarrow \pm 1,\pm i$. We proceed as in the previous
theorem: we define the matrix $\bar{D}_{\tau ,h}^{(1)}(\lambda )$ as the
matrix with all rows coinciding with those of $D_{\tau }^{(1)}(\lambda )$
except the row $h+1$, obtained by summing the rows $h$ and $h+1$ of $%
D_{\tau }^{(1)}(\lambda )$ and dividing them by $((i^{a}q^{h-1/2}\lambda
)^{2}-1/(i^{a}q^{h-1/2}\lambda )^{2})$, and the row $\bar{h}$, obtained by
multiplying the row $\bar{h}$ of $D_{\tau }^{(1)}(\lambda )$ by $%
((i^{a}q^{h-1/2}\lambda )^{2}-1/(i^{a}q^{h-1/2}\lambda )^{2})$. Clearly we have:%
\begin{equation}
\text{det}_{p}\bar{D}_{\tau ,h}^{(1)}(\lambda )=\text{det}_{p}D{}_{\tau
}^{(1)}(\lambda ),
\end{equation}%
and the key point is that now all the rows of the matrix $\bar{D}_{\tau
,h}^{(1)}(\pm i^{a}q^{1/2-h}\lambda )$ are finite in the limits $\lambda
\rightarrow \pm 1,\pm i$. The nonzero elements of the rows $h$,
$h+1$ and $\bar{h}$ of $\bar{D}_{\tau ,h}^{(1)}(\pm i^{a}q^{1/2-h})$ read:%
\begin{eqnarray}
&&\left[ \bar{D}_{\tau ,h}^{(1)}(\pm i^{a}q^{1/2-h})\right] _{h+1,h-1}\left.
=\right. -s_{a,\pm }(1-\delta _{h-1,1})+\delta _{h-1,1}\omega _{a}, \\
&&\left[ \bar{D}_{\tau ,h}^{(1)}(\pm i^{a}q^{1/2-h})\right] _{h+1,h}\left.
=\right. -r_{a,\pm }(1-\delta _{h,1})+\delta _{h,1}\omega _{a}, \\
&&\left[ \bar{D}_{\tau ,h}^{(1)}(\pm i^{a}q^{1/2-h})\right] _{h+1,h+1}\left.
=\right. r_{a,\pm }(1-\delta _{h+1,1})+\delta _{h+1,1}\omega _{a}, \\
&&\left[ \bar{D}_{\tau ,h}^{(1)}(\pm i^{a}q^{1/2-h})\right] _{h+1,h+2}\left.
=\right. s_{a,\pm }(1-\delta _{h+2,1})+\delta _{h+2,1}\omega _{a}, \\
&&\left[ \bar{D}_{\tau ,h}^{(1)}(\pm i^{a}q^{1/2-h})\right] _{\bar{h},\bar{h}%
-1}\left. =\right. -\text{\textsc{x}}(\pm i^{a})(-1)^{a}(1-\delta _{\bar{h}%
-1,1}),\text{ } \\
&&\left[ \bar{D}_{\tau ,h}^{(1)}(\pm i^{a}q^{1/2-h})\right] _{\bar{h},\bar{h}%
+1}\left. =\right. \text{\textsc{x}}(\pm i^{a})(-1)^{a}(1-\delta _{\bar{h}%
+1,1}), \\
&&\left[ \bar{D}_{\tau ,h}^{(1)}(\pm i^{a}q^{1/2-h})\right] _{h,h-1}\left.
=\right. \tau (\pm i^{a}q^{1/2})(1-\delta _{h+1,1}), \\
&&\left[ \bar{D}_{\tau ,h}^{(1)}(\pm i^{a}q^{1/2-h})\right] _{h,h+1}\left.
=\right. -\tau (\pm i^{a}q^{1/2})(1-\delta _{h+1,1}),
\end{eqnarray}%
where we have defined:%
\begin{equation}
\omega _{a}=(-1)^{a}\left( q^{2}-1/q^{2}\right) .
\end{equation}%
The remaining rows of $\bar{D}_{\tau ,h}^{(1)}(\pm i^{a}q^{1/2-h})$ produce
the tridiagonal part of this matrix. It is possible to prove that for
any $h\in \left\{ 0,...,p-1\right\} $ the matrix $\bar{D}_{\tau
,h}^{(1)}(\pm i^{a}q^{1/2-h})$ has linearly dependent rows, so that
det$_{p}D_{\tau }^{(1)}(\pm i^{a}q^{1/2-h})=0$ and the following
factorization holds:%
\begin{equation}
\text{det}_{p}D_{\tau }^{(1)}(\lambda )=\left( \lambda ^{2p}-\frac{1}{%
\lambda ^{2p}}\right) P_{\tau }\left( \lambda \right) .
\end{equation}%
Here $P_{\tau }\left( \lambda \right) $ is a Laurent polynomial of degree $%
2(p-1)\mathsf{N}+2p$ in $\lambda $, with the following odd parity:%
\begin{equation}
P_{\tau }(1/\lambda )=-P_{\tau }\left( \lambda \right) ,
\end{equation}%
det$_{p}D_{\tau }^{(1)}(\lambda )$ being a polynomial of degree $(p-1)%
\mathsf{N}+2p$ in $\Lambda $. We now want to prove that in fact:%
\begin{equation}
\text{det}_{p}D_{\tau }^{(1)}(\lambda )=\left( \lambda ^{2p}-\frac{1}{%
\lambda ^{2p}}\right) ^{2}\bar{Q}_{\tau }\left( \lambda \right) ,
\label{Full-zero structure}
\end{equation}%
where $\bar{Q}_{\tau }\left( \lambda \right) $ is a polynomial of degree $%
(p-1)\mathsf{N}$ in $\Lambda $. In order to do so we write down the equation:%
\begin{align}
\tau (\lambda )R_{\tau }(\lambda )& =\text{\textsc{a}}(\lambda )R_{\tau
}(\lambda /q)+\text{\textsc{a}}(1/\lambda )R_{\tau }(\lambda q) \notag \\
& +\left( Z^{2}-4\right) \left( \Lambda ^{2}-X^{2}\right) \prod_{k=0}^{p-1}
\left[ \tau _{\infty }-(q^{k}\text{\textsc{a}}_{\infty }+q^{-k}\text{\textsc{%
a}}_{0})\right] F(\lambda ),
\end{align}%
where for convenience we have denoted $R_{\tau }(\lambda )=$det$_{p}D_{\tau
}^{(1)}(\lambda )$, and we recall $Z=\lambda^{2p}+\frac{1}{\lambda^{2p}}$. The above equation is a direct consequence of the
equation satisfied by $X_{0}(\lambda )$ and of the definition of the latter
in terms of det$_{p}D_{\tau }^{(1)}(\lambda )$. Let us now take the limit
$\lambda \rightarrow \pm i^{a}$, with $a\in \left\{ 0,1\right\} $, of the
above equation:%
\begin{align}
\tau (\pm i^{a})R_{\tau }(\pm i^{a})& =\frac{1}{\pm 2i^{a}}\frac{d\text{%
\textsc{x}}}{d\lambda }(\pm i^{a})\left( R_{\tau }(\pm i^{a}/q)-R_{\tau
}(\pm i^{a}q)\right) \notag \\
& +\text{\textsc{x}}(\pm i^{a})\lim_{\lambda \rightarrow \pm i^{a}}\left[
\frac{R_{\tau }(\lambda /q)}{\lambda ^{2}-1/\lambda ^{2}}-\frac{R_{\tau
}(\lambda q)}{\lambda ^{2}-1/\lambda ^{2}}\right] ,
\end{align}%
Now, by using the known identities:%
\begin{eqnarray}
R_{\tau }(\pm i^{a}) &=&R_{\tau }(\pm i^{a}/q)=R_{\tau }(\pm i^{a}q)=0, \\
\frac{R_{\tau }(\lambda /q)}{\lambda ^{2}-1/\lambda ^{2}} &=&\frac{R_{\tau
}(q/\lambda )}{\lambda ^{2}-1/\lambda ^{2}},
\end{eqnarray}%
we get:
\begin{equation}
\lim_{\lambda \rightarrow \pm i^{a}}\frac{R_{\tau }(\lambda /q)}{\lambda
^{2}-1/\lambda ^{2}}=-\lim_{\lambda \rightarrow \pm i^{a}}\frac{R_{\tau
}(\lambda q)}{\lambda ^{2}-1/\lambda ^{2}}, \label{Zero-deriv+-q}
\end{equation}
and so, since \textsc{x}$(\pm i^{a})\neq 0$,
\begin{equation}
\lim_{\lambda \rightarrow \pm i^{a}}\frac{R_{\tau }(\lambda /q)}{\lambda
^{2}-1/\lambda ^{2}}=0.
\end{equation}
These results imply the identities:%
\begin{equation}
P_{\tau }(\pm i^{a}/q)=-P_{\tau }(\pm i^{a}q)=0. \label{Zero-deriv+-q+}
\end{equation}%
We can now write the functional equation for $P_{\tau }(\lambda )$:%
\begin{align}
\tau (\lambda )P_{\tau }(\lambda )& =\text{\textsc{a}}(\lambda )P_{\tau
}(\lambda /q)+\text{\textsc{a}}(1/\lambda )P_{\tau }(\lambda q) \notag \\
& +\left( \lambda ^{2p}-\frac{1}{\lambda ^{2p}}\right) \left( \Lambda
^{2}-X^{2}\right) \prod_{k=0}^{p-1}\left[ \tau _{\infty }-(q^{k}\text{%
\textsc{a}}_{\infty }+q^{-k}\text{\textsc{a}}_{0})\right] F(\lambda ).
\end{align}%
Taking the limit $\lambda \rightarrow \pm i^{a}$ with $a\in \left\{
0,1\right\} $, we obtain:%
\begin{align}
\tau (\pm i^{a})P_{\tau }(\pm i^{a})& =\frac{1}{\pm 2i^{a}}\frac{d\text{%
\textsc{x}}}{d\lambda }(\pm i^{a})\left( P_{\tau }(\pm i^{a}/q)-P_{\tau
}(\pm i^{a}q)\right) \notag \\
& +\text{\textsc{x}}(\pm i^{a})\lim_{\lambda \rightarrow \pm i^{a}}\left[
\frac{P_{\tau }(\lambda /q)}{\lambda ^{2}-1/\lambda ^{2}}-\frac{P_{\tau
}(\lambda q)}{\lambda ^{2}-1/\lambda ^{2}}\right] ,
\end{align}%
so that using the previous result $\left( \ref{Zero-deriv+-q+}\right) $ and
the identity:%
\begin{equation}
\lim_{\lambda \rightarrow \pm i^{a}}\frac{P_{\tau }(\lambda /q)}{\lambda
^{2}-1/\lambda ^{2}}=\lim_{\lambda \rightarrow \pm i^{a}}\frac{P_{\tau
}(\lambda q)}{\lambda ^{2}-1/\lambda ^{2}}
\end{equation}%
we obtain%
\begin{equation}
P_{\tau }(\pm i^{a})=0,
\end{equation}%
since $\tau (\pm i^{a})\neq 0$. Let us now evaluate the functional equation
for $P_{\tau }(\lambda )$ at the points $\lambda =\pm i^{a}q^{\epsilon }$,
for $a\in \left\{ 0,1\right\} $ and $\epsilon \in \left\{ -1,1\right\} $; we
obtain:%
\begin{eqnarray}
\tau (\pm i^{a}q)P_{\tau }(\pm i^{a}q) &=&\text{\textsc{a}}(\pm
i^{a}q)P_{\tau }(\pm i^{a})+\text{\textsc{a}}(\pm i^{a}/q)P_{\tau }(\pm
i^{a}q^{2}), \\
\tau (\pm i^{a}/q)P_{\tau }(\pm i^{a}/q) &=&\text{\textsc{a}}(\pm
i^{a}/q)P_{\tau }(\pm i^{a}/q^{2})+\text{\textsc{a}}(\pm i^{a}q)P_{\tau
}(\pm i^{a}),
\end{eqnarray}%
implying:%
\begin{equation}
P_{\tau }(\pm i^{a}/q^{2})=-P_{\tau }(\pm i^{a}q^{2})=0,
\end{equation}%
since \textsc{a}$(\pm i^{a}q^{\epsilon })\neq 0$ for $a\in \left\{
0,1\right\} $ and $\epsilon \in \left\{ -1,1\right\} $. We can iterate these
computations for $\lambda =\pm i^{a}q^{b\epsilon }$, for any $a\in \left\{
0,1\right\} $, $\epsilon \in \left\{ -1,1\right\} $ and $b\in \left\{
2,...,\left( p-3\right) /2\right\} $, obtaining:%
\begin{equation}
P_{\tau }(\pm i^{a}/q^{2b})=-P_{\tau }(\pm i^{a}q^{2b})=0,\text{ for any\ }%
b\in \left\{ 1,...,\left( p-3\right) /2\right\} .
\end{equation}%
In the cases $\lambda =\pm i^{a}q^{\pm 1/2}$, since \textsc{a}$(\pm
i^{a}/q^{1/2})=0$, the functional equation for $P_{\tau }(\lambda )$ gives us:%
\begin{equation}
\tau (\pm i^{a}q^{\pm 1/2})P_{\tau }(\pm i^{a}q^{\pm 1/2})=\text{\textsc{a}}%
(\pm i^{a}q^{1/2})P_{\tau }(\pm i^{a}q^{\mp 1/2}),
\end{equation}%
which, since $P_{\tau }(\pm i^{a}q^{1/2})=-P_{\tau }(\pm i^{a}q^{-1/2})$ and $%
\tau (\pm i^{a}q^{\pm 1/2})=$\textsc{a}$(\pm i^{a}q^{1/2})\neq 0$, implies
the identity:%
\begin{equation}
P_{\tau }(\pm i^{a}q^{1/2})=-P_{\tau }(\pm i^{a}q^{-1/2})=0,
\label{Final-zero}
\end{equation}%
so that the factorization $\left( \ref{Full-zero structure}\right) $ is
proven and we get that:%
\begin{equation}
X_{0}(\lambda )=\frac{\tau _{\infty }-(q^{-\mathsf{N}_{Q}}\text{\textsc{a}}%
_{\infty }+q^{\mathsf{N}_{Q}}\text{\textsc{a}}_{0})}{\prod_{k=0}^{p-1}\left[
\tau _{\infty }-(q^{k}\text{\textsc{a}}_{\infty }+q^{-k}\text{\textsc{a}}%
_{0})\right] }\bar{Q}_{\tau }(\lambda ),
\end{equation}%
is a polynomial of degree $\mathsf{N}_{Q}=\left( p-1\right) \mathsf{N}$ in $%
\Lambda $, which has the form $\left( \ref{Q-form}\right) $; this follows by
taking the asymptotics of its functional equation. We can therefore fix:%
\begin{equation}
Q(\lambda )\equiv X_{0}(\lambda ),
\end{equation}%
hence giving a constructive proof of the existence of a polynomial $Q$-function
solution of equation $\left( \ref{Inho-Baxter-EQ}\right) $. Uniqueness
follows by observing that if $\hat{Q}(\lambda )$ is another polynomial
solution, then:%
\begin{equation}
D_{\tau }(\lambda )\left(
\begin{array}{l}
Q(\lambda )-\hat{Q}(\lambda ) \\
Q(\lambda q)-\hat{Q}(\lambda q) \\
\vdots \\
\vdots \\
Q(\lambda q^{p-1})-\hat{Q}(\lambda q^{p-1})%
\end{array}%
\right) _{p\times 1}=\left(
\begin{array}{l}
0 \\
0 \\
\vdots \\
\vdots \\
0%
\end{array}%
\right) _{p\times 1},
\end{equation}%
from which it follows $Q(\lambda )\equiv \hat{Q}(\lambda )$ as $D_{\tau
}(\lambda )$ is invertible for any $\lambda \in \mathbb{C}\backslash
Z_{det_{p}D_{\tau }}$.
Finally, let us show that $Q(\lambda )$ satisfies the condition $\left( \ref%
{Q-condition}\right) $. By the definition $\left( \ref{Def-X_i}\right) $, $%
Q(\lambda )$ is a continuous function of the boundary-bulk parameters, so
it is enough to prove this statement for some value of these parameters in
order to show that it holds for almost all their values.
Let us impose the condition $\left( \ref{Condi-bulk-quasi-L}\right) $, where
the ratio $\beta /\alpha $ is fixed by $\left( \ref{guage-alpha}\right) $,
then the following identities are satisfied:%
\begin{equation}
\text{\textsc{a}}(\zeta _{a}^{(0)})=0\text{ \ }\forall a\in \{1,...,\mathsf{N%
}\},
\end{equation}%
and the SoV characterization of the transfer matrix spectrum holds for any
value of the boundary-bulk parameters satisfying the inequalities $\left( %
\ref{Special-B-simple}\right) $-$\left( \ref{Special-B-simple2}\right) $. So in particular if we impose:%
\begin{equation}
\mu _{n_{k},-}=1/(q^{1+k}\mu _{a,+})\text{ \ \ }\forall k\in \{1,...,p-1\}%
\text{,}
\end{equation}%
for some $n_{k}\in \{1,...,\mathsf{N}\}\backslash \{a\}$ once we have chosen any $a\in \{1,...,\mathsf{N}\}$. Under these conditions it holds:%
\begin{equation}
\text{\textsc{a}}(\zeta _{a}^{(k)})=0,\text{\ }\forall k\in \{1,...,p-1\},
\end{equation}%
and the SoV representation implies the following centrality condition:%
\begin{equation}
\prod_{k=0}^{p-1}\mathcal{T}(\zeta _{a}^{(k)})=\prod_{k=0}^{p-1}\text{%
\textsc{a}}(1/\zeta _{a}^{(k)}),
\end{equation}%
from which in particular follows:%
\begin{equation}
\prod_{k=0}^{p-1}\tau (\zeta _{a}^{(k)})=\prod_{k=0}^{p-1}\text{\textsc{a}}%
(1/\zeta _{a}^{(k)}).
\end{equation}%
Let us now remark that both sides of the above equation are continuous
w.r.t. the boundary-bulk parameters, so that the identity also holds
in the special limit $\mu _{a,-}\rightarrow q^{1-p}/\mu
_{a,+}$, for which \textsc{a}$(1/\zeta _{a}^{(p-1)})=0$; we thus
get:%
\begin{equation}
\exists !\bar{h}\in \{0,...,p-1\}:\tau (\zeta _{a}^{(\bar{h})})=0.
\end{equation}%
By definition of the function $Q(\lambda )$ under these conditions and limit
on the bulk parameters we get:%
\begin{equation}
Q(\zeta _{a}^{(\bar{h})})\propto \text{det}_{p}%
\begin{pmatrix}
W_{a,\bar{h}} & -\text{\textsc{a}}{}(1/\zeta _{a}^{(\bar{h})}) & 0 & \cdots
& 0 & 0 \\
W_{a,\bar{h}+1} & \tau (\zeta _{a}^{(\bar{h}+1)}) & -\text{\textsc{a}}%
(1/\zeta _{a}^{(\bar{h}+1)}) & 0 & \cdots & 0 \\
W_{a,\bar{h}+2} & {\quad }0 & \tau (\zeta _{a}^{(\bar{h}+2)}) & -\text{%
\textsc{a}}(1/\zeta _{a}^{(\bar{h}+2)}) & & \vdots \\
\vdots & & \cdots & & & \vdots \\
W_{a,p-1} & 0 & \cdots & \tau (\zeta _{a}^{(p-1)}) & \cdots & \vdots \\
\vdots & & & & \ddots {\qquad } & 0 \\
W_{a,\bar{h}-2} & \ldots & 0 & 0 & \tau (\zeta _{a}^{(\bar{h}-2)}) & -\text{%
\textsc{a}}(1/\zeta _{a}^{(\bar{h}-2)}) \\
W_{a,\bar{h}-1} & 0 & \ldots & 0 & 0 & \tau (\zeta _{a}^{(\bar{h}-1)})%
\end{pmatrix}%
, \label{For-1}
\end{equation}%
where we have defined:%
\begin{equation}
W_{a,k}=\left( (\zeta _{a}^{(k)})^{2}+1/(\zeta _{a}^{(k)})^{2}\right)
^{2}-X^{2}.
\end{equation}%
Now replacing the first row $R_{1}$ with the following linear combination of
rows:%
\begin{equation}
\bar{R}_{1}=R_{1}+\sum_{i=0}^{p-2-\bar{h}}\prod_{j=0}^{i}\frac{\text{\textsc{%
a}}{}(1/\zeta _{a}^{(\bar{h}+j)})}{\tau (\zeta _{a}^{(\bar{h}+j+1)})}R_{2+i},
\end{equation}%
we get%
\begin{equation}
\bar{R}_{1}=%
\begin{pmatrix}
\bar{W}_{a,\bar{h}} & 0 & \cdots & 0 & 0%
\end{pmatrix}%
_{1\times p}
\end{equation}%
where:%
\begin{equation}
\bar{W}_{a,\bar{h}}=W_{a,\bar{h}}+\sum_{i=0}^{p-2-\bar{h}}\prod_{j=0}^{i}%
\frac{\text{\textsc{a}}{}(1/\zeta _{a}^{(\bar{h}+j)})}{\tau (\zeta _{a}^{(%
\bar{h}+j+1)})}W_{a,\bar{h}+1+i}
\end{equation}%
and so:
\begin{equation}
Q(\zeta _{a}^{(\bar{h})})=\bar{W}_{a,\bar{h}}\prod_{k\neq \bar{h}%
,k=0}^{p-1}\tau (\zeta _{a}^{(k)})\neq 0,
\end{equation}%
for generic values of the boundary-bulk parameters. Indeed, since the $W_{a,%
\bar{h}+1+i}$ are functions only of the bulk parameter $\mu _{a,+}$ while
the ratios \textsc{a}${}(1/\zeta _{a}^{(\bar{h}+j)})/\tau (\zeta _{a}^{(\bar{%
h}+j+1)})$ are functions of both the boundary and the bulk parameters, we
can prove that $\bar{W}_{a,\bar{h}}\neq 0$. Explicitly, we can compute the
asymptotics of $\bar{W}_{a,\bar{h}}$ in the limit $\mu _{a,+}\rightarrow
\infty $ by using the known asymptotics of the transfer matrix, thereby
showing that it is non-zero for generic values of the boundary-bulk parameters.
\end{proof}
In the previous theorem we have excluded the boundary-bulk one-constraint
cases leading to an identically zero det$D_{\tau }(\lambda )$ for any $\tau
(\lambda )\in \Sigma _{\mathcal{T}}$; these specific cases are considered in
the next theorem.
\begin{theorem}
Let us assume that there exists $k \in \{0,...,p-1\}$ such that it holds:%
\begin{equation}
\tau _{\infty }=q^{-k}\text{\textsc{a}}_{\infty }+q^{k}\text{\textsc{a}}_{0},
\label{one-constraint-case}
\end{equation}%
then, for almost all the values of the boundary-bulk parameters, $\tau
(\lambda )\in \Sigma _{\mathcal{T}}$ (the set of the eigenvalues of $\mathcal{T}(\lambda)$) if and only if $\tau (\lambda )$ is
an entire function and there exists a unique polynomial $Q(\lambda )$
of the form $\left( \ref{Q-form}\right) $ with $\mathsf{N}_{Q}\leq \left(
p-1\right) (\mathsf{N}+1)$ and $\mathsf{N}_{Q}=k$ mod$\,p$, satisfying the
following homogeneous Baxter equation:%
\begin{equation}
\tau (\lambda )Q(\lambda )=\text{\textsc{a}}(\lambda )Q(\lambda /q)+\text{%
\textsc{a}}(1/\lambda )Q(\lambda q), \label{ho-Baxter-EQ}
\end{equation}%
and the conditions:%
\begin{equation}
(Q(\zeta _{a}^{\left( 0\right) }),...,Q(\zeta _{a}^{\left( p-1\right)
}))\neq (0,...,0)\text{ \ \ }\forall a\in \{1,...,\mathsf{N}\}\text{.}
\label{Q-condition2}
\end{equation}
\end{theorem}
\begin{proof}
First, let us assume that $\tau (\lambda )$ and $Q(\lambda )$ satisfy the
homogeneous Baxter equation, with $\tau (\lambda )$ an entire function and $%
Q(\lambda )$ a polynomial of the form $\left( \ref{Q-form}\right) $ with $%
\mathsf{N}_{Q}\leq \left( p-1\right) (\mathsf{N}+1)$ and $\mathsf{N}_{Q}=k$
mod$\,p$; then from this same equation it follows that $\tau (\lambda )$ is
a polynomial of the form $(\ref{set-tau})$. Moreover, for any fixed $\lambda
\in \mathbb{C}$ we can construct the following homogeneous system of
equations:%
\begin{equation}
D_{\tau }(\lambda )\left(
\begin{array}{l}
Q(\lambda ) \\
Q(\lambda q) \\
\vdots \\
\vdots \\
Q(\lambda q^{p-1})%
\end{array}%
\right) _{p\times 1}=\left(
\begin{array}{l}
0 \\
0 \\
\vdots \\
\vdots \\
0%
\end{array}%
\right) _{p\times 1},
\end{equation}%
which is satisfied as a consequence of the Baxter equation. Finally, since $%
(Q(\lambda ),...,Q(\lambda q^{p-1}))$\ is non-zero for any $\lambda \in \mathbb{%
C}$, up to at most a finite number of values, we get:%
\begin{equation}
\text{det}D_{\tau }(\lambda )=0\text{ \ \ }\forall \lambda \in \mathbb{C}
\end{equation}%
so that Proposition \ref{General F-Eq} implies $\tau (\lambda )\in \Sigma _{%
\mathcal{T}}$.
To prove the reverse statement we use the results of the Lemma \ref%
{Cofactor-prop} on the matrix $D_{\tau }(\lambda )$ and on its cofactors:%
\begin{equation}
\text{\textsc{C}}_{i,j}(\lambda )=(-1)^{i+j}\text{det}_{p-1}D_{\tau
,i,j}(\lambda )\text{.} \label{cofactor-def}
\end{equation}%
We take now $\tau (\lambda )\in \Sigma _{\mathcal{T}}$ from which it holds:%
\begin{equation}
\text{det}D_{\tau }(\lambda )=0\text{ \ \ }\forall \lambda \in \mathbb{C},
\end{equation}%
and so by Lemma \ref{Cofactor-prop} it follows that rank$D_{\tau }(\lambda
)=p-1$ for any $\lambda \in \mathbb{C}\backslash K$, where $K$ is a finite
set of complex numbers if not empty. Then the matrix composed of the
cofactors of the matrix $D_{\tau }(\lambda )$ has rank $1$ for any $\lambda
\in \mathbb{C}\backslash K$. This just means the proportionality:%
\begin{equation}
\text{\textsc{V}}_{i}(\lambda )=\text{\textsc{a}}_{i,j}(\lambda )\text{%
\textsc{V}}_{j}(\lambda )\text{ }\forall \lambda \in \mathbb{C}\backslash
K,\forall i,j\in \{1,...,p\}
\end{equation}%
where we have defined:%
\begin{equation}
\text{\textsc{V}}_{i}(\lambda )\equiv (\text{\textsc{C}}_{i,1}(\lambda ),%
\text{\textsc{C}}_{i,2}(\lambda ),...,\text{\textsc{C}}_{i,p}(\lambda ))%
\text{ }\forall \lambda \in \mathbb{C}\backslash K,\forall i\in \{1,...,p\}
\end{equation}%
and \textsc{a}$_{i,j}(\lambda )$ are some functions such that:
\begin{equation}
\text{\textsc{a}}_{i,j}(\lambda )\neq 0\text{ and finite for any }\lambda
\in \mathbb{C}\backslash \left\{ K\cup K_{0}\cup K_{i}\cup K_{j}\right\}
\end{equation}%
where $K_{0}$ is the set of the $p$-roots of unit and%
\begin{equation}
K_{a}\equiv \left\{ x\in \mathbb{C}:\text{\textsc{V}}_{a}(x)\equiv
(0,...,0)\right\} \text{ }\forall a\in \{1,...,p\}
\end{equation}%
these sets are finite, if not empty, since the elements of the vectors $%
(\Lambda ^{p}-X^{p})$\textsc{V}$_{i}(\lambda )$ are Laurent polynomials. The
above identities in particular imply:
\begin{equation}
\text{\textsc{a}}_{1,2}(\lambda )\text{\textsc{C}}_{1,1}(\lambda )\text{%
\textsc{C}}_{2,2}(\lambda )=\text{\textsc{a}}_{1,2}(\lambda )\text{\textsc{C}%
}_{1,2}(\lambda )\text{\textsc{C}}_{2,1}(\lambda )\text{ }\forall \lambda
\in \mathbb{C}\backslash K \label{proportionality}
\end{equation}%
so that for any $\lambda \in \mathbb{C}\backslash \left\{ K\cup K_{0}\cup
K_{i}\cup K_{j}\right\} $ it holds:%
\begin{equation}
\text{\textsc{C}}_{1,1}(\lambda )\text{\textsc{C}}_{2,2}(\lambda )=\text{%
\textsc{C}}_{1,2}(\lambda )\text{\textsc{C}}_{2,1}(\lambda ).
\label{Cofactor-connection}
\end{equation}%
Hence it holds for any $\lambda \in \mathbb{C}$ by the continuity
properties of the cofactors, $\left\{ K\cup K_{0}\cup K_{i}\cup
K_{j}\right\} $ being a finite set of values. Similarly, the fact that the
vectorial condition $D_{\tau }(\lambda )$\textsc{V}$_{1}(\lambda )=\mathbf{0}$ holds
true for any $\lambda \in \mathbb{C}\backslash K$ implies that it is indeed
satisfied for any $\lambda \in \mathbb{C}$. Here, we write explicitly the
first element of this vectorial condition:%
\begin{equation}
\tau (\lambda )\text{\textsc{C}}_{1,1}(\lambda )=\text{\textsc{a}}(\lambda )%
\text{\textsc{C}}_{1,p}(\lambda )+\text{\textsc{a}}(1/\lambda )\text{\textsc{%
C}}_{1,2}(\lambda ), \label{Bax-eq}
\end{equation}%
together with the rewriting of (\ref{Cofactor-connection}) by using the
identity (\ref{Sym-1}):%
\begin{equation}
\text{\textsc{C}}_{1,1}(\lambda )\text{\textsc{C}}_{1,1}(\lambda q)=\text{%
\textsc{C}}_{1,2}(\lambda )\text{\textsc{C}}_{1,p}(q\lambda ).
\label{Inter-step}
\end{equation}%
Once we recall that \textsc{C}$_{1,1}(\lambda )$, \textsc{C}$_{1,2}(\lambda
) $ and \textsc{C}$_{1,p}(\lambda )$ are Laurent polynomials in $\lambda $
satisfying the factorizations $\left( \ref{F11}\right) $, $\left( \ref{F12}%
\right) $ and $\left( \ref{F1p}\right) $, respectively, it follows that the
above two equations hold as well when written in terms of the functions $%
\widehat{\text{\textsc{C}}}_{1,1}(\lambda )$, $\widehat{\text{\textsc{C}}}%
_{1,2}(\lambda )$ and $\widehat{\text{\textsc{C}}}_{1,p}(\lambda )$.
Similarly to what has been done in Lemma 5 of the
paper \cite{OpenCyN-10}, we can show that these two equations for $%
\widehat{\text{\textsc{C}}}_{1,1}(\lambda )$, $\widehat{\text{\textsc{C}}}%
_{1,2}(\lambda )$ and $\widehat{\text{\textsc{C}}}_{1,p}(\lambda )$, together with
their symmetry properties $\left( \ref{Sym-3}\right) $, imply that if $%
\widehat{\text{\textsc{C}}}_{1,1}(\lambda )$ has a common zero with $%
\widehat{\text{\textsc{C}}}_{1,2}(\lambda )$ then this is also a zero of $%
\widehat{\text{\textsc{C}}}_{1,p}(\lambda )$, and the inverse of such a
zero is also a common zero of these polynomials. Moreover, the same statement
holds upon exchanging $\widehat{\text{\textsc{C}}}_{1,2}(\lambda )$ with $%
\widehat{\text{\textsc{C}}}_{1,p}(\lambda )$. So we can denote by \textsc{c%
}$_{1,1}\overline{\text{\textsc{C}}}_{1,1}(\lambda )$, \textsc{c}$_{1,2}%
\overline{\text{\textsc{C}}}_{1,2}(\lambda )$ and \textsc{c}$_{1,p}\overline{%
\text{\textsc{C}}}_{1,p}(\lambda )$ the polynomials obtained by simplifying the
common factors in $\widehat{\text{\textsc{C}}}_{1,1}(\lambda )$, $\widehat{%
\text{\textsc{C}}}_{1,2}(\lambda )$ and $\widehat{\text{\textsc{C}}}%
_{1,p}(\lambda )$. Then, they have to satisfy the relations:
\begin{equation}
\overline{\text{\textsc{C}}}_{1,p}(\lambda )=y_{1,1}\text{$\overline{\text{%
\textsc{C}}}$}_{1,1}(\lambda q^{-1}),\text{ \ \ $\overline{\text{\textsc{C}}}
$}_{1,2}(\lambda )=y_{1,1}^{-1}\text{$\overline{\text{\textsc{C}}}$}%
_{1,1}(\lambda q) \label{proportionality-cofactor}
\end{equation}%
and, defining $x_{1,1}\equiv $\textsc{c}$_{1,1}/$\textsc{c}$_{1,2}=$\textsc{c}$%
_{1,p}/$\textsc{c}$_{1,1}$, we obtain the following Baxter equation for the
polynomial $\overline{\text{\textsc{C}}}_{1,1}(\lambda )$:
\begin{equation}
\tau (\lambda )\text{$\overline{\text{\textsc{C}}}$}_{1,1}(\lambda )=\left(
x_{1,1}y_{1,1}\right) \text{\textsc{a}}(\lambda )\text{$\overline{\text{%
\textsc{C}}}$}_{1,1}(\lambda q^{-1})+\left( 1/(x_{1,1}y_{1,1})\right) \text{%
\textsc{a}}(1/\lambda )\text{$\overline{\text{\textsc{C}}}$}_{1,1}(\lambda
q), \label{deform-BAX}
\end{equation}%
and evaluating the above equation at $\lambda =q^{1/2}$ we get:%
\begin{equation}
\tau (q^{1/2})\text{$\overline{\text{\textsc{C}}}$}_{1,1}(q^{1/2})=\left(
x_{1,1}y_{1,1}\right) \text{\textsc{a}}(q^{1/2})\text{$\overline{\text{%
\textsc{C}}}$}_{1,1}(q^{-1/2}),
\end{equation}%
from which it follows $x_{1,1}y_{1,1}=1$ once we recall that \textsc{C}$%
_{1,1}(q^{1/2})\neq 0$ and that $\overline{\text{\textsc{C}}}$$%
_{1,1}(\lambda )$ is even under $\lambda \rightarrow 1/\lambda $. So, we can
define:
\begin{equation}
Q(\lambda )\equiv \text{$\overline{\text{\textsc{C}}}$}_{1,1}(\lambda ),
\end{equation}%
a polynomial in $\Lambda $ of maximal degree $\left( p-1\right) (\mathsf{N}%
+1)$, which satisfies the homogeneous Baxter equation as required.
\end{proof}
Let us introduce now the following states:%
\begin{eqnarray}
\left\langle \beta ,\omega \right\vert &=&\sum_{h_{1},...,h_{\mathsf{N}%
}=0}^{p-1}\prod_{a=1}^{\mathsf{N}}\prod_{k_{a}=0}^{h_{a}-1}\frac{\text{%
\textsc{a}}(1/\zeta _{a}^{(k_{a})})}{\text{\textsc{d}}(1/\zeta
_{a}^{(k_{a})})}\prod_{1\leq b<a\leq \mathsf{N}%
}(X_{a}^{(h_{a})}-X_{b}^{(h_{b})})\left\langle \beta ,h_{1},...,h_{\mathsf{N}%
}\right\vert , \\
|\beta ,\bar{\omega}\rangle &=&\sum_{h_{1},...,h_{\mathsf{N}%
}=0}^{p-1}\prod_{1\leq b<a\leq \mathsf{N}}(X_{a}^{(h_{a})}-X_{b}^{(h_{b})})|%
\beta ,h_{1},...,h_{\mathsf{N}}\rangle ,\label{Ref-Sta-R}
\end{eqnarray}%
(see \eqref{45}, \eqref{412} and \eqref{419}) and the following renormalization of the $\mathcal{B}_{-}$-operator family%
\begin{equation}
\mathcal{\hat{B}}_{-}(\lambda |\beta )=\frac{\mathcal{B}_{-}(\lambda |\beta
)T_{\beta }^{2}}{(\lambda ^{2}/q-q/\lambda ^{2})\text{\textsc{b}}_{-}(\beta )%
},
\end{equation}%
which is a degree $\mathsf{N}$ polynomial in $\Lambda=\lambda^2+\frac{1}{\lambda^2}$, and where $T_\beta$ is simply a shift on the gauge parameter $\beta$ (see \eqref{betashifteuh}). As first remarked
in the papers \cite{DerKM03,OpenCyDerKM03-2}, the polynomial
characterization of the $Q$-function and the SoV characterization imply
the Bethe-like rewriting of the transfer matrix eigenstates stated in the
following\footnote{One should remark that the logic that led us to the ABA rewriting of the transfer matrix eigenstates is completely different from the one underlying the algebraic Bethe ansatz. We obtain it by rewriting the original SoV form, and this allows us to identify the non-trivial state that plays a role similar to a reference state. Note, however, that it has properties rather different from an ABA reference state, as in general it is not an eigenstate of the transfer matrix! For simpler models, for which such a reference state can be naturally guessed, one can also follow the ABA logic, i.e.\ make an ansatz on the form of the ABA states and then compute the action of the transfer matrix on these states, deriving the Bethe equations by setting to zero the so-called unwanted terms. This is what has been done in the paper \cite{Belliard2015c} for quantum spin-1/2 chains.}:
\begin{corollary}
The left and right transfer matrix eigenstates associated to $\tau (\lambda
)\in \Sigma _{\mathcal{T}}$ admit the following Bethe ansatz like
representations:%
\begin{equation}
\left\langle \tau \right\vert =\left\langle \beta ,\omega \right\vert
\prod_{b=1}^{\mathsf{N}_{Q}}\mathcal{\hat{B}}_{-}(\lambda _{b}|\beta ),\text{
\ \ }|\tau \rangle =\prod_{b=1}^{\mathsf{N}_{Q}}\mathcal{\hat{B}}%
_{-}(\lambda _{b}|\beta )|\beta ,\bar{\omega}\rangle ,
\label{Bethe-like-eigenstates}
\end{equation}%
where the $\lambda _{b}$ (fixed up to the symmetries $\lambda _{b}\rightarrow
-\lambda _{b}$, $\lambda _{b}\rightarrow 1/\lambda _{b}$) for $b\in \{1,...,%
\mathsf{N}_{Q}\}$ are the zeros of $Q(\lambda )$ and we have imposed the
condition $\left( \ref{guage-alpha}\right) $ on the gauge parameters.
\end{corollary}
\begin{proof}
These identities follow from the polynomiality of the $Q$-functions, which
implies the following identity:%
\begin{align}
\prod_{a=1}^{\mathsf{N}}q_{\tau ,a}^{(h_{a})}& =\prod_{a=1}^{\mathsf{N}%
}Q(\zeta _{a}^{(h_{a})})=\prod_{a=1}^{\mathsf{N}}\prod_{b=1}^{\mathsf{N}%
_{Q}}((\zeta _{a}^{(h_{a})})^{2}+1/(\zeta _{a}^{(h_{a})})^{2}-\Lambda _{b}) \notag
\\
& =\left( -1\right) ^{\mathsf{N}\text{\thinspace }\mathsf{N}%
_{Q}}\prod_{a=1}^{\mathsf{N}}\prod_{b=1}^{\mathsf{N}_{Q}}(\Lambda
_{b}-((\zeta _{a}^{(h_{a})})^{2}+1/(\zeta _{a}^{(h_{a})})^{2}))=\left( -1\right) ^{%
\mathsf{N}\text{\thinspace }\mathsf{N}_{Q}}\prod_{b=1}^{\mathsf{N}_{Q}}\text{%
\textsc{\^{b}}}_{\text{\textbf{h}}}(\lambda _{b})
\end{align}%
where the \textsc{\^{b}}$_{\text{\textbf{h}}}(\lambda _{b})$ is the
eigenvalue of the operator $\mathcal{\hat{B}}_{-}(\lambda |\beta )$ and the
\begin{equation}
\Lambda _{b}=\lambda _{b}^{2}+1/\lambda _{b}^{2}
\end{equation}%
are the zeros of the $Q$-function as defined in $\left( \ref{Q-form}\right) $%
. Now we just have to compute the action of the monomial:%
\begin{equation}
\prod_{b=1}^{\mathsf{N}_{Q}}\mathcal{\hat{B}}_{-}(\lambda _{b}|\beta )
\end{equation}%
on the right state $\left( \ref{Ref-Sta-R}\right) $ and use that by
definition:%
\begin{equation}
\prod_{b=1}^{\mathsf{N}_{Q}}\mathcal{\hat{B}}_{-}(\lambda _{b}|\beta )|\beta
,h_{1},...,h_{\mathsf{N}}\rangle =|\beta ,h_{1},...,h_{\mathsf{N}}\rangle
\prod_{b=1}^{\mathsf{N}_{Q}}\text{\textsc{\^{b}}}_{\text{\textbf{h}}%
}(\lambda _{b})
\end{equation}%
to prove that the vector in $\left( \ref{Bethe-like-eigenstates}\right) $
coincides, up to a sign, with the vector $\left( \ref{OpenCyeigenT-r-D}%
\right) $ and so is the corresponding transfer matrix eigenvector;
similarly one shows that the covector in $\left( \ref%
{Bethe-like-eigenstates}\right) $ coincides with the covector $\left( \ref%
{OpenCyeigenT-l-D}\right) $.
\end{proof}
\section{Conclusions}
In this second article we have shown how to implement the SoV method to characterize the transfer matrix spectrum for integrable models associated to the Bazhanov-Stroganov quantum Lax operator with the most general integrable boundary conditions. For that purpose it was necessary to perform a gauge transformation so as to recast the problem in a form similar to the one studied in our first article, i.e., such that one of the boundary $K$-matrices becomes triangular after the gauge transformation. Let us stress that the separate basis was again designed as the (pseudo-)eigenvector basis of some gauged operator of the reflection algebra having simple spectrum. What remains to be done is the construction of integrable local cyclic Hamiltonians with appropriate boundary conditions, commuting with the boundary transfer matrices considered here. This amounts to using trace identities involving the fundamental $R$-matrix acting in the tensor product of two cyclic representations \cite{OpenCyBS90,Tarasov-1992,Tarasov-1993} and to constructing the associated $K$-matrices, hence also acting in these cyclic representations. The reflection equations will have to be written for arbitrary choices (and mixings) of the spin-1/2 and cyclic representations. Correspondingly, there will be compatibility conditions between the different $K$-matrices acting in these two different representations. We will address this question in a forthcoming article \cite{MNP2018b}.
\section*{Acknowledgements}
J. M. M. and G. N. are supported by CNRS and ENS de Lyon; B. P. is supported by ENS de Lyon and ENS Cachan.
\section*{Appendix}
\section{Optimization Trajectory}
This is a continuation of section $3.1$ in the main text. Here we show further experiments on other datasets, architectures and hyper-parameter settings.
The analysis of GD training for Resnet-56 on CIFAR-10, MLP on MNIST and VGG-11 on Tiny ImageNet is shown in figures \ref{fig:interpolation_resnet_gd_boom}, \ref{fig:interpolation_MLP_gd} and \ref{fig:interpolation_imagnet_gd} respectively. Similarly, the analysis of SGD training for Resnet-56 on the CIFAR-10 dataset with a batch size of 100 and learning rate 0.1 for epochs 1, 2, 25 and 100 is shown in figures \ref{fig:interpolation_resnet_epoch1}, \ref{fig:interpolation_resnet_epoch2}, \ref{fig:interpolation_resnet_epoch25} and \ref{fig:interpolation_resnet_epoch100} respectively. The analysis of SGD training for VGG-11 on CIFAR-10 with a batch size of 100 and learning rate 0.1 for epochs 2, 25 and 100 is shown in figures \ref{fig:interpolation_vgg_epoch2}, \ref{fig:interpolation_vgg_epoch25} and \ref{fig:interpolation_vgg_epoch100}. The analysis of SGD training for MLP on MNIST for epochs 1 and 2 is shown in figures \ref{fig:interpolation_MLP_epoch1} and \ref{fig:interpolation_MLP_epoch2}. The analysis of SGD training for VGG-11 on Tiny ImageNet for epoch 1 is shown in figure \ref{fig:interpolation_imagnet_epoch1}. We also conducted the same experiment and analysis with various batch sizes and learning rates for every architecture. Results for VGG-11 can be found in figures \ref{fig:inter_vgg_0.3_100}, \ref{fig:inter_vgg_0.2_100}, \ref{fig:inter_vgg_0.1_500} and \ref{fig:inter_vgg_0.1_1000}. Results for Resnet-56 can be found in figures \ref{fig:inter_resnet_0.7_100}, \ref{fig:inter_resnet_1_100}, \ref{fig:inter_resnet_1_500} and \ref{fig:inter_resnet_1_1000}. The observations and rules we discovered and described in section $3$ are consistent across all these experiments. In particular, for the interpolation of SGD for VGG-11 on Tiny ImageNet, the valley-like trajectory has an unusual shape, but even so, according to our quantitative evaluation there is no barrier between any two consecutive iterations.
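The barrier criterion behind these interpolation experiments can be illustrated with a minimal NumPy sketch (a hypothetical implementation; the function name, grid resolution, and toy losses are our assumptions, not the paper's code): the loss is evaluated along the straight line between two iterates, and a barrier is reported when the interpolated loss rises above both endpoint losses.

```python
import numpy as np

def barrier_height(loss_fn, theta_a, theta_b, num_points=21):
    """Height of the loss barrier along the straight line from theta_a to
    theta_b: max interpolated loss minus the larger endpoint loss (>= 0)."""
    alphas = np.linspace(0.0, 1.0, num_points)
    losses = np.array([loss_fn((1 - a) * theta_a + a * theta_b) for a in alphas])
    return max(0.0, losses.max() - max(losses[0], losses[-1]))

# A convex quadratic has no barrier between any two points ...
quad = lambda th: float(np.sum(th ** 2))
print(barrier_height(quad, np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # -> 0.0

# ... while a 1-D double well has a barrier of height 1 between its minima.
double_well = lambda th: float((th[0] ** 2 - 1.0) ** 2)
print(barrier_height(double_well, np.array([-1.0]), np.array([1.0])))  # -> 1.0
```

In this reading, a (numerically) zero value between $\theta_t$ and $\theta_{t+1}$ corresponds to the "no barrier between consecutive iterations" observation above.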
We track the spectral norm of the Hessian along with the validation accuracy while the model is being trained. This is shown in figure \ref{fig:resnet56_cifar10_hessian_valacc} for Resnet-56 trained on CIFAR-10.
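Tracking the spectral norm of the Hessian does not require materializing the Hessian; power iteration on Hessian-vector products suffices. Below is a minimal sketch in which an explicit matrix stands in for the autodiff-based product (in practice the product would come from a double-backward pass):

```python
import numpy as np

def spectral_norm(hvp, dim, n_iters=100, seed=0):
    """Power iteration: estimate the largest-magnitude eigenvalue of the
    Hessian using only Hessian-vector products (hvp)."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        hv = hvp(v)
        v = hv / np.linalg.norm(hv)
    return float(v @ hvp(v))  # Rayleigh quotient at the converged vector

# Stand-in Hessian with known spectrum; hvp is just a matrix-vector product.
H = np.diag([3.0, 1.0, 0.5])
lam_max = spectral_norm(lambda v: H @ v, dim=3)
```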
\section{Qualitative Roles of Learning Rate and Batch Size}
This is a continuation of section $3.2$ in the main text. In this section we show further experiments analyzing the different roles of the learning rate and the batch size during training, on various architectures and datasets. Figures \ref{fig:cos_resnet_cifar10}, \ref{fig:cos_mlp_mnist} and \ref{fig:cos_vgg_tiny} show the results for Resnet-56 on CIFAR-10, MLP on MNIST and VGG-11 on Tiny-ImageNet. In all of the experiments, training the model with a smaller batch size makes the angle between the gradients of two consecutive iterations larger, which means that for smaller batch sizes, instead of oscillating within the same region, the optimization travels farther along the valley, as we described in section $3.2$. For all architectures, changing the learning rate does not change the angles.
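The angle statistic used here can be sketched as follows; the gradient vectors below are toy placeholders, not gradients from our models:

```python
import numpy as np

def consecutive_gradient_cosines(grads):
    """Cosine of the angle between gradients of consecutive iterations."""
    cosines = []
    for g1, g2 in zip(grads[:-1], grads[1:]):
        cosines.append(float(g1 @ g2 /
                             (np.linalg.norm(g1) * np.linalg.norm(g2))))
    return cosines

# Gradients pointing the same way give cosine 1; a reversal gives -1.
g = np.array([1.0, 0.0])
cos = consecutive_gradient_cosines([g, g, -g])
```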
\begin{table*}
\small
\begin{center}
\caption{Average height of SGD above the valley floor across the iterations in one epoch, for different epochs during the training of VGG-11 and Resnet-56 using SGD on CIFAR-10 and VGG-11 on Tiny-ImageNet. Here the height at iteration $t$ is defined by $\frac{\mathcal{L}(\mathbf{\theta}_t) + \mathcal{L}(\mathbf{\theta}_{t+1}) - 2\mathcal{L}(\mathbf{\theta}_t^{\min})}{2}$. \label{tab:height_ref}}
\begin{tabular}{|c|c|c|c|c|}
\hline
VGG-11 on CIFAR-10&\textbf{Epoch 1} & \textbf{Epoch 10} & \textbf{Epoch 25}& \textbf{Epoch 100}\\
\hline
\textbf{LR 0.1}&0.0625$\pm$1.2e-3 & 0.0199$\pm$4.8e-4 & 0.0104$\pm$2.2e-5&0.0025$\pm$3.1e-6\\\hline
\textbf{LR 0.05}&0.0102$\pm$3.8e-5 & 0.0050$\pm$2.8e-5& 0.0035$\pm$1.7e-5 &0.0011$\pm$1.0e-6\\\hline
\hline
Resnet-56 on CIFAR-10&\textbf{Epoch 1} & \textbf{Epoch 10} & \textbf{Epoch 25}& \textbf{Epoch 100}\\
\hline
\textbf{LR 0.3}&0.0380$\pm$5.9e-4 & 0.0131$\pm$4.5e-4& 0.0094$\pm$1.3e-5&0.0017$\pm$1.0e-5\\\hline
\textbf{LR 0.15}&0.0084$\pm$5.2e-5& 0.0034$\pm$3.2e-5& 0.0020$\pm$7.2e-6&0.0013$\pm$6.7e-6\\\hline
\hline
VGG-11 on Tiny-ImageNet&\textbf{Epoch 1} & \textbf{Epoch 10} & \textbf{Epoch 25}& \textbf{Epoch 100}\\
\hline
\textbf{LR 0.5}&0.028$\pm$1.0e-3 & 0.213$\pm$1.5e-3& 0.187$\pm$1.9e-3&9.8e-5$\pm$2.0e-9\\\hline
\textbf{LR 0.1}&0.0039$\pm$5.2e-5& 0.163$\pm$2.64e-5& 0.116$\pm$0.013&1.1e-5$\pm$3.6e-11\\\hline
\end{tabular}
\end{center}
\vspace{-20pt}
\end{table*}
\section{Learning Rate Schedule}
\label{section_lr_sch}
We observe from table \ref{tab:height_vgg} that the optimization oscillates at a lower height as training progresses (likely because SGD finds flatter regions as training progresses, see figure \ref{fig:vgg11_cifar10_hessian_valacc}). As we discussed based on figure \ref{fig:interpolation_vgg11_cifar10_epoch1}, the floor of the DNN valley is highly non-linear, with many barriers. Based on these two observations, it seems it should be advantageous for SGD to maintain a large height above the floor of the valley to facilitate further exploration without being hindered by barriers, as this may allow the optimization to find flatter regions. Hence, this line of thought suggests that we should increase the learning rate as training progresses (of course, it eventually needs to be annealed for convergence to a minimum). \citet{smith2017cyclical, smith2017super} propose a cyclical learning rate (CLR) schedule which partially has this property. It involves linearly increasing the learning rate every iteration up to a certain number of iterations, then linearly decreasing it in the same manner, and repeating this process in cycles. We now empirically show that multiple cycles of CLR are redundant, and that simply increasing the learning rate until a certain point and then annealing it leads to similar or better performance. Specifically, to rule out the need for cycles, as a null hypothesis we increase the learning rate as in the first cycle of CLR, then keep it flat, then linearly anneal it (we call this the \textit{trapezoid schedule}). For fairness, we also plot the widely used step-wise learning rate annealing schedule. \textit{In our experiments, we find that methods which increase the learning rate during training may be considered slightly better.} The learning curves are shown in figures \ref{fig:cyclic} in the main text and \ref{fig:clr_resnet} in the appendix (with other details).
We leave an extensive study of learning rate schedule design based on the proposed guideline as future work.
We run the same experiment as described above on Resnet-56 with CIFAR-10, and the same pattern holds for CLR, the trapezoid schedule and SGD with stepwise annealing. Plots can be seen in figure \ref{fig:clr_resnet}. All schedules are tuned to their best performance with a hyperparameter grid search. For both Resnet-56 and VGG-11, we use batch size 100 for all models. The learning rate schedules are apparent from the figures themselves.
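A minimal sketch of such a trapezoid schedule follows; the phase fractions and learning-rate values are illustrative placeholders, not our tuned settings:

```python
def trapezoid_lr(step, total_steps, base_lr=0.01, peak_lr=0.1,
                 warm_frac=0.25, anneal_frac=0.25):
    """Trapezoid schedule: linearly increase the learning rate, hold it
    flat, then linearly anneal it back down (values are illustrative)."""
    warm_end = warm_frac * total_steps
    anneal_start = (1.0 - anneal_frac) * total_steps
    if step < warm_end:                      # linear warm-up phase
        return base_lr + (peak_lr - base_lr) * step / warm_end
    if step < anneal_start:                  # flat top of the trapezoid
        return peak_lr
    # linear annealing down to base_lr
    frac = (step - anneal_start) / (total_steps - anneal_start)
    return peak_lr - (peak_lr - base_lr) * frac
```

The schedule would be queried once per iteration to set the optimizer's learning rate.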
\section{Importance of SGD Noise Structure}
\label{appendix_hessian_covariance}
Here we derive in detail the relation between the Hessian and the gradient covariance, using the fact that the loss is the negative log-likelihood $\mathcal{L}_i(\theta) = -\log(p_i(\theta))$. Note that for this particular loss function, $\frac{\partial \mathcal{L}_i(\theta)}{\partial p_i(\theta)} = -\frac{1}{p_i(\theta)}$ and $\frac{\partial^2 \mathcal{L}_i(\theta)}{\partial p_i(\theta)^2} = \frac{1}{p_i^2(\theta)}$, which yields $\frac{\partial^2 \mathcal{L}_i(\theta)}{\partial p_i(\theta)^2} = \left(\frac{\partial \mathcal{L}_i(\theta)}{\partial p_i(\theta)}\right)^2$.
\begin{align}
\mathbf{H}(\mathbf{\theta}) &= \frac{1}{N} \sum_{i=1}^{N} \frac{\partial^2 \mathcal{L}_i(\theta)}{\partial \theta^2} \\
&= \frac{1}{N} \sum_{i=1}^{N} \frac{\partial }{\partial \theta} \left( \frac{\partial \mathcal{L}_i(\theta)}{\partial p_i(\theta)} \cdot \frac{{\partial p_i(\theta)}}{\partial \theta} \right) \\
&= \frac{1}{N} \sum_{i=1}^{N} \Big( \frac{\partial^2 \mathcal{L}_i(\theta)}{\partial p_i(\theta)^2} \cdot \frac{\partial p_i(\theta)}{\partial \theta} \frac{\partial p_i(\theta)}{\partial \theta}^{T} \nonumber \\
&\qquad + \frac{\partial \mathcal{L}_i(\theta)}{\partial p_i(\theta)} \cdot \frac{\partial^2 p_i(\theta)}{\partial \theta^2} \Big)\\
&= \frac{1}{N} \sum_{i=1}^{N} \Big( \left( \frac{\partial \mathcal{L}_i(\theta)}{\partial p_i(\theta)} \right)^2 \cdot \frac{\partial p_i(\theta)}{\partial \theta} \frac{\partial p_i(\theta)}{\partial \theta}^{T} \nonumber \\
&\qquad + \frac{\partial \mathcal{L}_i(\theta)}{\partial p_i(\theta)} \cdot \frac{\partial^2 p_i(\theta)}{\partial \theta^2} \Big)\\
&= \frac{1}{N} \sum_{i=1}^{N} \Big( \frac{\partial \mathcal{L}_i(\theta)}{\partial \theta} \frac{\partial \mathcal{L}_i(\theta)}{\partial \theta}^{T} + \frac{\partial \mathcal{L}_i(\theta)}{\partial p_i(\theta)} \cdot \frac{\partial^2 p_i(\theta)}{\partial \theta^2} \Big)\\
&= \mathbf{C}(\theta) + {\mathbf{\bar{g}}(\theta)}{\mathbf{\bar{g}}(\theta)}^T + \frac{1}{N} \sum_{i=1}^{N} \frac{\partial \mathcal{L}_i(\theta)}{\partial p_i(\theta)} \cdot \frac{\partial^2 p_i(\theta)}{\partial \theta^2}
\end{align}
where $\mathbf{\bar{g}}(\theta) = \frac{1}{N} \sum_{i=1}^{N} \frac{\partial \mathcal{L}_i(\theta)}{\partial \theta}$.
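This identity can be sanity-checked numerically. Below is a sketch for a hypothetical one-parameter model $p_i(\theta) = \sigma(x_i \theta)$ with all labels taken positive (so $\mathcal{L}_i = -\log p_i$), in which $\mathbf{C}(\theta)$ reduces to the scalar variance of the per-sample gradients:

```python
import numpy as np

rng = np.random.default_rng(0)
N, theta = 50, 0.7
x = rng.normal(size=N)

p = 1.0 / (1.0 + np.exp(-x * theta))      # p_i(theta) = sigmoid(x_i * theta)
g = -x * (1.0 - p)                         # dL_i/dtheta for L_i = -log p_i
h = x**2 * p * (1.0 - p)                   # d^2 L_i / dtheta^2

H = h.mean()                               # Hessian (scalar case)
g_bar = g.mean()                           # mean gradient
C = np.mean(g**2) - g_bar**2               # gradient covariance (variance)
d2p = x**2 * p * (1.0 - p) * (1.0 - 2*p)   # p_i''(theta)
residual = np.mean((-1.0 / p) * d2p)       # (1/N) sum (dL_i/dp_i) p_i''

# The decomposition H = C + g_bar^2 + residual holds up to float error.
assert np.isclose(H, C + g_bar**2 + residual)
```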
\section{Discussion}
In the main text, we discuss convergence in the quadratic setting, which depends on the value of the learning rate relative to the largest eigenvalue of the Hessian. The convergence behavior in this setting is visualized in figure \ref{fig:rates_of_gd_on_quadratics}.
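As a small illustration (the eigenvalues and step size below are hypothetical, not values from our experiments), gradient descent on a diagonal quadratic contracts each eigen-direction by the factor $|1 - \eta\lambda_i|$ per step: a sharp direction with $\eta\lambda_i > 1$ underdamps (the sign flips every step while the magnitude shrinks), whereas a flat direction overdamps (monotone decay).

```python
import numpy as np

# GD on 0.5 * theta^T diag(lams) theta: each coordinate evolves
# independently as theta_i <- (1 - eta * lam_i) * theta_i.
lams = np.array([12.0, 0.5])   # a sharp and a flat direction
eta = 0.125                    # 1 < eta * lam_max < 2: oscillates but converges
theta = np.array([1.0, 1.0])
traj = [theta.copy()]
for _ in range(5):
    theta = theta - eta * lams * theta
    traj.append(theta.copy())
traj = np.array(traj)

# Sharp direction: rate |1 - 1.5| = 0.5 with alternating sign (underdamped).
# Flat direction:  rate |1 - 0.0625| = 0.9375, monotone decay (overdamped).
```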
\twocolumn
\begin{figure}[t]
\centering
\includegraphics[scale=0.6,trim=0.in 0.1in 0.in 0.in,clip]{figs/resnet_cifar10_spectral_norm.png}
\caption{Max eigenvalue (spectral norm) of the Hessian and validation accuracy for Resnet-56 trained on CIFAR-10 using a fixed learning rate of 0.3 and batch size 100. The spectral norm roughly decreases with training but starts increasing slightly towards the end. Similarly, validation accuracy roughly improves throughout training but drops towards the end.}
\label{fig:resnet56_cifar10_hessian_valacc}
\end{figure}
\begin{figure}[!ht]
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/resnet_trainLoss.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/resnet_validAcc.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/vgg_lr.png}
\end{subfigure}\hspace{0.1\textwidth}
\caption{Plots for Resnet-56 on CIFAR-10 trained using Cyclic learning rate (CLR), SGD with stepsize annealing, and trapezoid schedule. Cycles in the CLR schedule are redundant, which is shown by the trapezoid schedule.}
\label{fig:clr_resnet}
\end{figure}
\begin{figure}[!ht]
\vspace{-5pt}
\centering
\includegraphics[scale=0.6,trim=0.in 0.37in 0.in 0.13in,clip]{figs/vgg_trainLoss.png}
\includegraphics[scale=0.6,trim=0.in 0.37in 0.in 0.13in,clip]{figs/vgg_validAcc.png}
\includegraphics[scale=0.6,trim=0.in 0.17in 0.in 0.13in,clip]{figs/vgg_lr.png}
\captionof{figure}{Plots for VGG-11 on CIFAR-10 trained using Cyclic learning rate (CLR), SGD with stepwise annealing, and trapezoid schedule. Cycles in the CLR schedule are redundant, which is shown by the trapezoid schedule.}
\label{fig:cyclic}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.0in 0.in 0.in,clip]{figs/barrier.png}
\caption{Number of barriers found in the training loss interpolation for every epoch (450 iterations) for VGG-11 on CIFAR-10. We say a barrier exists during a training update step if there exists a point between the parameters before and after the update with a loss value higher than the loss at either endpoint. Note that even for these barriers, their heights (defined by $\frac{\mathcal{L}(\mathbf{\theta}_t) + \mathcal{L}(\mathbf{\theta}_{t+1}) - 2\mathcal{L}(\mathbf{\theta}_t^{\min})}{2}$) are substantially smaller than the value of the loss at the corresponding iterations (not shown here), meaning they are not significant barriers.}
\label{fig:barrier}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.1in 0.in 0.in,clip]{figs/resnet_gd_epoch1.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.3in 0.in 0.in,clip]{figs/resnet_gd_epoch1_angle.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.3in 0.in 0.in,clip]{figs/resnet_gd_epoch1_dist.png}
\end{subfigure}\hspace{0.1\textwidth}
\caption{Plots for Resnet-56 Epoch 1 trained using full batch \textbf{Gradient Descent (GD)} on CIFAR-10. The descriptions of the plots are same as in figure \ref{fig:loss_interpolation_gd_vgg11_cifar10_boom}.}
\label{fig:interpolation_resnet_gd_boom}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.3in 0.in 0.in,clip]{figs/resnet_sgd_epoch1.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.3in 0.in 0.in,clip]{figs/interpolation_resnet_epoch1_angle.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.3in 0.in 0.in,clip]{figs/interpolation_resnet_epoch1_dist.png}
\end{subfigure}\hspace{0.1\textwidth}
\caption{Plots for Resnet-56 Epoch 1 trained using \textbf{SGD} on CIFAR-10. The descriptions of the plots are same as in figure \ref{fig:loss_interpolation_gd_vgg11_cifar10_boom}.}
\label{fig:interpolation_resnet_epoch1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.3in 0.in 0.in,clip]{figs/resnet_sgd_epoch2.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.3in 0.in 0.in,clip]{figs/interpolation_resnet_epoch2_angle.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.1in 0.in 0.in,clip]{figs/interpolation_resnet_epoch2_dist.png}
\end{subfigure}\hspace{0.1\textwidth}
\caption{Plots for Resnet-56 Epoch 2 trained using \textbf{SGD} on CIFAR-10. The descriptions of the plots are same as in figure \ref{fig:loss_interpolation_gd_vgg11_cifar10_boom}.}
\label{fig:interpolation_resnet_epoch2}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.3in 0.in 0.in,clip]{figs/resnet_sgd_epoch25.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.3in 0.in 0.in,clip]{figs/interpolation_resnet_epoch25_angle.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.1in 0.in 0.in,clip]{figs/interpolation_resnet_epoch25_dist.png}
\end{subfigure}\hspace{0.1\textwidth}
\caption{Plots for Resnet-56 Epoch 25 trained using \textbf{SGD} on CIFAR-10. The descriptions of the plots are same as in figure \ref{fig:loss_interpolation_gd_vgg11_cifar10_boom}.}
\label{fig:interpolation_resnet_epoch25}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.3in 0.in 0.in,clip]{figs/resnet_sgd_epoch100.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.3in 0.in 0.in,clip]{figs/interpolation_resnet_epoch100_angle.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.1in 0.in 0.in,clip]{figs/interpolation_resnet_epoch100_dist.png}
\end{subfigure}\hspace{0.1\textwidth}
\caption{Plots for Resnet-56 Epoch 100 trained using \textbf{SGD} on CIFAR-10. The descriptions of the plots are same as in figure \ref{fig:loss_interpolation_gd_vgg11_cifar10_boom}.}
\label{fig:interpolation_resnet_epoch100}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.3in 0.in 0.in,clip]{figs/vgg_sgd_epoch2.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.3in 0.in 0.in,clip]{figs/interpolation_vgg_epoch2_angle.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.1in 0.in 0.in,clip]{figs/interpolation_vgg_epoch2_dist.png}
\end{subfigure}\hspace{0.1\textwidth}
\caption{Plots for VGG-11 Epoch 2 trained using \textbf{SGD} on CIFAR-10. The descriptions of the plots are same as in figure \ref{fig:loss_interpolation_gd_vgg11_cifar10_boom}.}
\label{fig:interpolation_vgg_epoch2}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.35in 0.in 0.in,clip]{figs/vgg_sgd_epoch25.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.35in 0.in 0.15in,clip]{figs/interpolation_vgg_epoch25_angle.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.1in 0.in 0.15in,clip]{figs/interpolation_vgg_epoch25_dist.png}
\end{subfigure}\hspace{0.1\textwidth}
\caption{Plots for VGG-11 Epoch 25 trained using \textbf{SGD} on CIFAR-10. The descriptions of the plots are same as in figure \ref{fig:loss_interpolation_gd_vgg11_cifar10_boom}.}
\label{fig:interpolation_vgg_epoch25}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.35in 0.in 0.in,clip]{figs/vgg_sgd_epoch100.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.35in 0.in 0.15in,clip]{figs/interpolation_vgg_epoch100_angle.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.1in 0.in 0.15in,clip]{figs/interpolation_vgg_epoch100_dist.png}
\end{subfigure}\hspace{0.1\textwidth}
\caption{Plots for VGG-11 Epoch 100 trained using \textbf{SGD} on CIFAR-10. The descriptions of the plots are same as in figure \ref{fig:loss_interpolation_gd_vgg11_cifar10_boom}.}
\label{fig:interpolation_vgg_epoch100}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.35in 0.in 0.in,clip]{figs/mlp_gd_epoch1.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.35in 0.in 0.15in,clip]{figs/mlp_gd_epoch1_angle.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.1in 0.in 0.15in,clip]{figs/mlp_gd_epoch1_dist.png}
\end{subfigure}\hspace{0.1\textwidth}
\caption{Plots for MLP Epoch 1 trained using full batch \textbf{Gradient Descent (GD)} on MNIST. The descriptions of the plots are same as in figure \ref{fig:loss_interpolation_gd_vgg11_cifar10_boom}.}
\label{fig:interpolation_MLP_gd}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.3in 0.in 0.in,clip]{figs/mlp_sgd_epoch1.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.3in 0.in 0.in,clip]{figs/interpolation_mlp_epoch1_angle.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.1in 0.in 0.in,clip]{figs/interpolation_mlp_epoch1_dist.png}
\end{subfigure}\hspace{0.1\textwidth}
\caption{Plots for MLP Epoch 1 trained using \textbf{SGD} on MNIST. The descriptions of the plots are same as in figure \ref{fig:loss_interpolation_gd_vgg11_cifar10_boom}.}
\label{fig:interpolation_MLP_epoch1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.3in 0.in 0.in,clip]{figs/mlp_sgd_epoch2.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.3in 0.in 0.in,clip]{figs/interpolation_mlp_epoch2_angle.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.1in 0.in 0.in,clip]{figs/interpolation_mlp_epoch2_dist.png}
\end{subfigure}\hspace{0.1\textwidth}
\caption{Plots for MLP Epoch 2 trained using \textbf{SGD} on MNIST. The descriptions of the plots are same as in figure \ref{fig:loss_interpolation_gd_vgg11_cifar10_boom}.}
\label{fig:interpolation_MLP_epoch2}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.35in 0.in 0.in,clip]{figs/imagnet_gd_epoch1.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.35in 0.in 0.15in,clip]{figs/imagnet_gd_epoch1_angle.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.1in 0.in 0.15in,clip]{figs/imagnet_gd_epoch1_dist.png}
\end{subfigure}\hspace{0.1\textwidth}
\caption{Plots for VGG-11 Epoch 1 trained using full batch \textbf{Gradient Descent (GD)} on Tiny-ImageNet. The descriptions of the plots are same as in figure \ref{fig:loss_interpolation_gd_vgg11_cifar10_boom}.}
\label{fig:interpolation_imagnet_gd}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.35in 0.in 0.in,clip]{figs/imagnet_sgd_epoch1.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.35in 0.in 0.15in,clip]{figs/interpolation_imagenet_epoch1_angle.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.1in 0.in 0.15in,clip]{figs/interpolation_imagenet_epoch1_dist.png}
\end{subfigure}\hspace{0.1\textwidth}
\caption{Plots for VGG-11 Epoch 1 trained using \textbf{SGD} on Tiny-ImageNet. The descriptions of the plots are same as in figure \ref{fig:loss_interpolation_gd_vgg11_cifar10_boom}.}
\label{fig:interpolation_imagnet_epoch1}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/cifar10_vgg_lr0_3_bs100_interp_epoch1.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/cifar10_vgg_lr0_3_bs100_angle_epoch1.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.1in 0.in 0.in,clip]{figs/cifar10_vgg_lr0_3_bs100_dist_epoch1.png}
\end{subfigure}
\caption{Plots for VGG-11 Epoch 1 trained using learning rate 0.3 and batch size 100 on CIFAR-10.}
\label{fig:inter_vgg_0.3_100}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/cifar10_vgg_lr0_2_bs100_interp_epoch1.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/cifar10_vgg_lr0_2_bs100_angle_epoch1.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.1in 0.in 0.in,clip]{figs/cifar10_vgg_lr0_2_bs100_dist_epoch1.png}
\end{subfigure}
\caption{Plots for VGG-11 Epoch 1 trained using learning rate 0.2 and batch size 100 on CIFAR-10.}
\label{fig:inter_vgg_0.2_100}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/cifar10_vgg_lr0_1_bs500_interp_epoch1.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/cifar10_vgg_lr0_1_bs500_angle_epoch1.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.1in 0.in 0.in,clip]{figs/cifar10_vgg_lr0_1_bs500_dist_epoch1.png}
\end{subfigure}
\caption{Plots for VGG-11 Epoch 1 trained using learning rate 0.1 and batch size 500 on CIFAR-10.}
\label{fig:inter_vgg_0.1_500}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/cifar10_vgg_lr0_1_bs1000_interp_epoch1.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/cifar10_vgg_lr0_1_bs1000_angle_epoch1.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.1in 0.in 0.in,clip]{figs/cifar10_vgg_lr0_1_bs1000_dist_epoch1.png}
\end{subfigure}
\caption{Plots for VGG-11 Epoch 1 trained using learning rate 0.1 and batch size 1000 on CIFAR-10.}
\label{fig:inter_vgg_0.1_1000}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/cifar10_resnet_lr0_7_bs100_interp_epoch1.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/cifar10_resnet_lr0_7_bs100_angle_epoch1.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.1in 0.in 0.in,clip]{figs/cifar10_resnet_lr0_7_bs100_dist_epoch1.png}
\end{subfigure}
\caption{Plots for Resnet-56 Epoch 1 trained using learning rate 0.7 and batch size 100 on CIFAR-10.}
\label{fig:inter_resnet_0.7_100}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/cifar10_resnet_lr1_bs100_interp_epoch1.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/cifar10_resnet_lr1_bs100_angle_epoch1.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.1in 0.in 0.in,clip]{figs/cifar10_resnet_lr1_bs100_dist_epoch1.png}
\end{subfigure}
\caption{Plots for Resnet-56 Epoch 1 trained using learning rate 1 and batch size 100 on CIFAR-10.}
\label{fig:inter_resnet_1_100}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/cifar10_resnet_lr1_bs500_interp_epoch1.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/cifar10_resnet_lr1_bs500_angle_epoch1.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.1in 0.in 0.in,clip]{figs/cifar10_resnet_lr1_bs500_dist_epoch1.png}
\end{subfigure}
\caption{Plots for Resnet-56 Epoch 1 trained using learning rate 1 and batch size 500 on CIFAR-10.}
\vspace{-15pt}
\label{fig:inter_resnet_1_500}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/cifar10_resnet_lr1_bs1000_interp_epoch1.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/cifar10_resnet_lr1_bs1000_angle_epoch1.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.1in 0.in 0.in,clip]{figs/cifar10_resnet_lr1_bs1000_dist_epoch1.png}
\end{subfigure}
\caption{Plots for Resnet-56 Epoch 1 trained using learning rate 1 and batch size 1000 on CIFAR-10.}
\vspace{-15pt}
\label{fig:inter_resnet_1_1000}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/cos_resnet_cifar10_lr.png}
\end{subfigure}
\vfill
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/cos_resnet_cifar10_bs.png}
\end{subfigure}
\caption{Changing the batch size changes the cosine of the angle between consecutive gradients, while changing the learning rate does not have any significant effect on the cosine. This shows that batch size has a qualitatively different role compared with the learning rate. Note that the curves are smoothed for visual clarity. \label{fig:cos_resnet_cifar10}}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/cos_mlp_mnist_lr.png}
\end{subfigure}
\vfill
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/cos_mlp_mnist_bs.png}
\end{subfigure}
\caption{Changing the batch size changes the cosine of the angle between consecutive gradients, while changing the learning rate does not have any significant effect on the cosine. This shows that batch size has a qualitatively different role compared with the learning rate. Note that the curves are smoothed for visual clarity. \label{fig:cos_mlp_mnist}}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/cos_vgg11_timagenet_lr.png}
\end{subfigure}
\vfill
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.in 0.35in 0.in 0.in,clip]{figs/cos_vgg11_timagenet_bs.png}
\end{subfigure}
\caption{Changing the batch size changes the cosine of the angle between consecutive gradients, while changing the learning rate does not have any significant effect on the cosine. This shows that batch size has a qualitatively different role compared with the learning rate. Note that the curves are smoothed for visual clarity. \label{fig:cos_vgg_tiny}}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.55,trim=0.in 0.35in 0.in 0.in,clip]{figs/resnet_isotropic_trainLoss_new.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.55,trim=0.in 0.35in 0.in 0.in,clip]{figs/resnet_isotropic_validacc_new.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.55,trim=0.in 0.35in 0.in 0.in,clip]{figs/resnet_isotropic_angle_new.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.55,trim=0.in 0.35in 0.in 0.in,clip]{figs/resnet_isotropic_dist_new.png}
\end{subfigure}\hspace{0.1\textwidth}
\caption{Plots for Resnet-56 trained by GD (without noise) and by GD with artificial isotropic noise sampled from a Gaussian distribution with different variances. Models trained using GD with added isotropic noise get stuck in terms of training loss and have worse validation performance compared with the model trained with GD.}
\label{fig:resnet_isotropic}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.55,trim=0.in 0.35in 0.in 0.in,clip]{figs/mlp_isotropic_trainLoss_new.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.55,trim=0.in 0.35in 0.in 0.in,clip]{figs/mlp_isotropic_validacc_new.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.55,trim=0.in 0.35in 0.in 0.in,clip]{figs/mlp_isotropic_angle_new.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[t]{1\columnwidth}
\centering
\includegraphics[scale=0.55,trim=0.in 0.35in 0.in 0.in,clip]{figs/mlp_isotropic_dist_new.png}
\end{subfigure}\hspace{0.1\textwidth}
\caption{Plots for MLP trained by GD (without noise) and by GD with artificial isotropic noise sampled from a Gaussian distribution with different variances. Models trained using GD with added isotropic noise get stuck in terms of training loss and have worse validation performance compared with the model trained with GD.}
\label{fig:mlp_isotropic}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth ,trim=0.in 0.3in 0.in 0.in,clip]{figs/convergence_rate.png}
\caption{Graph of $|\lambda_i \eta - 1|$: convergence rates of gradient descent on $\lambda_{\min}$-strongly
convex and $\lambda_{\max}$-smooth quadratic surfaces. The orange-shaded area contains the possible graphs of the other eigenvalues of the Hessian. In the range of learning rates shaded in blue, the trajectory underdamps in the direction of the maximum eigenvalue. For a certain learning rate, while the trajectory oscillates in the direction of the maximum eigenvalue (green diamond), it overdamps in all the others (e.g., blue diamond, a `flat' direction).}
\label{fig:rates_of_gd_on_quadratics}
\vspace{-15pt}
\end{figure}
\section{Introduction}
Deep neural networks (DNNs) trained with algorithms based on stochastic gradient descent (SGD) are able to tune the parameters of massively over-parametrized models to reach a small training loss with good generalization, despite the existence of numerous bad minima. This is especially surprising given that DNNs are capable of overfitting random data with almost zero training loss \cite{zhang2016understanding}. This behavior has been studied by \citet{arpit2017closer, advani2017high}, who suggest that deep networks generalize well because they tend to fit simple functions over the training data before overfitting noise. It has been further argued that model parameters in regions of flatter minima generalize better \cite{hochreiter1997flat, keskar2016large, wu2017towards}, and that SGD finds such minima when used with a small batch size and a large learning rate \cite{ keskar2016large, jastrzkebski2017three, smith2017don, chaudhari2017stochastic}. These recent papers frame SGD as a stochastic differential equation (SDE) under the assumption of small learning rates. A main result of these papers is that the SDE dynamics remain the same as long as the ratio of learning rate to batch size is unchanged. \textit{However, this view is limited by its assumption and ignores the importance of the structure of SGD noise (i.e., the gradient covariance) and the qualitative roles of learning rate and batch size, which remain relatively obscure.}
On the other hand, various variants of SGD have been proposed for optimizing deep networks with the goal of addressing some of the common problems found in high dimensional non-convex loss landscapes (e.g., escaping saddle points, achieving faster loss descent). Some of the popular algorithms used for training deep networks apart from vanilla SGD are SGD with momentum \cite{polyak1964some, sutskever2013importance}, AdaDelta \cite{zeiler2012adadelta}, RMSProp \cite{tieleman2012lecture}, and Adam \cite{kingma2014adam}. However, for any of these methods there is currently little theory proving that they help improve generalization in DNNs (which by itself is not yet well understood), although there have been some notable efforts (e.g., \citet{hardt2015train,kawaguchi2017generalization}). \textit{This raises the question of whether optimization algorithms that are designed with the goal of solving the aforementioned high dimensional problems also help in finding minima that generalize well, or put differently, what attributes allow optimization algorithms to find such good minima in the non-convex setting.}
We take a step towards answering the above two questions in this paper for SGD (without momentum) through qualitative experiments. The main tool we use for studying the DNN loss surface along SGD's path is to interpolate the loss surface between parameters before and after each training update and track various metrics. Our findings about SGD's trajectory can be summarized as follows:
1. We observe that the loss interpolation between parameters before and after each \textit{iteration's} update is roughly convex with a minimum (\textit{valley floor}) in between. Thus, we deduce that SGD bounces off the walls of a \textit{valley-like structure} at a height above the floor.
2. Learning rate controls the height at which SGD bounces above the valley floor while batch size controls gradient stochasticity which facilitates exploration (visible from larger parameter distance from initialization for small batch-size). In this way, learning rate and batch size exhibit different qualitative roles in SGD dynamics.\footnote{This implies that except when using a reasonably small learning rate (which would make the SDE approximation hold), the effect of small batch size with a certain learning rate cannot be achieved by using a large batch size with a proportionally large learning rate (observed by \citet{goyal2017accurate}).}
3. The valley floor along SGD's path has many ups and downs (barriers) which may hinder exploration. Thus using a large learning rate helps avoid encountering barriers along SGD's path by maintaining a large height above valley floor (thus moving over the barriers instead of crossing\footnote{A \textbf{barrier is crossed} when we see a point in the parameter space interpolated between the points just before and after an update step, such that the loss at the barrier point is higher than the loss at both the other points.} them).
Experiments are conducted on multiple data sets, architectures and hyper-parameter settings. The findings mentioned above hold true on all of them. We further find that stochasticity in SGD induced by mini-batches is needed both for better optimization and generalization. Conversely, artificially added isotropic noise in the absence of mini-batch induced stochasticity is bad for DNN optimization. We also discuss some striking similarities between our empirical findings about SGD's trajectory in DNNs and classical optimization theory in the quadratic setting.
\vspace{-5pt}
\section{Background and Related Work}
Various algorithms have been proposed for optimizing deep neural networks, each designed from the viewpoint of tackling a particular high dimensional optimization problem: oscillation during training (SGD with momentum \cite{polyak1964some}), oscillations around minima (Nesterov momentum \cite{nesterov1983method, sutskever2013importance}), saddle points \cite{dauphin2014identifying}, automatic decay of the learning rate (AdaDelta \cite{zeiler2012adadelta}, RMSProp \cite{tieleman2012lecture} and ADAM \cite{kingma2014adam}), etc.
However, there is currently insufficient theory to explain which kinds of minima generalize better, although it has been observed empirically that wider minima (quantifiable by a low Hessian norm) tend to generalize better \cite{ keskar2016large, wu2017towards, jastrzkebski2017three} due to their low complexity, and are more likely to be reached under random initialization given their larger volume \cite{wu2017towards}.
\textit{This argument raises the question of whether the intuitions behind the designs of the various optimization algorithms are really the reasons behind their success in deep learning or there are other underlying mechanisms that make them successful.}
To understand this aspect better, a number of (mostly recent) papers study SGD as a stochastic differential process \cite{kushner2003stochastic, mandt2017stochastic, chaudhari2017stochastic, smith2017understanding, jastrzkebski2017three, li2015stochastic} under the assumption (among others) that the learning rate is reasonably small. Broadly, these papers show that the stochastic fluctuation of the stochastic differential equation simulated by SGD is governed by the ratio of learning rate to batch size. Hence, according to this theoretical framework, the training dynamics of SGD should remain roughly identical when changing the learning rate and batch size by the same factor. However, given that DNNs (especially Resnet \cite{he2016deep} like architectures) are often trained with quite large learning rates, the small learning rate assumption may be a pitfall of this theoretical framework\footnote{For instance, \citet{goyal2017accurate} find that increasing the learning rate linearly with batch size helps up to a certain point but breaks down for very large learning rates.}. The theory is nonetheless useful since learning rates do attain small values during training due to annealing or adaptive scheduling, so this framework may indeed apply during parts of training. \textit{In this paper we attempt to go beyond these analyses and study the different qualitative roles of the noise induced by a large learning rate versus the noise induced by a small batch size.}
\begin{wrapfigure}{r}{0.48\textwidth}
\vspace{-10pt}
\begin{subfigure}{0.45\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.15in 0.35in 0.in 0.1in,clip]{figs/vgg_gd_epoch1.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}[t]{0.45\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.15in 0.35in 0.in 0.in,clip]{figs/vgg_gd_epoch1_angle.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}{0.45\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.15in 0.1in 0.in 0.in,clip]{figs/vgg_gd_epoch1_dist.png}
\end{subfigure}
\caption{Plots for VGG-11 Epoch 1 trained using full batch \textbf{Gradient Descent} (GD) on CIFAR-10. \textbf{Top}: Training loss for the first $40$ iterations of training. Between the training loss at every consecutive iteration (vertical gray lines), we uniformly sample $10$ points between the parameters before and after a training update and calculate the loss at these points. Thus we take a slice of the loss surface between two iterations. These loss values are plotted between every consecutive training loss value from training updates. The dashed orange line connects the minimum of the loss interpolation between consecutive iterations (this minimum denotes the valley floor along the interpolation). \textbf{Middle}: Cosine of the angle between gradients from two consecutive iterations. \textbf{Bottom}: Parameter distance from initialization. \textbf{Gist}: The loss interpolation between consecutive iterations has a minimum for iterations where the cosine is highly negative (close to $-1$ after around $20$ iterations, meaning the consecutive gradients are almost along opposite directions), suggesting the optimization is oscillating along the walls of a valley-like structure. The valley floor decreases monotonically. \label{fig:loss_interpolation_gd_vgg11_cifar10_boom}}
\vspace{-30pt}
\end{wrapfigure}
There has also been work that considers SGD as a diffusion process, where SGD is running a Brownian motion in the parameter space. \citet{li2017batch} hypothesize this behavior of SGD and theoretically show that this diffusion process would allow SGD to cross barriers and thus escape sharp local minima. The authors use this theoretical result to support the findings of \citet{keskar2016large}, who find that SGD with small mini-batches finds wider minima. \citet{hoffer2017train} on the other hand make a similar hypothesis based on the evidence that the distance moved by SGD from initialization resembles a diffusion process, and make a similar claim about SGD crossing barriers during training. \textit{Contrary to these claims, we find that interpolating the loss surface traversed by SGD on a per iteration basis suggests SGD almost never crosses any significant barriers for most of the training.}
There is also a long list of work towards understanding the loss surface geometry of DNNs from a theoretical standpoint. \citet{dotsenko1995introduction, amit1985spin, choromanska2015loss} show that under certain assumptions, the DNN loss landscape can be modeled by a spherical spin glass model which is well studied in terms of its critical points. \citet{safran2016quality} show that under certain mild assumptions,
the initialization is likely to be such that there exists a continuous monotonically decreasing path from the initial point to the global minimum. \citet{freeman2016topology} theoretically show that for DNNs with rectified linear units (ReLU), the level sets of the loss surface become more connected as network over-parametrization increases. This has also been justified by \citet{sagun2017empirical}, who show that the Hessian of deep ReLU networks is degenerate when the network is over-parametrized, and hence the loss surface is flat along such degenerate directions. \citet{goodfellow2014qualitatively} empirically show that the convex interpolation of the loss surface from the initialization to the final parameters found by optimization algorithms does not cross any significant barriers, and that the landscape of the loss surface near SGD's trajectory has a valley-like 2D projection. \textit{Broadly these studies analyze DNN loss surfaces (either theoretically or empirically) in isolation from the optimization dynamics.}
In our work we do not study the loss surface in isolation, but rather analyze it through the lens of SGD. In other words, we study the DNN loss surface along the trajectory of SGD and track various metrics while doing so, from which we deduce both how the landscape relevant to SGD looks like, and how the hyperparameters of SGD (learning rate and batch size) help SGD maneuver through it.
\section{A Walk with SGD}
\label{the_walk}
\begin{wrapfigure}{r}{0.48\textwidth}
\vspace{-35pt}
\centering
\begin{subfigure}{0.45\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.15in 0.35in 0.in 0.1in,clip]{figs/vgg_sgd_epoch1.png}
\end{subfigure}\hspace{0.2\textwidth}
\begin{subfigure}{0.45\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.15in 0.35in 0.in 0.in,clip]{figs/interpolation_vgg_epoch1_angle.png}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}{0.45\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.15in 0.1in 0.in 0.in,clip]{figs/interpolation_vgg_epoch1_dist.png}
\end{subfigure}
\caption{Plots for VGG-11 Epoch 1 trained using \textbf{SGD} on CIFAR-10. The descriptions of the plots are the same as in Figure \ref{fig:loss_interpolation_gd_vgg11_cifar10_boom}. \textbf{Gist}: The loss interpolation between consecutive iterations has a minimum, and the cosine is less negative compared with GD, suggesting the optimization is oscillating along the walls of a valley-like structure but doing more exploration compared with GD. This is verified by the larger distance traveled by SGD compared with GD in Figure \ref{fig:loss_interpolation_gd_vgg11_cifar10_boom}. The valley floor (dashed orange line) has many ups and downs, showing barriers along SGD's path which do not affect its dynamics because SGD travels at a height above the floor. \label{fig:interpolation_vgg11_cifar10_epoch1}}
\vspace{-10pt}
\end{wrapfigure}
We now begin our analysis of the loss surface of DNNs along the trajectory of optimization updates. Specifically, consider that the parameters $\mathbf{\theta}$ of a DNN are initialized to a value $\mathbf{\theta}_0$. When using an optimization method to update these parameters, the $t^{th}$ update step takes the parameters from $\mathbf{\theta}_{t}$ to $\mathbf{\theta}_{t+1}$ using the estimated gradient $\mathbf{g}_t$ as,
\begin{align}
\small
\mathbf{\theta}_{t+1} = \mathbf{\theta}_t - \eta \mathbf{g}_t
\end{align}
where $\eta$ is the learning rate. Notice that the $t^{th}$ update step corresponds to the $t^{th}$ epoch \textit{only} when using full batch gradient descent (GD; gradient computed using the whole dataset). In the case of stochastic gradient descent, one iteration is an update from a gradient computed on a mini-batch. We then interpolate the DNN loss along the convex combination of $\mathbf{\theta}_{t}$ and $\mathbf{\theta}_{t+1}$ by considering parameter vectors $\mathbf{\theta}_{t}^\alpha = (1-\alpha) \theta_t + \alpha \theta_{t+1}$, where $\alpha \in [0,1]$ is chosen such that we obtain $10$ samples uniformly placed between these two parameter points. Simultaneously, we also keep track of two metrics-- the cosine of the angle between two consecutive gradients $\cos(\mathbf{g}_{t-1}, \mathbf{g}_t): = \mathbf{g}_{t-1}^T \mathbf{g}_{t}/(\lVert \mathbf{g}_{t-1}\rVert_2 \lVert \mathbf{g}_{t}\rVert_2)$, and the distance of the current parameter $\mathbf{\theta}_{t}$ from the initialization $\mathbf{\theta}_{0}$, given by $\lVert \theta_t - \theta_0 \rVert_2$. As will become apparent shortly, these two metrics, along with the interpolation curve, help us make deductions about how the optimization \textit{interacts} with the loss surface along its trajectory.
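As a concrete illustration of these diagnostics, the following sketch computes the loss interpolation between consecutive iterates, the cosine between consecutive gradients, and the distance from initialization. A toy quadratic stands in for the DNN loss; the Hessian, step size, and starting point are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Toy quadratic stand-in for the DNN training loss L(theta); purely illustrative.
H = np.diag([10.0, 1.0])
loss = lambda th: 0.5 * th @ H @ th
grad = lambda th: H @ th

def interpolated_losses(theta_t, theta_next, n_points=10):
    # Loss at theta_t^alpha = (1 - alpha) * theta_t + alpha * theta_{t+1}.
    alphas = np.linspace(0.0, 1.0, n_points)
    return [loss((1 - a) * theta_t + a * theta_next) for a in alphas]

def cosine(g_prev, g_cur):
    # cos(g_{t-1}, g_t) = g_{t-1}^T g_t / (||g_{t-1}||_2 ||g_t||_2)
    return g_prev @ g_cur / (np.linalg.norm(g_prev) * np.linalg.norm(g_cur))

theta0 = np.array([1.0, 1.0])
eta = 0.18  # large enough to overshoot along the sharp direction
theta1 = theta0 - eta * grad(theta0)

interp = interpolated_losses(theta0, theta1)
# A minimum strictly below both endpoints marks the "valley floor".
assert min(interp) < interp[0] and min(interp) < interp[-1]
# Consecutive gradients point in nearly opposite directions (cosine close to -1).
assert cosine(grad(theta0), grad(theta1)) < -0.9
dist_from_init = np.linalg.norm(theta1 - theta0)
```

In the real experiments the same three quantities are computed per update step with the full-dataset loss; the quadratic here merely reproduces the bouncing behavior in two dimensions.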
We perform experiments on MNIST \cite{mnistlecun}, CIFAR-10 \cite{cifar} and a subset of the tiny Imagenet dataset \cite{ILSVRC15} using multi-layer perceptrons (MLP), VGG-11 \cite{simonyan2014very} and Resnet-56 \cite{he2016deep} architectures with various batch sizes and learning rates.
In the main text, we mostly show results for CIFAR-10 using VGG-11 architecture with a batch size of 100 and a fixed learning rate of 0.1 due to space limitations. Experiments on all the other datasets, architectures and hyper-parameter settings can be found in the appendix. All the claims are consistent across them.
\subsection{Optimization Trajectory}
We first experiment with full batch gradient descent (GD) to study its behavior, before jumping to the analysis of SGD, in order to isolate the confounding factor of mini-batch induced stochasticity. The plot of the training loss interpolation between consecutive iterations (referred to in the figure as \textit{training loss}), $\cos(\mathbf{g}_{t-1}, \mathbf{g}_t)$, and the \textit{parameter distance} $\lVert \theta_t - \theta_0 \rVert_2$ for the VGG-11 architecture trained on CIFAR-10 using full batch gradient descent is shown in Figure \ref{fig:loss_interpolation_gd_vgg11_cifar10_boom} for the first $40$ iterations of training. To be clear, the x-axis is calibrated by the number of iterations, and there are $10$ interpolated loss values between each pair of consecutive iterations (vertical gray lines) in the \textit{training loss} plot, as described above (the cosine and parameter distance plots do not have any interpolations). This figure shows that the interpolated loss between consecutive parameters of the GD optimization update after iteration $15$ appears roughly quadratic, with a minimum in between.
Additionally, the cosine of the angle between consecutive gradients after iteration $15$ becomes negative and eventually very close to $-1$, which means the consecutive gradients point in almost opposite directions. These two observations together suggest that the GD iterate is bouncing between the walls of a valley-like landscape. For the iterations where there is a minimum in the interpolation between two iterations, we refer to this minimum as the \textit{floor} of the valley (these valley floors are connected by the dashed orange line in Figure \ref{fig:loss_interpolation_gd_vgg11_cifar10_boom} for clarity). We will see that GD shows a lack of exploration for better minima by comparing its parameter distance from initialization with that of SGD. Note that the parameter distance of GD during these $40$ iterations reaches $\sim 1.4$.
Now we perform the same analysis for SGD. Notice that even though the updates are performed using mini-batches for SGD, the training loss values used in the plot are computed using the full dataset, in order to visualize the actual loss landscape. We show these plots for epoch $1$ (Figure \ref{fig:interpolation_vgg11_cifar10_epoch1}) in the main text, and for epoch $25$ (Figure \ref{fig:interpolation_vgg_epoch25}) and epoch $100$ (Figure \ref{fig:interpolation_vgg_epoch100}) in the appendix. We find that while the loss interpolation also shows a quadratic-like structure with a minimum in between (similar to GD), there are some qualitative differences compared with GD. The cosine of the angle between gradients from consecutive iterations is significantly less negative, suggesting that instead of oscillating in the same region, SGD quickly moves away from its previous position. This can be verified by the parameter distance from initialization: the distance after $40$ iterations is $\sim 1.7$, which is larger than the distance moved by GD\footnote{In general, after the same number of updates, GD traverses a smaller distance compared with SGD, see \cite{hoffer2017train}.}. Finally, and most interestingly, we see that the height of the valley floor has many ups and downs across consecutive iterations, in contrast with GD (emphasized by the dashed orange line in Figure \ref{fig:interpolation_vgg11_cifar10_epoch1}). This means there is rough terrain, or barriers, along the path of SGD that could hinder exploration if the optimization traveled too close to the valley floor.
\begin{wrapfigure}{l}{0.42\textwidth}
\vspace{-10pt}
\begin{tabular}{|c|c|c|c|c|}
\hline
\textbf{Arch\textbackslash Epochs} &\textbf{ 1} & \textbf{ 10} & \textbf{ 25}& \textbf{ 100}\\\hline
\textbf{VGG-11}&0 & 0 & 5 &13\\\hline
\textbf{Resnet-56}& 0& 0& 2 &23\\\hline
\textbf{MLP}&0&3&5&-\\\hline
\end{tabular}
\captionof{table}{\small Number of barriers crossed during training of one epoch (450 iterations) for VGG-11 and Resnet-56 on CIFAR-10 and MLP on MNIST. We say a barrier is crossed during an update step if there exists a point interpolated between the parameters before and after an update which has a loss value higher than the loss at either points. For most parts of the training, we find that SGD does not cross any significant number of barriers.\label{tab:barrier}}
\vspace{-15pt}
\end{wrapfigure}
A similar analysis for Resnet-56 on CIFAR-10, MLP on MNIST, and VGG-11 on tiny ImageNet trained using GD for the first epoch is shown in Figures \ref{fig:interpolation_resnet_gd_boom}, \ref{fig:interpolation_MLP_gd}, and \ref{fig:interpolation_imagnet_gd} respectively in the appendix. The same analysis for SGD on different datasets and architectures under different hyper-parameters is also shown in section 1 of the appendix. The observations described here are consistent across all of these experiments.
In order to show that the claim about optimization not crossing barriers extends to the whole of training rather than only the few iterations shown above, we quantitatively measure, over entire epochs at different phases of training, whether barriers are crossed. The result is shown in Table \ref{tab:barrier} for VGG-11 and Resnet-56 trained on CIFAR-10 (trained for 100 epochs) and for an MLP trained on MNIST (trained for 40 epochs). As we see, no barriers are crossed for most of training. We further compute the number of barriers crossed for the first $40$ epochs for VGG-11 on CIFAR-10, shown in Figure \ref{fig:barrier} in the appendix: no barriers are crossed for most of the epochs, and even for the barriers that are crossed towards the end, their heights are substantially smaller than the loss value at the corresponding point during training, meaning they are not significant.
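The barrier check behind these counts can be sketched as follows. This is a minimal stand-alone version of the definition in the text (an interior interpolated point with loss higher than both endpoints); the loss values below are fabricated for illustration.

```python
def barrier_crossed(interp_losses):
    """interp_losses: loss values along the interpolation between the parameters
    before and after one update, including both endpoints. A barrier is crossed
    if some interior point has a loss higher than the loss at both endpoints."""
    interior_max = max(interp_losses[1:-1])
    return interior_max > interp_losses[0] and interior_max > interp_losses[-1]

def count_barriers(epoch_interpolations):
    # Number of update steps in an epoch whose interpolation crosses a barrier.
    return sum(barrier_crossed(seg) for seg in epoch_interpolations)

# A convex-looking interpolation (valley) crosses no barrier; a bump does.
assert not barrier_crossed([1.0, 0.6, 0.4, 0.7])
assert barrier_crossed([1.0, 1.5, 0.8])
```

Applied per update step over an epoch, `count_barriers` yields the kind of per-epoch tallies reported in Table \ref{tab:barrier}.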
Finally, we track the spectral norm of the Hessian along with the validation accuracy while the model is being trained\footnote{Note that we track the spectral norm in the train mode of batchnorm; we observe that in validation mode the values are significantly larger. Tracking the value in train mode is fair because this is what SGD \textit{experiences} during training. Additionally, we track the spectral norm because it captures the largest eigenvalue of the Hessian, in contrast with the Frobenius norm, which can be misleading because the Hessian may have negative eigenvalues and the Frobenius norm sums the squares of all eigenvalues.}. This plot is shown in Figure \ref{fig:vgg11_cifar10_hessian_valacc} for VGG-11 (and Figure \ref{fig:resnet56_cifar10_hessian_valacc} for Resnet-56 in the appendix). We find that the spectral norm reduces as training progresses (hence SGD finds flatter regions) but starts increasing towards the end. This is mildly correlated with a drop in validation accuracy towards the end. Regarding this correlation, while \citet{Dinh-et-al-2017} discuss that sharper minima can perform as well as wider ones, it is empirically known that flatter minima generalize better than sharper ones with SGD \citep{keskar2016large,jastrzkebski2017three}. This may be explained by \citet{neyshabur2017exploring, achille2017emergence}, who argue that minima that are both wide and have small norm may explain generalization in over-parametrized deep networks.
\vspace{-10pt}
\subsection{Qualitative Roles of Learning Rate and Batch Size}
We now focus in more detail on how the learning rate and batch size play qualitatively different roles during SGD optimization. As an extreme case, we already saw in the last section that when using GD vs SGD, the cosine of the angle between gradients from two consecutive iterations $\cos(\mathbf{g}_{t-1}, \mathbf{g}_t)$ is significantly closer to -1 (180 degrees) in the case of GD in contrast with SGD.
\begin{wrapfigure}{r}{0.42\textwidth}
\vspace{-15pt}
\includegraphics[scale=0.5,trim=0.in 0.1in 0.in 0.in,clip]{figs/vgg11_cifar10_spectral_norm.png}
\captionof{figure}{Spectral norm of the Hessian and validation accuracy for VGG-11 trained on CIFAR-10 using a fixed learning rate of 0.1 and batch size 100. Spectral norm and validation accuracy are roughly inversely correlated.}
\label{fig:vgg11_cifar10_hessian_valacc}
\vspace{-10pt}
\end{wrapfigure}
Now we show on a more granular scale that changing the batch size gradually (keeping the learning rate fixed) changes $\cos(\mathbf{g}_{t-1}, \mathbf{g}_t)$, while changing the learning rate gradually (keeping batch size fixed) does not.
This is shown in Figure \ref{fig:cos_vgg11_cifar10} for VGG-11 trained on CIFAR-10, and in Figures \ref{fig:cos_resnet_cifar10}, \ref{fig:cos_mlp_mnist} and \ref{fig:cos_vgg_tiny} for Resnet-56 on CIFAR-10, MLP on MNIST and VGG-11 on tiny ImageNet respectively in the appendix. Notice the cosine is significantly more negative for larger batch sizes, implying that with larger batch sizes the optimization bounces more within the same region instead of traveling farther along the valley, as is the case for small batch sizes.
This behavior is verified by the smaller distance of parameters from initialization during training for larger batch size which is also discussed by \citet{hoffer2017train}. This suggests that the noise from a small mini-batch size facilitates exploration that may lead to better minima and that this is hard to achieve by changing the learning rate.
On the other hand, we find that the learning rate controls the height from the valley floor at which the optimization oscillates along the valley walls which is important for avoiding barriers along SGD's path. Specifically, to quantify the height at which the optimization is bouncing above the valley floor, we make the following computations. Suppose at iterations $t$ and $t+1$ of training, the parameters are given by $\mathbf{\theta}_t$ and $\mathbf{\theta}_{t+1}$ respectively, and from the $10$ points sampled uniformly between $\mathbf{\theta}_t$ and $\mathbf{\theta}_{t+1}$ given by $\mathbf{\theta}_t^{\alpha} = (1-\alpha) \theta_t + \alpha \theta_{t+1}$ for different values of $\alpha \in [0,1]$, we define $\mathbf{\theta}_t^{\min} := \mathbf{\theta}_t^{\arg \min_{\alpha} \mathcal{L}(\mathbf{\theta}_t^{\alpha})}$, where $\mathcal{L}(\mathbf{\theta})$ denotes the DNN loss at parameter $\mathbf{\theta}$ using the whole training set. Then we define the height of the iterate from the valley floor at iteration $t$ as $\frac{\mathcal{L}(\mathbf{\theta}_t) + \mathcal{L}(\mathbf{\theta}_{t+1}) - 2\mathcal{L}(\mathbf{\theta}_t^{\min})}{2}$. We then separately compute the average height for all iterations of epochs $1$, $10$, $25$ and $100$.
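A minimal sketch of this height computation, assuming the interpolated loss values (including both endpoints) for each update step have already been computed as described earlier:

```python
def height_above_floor(interp_losses):
    # interp_losses[0] = L(theta_t), interp_losses[-1] = L(theta_{t+1});
    # the minimum over the interpolation approximates L(theta_t^min).
    floor = min(interp_losses)
    return (interp_losses[0] + interp_losses[-1] - 2.0 * floor) / 2.0

def average_height(epoch_interpolations):
    # Average height above the valley floor across an epoch's iterations.
    heights = [height_above_floor(seg) for seg in epoch_interpolations]
    return sum(heights) / len(heights)

# Endpoints at losses 1.0 and 0.6 with a floor of 0.2 give a height of 0.6.
assert abs(height_above_floor([1.0, 0.2, 0.6]) - 0.6) < 1e-12
```

Averaging `height_above_floor` over all iterations of an epoch gives the per-epoch values of the kind reported in Table \ref{tab:height_vgg}.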
\begin{wrapfigure}{l}{0.65\textwidth}
\vspace{-4pt}
\small
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
VGG-11&\textbf{Epoch 1} & \textbf{Epoch 10} & \textbf{Epoch 25}\\
\hline
\textbf{LR 0.1}&0.0625$\pm$1.2e-3 & 0.0199$\pm$4.8e-4 & 0.0104$\pm$2.2e-5\\\hline
\textbf{LR 0.05}&0.0102$\pm$3.8e-5 & 0.0050$\pm$2.8e-5& 0.0035$\pm$1.7e-5 \\\hline
\hline
Resnet-56&\textbf{Epoch 1} & \textbf{Epoch 10} & \textbf{Epoch 25}\\
\hline
\textbf{LR 0.3}&0.0380$\pm$5.9e-4 & 0.0131$\pm$4.5e-4& 0.0094$\pm$1.3e-5\\\hline
\textbf{LR 0.15}&0.0084$\pm$5.2e-5& 0.0034$\pm$3.2e-5& 0.0020$\pm$7.2e-6\\\hline
\end{tabular}
\captionof{table}{Average height of SGD above the valley floor across the iterations in one epoch, for different epochs during the training of VGG-11 and Resnet-56 using SGD on CIFAR-10. Here the height at iteration $t$ is defined by $\frac{\mathcal{L}(\mathbf{\theta}_t) + \mathcal{L}(\mathbf{\theta}_{t+1}) - 2\mathcal{L}(\mathbf{\theta}_t^{\min})}{2}$. Results for Epoch 100 and for VGG-11 on Tiny-ImageNet can be found in Table \ref{tab:height_ref} in the appendix. \label{tab:height_vgg}}
\end{center}
\vspace{-15pt}
\end{wrapfigure}
These values are shown in Table \ref{tab:height_vgg} for VGG-11 and Resnet-56 trained on CIFAR-10, and in Table \ref{tab:height_ref} in the appendix for VGG-11 on Tiny-ImageNet. They show that for almost all epochs, a smaller learning rate leads to a smaller height above the valley floor. Since the floor has barriers, a smaller height increases the risk of hindered exploration towards flatter minima. This is corroborated by recent empirical observations that smaller learning rates lead to sharper minima and poor generalization \cite{smith2017don, jastrzkebski2017three}. Based on our observations on the roles of learning rate and batch size, we empirically study learning rate schedules in appendix \ref{section_lr_sch}.
\vspace{-8pt}
\section{Importance of SGD Noise Structure}
\vspace{-5pt}
The gradient $\mathbf{g}_{SGD}(\mathbf{\theta})$ from mini-batch SGD at a parameter value $\mathbf{\theta}$ is expressed as,
$\mathbf{g}_{SGD}(\mathbf{\theta}) = \mathbf{\bar{g}}(\mathbf{\theta}) + \frac{1}{\sqrt{B}} \mathbf{n} (\mathbf{\theta})$,
where $\mathbf{n}(\mathbf{\theta})\sim \mathcal{N}(0, \mathbf{C}(\mathbf{\theta}))$, $\mathbf{\bar{g}}(\mathbf{\theta})$ denotes the expected gradient using all training samples, $B$ is the mini-batch size, and $\mathbf{C}(\mathbf{\theta})$ is the gradient covariance matrix at $\mathbf{\theta}$. In the previous section we discussed how mini-batch induced stochasticity plays a crucial role in SGD based optimization. This stochasticity has historically been credited with helping the optimization escape local minima in DNNs. However, the importance of the structure of the gradient covariance matrix $\mathbf{C}(\mathbf{\theta})$ is often neglected in these claims. To better understand its importance, we study the training dynamics of full batch gradient descent with artificially added isotropic noise. Specifically, we treat isotropic noise as our null hypothesis, to confirm that the structure of the noise induced by mini-batches in SGD is important.
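The decomposition above can be checked numerically. In the sketch below the per-sample gradient matrix is fabricated for illustration (in a real model each row would come from backpropagation on one sample); averaging $B$ sampled rows yields mini-batch gradients whose covariance scales as $\mathbf{C}(\mathbf{\theta})/B$, matching the $\mathbf{n}(\mathbf{\theta})/\sqrt{B}$ noise term.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-sample gradients at a fixed theta (one row per training sample);
# in practice these would come from per-sample backpropagation.
N, B = 1000, 25
per_sample_grads = rng.normal(size=(N, 5)) + np.array([1.0, -2.0, 0.5, 0.0, 3.0])

g_bar = per_sample_grads.mean(axis=0)        # full-batch gradient g_bar(theta)
C = np.cov(per_sample_grads, rowvar=False)   # gradient covariance C(theta)

# Empirical covariance of mini-batch gradients should match C / B,
# i.e. g_SGD = g_bar + n / sqrt(B) with n ~ N(0, C).
batch_grads = np.array([
    per_sample_grads[rng.choice(N, size=B, replace=True)].mean(axis=0)
    for _ in range(2000)
])
emp_cov = np.cov(batch_grads, rowvar=False)
assert np.max(np.abs(emp_cov - C / B)) < 0.02
assert np.allclose(batch_grads.mean(axis=0), g_bar, atol=0.05)
```

The key point is that the mini-batch noise inherits the full (generally anisotropic) structure of $\mathbf{C}(\mathbf{\theta})$, in contrast with the isotropic noise studied next.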
\begin{wrapfigure}{r}{0.48\textwidth}
\vspace{-30pt}
\centering
\begin{subfigure}[c]{0.45\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.18in 0.35in 0.in 0.in,clip]{figs/cos_vgg11_cifar10_lr.png}
\end{subfigure}
\vfill
\begin{subfigure}[c]{0.45\columnwidth}
\centering
\includegraphics[scale=0.6,trim=0.18in 0.1in 0.in 0.in,clip]{figs/cos_vgg11_cifar10_bs.png}
\end{subfigure}
\caption{Changing the batch size changes the cosine of the angle between consecutive gradients, while changing the learning rate does not have any significant effect on the cosine. This shows batch size has a qualitatively different role compared with learning rate. Note that the curves are smoothed for visual clarity.\label{fig:cos_vgg11_cifar10}}
\vspace{-15pt}
\end{wrapfigure}
In this experiment, we first train our models with gradient descent (GD), meaning no noise is sampled from SGD's $\mathcal{N}(0, \mathbf{C}(\mathbf{\theta}))$. For gradient descent with isotropic noise, we add isotropic noise to $\mathbf{\bar{g}}(\mathbf{\theta})$ at every iteration. The noise is sampled from a normal distribution whose variance is the maximum gradient variance of the model at initialization multiplied by a factor of $0.1$ or $0.05$. We train all models until their training losses saturate and monitor the training loss, validation accuracy, cosine of the angle between gradients from two consecutive iterations, and the parameter distance from initialization.
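This effect is easy to reproduce on a toy quadratic (the Hessian, learning rate, and noise scale below are illustrative assumptions, not the paper's settings): noiseless GD drives the loss towards zero, while the same descent with added isotropic Gaussian noise stalls at a loss floor set by the noise.

```python
import numpy as np

rng = np.random.default_rng(1)
H = np.diag([10.0, 1.0])             # toy quadratic loss 0.5 * theta^T H theta
loss = lambda th: 0.5 * th @ H @ th
grad = lambda th: H @ th

def run(eta, sigma, steps=500):
    # Gradient descent with isotropic Gaussian noise of scale sigma on the gradient.
    th = np.array([1.0, 1.0])
    tail = []
    for t in range(steps):
        th = th - eta * (grad(th) + sigma * rng.normal(size=2))
        if t >= steps - 100:
            tail.append(loss(th))
    return float(np.mean(tail))      # average loss over the last 100 steps

final_gd = run(eta=0.05, sigma=0.0)
final_noisy = run(eta=0.05, sigma=0.5)
# Noiseless GD converges; isotropic noise keeps the training loss stuck at a floor.
assert final_gd < 1e-6 < final_noisy
```

In the toy model the stall is a stationary fluctuation around the minimum; the DNN experiments below show the analogous stagnation in training loss.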
Figure \ref{fig:vgg_isotropic} shows the results for VGG-11, and Figures \ref{fig:resnet_isotropic} and \ref{fig:mlp_isotropic} in the appendix show the results for Resnet-56 on CIFAR-10 and MLP on MNIST. From the training loss and validation accuracy curves, we can see that adding even a small amount of isotropic noise makes both convergence\footnote{We additionally find that the model trained with isotropic noise gets stuck: neither reducing the learning rate nor switching to GD at this point leads to a reduction in training loss. However, switching to SGD makes the loss go down.} and generalization worse compared with the model trained with GD. The cosine of the angle between gradients of two consecutive iterations is close to $-1$ for $1500$ iterations for GD, which means consecutive gradients point in almost opposite directions. This is further evidence that GD makes the optimization bounce off valley walls, as discussed in section \ref{the_walk}. The parameter distance from initialization shows that models trained with isotropic noise travel farther compared with the model trained using noiseless GD. These distances are much larger even compared with models trained with SGD (not shown here) for the same number of updates.
To gain more intuition about this behavior, we also compute the norm of the final parameters found by GD, SGD, and the isotropic-noise runs. The parameter norms are $87$ and $82$ for GD and SGD respectively, and $369$ and $443$ for the $0.014$ and $0.028$ isotropic variance cases. These numbers corroborate the commonly discussed notion that SGD finds solutions with smaller $\ell^2$ norm than GD \cite{zhang2016understanding}, and the fact that the isotropic-noise solutions have much larger norms and get stuck suggests that isotropic noise both hinders optimization and hurts generalization.
\citet{neelakantan2015adding} suggest adding isotropic noise to gradients and report performance improvement on a number of tasks. However, notice the crucial difference between our claim in this section and their setup is that they add isotropic noise on top of the noise due to the mini-batch induced stochasticity, while we add isotropic noise to the full dataset gradient (hence no noise is sampled from the gradient covariance matrix).
To gain insight into why the noise sampled from the gradient covariance matrix $\mathbf{C}(\mathbf{\theta})$ helps SGD, we note that there is a relationship between the covariance $\mathbf{C}(\mathbf{\theta})$ and the Hessian $\mathbf{H}(\mathbf{\theta})$ of the loss surface at parameter $\mathbf{\theta}$, revealed by the generalized Gauss-Newton decomposition (see \citet{sagun2017empirical}) when using the cross-entropy (or negative log likelihood) loss. Let $p_i(\theta)$ denote the predicted probability output (of the correct class in the classification setting, for instance) of a DNN parameterized by $\theta$ for the $i^{th}$ data sample (out of $N$ samples in total). Then the negative log likelihood loss for the $i^{th}$ sample is given by $\mathcal{L}_i(\theta) = -\log(p_i(\theta))$. \textit{The relation between the Hessian} $\mathbf{H}(\mathbf{\theta})$ \textit{and the gradient covariance} $\mathbf{C}(\theta)$ \textit{for the negative log likelihood loss is},
\begin{align}
\small
\vspace{-15pt}
\mathbf{H}(\mathbf{\theta}) = \mathbf{C}(\theta) + \mathbf{\bar{g}}(\theta) \mathbf{\bar{g}}(\theta)^T + \frac{1}{N} \sum_{i=1}^{N} \frac{\partial \mathcal{L}_i(\theta)}{\partial p_i(\theta)} \cdot \frac{\partial^2 p_i(\theta)}{\partial \theta^2}\nonumber
\vspace{-15pt}
\end{align}
The derivation can be found in section \ref{appendix_hessian_covariance} of the appendix.
Thus we find that the Hessian and covariance at any point $\theta$ are related, and are almost equal near minima, where the second term tends to zero. This relationship implies that the mini-batch induced noise is roughly aligned with the sharper directions of the loss landscape (empirically confirmed concurrently by \cite{zhu2018regularization}). This would prevent the optimization from converging along such directions unless a wider region is found, which could explain why SGD finds wider minima without relying on the stochastic differential equation framework of previous work, which assumes a reasonably small learning rate.
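The decomposition above can be verified numerically on a toy model where all derivatives are available in closed form. The sketch below is our own construction, not the paper's setup: a one-parameter model $p_i(\theta) = \sigma(\theta x_i)$ with the negative log likelihood loss $\mathcal{L}_i(\theta) = -\log p_i(\theta)$:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1000)   # toy inputs
theta = 0.7                 # model parameter

p = 1.0 / (1.0 + np.exp(-theta * x))      # p_i(theta), predicted probability
g = -(1.0 - p) * x                         # per-sample gradient of L_i = -log p_i
h = x**2 * p * (1.0 - p)                   # per-sample Hessian of L_i
dL_dp = -1.0 / p                           # dL_i / dp_i
d2p = x**2 * p * (1.0 - p) * (1.0 - 2*p)   # d^2 p_i / d theta^2

H = h.mean()                  # Hessian of the average loss
g_bar = g.mean()              # full-dataset gradient \bar{g}
C = (g**2).mean() - g_bar**2  # gradient covariance C(theta)
residual = (dL_dp * d2p).mean()

# H(theta) = C(theta) + g_bar^2 + (1/N) sum_i dL_i/dp_i * d^2 p_i/dtheta^2
lhs, rhs = H, C + g_bar**2 + residual
```

In this one-parameter case the identity even holds per sample, so the two sides agree up to floating-point error.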
\begin{wrapfigure}{r}{0.48\textwidth}
\centering
\includegraphics[scale=0.5,trim=0.in 0.35in 0.in 0.in,clip]{figs/vgg_isotropic_trainLoss_new.png}
\includegraphics[scale=0.5,trim=0.in 0.35in 0.in 0.1in,clip]{figs/vgg_isotropic_validacc_new.png}
\includegraphics[scale=0.5,trim=0.17in 0.35in 0.in 0.in,clip]{figs/vgg_isotropic_angle_new.png}
\includegraphics[scale=0.5,trim=0.in 0.1in 0.in 0.in,clip]{figs/vgg_isotropic_dist_new.png}
\captionof{figure}{Plots for VGG-11 trained by GD (without noise) and GD with artificial isotropic noise sampled from Gaussian distribution with different variances. Models trained using GD with added isotropic noise get stuck in terms of training loss and have worse validation performance compared with the model trained with GD.}
\label{fig:vgg_isotropic}
\vspace{-35pt}
\end{wrapfigure}
\vspace{-8pt}
\section{Discussion}
\vspace{-8pt}
We presented qualitative results to understand how GD and SGD interact with the DNN loss surface, and avoided assumptions in order to rely instead on empirical evidence. We now draw similarities between the optimization trajectory in DNNs that we have empirically found, and those in quadratic loss optimization (see section 5 of \citet{lecun1998efficient}). Based on our empirical evidence, we deduce that both GD and SGD move in a valley-like landscape by bouncing off valley walls. This is reminiscent of optimization in a quadratic loss setting with a non-isotropic positive semi-definite Hessian, where the optimal learning rate $\eta$ causes under-damping without divergence along eigenvectors of the Hessian whose eigenvalues $\lambda_i$ satisfy $ \lambda_i^{-1}< \eta < 2\lambda_i^{-1}$. On the other hand, in the case of DNNs trained with GD, we find that even though the training loss oscillates between valley walls during consecutive iterations, the valley floor decreases smoothly (see Figure \ref{fig:loss_interpolation_gd_vgg11_cifar10_boom}). This is similar to quadratic loss optimization with over-damped convergence along the eigenvectors corresponding to eigenvalues $\lambda_i$ such that $ \eta < \lambda_i^{-1}$.
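The two damping regimes can be illustrated on a one-dimensional quadratic loss (a toy sketch with our own constants): GD on $\frac{\lambda}{2}\theta^2$ gives $\theta_{t+1} = (1 - \eta\lambda)\theta_t$, so iterates flip sign at every step yet still converge when $\lambda^{-1} < \eta < 2\lambda^{-1}$, and decay monotonically when $\eta < \lambda^{-1}$:

```python
import numpy as np

def gd_iterates(lmbda, eta, theta0=1.0, steps=20):
    """GD on the quadratic loss 0.5 * lmbda * theta^2."""
    thetas = [theta0]
    for _ in range(steps):
        thetas.append(thetas[-1] * (1.0 - eta * lmbda))
    return np.array(thetas)

lmbda = 2.0
under = gd_iterates(lmbda, eta=0.75)  # 1/lambda < eta < 2/lambda: under-damped
over = gd_iterates(lmbda, eta=0.25)   # eta < 1/lambda: over-damped

# Under-damped: consecutive iterates flip sign ("bouncing off valley walls")
# while magnitudes still shrink; over-damped: smooth monotone decay.
```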
On a different note, it is commonly conjectured that SGD crosses barriers to escape local minima when training DNNs. Contrary to this commonly held intuition, we find that SGD almost never crosses any significant barriers along its path. More interestingly, when training with a certain learning rate (see figure \ref{fig:interpolation_vgg11_cifar10_epoch1}), we find barriers at the floor of the valley, but SGD avoids them by traveling at a height above the floor (due to the large learning rate). Hence with a small learning rate SGD should encounter such barriers and would be expected to cross them; in our experiments, however, this was not the case. This suggests that while SGD is in theory capable of crossing barriers (due to stochasticity), it does not do so, probably because there exist other directions in such regions along which SGD can continue to optimize without crossing barriers. Since small learning rates empirically correlate with worse generalization, this suggests that moving over such barriers rather than crossing them, by using a large learning rate, is a useful mechanism for exploring good regions.
Finally, much of what we have discussed is based on the loss landscape of specific datasets and architectures along with network parameterization choices like rectified linear activation units (ReLUs) and batch normalization \cite{ioffe2015batch}. These conclusions may differ depending on these choices. In these cases analysis similar to ours can be performed to see if similar dynamics hold or not. Studying these dynamics may provide more practical guidelines for setting optimization hyperparameters.
\section{Introduction}
Deep neural networks are powerful models that
achieve state-of-the-art performance across several domains, such as bioinformatics \cite{bio2, bio1}, speech \cite{sp2}, and computer vision \cite{he2015deep, cv2}. Though deep networks have exhibited very good performance in classification tasks, they have recently been shown to be unstable to adversarial perturbations of the data \cite{szegedy2013intriguing, biggio2013evasion}. In fact, very small and often imperceptible perturbations of the data samples are sufficient to fool state-of-the-art classifiers and result in incorrect classification. This discovery of the surprising vulnerability of classifiers to perturbations has led to a large body of work that attempts to design robust classifiers \cite{goodfellow2014, shaham2015understanding, madry2017towards, cisse2017parseval, papernot2015distillation, alemi2016deep}. However, advances in designing robust classifiers have been accompanied with stronger perturbation schemes that defeat such defenses \cite{carlini2017adversarial, uesato2018adversarial, robust_vision}.
In this paper, we assume that the data distribution is defined by a smooth generative model (mapping latent representations to images), and study theoretically the existence of small adversarial perturbations for arbitrary classifiers. We summarize our main contributions as follows:
\begin{itemize}
\item We show fundamental upper bounds on the robustness of any classifier to perturbations, which provides a baseline to the maximal achievable robustness. When the latent space of the data distribution is high dimensional, our analysis shows that \textit{any} classifier is vulnerable to very small perturbations. Our results further suggest the existence of a tight relation between robustness and linearity of the classifier in the latent space.
\item We prove the existence of adversarial perturbations that transfer across different classifiers. This provides theoretical justification to previous empirical findings that highlighted the existence of such transferable perturbations.
\item We quantify the difference between the robustness to adversarial examples \textit{in the data manifold} and \textit{unconstrained} adversarial examples, and show that the two notions of robustness can be precisely related: for any classifier $f$ with in-distribution robustness $r$, there exists a classifier $\tilde{f}$ that achieves unconstrained robustness $r/2$. This further provides support to the empirical observations in \cite{ilyas2017robust, gilmer2018adversarial}.
\item We evaluate our bounds in several experimental setups (CIFAR-10 and SVHN), and show that they yield informative baselines to the maximal achievable robustness.
\end{itemize}
Our robustness analysis provides in turn insights onto desirable properties of generative models capturing real-world distributions.
In particular, the intriguing generality of our analysis implies that when the data distribution is modeled through a smooth generative model with a high-dimensional latent space, there exist small-norm perturbations of images that fool humans for any discriminative task defined on the data distribution. If, on the other hand, the human visual system is inherently robust to small perturbations (e.g., in $\ell_p$ norm), then our analysis shows that a distribution over natural images cannot be modeled by smooth and high-dimensional generative models. Going forward in modeling complex natural image distributions, our results hence suggest that low latent dimensionality and non-smoothness are important constraints for generative models to capture the real-world distribution of images; models not satisfying such constraints admit small adversarial perturbations for any classifier, including the human visual system.
\section{Related work}
It was proven in \cite{fawzi2015a, nips2016_ours} that for certain families of classifiers, there exist adversarial perturbations that cause misclassification of magnitude $O(1/\sqrt{d})$, where $d$ is the data dimension, provided the robustness to random noise is fixed (which is typically the case if e.g., the data is normalized).
In addition, fundamental limits on the robustness of classifiers were derived in \cite{fawzi2015a} for some simple classification families. Other works have instead studied the existence of adversarial perturbations under strong assumptions on the data distribution \cite{gilmer2018adversarial, tanay2016boundary}. In this work, motivated by the success of generative models mapping latent representations with a normal prior, we instead study the existence of robust classifiers under this general data-generating procedure, and derive bounds on the robustness that hold for any classification function. A large number of techniques have recently been proposed to improve the robustness of classifiers to perturbations, such as adversarial training \cite{goodfellow2014}, robust optimization \cite{shaham2015understanding, madry2017towards}, regularization \cite{cisse2017parseval}, distillation \cite{papernot2015distillation}, stochastic networks \cite{alemi2016deep}, etc. Unfortunately, such techniques have been shown to fail whenever a more complex attack strategy is used \cite{carlini2017adversarial, uesato2018adversarial}, or when they are evaluated on a more complex dataset. Other works have recently studied procedures and algorithms to provably guarantee a certain level of robustness \cite{hein2017formal, peck2017lower, sinha2017certifiable, raghunathan2018certified, dvijotham2018dual}, which have been applied to small datasets (e.g., MNIST).
For large scale, high dimensional datasets, the problem of designing robust classifiers is entirely open. We finally note that adversarial examples for generative models have recently been considered in \cite{kos2017adversarial}; our aim here is however different as our goal is to bound the robustness of classifiers when data comes from a generative model.
\section{Definitions and notations}
\label{sec:def_notations}
Let $g$ be a generative model that maps latent vectors $z \in \mathcal{Z} := \mathbb{R}^d$ to the space of images $\mathcal{X} := \mathbb{R}^m$, with $m$ denoting the number of pixels. To generate an image according to the distribution of natural images $\mu$, we generate a random vector $z \sim \nu$ according to the standard Gaussian distribution $\nu = \mathcal{N} (0, I_d)$, and we apply the map $g$; the resulting image is then $g(z)$.
This data-generating procedure is motivated by numerous previous works on generative models, whereby natural-looking images are obtained by transforming normal vectors through a deep neural network \cite{kingma2013auto}, \cite{goodfellow2014generative}, \cite{radford2015unsupervised}, \cite{arjovsky2017wasserstein}, \cite{gulrajani2017improved}.\footnote{Instead of sampling from $\mathcal{N}(0,I_d)$ in $\mathcal{Z}$, some generative models sample from the uniform distribution in $[-1, 1]^d$. The results of this paper can be easily extended to such generative procedures.} Let $f: \mathbb{R}^m \rightarrow \{1,\dots,K\}$ be a classifier mapping images in $\mathbb{R}^m$ to discrete labels $\{1,\dots,K\}$. The discriminator $f$ partitions $\mathcal{X}$ into $K$ sets $C_i = \{x \in \mathcal{X} : f(x) = i\}$ each of which corresponds to a different predicted label.
The relative proportion of points in class $i$ is equal to $\mathbb{P}(C_i) = \nu(g^{-1}(C_i))$, the Gaussian measure of $g^{-1}(C_i)$ in $\mathcal{Z}$.
The goal of this paper is to study the \emph{robustness} of $f$ to additive perturbations under the assumption that the data is generated according to $g$.
We define two notions of robustness. These effectively measure the minimum distance one has to travel in image space to change the classification decision.
\begin{itemize}
\item \textbf{In-distribution robustness:}
For $x = g(z)$, we define the in-distribution robustness $r_{\text{in}} (x)$ as follows:
\[
r_{\text{in}} (x) = \min_{r \in \mathcal{Z}} \| g(z + r) - x \| \text{ s.t. } f(g(z+r)) \neq f(x),
\]
where $\| \cdot \|$ denotes an arbitrary norm on $\mathcal{X}$. Note that the perturbed image, $g(z+r)$ is \textit{constrained to lie in the image} of $g$, and hence belongs to the support of the distribution $\mu$.
\item \textbf{Unconstrained robustness:} Unlike the in-distribution setting, we measure here the robustness to \textit{arbitrary} perturbations in the image space; that is, the perturbed image is not constrained anymore to belong to the data distribution $\mu$.
$$r_{\text{unc}} (x) = \min_{r \in \mathcal{X}} \| r \| \text{ s.t. } f(x+r) \neq f(x).$$
This notion of robustness corresponds to the widely used definition of adversarial perturbations. It is easy to see that this robustness definition is smaller than the in-distribution robustness; i.e., $r_{\text{unc}} (x) \leq r_{\text{in}} (x)$.
\end{itemize}
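These two notions can be compared numerically on the toy one-dimensional generator of Fig. \ref{fig:illustration_generative}, $g(z) = (\cos(2\pi z), \sin(2\pi z))$. The linear classifier $f(x) = \mathrm{sign}(x_1)$ and the grid search below are our illustrative choices:

```python
import numpy as np

g = lambda z: np.stack([np.cos(2*np.pi*z), np.sin(2*np.pi*z)], axis=-1)
f = lambda x: np.sign(x[..., 0])   # binary classifier on the image space

z0 = 0.1
x0 = g(z0)

# In-distribution robustness: search over latent perturbations, staying on
# the support of the data distribution (the unit circle).
zs = np.linspace(-0.5, 0.5, 200001)
flip = f(g(zs)) != f(x0)
r_in = np.min(np.linalg.norm(g(zs[flip]) - x0, axis=-1))

# Unconstrained robustness: distance in image space to the decision
# boundary {x_1 = 0}, with no constraint to stay on the circle.
r_unc = abs(x0[0])
```

As expected, $r_{\text{unc}} \leq r_{\text{in}}$: here the unconstrained perturbation moves straight to the boundary, while the in-distribution one must travel along the circle's chord to the nearest flipped point.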
In this paper, we assume that the generative model is smooth, in the sense that it satisfies a \textit{modulus of continuity} property, defined as follows:
\begin{assumption}
We assume that $g$ admits a monotone invertible modulus of continuity $\omega$; i.e.,\footnote{This assumption can be extended to random $z$ (see C.2 in the appendix). For ease of exposition however, we use here the deterministic assumption.}
\begin{equation}
\label{eq:modulus_cont}
\forall z, z' \in \mathcal Z, \| g(z) - g(z') \| \leq \omega(\| z - z' \|_2).
\end{equation}
\end{assumption}
Note that the above assumption is milder than assuming Lipschitz continuity. In fact, the Lipschitz property corresponds to choosing $\omega(t)$ to be a linear function of $t$. In particular, the above assumption does not require that $\omega(0) = 0$, which potentially allows us to model distributions with disconnected support.\footnote{In this paper, we use the term \textit{smooth} generative models to denote that the function $\omega(\delta)$ takes small values for small $\delta$.}
It should be noted that generator smoothness is a desirable property of generative models. This property is often illustrated empirically by generating images along a straight path in the latent space \cite{radford2015unsupervised}, and verifying that the images undergo gradual semantic changes between the two endpoints. In fact, smooth transitions are often used as qualitative evidence that the generator has learned relevant factors of variation.
Fig. \ref{fig:illustration_generative} summarizes the problem setting and notations. Assuming that the data is generated according to $g$, we analyze in the remainder of the paper the robustness of arbitrary classifiers to perturbations.
\begin{figure}
\centering
\includegraphics[scale=0.6]{illustration_generative_2}
\caption{\label{fig:illustration_generative}Setting used in this paper. The data distribution is obtained by mapping $\mathcal{N} (0, I_d)$ through $g$ (we set $d = 1$ and $g(z) = (\cos(2 \pi z), \sin(2 \pi z))$ in this example). The thick circle indicates the support of the data distribution $\mu$ in $\mathbb{R}^m$ ($m = 2$ here). The binary discriminative function $f$ separates the data space into two classification regions (red and blue colors). While the in-distribution perturbed image is required to belong to the data support, this is not necessarily the case in the unconstrained setting. In this paper, we do not put any assumption on $f$, resulting in potentially arbitrary partitioning of the data space. While the existence of very small adversarial perturbations seems counter-intuitive in this low-dimensional illustrative example (i.e., $r_{\text{in}}$ and $r_{\text{unc}}$ can be large for some choices of $f$), we show in the next sections that this is the case in high dimensions.}
\end{figure}
\section{Analysis of the robustness to perturbations}
\subsection{Upper bounds on robustness}
We state a general bound on the robustness to perturbations and derive two special cases to make more explicit the dependence on the distribution
and number of classes.
\begin{theorem}
\label{thm:image_space_bounds}
Let $f: \mathbb{R}^m \rightarrow \{1, \dots, K\}$ be an arbitrary classification function defined on the image space. Then, the fraction of datapoints having robustness less than $\eta$ satisfies:
\begin{align}
\label{eq:main_theorem_image_space}
\mathbb{P} \left( r_{\text{in}} (x) \leq \eta \right) \geq \sum_{i=1}^{K} (\Phi(a_{\neq i} + \omega^{-1}(\eta)) - \Phi(a_{\neq i})) \ ,
\end{align}
where $\Phi$ is the cdf of $\mathcal{N}(0,1)$, and $a_{\neq i} = \Phi^{-1} \left(\mathbb{P} \left(\bigcup\limits_{j \neq i} C_j \right)\right)$.
In particular, if for all $i$, $\mathbb{P}(C_i) \leq \frac{1}{2}$ (the classes are not too unbalanced), we have
\begin{align}
\label{eq:prob_bound_half}
\mathbb{P} \left( r_{\text{in}} (x) \leq \eta \right) \geq 1 - \sqrt{\frac{\pi}{2}} e^{-\omega^{-1}(\eta)^2/2} \ .
\end{align}
To see the dependence on the number of classes more explicitly, consider the setting where the classes are equiprobable, i.e., $\mathbb{P} (C_i) = \frac{1}{K}$ for all $i$, with $K \geq 5$; then
\begin{align}
\label{eq:prob_bound_class_dependence}
\mathbb{P} \left( r_{\text{in}} (x) \leq \eta \right) &\geq 1 - \sqrt{\frac{\pi}{2}} e^{-\omega^{-1}(\eta)^2/2}
e^{-\eta \sqrt{\log\left(\frac{K^2}{4\pi \log(K)}\right)}} \ .
\end{align}
\end{theorem}
This theorem is a consequence of the Gaussian isoperimetric inequality first proved in~\cite{borell1975brunn} and~\cite{sudakov1978extremal}. The proofs can be found in the appendix.
\vspace{2mm}
\textbf{Remark 1. Interpretation.} For ease of interpretation, we assume that the function $g$ is Lipschitz continuous, in which case $\omega^{-1}(\eta)$ is replaced with $\eta/L$, where $L$ is the Lipschitz constant. Then, Eq. (\ref{eq:prob_bound_half}) shows the existence of perturbations of norm $\eta \propto L$ that can fool any classifier. This norm should be compared to the typical norm given by $\mathbb{E} \| g(z) \|$. By normalizing the data, we can assume $\mathbb{E} \| g(z) \| = \mathbb{E} \| z \|_2$ without loss of generality.\footnote{Without this assumption, the following discussion applies if we replace the Lipschitz constant with the normalized Lipschitz constant $L' = L \frac{\mathbb{E} \| z \|_2}{\mathbb{E} \| g(z) \|}$.} As $z$ has a normal distribution, we have $\mathbb{E} \| z \|_2 \in [\sqrt{d-1}, \sqrt{d}]$, and thus the typical norm of an element in the data set satisfies $\mathbb{E} \| g(z) \| \geq \sqrt{d-1}$. Now if we plug in $\eta = 2 L$, we obtain that the robustness is less than $2 L$ with probability exceeding 0.8. This should be compared to the typical norm, which is at least $\sqrt{d-1}$. Our result therefore shows that when $d$ is large and $g$ is smooth (in the sense that $L \ll \sqrt{d}$), there exist small adversarial perturbations that can fool arbitrary classifiers $f$. Fig. \ref{fig:high_probability_d} provides an illustration of the upper bound, in the case where $\omega$ is the identity function.
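The numbers in this remark follow directly from Eq. (\ref{eq:prob_bound_half}) with $\omega^{-1}(\eta) = \eta/L$; a quick numerical check:

```python
import math

def prob_bound(eta_over_L):
    """Lower bound of Eq. (prob_bound_half) with omega^{-1}(eta) = eta / L."""
    return 1.0 - math.sqrt(math.pi / 2.0) * math.exp(-eta_over_L**2 / 2.0)

p = prob_bound(2.0)            # perturbation of norm eta = 2L: p ~ 0.83 > 0.8
typical_norm = lambda d: math.sqrt(d - 1)  # lower bound on E||g(z)||
# e.g. for d = 100 the typical norm is ~10, far larger than the
# fooling-perturbation norm 2L when L is of order 1.
```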
\textbf{Remark 2. Dependence on $K$.} Theorem \ref{thm:image_space_bounds} shows an \textit{increasing} probability of misclassification with the number of classes $K$. In other words, it is easier to find adversarial perturbations in the setting where the number of classes is large, than for a binary classification task.\footnote{We assume here equiprobable classes.} This dependence confirms empirical results whereby the robustness is observed to decrease with the number of classes. The dependence on $K$ captured in our bounds is in contrast to previous bounds that showed decreasing probability of fooling the classifier, for larger number of classes \cite{nips2016_ours}.
\textbf{Remark 3. Classification-agnostic bound.} Our bounds hold for any classification function $f$, and are not specific to a family of classifiers. This is unlike the work of~\cite{fawzi2015a} that establishes bounds on the robustness for specific classes of functions (e.g., linear or quadratic classifiers).
\textbf{Remark 4. How tight is the upper bound on robustness in Theorem \ref{thm:image_space_bounds}?} Assuming that the smoothness assumption in Eq. \ref{eq:modulus_cont} is an equality, let the classifier $f$ be such that $f \circ g$ separates the latent space into $B_1 = g^{-1}(C_1) = \{ z: z_1 \geq 0 \}$ and $B_2 = g^{-1}(C_2) = \{ z: z_1 < 0 \}$. Then, it follows that
\begin{align*}
\mathbb{P} (r_{\text{in}} (x) \leq \eta) & = \mathbb{P} (\exists r: \| g(z+r) - g(z) \| \leq \eta, f(g(z+r)) \neq f(g(z))) \\
& = \mathbb{P} (\exists r: \| r \|_2 \leq \omega^{-1}(\eta), \text{sgn} (z_1 + r_1) \text{sgn} (z_1) < 0) \\
& = \mathbb{P} (z \in B_1, z_1 < \omega^{-1}(\eta)) + \mathbb{P} (z \in B_2, z_1 \geq -\omega^{-1}(\eta)) = 2 (\Phi(\omega^{-1}(\eta)) - \Phi(0)),
\end{align*}
which precisely corresponds to Eq. (\ref{eq:main_theorem_image_space}). In this case, the bound in Eq. (\ref{eq:main_theorem_image_space}) is therefore an equality. More generally, the bound is an equality whenever the classifier induces linearly separable regions in the latent space.\footnote{In the case where Eq. (\ref{eq:modulus_cont}) is an inequality, we will not exactly achieve the bound, but get closer to it when $f \circ g$ is linear.} This suggests that classifiers are maximally robust when the induced classification boundaries in the latent space are linear. We stress that boundaries in the $\mathcal{Z}$-space can be very different from the boundaries in the image space. In particular, as $g$ is in general non-linear, $f$ might be a highly \textit{non-linear function} of the input space, while $z \mapsto (f \circ g) (z)$ is a linear function in $z$. We provide an explicit example illustrating this remark in the appendix.
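The computation above can also be checked by Monte Carlo (our sketch): for the half-space classifier, the event $r_{\text{in}}(x) \leq \eta$ is exactly $|z_1| \leq \omega^{-1}(\eta)$, whose probability under $z_1 \sim \mathcal{N}(0,1)$ is $2(\Phi(\omega^{-1}(\eta)) - \Phi(0))$:

```python
import math
import numpy as np

rng = np.random.default_rng(2)
t = 0.5   # stands for omega^{-1}(eta)

# Standard normal cdf via the error function.
Phi = lambda u: 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

z1 = rng.normal(size=1_000_000)   # first latent coordinate, z_1 ~ N(0, 1)
mc = np.mean(np.abs(z1) <= t)     # Monte Carlo estimate of P(r_in <= eta)
exact = 2.0 * (Phi(t) - Phi(0.0)) # closed form from the remark
```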
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{plot_highprobability_d.pdf}
\caption{\label{fig:high_probability_d} Upper bound (Theorem \ref{thm:image_space_bounds}) on the median of the normalized robustness $r_{\text{in}} / \sqrt{d}$ for different values of the number of classes $K$, in the setting where $\omega(t) = t$. We assume that classes have equal measure (i.e., $\mathbb{P}(C_i) = 1/K$).}
\end{figure}
\textbf{Remark 5. Adversarial perturbations in the latent space.} While the quantities introduced in Section \ref{sec:def_notations} measure the robustness in the \textit{image space}, an alternative is to measure the robustness in the \textit{latent space}, defined as $r_{Z} = \min_{r} \| r \|_2 \text{ s.t. } f(g(z+r)) \neq f(g(z))$. For natural images, latent vectors provide a decomposition of images into meaningful factors of variation, such as features of objects in the image, illumination, etc. Hence, perturbations of vectors in the latent space measure the amount of change one needs to apply to such meaningful latent features to cause data misclassification. A bound on the magnitude of the minimal perturbation in the latent space (i.e., $r_{Z}$) can be directly obtained from Theorem \ref{thm:image_space_bounds} by setting $\omega$ to the identity (i.e., $\omega(t) = t$). Importantly, note that no assumptions on the smoothness of the generator $g$ are required for our bounds to hold when considering this notion of robustness.
\textbf{Relation between in-distribution robustness and unconstrained robustness.}
While the previous bound specifically concerns the in-distribution robustness, in many cases we are interested in achieving \textit{unconstrained} robustness; that is, the perturbed image is not constrained to belong to the data distribution (or equivalently to the range of $g$). It is easy to see that any bound derived for the in-distribution robustness $ r_{\text{in}} (x)$ also holds for the unconstrained robustness $r_{\text{unc}}(x)$, since it clearly holds that $r_{\text{unc}}(x) \leq r_{\text{in}} (x)$. One may wonder whether it is possible to get a better upper bound on $r_{\text{unc}}(x)$ directly. We show here that this is not possible if we require our bound to hold for any classifier. Specifically, we now construct a family of classifiers for which $r_{\text{unc}}(x) \geq \frac{1}{2} r_{\text{in}} (x)$:
For a given classifier $f$ in the image space, define the classifier $\tilde{f}$ constructed in a nearest neighbour strategy:
\begin{align}
\label{eq:classiifer_h}
\tilde{f}(x) = f(g(z^*)) \quad \text{ with } \quad z^* = \arg\min_{z} \| g(z) - x \|.
\end{align}
Note that $\tilde{f}$ behaves exactly in the same way as $f$ on the image of $g$ (in particular, it has the same risk and in-distribution robustness). We show here that it has an unconstrained robustness that is at least half of the in-distribution robustness of $f$.
\begin{theorem}
\label{thm:relation_two_robustness}
For the classifier $\tilde{f}$, we have $r_{\text{unc}}(x) \geq \frac{1}{2} r_{\text{in}} (x)$.
\end{theorem}
This result shows that if a classifier has in-distribution robustness $r$, then we can construct a classifier with unconstrained robustness $r/2$, through a simple modification of the original classifier $f$.
Hence, classification-agnostic limits derived for both notions of robustness are essentially the same. It should further be noted that the procedure in Eq. (\ref{eq:classiifer_h}) provides a constructive method to increase the robustness of any classifier to unconstrained perturbations. Such a nearest neighbour strategy is useful when the in-distribution robustness is much larger than the unconstrained robustness, and permits the latter to match the former. This approach was recently found to be successful in increasing the robustness of classifiers when accurate generative models can be learned \cite{defensegan}. Other techniques \cite{ilyas2017robust} build on this approach, and further use methods to increase the in-distribution robustness.
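On the toy circle generator of Fig. \ref{fig:illustration_generative}, the nearest-neighbour construction of Eq. (\ref{eq:classiifer_h}) can be sketched with a grid search over the latent space (a minimal illustration; for real generators the $\arg\min$ would require gradient-based search, as in \cite{defensegan}):

```python
import numpy as np

g = lambda z: np.stack([np.cos(2*np.pi*z), np.sin(2*np.pi*z)], axis=-1)
f = lambda x: np.sign(x[..., 0])   # base classifier on the image space

zs = np.linspace(-0.5, 0.5, 100001)  # grid stand-in for the argmin over z
gz = g(zs)

def f_tilde(x):
    """tilde_f(x) = f(g(z*)) with z* = argmin_z ||g(z) - x|| (grid search)."""
    z_star = zs[np.argmin(np.linalg.norm(gz - x, axis=-1))]
    return f(g(z_star))

# tilde_f agrees with f on the range of g (same risk, same in-distribution
# robustness), and projects off-manifold points back onto the data support
# before classifying, e.g. a point pulled toward the boundary {x_1 = 0}:
x_off = np.array([0.4, 0.0])   # nearest point on the circle is (1, 0)
```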
\subsection{Transferability of perturbations}
One of the most intriguing properties about adversarial perturbations is their transferability \cite{szegedy2013intriguing, liu2016delving} across different models. Under our data model distribution, we study the existence of transferable adversarial perturbations, and show that two models with approximately zero risk will have shared adversarial perturbations.
\begin{theorem}[Transferability of perturbations]
\label{thm:transferability}
Let $f,h$ be two classifiers. Assume that $\mathbb{P} (f \circ g(z) \neq h \circ g(z) ) \leq \delta$ (e.g., if $f$ and $h$ have a risk bounded by $\delta/2$ for the data set generated by $g$). In addition, assume that
$\mathbb{P}(C_i(f)) + \delta \leq \frac{1}{2}$ for all $i$.\footnote{This assumption is only to simplify the statement, a general statement can be easily derived in the same way.} Then,
\begin{equation}
\begin{aligned}
& \mathbb{P} \left\{ \exists v : \| v \|_2 \leq \eta \text{ and } \begin{array}{ll} f(g(z)+v) \neq f(g(z)) \\ h(g(z)+v) \neq h(g(z)) \end{array} \right\}\\
& \qquad \qquad \qquad \geq 1 - \sqrt{\frac{\pi}{2}} e^{-\omega^{-1}(\eta)^2/2} - 2 \delta.
\end{aligned}
\end{equation}
\end{theorem}
Compared to Theorem \ref{thm:image_space_bounds} which bounds the robustness to adversarial perturbations, the extra price to pay here to find \textit{transferable} adversarial perturbations is the $2 \delta$ term, which is small if the risk of both classifiers is small. Hence, our bounds provide a theoretical explanation for the existence of transferable adversarial perturbations, which were previously shown to exist in \cite{szegedy2013intriguing, liu2016delving}.
The existence of transferable adversarial perturbations across several models with small risk has important security implications, as adversaries can, in principle, fool different classifiers with a single, classifier-agnostic, perturbation. The existence of such perturbations significantly reduces the difficulty of attacking (potentially black box) machine learning models.
\subsection{Approximate generative model}
In the previous results, we have assumed that the data distribution is exactly described by the generative model $g$ (i.e., $\mu = g_{*} (\nu)$ where $g_*(\nu)$ is the pushforward of $\nu$ via $g$). However, in many cases, such generative models only provide an \textit{approximation} to the true data distribution $\mu$. In this section, we specifically assume that the generated distribution $g_{*} (\nu)$ provides an approximation to the true underlying distribution in the 1-Wasserstein sense on the metric space $(\mathcal{X}, \| \cdot \|)$; i.e., $W(g_{*}(\nu), \mu) \leq \delta$, and derive upper bounds on the robustness. This assumption is in line with recent advances in generative models, whereby the generator provides a good approximation (in the Wasserstein sense) to the true distribution, but does not exactly fit it \cite{arjovsky2017wasserstein}.
We show here that similar upper bounds on the robustness (in expectation) hold, as long as $g_{*} (\nu)$ provides an accurate approximation of the true distribution $\mu$.
\begin{theorem}
\label{thm:expectation_image_in}
We use the same notations as in Theorem \ref{thm:image_space_bounds}.
Assume that the generator $g$ provides a $\delta$ \textit{approximation} of the true distribution $\mu$ in the 1-Wasserstein sense on the metric space $(\mathcal{X}, \| \cdot \|)$; that is, $W(g_*(\nu), \mu) \leq \delta$ (where $g_*(\nu)$ is the pushforward of $\nu$ via $g$). Then, provided $\omega$ is concave, the following inequality holds:
\[
\ex{x \sim \mu}{r_{\text{unc}} (x)} \leq \omega \left( \sum_{i=1}^{K} \left( -a_{\neq i} \Phi(-a_{\neq i}) + \frac{e^{-a_{\neq i}^2/2}}{\sqrt{2\pi}} \right) \right) + \delta,
\]
where $r_{\text{unc}} (x)$ is the unconstrained robustness in the image space. In particular, for $K \geq 5$ equiprobable classes, we have
\[
\ex{x \sim \mu}{r_{\text{unc}} (x)} \leq \omega \left( \frac{\log(4\pi \log(K))}{\sqrt{2\log(K)}} \right) + \delta.
\]
\end{theorem}
In words, when the data is defined according to a distribution which can be \textit{approximated} by a smooth, high-dimensional generative model, our results show that arbitrary classifiers will have small adversarial examples in expectation. We also note that as $K$ grows, this bound decreases and even goes to zero under the sole condition that $\omega$ is continuous at $0$. Note however that the decrease is slow as it is only logarithmic.
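To make the slow logarithmic decay concrete, the following sketch (helper name is ours; the $\delta$ term is added separately) evaluates the $K$-dependent quantity $\log(4\pi\log K)/\sqrt{2\log K}$ from the theorem for growing $K$:

```python
import math

def bound_term(K: int) -> float:
    """The K-dependent term log(4*pi*log K) / sqrt(2*log K) from the theorem
    (the Wasserstein approximation error delta is added separately)."""
    lk = math.log(K)
    return math.log(4 * math.pi * lk) / math.sqrt(2 * lk)

for K in (5, 10, 100, 1000, 10**6):
    print(K, round(bound_term(K), 3))
```

Consistent with the remark above, the term decreases in $K$ and tends to zero, but only logarithmically slowly.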
\section{Experimental evaluation}
We now evaluate our bounds on the SVHN dataset \cite{netzer2011reading}, which contains color images of house numbers; the task is to classify the digit at the center of the image. Throughout this section, perturbations are computed using the algorithm in \cite{moosavi2015deepfool}.\footnote{Note that in order to estimate robustness quantities (e.g., $r_{\text{in}}$), we do not need the ground truth label, as the definition only involves the change of the estimated label. Estimation of the robustness can therefore be readily done for automatically generated images.} The dataset contains $73,257$ training images and $26,032$ test images (we do not use the images in the `extra' set). We train a DCGAN \cite{radford2015unsupervised} generative model on this dataset, with a latent vector dimension $d = 100$, and further consider several neural network architectures for classification.\footnote{For the SVHN and CIFAR-10 experiments, we show examples of generated images and perturbed images in the appendix (Section C.3). Moreover, we provide in C.1 details on the architectures of the used models.}
For each classifier, the empirical robustness is compared to our upper bound.\footnote{To evaluate numerically the upper bound, we have used a probabilistic version of the modulus of continuity, where the property is not required to be satisfied for \textit{all} $z,z'$, but rather with high probability, and accounted for the error probability in the bound. We refer to the appendix for the detailed optimization used to estimate the smoothness parameters.}
In addition to reporting the in-distribution and unconstrained robustness, we also report the robustness in the latent space: $r_Z = \min_{r} \| r \|_2 \text{ s.t. } f(g(z+r)) \neq f(g(z))$. For this robustness setting, note that the upper bound exactly corresponds to Theorem \ref{thm:image_space_bounds} with $\omega$ set to the identity map. Results are reported in Table \ref{tab:svhn_exp}.
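As a concrete illustration of the reporting protocol (a hypothetical sketch with made-up norms, not actual SVHN measurements), the $25\%$ percentile of the normalized robustness can be computed as follows:

```python
def percentile_25(values):
    """Smallest value v such that at least 25% of the samples are <= v."""
    s = sorted(values)
    idx = -(-len(s) // 4) - 1  # ceil(n/4) - 1
    return s[idx]

# Hypothetical per-image perturbation norms and a common image norm
# (placeholders for illustration, not measured values).
pert_norms = [0.9, 1.2, 0.4, 2.0, 1.5, 0.7, 1.1, 0.8]
mean_image_norm = 30.0
normalized = [p / mean_image_norm for p in pert_norms]
rob_25 = percentile_25(normalized)
print(rob_25)
```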
\begin{table*}[t]
\centering \renewcommand\cellgape{\Gape[2pt]}
\begin{tabular}{l|c|c|c|c}
\toprule
& \makecell[c]{Upper bound\\ on robustness} & 2-Layer LeNet & ResNet-18 & ResNet-101 \\
\midrule
Error rate & - & 11\% & 4.8\% & 4.2\% \\ \hline
Robustness in the $\mathcal{Z}$-space & $16 \times 10^{-3}$ & $6.1 \times 10^{-3}$ & $6.1 \times 10^{-3}$ & $6.6 \times 10^{-3}$ \\ \hline
\makecell[l]{In-distribution robustness} & $36 \times 10^{-2}$ & $3.3 \times 10^{-2}$ & $3.1 \times 10^{-2}$ & $3.1 \times 10^{-2}$ \\
\makecell[l]{Unconstrained robustness} & $36 \times 10^{-2}$ & $0.39 \times 10^{-2}$ & $1.1 \times 10^{-2}$ & $1.4 \times 10^{-2}$ \\ \bottomrule
\end{tabular}
\caption{\label{tab:svhn_exp} Experiments on SVHN dataset. We report the $25\%$ percentile of the \textit{normalized} robustness at each cell, where probabilities are computed either theoretically (for the upper bound) or empirically.
More precisely, we report the following quantities for the upper bound column: For the \textbf{robustness in the $\mathcal{Z}$ space}, we report $t / \mathbb{E} (\| z \|_2)$ such that $\mathbb{P} \left( \min_{r} \| r \|_2 \text{ s.t. } f(g(z+r)) \neq f(g(z)) \leq t \right) \geq 0.25$, using Theorem \ref{thm:image_space_bounds} with $\omega$ taken as identity. For the \textbf{robustness in image-space}, we report $t / \mathbb{E} (\| g(z) \|_2)$ such that $\mathbb{P} \left( r_{\text{in}} (x) \leq t \right) \geq 0.25$, using Theorem \ref{thm:image_space_bounds}, with $\omega$ estimated empirically (Section C.2 in appendix).}
\end{table*}
Observe first that the upper bound on the robustness in the latent space is of the same order of magnitude as the empirical robustness computed in the $\mathcal{Z}$-space for the different tested classifiers. This suggests that the isoperimetric inequality (the only source of slack in our bound, once smoothness is factored out) provides a reasonable baseline that is on par with the robustness of the best classifiers.
In the image space, the theoretical prediction from our classifier-agnostic bounds is one order of magnitude larger than the empirical estimates. Note however that our bound is still non-vacuous, as it predicts the norm of the required perturbation to be approximately $1/3$ of the norm of images (i.e., normalized robustness of $0.36$).
This potentially leaves room for improving the robustness in the image space. Moreover, we believe that the bound on the robustness in the image space is not tight (unlike the bound in the $\mathcal{Z}$ space) as the smoothness assumption on $g$ can be conservative.
Further comparison of the in-distribution and unconstrained robustness figures in the image space shows that, for the simple LeNet architecture, a large gap exists between these two quantities. With more complex classifiers (ResNet-18 and ResNet-101), however, the gap between in-distribution and unconstrained robustness gets smaller. Recall that Theorem~\ref{thm:relation_two_robustness} says that any classifier can be modified so that the in-distribution robustness and unconstrained robustness only differ by a factor $2$, while preserving the accuracy. But this modification may result in a more complicated classifier compared to the original one; for example, starting with a linear classifier, the modified classifier will in general not be linear. This matches our numerical values for this experiment, as the multiplicative gap between in-distribution and unconstrained robustness approaches $2$ as the classification function becomes more complex (e.g., in-distribution robustness of $3.1 \times 10^{-2}$ and unconstrained robustness of $1.4 \times 10^{-2}$ for ResNet-101).
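The multiplicative gaps implied by Table \ref{tab:svhn_exp} can be checked directly (values copied from the table):

```python
# 25% percentile robustness values from the SVHN table.
in_dist = {"LeNet": 3.3e-2, "ResNet-18": 3.1e-2, "ResNet-101": 3.1e-2}
unconstrained = {"LeNet": 0.39e-2, "ResNet-18": 1.1e-2, "ResNet-101": 1.4e-2}

# Multiplicative gap between in-distribution and unconstrained robustness.
gaps = {m: in_dist[m] / unconstrained[m] for m in in_dist}
for model, gap in gaps.items():
    print(f"{model}: gap = {gap:.2f}")
```

The gap shrinks monotonically with model complexity and is close to the factor $2$ of Theorem~\ref{thm:relation_two_robustness} for ResNet-101.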
We now consider the more complex CIFAR-10 dataset \cite{krizhevsky2009learning}. The CIFAR-10 dataset consists of 10 classes of $32 \times 32$ color natural images. Similarly to the previous experiment, we used a DCGAN generative model with $d = 100$, and tested the robustness of state-of-the-art deep neural network classifiers.
Quantitative results are reported in Table \ref{tab:cifar_exp}. Our bounds notably predict that any classifier defined on this task will admit adversarial perturbations whose norm does not exceed $1/10$ of the norm of the image, for $25\%$ of the datapoints in the distribution. Note that using the PGD adversarial training strategy of \cite{madry2017towards} (which constitutes one of the most robust models to date \cite{uesato2018adversarial}), the robustness is significantly improved, despite still being $\sim 1$ order of magnitude smaller than the baseline of $0.1$ for the in-distribution robustness. The construction of more robust classifiers, alongside better empirical estimates of the quantities involved in the bound, and improved bounds, will hopefully lead to a convergence of these two quantities, hence guaranteeing optimality of the robustness of our classifiers.
\begin{table*}[t]
\centering \renewcommand\cellgape{\Gape[2pt]}
\begin{tabular}{l|c|c|c|c}
\toprule
& \makecell[c]{Upper bound\\on robustness} & VGG \cite{simonyan2014very} & Wide ResNet \cite{zagoruyko2016wide} & \makecell[c]{Wide ResNet \\ + Adv. training \\ \cite{madry2017towards, uesato2018adversarial}} \\ \midrule
Error rate & - & $5.5 \%$ & $3.9 \%$ & $16.0 \%$ \\ \hline
Robustness in the $\mathcal{Z}$-space & $0.016$ & $2.5 \times 10^{-3}$ & $3.0 \times 10^{-3}$ & $3.6 \times 10^{-3}$ \\ \hline
\makecell[l]{In-distribution robustness} & $0.10$ & $4.8 \times 10^{-3}$ & $5.9 \times 10^{-3}$ & $8.3 \times 10^{-3}$ \\
\makecell[l]{Unconstrained robustness} & $0.10$ & $0.23 \times 10^{-3}$ & $0.20 \times 10^{-3}$ & $2.0 \times 10^{-3}$\\
\bottomrule
\end{tabular}
\caption{\label{tab:cifar_exp} Experiments on CIFAR-10 (same setting as in Table \ref{tab:svhn_exp}). See appendix for details about models.}
\end{table*}
\section{Discussion}
We have shown the existence of a baseline robustness that no classifier can surpass, whenever the distribution is approximable by a generative model mapping latent representations to images.
The bounds lead to informative numerical results: for example, on the CIFAR-10 task (with a DCGAN approximator), our upper bound shows that a significant portion of datapoints can be fooled with a perturbation of magnitude $10\%$ that of an image. Existing classifiers however do not match the derived upper bound. Moving forward, we expect the design of more robust classifiers to get closer to this upper bound.
The existence of a baseline robustness is fundamental in that context in order to measure the progress made and compare to the optimal robustness we can hope to achieve.
In addition to providing a baseline, this work has several practical implications on the \textit{robustness} front. To construct classifiers with better robustness, our analysis suggests that these should have linear decision boundaries in the \textit{latent} space; in particular, classifiers with multiple disconnected classification regions will be more prone to small perturbations. We further provided a constructive way to provably close the gap between unconstrained robustness and in-distribution robustness.
Our analysis at the intersection of classifiers' robustness and generative modeling has further led to insights into \textit{generative models}, due to its intriguing generality. If we take as a premise that human visual system classifiers require large-norm perturbations to be fooled (which is implicitly assumed in many works on adversarial robustness, though see \cite{elsayed2018adversarial}), our work shows that natural image distributions cannot be modeled as very high-dimensional and smooth mappings. While current dimensions used for the latent space (e.g., $d = 100$) do not lead to any contradiction with this assumption (as the upper bounds are sufficiently large), moving to higher dimensions for more complex datasets might lead to very small bounds. To model such datasets, the prior distribution, smoothness, and dimension properties should therefore be carefully set to avoid contradictions with the premise. For example, conditional generative models can be seen as non-smooth generative models, as different generating functions are used for each class. We finally note that the derived results bound the \textit{norm} of the perturbation, and not the human perceptibility, which is much harder to quantify. We leave it as an open question to derive bounds on more perceptual metrics.
\subsubsection*{Acknowledgments}
A.F. would like to thank Seyed Moosavi, Wojtek Czarnecki, Neil Rabinowitz, Bernardino Romera-Paredes and the DeepMind team for useful feedback and discussions.
\section{Introduction}
Let $k\geqslant 2$ be an integer. A Laurent power series $F(z)\in\mathbb{C}((z))$ is called {\em $k$-Mahler} provided there exist a positive integer $d$ and polynomials $a_0(z),\ldots,a_d(z)\in\mathbb{C}[z]$ with $a_0(z)a_d(z)\neq 0$ such that $F(z)$ satisfies the Mahler-type functional equation \begin{equation}\label{MFE} a_0(z)F(z)+a_1(z)F(z^k)+\cdots+a_d(z)F(z^{k^d})=0.\end{equation} The minimal $d$ for which such an equation exists is called the {\em degree} of $F(z)$.
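As a concrete sanity check (a small numerical sketch, using the series $\prod_{j\geqslant 0}(1-z^{2^j})$ revisited below), one can verify a $2$-Mahler equation of degree $d=1$, namely $F(z)-(1-z)F(z^2)=0$, on truncated power series:

```python
N = 64  # truncation order: all identities are checked modulo z**N

def mul(a, b):
    """Multiply two power series truncated to order N (coefficient lists)."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def subst_square(a):
    """a(z) -> a(z**2), truncated to order N."""
    c = [0] * N
    for i, ai in enumerate(a):
        if 2 * i < N:
            c[2 * i] = ai
    return c

# F(z) = prod_{j >= 0} (1 - z**(2**j)); factors with 2**j >= N are 1 + O(z**N).
F = [1] + [0] * (N - 1)
j = 0
while 2**j < N:
    factor = [1] + [0] * (N - 1)
    factor[2**j] = -1
    F = mul(F, factor)
    j += 1

one_minus_z = [1, -1] + [0] * (N - 2)
assert F == mul(one_minus_z, subst_square(F))  # F(z) = (1 - z) F(z**2)
```

Here $a_0(z)=1$ and $a_1(z)=-(1-z)$; the coefficients of $F$ are the Thue--Morse sequence over $\{-1,1\}$.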
There has been a flurry of recent activity involving the study of Mahler series---see, e.g. \cite{AF2017,BC2017,BCZ2016,BV2016,C2016,GT2018,P2015,R2017,R2018}---in large part due to the fact that one can often deduce transcendence of special values of Mahler series by knowing transcendence of the series itself, and also due to the guiding principle that much of the theory of Mahler series should mirror the much better developed theory of solutions to homogeneous differential equations.
A special subclass of Mahler functions is the ring of $k$-regular power series. These functions are defined from their coefficient sequences. More specifically, a power series $F(z)=\sum_{n\geqslant 0} f(n)z^n$ is {\em $k$-regular} provided there is a positive integer $D$, vectors ${\boldsymbol\ell},{\bf c}\in\mathbb{C}^{D\times 1}$, and matrices ${\bf A}_0,\ldots,{\bf A}_{k-1}\in\mathbb{C}^{D\times D}$, such that for all $n\geqslant 0$, $$f(n)={\boldsymbol\ell}^T {\bf A}_{i_s}\cdots {\bf A}_{i_0}{\bf c},$$ where $(n)_k=i_s\cdots i_0$ is the base-$k$ expansion of $n$. Allouche and Shallit \cite{AS1992} introduced $k$-regular sequences in the early nineties as a generalisation of $k$-automatic sequences; that is, sequences obtained from a deterministic finite-state automaton which takes as input the base-$k$ expansion of $n$ and outputs the $n$-th term of the sequence. By refining the proof of the known result that $k$-automatic power series are $k$-Mahler, Allouche \cite[Theorem 1]{A1987} implicitly showed\footnote{The proof of the latter implication is indeed buried in his proof of the former, as a substep that does not rely on the finiteness of the base field \cite[p. 253 and~254]{B1994}.} that a $k$-regular power series is $k$-Mahler. Since all $k$-automatic sequences are $k$-regular sequences, there is a natural hierarchy: $$\{\mbox{$k$-automatic functions}\} \subset \{\mbox{$k$-regular functions}\}\subset \{\mbox{$k$-Mahler functions}\}.$$ Additionally, both inclusions are proper. For example, consider the three paradigmatic examples for $k=2$ of degree one, $$T(z)=\prod_{j\geqslant 0}(1-z^{2^j}),\quad S(z)=\prod_{j\geqslant 0}(1+z^{2^j}+z^{2^{j+1}})\quad\mbox{and}\quad T(z)^{-1}=\prod_{j\geqslant 0}(1-z^{2^j})^{-1}.$$ The function $T(z)$ is the generating power series of the Thue--Morse sequence $t(n)$ over the alphabet $\{-1,1\}$, which is $2$-automatic. 
The function $S(z)$ is the generating power series of the Stern sequence $s(n+1)$ that counts the number of hyperbinary representations of the number $n+1$, which is $2$-regular but not $2$-automatic. And finally, the function $T(z)^{-1}$ is $2$-Mahler but not $2$-regular; the coefficients $p(n)$ of the power series expansion of $T(z)^{-1}$ count the number of ways of writing the number $n$ as sums of powers of two.
Since a $k$-regular power series is $k$-Mahler, an immediate question arises: {\em can one determine if a solution to \eqref{MFE} is $k$-regular, or not, based solely on properties of the functional equation?} Towards answering this question, Becker \cite[Theorem 2]{B1994} showed that if $a_0(z)=1$, then $F(z)$ is $k$-regular. He conjectured \cite[p.~279]{B1994} that a sort of converse to this result also holds. Specifically, Becker conjectured that {\em if $F(z)$ is a $k$-regular power series, then there exists a nonzero $k$-regular rational function $R(z)$ such that $F(z)/R(z)$ satisfies a Mahler-type functional equation \eqref{MFE} with $a_0(z)=1$.} In view of this conjecture, a power series $F(z)$ is called {\em $k$-Becker} provided it satisfies a functional equation \eqref{MFE} with $a_0(z)=1$.
The historical significance of the $k$-Becker property lies in the fact that zeros of $a_0(z)$ in the minimal Mahler equation \eqref{MFE} for $F(z)$ are values $\alpha$ at which the theorems proving transcendence of $F(\alpha)$ based upon knowledge of algebraic independence of certain related Mahler functions do not apply; this point is highlighted in the works of Loxton and van der Poorten \cite{LV1982,LV1988} and the celebrated result of Nishioka \cite{N1990,N1996}. In this paper, we prove (a bit more than) Becker's conjecture.
\begin{theorem}\label{main} If $F(z)$ is a $k$-regular power series, then there exist a nonnegative integer $\gamma$ and a nonzero polynomial $Q(z)$ with $Q(0)=1$ such that $1/Q(z)$ is $k$-regular and $F(z)/z^\gamma Q(z)$ satisfies a Mahler-type functional equation \eqref{MFE} with $a_0(z)=1$.
\end{theorem}
\noindent Moreover, if the Mahler-type functional equation of minimal degree for $F(z)$ is known, then the polynomial $Q(z)$ in Theorem \ref{main} can be easily written down. Specifically, if \eqref{MFE} is the minimal functional equation for $F(z)$, and we write $A$ for the set of roots of unity $\zeta$ such that $\zeta^{k^M}\neq\zeta$ for all $M\geqslant 1$ and $a_0(\zeta)=0$, then there is an $N$ depending on $a_0(z)$ such that $$Q(z):=\prod_{\zeta\in A}\prod_{j=0}^{N-1}(1-z^{k^j}\overline{\zeta}^{k^N})^{\nu_\zeta(a_0(z))},$$ where for a given Laurent power series $g(z)$, $\nu_\zeta(g(z))$ is the order of the zero of $g(z)$ at $z=\zeta$. For more details, see the proof of Lemma \ref{lem:norou}. Noting that all of the zeros of the polynomial $Q(z)$ are roots of unity of order not coprime to $k$, we may combine this with a result of Dumas \cite[Th\'eor\`eme 30]{Dumasthese} to give the following proposition.
\begin{proposition}\label{prop:dumas} Let $F(z)\in\mathbb{C}[[z]]$. Then $F(z)$ is $k$-regular if and only if $F(z)$ satisfies some functional equation \eqref{MFE} such that all of the zeros of $a_0(z)$ are either zero or roots of unity of order not coprime to $k$.
\end{proposition}
\noindent Note that the functional equation alluded to in the above proposition need not be minimal.
To prove Theorem \ref{main}, we will show that if $F(z)$ is $k$-regular satisfying \eqref{MFE}, then, essentially, one can `remove' all of the zeros of $a_0(z)$ that are roots of unity. We then show that, after dividing by an appropriate power of $z$, the resulting function satisfies another Mahler-type functional equation with $a_0(z)=1$, but is not necessarily $k$-Becker, since {\em it may not be a power series}.
This line of reasoning is inspired by a recent paper of Kisielewski \cite{K2017}, who considered Becker's conjecture for a subclass of regular functions. Indeed, Kisielewski \cite[Proposition~2]{K2017} showed that Becker's conjecture holds for every $k$-regular function $F(z)$ satisfying a functional equation \eqref{MFE} of minimal degree $d$ such that $a_0(z)$ has no zeros that are roots of unity; specifically, for a function $F(z)$ in this class, he showed there exists a $k$-regular rational function $R(z)$ such that $F(z)/R(z)$ is $k$-Becker; his result is purely existential concerning the rational function $R(z)$. In comparison, Theorem \ref{main} has the following corollary in this context.
\begin{corollary} Suppose that $F(z)$ is a $k$-regular function satisfying a functional equation \eqref{MFE} of minimal degree $d$ such that $a_0(0)\neq 0$ and $a_0(z)$ has no zeros that are roots of unity. Then $F(z)$ is $k$-Becker.
\end{corollary}
This paper is outlined as follows. Section \ref{prelims} contains preliminary results that will be needed in Section \ref{proof}, which contains the proof of Theorem \ref{main}. Section \ref{further} contains justification that Theorem \ref{main} is the best-possible resolution of Becker's conjecture; in particular, in that section, we give an example of a $k$-regular function $F(z)$ such that for any rational function $R(z)$, the function $R(z)F(z)$ cannot simultaneously be a power series and satisfy the conclusion of Becker's conjecture. Finally, in Section~\ref{structure} we prove Proposition \ref{prop:dumas}.
\section{Preliminaries}\label{prelims}
We require the following definition and lemmas.
A general form of Lemma \ref{ch6:Cartier}(a) is given by Dumas \cite[Lemma 4, p.~20]{Dumasthese}
and Lemma~\ref{ch6:Cartier} was known at least to Allouche \cite[p.~255]{A1987}.
For an English reference, we refer the reader to the work of Becker \cite[Lemma~2]{B1994}.
Lemma~\ref{cartvec} is a restatement of Allouche and Shallit \cite[Theorem 2.2]{AS1992}, also provided by Becker \cite[Lemma~3]{B1994}.
For its part, Lemma~\ref{Kislemma8} is due to Kisielewski \cite[Lemma~8]{K2017}.
\begin{definition}\label{cartier} Let $C(z)=\sum_{n\geqslant 0}c(n)z^n$. Given a positive integer $k\geqslant 2$, for each $i\in\{0,\ldots,k-1\}$, we define the {\em Cartier operator} $\Lambda_i:\mathbb{C}[[z]]\to \mathbb{C}[[z]]$ by $$\Lambda_i(C)(z)=\sum_{n\geqslant 0}c(kn+i)z^n.$$
\end{definition}
\begin{lemma}[Dumas \cite{Dumasthese}, Allouche \cite{A1987}]\label{ch6:Cartier} Let $F(z),G(z)\in\mathbb{C}[[z]]$. For $i=0,\ldots,k-1$ we have
\begin{enumerate}
\item[(a)] $\Lambda_i(F(z^k)G(z))=F(z)\Lambda_i(G(z))$, and
\item[(b)] $F(z)=\sum_{i=0}^{k-1} z^i\Lambda_i(F)(z^k).$
\end{enumerate}
\end{lemma}
In Lemma \ref{ch6:Cartier}, $\Lambda_i(F)(z^k)$ is understood as $\Lambda_i(F(z))$ evaluated at $z^k$, so that if we write $F(z)=\sum_{n\geqslant 0} f(n)z^n$, then $\Lambda_i(F)(z^k)=\sum_{n\geqslant 0} f(kn+i)z^{kn}.$
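Both identities of Lemma \ref{ch6:Cartier} are easy to check numerically on polynomial coefficient lists; the sketch below (helper names are ours) does so for $k=2$:

```python
K = 2  # the base k

def cartier(i, c):
    """Lambda_i on a coefficient list: keep the coefficients c(K*n + i)."""
    return c[i::K]

def mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def subst_pow(a):
    """a(z) -> a(z**K)."""
    out = [0] * ((len(a) - 1) * K + 1)
    for i, ai in enumerate(a):
        out[i * K] = ai
    return out

def trim(a):
    while len(a) > 1 and a[-1] == 0:
        a = a[:-1]
    return a

F = [3, -1, 4, 1]           # arbitrary test polynomials
G = [2, 7, 1, -8, 2, 8]

# (a) Lambda_i(F(z^K) G(z)) = F(z) Lambda_i(G)(z)
for i in range(K):
    assert trim(cartier(i, mul(subst_pow(F), G))) == trim(mul(F, cartier(i, G)))

# (b) G(z) = sum_i z^i Lambda_i(G)(z^K)
recon = [0] * len(G)
for i in range(K):
    for n, coeff in enumerate(subst_pow(cartier(i, G))):
        recon[n + i] += coeff
assert recon == G
```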
\begin{lemma}[Allouche and Shallit \cite{AS1992}]\label{cartvec} The function $F(z)\in\mathbb{C}[[z]]$ is $k$-regular if and only if the $\mathbb{C}$-vector space $$V:=\left\langle\left\{\Lambda_{r_n}\cdots \Lambda_{r_1}(F)(z): 0\leqslant r_i<k,\ n\in\mathbb{N}\right\}\right\rangle_\mathbb{C}$$ is finite-dimensional.
\end{lemma}
If one lets $W$ denote the finitely generated $\mathbb{C}[z]$-submodule of the field of Laurent power series $\mathbb{C}((z))$ spanned by the finite-dimensional $\mathbb{C}$-vector space $V$, then $W$ has the property that $$W\subseteq \sum_{h(z)\in W} \mathbb{C}[z] h(z^k).$$ To see this, we let $\{F(z)=h_1(z),\ldots ,h_r(z)\}$ be a basis for $V$. Then notice that for $i=0,\ldots ,k-1$, we have
$$\Lambda_i(h_j)(z) = \sum_{\ell=1}^{r} c_{i,j,\ell} h_{\ell}(z)$$ for some constants $c_{i,j,\ell}\in \mathbb{C}$. An application of Lemma \ref{ch6:Cartier}(b) then gives that
\begin{equation}\label{hch}h_j(z) = \sum_{\ell=1}^r \left( \sum_{i=0}^{k-1} c_{i,j,\ell} z^i\right) h_{\ell}(z^k),\end{equation} which gives the desired claim.
In fact, in the case that the dimension of the vector space $V$ is $r<\infty$, we have that $F(z)$ is a $k$-Mahler function of degree at most $r$. To see this, we observe that \eqref{hch} can be written as $${\bf h}(z)={\bf A}(z){\bf h}(z^k),$$ where ${\bf h}(z):=[F(z)=h_1(z), h_2(z),\ldots,h_r(z)]^T$ and ${\bf A}(z)\in\mathbb{C}[z]^{r\times r}.$ Now we let ${\bf A}^{(i)}(z)={\bf A}(z){\bf A}(z^k)\cdots {\bf A}(z^{k^{i-1}})$, where we use the convention that ${\bf A}^{(0)}(z)$ is the identity. So for $i\in\{0,1,\ldots,r\}$, we have ${\bf h}(z^{k^i})={\bf A}^{(r-i)}(z^{k^i}){\bf h}(z^{k^r}).$ Left multiplying by the vector $e_1^T$ and using the fact that $h_1(z)=F(z)$ we obtain equations \begin{equation}\label{Fuh}F(z^{k^i})={\bf u}_i(z){\bf h}(z^{k^r}),\end{equation} for some ${\bf u}_i(z)\in\mathbb{C}[z]^{1\times r}$. Since we have $r+1$ vectors ${\bf u}_0(z),\ldots,{\bf u}_{r}(z)$, we have a nontrivial linear dependence; that is, there are polynomials $p_0(z),\ldots,p_r(z)$, not all zero, such that $$p_0(z){\bf u}_0(z)+\cdots+p_r(z){\bf u}_{r}(z)=0.$$ Combining this with \eqref{Fuh} shows that $F(z)$ satisfies the functional equation $$p_i(z)F(z^{k^i})+p_{i+1}(z)F(z^{k^{i+1}})+\cdots+p_r(z)F(z^{k^r})=0,$$ where $i$ is the smallest index such that $p_i(z)\neq 0$. But this implies that $F(z)$ is a $k$-Mahler function of degree at most $r-i\leqslant r$; see, e.g., Becker's argument \cite[p.~273]{B1994} of taking successive sections to reduce the smallest index.
\begin{lemma}[Kisielewski \cite{K2017}]\label{Kislemma8} Let $c(z)\in\mathbb{C}(z)$, $\alpha\in\mathbb{C}\setminus\{0\}$, and $\nu_\alpha(c(z))$ be the order of the zero of $c(z)$ at $z=\alpha$. There is an $r\in\{0,\ldots,k-1\}$ such that $\nu_\alpha\left(\Lambda_r (c)(z^k)\right)\leqslant \nu_\alpha\left(c(z)\right).$
\end{lemma}
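Lemma \ref{Kislemma8} can be tested numerically for polynomials (the rational-function case reduces to this); the sketch below (exact arithmetic, helper names are ours) checks it at $\alpha=1$ for $k=2$:

```python
from fractions import Fraction

K = 2

def mul(a, b):
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def eval_poly(c, x):
    r = Fraction(0)
    for coeff in reversed(c):
        r = r * x + coeff
    return r

def deriv(c):
    return [i * coeff for i, coeff in enumerate(c)][1:] or [Fraction(0)]

def mult_at(c, alpha):
    """nu_alpha(c): order of vanishing of the polynomial c at z = alpha."""
    if not any(c):
        return float('inf')  # the zero polynomial vanishes to every order
    m = 0
    while eval_poly(c, alpha) == 0:
        m += 1
        c = deriv(c)
    return m

def cartier_subst(r, c):
    """Lambda_r(c)(z**K) as a coefficient list."""
    out = [Fraction(0)] * len(c)
    for n, coeff in enumerate(c[r::K]):
        out[n * K] = coeff
    return out

alpha = Fraction(1)
q = [Fraction(v) for v in (3, 1, 4, 1, 5)]   # q(1) = 14 != 0
for m in range(1, 5):
    c = q
    for _ in range(m):
        c = mul(c, [Fraction(1), Fraction(-1)])  # multiply by (1 - z)
    assert mult_at(c, alpha) == m
    best = min(mult_at(cartier_subst(r, c), alpha) for r in range(K))
    assert best <= m  # the lemma: some r does not increase the order at alpha
```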
We will use the functional equation \eqref{MFE} in a slightly different form. For a Mahler function satisfying \eqref{MFE} of degree $d$, setting $${\bf F}(z):=[F(z), F(z^k),\ldots,F(z^{k^{d-1}})]^T$$ and $${\bf A}(z):=\left[\begin{matrix} -\frac{a_1(z)}{a_0(z)}\ \ -\frac{a_2(z)}{a_0(z)}\ \ \ \cdots & -\frac{a_d(z)}{a_0(z)} \\ {\bf I}_{(d-1)\times (d-1)} & {\bf 0}_{(d-1)\times 1}\end{matrix}\right],$$ we have \begin{equation}\label{FAF}{\bf F}(z)={\bf A}(z){\bf F}(z^k).\end{equation}
We will be specifically interested in the matrices \begin{equation}\label{Bz} {\bf B}_n(z):={\bf A}(z){\bf A}(z^k)\cdots {\bf A}(z^{k^{n-1}}).\end{equation} Note that ${\bf F}(z)={\bf B}_n(z){\bf F}(z^{k^n})$ for every $n\geqslant 1$. In what follows, for $i=1,\ldots,d$, we write $$e_i:=\left[{\bf 0}_{1\times (i-1)}\ 1\ {\bf 0}_{1\times (d-i)}\right]^T.$$
Kisielewski's lemma above states that a Cartier operator can be used to (possibly) reduce the order of a zero. We use this result in the following lemma to find an upper bound on the order of certain poles of the matrices ${\bf B}_n(z).$
\begin{lemma}\label{Buniform} Suppose $F(z)$ is $k$-regular, ${\bf B}_n(z)$ is as defined in \eqref{Bz}, and $\xi$ is a root of unity such that $\xi^k=\xi$. Then the poles at $z=\xi$ of the entries of the matrices $\{{\bf B}_n(z):n\geqslant 1\}$ have uniformly bounded order. In particular, there is a polynomial $h(z)\in\mathbb{C}[z]$ such that for each $n$ the matrix $h(z)\cdot {\bf B}_n(z)$ has polynomial entries.
\end{lemma}
\begin{proof} For each $i\in\{1,\ldots,d\}$ and each $n\in\mathbb{N}$, set $$c_{i,n}(z):=e_1^T {\bf B}_n(z) e_i.$$ Then for each $n$ we have \begin{equation}\label{FcFn} F(z)=\sum_{i=1}^d c_{i,n}(z)F(z^{k^{i+n-1}}).\end{equation} If we apply $n$ Cartier operators to \eqref{FcFn}, we have \begin{equation}\label{CartFcFn} \Lambda_{r_n}\cdots \Lambda_{r_1}(F)(z)=\sum_{i=1}^d \Lambda_{r_n}\cdots \Lambda_{r_1}(c_{i,n})(z)\cdot F(z^{k^{i-1}}).\end{equation} Since $d$ here is minimal, the functions $F(z),\ldots,F(z^{k^{d-1}})$ are linearly independent over $\mathbb{C}(z)$.
Now suppose $F(z)$ is $k$-regular. Since the $\mathbb{C}$-vector space $V$ defined in Lemma \ref{cartvec} is finite-dimensional, its finitely many generators are of the form $\sum_{i=1}^d h_i(z) F(z^{k^{i-1}}),$ for some rational functions $h_i(z)$. Since, as we run over a finite generating set, the $h_i(z)$ that occur form a finite collection of rational functions and the functions $F(z),\ldots,F(z^{k^{d-1}})$ are linearly independent over $\mathbb{C}(z)$, there is a nonzero polynomial $h(z)$ such that $$V\subseteq h(z)^{-1}\sum_{i=1}^d \mathbb{C}[z]F(z^{k^{i-1}}).$$ This
requires, for every $i\in\{1,\ldots,d\}$, $n\in\mathbb{N}$ and choice of Cartier operators, that the inequality \begin{equation}\label{cartup}\nu_\xi\left(h(z)^{-1}\right)\leqslant\nu_\xi\left(\Lambda_{r_n}\cdots \Lambda_{r_1}(c_{i,n})(z)\right)\end{equation} holds between orders of the zeros at $z=\xi$. Since $\xi^k=\xi$, for each rational function $c(z)$, we have $\nu_\xi\left(c(z)\right)=\nu_\xi\left(c(z^k)\right)$. By Lemma \ref{Kislemma8} and \eqref{cartup}, for each $n\in\mathbb{N}$ there is a choice of Cartier operators $\Lambda_{r_1},\ldots, \Lambda_{r_n}$ such that, for zero orders, $$ \nu_\xi\left(h(z)^{-1}\right)\leqslant \nu_\xi\left(\Lambda_{r_n}\cdots \Lambda_{r_1}(c_{i,n})(z)\right) =\nu_\xi\big(\Lambda_{r_n}\cdots \Lambda_{r_1}(c_{i,n})(z^{k^n})\big)\leqslant \nu_\xi\left(c_{i,n}(z)\right).
$$ Thus the poles of the entries $c_{i,n}(z)$ of the first row of the matrices ${\bf B}_n(z)$ at $z=\xi$ have uniformly bounded order; specifically, they are bounded above by $\nu_\xi(h(z))$.
It remains now to show this for the rest of the rows, but this follows due to the structure of the matrix ${\bf A}(z)$. In fact, consider the $(i,j)$ entry of the matrix ${\bf B}_n(z)$ for some $i\in\{2,\ldots,d\}$. Using the definition of ${\bf B}_n(z)$, we have $$e_i^T{\bf B}_n(z)e_j=e_i^T{\bf A}(z){\bf B}_{n-1}(z^k)e_j.$$ Now, $e_i^T{\bf A}(z)=e_{i-1}^T.$ So, \begin{equation}\label{eidown}\nu_\xi(e_i^T{\bf B}_n(z)e_j)=\nu_\xi(e_{i-1}^T{\bf B}_{n-1}(z^k)e_j)=\nu_\xi(e_{i-1}^T{\bf B}_{n-1}(z)e_j),\end{equation} where the last equality uses, again, the facts that $\xi^k=\xi$ and for every rational function $\nu_\xi\left(c(z)\right)=\nu_\xi\left(c(z^k)\right)$. Applying \eqref{eidown} $i-1$ times, we have \begin{equation}\label{e1down}\nu_\xi(e_i^T{\bf B}_n(z)e_j)=\nu_\xi(e_{1}^T{\bf B}_{n-i+1}(z)e_j),\end{equation} which immediately implies the desired result.
\end{proof}
\begin{proposition}\label{prop0} If $F(z)$ is $k$-Mahler satisfying \eqref{MFE} of degree $d$, $a_0(\xi)=0$ for some root of unity $\xi$ with $\xi^k=\xi$, and $\gcd(a_0(z),a_1(z),\ldots,a_d(z))=1$, then $F(z)$ is not $k$-regular.
\end{proposition}
\begin{proof} Towards a contradiction, assume that $F(z)$ is $k$-regular and suppose that $\xi$ is a root of unity with $\xi^k=\xi$ such that $a_0(\xi)=0$. Using Lemma \ref{Buniform}, let $Y$ denote the minimal uniform bound of the order of the poles at $z=\xi$ of $\{{\bf B}_n(z):n\geqslant 1\}$, and note that $Y>0$ since $\gcd(a_0(z),a_1(z),\ldots,a_d(z))=1$.
We examine the first row of ${\bf B}_1(z)={\bf A}(z)$. In particular, set $$N:=\min\Big\{i\in\{1,\ldots,d\}: \nu_\xi\left(\frac{a_i(z)}{a_0(z)}\right)\leqslant \nu_\xi\left(\frac{a_j(z)}{a_0(z)}\right) \mbox{for all $j\in\{1,\ldots,d\}$}\Big\},$$ and note, again using $\gcd(a_0(z),a_1(z),\ldots,a_d(z))=1$, that \begin{equation}\label{X}X:=-\nu_\xi\left(\frac{a_N(z)}{a_0(z)}\right)>0.\end{equation} By the minimality of $N$, we have both $$\nu_\xi(a_i(z)/a_0(z))>-X\quad\mbox{for}\quad i<N,$$ and $$\nu_\xi(a_i(z)/a_0(z))\geqslant -X\quad\mbox{for}\quad i>N.$$
Since ${\bf B}_1(z)={\bf A}(z)$ has only constant entries outside of its first row, \eqref{eidown} and \eqref{e1down} imply there is some minimal $n$, say $m$, for which the maximal order of the pole at $z=\xi$ of the entries of ${\bf B}_m(z)$ is $Y$, occurs in the first row of ${\bf B}_m(z)$, say in the $(1,J)$ entry, and all of the other rows have entries with poles at $z=\xi$ of order strictly less than $Y$. That is, specifically, within the $J$-th column of ${\bf B}_m(z)$, we have \begin{equation}\label{Ym}\nu_\xi\left(e_1^T{\bf B}_m(z)e_J\right)=-Y<0\quad\mbox{and}\quad \nu_\xi\left(e_i^T{\bf B}_m(z)e_J\right)>-Y,\end{equation} for each $i\in\{2,\ldots,d\}$.
Now, define the rational functions $b_1(z),\ldots,b_d(z)$ by $${\bf B}_{m+N-1}(z^k)e_J=\left[\begin{matrix} b_1(z)\ \cdots\ b_d(z)\end{matrix}\right]^T,$$ and note that by \eqref{e1down} we have, since $\xi^k=\xi$ and for every rational function $\nu_\xi\left(c(z)\right)=\nu_\xi\left(c(z^k)\right)$, that \begin{equation}\label{Y}-Y=\nu_\xi(e_1^T{\bf B}_m(z)e_J)=\nu_\xi(e_{N}^T{\bf B}_{m+N-1}(z)e_J)=\nu_\xi(b_N(z)).\end{equation} By the minimality of $m$, we have $$\nu_\xi(b_i(z))>-Y\quad\mbox{for}\quad i>N,$$ and trivially $$\nu_\xi(b_i(z))\geqslant -Y\quad\mbox{for}\quad i<N.$$
Let us now see how we can put together the results of the previous paragraphs in order to obtain the desired result. Consider the first entry of the $J$th column of ${\bf B}_{m+N}(z)$. We have \begin{align} \nonumber e_1^T{\bf B}_{m+N}(z)e_J
&=e_1^T{\bf A}(z){\bf B}_{m+N-1}(z^k)e_J\\
\nonumber &=\left[\begin{matrix} -\frac{a_1(z)}{a_0(z)}\ \ -\frac{a_2(z)}{a_0(z)}\ \ \cdots & -\frac{a_d(z)}{a_0(z)}\end{matrix}\right]{\bf B}_{m+N-1}(z^k)e_J\\
\label{finalnumber}&=-\sum_{i=1}^{N-1}\frac{a_i(z)b_i(z)}{a_0(z)} -\frac{a_N(z)b_N(z)}{a_0(z)}-\sum_{i=N+1}^{d}\frac{a_i(z)b_i(z)}{a_0(z)}.\end{align}
For $i\neq N$, using the comments immediately below Equations \eqref{X} and \eqref{Y}, respectively, we have \begin{equation}\label{notN}\nu_\xi\left(\frac{a_i(z)b_i(z)}{a_0(z)}\right)=\nu_\xi\left(\frac{a_i(z)}{a_0(z)}\right)+\nu_\xi\left(b_i(z)\right)>-X-Y,\end{equation} since both $\nu_\xi\left({a_i(z)/a_0(z)}\right)>-X$ for $i\in\{1,\ldots,N-1\}$ and $\nu_\xi\left(b_i(z)\right)>-Y$ for $i\in\{N+1,\ldots,d\}$. Also, by \eqref{X} and \eqref{Y}, we have \begin{equation}\label{N}\nu_\xi\left(\frac{a_N(z)b_N(z)}{a_0(z)}\right)=\nu_\xi\left(\frac{a_N(z)}{a_0(z)}\right)+\nu_\xi\left(b_N(z)\right)=-X-Y.\end{equation} Hence, using \eqref{notN} and \eqref{N}, Equation \eqref{finalnumber} gives the inequality $$\nu_\xi\left(e_1^T{\bf B}_{m+N}(z)e_J\right)=-X-Y<-Y,$$ contradicting the fact that $Y$ is a uniform bound on the pole order at $z=\xi$ over all $e_i^T{\bf B}_n(z)e_j$. Thus $F(z)$ is not $k$-regular.
\end{proof}
\begin{corollary}\label{zetaN=zeta} Suppose $F(z)$ is a $k$-regular power series satisfying \eqref{MFE} of degree $d$. Let $\mathfrak{I}$ be the ideal of polynomials $p(z)$ such that $$p(z)F(z) \in \sum_{j\geqslant 1} \mathbb{C}[z]F(z^{k^j})$$ and let $q(z)$ be a generator for $\mathfrak{I}$. If $\xi$ is a zero of $q(z)$ such that $\xi^{k^M}=\xi$ for some $M\geqslant 1$, then $\xi=0$.
\end{corollary}
\begin{proof} Suppose that there exists $M\geqslant 1$ and $\xi$ such that $q(\xi)=0$ with $\xi^{k^M}=\xi$ and $\xi\neq 0$. Let $$q_0(z)F(z) + q_1(z) F(z^{k^M})+ \cdots + q_D(z) F(z^{k^{MD}}) = 0$$ be a relation with $q_0(z)\neq 0$, $\gcd(q_0(z),q_1(z),\ldots,q_D(z))=1$ and $D$ minimal. Then $q(z)$ divides $q_0(z)$ and so $q_0(\xi)=0$. But $\xi^{k^M}=\xi$ and by Proposition \ref{prop0} this contradicts the fact that $F(z)$ is $k^M$-regular. Since $F(z)$ is $k^M$-regular if and only if it is $k$-regular \cite[Theorem~2.9]{AS1992}, this proves the corollary.
\end{proof}
\begin{lemma}\label{lem:norou} Let $F(z)$ be a $k$-regular power series satisfying \eqref{MFE} of degree $d$. Then there exist a polynomial $Q(z)$ with $Q(0)\neq 0$ such that $1/Q(z)$ is $k$-regular and a nonnegative integer $\gamma$ such that $G(z):={F(z)}/{z^{\gamma } Q(z)}$ satisfies a Mahler-type functional equation $$q_0(z)G(z)+q_1(z)G(z^k)+\cdots+q_d(z)G(z^{k^d})=0,$$ of degree $d$, $q_i(z)\in\mathbb{C}[z]$ with $q_0(0)\neq 0$, and if $\zeta$ is a zero of $q_0(z)$ that is a root of unity then there is some $M\geqslant 1$ such that $\zeta^{k^M}=\zeta$.
\end{lemma}
The proof of Lemma \ref{lem:norou} requires the following characterisation of Mahler functions due to Dumas \cite[Theorem 31, p.~153]{Dumasthese}; see also Coons and Spiegelhofer \cite{CS2018} for a proof of Dumas's result in English.
\begin{theorem}[Structure Theorem of Dumas]\label{lem:DAB} A $k$-Mahler function is the quotient of a series and an infinite product, each of which is $k$-regular. That is, if $F(z)$ is the solution of the Mahler functional equation $$a_0(z)F(z)+a_1(z)F(z^k)+\cdots+a_d(z)F(z^{k^d})=0,$$ where the $a_i(z)$ are polynomials with $a_0(z)a_d(z)\neq 0$, then there exists a $k$-regular series $J(z)$ such that $$F(z)=\frac{J(z)}{\prod_{j\geqslant 0}\Gamma(z^{k^j})},$$ where $a_0(z)=\rho z^{\delta}\Gamma(z)$, with $\rho\neq 0$ and $\Gamma(0)=1$.
\end{theorem}
\begin{proof}[Proof of Lemma \ref{lem:norou}] Suppose that $F(z)$ is a $k$-Mahler power series of degree $d$ satisfying \eqref{MFE}. Let $A$ be the set of roots of unity $\zeta$ such that $a_0(\zeta)=0$ and there does not exist $M\geqslant 1$ such that $\zeta^{k^M}=\zeta$ and set $\nu_\zeta(a_0):=\nu_\zeta(a_0(z))$. For each $\zeta\in A$, the sequence $\{\zeta^{k^i}\}_{i\geqslant 0}$ is eventually periodic, so that there is an $M_\zeta$ such that $\zeta^{k^{2M_\zeta}}=\zeta^{k^{M_\zeta}}$. Note that this then implies that $\zeta^{k^{pM_\zeta}}=\zeta^{k^{M_\zeta}}$ for all $p\geqslant 1$. Now, set $$N:=\prod_{\zeta\in A}M_\zeta,$$ so that $\zeta^{k^{2N}}=\zeta^{k^N}$ for all $\zeta\in A$. Define the polynomial $Q(z)$ by $$Q(z):=\prod_{\zeta\in A}\prod_{j=0}^{N-1}(1-z^{k^j}\overline{\zeta}^{k^N})^{\nu_\zeta(a_0)}.$$
Then $$\frac{Q(z^k)}{Q(z)}=\prod_{\zeta\in A}\left(\frac{1-z^{k^N}\overline{\zeta}^{k^N} }{1-z\overline{\zeta}^{k^N} }\right)^{\nu_\zeta(a_0)}\in\mathbb{C}[z],$$ since for each $\zeta\in A$, $$1- z^{k^N}\overline{\zeta}^{k^N}=1-(z\overline{\zeta}^{k^N})^{k^N}=(1-z\overline{\zeta}^{k^N} )(1+(z\overline{\zeta}^{k^N})+\cdots+(z\overline{\zeta}^{k^N} )^{k^N-1}).$$ But also for each $\xi\in A$, \begin{align*}\frac{Q(z^k)}{Q(z)}(1-& z\overline{\xi}^{k^N} )^{\nu_\xi(a_0)}\\
&=\left(1-(z\overline{\xi} )^{k^N}\right)^{\nu_\xi(a_0)}\prod_{\zeta\in A\setminus\{\xi\}}\left(\frac{1-z^{k^N}\overline{\zeta}^{k^N} }{1-z\overline{\zeta}^{k^N} }\right)^{\nu_\zeta(a_0)}\\
&=(1-z\overline{\xi} )^{\nu_\xi(a_0)}\left(\sum_{j=0}^{k^N-1}(z\overline{\xi} )^j\right)^{\nu_\xi(a_0)}\prod_{\zeta\in A\setminus\{\xi\}}\left(\frac{1-z^{k^N}\overline{\zeta}^{k^N} }{1-z\overline{\zeta}^{k^N} }\right)^{\nu_\zeta(a_0)}
\end{align*} is a polynomial. Since $\overline{\xi}\neq\overline{\xi}^{k^N}$, we have that $(1-z\overline{\xi})^{\nu_\xi(a_0)}$ divides the polynomial $Q(z^k)/Q(z)$. As this is true for all $\xi\in A$, there is a polynomial $h(z)$ such that \begin{equation}\label{QoverQpoly}\frac{Q(z^k)}{Q(z)}=\left(\prod_{\zeta\in A}(1-z\overline{\zeta})^{\nu_\zeta(a_0)}\right)\cdot h(z).\end{equation} Set $P(z):=\prod_{\zeta\in A}(1-z\overline{\zeta})^{\nu_\zeta(a_0)}.$ Then \eqref{QoverQpoly} shows that the polynomial $Q(z)$ satisfies \begin{equation}\label{Qzh}Q(z)=\left(\prod_{j\geqslant 0} P(z^{k^j})\right)^{-1}\left(\prod_{j\geqslant 0} h(z^{k^j})\right)^{-1}.\end{equation}
We factor $a_0(z)=cz^\gamma\Gamma(z)=cz^\gamma a(z)P(z)$, where $a(\zeta)\neq 0$ for every $\zeta\in A$ and $a(0)=1$. By Proposition \ref{prop0}, since $F(z)$ is $k$-regular, $a_0(1)\neq 0$. Then using Theorem~\ref{lem:DAB} and \eqref{Qzh}, there is a $k$-regular function $J(z)$ such that $$F(z)=\frac{J(z)}{\prod_{j\geqslant 0}\Gamma(z^{k^j})}=\frac{J(z)\cdot Q(z)\cdot\prod_{j\geqslant 0} h(z^{k^j})}{\prod_{j\geqslant 0}a(z^{k^j})}.$$
Setting $H(z):=J(z)\prod_{j\geqslant 0} h(z^{k^j})$, we have that $$G(z):=\frac{F(z)}{z^{\gamma} Q(z)}=\frac{H(z)}{z^{\gamma} \prod_{j\geqslant 0}a(z^{k^j})},$$ where $1/Q(z)$ is $k$-regular by an above-mentioned result of Becker \cite[Theorem 2]{B1994} and \eqref{Qzh}. To build the functional equation for $G(z)$, we start with the functional equation \eqref{MFE} for $F(z)$ of degree $d$, and divide by $z^{2\gamma}Q(z)P(z)$ to get \begin{equation}\label{FPQ} c\cdot a(z) \frac{F(z)}{z^\gamma Q(z)}+\sum_{i=1}^d \frac{a_i(z)F(z^{k^i})}{z^{2\gamma}Q(z)P(z)}=0.\end{equation} These coefficients, for $i=1,\ldots,d$, satisfy $$\frac{a_i(z)}{z^{2\gamma}Q(z)P(z)}=\frac{z^{\gamma(k^i-2)}a_i(z)Q(z^{k^i})}{z^{k^i\gamma}Q(z^{k^i})Q(z)P(z)}=\frac{z^{\gamma(k^i-2)}a_i(z)}{z^{k^i\gamma}Q(z^{k^i})}\cdot \frac{Q(z^{k})}{Q(z)P(z)}\prod_{j=2}^i\frac{Q(z^{k^j})}{Q(z^{k^{j-1}})},$$ where, as usual, the empty product is taken to be equal to $1$. By \eqref{QoverQpoly}, we have $Q(z^{k^j})/Q(z^{k^{j-1}}) =P(z^{k^{j-1}})h(z^{k^{j-1}}),$ so continuing the above equality gives \begin{equation}\label{qia}\frac{a_i(z)}{z^{2\gamma}Q(z)P(z)}= \frac{z^{\gamma(k^i-2)}a_i(z)}{z^{k^i\gamma}Q(z^{k^i})}\cdot h(z)\prod_{j=2}^i\left(P(z^{k^{j-1}})h(z^{k^{j-1}})\right)=\frac{q_i(z)}{z^{k^i\gamma}Q(z^{k^i})},\end{equation} where $q_i(z)$ is the polynomial $$q_i(z):=z^{\gamma(k^i-2)}a_i(z)h(z)\prod_{j=2}^i\left(P(z^{k^{j-1}})h(z^{k^{j-1}})\right).$$ Finally, defining $q_0(z):=c\cdot a(z)$, substituting the result of \eqref{qia} into \eqref{FPQ} and using the definition of $G(z)$, we have that $G(z)$ satisfies the functional equation $$q_0(z)G(z)+q_1(z)G(z^k)+\cdots+q_d(z)G(z^{k^d})=0.$$ Here $G(z)$ inherits the degree $d$ from $F(z)$, we have $q_0(0)=c\cdot a(0)=c\neq 0$, and $q_0(z)$ inherits the desired root properties from $a(z)$. This finishes the proof of the lemma.
\end{proof}
Our method of proof of Lemma \ref{lem:norou} is inspired by remarks of Becker \cite[p.~279]{B1994} as well as an argument of Adamczewski and Bell \cite[Proposition 7.2]{ABkl}.
\section{Proof of the main result}\label{proof}
\begin{proof}[Proof of Theorem \ref{main}] Suppose that $F(z)$ is a $k$-regular function satisfying \eqref{MFE} of degree $d$. By Lemma \ref{lem:norou}, there exist a polynomial $Q(z)$ with $Q(0)\neq 0$ such that $1/Q(z)$ is $k$-regular and a nonnegative integer $\gamma$ such that the $k$-Mahler function $G(z):=z^{-\gamma} F(z)/Q(z)$ satisfies a Mahler functional equation \begin{equation}\label{q0G}q_0(z)G(z)+q_1(z)G(z^k)+\cdots+q_d(z)G(z^{k^d})=0\end{equation} of minimal degree $d$, $q_i(z)\in\mathbb{C}[z]$, and $q_0(z)$ has the property that $q_0(0)\neq 0$ and if $q_0(\zeta)=0$ with $\zeta$ a root of unity then there is some $M\geqslant 1$ such that $\zeta^{k^M}=\zeta$.
We let $\mathfrak{I}$ be the ideal of polynomials $p(z)$ such that $$p(z)G(z) \in \sum_{j\geqslant 1} \mathbb{C}[z]G(z^{k^j}).$$ Since $G$ is $k$-Mahler, $\mathfrak{I}$ is nonzero, and we let $q(z)$ be a generator for $\mathfrak{I}$ whose leading coefficient is $1$. Since $q_0(z)\in \mathfrak{I}$, we have that $q(z)$ divides $q_0(z)$, and so $q(0)\neq 0$. By Corollary \ref{zetaN=zeta}, if $\zeta$ is a zero of $q(z)$, then there does not exist $M\geqslant 1$ such that $\zeta^{k^M}=\zeta$. Thus if $\zeta$ is a root of $q(z)$ then $\zeta$ cannot be a root of unity: any zero of $q_0(z)$ that is a root of unity satisfies $\zeta^{k^M}=\zeta$ for some $M\geqslant 1$, whereas we have just shown that no zero of $q(z)$ has this property. Hence $q(z)$ has no zeros that are either zero or a root of unity. Thus $G(z)$ has the property that there is a relation $$q(z)G(z) \in \sum_{j\geqslant 1} \mathbb{C}[z]G(z^{k^j})$$ with $q(z)$ having no zeros that are roots of unity and with $q(0)\neq 0$.
We now claim that $q(z)=1$. To see this, suppose that $q(z)$ is non-constant. Then there is a nonzero complex number $\lambda$ that is not a root of unity such that $q(\lambda)=0$. Since $G(z)$ is $k$-regular, the $\mathbb{C}$-vector space spanned by all elements of the form $\Lambda_{r_m}\cdots \Lambda_{r_0}(G)(z)$ (including also $G(z)$) is finite-dimensional. Moreover, its basis elements are of the form $\sum_{i=1}^d h_i(z) G(z^{k^{i-1}}),$ for some rational functions $h_i(z)$, where $d$ is the degree of the Mahler function $G$. Since, as we run over a basis, only finitely many rational functions $h_i(z)$ occur and since the functions $G(z),\ldots,G(z^{k^{d-1}})$ are linearly independent over $\mathbb{C}(z)$, there is a nonzero polynomial $h(z)$ such that $$V\subseteq h(z)^{-1}\sum_{i=1}^d \mathbb{C}[z]G(z^{k^{i-1}}),$$ where $V$ is the $\mathbb{C}$-vector space defined in Lemma \ref{cartvec}. Now since $\lambda$ is nonzero and is not a root of unity, there exists some positive integer $N$ such that $\lambda^{k^N}$ is not a zero of $h(z)$.
Repeatedly using the Mahler Equation \eqref{q0G}, we obtain a relation of the form $$S(z) G(z) = \sum_{j=1}^{d} S_j(z) G(z^{k^{N+j-1}})$$ with $S(z), S_1(z),\ldots ,S_d(z)$ polynomials, $S(z)\neq0$, and $\gcd(S(z),S_1(z),\ldots,$ $S_d(z))=1$; here we write $S(z)$ to avoid confusion with the polynomial $Q(z)$ fixed above. Since $S(z)\in \mathfrak{I}$, we see that $q(z)$ divides $S(z)$ and so $\lambda$ is a root of $S(z)$.
Now we write $$G(z) = \sum_{j=1}^d R_j(z) G(z^{k^{N+j-1}}),$$ with $$R_j(z):=S_j(z)/S(z).$$ Moreover, since $\gcd(S(z),S_1(z),\ldots, S_d(z))=1$, we have $\nu_{\lambda}(R_{\ell}(z))<0$ for some $\ell\in \{1,\ldots ,d\}$.
By Lemma \ref{Kislemma8}, there exists $(r_1,\ldots ,r_N)\in \{0,1,\ldots ,k-1\}^N$ such that
$$\nu_{\lambda}(\Lambda_{r_N} \cdots \Lambda_{r_1}(R_{\ell})(z^{k^N}))\leqslant \nu_{\lambda}(R_{\ell}(z)) < 0.$$
Thus
$$\nu_{\lambda^{k^N}} (\Lambda_{r_N} \cdots \Lambda_{r_1}(R_{\ell})(z)) < 0.$$ Now set
$$T_j(z) :=\Lambda_{r_N} \cdots \Lambda_{r_1}(R_j)(z)\quad \mbox{for}\quad j=1,\ldots ,d.$$
Then
$\Lambda_{r_N} \cdots \Lambda_{r_1}(G)(z)\in V$ and so
$$\sum_{j=1}^d T_j(z) G(z^{k^{j-1}}) \in V.$$ Since $G(z),\ldots ,G(z^{k^{d-1}})$ are linearly independent over $\mathbb{C}(z)$ we must have that $h(z)T_j(z) \in \mathbb{C}[z]$ for $j=1,\ldots, d$. But $\nu_{\lambda^{k^N}}(h(z))=0$ and so $\nu_{\lambda^{k^N}}(h(z)T_{\ell}(z))<0$, which contradicts the fact that $h(z)T_{\ell}(z)$ must be a polynomial, giving the claim. It follows that $q(z)=1$.
In particular, $1\in \mathfrak{I}$ and so $$G(z)\in \sum_{j\geqslant 1} \mathbb{C}[z]G(z^{k^j}),$$ which says that $G(z)$ satisfies a Mahler functional equation of the form \eqref{MFE} with $a_0(z)=1$.
This finishes the proof of Theorem \ref{main}.
\end{proof}
\section{Optimality of Theorem \ref{main}}\label{further}
The careful reader will notice that, while we prove Becker's conjecture completely, the resulting function $F(z)/z^\gamma Q(z)$ that satisfies a Mahler-type functional equation \eqref{MFE} with $a_0(z)=1$ is not necessarily a power series, so that (strictly speaking) it is neither $k$-regular nor $k$-Becker. One may argue that the field of Laurent series is the preferable setting for solutions to \eqref{MFE}, and indeed a result of Dumas \cite[Th\'eor\`eme 7]{Dumasthese} gives reasonable bounds on the valuation at $z=0$ of the solutions.
\begin{theorem}[Dumas] Let $F(z)$ be a Laurent power series solution to a Mahler-type functional equation \eqref{MFE} of degree $d$. Then $F(z)\in z^{-\nu}\mathbb{C}[[z]]$, where $$\nu:=\left\lceil\max\left\{\frac{\nu_0(a_d(z))}{k^d},\frac{\nu_0(a_d(z)/a_0(z))}{(k^d-1)}\right\}\right\rceil.$$
\end{theorem}
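To make the bound concrete, here is a small illustrative computation (ours, not part of the original text): for a degree-$2$ equation with $a_0(z)=1$ and $\nu_0(a_2(z))=k^2-k$ — the shape of the functional equation used in the example below — the bound evaluates to $\nu=1$, that is, at most a simple pole at $z=0$.

```python
# Illustrative sketch (not from the paper): evaluate Dumas's valuation bound
#   nu = ceil(max(nu_0(a_d)/k^d, nu_0(a_d/a_0)/(k^d - 1)))
# with exact rational arithmetic.
from fractions import Fraction
from math import ceil

def dumas_bound(nu0_ad: int, nu0_ad_over_a0: int, k: int, d: int) -> int:
    return ceil(max(Fraction(nu0_ad, k ** d),
                    Fraction(nu0_ad_over_a0, k ** d - 1)))

# Degree-2 equation with a_0(z) = 1 and a_2(z) = z^{k^2-k}(1 - z), so that
# nu_0(a_2) = nu_0(a_2/a_0) = k^2 - k: the bound allows at most a simple pole.
for k in range(2, 20):
    assert dumas_bound(k * k - k, k * k - k, k, 2) == 1
```

This is consistent with the example constructed in this section, whose solution has exactly a simple pole at $z=0$.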
\noindent In this section, we show, by giving an example, that a stronger variant of Becker's conjecture with the added conclusion that the resulting function $R(z)F(z)$ is a power series cannot hold; that is, with the definitions currently in use, such a function is not necessarily $k$-Becker. We now state this result.
\begin{theorem} Let $k\geqslant 2$ be a natural number. Then there exists a $k$-regular power series $F(z)$ such that there is no nonzero rational function $R(z)$ with the property that $F(z)R(z)$ is a $k$-Becker power series.
\label{thm: example}
\end{theorem}
We note that this does not contradict the conclusion of Theorem \ref{main}, but merely shows that one must necessarily work in the ring of Laurent power series in order to obtain the conclusion. More precisely, the examples we give in establishing Theorem~\ref{thm: example} have the property that $F(z)/z$ is $k$-Becker with a pole at $z=0$ and so it has a Laurent power series expansion, but not an expansion in the ring of formal power series around $z=0$; moreover, one must introduce a pole at $z=0$ in order to obtain a $k$-Becker function.
Towards the goal of producing these examples, let $k$ be a natural number that is greater than or equal to two and consider the functional equation
\begin{equation}\label{Afunc} A(z) = (1-z+z^{k-1}) A(z^k) - z^{k^2-k} (1-z)A(z^{k^2}).\end{equation} Then writing $${\bf M}(z):=\left[\begin{matrix} 1-z+z^{k-1} & -z^{k^2-k}(1-z)\\ 1 & 0 \end{matrix}\right]$$ and ${\bf A}(z)=[A(z), A(z^k)]^T$, we have $${\bf A}(z)={\bf M}(z){\bf A}(z^k).$$ Let $H(z)$ be the power series solution of the functional equation \eqref{Afunc} corresponding to the iteration of the matrix ${\bf M}(z)$; that is, set $$H(z):=\lim_{n\to\infty} [1,\,0]{\bf M}(z){\bf M}(z^k)\cdots {\bf M}(z^{k^{n-1}}) [1,\,0]^T.$$ To see that this limit exists, it is enough to notice that $k^2-k\geq1$ so that $${\bf M}(z)=\left[\begin{matrix} 1+O(z) & O(z)\\ 1 & 0 \end{matrix}\right]$$ and then for any $n\geqslant 1$, \begin{align*}{\bf M}(z^{k^{n-1}})\left({\bf M}(z^{k^n})-\left[\begin{matrix} 1 & 0\\ 0 & 1 \end{matrix}\right]\right)&=\left[\begin{matrix} O(1) & O(z^{k^{n-1}})\\ 1 & 0 \end{matrix}\right]\left[\begin{matrix} O(z^{k^n}) & O(z^{k^n})\\ 1 & -1 \end{matrix}\right] \\
&=\left[\begin{matrix} O(z^{k^{n-1}}) & O(z^{k^{n-1}})\\ O(z^{k^n}) & O(z^{k^n}) \end{matrix}\right].\end{align*}
It then follows that for any $n\geqslant 2$, the difference for consecutive terms within the limit is \begin{align*}[1,\,0]{\bf M}(z)&{\bf M}(z^k)\cdots {\bf M}(z^{k^{n-1}}){\bf M}(z^{k^{n}}) [1,\,0]^T\\
&\qquad-[1,\,0]{\bf M}(z){\bf M}(z^k)\cdots {\bf M}(z^{k^{n-1}}) [1,\,0]^T\\
&= [1,\,0]{\bf M}(z){\bf M}(z^k)\cdots {\bf M}(z^{k^{n-2}})\left[\begin{matrix} O(z^{k^{n-1}}) & O(z^{k^{n-1}})\\ O(z^{k^n}) & O(z^{k^n})\end{matrix}\right] [1,\,0]^T\\
&= O(z^{k^{n-1}}).\end{align*} Here we note that $H(0)=1$. We also note that the function $H_0(z):=1/z$ is a solution to the functional equation \eqref{Afunc}.
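As an editorial sanity check (ours, not part of the paper's argument), one can compute the partial matrix products numerically modulo $z^N$ and confirm both that $H(0)=1$ and that the resulting truncation satisfies the functional equation \eqref{Afunc}; the sketch below does this for $k=2$.

```python
# Illustrative check (not from the paper): truncate all series mod z^N and
# verify that H(z), built from the partial matrix products, has H(0) = 1 and
# satisfies H(z) = (1 - z + z^{k-1}) H(z^k) - z^{k^2-k}(1 - z) H(z^{k^2}).
N, k = 64, 2   # truncation order and the Mahler base

def poly(terms):
    """Truncated polynomial from (degree, coefficient) pairs."""
    p = [0] * N
    for d, c in terms:
        if d < N:
            p[d] += c
    return p

def padd(a, b):
    return [x + y for x, y in zip(a, b)]

def pmul(a, b):
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i + j] += ai * b[j]
    return c

def subst(a, m):
    """a(z^m) mod z^N."""
    c = [0] * N
    for i, ai in enumerate(a):
        if i * m < N:
            c[i * m] = ai
    return c

topL = poly([(0, 1), (1, -1), (k - 1, 1)])          # 1 - z + z^{k-1}
topR = poly([(k * k - k, -1), (k * k - k + 1, 1)])  # -z^{k^2-k}(1 - z)

def M(m):
    """The matrix M(z^{k^m}), entrywise mod z^N."""
    s = lambda p: subst(p, k ** m)
    return [[s(topL), s(topR)], [poly([(0, 1)]), poly([])]]

def matmul(A, B):
    return [[padd(pmul(A[r][0], B[0][c]), pmul(A[r][1], B[1][c]))
             for c in range(2)] for r in range(2)]

# Partial product M(z) M(z^k) ... M(z^{k^6}); since the n-th factor changes the
# product by O(z^{k^{n-1}}), later factors contribute nothing mod z^64.
P = M(0)
for n in range(1, 7):
    P = matmul(P, M(n))
H = P[0][0]                     # H(z) = [1, 0] P [1, 0]^T

assert H[0] == 1                # H(0) = 1, as noted in the text
assert H == padd(pmul(topL, subst(H, k)), pmul(topR, subst(H, k * k)))
```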
We continue by setting
\begin{equation} \label{F0}
F_0(z) := H(z)+\frac{1}{z},\end{equation} which again satisfies \begin{equation}\label{F_1}F_0(z) = (1-z+z^{k-1}) F_0(z^k) - z^{k^2-k} (1-z)F_0(z^{k^2}).\end{equation} As $H(z)$ is a $k$-Becker power series, it is $k$-regular, thus \begin{equation} \label{eqF} F(z):=zF_0(z)=1+zH(z)\end{equation} is $k$-regular, as the $k$-regular power series form a ring. We note that $F_0(z)=F(z)/z$ is $k$-Becker and is a Laurent power series. We show, however, that there does not exist a nonzero rational function $R(z)$ such that $F(z)R(z)$ is a $k$-Becker power series; that is, in order to obtain a $k$-Becker function that is a nonzero rational function multiple of $F(z)$ one must work in the ring of Laurent power series and cannot restrict one's focus to the ring of formal power series. In order to show the desired result, we first establish two key lemmas. We note that the following lemma can also be proved using the method of Roques \cite{R2018}.
\begin{lemma}\label{independent} Let $k\geqslant 2$ and let $F_0(z)$ be as in Equation \eqref{F0}. Then the Laurent power series $F_0(z)$ and $F_0(z^k)$ are linearly independent over $\mathbb{C}(z)$.
\end{lemma}
\begin{proof} Suppose not. Then since $F_0(z)$ is nonzero, there is a rational function $a(z)$ such that $F_0(z^k)/F_0(z)=a(z)$. We note that $$F_0(z)=\frac{1}{z} + 1 + O(z)$$ and so $$\frac{F_0(z^k)}{F_0(z)} = z^{1-k}( 1-z + O(z^2)).$$ It follows that there are relatively prime polynomials $P(z)$ and $Q(z)$ with $P(0)=Q(0)=1$ such that $a(z) = z^{1-k} P(z)/Q(z)$. Then since $$F_0(z^{k^2}) = a(z)a(z^k) F_0(z),$$ Equation \eqref{F_1} gives
$$1 = (1-z+z^{k-1}) z^{1-k} \cdot\frac{P(z)}{Q(z)} - z^{k^2-k} (1-z)z^{1-k^2} \cdot\frac{P(z)P(z^k)}{Q(z)Q(z^k)}.$$
Clearing denominators, we see
\begin{equation} \label{PQ}
z^{k-1} Q(z)Q(z^k) = (1-z+z^{k-1}) P(z)Q(z^k) - (1-z)P(z)P(z^k).\end{equation}
In particular, $Q(z^k)$ divides $(1-z)P(z)P(z^k)$ and since $P(z)$ and $Q(z)$ are relatively prime, we then have that
$Q(z^k)$ divides $(1-z)P(z)$. Similarly, $P(z)$ divides $z^{k-1} Q(z)Q(z^k)$ and since $P(0)=1$, and $P(z)$ and $Q(z)$ are relatively prime, we see that $P(z)$ divides $Q(z^k)$. So we may write $Q(z^k)=P(z)b(z)$ with $b(z)$ dividing $(1-z)$. Since $Q(0)=P(0)=1$, we see that $b(z)=1$ or $b(z)=(1-z)$.
Then substituting $P(z)=Q(z^k)/b(z)$ into Equation \eqref{PQ}, we find
$$z^{k-1}Q(z)b(z)b(z^k) = (1-z+z^{k-1}) Q(z^k)b(z^k) - (1-z) Q(z^{k^2}).$$
Now let $D$ denote the degree of $Q(z)$. Then since \begin{equation}\label{Qkb}(1-z)Q(z^{k^2}) = - z^{k-1}Q(z)b(z)b(z^k) + (1-z+z^{k-1}) Q(z^k)b(z^k),\end{equation} and since $b(z)$ has degree at most $1$, we have
$$k^2D + 1 \leqslant \max\{ 2k+D, 2k-1+kD\}$$ so that \begin{equation}\label{Dkk1} D\leqslant \max\left\{ \frac{2k-1}{k^2-1},\frac{2}{k}\right\}\leqslant 1,\end{equation} since $k\geqslant 2$. Thus $D=0$ or $D=1$.
Suppose that $D=0$. Then $Q(z)$ is a constant polynomial and the condition that $Q(0)=1$ gives $Q(z)=1$. Since $P(z)$ divides $Q(z^k)$ we have that $P(z)$ is also $1$ and so $a(z)=z^{1-k}$. But $$\frac{F_0(z^k)}{F_0(z)}=z^{1-k}(1-z+O(z^2)) \neq z^{1-k}=a(z),$$ and so we get a contradiction, thus $D=1$.
So, suppose that $D=1$. By \eqref{Dkk1} it is clear that if $k\geqslant 3$, then $D=0$, so we must have $k=2$. If $b(z)=1$, then comparing degrees of the sides of the equality in \eqref{Qkb} gives $k^2+1=2k-1$, which is impossible since $k\geqslant 2$. Thus we must have $b(z)=1-z$. In this case, $Q(z)$ has degree one and we have $Q(z^2)=P(z)(1-z)$. Plugging in $z=1$ gives $Q(1)=0$ and since $Q(z)$ has degree $1$, we have $Q(z)=1-z$. Then $Q(z^2)=P(z)(1-z)$ gives that $P(z)=1+z$ and so $$a(z)=\frac{1}{z} \cdot\frac{1+z}{1-z} = \frac{1}{z}+2 +O(z).$$ But $$\frac{F_0(z^2)}{F_0(z)}=\frac{1}{z}- 1+O(z),$$ and so we obtain a contradiction. Thus $F_0(z)$ and $F_0(z^k)$ are linearly independent over $\mathbb{C}(z)$.
\end{proof}
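The arithmetic step from the degree comparison to \eqref{Dkk1} in the proof above can be double-checked mechanically. The following sketch (our illustration, not part of the paper) evaluates the bound $D\leqslant\max\{(2k-1)/(k^2-1),\,2/k\}$ with exact rational arithmetic.

```python
# Illustrative check (not from the paper) of the degree bound (Dkk1):
# comparing degrees in k^2*D + 1 <= max(2k + D, 2k - 1 + k*D) yields
# D <= max((2k - 1)/(k^2 - 1), 2/k), which is at most 1 for every k >= 2,
# with equality only at k = 2.
from fractions import Fraction

def degree_bound(k: int) -> Fraction:
    return max(Fraction(2 * k - 1, k * k - 1), Fraction(2, k))

assert degree_bound(2) == 1                              # k = 2 permits D = 1
assert all(degree_bound(k) < 1 for k in range(3, 100))   # k >= 3 forces D = 0
```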
\begin{lemma}\label{Xnotzero} Let $F_0(z)$ be as defined above, let $r\in\mathbb{N}$, and let $h_0(z),\ldots,h_r(z)$ be rational functions such that $h_i(z)/z^{k^i-1}$ does not have a pole at $z=0$ for $i=0,\ldots ,r$. If $$\sum_{i=0}^r h_i(z) F_0(z^{k^i})= 0,$$ then $h_0(0)=0$.
\end{lemma}
\begin{proof} We prove this by induction on $r$. For $r=0$ and $r=1$, the result follows by Lemma \ref{independent} since $F_0(z)$ and $F_0(z^k)$ are linearly independent over $\mathbb{C}(z)$.
So suppose that the result holds for $r<m$ with $m\geqslant 2$ and consider the case when $r=m$.
Towards a contradiction, suppose that $$\sum_{i=0}^m h_i(z) F_0(z^{k^i}) = 0$$ with $h_0(0)$ nonzero and $z^{k^i-1}$ dividing $h_i(z)$ in the local ring $\mathbb{C}[z]_{(z)}$ (recall that $\mathbb{C}[z]_{(z)}$ is the ring of all rational functions whose denominator, when written in reduced form, is nonzero at $z=0$). For $i=1,\ldots,m$, set $$g_i(z) := z^{-k^i+1} \frac{h_i(z)}{h_0(z)}.$$ Then since $h_0(0)$ is nonzero, each $g_i(z)$ is regular at $z=0$ and
$$F_0(z) + \sum_{i=1}^m g_i(z) z^{k^i-1} F_0(z^{k^i}) = 0.$$
Applying the Cartier operator $\Lambda_0$ gives
$$\Lambda_0(F_0)(z) + \sum_{i=1}^m \Lambda_0( g_i(z)z^{k-1}) z^{k^{i-1}-1} F_0(z^{k^{i-1}}) = 0.$$ But using \eqref{F_1} and applying Lemma \ref{ch6:Cartier}(a), we have $\Lambda_0(F_0)(z) = F_0(z) - z^{k-1} F_0(z^k)$, so we have
\begin{multline}\label{lambdaF_1} 0 = (1+\Lambda_0(g_1(z)z^{k-1})) F_0(z)\\ +
(-1+\Lambda_0(g_2(z) z^{k-1})) z^{k-1} F_0(z^k) \\ + \sum_{i=2}^{m-1} \Lambda_0(g_{i+1}(z)z^{k-1}) z^{k^{i}-1}F_0(z^{k^{i}}). \end{multline}
Since $g_1(z)$ is regular at $z=0$, we have that $g_1(z)z^{k-1}$ has a power series expansion with zero constant term and hence $\Lambda_0(g_1(z)z^{k-1})$ vanishes at $z=0$, and so $1+\Lambda_0(g_1(z)z^{k-1})$ is a rational function which is nonzero at $z=0$. Since each of the higher-index coefficients in \eqref{lambdaF_1} is of the form $z^{k^{i}-1}$ times a rational function regular at $z=0$, the induction hypothesis applies and we get a contradiction. This contradiction proves the lemma.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm: example}] Let $F(z)$ be the $k$-regular power series defined in \eqref{eqF}. We claim that there is no nonzero rational function $R(z)$ such that the function $R(z)F(z)$ is a $k$-Becker power series. Since $F(0)=1$, if $R(z)F(z)$ has a power series expansion at $z=0$, then $R(z)$ must be regular at $z=0$.
Suppose towards a contradiction that there is a rational function $R(z)$ such that $R(z)$ is regular at $z=0$ and such that $F(z)R(z)$ is $k$-Becker. Then we write $R(z)=z^a R_0(z)$ with $a\geqslant 0$ and with $R_0(0)$ nonzero. Then there exist a natural number $d$ and polynomials $b_1(z),\ldots,b_d(z)$ such that
$$R_0(z)F(z) = b_1(z) z^{ka-a} R_0(z^k) F(z^k) + \cdots + b_d(z) z^{k^d a - a} R_0(z^{k^d}) F(z^{k^d}).$$ As defined above, $F(z)=z F_0(z)$, so we have
\begin{align}\label{star}R_0(z) F_0(z) &= b_1(z) z^{ka-a+ k-1} R_0(z^k) F_0(z^k)\\ \nonumber &\qquad\qquad+ \cdots + b_d(z) z^{k^da -a + k^d-1} R_0(z^{k^d}) F_0(z^{k^d}). \end{align}
But this contradicts Lemma \ref{Xnotzero}. The result follows.
\end{proof}
\section{The structure of Mahler functional equations for regular functions}\label{structure}
In this section, we prove Proposition \ref{prop:dumas}; that is, we show that, for $F(z)\in\mathbb{C}[[z]]$, the series $F(z)$ is $k$-regular if and only if $F(z)$ satisfies some functional equation \eqref{MFE} such that all of the zeros of $a_0(z)$ are either zero or roots of unity of order not coprime to $k$. As stated in the Introduction, Proposition \ref{prop:dumas} is obtained by combining Theorem \ref{main} with a result of Dumas \cite[Th\'eor\`eme 30]{Dumasthese}. Dumas's result \cite[Th\'eor\`eme 30]{Dumasthese} is proved by appealing to results for degree-one Mahler functions via his Structure Theorem, recorded above as Theorem \ref{lem:DAB}. By appealing to Theorem \ref{lem:DAB} and the ring structure of the set of $k$-regular power series, one can show that a series $F(z)$ is $k$-regular once one knows that the infinite product $$H(z):=\frac{1}{\prod_{j\geqslant 0}\Gamma(z^{k^j})}$$ is $k$-regular. This is exactly what Dumas did via the following lemma; see \cite[Lemme~8]{Dumasthese}.
\begin{lemma}[Dumas]\label{lem:partial} The infinite product $H(z)=\prod_{j\geqslant 0}\Gamma(z^{k^j})^{-1}$ is $k$-regular if and only if the $\mathbb{C}$-vector space $$\left\langle\left\{\Lambda_{r_n}\cdots \Lambda_{r_1}\left(\frac{1}{\prod_{j=0}^{n-1}\Gamma(z^{k^j})}\right): 0\leqslant r_i<k,\ n\in\mathbb{N}\right\}\right\rangle_\mathbb{C}$$ is finite-dimensional.
\end{lemma}
Lemma \ref{lem:partial} follows from Lemma \ref{cartvec} combined with the equality $$\Lambda_{r_n}\cdots \Lambda_{r_1}H(z)=\left(\Lambda_{r_n}\cdots \Lambda_{r_1}\left(\frac{1}{\prod_{j=0}^{n-1}\Gamma(z^{k^j})}\right)\right) H(z),$$ which itself follows from the fact that $H(z)$ is a degree-one Mahler function satisfying the functional equation $$\Gamma(z)H(z)-H(z^k)=0.$$
We require the following proposition for the sufficient direction of Proposition \ref{prop:dumas}. As stated previously, the argument is due to Dumas \cite[Th\'eor\`eme 30]{Dumasthese}. We state the result here in a slightly different form.
\begin{proposition}[Dumas]\label{regprod} Let $\Gamma(z)$ be a polynomial with $\Gamma(0)=1$. If all of the zeros of $\Gamma(z)$ are roots of unity of order not coprime to $k$, then $H(z)=\prod_{j\geqslant 0}\Gamma(z^{k^j})^{-1}$ is $k$-regular.
\end{proposition}
\noindent To prove Proposition \ref{regprod}, Dumas proved that the functions $$\Lambda_{r_n}\cdots \Lambda_{r_1}\left(\prod_{j=0}^{n-1}\Gamma(z^{k^j})^{-1}\right),$$ for $n\geqslant 1$, have only finitely many poles with bounded multiplicities and then applied Lemma~\ref{lem:partial}; see also \cite[Theorem 10]{D1993}. Compare with Lemma \ref{Buniform}, where we show a similar result for the set of matrices $\{{\bf B}_n(z):n\geqslant 1\}$.
For the necessary direction of Proposition \ref{prop:dumas}, we will use the following result.
\begin{lemma}\label{Qzk} Let $k\geqslant 2$ be an integer, $Q(z)$ be a polynomial and suppose that all of the zeros of $Q(z)$ are either zero or roots of unity of order not coprime to $k$. Then for any integer $m\geqslant 1$, the zeros of $Q(z^{k^m})$ are either zero or roots of unity of order not coprime to $k$.
\end{lemma}
\begin{proof} Since all zeros of $Q(z)$ are either zero or roots of unity, it is clear that all zeros of $Q(z^{k^m})$ are either zero or roots of unity.
Now suppose to the contrary that there is a zero $z=\zeta$ of $Q(z^{k^m})$ that is a root of unity of order coprime to $k$, say $\ell$. Then since $\gcd(k,\ell)=1$, there is a positive integer $M$ dividing $\varphi(\ell)$ such that $k^M\equiv 1\ (\bmod\ \ell)$. Thus for this $M$, we have \begin{equation}\label{zz}\zeta^{k^M}=\zeta.\end{equation} Since $z=\zeta$ is a zero of $Q(z^{k^m})$, we have that $z=\xi:=\zeta^{k^m}$ is a zero of $Q(z)$. But then, using \eqref{zz}, we have $z=\xi$ is a zero of $Q(z)$ such that $$\xi=\zeta^{k^{m}}=\left(\zeta^{k^{M}}\right){}^{k^m}=\left(\zeta^{k^{m}}\right){}^{k^M}={\xi}^{k^M}.$$ If we denote by $n$ the order of $\xi$, this gives that $k^M\equiv 1\ (\bmod\ n)$, so that we have $\gcd(k,n)=1$, a contradiction, which proves the lemma.
\end{proof}
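To see the coprimality mechanism of the proof in action, here is a small numerical illustration (ours, not from the paper): for a root of unity $\zeta$ of order $\ell$ with $\gcd(k,\ell)=1$, the multiplicative order $M$ of $k$ modulo $\ell$ satisfies $\zeta^{k^M}=\zeta$.

```python
# Illustration (not from the paper): if zeta has order l with gcd(k, l) = 1,
# then k^M = 1 (mod l) for M the multiplicative order of k mod l, and hence
# the map z -> z^k applied M times fixes zeta.
from math import gcd
import cmath

def fixing_exponent(k: int, l: int) -> int:
    """Least M >= 1 with k^M congruent to 1 modulo l; exists iff gcd(k, l) = 1."""
    assert gcd(k, l) == 1 and l > 1
    M, p = 1, k % l
    while p != 1:
        p, M = (p * k) % l, M + 1
    return M

k, l = 2, 7
zeta = cmath.exp(2j * cmath.pi / l)          # a primitive 7th root of unity
M = fixing_exponent(k, l)                    # 2^3 = 8 = 1 (mod 7), so M = 3
assert M == 3
assert abs(zeta ** (k ** M) - zeta) < 1e-9   # zeta^{k^M} = zeta
```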
\begin{proof}[Proof of Proposition \ref{prop:dumas}] We prove necessity first. Towards this, suppose that $F(z)$ is $k$-regular and satisfies the minimal functional equation \eqref{MFE}. Following the comments after Theorem \ref{main}, we denote by $A$ the set of roots of unity $\zeta$ such that $\zeta^{k^M}\neq\zeta$ for all $M\geqslant 1$ and $a_0(\zeta)=0$; note that this condition is equivalent to the condition that the order of $\zeta$ is not coprime to $k$. Then there is a nonnegative integer $\gamma$ and an $N$ depending on $a_0(z)$ such that for $$Q(z):=\prod_{\zeta\in A}\prod_{j=0}^{N-1}(1-z^{k^j}\overline{\zeta}^{k^N})^{\nu_\zeta(a_0)},$$ the function $F(z)/z^\gamma Q(z)$ satisfies a Mahler-type functional equation \eqref{MFE} with $a_0(z)=1$. In particular, we write $$\frac{F(z)}{z^\gamma Q(z)}+\sum_{i=1}^D b_i(z)\cdot \frac{F(z^{k^i})}{z^{\gamma k^i} Q(z^{k^i})}=0.$$ Now multiplying by $z^{\gamma k^D} Q(z) Q(z^k)\cdots Q(z^{k^D})$ gives \begin{multline}\label{newMFE} \qquad z^{\gamma (k^{D}-1)}Q(z^k)\cdots Q(z^{k^D}) F(z)\\ +\sum_{i=1}^D b_i(z)z^{\gamma (k^D-k^i)}\left(\frac{\prod_{j=0}^D Q(z^{k^j})}{Q(z^{k^i})}\right) \cdot F(z^{k^i})=0.\qquad \end{multline} By the definition of $Q(z)$ and Lemma \ref{Qzk}, we have that $F(z)$ satisfies a (new) functional equation \eqref{MFE}, specifically Equation \eqref{newMFE}, such that all of the zeros of $$a_0(z)=z^{\gamma (k^{D}-1)}Q(z^k)\cdots Q(z^{k^D})$$ are either zero or roots of unity of order not coprime to $k$. This proves necessity.
For sufficiency, we use both Theorem \ref{lem:DAB} and Proposition \ref{regprod}. To this end, suppose that $F(z)$ satisfies some functional equation \eqref{MFE} such that all of the zeros of $a_0(z)$ are either zero or roots of unity of order not coprime to $k$. Now write $$a_0(z)=\rho z^\delta \Gamma(z),$$ where $\Gamma(0)=1$. Thus all the zeros of $\Gamma(z)$ are roots of unity of order not coprime to $k$. Now Theorem \ref{lem:DAB} gives that there is a $k$-regular series $G(z)$ such that $$F(z)=\frac{G(z)}{\prod_{j\geqslant 0}\Gamma(z^{k^j})}.$$ Applying Proposition \ref{regprod} gives that the function $$H(z):= \frac{1}{\prod_{j\geqslant 0}\Gamma(z^{k^j})}$$ is $k$-regular. Since $k$-regular series form a ring, we have that $F(z)=G(z)H(z)$ is $k$-regular. This proves sufficiency, and completes the proof of the proposition.
\end{proof}
\bibliographystyle{amsplain}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\section{Introduction}
\label{sec:intro}
Turbulence is ubiquitous in the interstellar medium (ISM) at different scales \citep{AM95, CL10}, and magnetic fields play an important role in most of the physics of the ISM. In particular, magnetic fields are essential for star formation \citep{1976ApJ...210..326M,2015ApJ...808...48B,2013ApJ...770..151C}, the propagation and acceleration of cosmic rays \citep{1949PhRv...75.1169F,Sch10}, and the transport of heat and mass in the galaxy \citep{L06,NM01}. More recently, the study of the structure of the magnetic field has been further motivated by attempts to detect the elusive B-modes of cosmological origin \citep{2014JCAP...06..053F}. The latter produce polarization that can be confused with the foreground polarization arising from interstellar magnetic fields \citep{1989ApJ...346..728J,2016MNRAS.462.2343V}. However, the study of magnetic fields in the ISM is complicated, and it is therefore of great interest to find alternative ways of tracing magnetic fields.
The Velocity Gradient Technique (VGT) employs either Velocity Centroid Gradients (VCGs) \citep{GCL17,YL17a,YL17b}, Reduced Velocity Centroid Gradients (RVCGs) \citep{LY18a}, or Velocity Channel Gradients (VChGs)\footnote{The technique is based on the theoretical study in \cite{LP00}. The theory predicts that velocity caustics dominate the intensity fluctuations in thin channel maps.} (\citealt{LY18a}). In this paper, we use the VCGs, but the approach that we discuss is also applicable to other realizations of the VGT.
The VGT is founded on the modern understanding of MHD turbulence (\citealt{GS95}, hereafter GS95), which includes the concept of fast turbulent reconnection (\citealt{LV99}, henceforth LV99) and is supported by numerical studies (see \citealt{CV00,2001ApJ...554.1175M,CLV02, CL2003, K09}). Due to turbulent reconnection, motions perpendicular to the local direction of the magnetic field are not constrained, and therefore eddies rotating perpendicular to the magnetic field follow the Kolmogorov spectrum with eddy velocity $v_l\sim l_{\bot}^{1/3}$. The index $\bot$ in $l_{\bot}$ indicates that the motions are perpendicular to the magnetic field of the eddy. As a result, the velocity gradient scales as $v_l/l_{\bot}\sim l_{\bot}^{-2/3}$, which means that the smallest resolved eddies induce the largest gradients. These gradients are perpendicular to the local direction of the magnetic field. A more detailed explanation of the foundations of the VGT can be found in \cite{LY18a}.
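The scaling argument above can be checked numerically. The following sketch (illustrative, not from the authors' code) confirms that with the GS95 eddy velocity $v_l \propto l_{\bot}^{1/3}$, the gradient estimate $v_l/l_{\bot} = l_{\bot}^{1/3-1}$ has a log-log slope of $-2/3$, so the smallest resolved eddies dominate the velocity gradients:

```python
import numpy as np

# Sketch: with the GS95 eddy scaling v_l ~ l^(1/3), the gradient
# v_l / l ~ l^(-2/3) grows as the eddy scale l shrinks, so the
# smallest resolved eddies induce the largest gradients.
l = np.logspace(-3, 0, 50)          # perpendicular eddy scales (arbitrary units)
v = l ** (1.0 / 3.0)                # Kolmogorov-type eddy velocity
grad = v / l                        # gradient estimate ~ l^(-2/3)

# Fit the log-log slope: it should be -2/3.
slope = np.polyfit(np.log(l), np.log(grad), 1)[0]
print(slope)
```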
The VGT has been a fast developing branch of research. For instance, the tracing of the magnetic field direction in diffuse \citep{YL17a} and self-gravitating media \citep{YL17b, LY18a} has been performed, as well as estimations of the sonic ($M_s$, \citealt{YLL18}) and Alfvenic ($M_A$, \citealt{LYH18}) Mach numbers. As a separate development, the approach of studying magnetic fields with gradients has also been applied to synchrotron intensities \citep{Letal17}, which resulted in the Synchrotron Intensity Gradients (SIGs) technique, as well as to synchrotron polarization \citep{LY18b}, which resulted in two techniques, the Synchrotron Polarization Gradients (SPGs) and Synchrotron Polarization Derivative Gradients (SPDGs).\footnote{Incidentally, our approach of block averaging is also applicable to studies of gradients of column densities. The corresponding Intensity Gradient Technique (IGT) should not be confused with the Histograms of Relative Orientation (HRO) proposed in \cite{2013ApJ...774..128S}. The IGT traces both magnetic fields and shocks (see \cite{YL17b}, \cite{LY18a}), while the HRO provides a statistical relation between the relative orientation of the magnetic field and intensity gradients as a function of the column density. The latter is a measure calculated for the entire image and cannot be used to trace the spatial variations of magnetic fields. We view the IGT as a part of the gradient technique. Its synergy with the VGT was demonstrated, e.g., in \cite{LY18a}.} The theoretical foundations of the aforementioned techniques are similar to those of the VGT, and therefore we expect that improvements of the data analysis, in particular the use of Principal Component Analysis, can also be advantageous for other gradient techniques, e.g. those dealing with synchrotron data.
The practical application of the VGT is affected by the quality of the data. A noise suppression method for the VGT was explored in \cite{Letal17} and elaborated in \cite{LY18a}, showing that convolving the observational map with a small-$\sigma$ Gaussian kernel retrieves the spatial structure of the molecular cloud. Moreover, \cite{YL17b} showed that filtering out the non-turbulent contributions in Fourier space can improve the accuracy of the VGT in tracing the magnetic field. In this paper, we continue the work of improving magnetic field tracing with the VGT. For this purpose, we explore the application of Principal Component Analysis (PCA).
PCA is widely used in image processing and image compression. Among astrophysical applications, it was used in \cite{2002ApJ...566..276B,2002ApJ...566..289B} to obtain the turbulence spectrum from observations. Later, \cite{2008ApJ...680..420H} employed the PCA to study turbulence anisotropies. Our present use of the PCA is different: we use it as a tool for the preliminary processing of spectroscopic data.
The idea of the PCA is that an image of size $N^2$ can be effectively represented by $n<N$ eigen-maps. The physical meaning of the eigenvalues from the PCA is closely related to the turbulent velocity dispersion $v^2$. We apply the VCGs to the individual eigen-images and explore for which of them the magnetic field is traced best.
In what follows, we briefly describe the numerical code and simulation setup in \S \ref{tab:sim}. In \S \ref{sec:res}, we test the implementation of the VCGs with the PCA using numerical simulations. \S \ref{sec:obs} presents an observational example of the VCG-PCA technique. In \S \ref{sec:conclusion}, we discuss our technique and present our conclusions.
\section{Numerical setting for synergistic use of PCA and VGT}
\label{tab:sim}
\begin{table}
\centering
\begin{tabular}{c c c c}
\hline
Model Name & $M_S$ & $M_A$ & Resolution \\ \hline
Ms0.4Ma0.04 & 0.41 & 0.04 & $480^3$\\
Ms0.8Ma0.08 & 0.92 & 0.09 & $480^3$\\
Ms1.6Ma0.16 & 1.95 & 0.18 & $480^3$\\
Ms3.2Ma0.32 & 3.88 & 0.35 & $480^3$\\
Ms6.4Ma0.64 & 7.14 & 0.66 & $480^3$\\
\hline
\end{tabular}
\caption{\label{tab:sim} MHD simulations used in the present work. $M_s$ and $M_A$ denote the instantaneous values of the sonic and Alfven Mach numbers at each of the snapshots. }
\end{table}
For our studies of gradients with the PCA, we use the same numerical cubes as in \cite{brazil18} (see Table \ref{tab:sim}). The simulation parameters, with different combinations of the Alfvenic Mach number $M_A=V_L/V_A$ and the sonic Mach number $M_S=V_L/V_s$, are listed in Table \ref{tab:sim}. Here $V_L$ is the turbulence injection velocity, and $V_A$ and $V_s$ are the Alfven and sonic velocities, respectively.
For this study, we consider the optically thin case ($\tau \sim \int \kappa(s)\, ds \ll 1$) and synthesize observational maps similar to those in \cite{brazil18}. We assume that the emissivity is proportional to the density, but do not consider this a significant limitation. For instance, the case of emissivity proportional to the density squared is considered in \citet{2017MNRAS.470.3103K}, with the change of the results being insignificant.
We denote the intensity within the Position-Position-Velocity (PPV) cubes as $\rho(x,y,v)$, and the cube dimensions as $n_x\times n_y\times n_v$, where $n_v$ is the number of velocity channels along the spectral-line direction $v$ (the line-of-sight direction, LOS); $n_v=400$ for our studies unless specifically mentioned.
The PPV cubes are first preprocessed by the PCA, similar to the procedure described in \cite{2002ApJ...566..289B}. After that, the VCG technique is applied to the eigen-images of different orders. The product of the PCA is a set of eigen-images $I_i$ with eigenvalues in decreasing order, $\lambda_i \sim v^2_i$, where the latter records the velocity variance along the line of sight.
As discussed in \S \ref{sec:intro}, for studying turbulence the velocity variance is related to the eddy size along the line of sight. \cite{2008ApJ...680..420H} split the PPV cube into vertical and horizontal Position-Velocity tires (PV tires), where every PV tire is a vertical or horizontal slice of the PPV cube $\rho(x,y,v)$ averaged over the x-direction or y-direction, respectively. The eigenvalue is obtained by solving the eigenvalue equation for each PV tire.
\begin{figure*}
\centering
\subfigure[1st eigen-channel, AM=0.48
]
{\includegraphics[width=.3\linewidth,height=0.3\linewidth]{figure/VCGs_1st-min.png}}
\centering
\subfigure[10th eigen-channel, AM=0.64
]
{\includegraphics[width=.3\linewidth,height=0.3\linewidth]{figure/VCGs_10th-min.png}}
\centering
\subfigure[20th eigen-channel, AM=0.66
]
{\includegraphics[width=.3\linewidth,height=0.3\linewidth]{figure/VCGs_20th-min.png}}
\centering
\subfigure[30th eigen-channel, AM=0.63
]
{\includegraphics[width=.3\linewidth,height=0.3\linewidth]{figure/VCGs_30th-min.png}}
\centering
\subfigure[40th eigen-channel, AM=0.64
]
{\includegraphics[width=.3\linewidth,height=0.3\linewidth]{figure/VCGs_40th-min.png}}
\centering
\subfigure[50th eigen-channel, AM=0.63
]
{\includegraphics[width=.3\linewidth,height=0.3\linewidth]{figure/VCGs_50th-min.png}}
\caption{\label{fig:si-eigen0}The eigen-centroid maps with gradients (red) and magnetic field (blue) plotted for different eigenvalues. The simulation used here is Ms0.4Ma0.04 with $M_{S}=0.41$ and $M_{A}=0.04$. Note that each panel uses its own color scale.}
\end{figure*}
Similarly, in our work we assume the PPV cube is properly normalized\footnote{In principle one should use the normalized PPV cube $\rho' = \rho/\int \rho$. However, for the PCA treatment, normalization by a constant does not alter the result, so we keep $\rho$ for simplicity.} and treat the PPV cube $\rho(x,y,v)$ as the probability density function of the three random variables $x,y,v$. We can then define the covariance matrix \citep{2002ApJ...566..276B} between velocity channels as:\footnote{The textbook definition of the covariance matrix is $S(v_1,v_2)=E(\rho(v_1) \rho(v_2))-E(\rho(v_1))E(\rho(v_2))$, where $E$ is the expectation operator. However, in \cite{2002ApJ...566..276B,2002ApJ...566..289B} and \cite{2008ApJ...680..420H} the second term is not included, and we do not include it either. We expect the inclusion of this term to have only a small effect on the eigenvalues of the covariance matrix when only the largest eigenvalues are of interest.}
\begin{equation}
S(v_1,v_2) \propto \int dxdy \rho(x,y,v_1)\rho(x,y,v_2)
\end{equation}
The corresponding eigenvalue equation for this covariance matrix is:
\begin{equation}
S\textbf{u}=\lambda\textbf{u}
\end{equation}
where the $\lambda_{i}$ are the eigenvalues associated with the eigenvectors $\textbf{u}_{i}$, $i=1,2,...,n_v$. Solving the eigenvalue equation gives the eigenvalue and eigenvector for each channel. The eigenvectors $\textbf{u}_{i}$ contain the weights with which the eigen-maps of rank $i$\footnote{Here we refer to the ordering index of the eigenvalues from the PCA after sorting them from largest to smallest.} are constructed from the channel maps. We apply the eigenvalues $\lambda_{i}$ as the weighting coefficients for each channel. The eigen-intensity maps $I_{eigen}$ and eigen-centroid maps $C_{eigen}$ can then be computed by:
\begin{equation}
C_{eigen}(x,y)=\frac{\int dv\ \rho(x,y,v)\cdot v \cdot \lambda(v)}{I_{eigen}(x,y)}
\end{equation}
\begin{equation}
I_{eigen}(x,y)=\int dv\ \rho(x,y,v)\cdot \lambda(v)
\end{equation}
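The pipeline above can be sketched in a few lines of code. This is our illustrative reading of the construction, not the authors' implementation: we compute the channel-channel covariance of Eq. (1), solve the eigenvalue problem of Eq. (2), and build rank-selected maps in the spirit of Eqs. (3)-(4), using the eigenvector components of the chosen rank as per-channel weights (as stated in the text, the eigenvectors contain the weights for constructing the rank-$i$ eigen-maps); all function and variable names are ours.

```python
import numpy as np

def eigen_centroid_maps(ppv, rank):
    """Illustrative sketch of the PCA preprocessing described above.
    Assumed layout: ppv[x, y, v].  Returns the eigenvalues sorted in
    decreasing order plus the eigen-intensity and eigen-centroid maps
    for the requested rank (1-indexed)."""
    nx, ny, nv = ppv.shape
    flat = ppv.reshape(nx * ny, nv)
    # Channel-channel covariance S(v1, v2) ~ sum_xy rho(.,v1) rho(.,v2);
    # as in the text, the mean-subtraction term is omitted.
    S = flat.T @ flat
    lam, u = np.linalg.eigh(S)                  # ascending eigenvalues
    order = np.argsort(lam)[::-1]               # sort to descending rank
    lam, u = lam[order], u[:, order]
    w = u[:, rank - 1]                          # per-channel weights of this rank
    v_axis = np.arange(nv, dtype=float)         # nominal channel velocities
    I = (ppv * w).sum(axis=2)                   # eigen-intensity map
    C = (ppv * w * v_axis).sum(axis=2) / I      # eigen-centroid map
    return lam, I, C
```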
For the gradient computation, we follow the sub-block averaging method developed in \cite{YL17a}, which provides the sub-block averaged orientation of the gradients. The resultant gradients are rotated by $90^o$ to correspond to the expected magnetic field directions. The error estimation method of \cite{LY18a} is also employed to quantify the accuracy of the Gaussian fitting function used in the sub-block averaging when computing the average gradient direction within a sub-block. The orientation of the gradients from the VGT is compared with the synthetic polarization, assuming a constant emissivity for the aligned dust grains \citep{L07}.
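The sub-block averaging step can be sketched as follows. Note the hedge: the published recipe fits a Gaussian to the in-block histogram of gradient angles, while this sketch substitutes the circular mean of the doubled angles as a simpler stand-in; the function name and block size are ours.

```python
import numpy as np

def subblock_gradient_angles(cmap, block=16):
    """Sketch of sub-block averaging: compute per-pixel gradient
    orientations of a centroid map and reduce each block to a single
    orientation, then rotate by 90 degrees to point along the inferred
    magnetic field.  (Circular mean used in place of the Gaussian fit.)"""
    gy, gx = np.gradient(cmap)
    theta = np.arctan2(gy, gx)                      # per-pixel gradient angle
    ny, nx = cmap.shape
    out = np.zeros((ny // block, nx // block))
    for i in range(ny // block):
        for j in range(nx // block):
            t = theta[i*block:(i+1)*block, j*block:(j+1)*block]
            # Orientation (not direction) statistics: double the angle.
            mean2 = np.arctan2(np.sin(2*t).mean(), np.cos(2*t).mean())
            out[i, j] = mean2 / 2.0
    return out + np.pi / 2.0                        # 90-degree rotation
```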
The mock Stokes parameters $Q(x,y)$ and $U(x,y)$ \citep{2015PhRvL.115x1302C} can then be expressed in terms of the local magnetic field angle $\theta$, defined by $\tan\theta(x,y,z)=B_y(x,y,z)/B_x(x,y,z)$:
\begin{equation}
Q(x,y)\propto\int dz\rho(x,y,z)\cos(2\theta(x,y,z))
\end{equation}
\begin{equation}
U(x,y)\propto\int dz\rho(x,y,z)\sin(2\theta(x,y,z))
\end{equation}
The polarization angle $\Phi=0.5\,{\rm arctan2}(U,Q)$ is then defined correspondingly, which gives a probe of the projected magnetic field in realistic scenarios.
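The synthetic polarization construction above can be sketched directly from the two Stokes integrals; this is an illustrative implementation under the stated assumptions (constant emissivity, arrays laid out as `array[x, y, z]`), with names of our choosing:

```python
import numpy as np

def synthetic_polarization(rho, bx, by):
    """Sketch of the mock Stokes maps: integrate the density cube against
    cos/sin of twice the local field angle along the line of sight, then
    form the polarization angle Phi = 0.5 * arctan2(U, Q)."""
    theta = np.arctan2(by, bx)                 # local field angle, tan(theta) = By/Bx
    Q = (rho * np.cos(2 * theta)).sum(axis=2)  # line-of-sight integration
    U = (rho * np.sin(2 * theta)).sum(axis=2)
    phi = 0.5 * np.arctan2(U, Q)
    return Q, U, phi
```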
The relative orientation between the $90^o$-rotated gradients and the projected magnetic field direction given by the polarization angles is measured by the \textbf{Alignment Measure (AM)} used in our previous studies \citep{GCL17, YL17a}:
\begin{align}
AM=2\left(\langle \cos^{2} \theta_{r}\rangle-\frac{1}{2}\right)
\end{align}
where $\theta_{r}$ is the relative angle between the gradients (rotated by $90^o$) and the direction of the projected magnetic field. The range of the AM is $[-1,1]$: $AM = 1$ means the rotated gradients are parallel to the projected magnetic field, while $AM = -1$ means they are perpendicular to it. We expect $AM\sim 1$ in most scenarios.
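The AM statistic is a one-liner; the following sketch (our naming) implements the definition above and makes the two limiting cases explicit:

```python
import numpy as np

def alignment_measure(angle_grad, angle_pol):
    """AM = 2(<cos^2 theta_r> - 1/2), where theta_r is the relative angle
    between the (90-degree-rotated) gradient map and the
    polarization-inferred field map; both inputs are angles in radians.
    AM = 1 for parallel orientations, AM = -1 for perpendicular ones."""
    dtheta = np.asarray(angle_grad) - np.asarray(angle_pol)
    return 2.0 * (np.mean(np.cos(dtheta) ** 2) - 0.5)
```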
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{figure/AM_log.png}
\caption{\label{fig:pca-noiseC} Five plots showing the AM between the gradients of the eigen-centroids and the projected magnetic field as a function of the eigenvalue rank (the maximum eigenvalue is ranked first, the minimum eigenvalue last), both without noise (pink) and with noise added (blue).}
\end{figure}
\section{Applying VCGs to Eigen-images}
\label{sec:res}
The eigen-images produced by the PCA are the product to which we apply the VCG analysis. To do so, we apply the procedures described in our earlier papers, e.g. \citep{YL17a}, i.e. we compute the sub-block averaged VCGs for each eigen-image and compare the obtained gradient directions with the projected magnetic field directions. Fig. \ref{fig:si-eigen0} illustrates the gradients and the structure of selected eigen-centroids for the cube Ms0.4Ma0.04. One can see that the first eigen-channel map shows a lower level of alignment, while the rest are essentially equally aligned. The structure of the eigen-centroids becomes more filamentary as the rank of the eigen-channel map increases.
We analyze the visual patterns in Fig. \ref{fig:si-eigen0} using the AM-eigenvalue plot. The pink curves in Fig. \ref{fig:pca-noiseC} show how the AM between the gradients from the eigen-images and the projected magnetic field varies with the eigenvalues from the PCA for the numerical cubes listed in Table \ref{tab:sim}. To test the power of the PCA for noise reduction, we add white noise with mean amplitude $0.1 \sigma_C$ to the centroid maps. The results are shown as the blue curves in Fig. \ref{fig:pca-noiseC}. The $x$-axis in Fig. \ref{fig:pca-noiseC} represents the rank of the eigenvalues sorted in decreasing order, i.e. if $\lambda_1>\lambda_2> ... >\lambda_n$, then rank 1 represents $\lambda_1$, rank 2 represents $\lambda_2$, etc. We see that for all simulations we tested, the peak is at rank $\sim 10$. As the rank increases (i.e., for smaller eigenvalues), the AM between the gradients of the eigen-centroids and the magnetic field decreases significantly. In noisy environments (blue curves in Fig. \ref{fig:pca-noiseC}), the AM of the images corresponding to small ranks is approximately the same as in the noiseless case (pink curves), but the AM at higher ranks drops significantly. The experiment in Fig. \ref{fig:pca-noiseC} shows that by applying the PCA before the VGT, we can separate the strong-signal part, which has a low rank in the PCA, from the noisy part, which has a high rank.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figure/Centroid.png}
\caption{\label{fig:meanIC} A plot showing how the eigen-centroid amplitude varies with the rank of the eigenvalues for the synthetic map from the cube Ms0.4Ma0.04.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{figure/susan_distribution.png}
\caption{\label{fig:susan_eigen} A plot showing the AM (top) and the mean centroid amplitude (bottom) versus the rank of the eigenvalues for the PPV cube from observations.}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.98\textwidth]{figure/susan.png}
\caption{\label{fig:susan} The region is from GALFA-HI and spans right ascension $212.5^{o}$ to $265^{o}$ and declination $19.1^{o}$ to $38.3^{o}$, stretching from b $= 30^{o}$ above the Galactic plane to b $=81.7^{o}$, nearly the Galactic zenith. We compare the gradients obtained from the VCGs (red lines) with the Planck polarization data (black lines). Note that each panel uses its own color scale.}
\end{figure*}
We also notice that the AM for the images with ranks in the range $\sim 1-5$ is generally smaller than that for ranks in the range $\sim 10-15$. The reason is that the largest velocity dispersion $v^2$ extracted from the PCA corresponds to the largest-scale eddies along the line of sight, which are affected by the energy injection. Taking into account that for strong Alfvenic turbulence $v^2\sim l^{2/3}$ (GS95, LV99), the images with ranks of about $10-15$ correspond to the turbulent eddies in the inertial range of our numerical cubes ($k_{\rm inertial} \approx 10-30$). In fact, when we examine the eigen-centroid amplitudes from the cube Ms0.4Ma0.04 (Fig. \ref{fig:meanIC}), we see that the amplitude becomes insignificant for ranks $>20$: the amplitude of the eigen-centroids with rank higher than $20$ is only $0.1-0.01$ of that of the first few eigen-centroids. Therefore, to use the VCG$_{PCA}$ to its full potential, it is advantageous to remove both the largest eigenvalues and those with rank $>20$ to obtain the best result in magnetic field tracing.
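The rank selection described above amounts to a band-pass in eigen-rank space. A hedged sketch (our naming and cut-offs, illustrative only): reconstruct the PPV cube keeping only the PCA modes in a chosen rank band, discarding both the injection-scale modes and the noise-dominated tail.

```python
import numpy as np

def rank_filtered_ppv(ppv, lo=5, hi=20):
    """Sketch of eigen-rank band-pass filtering.  Keeps PCA modes with
    ranks in [lo, hi] (1-indexed, eigenvalues sorted in decreasing
    order) and zeroes the rest; layout assumed ppv[x, y, v]."""
    nx, ny, nv = ppv.shape
    flat = ppv.reshape(nx * ny, nv)
    S = flat.T @ flat                       # channel covariance (no mean term)
    lam, u = np.linalg.eigh(S)
    u = u[:, np.argsort(lam)[::-1]]         # eigenvectors by descending rank
    band = u[:, lo - 1:hi]                  # ranks lo..hi
    # Project the channel vectors onto the band and reconstruct.
    return (flat @ band @ band.T).reshape(nx, ny, nv)
```

Keeping the full band (`lo=1`, `hi=nv`) reproduces the original cube, since the eigenvectors form an orthonormal basis; narrowing the band implements the filtering recommended above.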
\section{Application to observations}
\label{sec:obs}
To test our recipe, we use the well-studied region from \cite{2015PhRvL.115x1302C}; further information can be found in \cite{susantail}. The region spans right ascension (R.A.) $212.5^{o}$ to $265^{o}$ and declination (DEC.) $19.1^{o}$ to $38.3^{o}$, covering a substantial HI region with different physical conditions. The HI cube has 41 velocity channels, each $\sim 3\,{\rm km\,s^{-1}}$ wide. In previous studies \citep{YLL18,LYH18} we explored $M_s$ and $M_A$ in the same region, showing that the region is supersonic and sub-Alfvenic ($M_A\sim 0.75$), which is close to the conditions in Table \ref{tab:sim}. We then use the same strategy as in \S \ref{sec:res} to analyze the gradient orientation with the PCA.
We apply the PCA to the selected region and choose the $2^{nd}$ and $10^{th}$ eigen-centroid maps based on the results in Fig. \ref{fig:pca-noiseC}. Since the PCA eigen-rank is similar to the wavenumber in spectral analysis, this choice should not be affected by the short numerical inertial range in our simulations, as we are choosing the first few eigenvalues for the analysis. We show the magnetic field tracing with the VCGs and compare it to the magnetic field directions traced by the 353 GHz Planck polarization data in Fig. \ref{fig:susan} for the two eigen-centroid maps corresponding to $\lambda_2$ and $\lambda_{10}$. The corresponding AM-eigenvalue variation is shown in Fig. \ref{fig:susan_eigen} and has the same trend as Fig. \ref{fig:pca-noiseC}. The $10^{th}$ eigen-map has a clearly better AM than the $2^{nd}$, which is consistent with the study in \S \ref{sec:res}.
\section{Discussion}
\subsection{Studying media magnetization}
The magnetization of the interstellar medium can be characterized by the Alfven Mach number $M_A$. By itself, $M_A$ is a critical parameter whose knowledge is essential for understanding vital astrophysical processes, including the transport and acceleration of cosmic rays (see \citealt{2014ApJ...784...38L}) and the transport of heat (see \citealt{L06}). With known $M_A$ and known velocity dispersion, one can obtain the value of the interstellar magnetic field (see \citealt{LYH18}).
The technique of studying the media's $M_A$ using velocity gradients was suggested in \citet{LYH18}. This technique was tested with numerical simulations and applied to 21 cm data. The decrease of the noise that we observe when applying the PCA for the initial filtering of the data is valuable for the studies of $M_A$. We plan to demonstrate this elsewhere.
\subsection{Obtaining 3D structure of magnetic field}
The employment of eigenvalue decompositions through the PCA also provides a way to study the three-dimensional (3D) magnetic field. As we can isolate the contributions of turbulent eddies along the line of sight with the PCA, we can stack the predictions from different eigen-centroids and construct a 3D tomography by sorting along the eigenvalue axis. In a separate development, the gradients of synchrotron intensity \citep{Letal17} and polarization intensity \citep{LY18b} have been used to construct the 3D magnetic field morphology with multi-frequency measurements. A similar idea of constructing the 3D magnetic field morphology with the VGT on spectroscopic data was tested in \cite{GL18} for the case when the galactic rotation curve is available. With these 3D field tracing methods available, the productive application of the VGT and its synergy with different techniques will shift the paradigm of studying magnetic fields from polarimetry measurements to studies of gradients of both interferometric and spectroscopic data.
\subsection{Application within other gradient techniques}
We expect the PCA filtering to be useful when applied with other velocity gradient techniques, e.g., with VChGs. However, the application of the procedure is not limited to the velocity gradients.
As explained, e.g., in \cite{LY18b}, the VGT is one of the techniques that employ the properties of MHD turbulence to study magnetic fields. Magnetic fluctuations enter Alfvenic turbulence symmetrically to velocity fluctuations. Therefore both synchrotron intensity gradients (see \citealt{Letal17}) and synchrotron polarization gradients \citep{LY18b} can be used to trace the magnetic field and study $M_A$. Naturally, the improvement that we suggest here, pre-filtering the images using the PCA, seems an attractive possibility for these synchrotron-based techniques.
We would like to mention that while the statistics of density fluctuations in MHD turbulence \citep{2005ApJ...624L..93B,2007ApJ...658..423K} do not closely follow the statistics of velocity and magnetic field fluctuations, especially at high sonic Mach numbers, the Intensity Gradient Technique (IGT)\footnote{As mentioned earlier, one should distinguish the IGT from the Histogram of Relative Orientation (HRO) \citep{2013ApJ...774..128S}. We stress that the former employs the technology that we developed for the velocity and magnetic gradients and therefore can provide the spatial distribution of magnetic fields, shocks, regions of gravitational collapse, etc. (see \citealt{YL17b}, \citealt{LY18a}). Importantly, the IGT does not require any polarization data to obtain this information. The HRO, on the contrary, compares the relative orientation of the density gradients and the polarization directions as a function of column density.} is also very informative. We also expect improvements for the IGT when the PCA pre-filtering is employed.
\section{Summary}
\label{sec:conclusion}
In the present paper, we have utilized filtering of images using the PCA. We have shown, using both synthetic and observational maps, that the extraction of eigen-centroids with rank number $\sim 10$ can effectively probe the direction of the magnetic field with a very high AM. As a result, for studies of the projected magnetic field, the improved technique provides a higher accuracy of magnetic field tracing.
\section*{Acknowledgements}
AL acknowledges the support of NSF grants DMS 1622353, AST 1715754 and 1816234. This publication utilizes data from Galactic ALFA HI (GALFA HI) survey data set obtained with the Arecibo L-band Feed Array (ALFA) on the Arecibo 305m telescope. The Arecibo Observatory is operated by SRI International under a cooperative agreement with the National Science Foundation (AST-1100968), and in alliance with Ana G. Méndez-Universidad Metropolitana, and the Universities Space Research Association. The GALFA HI surveys have been funded by the NSF through grants to Columbia University, the University of Wisconsin, and the University of California.
\section{\@startsection{section}{1}%
\z@{-1.4\linespacing\@plus-.5\linespacing}{.8\linespacing}%
{\normalfont\bfseries\Large}}
\def\subsection{\@startsection{subsection}{2}%
\z@{-.8\linespacing\@plus-.3\linespacing}{.5\linespacing\@plus.2\linespacing}%
{\normalfont\bfseries\large}}
\def\subsubsection{\@startsection{subsubsection}{3}%
\z@{.7\linespacing\@plus.2\linespacing}{-1.5ex}%
{\normalfont\itshape}}
\def\paragraph{\@startsection{paragraph}{4}%
\z@{.7\linespacing\@plus.2\linespacing}{-1.5ex}%
{\normalfont\itshape}}
\def\@secnumfont{\bfseries}
\renewcommand\contentsnamefont{\bfseries}
\def\@starttoc#1#2{\begingroup
\setTrue{#1}%
\par\removelastskip\vskip\z@skip
\@startsection{}\@M\z@{\linespacing\@plus\linespacing}%
{.5\linespacing}
\contentsnamefont}{#2}%
\ifxContents#2%
\else \addcontentsline{toc}{section}{#2}\fi
\makeatletter
\@input{\jobname.#1}%
\if@filesw
\@xp\newwrite\csname tf@#1\endcsname
\immediate\@xp\openout\csname tf@#1\endcsname \jobname.#1\relax
\fi
\global\@nobreakfalse \endgroup
\addvspace{32\p@\@plus14\p@}%
\let\tableofcontents\relax
}
\defContents{Contents}
\def\l@section{\@tocline{2}{.5ex}{0mm}{5pc}{}}
\def\l@subsection{\@tocline{2}{0pt}{2em}{5pc}{}}
\makeatother
\fi
\def\mathcal{A}{\mathcal{A}}
\def\mathcal{B}{\mathcal{B}}
\def\mathcal{F}{\mathcal{F}}
\def\mathcal{G}{\mathcal{G}}
\def\mathcal{H}{\mathcal{H}}
\def\mathcal{K}{\mathcal{K}}
\def\mathcal{V}{\mathcal{V}}
\def\mathcal{L}{\mathcal{L}}
\def\mathcal{W}{\mathcal{W}}
\def\mathbb{N}{\mathbb{N}}
\def\mathbb{Z}{\mathbb{Z}}
\def\mathcal{O}{\mathcal{O}}
\def\mathbb{P}{\mathbb{P}}
\def\mathbb{Q}{\mathbb{Q}}
\def\mathbb{R}{\mathbb{R}}
\def\mathbb{C}{\mathbb{C}}
\def\partial{\partial}
\def\mathfrak{s}{\mathfrak{s}}
\def\operatorname{sign}{\operatorname{sign}}
\def\operatorname{Spin}{\operatorname{Spin}}
\def\partial{\partial}
\def\+{\oplus}
\def\smallsetminus{\smallsetminus}
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{theoremalpha}{Theorem}\def\Alph{theoremalpha}{\Alph{theoremalpha}}
\newtheorem{corollaryalpha}[theoremalpha]{Corollary}\def\thecorollaryalpha{\Alph{corollaryalpha}}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem*{claim}{Claim}
\newtheorem*{zerosurgeryobstruction}{Zero Surgery Obstruction}
\newtheorem*{Homology Cobordism Invariance}{Homology Cobordism Invariance}
\theoremstyle{definition}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{question}{Question}
\newtheorem{example}[theorem]{Example}
\newtheorem{remark}[theorem]{Remark}
\newtheorem*{acknowledgement}{Acknowledgements}
\numberwithin{equation}{section}
\newtheorem*{organization}{Organization of the paper}
\DeclareMathOperator{\arf}{Arf}
\newcommand{{\mathbb Z}}{{\mathbb Z}}
\def\mathchoice{\longrightarrow}{\rightarrow}{\rightarrow}{\rightarrow}{\mathchoice{\longrightarrow}{\rightarrow}{\rightarrow}{\rightarrow}}
\makeatletter
\newcommand{\shortxra}[2][]{\ext@arrow 0359\rightarrowfill@{#1}{#2}}
\def\longrightarrowfill@{\arrowfill@\relbar\relbar\longrightarrow}
\newcommand{\longxra}[2][]{\ext@arrow 0359\longrightarrowfill@{#1}{#2}}
\renewcommand{\xrightarrow}[2][]{\mathchoice{\longxra[#1]{#2}}%
{\shortxra[#1]{#2}}{\shortxra[#1]{#2}}{\shortxra[#1]{#2}}}
\makeatother
\begin{document}
\title [3-manifolds that cannot be obtained by 0-surgery on a knot]
{Irreducible 3-manifolds that cannot be obtained by 0-surgery on a knot}
\author{Matthew Hedden}
\address{Department of Mathematics\\
Michigan State University\\
MI 48824\\
USA
}
\email{mhedden@math.msu.edu}
\author{Min Hoon Kim}
\address{
School of Mathematics\\
Korea Institute for Advanced Study \\
Seoul 02455\\
Republic of Korea
}
\email{kminhoon@kias.re.kr}
\author{Thomas E.\ Mark}
\address{
Department of Mathematics\\
University of Virginia\\
VA 22903\\
USA
}
\email{tmark@virginia.edu}
\author{Kyungbae Park}
\address{
School of Mathematics\\
Korea Institute for Advanced Study \\
Seoul 02455\\
Republic of Korea
}
\email{kbpark@kias.re.kr}
\keywords{}
\begin{abstract}We give two infinite families of examples of closed, orientable, irreducible 3-manifolds $M$ such that $b_1(M)=1$ and $\pi_1(M)$ has weight 1, but $M$ is not the result of Dehn surgery along a knot in the 3-sphere. This answers a question of Aschenbrenner, Friedl and Wilton, and provides the first examples of irreducible manifolds with $b_1=1$ that are known not to be surgery on a knot in the 3-sphere. One family consists of Seifert fibered 3-manifolds, while each member of the other family is not even homology cobordant to any Seifert fibered 3-manifold. None of our examples are homology cobordant to any manifold obtained by Dehn surgery along a knot in the 3-sphere.\end{abstract}
\maketitle
\section{Introduction}
It is a well-known theorem of Lickorish \cite{Lickorish:1962-1} and Wallace \cite{Wallace:1960-1} that every closed, oriented 3-manifold is obtained by Dehn surgery on a link in the three-sphere. This leads one to wonder how the complexity of a $3$-manifold is reflected in the links which yield it through surgery, and conversely. A natural yet difficult goal in this vein is to determine the minimum number of components of a link on which one can perform surgery to produce a given 3-manifold. In particular, one can ask which 3-manifolds are obtained by Dehn surgery on a \emph{knot} in $S^3$. If, following \cite{Auckly:1997-1}, we define the {\it surgery number} $DS(Y)$ of a closed 3-manifold $Y$ to be the smallest number of components of a link in $S^3$ yielding $Y$ by (Dehn) surgery, we ask for conditions under which $DS(Y) >1$.
The fundamental group provides some information on this problem. Indeed, if a closed, oriented 3-manifold $Y$ has $DS(Y) = 1$, then the van Kampen theorem implies that $\pi_1(Y)$ is normally generated by a single element (which is represented by a meridian of $K$). In particular, $\pi_1(Y)$ has weight one and $H_1(Y;\mathbb{Z})$ is cyclic. (Recall that the \emph{weight} of a group $G$ is the minimum number of normal generators of $G$.)
A more sophisticated topological obstruction to being surgery on a knot comes from essential 2-spheres in 3-manifolds. While Dehn surgery on a knot can produce a non-prime 3-manifold, the \emph{cabling conjecture} \cite[Conjecture~A]{Acuna-Short:1986} asserts that this is quite rare and occurs only in the case of $pq$-surgery on a $(p,q)$-cable knot. It would imply, in particular, that a non-prime 3-manifold obtained by surgery on a knot in $S^3$ has only two prime summands, one of which is a lens space. Deep work of Gordon-Luecke \cite[Corollary~3.1]{Gordon-Luecke:1989-1} and Gabai \cite[Theorem~8.3]{Gabai:1987-3} verifies this in the case of homology spheres and homology $S^1\times S^2$'s, respectively, showing more generally that if such a manifold is obtained by surgery on a non-trivial knot, then $Y$ is irreducible.
It is natural to ask whether these conditions are sufficient to conclude that $Y$ is obtained from $S^3$ by Dehn surgery on a knot. In the case of homology 3-spheres, Auckly \cite{Auckly:1997-1} used Taubes' end-periodic diagonalization theorem \cite[Theorem~1.4]{Taubes} to give examples of hyperbolic, hence irreducible, homology 3-spheres with $DS(Y)>1$. It remains unknown, however, if any of Auckly's examples have weight-one fundamental group. More recently, Hom, Karakurt and Lidman \cite{Hom-Karakurt-Lidman:2016-1} used Heegaard Floer homology to obstruct infinitely many irreducible Seifert fibered homology 3-spheres with weight-one fundamental groups from being obtained by Dehn surgery on a knot. In \cite{Hom-Lidman:2016-1}, Hom and Lidman gave infinitely many such hyperbolic examples, as well as infinitely many examples with arbitrary JSJ decompositions. Currently, we do not know whether the examples of \cite{Hom-Lidman:2016-1} have weight-one fundamental groups or not.
It is interesting to note, however, that a longstanding open problem of Wiegold (\cite[Problem 5.52]{Mazurov-Khukhro:2014-1} and \cite[Problem 15]{Gersten:1987-1}) asks whether {\it every} finitely presented perfect group has weight one. The question would be answered in the negative if there were a homology 3-sphere whose fundamental group has weight at least $2$.
Using the $\mathbb{Q}/\mathbb{Z}$-valued linking form and their surgery formula for the Casson invariant, Boyer and Lines \cite[Theorem~5.6]{Boyer-Lines:1990-1} gave infinitely many irreducible homology lens spaces which have weight-one fundamental group, but are not obtained by Dehn surgery on a knot. In \cite{Hoffman-Walsh:2015-1}, Hoffman and Walsh gave infinitely many hyperbolic examples of this sort.
For the case that $Y$ is a homology $S^1\times S^2$, significantly less is known. Aschenbrenner, Friedl and Wilton \cite{Aschenbrenner-Friedl-Wilton:2015-1} asked the following question.
\begin{question}[{\cite[Question 7.1.5]{Aschenbrenner-Friedl-Wilton:2015-1}}]\label{question:AFW}Let $M$ be a closed, orientable, irreducible $3$-manifold such that $b_1(M) = 1$ and $\pi_1(M)$ has weight $1$. Is $M$ the result of Dehn surgery along a knot in $S^3$?
\end{question}
Note that if $M$ as in the question does arise from surgery on a knot in $S^3$ then necessarily the surgery coefficient is zero.
The purpose of this paper is to give two families of examples that show the answer to Question \ref{question:AFW} is negative. The first family shows that there exist homology $S^1\times S^2$'s not smoothly homology cobordant to any Seifert manifold or to zero surgery on a knot; we recall that two closed, oriented 3-manifolds $M$ and $N$ are \emph{homology cobordant} if there is a smooth oriented cobordism $W$ between them for which the inclusion maps $M\hookrightarrow W \hookleftarrow N$ induce isomorphisms on integral homology.
\begin{theoremalpha}\label{theorem:A} The family of closed, oriented $3$-manifolds $\{M_k\}_{k\geq1}$ described by the surgery diagram in Figure \ref{figure:Hedden-Mark} satisfies the following.
\begin{enumerate}
\item $M_k$ is irreducible with first homology ${\mathbb Z}$ and $\pi_1(M_k)$ of weight $1$.
\item $M_k$ is not the result of Dehn surgery along a knot in $S^3$.
\item $M_k$ is not homology cobordant to Dehn surgery along a knot in $S^3$.
\item\label{item:Seifert} $M_k$ is not homology cobordant to any Seifert fibered $3$-manifold.
\item $M_k$ is not homology cobordant to $M_l$ if $k\neq l$.
\end{enumerate}
\end{theoremalpha}
\begin{figure}[htb!]
\centering
\includegraphics[scale=1]{figure1}
\caption{A surgery diagram of $M_k$ ($k\geq 1)$.}
\label{figure:Hedden-Mark}
\end{figure}
The first property of $M_k$ is relatively elementary; in particular it follows from some general topological results about spliced manifolds. As we show in the next section, any splice of non-trivial knot complements in the 3-sphere is irreducible and has weight one fundamental group, from which our claims about $M_k$ will follow.
To show that $M_k$ is not the result of Dehn surgery along a knot in $S^3$, we use a Heegaard Floer theoretic obstruction developed by
Ozsv\'{a}th and Szab\'{o} in \cite{Ozsvath-Szabo:2003-2}. They showed that certain numerical ``correction terms" $d_{1/2}$ and $d_{-1/2}$ satisfy
\begin{equation}\label{dconstraints}
d_{1/2}(M)\leq \tfrac{1}{2}\quad\mbox{and}\quad d_{-1/2}(M)\geq -\tfrac{1}{2}
\end{equation}
whenever $M$ is obtained from 0-surgery on a knot in $S^3$ (see Theorem~\ref{thm:OS}). We will show that $d_{-1/2}(M_k)=-\frac{5}{2}$, and hence $M_k$ is not the result of Dehn surgery on a knot in~$S^3$. The correction terms are actually invariants of homology cobordism, from which it follows that none of the $M_k$ are even homology cobordant to surgery on a knot in~$S^3$. This feature of our examples distinguishes them from the analogous gauge and Floer theoretic results for homology spheres mentioned above. Indeed, the techniques of Auckly or of Hom, Karakurt, and Lidman are not invariant under homology cobordism; in the former, this is due to a condition on $\pi_1$ in Taubes' result on end-periodic manifolds, and in the latter because the reduced Floer homology is not invariant under homology cobordism (though see \cite{Hendricks-Hom-Lidman:2018} for some results in that direction).
To show that our examples $M_k$ are not homology cobordant to any Seifert fibered 3-manifold, we prove a general result, Theorem~\ref{theorem:dofSeifert}, about the correction terms of a Seifert fibered 3-manifold $M$ with first homology $\mathbb{Z}$: we show that any Seifert manifold with the homology of $S^1\times S^2$ satisfies the same constraints \eqref{dconstraints} as the result of 0-surgery does. Part (\ref{item:Seifert}) of Theorem~\ref{theorem:A} follows immediately. We remark that it was only recently shown by Stoffregen (preceded by unpublished work of Fr{\o}yshov) that there exist homology 3-spheres that are not homology cobordant to Seifert manifolds, or equivalently that not every element of the integral homology cobordism group is represented by a Seifert manifold. To be precise, Stoffregen showed in \cite[Corollary~1.11]{Stoffregen:2015-1} that $\Sigma(2,3,11)\#\Sigma(2,3,11)$ is not homology cobordant to any Seifert fibered homology 3-sphere by using homology cobordism invariants from Pin(2)-equivariant Seiberg-Witten Floer homology.
\subsubsection*{Hyperbolic examples}
For any closed, orientable 3-manifold $M$ with a chosen Heegaard splitting, Myers gives an explicit homology cobordism from $M$ to a hyperbolic, orientable 3-manifold \cite{Myers:1983-1}. By using these homology cobordisms, we can obtain hyperbolic, orientable 3-manifolds $Z_k$ with first homology $\mathbb{Z}$ which are homology cobordant to~$M_k$. Since $d_{-1/2}$ is a homology cobordism invariant, $Z_k$ is also not the result of Dehn surgery along a knot in $S^3$ by Theorem~\ref{thm:OS}.
\begin{corollaryalpha}There is a family of closed, orientable irreducible $3$-manifolds $\{Z_k\}_{k\geq 1}$ satisfying the following.
\begin{enumerate}
\item $Z_k$ is hyperbolic with first homology ${\mathbb Z}$.
\item $Z_k$ is not the result of Dehn surgery along a knot in $S^3$.
\item $Z_k$ is not homology cobordant to any Seifert fibered $3$-manifold.
\item $Z_k$ is not homology cobordant to $Z_l$ if $k\neq l$.
\end{enumerate}
\end{corollaryalpha}
Myers' cobordisms may not preserve the weight of the fundamental groups at hand. If $\pi_1(Z_k)$ has weight one, then $Z_k$ would provide a negative answer to the following question.
\begin{question}Let $M$ be a closed, orientable, hyperbolic $3$-manifold with $b_1(M) = 1$ and $\pi_1(M)$ of weight $1$. Is $M$ the result of Dehn surgery along a knot in $S^3$?
\end{question}
We remark that the question is also open for integral homology 3-spheres.
\subsubsection*{Seifert examples}
From the previous remarks, it follows that the correction terms $d_{\pm 1/2}$ cannot show that a Seifert manifold with the homology of $S^1\times S^2$ has $DS> 1$. Using an obstruction based on the classical Rohlin invariant instead, we prove the following.
\begin{theoremalpha}\label{theorem:B}
Let $\{N_k\}_{k\geq 1}$ be the family of $3$-manifolds described by the surgery diagram in Figure \ref{figure:OS}. Then
\begin{enumerate}
\item $N_k$ is irreducible with first homology ${\mathbb Z}$ and $\pi_1(N_k)$ of weight $1$.
\item $N_k$ is a Seifert manifold over $S^2$ with three exceptional fibers.
\item If $k$ is odd, $N_k$ is not obtained by Dehn surgery on a knot in $S^3$.
\item If $k$ is odd, $N_k$ is not homology cobordant to Dehn surgery along a knot in $S^3$.
\end{enumerate}
\end{theoremalpha}
\begin{figure}[htb!]
\centering
\includegraphics[scale=1]{figure2}
\caption{A surgery diagram of $N_k$ $(k\geq 1)$.}
\label{figure:OS}
\end{figure}
Independent of questions involving weight or homology cobordism, our results provide the first known examples of irreducible homology $S^1\times S^2$'s which are not homeomorphic to surgery on a knot in $S^3$. To clarify the literature, it is worth mentioning here that in \cite[Section~10.2]{Ozsvath-Szabo:2003-2} Ozsv\'{a}th and Szab\'{o} argued based on the correction term obstruction that the manifold $N_1$ shown in Figure \ref{figure:OS} is not the result of Dehn surgery on a knot in $S^3$. Unfortunately, as we mentioned above, since $N_1$ is Seifert fibered the correction terms do not actually provide obstructions to $DS = 1$. We point out in Section \ref{section:Ozsvath-Szabo} where their calculation goes astray.
\begin{organization} In the next section, we establish some topological results on spliced manifolds which we will apply to our examples $M_k$. In Section~\ref{section:background}, we briefly recall the relevant background on Heegaard Floer correction terms and the zero surgery obstruction of Ozsv\'{a}th and Szab\'{o}. Section~\ref{section:dofM_k} is devoted to the computation of the correction terms of $M_k$, whose values imply that the $M_k$ are not zero surgeries on knots in $S^3$ and have the stated homology cobordism properties. In Section \ref{section:Seifert} we prove the estimates on the correction terms of Seifert manifolds and finish the proof of Theorem \ref{theorem:A}. Section \ref{section:rokhlin} shows how the Rohlin invariant gives a different obstruction to $DS = 1$, and in Section \ref{section:Ozsvath-Szabo} we prove Theorem \ref{theorem:B}.
\end{organization}
\begin{acknowledgement}The authors would like to thank Marco Golla and Jennifer Hom for their helpful comments and encouragement. Especially, Marco Golla gave several valuable comments on the correction terms of $N_1$ which are reflected in Section~\ref{section:Ozsvath-Szabo}. Part of this work was done while Min Hoon Kim was visiting Michigan State University, and he thanks MSU for its generous hospitality and support. We also thank the University of Virginia for supporting an extended visit by Matthew Hedden in 2007, which led to the discovery of the manifolds in Theorem \ref{theorem:A}. Matthew Hedden's work on this project was partially supported by NSF CAREER grant DMS-1150872, DMS-1709016, and an NSF postdoctoral fellowship. Min Hoon Kim was partially supported by the POSCO TJ Park Science Fellowship. Thomas Mark was supported in part by a grant from the Simons Foundation (523795, TM). Kyungbae Park was partially supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (F2018R1C1B6008364).
\end{acknowledgement}
\section{Some topological preliminaries}\label{section:topology}
In this section we verify the topological features---irreducibility and weight one fundamental group---of the manifolds $M_k$ in Theorem \ref{theorem:A}. These features are consequences of the fact that the manifolds are obtained by a splicing operation. Thus we establish some general results for manifolds obtained through this construction.
Given two oriented $3$-manifolds with torus boundary, $X_1,X_2$, we will refer to any manifold obtained from them by identifying their boundaries by an orientation reversing diffeomorphism as a {\em splice} of $X_1$ and $X_2$. Of course the homeomorphism type of a splice depends intimately on the choice of diffeomorphism, but this choice will be irrelevant for the topological results that follow. Note that with this definition Dehn filling is a splice with the unknot complement in $S^3$. We begin with the following observation, which indicates that the manifolds appearing in Theorem \ref{theorem:A} are splices.
\begin{proposition}\label{prop:splice} Let $L$ be the result of connected summing the components of the Hopf link with knots $K_1$ and $K_2$, respectively. Then any integral surgery on $L$ is a splice of the complements of $K_1$ and $K_2$.
\end{proposition}
\begin{proof} The connected sum operation can be viewed as a splicing operation. More precisely, the connected sum of a link component with a knot $K$ is obtained by removing a neighborhood of the meridian of the component and gluing the complement of $K$ to it by the diffeomorphism which interchanges longitudes and meridians. Thus the result of integral surgery on $L$ is diffeomorphic to integral surgery on the Hopf link, followed by the operation of gluing the complements of $K_1$ and $K_2$ to the complements of the meridians of the Hopf link. But the meridians of the components of the Hopf link, viewed within the surgered manifold, are isotopic to the cores of the surgery solid tori since the surgery slopes are integral. Thus, upon removing the meridians, we arrive back at the complement of the Hopf link, which is homeomorphic to $T^2\times [0,1]$. The manifold at hand, then, is obtained by gluing the boundary tori of the complements of $K_1$ and $K_2$ to the boundary components of a thickened torus. The result follows immediately.
\end{proof}
We next prove that splices of knot complements in the 3-sphere have fundamental groups of weight one. This follows from a basic result about pushouts of groups.
\begin{proposition}\label{prop:pushoutweight} Suppose that $G_1$ and $G_2$ are groups which are normally generated by elements $g_1$ and $g_2$, respectively, and that $\phi_i:H\rightarrow G_i$ are homomorphisms. If the image of $\phi_1$ contains $g_1$, then the pushout $G_1\ast_H G_2$ is normally generated by a single element\textup{;} namely, the image of $g_2$ under the defining map $G_2\rightarrow G_1\ast_H G_2$.
\end{proposition}
\begin{proof} Since $g_1$ lies in the image of $\phi_1$, we may choose $x\in H$ with $\phi_1(x)=g_1$. In the pushout, $g_1=\phi_1(x)=\phi_2(x)$. Now $\phi_2(x)\in G_2$, hence can be written as a product of conjugates of $g_2$. Since $g_1$ normally generates $G_1$, it follows that $g_2$ normally generates the pushout.
\end{proof}
It follows at once from van Kampen's theorem that any splice of complements of knots in the 3-sphere has weight one fundamental group. Indeed, the Wirtinger presentation shows that the fundamental group of a knot complement has weight one, normally generated by a meridian. The homotopy class of the meridian is represented by a curve on the boundary, thereby verifying the hypothesis of the proposition. Of course this reasoning shows more generally that the splice of a knot complement in $S^3$ with {\em any} manifold with torus boundary and weight one fundamental group also has fundamental group of weight one.
The discussion to this point shows that the manifolds $M_k$, being splices of knot complements, have weight one fundamental groups. We turn our attention to their irreducibility. As above, we will deduce this property from a more general result about splicing.
\vskip0.1in
Recall that a $3$-manifold is {\em irreducible} if any smoothly embedded $2$-sphere bounds a $3$-ball, and a surface $T$ in a 3-manifold is {\em incompressible} if any embedded disk $D$ in the manifold for which $D\cap T=\partial D$ has the property that $\partial D$ bounds a disk in $T$ as well.
\begin{proposition}\label{prop:irreduciblesplice} Let $X_1$, $X_2$ be irreducible manifolds, each with an incompressible torus as boundary. Then any splice of $X_1$ and $X_2$ is irreducible.
\end{proposition}
The proposition applies to the complements of non-trivial knots in the $3$-sphere: these are irreducible by Alexander's characterization of the $3$-sphere \cite{Alexander:1924} (namely, that any smooth $2$-sphere in $S^3$ separates it into two pieces, each diffeomorphic to a ball), and have incompressible boundary precisely because the knots are non-trivial.
\begin{proof} The proposition follows from a standard ``innermost disk" argument. More precisely, let $S$ be an embedded $2$-sphere in a splice of $X_1$ and $X_2$, and let $T$ denote the image of the boundary tori, identified within the splice. Then $S$ intersects $T$ in a collection of embedded circles. We claim that we can remove these circles by an isotopy of $S$. This claim would prove the proposition since, after the isotopy, the sphere lies entirely in $X_1$ or $X_2$, where it bounds a ball by hypothesis.
To remove the components of $S\cap T$, consider a disk $D\subset S$ which intersects $T$ precisely in $\partial D$ (a so-called ``innermost disk", which must exist by compactness of $S\cap T$ and the Jordan-Sch{\"o}nflies theorem). Since $D\cap T=\partial D$, the interior of $D$ must lie entirely in one of $X_1$ or $X_2$. Incompressibility of the boundary of these manifolds therefore implies $\partial D$ bounds a disk embedded in $T$. The union of this latter disk with $D$ is an embedded sphere in either $X_1$ or $X_2$, which bounds a ball by its irreducibility. The ball can be used to isotope $S$ and remove the circle of intersection. Inducting on the number of such circles implies our claim.
\end{proof}
\section{Heegaard Floer theory and Ozsv\'{a}th-Szab\'{o}'s 0-surgery obstruction}\label{section:background}
In this section we briefly recall the Heegaard Floer correction terms and an obstruction they yield, due to Ozsv\'ath and Szab\'o, to a 3-manifold being obtained by $0$-surgery on a knot in $S^3$. For more detailed exposition, we refer the reader to \cite{Ozsvath-Szabo:2003-2}.
Let $\mathbb{F}$ be the field with two elements, and $\mathbb{F}[U]$ be the polynomial ring over $\mathbb{F}$. Let $Y$ be a closed oriented 3-manifold endowed with a spin$^c$ structure $\mathfrak{s}$. Heegaard Floer homology associates to the pair $(Y,\mathfrak{s})$ several relatively graded modules over $\mathbb{F}[U]$, $HF^\circ(Y,\mathfrak{s})$, where $\circ\in\{-,+,\infty\}$. These Heegaard Floer modules are related by a long exact sequence:
\begin{equation*}
\cdots\rightarrow HF^-(Y,\mathfrak{s})\xrightarrow{\iota} HF^\infty(Y,\mathfrak{s})\xrightarrow{\pi} HF^+(Y,\mathfrak{s})\rightarrow\cdots.
\end{equation*}
The reduced Floer homology, denoted $HF^+_{red}(Y,\mathfrak{s})$, can be defined either as the cokernel of $\pi$ or the kernel of $\iota$ with grading shifted up by one.
In the case that the spin$^c$-structure $\mathfrak{s}$ has torsion first Chern class, the relative grading of the corresponding Floer homology modules can be lifted to an \emph{absolute} $\mathbb{Q}$-grading. In particular,
$HF^\circ(Y,\mathfrak{s})$ is an absolutely $\mathbb{Q}$-graded $\mathbb{F}[U]$-module for any $\circ\in\{-,+,\infty\}$.
For a rational homology 3-sphere $Y$, every spin$^c$ structure will have torsion Chern class, and we define the \emph{correction term} $d(Y,\mathfrak{s})\in \mathbb{Q}$ to be the minimal $\mathbb{Q}$-grading of any element in $HF^+(Y,\mathfrak{s})$ in the image of $\pi$. A structure theorem \cite[Theorem 10.1]{Ozsvath-Szabo:2004-2} for the Floer modules states that $HF^\infty(Y,\mathfrak{s})\cong\mathbb{F}[U,U^{-1}]$, from which it follows that \[HF^+(Y,\mathfrak{s})\cong\mathcal{T}^+_{d(Y,\mathfrak{s})}\oplus HF^+_{red}(Y,\mathfrak{s}),\]
where $\mathcal{T}^+_d$ denotes the $\mathbb{Q}$-graded $\mathbb{F}[U]$-module isomorphic to $\mathbb{F}[U,U^{-1}]/U\mathbb{F}[U]$ in which the non-trivial element with lowest grading occurs in grading $d\in \mathbb{Q}$. Multiplication by $U$ decreases the $\mathbb{Q}$-grading by $2$.
A 3-manifold $Y$ with $H_1(Y;\mathbb{Z})\cong\mathbb{Z}$ has a unique spin$^c$ structure with torsion (zero) Chern class; we denote this spin$^c$ structure by $\mathfrak{s}_0$. In this setting, the structure theorem states that $HF^\infty(Y,\mathfrak{s}_0)\cong\mathbb{F}[U,U^{-1}]\oplus\mathbb{F}[U,U^{-1}]$, with the two summands supported in gradings $\frac{1}{2}$ and $-\frac{1}{2}$ modulo $2$, respectively. We define $d_{1/2}(Y)$ and $d_{-1/2}(Y)$ to be the minimal grading of any element in the image of $\pi$ in $HF^+(Y,\mathfrak{s}_0)$ supported in the grading $\frac{1}{2}$ and $-\frac{1}{2}$ modulo $2$, respectively. It follows that
\begin{equation*}
HF^+(Y,\mathfrak{s}_0)\cong\mathcal{T}^+_{d_{-1/2}(Y)}\oplus\mathcal{T}^+_{d_{1/2}(Y)}\oplus HF^+_{red}(Y,\mathfrak{s}_0).
\end{equation*}
The key features of the correction terms are certain constraints they place on negative semi-definite 4-manifolds bounded by a given 3-manifold, \cite[Theorem~9.11]{Ozsvath-Szabo:2003-2}. Applying these constraints to the 4-manifold obtained from a homology cobordism by drilling out a neighborhood of an arc connecting the boundaries yields the following (compare \cite[Proposition~4.5]{Levine-Ruberman:2014-1}):
\begin{Homology Cobordism Invariance} If $Y$ and $Y'$ are homology cobordant $3$-manifolds with first homology $\mathbb{Z}$, then $d_{\pm 1/2}(Y)=d_{\pm 1/2}(Y')$.
\end{Homology Cobordism Invariance}
The relevance to the surgery question at hand also becomes apparent: if a 3-manifold is obtained by 0-surgery on a knot $K$ in $S^3$, then it bounds a negative semi-definite 4-manifold obtained by attaching a $0$-framed 2-handle to the 4-ball along $K$. Coupling this observation with the constraints mentioned above, and using the fact that $d_{-1/2}(Y)=-d_{1/2}(-Y)$ \cite[Proposition~4.10]{Ozsvath-Szabo:2003-2}, we get the following obstruction:
\begin{zerosurgeryobstruction}\cite[Corollary 9.13]{Ozsvath-Szabo:2003-2} If $Y$ bounds a homology $S^2\times D^2$ then $d_{1/2}(Y)\leq\frac{1}{2}$ and $d_{-1/2}(Y)\geq -\frac{1}{2}$.
\end{zerosurgeryobstruction}
\noindent The obstruction applies, for instance, if $Y$ is homology cobordant to zero surgery on a knot in a 3-manifold that bounds a smooth contractible 4-manifold.
Drawing on information from the surgery exact triangle, Ozsv\'{a}th and Szab\'{o} \cite[Proposition~4.12]{Ozsvath-Szabo:2003-2} gave a refined statement of the obstruction, which determines the values of the correction terms. We rephrase their result in terms of the non-negative knot invariant $V_0(K)$ introduced by Rasmussen (under the name $h_0(K)$) in \cite{Rasmussen:2003-1}, and used by Ni-Wu \cite{Ni-Wu:2015-1}. To see that the following agrees with the stated reference, we recall that $d(S^3_1(K))=-2V_0(K)$, and that the $d$-invariant of $1$-surgery changes sign under orientation reversal (implying $d(S^3_{-1}(K))=2V_0(\overline{K})$).
\begin{theorem}[{\cite[Proposition 4.12]{Ozsvath-Szabo:2003-2}}]\label{thm:OS}Suppose that $Y$ is obtained by $0$-surgery on a knot $K$ in~$S^3$. Then $d_{1/2}(Y)=\frac{1}{2}-2V_0(K)$ and $d_{-1/2}(Y)=-\frac{1}{2}+2V_0(\overline{K})$ where $\overline{K}$ is the mirror of $K$.
\end{theorem}
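To illustrate the theorem in the simplest non-trivial case, take $K=T_{2,3}$, the right-handed trefoil. Using the standard values $V_0(T_{2,3})=1$ (equivalently, $d(S^3_1(T_{2,3}))=-2$) and $V_0(\overline{T_{2,3}})=0$, the theorem gives
\begin{align*}
d_{1/2}(S^3_0(T_{2,3}))&=\tfrac{1}{2}-2V_0(T_{2,3})=-\tfrac{3}{2},\\
d_{-1/2}(S^3_0(T_{2,3}))&=-\tfrac{1}{2}+2V_0(\overline{T_{2,3}})=-\tfrac{1}{2},
\end{align*}
which is consistent with the constraints \eqref{dconstraints}.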
\section{Computation of $d_{\pm 1/2}(M_k)$}\label{section:dofM_k}
Consider the 3-manifold $M_k$ given by $(1,1)$-surgery on the link obtained from the Hopf link by connected summing one component with the right-handed trefoil $T_{2,3}$ and the other component with the $(2,4k-1)$ torus knot $T_{2,4k-1}$, as depicted in Figure~\ref{figure:Hedden-Mark}. In this section, we compute $d_{\pm 1/2}(M_k)$ for any $k\geq 1$. We assume that the reader is familiar with knot Floer homology \cite{Ozsvath-Szabo:2004-1,Rasmussen:2003-1}.
\begin{theorem}\label{thm:dofM_k}For any $k\geq 1$, $d_{1/2}(M_k)=-2k+\frac{1}{2}$ and $d_{-1/2}(M_k)=-\frac{5}{2}$.
\end{theorem}
We briefly discuss the strategy of our computation. Consider the knot $J_k$ in $S^3_{1}(T_{2,3})$ depicted in Figure~\ref{figure:J_k}. Since $H_1(M_k)\cong \mathbb{Z}$, $M_k$ is the result of surgery on $S^3_1(T_{2,3})$ along the knot $J_k$ using its Seifert framing. Note that the Seifert framing of $J_k$ is the 1-framing with respect to the blackboard framing of Figure~\ref{figure:J_k}. Then $d_{\pm 1/2}(M_k)$ can be determined by the knot Floer homology $CFK^\infty(S^3_1(T_{2,3}),J_k)$ using a surgery formula \cite[Section~4.8]{Ozsvath-Szabo:2008-1}.
\begin{figure}[tb!]
\centering
\includegraphics[scale=1]{figure3}
\caption{A knot $J_{k}$ in $S^3_1(T_{2,3})$.}
\label{figure:J_k}
\end{figure}
In order to determine the aforementioned knot Floer homology complex, we first consider the meridian of $T_{2,3}$, viewed as a knot $\mu\subset S^3_1(T_{2,3})$. Then the pair $(S^3_1(T_{2,3}),J_k)$ is simply the connected sum of the pairs $(S^3_1(T_{2,3}),\mu)$ and $(S^3,T_{2,4k-1})$. A K\"{u}nneth formula for the knot Floer homology of connected sums then implies
\begin{equation}\label{equation:Kunneth}CFK^\infty(S^3_1(T_{2,3}),J_k)\cong CFK^\infty (S^3_1(T_{2,3}),\mu)\otimes CFK^\infty (T_{2,4k-1}).\end{equation}
We can deduce the structure of $CFK^\infty(S^3_1(T_{2,3}),\mu)$ using a surgery formula which, together with the K\"{u}nneth formula and the well-known structure of the Floer homology of torus knots, will determine the filtered chain homotopy type of $CFK^\infty(S^3_1(T_{2,3}),J_k)$. Precisely, we prove the following:
\begin{proposition}\label{proposition:CFKofJ_k}We have the following filtered chain homotopy equivalences.
\begin{enumerate}
\item\label{item:CFKofmeridian} $CFK^\infty(S^3_1(T_{2,3}),\mu)\cong CFK^\infty (T_{2,-3})[-2]$.
\item\label{item:CFKofJ_k} $CFK^\infty(S^3_{1}(T_{2,3}),J_k)\oplus A_0\cong CFK^\infty(T_{2,4k-3})[-2]\oplus A_1$.
\end{enumerate}
Here $[-2]$ means that the Maslov grading is shifted by $-2$, and $A_0$ and $A_1$ are acyclic chain complexes over $\mathbb{F}[U,U^{-1}]$.
\end{proposition}
\begin{remark}For $N\geq 2g(K)$, it is known that $CFK^\infty(S^3_{-N}(K),\mu)$ is determined by $CFK^\infty(S^3,K)$ \cite[Theorem~4.2]{Hedden-Kim-Livingston:2016-1} (compare \cite[Theorem~4.1]{Hedden:2007-1}). Since $1<2g(T_{2,3})$, we cannot apply \cite[Theorem~4.2]{Hedden-Kim-Livingston:2016-1} to determine $CFK^\infty(S^3_1(T_{2,3}),\mu)$. Work in progress of Hedden and Levine on a general surgery formula for the knot Floer homology of $\mu$ would easily yield the formula. In the case at hand, however, a surgery formula applied to $\mu$ allows for an ad hoc argument.\end{remark}
\begin{proof} (\ref{item:CFKofmeridian}) The key observation is that the complement of $\mu \subset S^3_1(T_{2,3})$ is homeomorphic to the complement of $T_{2,3}\subset S^3$. Indeed, this can be seen by observing that $\mu$ is isotopic to the core of the surgery solid torus. It follows that $\mu$ is a genus one fibered knot. Moreover, $S^3_1(T_{2,3})$ is homeomorphic to the Poincar{\'e} sphere equipped with the orientation opposite to the one it inherits as the boundary of the resolution of the surface singularity $z^2+w^3+r^5=0$; it is well known and easily seen to be an $L$-space homology sphere with $d$-invariant equal to $-2$.
As $\mu$ is a genus one fibered knot in an $L$-space homology sphere, it follows readily that its knot Floer homology must have rank $5$ or $3$, and in the latter case must be isomorphic to that of one of the trefoil knots, with an overall shift in the Maslov grading by the $d$-invariant. To see this, observe that being genus one implies, by the adjunction inequality for knot Floer homology \cite[Theorem~5.1]{Ozsvath-Szabo:2004-1}, that $\widehat{HFK}(S^3_1(T_{2,3}),\mu,i)=0$ for $|i|>1$. As $\mu$ is fibered, $\widehat{HFK}(S^3_1(T_{2,3}),\mu,i)=\mathbb{F}$ for $i=\pm1$ \cite[Theorem~5.1]{Ozsvath-Szabo:2005-1}. Moreover, the Maslov grading of the generator of $\widehat{HFK}(S^3_1(T_{2,3}),\mu,1)$ is two higher than that of $\widehat{HFK}(S^3_1(T_{2,3}),\mu,-1)$, by a symmetry of the knot Floer homology groups \cite[Proposition~3.10]{Ozsvath-Szabo:2005-1}. Now there is a differential $\partial$ acting on $\widehat{HFK}(-S^3_1(T_{2,3}),\mu)$, the homology of which is isomorphic to the Floer homology of the ambient 3-manifold. (The existence of such a ``cancelling differential" follows from the homological method of reduction of a filtered chain complex; see \cite[Section 2.1]{Hedden-Watson:2018-1} for details on this perspective.) This differential strictly lowers the Alexander grading, which implies that the ``middle" group $\widehat{HFK}(S^3_1(T_{2,3}),\mu,0)$ is either $\mathbb{F}^3$ or $\mathbb{F}$. In the former case, two of the summands are supported in the same grading, which is one less than that of the top group; moreover, one of these summands is the image of $\widehat{HFK}(S^3_1(T_{2,3}),\mu,1)$ under~$\partial$, and $\partial$ maps the other summand surjectively onto $\widehat{HFK}(S^3_1(T_{2,3}),\mu,-1)$. The case that $\widehat{HFK}(S^3_1(T_{2,3}),\mu,0)=\mathbb{F}$ divides into two sub-cases, depending on whether $\partial$ maps the middle group surjectively onto the bottom, or the top group surjectively onto the middle. 
In both sub-cases the resulting knot Floer homology is thin, and hence $CFK^\infty$ is determined by the hat groups. In the former sub-case the hat groups are isomorphic to those of the right-handed trefoil, and to those of the left-handed trefoil in the latter; in both sub-cases, their Maslov grading has an overall shift down by $2$.
To determine which of the three possibilities above arises, we recall the surgery formula for knot Floer homology. In its simplest guise, which will be sufficient for our purposes, it expresses the Floer homology of the manifold obtained by $n$-surgery, $n\le -(2g(K)-1)$, on a null-homologous knot $(Y,K)$ as the homology of a particular subquotient complex of $CFK^\infty(Y,K)$ \cite[Theorem~4.1]{Ozsvath-Szabo:2004-1}. Its relevance to us is that $-1$-surgery on $(S^3_1(T_{2,3}),\mu)$ is homeomorphic to $S^3$, a manifold with $\widehat{HF}(S^3)$ of rank $1$. Since $\mu$ is a genus one knot, we can apply the surgery formula to (re)calculate the Floer homology of $S^3$, viewed as $-1$-surgery on $\mu$. The surgery formula says that this homology is given as the homology of the subquotient complex of $CFK^\infty(S^3_1(T_{2,3}),\mu)$ generated by chains whose $\mathbb{Z}\oplus\mathbb{Z}$-filtration values satisfy the constraint $\min(i,j)=0$. Of the three possibilities for $\widehat{HFK}(S^3_1(T_{2,3}),\mu)$, all but the case of the left-handed trefoil (shifted down in grading by $2$) have the property that the relevant subquotient complex has homology of rank $3$. The stated structure of $CFK^\infty(S^3_1(T_{2,3}),\mu)$ follows at once.
(\ref{item:CFKofJ_k}) We say two chain complexes $C_0$ and $C_1$ are \emph{stably filtered chain homotopy equivalent} if $C_0\oplus A_0$ is filtered chain homotopy equivalent to $C_1\oplus A_1$ for some acyclic chain complexes $A_0$ and $A_1$. By \cite[Theorem B.1]{Hedden-Kim-Livingston:2016-1} and \cite[Proposition 3.11]{Hom:2017-1}, it is known that the tensor product $CFK^\infty(T_{2,4k-1})\otimes CFK^\infty(T_{2,-3})$ is stably filtered chain homotopy equivalent to $CFK^\infty(T_{2,4k-3})$.
By (\ref{item:CFKofmeridian}), the K{\"u}nneth formula (\ref{equation:Kunneth}) becomes
\begin{equation}\label{equation:J_k}CFK^\infty(S^3_1(T_{2,3}),J_k)\cong CFK^\infty (T_{2,-3})\otimes CFK^\infty (T_{2,4k-1})[-2].\end{equation}
It follows that the right hand side of \eqref{equation:J_k} is stably filtered chain homotopy equivalent to $CFK^\infty(T_{2,4k-3})[-2]$, and we obtain the desired conclusion.
\end{proof}
Now we prove Theorem \ref{thm:dofM_k} which states that $d_{1/2}(M_k)=-2k+\frac{1}{2}$ and $d_{-1/2}(M_k)=-\frac{5}{2}$ if $k\geq 1$.
\begin{proof}[Proof of Theorem \ref{thm:dofM_k}] Recall that $M_k$ is obtained from $S^3_1(T_{2,3})$ by surgery on $J_k$ along its Seifert framing. Since acyclic summands do not change the $d$-invariants, we have
\begin{align*}
&d_{1/2}(M_k)=d_{1/2}(S^3_0(T_{2,4k-3}))-2=-\tfrac{3}{2}-2V_0(T_{2,4k-3}),\\
&d_{-1/2}(M_k)=d_{-1/2}(S^3_0(T_{2,4k-3}))-2=-\tfrac{5}{2}+2V_0(T_{2,-4k+3})
\end{align*}
by Proposition \ref{proposition:CFKofJ_k}(\ref{item:CFKofJ_k}) and Theorem \ref{thm:OS}. Strictly speaking, Theorem~\ref{thm:OS} pertains only to surgery on knots in $S^3$, but the proof easily yields a corresponding formula for surgery on knots in an integral homology sphere $L$-space; in these cases, the correction terms inherit an overall shift by the $d$-invariant of the ambient manifold (here, $-2$). Since $k\geq 1$,
\begin{align*}&V_0(T_{2,4k-3})=k-1,\\
&V_0(T_{2,-4k+3})=0\end{align*} (for example, see \cite[Theorem~1.6]{Borodzik-Nemethi:2013-1}). Substituting these values gives $d_{1/2}(M_k)=-\tfrac{3}{2}-2(k-1)=-2k+\tfrac{1}{2}$ and $d_{-1/2}(M_k)=-\tfrac{5}{2}$. This completes the proof.
\end{proof}
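The arithmetic in the proof above is easy to check mechanically. The following sketch is our own illustration (the function names are hypothetical; the closed form $V_0(T_{2,2n+1})=\lceil n/2\rceil$ is the torus-knot formula from the Borodzik--N\'emethi reference cited above):

```python
import math

def v0_torus_2(q):
    """V_0 of the positive torus knot T(2, q) for odd q >= 1, via the
    closed form V_0(T_{2, 2n+1}) = ceil(n / 2)."""
    assert q >= 1 and q % 2 == 1
    n = (q - 1) // 2
    return math.ceil(n / 2)

def d_half_invariants(k):
    """Return (d_{1/2}(M_k), d_{-1/2}(M_k)) as in the proof: the correction
    terms of 0-surgery on T_{2,4k-3}, shifted down by 2."""
    q = 4 * k - 3
    d_plus = 0.5 - 2 * v0_torus_2(q) - 2   # V_0(T_{2,4k-3}) = k - 1
    d_minus = -0.5 + 2 * 0 - 2             # V_0 of the mirror vanishes
    return d_plus, d_minus
```

For every $k\geq 1$ this gives $(-2k+\tfrac{1}{2},\,-\tfrac{5}{2})$, in agreement with Theorem~\ref{thm:dofM_k}.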
\section{Correction terms of Seifert manifolds}\label{section:Seifert}
In this section we provide some general constraints on the correction terms of a Seifert fibered homology $S^1\times S^2$. More precisely, we show $d_{-1/2}(M)\geq -\frac{1}{2}$ and $d_{1/2}(M)\leq\frac{1}{2}$ for any Seifert fibered homology $S^1\times S^2$. It follows at once that none of our manifolds are homology cobordant to a Seifert fibered space. We also see that the zero surgery obstruction can say nothing about Seifert manifolds.
\vskip0.1in
Our estimates hinge on the following proposition, which was pointed out to us by Marco Golla. (Compare \cite[Theorem~5.2]{Neumann-Raymond:1978-1}.)
\begin{proposition}\label{proposition:Seifertboundsboth}
Suppose $M$ is a Seifert fibered homology $S^1\times S^2$. Then both $M$ and $-M$ bound negative semi-definite, plumbed $4$-manifolds.
\end{proposition}
\begin{proof}
Choose an orientation and a Seifert fibered structure of $M$. As an oriented manifold, $M$ is homeomorphic to $M(e;r_1,\ldots,r_n)$ in Figure~\ref{figure:Seifertmanifold} where $e$ is an integer, and each $r_i$ is a non-zero rational number. We change the Seifert invariant $(e;r_1,\ldots,r_n)$ via the following two steps.
\begin{figure}[b]
\centering
\includegraphics[scale=1]{figure4}
\caption{A Seifert fibered 3-manifold $M(e;r_1,\ldots,r_n)$.}
\label{figure:Seifertmanifold}
\end{figure}
\begin{enumerate}
\item If $r_i$ is an integer, remove $r_i$ from the tuple $(e;r_1,\ldots,r_n)$ and add $r_i$ to $e$.
\item For each $i$, replace $r_i$ and $e$ by $r_i-\lfloor r_i\rfloor$ and $e+\lfloor r_i\rfloor$, respectively.
\end{enumerate}
Note that the above procedures are realized by slam-dunk moves, so the homeomorphism type remains unchanged. For brevity, we still denote the resulting Seifert invariant of $M$ by $(e;r_1,\ldots,r_n)$, so that each rational number $r_i$ satisfies $0<r_i<1$. Since $0<r_i<1$, we can write $-\frac{1}{r_i}$ as a negative continued fraction $[a_{i1},\ldots,a_{ik_i}]$ where $a_{ij}\leq -2$ for all $i$ and $j$. Then $M$ bounds a star-shaped plumbed 4-manifold $X_{\Gamma}$ whose corresponding plumbing graph is $\Gamma$ depicted in Figure~\ref{figure:plumbingofSeifert}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.75]{figure5}
\caption{A plumbing graph $\Gamma$.}
\label{figure:plumbingofSeifert}
\end{figure}
Since $a_{ij}\leq -2$ for all $i$ and $j$, it is easy to check that $Q_{X_\Gamma}$ is negative semi-definite. (Since $\partial X_\Gamma=M$ is a homology $S^1\times S^2$ and $\Gamma$ is a tree, $Q_{X_\Gamma}$ has determinant $0$.)
\end{proof}
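The negative continued fraction expansion used in the proof can be computed greedily with floors. The following sketch is our own illustration (function names are ours); it expands $-1/r$ for a rational $0<r<1$ and verifies that every coefficient is at most $-2$:

```python
from fractions import Fraction
from math import floor

def neg_cont_frac(x):
    """Negative continued fraction [a_1,...,a_m] with all a_i <= -2, so that
    x = a_1 - 1/(a_2 - 1/(... - 1/a_m)), for a rational x < -1."""
    x = Fraction(x)
    coeffs = []
    while True:
        a = floor(x)          # a <= -2 since x < -1
        coeffs.append(a)
        if a == x:
            break
        x = 1 / (a - x)       # a - x lies in (-1, 0), so the next x is < -1
    return coeffs

def evaluate(coeffs):
    """Evaluate [a_1,...,a_m] back to a rational number."""
    val = Fraction(coeffs[-1])
    for a in reversed(coeffs[:-1]):
        val = a - 1 / val
    return val

r = Fraction(5, 14)                 # an illustrative value of r_i in (0, 1)
coeffs = neg_cont_frac(-1 / r)      # expansion of -14/5
assert evaluate(coeffs) == -1 / r
assert all(a <= -2 for a in coeffs)
```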
\begin{remark}
The plumbed 4-manifold $X_\Gamma$ constructed in the proof of Proposition~\ref{proposition:Seifertboundsboth} is called the \emph{normal form} of $M$. (Note that $X_\Gamma$ depends only on the choice of orientation of $M$.) What we have shown in Proposition~\ref{proposition:Seifertboundsboth} is that the normal forms of $M$ and $-M$ are negative semi-definite plumbings. (Compare \cite[Theorem~5.2]{Neumann-Raymond:1978-1} where it is shown that one normal form of a Seifert fibered rational homology sphere is a negative-definite plumbing.)
\end{remark}
We recall a special case of \cite[Corollary~4.8]{Levine-Ruberman:2014-1}. (Note that if $M$ is a closed, oriented 3-manifold with $H_1(M)\cong \mathbb{Z}$, then $M$ has standard $HF^\infty$, and $d_{-1/2}(M)$ is equal to $d(M,\mathfrak{s}_0,H_1(M))$ with the notation of \cite[Corollary~4.8]{Levine-Ruberman:2014-1}.) We remark that this special case essentially follows from \cite[Theorem~9.11]{Ozsvath-Szabo:2003-2} and Elkies' theorem \cite{Elkies:1995-1}.
\begin{proposition}[{\cite[Corollary~4.8]{Levine-Ruberman:2014-1}}]\label{proposition:Levine-Ruberman} Let $M$ be a closed, oriented $3$-manifold with first homology $\mathbb{Z}$. Suppose that $M$ bounds a negative semi-definite, simply connected $4$-manifold $X$. Then $d_{-1/2}(M)\geq -\frac{1}{2}$.
\end{proposition}
\begin{theorem}\label{theorem:dofSeifert}Suppose $M$ is homology cobordant to a Seifert fibered homology $S^1\times S^2$. Then $d_{-1/2}(M)\geq -\frac{1}{2}$ and $d_{1/2}(M)\leq \frac{1}{2}$.
\end{theorem}
\begin{proof}Since $d_{-1/2}$ and $d_{1/2}$ are homology cobordism invariants, we can assume that $M$ is a Seifert fibered homology $S^1\times S^2$. By Proposition~\ref{proposition:Seifertboundsboth}, both $M$ and $-M$ bound negative semi-definite, plumbed 4-manifolds. Since plumbed 4-manifolds are simply connected, we can apply Proposition~\ref{proposition:Levine-Ruberman} to conclude that $d_{-1/2}(M)\geq -\frac{1}{2}$, and $d_{-1/2}(-M)\geq -\frac{1}{2}$. Since $d_{-1/2}(-M)=-d_{1/2}(M)$, the desired conclusion follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{theorem:A}] From the surgery diagram of $M_k$ in Figure~\ref{figure:Hedden-Mark}, it is easy to compute that $H_1(M_k)\cong \mathbb{Z}$. By Proposition~\ref{prop:splice}, $M_k$ is a splice of non-trivial knot complements which, by Propositions~\ref{prop:irreduciblesplice} and \ref{prop:pushoutweight}, implies that $M_k$ is irreducible and has weight 1 fundamental group. Ozsv{\'a}th and Szab{\'o}'s obstruction, Theorem~\ref{thm:OS}, combined with our calculation of the correction terms, Theorem~\ref{thm:dofM_k}, shows that $M_k$ is not the result of Dehn surgery on a knot in $S^3$. Since $d_{1/2}$ is a homology cobordism invariant, $M_k$ and $M_l$ are not homology cobordant if $k\neq l$ by Theorem~\ref{thm:dofM_k}. Finally, Theorems~\ref{thm:dofM_k} and~\ref{theorem:dofSeifert} show that $M_k$ is not homology cobordant to any Seifert fibered 3-manifold. This completes the proof.
\end{proof}
\section{Rohlin invariant and another surgery obstruction}\label{section:rokhlin}
While the Heegaard Floer correction terms provide an obstruction for a 3-manifold to arise as 0-surgery on a knot in $S^3$, we see by comparing Theorem~\ref{theorem:dofSeifert} with the zero-surgery obstruction of Section~\ref{section:background} that they cannot show a Seifert manifold is not 0-surgery on a knot. (One could view the zero-surgery obstruction and the Seifert constraint of Theorem~\ref{theorem:dofSeifert} as arising from the same observation: in both cases the 3-manifold in question bounds a negative semi-definite 4-manifold with both orientations.) In this section we observe that the classical Rohlin invariant can obstruct a homology $S^1\times S^2$ from having surgery number~1, and we will see that this obstruction can be effective in the Seifert case.
Recall that if $(Y,s)$ is a spin 3-manifold, the Rohlin invariant $\mu(Y,s)\in {\mathbb Q}/2{\mathbb Z}$ is defined to equal $\frac{1}{8}\sigma(X)$ modulo $2$, where $X$ is a compact 4-manifold with $\partial X = Y$ that admits a spin structure extending $s$. If $Y$ is a homology sphere then $Y$ admits a unique spin structure, and since a spin 4-manifold with boundary $Y$ has signature divisible by 8, we have $\mu(Y)\in {\mathbb Z}/2{\mathbb Z}$. If $Y_0$ has the homology of $S^1\times S^2$ then $Y_0$ has two spin structures, and hence two Rohlin invariants (each also with values in ${\mathbb Z}/2{\mathbb Z}$).
\begin{lemma}\label{lemma:rokhlincalc} Let $Y$ be an integral homology sphere and $K\subset Y$ a knot\textup{;} write $Y_0(K)$ for the result of $0$-framed surgery along $K$. Then the Rohlin invariants of $Y_0(K)$ are equal to $\mu(Y_0(K), \mathfrak{s}_0) = \mu(Y)$ and $\mu(Y_0(K), \mathfrak{s}_1) = \mu(Y) + \arf(K)$.
\end{lemma}
\begin{proof} Let $X$ be a spin 4-manifold with boundary $Y$. The obvious 0-framed 2-handle cobordism $W$ from $Y$ to $Y_0(K)$ carries a (unique) spin structure, and if $\mathfrak{s}_0$ is the spin structure on $Y_0(K)$ induced by the one on the cobordism, then $X\cup_Y W$ is a spin 4-manifold with spin boundary $(Y_0(K),\mathfrak{s}_0)$ and the same signature as $X$. Hence $\mu(Y_0(K), \mathfrak{s}_0) = \mu(Y)$.
It is not hard to see that the other spin structure on $Y_0(K)$ is spin cobordant (by a 0-framed surgery cobordism) to the unique spin structure on $Y_1(K)$, the result of $1$-framed surgery on $K$ (see, for example, Section 5.7 of \cite{gompfstipsicz}). The same argument as above implies $\mu(Y_0(K), \mathfrak{s}_1) = \mu(Y_1(K))$. On the other hand, the surgery formula for the Rohlin invariant (as in \cite[Theorem 2.10]{saveliev}) implies $\mu(Y_1(K)) = \mu(Y) + \arf(K)$.
\end{proof}
One could also phrase the lemma as the statement that the Rohlin invariants of $Y_0(K)$ are equal to $\mu(Y)$ and $\mu(Y_1(K))$. Since $\mu(S^3) = 0$, we infer:
\begin{corollary} If $Y_0$ is a $3$-manifold obtained by $0$-framed surgery on a knot in $S^3$, then at least one of the Rohlin invariants of $Y_0$ vanishes. The other Rohlin invariant is equal to the Arf invariant of any knot $K\subset S^3$ such that $Y_0 = S^3_0(K)$.
\end{corollary}
\begin{corollary}\label{corollary:0surgeryobstruction} If an integral homology $S^1\times S^2$ has two nontrivial Rohlin invariants, then it is not obtained by surgery on a knot in $S^3$.
\end{corollary}
\section{Properties of the manifolds $N_k$}\label{section:Ozsvath-Szabo}
In this section, we discuss the manifolds $N_k$ given by the surgery diagrams of Figure~\ref{figure:OS}.
\begin{proposition}\label{proposition:Seifert}For any positive integer $k$, $N_k$ is an irreducible Seifert fibered $3$-manifold.
\end{proposition}
\begin{proof} We can see a Seifert fibered structure of $N_k$ from the sequence of Kirby moves depicted in Figure~\ref{figure:Ozsvath_Szabo_Kirbydiagram}. We remark that a similar sequence of Kirby moves is given in Lemma 2.1 of~\cite{Lisca-Stipsicz:2007-2}. By Figure~\ref{figure:Ozsvath_Szabo_Kirbydiagram}, $N_k$ admits a Seifert fibering $N_k\longrightarrow S^2$. Note that the slopes $r_i$ of the exceptional fibers are $\frac{8k-3}{16k-2}$, $\frac{1}{8k-1}$ and $\frac{1}{2}$, respectively. It is known that any orientable, reducible Seifert fibered 3-manifold is homeomorphic to either $S^1\times S^2$ or $\mathbb{R}\mathbb{P}^3\#\mathbb{R}\mathbb{P}^3$ (for example, see \cite[Lemma~VI.7]{Jaco:1980}). Since $H_1(N_k)\cong \mathbb{Z}$, $N_k$ is not homeomorphic to $\mathbb{R}\mathbb{P}^3\#\mathbb{R}\mathbb{P}^3$. By the homeomorphism classification of Seifert fibered 3-manifolds \cite{Seifert}, we can conclude that $N_k$ is not homeomorphic to $S^1\times S^2$, and hence $N_k$ is irreducible.
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{figure6}
\caption{A Seifert fibered structure of $N_k$.}
\label{figure:Ozsvath_Szabo_Kirbydiagram}
\end{figure}
\begin{proposition}The weight of $\pi_1(N_k)$ is one for any positive integer $k$.
\end{proposition}
\begin{proof}We observed above that $N_k$ is a Seifert fibered 3-manifold with 3 exceptional fibers whose slopes are $\frac{8k-3}{16k-2}$, $\frac{1}{8k-1}$ and $\frac{1}{2}$. Therefore we have a presentation of $\pi_1(N_k)$ as follows (compare \cite[page 91]{Jaco:1980}):
\begin{align*}\pi_1(N_k)&\cong \langle x_1,x_2,x_3,h\mid x_1^{16k-2}=h^{8k-3}_{\vphantom{1}}, x_2^{8k-1}=h, x_3^2=h, x_1x_2x_3=h, [h,x_i]=1\rangle\\
&\cong\langle x_1,x_2,h\mid x_1^{16k-2}=h^{8k-3}, x_2^{8k-1}=h, h=x_1x_2x_1x_2, [h,x_1]=[h,x_2]=1\rangle\\
&\cong \langle x_1^{\vphantom{16k-2}},x_2^{\vphantom{16k-2}}\mid x_1^{16k-2}=x_2^{(8k-1)(8k-3)}, x_2^{8k-1}=(x_1^{\vphantom{8k-1}}x_2^{\vphantom{8k-1}})^2, [x_2^{8k-1},x_1^{\vphantom{8k-1}}]=1\rangle.
\end{align*}
In the second equality, we cancel the generator $x_3$ with the relation $x_3^{\vphantom{-1}}=x_2^{-1}x_1^{-1}h$. Note that the relation $x_3^2=h$ is equivalent to the relation $x_2^{-1}x_1^{-1}hx_2^{-1}x_1^{-1}h=h$, and hence to the relation $h=x_1x_2x_1x_2$. In the last equality, we cancel the generator $h$ and the relation $h=x_2^{8k-1}$.
Let $\langle \langle x_2^{4k-2}x_1^{-1}\rangle\rangle$ be the normal subgroup of $\pi_1(N_k)$ generated by $x_2^{4k-2}x_1^{-1}$. Then
\begin{align*} \pi_1(N_k)/\langle \langle x_2^{4k-2}x_1^{-1}\rangle\rangle
&\cong \langle x_1^{\vphantom{16k-2}},x_2^{\vphantom{16k-2}}\mid x_1^{16k-2}=x_2^{(8k-1)(8k-3)}, x_2^{8k-1}=(x_1^{\vphantom{8k-1}} x_2^{\vphantom{8k-1}})^2, x_1^{\vphantom{4k-2}}=x_2^{4k-2}\rangle\\
&\cong \langle x_2\mid x_2^{(8k-1)(8k-4)}=x_2^{(8k-1)(8k-3)}, x_2^{8k-1}=x_2^{8k-2}\rangle\\
&\cong \langle x_2\mid x_2^{(8k-1)(8k-4)}=x_2^{(8k-1)(8k-3)}, x_2=1\rangle=1.
\end{align*}
Hence, the weight of $\pi_1(N_k)$ is 1.
\end{proof}
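The exponent bookkeeping in the computation above can be checked mechanically. A small sketch (our own verification, not part of the proof):

```python
def check_exponents(k):
    """Check the exponent identities used when setting x1 = x2^(4k-2)."""
    # x1^(16k-2) becomes x2^((4k-2)(16k-2)), which must equal x2^((8k-1)(8k-4)).
    assert (4 * k - 2) * (16 * k - 2) == (8 * k - 1) * (8 * k - 4)
    # (x1 x2)^2 becomes x2^(2((4k-2)+1)) = x2^(8k-2), so the relation
    # x2^(8k-1) = (x1 x2)^2 forces x2^(8k-1) = x2^(8k-2), i.e. x2 = 1.
    assert 2 * ((4 * k - 2) + 1) == 8 * k - 2

for k in range(1, 25):
    check_exponents(k)
```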
Since $N_k$ is Seifert, the correction terms do not provide information on the surgery number of $N_k$; instead we apply the obstruction of Corollary \ref{corollary:0surgeryobstruction} of the previous section. To do so we must calculate the Rohlin invariants of the two spin structures on $N_k$. One way to make this calculation, along the lines of the previous section, is to observe that $N_k$ can be realized as the result of nullhomologous surgery on the singular Seifert fiber of order $4k-1$ in the Brieskorn homology sphere $\Sigma(2,4k-1,8k-1)$, which has Rohlin invariant equal to $k$ modulo 2. Performing surgery on that fiber with framing $+1$ gives another plumbed 3-manifold whose Rohlin invariant is also $k$ modulo 2, and these two calculations give the desired invariants for $N_k$ by the remark after Lemma \ref{lemma:rokhlincalc}.
Alternatively, one can proceed directly from the final diagram in Figure \ref{figure:Ozsvath_Szabo_Kirbydiagram} using the algorithm in \cite[Section~6]{Neumann-Raymond:1978-1} (see also \cite[Section~4]{Neumann:1979} for the case with nonzero first homology), as follows. If $P_k$ denotes the plumbed 4-manifold described by the last diagram of Figure \ref{figure:Ozsvath_Szabo_Kirbydiagram}, we can find exactly two homology classes $\nu_1,\nu_2\in H_2(P_k;{\mathbb Z}/2)$, represented by embedded spheres or a disjoint union thereof, satisfying $\nu_i.x = x.x\mod 2$ for each homology class $x$. (Here the dot indicates the intersection product.) These ``spherical Wu classes'' give the Rohlin invariants of the two spin structures on $N_k$ by the formula $\mu(N_k, \mathfrak{s}_i) = \frac{1}{8}(\sigma(P_k) - \nu_i.\nu_i)\mod 2$, where $\sigma(P_k)$ is the signature of the intersection form on $P_k$.
The two Wu classes on $P_k$ are given by letting $\nu_1$ be the sum of the spheres represented by the circles with framings $-8k+1$ and $-4$, and taking $\nu_2$ as the sum of the $-8k+1$ sphere with the two $-2$ spheres. It is straightforward to check that $P_k$ has $b^+(P_k) = 1$ and hence $\sigma(P_k) = -3$, while $\nu_1.\nu_1 = \nu_2.\nu_2 = -8k-3$. Hence the Rohlin invariants $\mu(N_k, \mathfrak{s}_i)$ are both equal to $k \mod 2$, and we conclude:
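The final step is elementary arithmetic: $\mu(N_k,\mathfrak{s}_i)=\frac{1}{8}\bigl(\sigma(P_k)-\nu_i.\nu_i\bigr)=\frac{1}{8}\bigl(-3-(-8k-3)\bigr)=k \bmod 2$. A one-line check (our own sketch):

```python
def rohlin_from_wu(sigma, wu_square):
    """mu = (sigma(P) - nu.nu)/8 mod 2 for a spherical Wu class nu."""
    diff = sigma - wu_square
    assert diff % 8 == 0          # divisibility holds in the cases at hand
    return (diff // 8) % 2

# sigma(P_k) = -3 and nu_i.nu_i = -8k-3 give mu = k mod 2.
for k in range(1, 12):
    assert rohlin_from_wu(-3, -8 * k - 3) == k % 2
```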
\begin{theorem}\label{theorem:Nk} For any odd integer $k\geq 1$, the manifold $N_k$ is a Seifert fibered integral homology $S^1\times S^2$ that cannot be obtained by surgery on a knot in $S^3$.
\end{theorem}
Moreover, since the Rohlin invariant is unchanged under integral homology cobordism, for odd $k$ no $N_k$ is homology cobordant to a 3-manifold $Y$ with $DS(Y)=1$. This concludes the proof of Theorem~\ref{theorem:B}.
As noted in the introduction, the manifold $N_1$ is the example from Ozsv\'{a}th and Szab\'{o} \cite[Section 10.2]{Ozsvath-Szabo:2003-2}, where the case $k=1$ of Theorem \ref{theorem:Nk} was claimed based on an argument using the Heegaard Floer correction terms. As we have seen, the correction terms do not provide an obstruction to $DS = 1$ in the case of Seifert manifolds. Here we revisit the calculation of $d_{\pm 1/2}(N_1)$ from \cite{Ozsvath-Szabo:2003-2}, which relies on the surgery exact triangle and some understanding of the maps therein. In particular, they fit the Floer homology of $N_1$ into an exact triangle with two lens spaces, $L(49,40)$ and $L(49,44)$, and identify a spin structure on the 2-handle cobordism between $L(49,40)$ and $N_1$. The map on Floer homology associated to this spin structure, which is summed along with those associated to the other spin$^c$ structures, induces an isomorphism between submodules of $HF^\infty$ isomorphic to $\mathbb{F}[U,U^{-1}]$. It follows that the ``tower'' in $HF^+$ of the spin structure on $L(49,40)$ surjects onto the tower of $HF^+(N_1)$ relevant to $d_{-1/2}$. From this, and the grading shift by $-\frac{1}{2}$ for the map induced by the spin structure, one concludes an inequality \[d_{-1/2}(N_1)\ge d(L(49,40),\mathfrak{s}_0)-\tfrac{1}{2}=-\tfrac{5}{2}.\]
Here $d(L(49,40),\mathfrak{s}_0)$ is the correction term for the spin structure, which is easily seen to be $-2$. This inequality is opposite to the one inferred by Ozsv\'{a}th and Szab\'{o}. A rather detailed examination of the exact triangle shows that the bottommost element of the tower associated to the spin structure lies in the kernel of the sum of the cobordism maps involved in the exact triangle (which is, of course, the only way for $d_{-1/2}(N_1)> -\frac{5}{2}$).
We conclude with an alternate proof that $d_{-1/2}(N_k)\geq -\frac{1}{2}$ and $d_{1/2}(N_k)\leq \frac{1}{2}$. First recall a result of Ozsv\'{a}th and Szab\'{o}.
\begin{proposition}[{\cite[Corollary~9.14]{Ozsvath-Szabo:2003-2}}]\label{theorem:Ozsvath-Szaboinequality} Suppose that $K$ is a knot in a homology $3$-sphere~$Y$. Let $Y_0$ be the result of Dehn surgery along $K$ via its Seifert framing. Then
\[d_{1/2}(Y_0)-\tfrac{1}{2}\leq d(Y)\leq d_{-1/2}(Y_0)+\tfrac{1}{2}.\]
\end{proposition}
\begin{figure}[h]
\centering
\includegraphics[scale=1]{figure7}
\caption{A knot $K$ in $S^3_{-1}(T_{2,4k-1})$.}
\label{figure:K_k}
\end{figure}
\begin{proposition}\label{proposition:dofN_k}For any positive integer $k$, $d_{-1/2}(N_k)\geq -\frac{1}{2}$ and $d_{1/2}(N_k)\leq \frac{1}{2}$.
\end{proposition}
\begin{proof}Consider the knot $K\subset S^3_{-1}(T_{2,4k-1})$ which is depicted in Figure~\ref{figure:K_k}. By a surgery formula given in \cite{Ni-Wu:2015-1}, $d(S^3_{-1}(T_{2,4k-1}))=2V_0(T_{2,-4k+1})=0$ since $k\geq 1$. Then $N_k$ is the result of Dehn surgery along the knot $K\subset S^3_{-1}(T_{2,4k-1})$ via its Seifert framing. By Theorem~\ref{theorem:Ozsvath-Szaboinequality}, we have
\[d_{1/2}(N_k)-\tfrac{1}{2}\leq 0\leq d_{-1/2}(N_k)+\tfrac{1}{2}\]
for any positive integer $k$. This completes the proof.
\end{proof}
\bibliographystyle{amsalpha}
\renewcommand{\MR}[1]{}
\section{Introduction}
Metal-Organic Frameworks (MOFs) have emerged as cutting-edge materials, due to their potential impact
on several types of applications, mainly those based on adsorption and separation (such as
hydrogen storage \cite{Suh2011}, methane and carbon dioxide capture \cite{eddaoudi2002systematic,sumida2011carbon}, or hydrocarbon \cite{li2011metal} and
enantiomeric separation \cite{li2011metal,Cychosz2010}). Unlike traditional nanoporous solids, such as zeolites, carbons, and
clays, MOFs not only exhibit enormous surface areas (beyond 5000 \si{\metre\squared\per\gram}), but also a huge
structural and compositional diversity, resulting from the large amount of research carried out,
which has recently surpassed 2000 scientific papers per year. Obviously, it is very expensive
and time consuming to carry out experimental studies on many different materials, and
computer modelling is a useful tool which can help guide the experimental search towards new
and potentially interesting materials. It is possible, for example, to use computer simulations to
devise viable routes for materials selection, via large screenings \cite{Wilmer2011,Dubbeldam2012}.
Computer simulations can also provide a platform for understanding the material behavior at an atomic scale, which
often leads to application-tailored materials design \cite{Thornton2012, R2010}.
Since the study of adsorption, separation and diffusion related phenomena involves the explicit
consideration of hundreds, or even thousands, of atoms (particularly in structures with large unit
cells, such as MOFs), classical simulation methods are the first choice \cite{Dren2009,Reedijk2013}. It is worth noting
that, recently, quantum mechanics-based calculations have emerged as valuable tools in this field \cite{Yu2013,sillar2009ab},
but in MOFs their computational cost still precludes their use for screenings of a large
number of materials, for the calculation of adsorption isotherms, for the diffusion of complex
molecules, or for the study of systems in which entropic effects are relevant. In atomistic
classical simulations the energy of the system can be written as:
\begin{equation}
E=E_{\text{bonding}}+E_{\text{non-bonding}}
\end{equation}
where $E_{\text{bonding}}$ involves contributions directly related to bonded atoms, described by the
sum of bond, angle and dihedral terms, while $E_{\text{non-bonding}}$ includes the interactions between
non-bonded atoms and has the form:
\begin{equation}
E_{\text{non-bonding}}=E_{\text{van der Waals}}+E_{\text{coulombic}}
\end{equation}
The van der Waals interactions are usually described by the typical 12-6 Lennard-Jones potential:
\begin{equation}
E^{\text{LJ}}_{ij}=4\epsilon_{ij}\left[\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{12}-\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{6}\right]
\end{equation} where $\epsilon_{ij}$ is the depth of the potential well (the energy at the minimum) and $\sigma_{ij}$ is the distance at which the energy is zero. The Coulombic interactions are calculated as follows:
\begin{equation}
E^{\text{coulombic}}_{ij}=\frac{1}{4\pi\epsilon_0}\frac{q_iq_j}{r_{ij}}
\end{equation} where $r_{ij}$ is the distance between atoms $i$ and $j$, $q_i$ and $q_j$ are the corresponding atomic partial charges and $k_e=\nicefrac{1}{4\pi\epsilon_0}$ is the Coulomb's constant.
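As a concrete sketch of how these two non-bonded terms are evaluated for a single pair of atoms, consider the following minimal example. The parameter values are illustrative only (not taken from any specific force field), and the Lorentz-Berthelot combination rule shown here is the one commonly used with generic force fields:

```python
import math

K_E = 138.935458  # Coulomb constant k_e in kJ mol^-1 nm e^-2 (common MD units)

def lorentz_berthelot(eps_i, eps_j, sigma_i, sigma_j):
    """Lorentz-Berthelot mixing: geometric mean of epsilon, arithmetic mean of sigma."""
    return math.sqrt(eps_i * eps_j), 0.5 * (sigma_i + sigma_j)

def lj_energy(eps_ij, sigma_ij, r_ij):
    """12-6 Lennard-Jones pair energy (eps in kJ/mol, sigma and r in nm)."""
    sr6 = (sigma_ij / r_ij) ** 6
    return 4.0 * eps_ij * (sr6 * sr6 - sr6)

def coulomb_energy(q_i, q_j, r_ij):
    """Coulomb pair energy between point charges in units of e, r in nm."""
    return K_E * q_i * q_j / r_ij

# Sanity checks: E_LJ(sigma) = 0 and E_LJ(2^(1/6) sigma) = -eps.
eps_ij, sigma_ij = lorentz_berthelot(0.65, 0.44, 0.32, 0.30)
assert abs(lj_energy(eps_ij, sigma_ij, sigma_ij)) < 1e-12
assert abs(lj_energy(eps_ij, sigma_ij, 2 ** (1 / 6) * sigma_ij) + eps_ij) < 1e-12
```

Here $k_e$ is expressed in the units commonly used by molecular simulation packages (kJ\,mol$^{-1}$\,nm\,$e^{-2}$).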
The parameters used for the calculation of bonded and van der Waals interactions are usually
taken from generic force fields, such as Dreiding \cite{SL1990}, UFF \cite{rappe1992uff}, OPLS \cite{jorgensen1988opls}, TraPPE \cite{MG1998,B1999,JM2004,MG1999} or
AMBER \cite{WD1992}. Lennard-Jones van der Waals interactions between different atom types are computed
using the Lorentz-Berthelot \cite{allen2017computer} or the Jorgensen mixing rules \cite{WL1986}. When specific
force fields are used for modelling adsorbate molecules, the atomic charges are usually taken from the
force field employed. In a number of cases, however, the generic or specific force fields fail to reproduce the
experimental adsorption data, and hence a transferable force field
parameterization is required, via fitting of parameters to reproduce experimental data \cite{Calero2004,MartnCalvo2012} or
via fitting to reproduce \emph{ab-initio} surface energies \cite{Fang2014,Lin2014,Bureekaew2013}. The parameters that describe the van
der Waals interactions and the interactions between bonded atoms are usually employed directly
as taken from the generic force fields. But the atomic charges need to be calculated for each
material. Since the atomic charges arise from the electronic density of the solids, even small
chemical differences between related MOFs lead to differences in the charges, as was recently
shown for functionalized imidazolates \cite{Sevillano2012}.
For the computation of the intermolecular interactions (MOF--adsorbate and adsorbate--adsorbate
interactions), which control adsorption, diffusion and separation processes, it is important to
keep in mind that they are of non-bonded nature, and consequently their correct description
depends on achieving a balance between the van der Waals and Coulombic contributions \cite{T2011}. This
implies that, if a generic force field is used, it is necessary to use charges that are not very
different from those employed during the parameterisation of the force field. For example, the
parameters of the van der Waals interactions in the Dreiding and UFF force fields were fitted
employing Gasteiger \cite{J1980} and QEq charges \cite{AK1991}, respectively. This seems to be one of the main
reasons why calculated and experimental data disagree in the cases where generic force fields largely
fail to model the intermolecular interactions. As an illustration, \citet{Babarao2011} found that a good
agreement with experimental \ce{CO2} isotherms in ZIF-68 was obtained when ChelpG or Mulliken
charges were used in conjunction with the Dreiding force field.
The effect of the choice of the atomic charges on computed adsorption and diffusion properties
of MOFs has attracted increasing attention. A few years ago, \citet{Walton2008} showed that
the inclusion of the electrostatic interactions between adsorbate molecules and the framework
was crucial in reproducing the step-like adsorption of \ce{CO2} in IRMOF-1.
\citet{T2011} showed that even quadrupolar molecules, such as \ce{CO2}, can interact very differently with MOFs,
with the electrostatic interaction being more or less relevant than the van der Waals interactions,
depending on the atomic charges employed. They found that the influence of the charges
on the adsorption properties is very material dependent, i.e. for some materials we observe the
same adsorption behavior over a wide range of atomic charges, but for other materials slight
changes in atomic charges produce large changes in the adsorption properties. They computed
\ce{CO2} adsorption isotherms up to 0.1 bar in IRMOF-1, ZIF-8, ZIF-90, and Zn(nicotinate)$_2$,
employing charges calculated by the REPEAT, DDEC, Hirshfeld and CBAC methods, and also
without considering charges. These methods exhibit significant differences in the values of the
charges that they predict, e.g. \ce{Zn} charges calculated with the mentioned methods in IRMOF-1
are 1.2787, 1.2149, 0.4229 and 1.5955, respectively. However, the adsorption isotherms are
very similar in Zn(nicotinate)$_2$, less similar in IRMOF-1 and ZIF-8 and very different in ZIF-90.
In a study with 20 different MOFs with different topologies, pore sizes, and chemical
characteristics, it was found that the guest--framework electrostatic interaction can account for
10--40\% of the \ce{CO2} uptake at very low pressure, and these values decrease by at least a factor of
4 at high pressures, where guest--guest interactions dominate \cite{Zheng2009b}. \citet{Sevillano2012}
used three sets of framework charges, varying within a range of 30\% of their values, to examine their
effect on the adsorption of \ce{CO2} in ten ZIFs of different functionalities, and found that, while
adsorption heats are almost the same for ZIF-8 and small differences are observed for ZIF-96,
the effect of varying framework charges on ZIF-3, -7, -93 and -97 is large. The
hydrophobic character of ZIF-8 seems to be responsible for the negligible effect that the choice
of charges has on the values of \ce{CO2} adsorption heats, which is supported by the results of
\citet{Zhang2013}, who found that simulated methanol adsorption in ZIF-8 is not affected by the framework
charges.
When modelling water in MOFs, the choice of the charges is much more relevant. \citet{Castillo2008}
studied water adsorption in HKUST, and found that, in order to reproduce the experimental
adsorption isotherms in the low pressure range, the \emph{ab-initio} derived framework charges needed
to be scaled up by 25\%. \citet{Salles2011} studied adsorption in the hydrophobic MIL-47, finding
that the \emph{ab-initio} charges previously used for modelling \ce{CO2} adsorption needed to be
scaled down by 30\%, in order to reproduce water adsorption behaviour.
The influence of the MOF framework charges on molecular diffusion has received less attention.
The calculated self-diffusion coefficients for \ce{CO2} in ZIF-8 using charges obtained
with the CBAC, REPEAT, DDEC, and ESP methods show significant differences \cite{Zheng2012}. The
latter set of charges provides results in good agreement with experimental values, but the other
three sets overestimate the diffusion coefficient by factors between 1.5 and 20. \citet{Liu2010} used a
different set of charges (as well as different Lennard-Jones potentials), and the calculated
self-diffusion coefficient of \ce{CO2} in ZIF-8 was two times larger than in the previously cited work.
Since in a number of MOFs the proper choice of the framework charges is of key importance to
model correctly the adsorption and diffusion behaviour, it is natural that the simulation of
molecular separation would also be markedly influenced by the electrostatic interactions. For
instance, the simulated \ce{CO2}/\ce{CH4} selectivity in HKUST shows opposite behaviors when charges
are not considered at all and when both host--guest and guest--guest electrostatic interactions
are fully accounted for \cite{Yang2006}.
For quadrupolar molecules, such as \ce{CO2} and \ce{N2}, it has been observed that the atomic charges produce an electric field inside the nanopores that largely enhances the selectivity due to the difference in quadrupole moments \cite{Yang2007}.
In the following section we will present a brief description of the most used methods for
calculating atomic charges in MOFs, referring the reader to the relevant references for a more
in-depth description. Then, we will present the results of the calculations we have carried out, to
illustrate the influence of the structure on the charge calculation of DMOF-1. We will also show
how the different sets of framework charges predict different thermal behaviors of IRMOF-1.
\section{Methods for calculating atomic charges in MOFs}
There are several methods with which to calculate atomic charges. They are always developed
with the aim of providing the most realistic description of the system. But we have to take into
account the fact that atomic charges are not quantum observables. Electron density can be easily
calculated and studied, but, there are no operators to unambiguously determine the charges
associated to each atom. This makes the calculation of charges almost a matter of choice.
Nevertheless, there are several methods that can provide atomic charges which can be used to
model porous materials with reasonable accuracy. We will describe the most widely used
methods to calculate atomic charges, employing quantum mechanical calculations. Methods a)
and b) are based on the population analysis of the wavefunction, methods c), d), e), and f) are
based on the partition of the electron density, methods g), h), and i) are based on the fitting of
the electrostatic potential around the molecule, and methods j) and k) are semiempirical
approaches, the first based on electronegativity equalisation and the other on bond connection
sequences.
\subsection{Mulliken Charges}
Mulliken charges are obtained from the Mulliken Population Analysis \cite{Mulliken1955}. The first step in the
calculation of these charges is to obtain the wavefunction. As in other methods, the partial
charge of atom $A$ ($q_A$) can be calculated as:
\begin{equation}
q_A=Z_A-\int_{V_A}{\rho_A(r)dr}
\end{equation}
where $Z_A$ is the charge of the positively charged atom core, and $\rho_A(r)$ is the electron density
surrounding the core, associated to that atom. This seemingly simple equation becomes very
complex when we want to know which part of the total electron density (which can be easily
calculated with any quantum mechanical calculation) is associated to that particular atom. And
here is where each method makes a different choice. In the Mulliken method the charge is
calculated as:
\begin{equation}
q_A=Z_A-G_A
\end{equation}
where $G_A$ is the gross atom population for atom $A$, which is calculated as the sum of the
population of all orbitals belonging to atom $A$. The population matrix is constructed by
assigning half the electron density to each of the two atoms that share electrons in a bond,
regardless of the electronegativity of the atoms.
Mulliken charges have been widely used, mainly due to the simplicity and computational speed
with which they can be obtained. For this reason they have often been employed in MOFs \cite{Ramsahye2007,Szeto2007,Watanabe2009,Rydn2013,Gaponik2005,Messner2013,Bao2011,Khvostikova2010,Xiong2010,Yang2014}.
There are two main problems with Mulliken charges. Firstly, they are very dependent on the
molecular geometry and the basis set, so that small changes in either the geometry or the basis
set give rise to large differences in the calculated charges \cite{Martin2004}. And secondly, they do not
provide a good description of the degree of covalency in bonds.
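To make the population prescription concrete, the following minimal sketch (a hypothetical two-orbital, H$_2$-like example; the function names are ours) computes $q_A=Z_A-G_A$ using the standard gross population $G_A=\sum_{\mu\in A}(PS)_{\mu\mu}$, where $P$ is the density matrix and $S$ the overlap matrix:

```python
import numpy as np

def mulliken_charges(P, S, Z, orbital_atom):
    """q_A = Z_A - G_A with G_A the Mulliken gross population of atom A,
    G_A = sum over basis functions mu on A of (P S)_{mu mu}."""
    PS_diag = np.diag(P @ S)
    q = np.array(Z, dtype=float)
    for mu, A in enumerate(orbital_atom):
        q[A] -= PS_diag[mu]
    return q

# H2-like model: one orbital per atom, overlap s, one doubly occupied bonding MO
# normalised so that c^T S c = 1.
s = 0.6
S = np.array([[1.0, s], [s, 1.0]])
c = np.array([1.0, 1.0]) / np.sqrt(2.0 * (1.0 + s))  # bonding MO coefficients
P = 2.0 * np.outer(c, c)                             # density matrix, 2 electrons
q = mulliken_charges(P, S, Z=[1, 1], orbital_atom=[0, 1])
assert np.allclose(q, [0.0, 0.0])
```

For this symmetric molecule the two partial charges vanish, reflecting the equal splitting of the overlap population between the bonded atoms.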
\subsection{Natural Population Analysis charges}
In order to overcome the problems associated with the Mulliken method, \citet{Reed1985}
developed the Natural Population Analysis (NPA). NPA charges are calculated using a set of
orthonormal orbitals called natural atomic orbitals (NAOs), which are generated from the
atomic orbitals that form the basis set. NAOs are used to calculate another set of orthonormal
orbitals, called natural bond orbitals (NBOs), which are then used to perform the population
analysis that provides the NPA charges. NPA charges are usually not very
dependent on the molecular conformation or the basis set, but the method has not been developed
for periodic systems, so that the cluster approach (described below in the discussion of ESP-derived charges) must be used if
the charges of a periodic system need to be calculated. That is one of the main reasons why they
have not been used often in the study of MOFs. Nevertheless, there are some studies in which
they have been used \cite{Sevillano2012, Xiong2010, Choomwattana2008, Noro2009}.
\subsection{Bader charges}
These charges are calculated using Bader's atoms-in-molecules (AIM) theory. In this theory the
electron density is partitioned and assigned to the individual atoms by analysing the
gradient and the Laplacian of the electron density. The electron density must be obtained first,
using any quantum mechanical method (HF, post-HF, DFT, etc.). Once we have the electron
density, we look for bond critical points, which are the points along the
line between two bonded atoms at which the electron density is minimal. From each such point a
surface is constructed by following the gradient paths of the electron density; this surface encloses a
certain volume, which is the volume associated with the enclosed atom. The integral of the
electron density within that volume provides the negative charge of the atom, and the partial
charge of the atom is obtained by subtracting that negative charge from the positive
charge of the nucleus.
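The partitioning just described can be illustrated in one dimension. In this sketch the "atoms" are hypothetical Gaussian model densities; the dividing point is the density minimum between the nuclei, and each atom's electron population is the density integrated over its own region:

```python
import numpy as np

# 1D illustration of Bader-style partitioning: locate the density minimum
# between two nuclei and integrate the density on each side. The Gaussian
# model densities (2 and 1 electrons, at x = -1.5 and x = +1.5) are
# hypothetical stand-ins for a real electron density.

x = np.linspace(-6.0, 6.0, 4001)
dx = x[1] - x[0]

def gaussian(x, x0, n_elec, w):
    return n_elec / (w * np.sqrt(2.0 * np.pi)) * np.exp(-0.5 * ((x - x0) / w) ** 2)

rho = gaussian(x, -1.5, 2.0, 0.8) + gaussian(x, +1.5, 1.0, 0.8)

# Critical point: the density minimum along the internuclear axis.
between = (x > -1.5) & (x < 1.5)
x_min = x[between][np.argmin(rho[between])]

# Each atom's electron count is the density integrated over its region;
# the partial charge follows by subtracting it from the nuclear charge.
N_left = rho[x <= x_min].sum() * dx
N_right = rho[x > x_min].sum() * dx
print(x_min, N_left, N_right)
```

Note that the dividing point shifts toward the atom with the smaller density peak, so the populations are not simply split at the midpoint of the bond.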
Despite being useful for providing atomic partial charges, this method has more frequently been
used to get information about the changes in the electron density that take place upon
adsorption \cite{Yang2013} or the differences in electron density when the metal sites of MOFs are
changed \cite{Canepa2013}. Direct use of Bader charges in MOFs is not found very often \cite{Yang2013, Yang2012,Jrgensen2012,Tabrizi2014,Zhou2011,Park2012}.
\subsection{Density Derived Electrostatic and Chemical (DDEC) charges}
This method (developed by \citet{Manz2012}), is based on the atoms-in-molecules method
described above, but there are two main differences: it is designed to incorporate spherical
averaging to minimise atomic multipole magnitudes (in order to get a better description of the
electrostatic potential) and it uses reference ion densities to enhance the transferability and
chemical meaning of the charges.
These charges are better suited to model porous materials than Bader charges, because the latter
do not give a correct description of the electrostatic potential (they predict too large
atomic multipole moments \cite{Manz2010}). This method has been used to study the adsorption of water in
Cu-BTC \cite{Zang2013}, \ce{N2}/\ce{CO2} separation in a large number of MOFs \cite{Haldoupis2012}, \ce{CO2}/\ce{CH4}, \ce{CO2}/\ce{N2} and
\ce{H2}/\ce{CO2} separations in several MOFs \cite{Erucar2014} and separation in \ce{Zr}-Based MOFs \cite{Jasuja2012}.
\subsection{Hirshfeld charges}
In the Hirshfeld method \cite{Hirshfeld1977} the population of each atom is calculated by assuming that the
charge density at each point is shared among the surrounding atoms in direct proportion to their
free-atom densities at the corresponding distances from the nuclei. There have been several
improvements upon the original Hirshfeld scheme, such as the Iterative Hirshfeld (HI) method \cite{Bultinck2007},
the Fractional Occupation Hirshfeld-I (FOHI-D) method \cite{Geldof2014}, and the Extended Hirshfeld
(HE) method \cite{Verstraelen2013}, which has been shown to provide good results for periodic materials \cite{Vanpoucke2012}.
Hirshfeld charges have been used for the development of a MIL-53(\ce{Al}) force field \cite{Vanduyfhuys2012} and
for modelling functionalization effects in MIL-47 \cite{Biswas2013}, among other works.
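The proportional-sharing rule above has a compact numerical form. The one-dimensional sketch below uses hypothetical Gaussian free-atom densities and a made-up "molecular" density, just to show the weighting at work:

```python
import numpy as np

# 1D sketch of Hirshfeld partitioning: at every point, the molecular
# density is shared among atoms in proportion to their free-atom
# densities at that point. All densities here are hypothetical.

x = np.linspace(-6.0, 6.0, 4001)
dx = x[1] - x[0]

def free_atom(x, x0, n_elec, w=0.8):
    return n_elec / (w * np.sqrt(2.0 * np.pi)) * np.exp(-0.5 * ((x - x0) / w) ** 2)

rho_A = free_atom(x, -1.5, 2.0)          # free-atom reference densities
rho_B = free_atom(x, +1.5, 1.0)
rho_mol = 0.9 * rho_A + 1.1 * rho_B      # a made-up "molecular" density

w_A = rho_A / (rho_A + rho_B)            # Hirshfeld weight of atom A
N_A = (w_A * rho_mol).sum() * dx         # electrons assigned to atom A
N_B = ((1.0 - w_A) * rho_mol).sum() * dx
q_A = 2.0 - N_A                          # partial charge: Z_A - N_A
print(N_A, N_B, q_A)
```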
\subsection{Charge Model 5 (CM5) charges}
This method was developed by \citet{Marenich2012} and it uses the charges obtained from a
Hirshfeld population analysis (of a wavefunction obtained with density functional calculations)
as a starting point. The charges are then varied, using a set of parameters derived by fitting to
reference values of the gas-phase dipole moments of 614 molecular structures. CM5 charges
have been successfully used to study hydrocarbon separation \cite{Verma2013} and \ce{N2}/\ce{CH4} separation in
MOF-74 with various types of metal atoms \cite{Lee2013}. These charges can also be used to study the
hydration of molecules in aqueous solutions, obtaining the best results when the charges are
scaled by the factor 1.27 \cite{Vilseck2014}.
One drawback of this method is that it is implemented as a script that uses the output from
the Gaussian09 code as its input. This means that only non-periodic
systems can be studied directly, and the calculation of charges for periodic systems must be performed
using the cluster approach, which is explained in the following subsection.
\subsection{Electrostatic Potential (ESP) derived charges}
\begin{figure}[!h]
\begin{center}
\centering
\includegraphics[width=0.48\textwidth]{Figure1a.png}\\
\includegraphics[width=0.48\textwidth]{Figure1b.png}\\
\includegraphics[width=0.48\textwidth]{Figure1c.png}
\caption{Top) Ball and stick representation of the atoms of the unit cell of DMOF-1 (Zn, O, N, C, and H atoms are represented as light blue, red, dark blue, grey and white atoms respectively). Middle) Cluster created by cutting directly a piece of framework. This cluster cannot be used to model the environment of the BDC ligand and calculate its charges, since there are cleaved bonds that will have very different electronic structures than in the bulk structure. Bottom) Same cluster shown in b), although the cleaved bonds have been saturated with H atoms in order to achieve electronic structures in the terminal N and C atoms that are similar to those in the crystal structure.}
\label{fig:DMOF1_clusterb}
\end{center}
\end{figure}
The first step is the calculation of the electrostatic potential around the molecule of interest,
using any quantum mechanical method. Once this potential is known for each point of space, a
set of initial atomic charges is assigned to each atom. With these initial charges, the potential on
a grid of points placed in a surface around the molecule is calculated, and an iterative method is
followed with which to fit the atomic partial charges that minimise the difference between the
quantum mechanical ESP and the one calculated with the atomic partial charges. There are
various methods to calculate ESP charges (differing in the choice of the points at which to
calculate the potential), such as CHELPG (CHarges from Electrostatic Potentials using a Grid-
based method \cite{Breneman1990}) and Merz-Kollman \cite{Singh1984}. The main drawback of these methods is that they
only allow the calculation of charges for non-periodic systems. For crystals these methods cannot be
applied directly, since the electrostatic potential in periodic systems is only determined up to
a constant shift. This problem has been circumvented by using the
so-called cluster approach (see Figure \ref{fig:DMOF1_clusterb}). This approach consists in using a cluster model of the
crystal, i.e. cutting out a piece of the crystal bulk, in the hope that the ESP-derived charges for this
cluster model will be the same as for the bulk. This approximation works better for larger
clusters, so usually the charges are calculated for clusters of different sizes, until convergence is
achieved. There is another drawback of these methods, which is that when the crystal is
cut to create the cluster model, several bonds will be cleaved, leaving dangling bonds.
These are usually saturated with H atoms or with methyl groups. But these species are not part of
the original crystal, and they might have an influence on the fitted charges. Nevertheless, ESP-derived
charges have been the most widely used to obtain atomic partial charges, with
large success in modelling MOFs \cite{Ramsahye2007,Xiong2010,Vanduyfhuys2012,Sagara2004,Tafipolsky2007,Chen2010,Grosch2012,Zhang2012,Sun2011,Ma2012,Qiao2014}.
Only in the last few years have they been gradually replaced by other methods better suited to periodic systems.
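The fitting procedure outlined above can be condensed into a single constrained least-squares problem (many ESP codes solve it iteratively, but the linear form is equivalent for point charges). Everything below is a hypothetical toy setup: two point charges, a spherical shell of grid points, and a "reference" potential generated from known charges so the fit can be checked:

```python
import numpy as np

# Sketch of an ESP charge fit: find point charges q minimising
# || A q - V_ref ||^2 subject to sum(q) = Q_tot, via a Lagrange multiplier.
# Geometry, grid and reference potential are hypothetical.

rng = np.random.default_rng(0)
atoms = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0]])   # two atoms
q_true = np.array([-0.4, 0.4])                          # charges to recover
Q_tot = 0.0                                             # neutral system

# Grid points on a shell of radius 3 around the molecule (away from nuclei).
grid = rng.normal(size=(200, 3))
grid *= 3.0 / np.linalg.norm(grid, axis=1, keepdims=True)

# A[i, j] = 1 / |r_i - R_j|, so that V = A @ q (bare Coulomb kernel).
A = 1.0 / np.linalg.norm(grid[:, None, :] - atoms[None, :, :], axis=2)
V_ref = A @ q_true          # stands in for the quantum mechanical ESP

# KKT system:  [A^T A  1][q]   [A^T V_ref]
#              [1^T    0][l] = [  Q_tot  ]
n = len(atoms)
M = np.zeros((n + 1, n + 1))
M[:n, :n] = A.T @ A
M[:n, n] = M[n, :n] = 1.0
b = np.concatenate([A.T @ V_ref, [Q_tot]])
q_fit = np.linalg.solve(M, b)[:n]
print(q_fit)   # recovers [-0.4, 0.4] up to numerical precision
```

The CHELPG and Merz-Kollman variants differ mainly in how the grid points are chosen, not in the fit itself.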
\subsection{Repeating Electrostatic Potential Extracted Atomic (REPEAT) charges}
This method is similar to the ESP based methods described above. It was developed by \citet{Campana2009},
with the aim of solving the problems that ESP methods presented in the
study of periodic systems. The key point is the introduction of an error functional which acts on
the relative differences of the potential and not on its absolute values. For non-periodic systems
the REPEAT method provides charges that are very similar to those obtained with the CHELPG
method, and for periodic systems the charges it provides are chemically sound. Another
advantage of REPEAT charges is that the method predicts similar charges when different codes (such as
CPMD, VASP or SIESTA) are employed. This method is becoming very popular for modelling
MOFs \cite{Jasuja2012,Ray2012,Vaidhyanathan2010,Morris2010,Morris2012,Sutrisno2012}.
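The key idea, fitting relative rather than absolute potential values, can be sketched by adding a free constant offset to the least-squares problem of the previous subsection. This is only a schematic of the idea, with a hypothetical geometry, not the actual REPEAT functional:

```python
import numpy as np

# Sketch of the REPEAT idea: a periodic potential is only defined up to a
# constant, so fit point charges to the ESP with a free offset delta:
#   minimise || A q + delta - V_ref ||^2   subject to   sum(q) = Q_tot.
# Geometry and reference data below are hypothetical.

rng = np.random.default_rng(1)
atoms = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0]])
q_true = np.array([-0.4, 0.4])

grid = rng.normal(size=(200, 3))
grid *= 3.0 / np.linalg.norm(grid, axis=1, keepdims=True)
A = 1.0 / np.linalg.norm(grid[:, None, :] - atoms[None, :, :], axis=2)
V_ref = A @ q_true + 0.7        # arbitrary constant shift, as in a crystal

# Design matrix with an extra offset column; unknowns are q_1, q_2, delta.
n = len(atoms)
B = np.hstack([A, np.ones((len(grid), 1))])
M = np.zeros((n + 2, n + 2))
M[:n + 1, :n + 1] = B.T @ B
M[:n, n + 1] = M[n + 1, :n] = 1.0      # Lagrange row/column for sum(q) = 0
b = np.concatenate([B.T @ V_ref, [0.0]])
sol = np.linalg.solve(M, b)
q_fit, delta = sol[:n], sol[n]
print(q_fit, delta)   # recovers the charges despite the constant shift
```

A plain ESP fit against the shifted potential would be biased by the offset; absorbing it into the fit is what makes the approach usable for periodic data.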
\subsection{Density Derived Atomic Point (DDAP) charges}
This method was developed by \citet{Blochl1995}. It is based on the use of plane-waves to
calculate the density of a molecule. Atom-centered Gaussians are used to decouple the density
of the molecule (or each portion of the structure) from its periodic images, and the Ewald
summation is used to calculate their interaction energy. Finally, the charge density is modelled
with a set of atomic point charges. Although these charges can be used to study MOFs, their main
use has been in the study of ionic liquids \cite{Schmidt2010,Dommert2013,Zhang2012b}.
\subsection{Extended Charge Equilibration (EQEq) charges}
This method is based on the Charge Equilibration (QEq) method of \citet{AK1991}.
In the QEq method the charges are calculated using a set of experimental data, namely
atomic ionisation potentials, electron affinities, and atomic radii, with which an atomic chemical
potential is obtained (taking also into account shielded electrostatic interactions between all the
atomic charges). The charges are changed iteratively until equilibrium is reached, i.e. when the
chemical potentials of all atoms are equal. The EQEq method \cite{CE2012} uses fewer fitting parameters,
while maintaining the accuracy. One important aspect of the charge equilibration methods is that
they do not require the calculation of wavefunctions or electron densities; the only data needed
are the positions of the atoms and their atomic numbers. For this reason, these are the fastest
methods in terms of computation time, which makes them very useful for performing screenings
of a large number of materials \cite{CE2012,Wilmer2012b,Bernini2014}. Recent reparametrisations of the QEq method have
been carried out by \citet{Haldoupis2012} and by \citet{Kadantsev2013}.
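The equilibration condition described above (equal chemical potentials plus charge conservation) is, for fixed interaction terms, just a linear system. The sketch below uses made-up electronegativities and hardnesses, and a bare $1/r$ Coulomb term in place of the shielded interactions of the actual QEq/EQEq parametrisations:

```python
import numpy as np

# Minimal charge-equilibration sketch. Chemical potential of atom i:
#   mu_i = chi_i + sum_j J_ij q_j
# Equilibrium: all mu_i equal, with sum_i q_i = Q_tot. The parameters and
# geometry are hypothetical, and the bare Coulomb off-diagonal terms stand
# in for the shielded interactions used in the real QEq/EQEq methods.

pos = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 2.5]])   # Angstrom
chi = np.array([5.0, 3.0])    # electronegativities (eV)
eta = np.array([10.0, 8.0])   # hardnesses, J_ii (eV)
Q_tot = 0.0

n = len(chi)
J = np.diag(eta)
for i in range(n):
    for j in range(i + 1, n):
        r = np.linalg.norm(pos[i] - pos[j])
        J[i, j] = J[j, i] = 14.4 / r   # e^2/(4 pi eps0) ~ 14.4 eV*Angstrom

# Unknowns: the n charges and the common chemical potential mu.
M = np.zeros((n + 1, n + 1))
M[:n, :n] = J
M[:n, n] = -1.0        # chi_i + sum_j J_ij q_j - mu = 0
M[n, :n] = 1.0         # sum_i q_i = Q_tot
b = np.concatenate([-chi, [Q_tot]])
sol = np.linalg.solve(M, b)
q, mu = sol[:n], sol[n]
print(q, mu)   # the more electronegative atom ends up negatively charged
```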
\subsection{Connectivity-based atom contribution (CBAC) charges}
In this method (developed by \citet{Xu2010}) there is no need to perform quantum
calculations, as was also the case for the EQEq method. The basis of this method is the assumption
that atoms with the same bonding connectivity have identical charges in different MOFs. The authors first
obtained the charges of a set of 30 MOFs, using the CHELPG method (with the cluster approach)
from the electron density calculated with unrestricted B3LYP calculations. The basis set
employed was LANL2DZ for the metal atoms and 6-31+G* for the rest. The average charges for
similar atoms were calculated and tabulated. It is therefore possible to obtain the charges of any
MOF, as long as it has the same types of atoms that were studied in the set of 30 MOFs (plus
16 COFs with which the database was subsequently expanded \cite{Zheng2010}). There is one small
drawback associated with the wide range of MOFs that can be studied with this method, which
is that in some cases the structures are not charge neutral. Nevertheless, it is very easy to
calculate charges with this method, and they usually provide good results, so they are frequently
used to model adsorption in MOFs \cite{Robinson2012,Zhuang2011}.
\section{Learning from two examples}
Here we show two examples chosen to illustrate two different aspects related to the charges, i.e.
(a) influence of the framework geometries on the calculated charges and (b) influence of the
chosen charges on structural properties, namely the negative thermal expansion of IRMOF-1.
\subsection{Influence of the framework geometries on the calculated charges}
We have calculated atomic charges of DMOF-1, which exhibits breathing-like flexibility \cite{Dybtsev2004}.
The dabco pillars are disordered along the fourfold crystallographic axis. Such disorder
precludes the direct use of the structure for calculating the charges, due to overlapping atoms.
Thus, the crystal structure reported in the I4/mcm (\# 140) space group needs to be converted to a
description without symmetry (using the P1 space group). The obtained P1 structure has a number
of strained bonds that can be relaxed using a generic force field (we employed the UFF force
field in our case). This structure is labelled DMOF-1-ini. We have labelled as DMOF-1-opti1
the structure obtained after an optimisation at the DFT-D level with the VASP code
\cite{Kresse1993,Kresse1994}. Since the dabco unit has a complex structure and its atoms in the DMOF-1-ini structure
are slightly disordered, this optimisation leads to a configuration with a relatively high energy.
For that reason, we carried out an additional optimisation, in which we first adjusted the
symmetry of the system and then reoptimised it with VASP. We call this third structure
DMOF-1-opti2. All the VASP calculations were carried out employing the PAW potentials \cite{PhysRevB.59.1758},
with the PBE exchange-correlation functional \cite{PhysRevLett.77.3865}, and a cut-off energy of 500 eV. Due to the
large size of the unit cell ($a=15.0630$~\AA\ and $c=19.2470$~\AA) only the gamma point was used.
The framework of DMOF-1 is shown in Figure \ref{fig:DMOF1_clusterb} (top), while the atom labels used for reporting the charges are shown in Figure \ref{fig:DMOF1_clusterc}.
\begin{figure*}[!htp]
\begin{center}
\centering
\includegraphics[width=0.6\textwidth]{dmof1_cluster_label.png}
\caption{Atom labels of DMOF-1 (see Tables 1, 2, and 3 for charges associated to the C3, Zn, and H2 atoms respectively).}
\label{fig:DMOF1_clusterc}
\end{center}
\end{figure*}
The calculations with the VASP code permit the calculation of the REPEAT and Bader charges,
for which we use the codes provided by \citet{Campana2009} and \citet{Tang2009}, respectively.
We also calculated the Mulliken and DDAP charges for the same structures, using the cp2k code \cite{Laino2006} and
the PBE exchange-correlation functional. Finally, we also calculated the EQeq charges, with the code provided by \citet{Wilmer2012b}.
We report, in Table \ref{tbl:tb1_charges}, the range of variation of the charges of the C3 atom for the three studied structures. We can see that, overall,
the values of the calculated charges vary over a very wide range. For a given structure, each
method provides different charges: for instance, the charge of the C3 atom in the
DMOF-1-ini structure ranges from $-0.299$ to $+0.128$ when calculated with the REPEAT method, while
with the DDAP method the range of variation goes from $+0.224$ to $+0.240$. This is not
surprising, given the intrinsic subjectivity associated with the process of
assigning the electron density to each atom. The smallest range of variation is observed for the
Mulliken method, while the Bader charges show the largest range of variation. A
similar behavior is observed for the \ce{Zn} and \ce{H2} atoms, as can be seen in Tables \ref{tbl:tb2_charges} and \ref{tbl:tb3_charges}. The
tables with the charges of the rest of the atoms are presented in the Supporting Information. The
large range of variation of the charges in structures DMOF-1-ini and DMOF-1-opti1 indicates that
the obtained charges cannot be used in force field-based simulations, since atoms
that should have the same chemical behavior are predicted to have very different charges. It is
worth noticing that three of the methods (Bader, DDAP and EQeq) predicted a negative charge
for the \ce{H2} hydrogen atom (see Table \ref{tbl:tb3_charges}).
\begin{table}[!h]
\caption{Range of variation of the atomic partial charges for atom C3, calculated for the structures DMOF-1-ini, DMOF-1-opti1 and DMOF-1-opti2, using 5 different methods, namely REPEAT, Bader, Mulliken, DDAP, and EQeq.}
\centering
\label{tbl:tb1_charges}
\begin{tabular}{cccc}
\hline
Method & DMOF-1-ini & DMOF-1-opti1 & DMOF-1-opti2\\
\hline
REPEAT & -0.299; 0.128 & -0.573; 0.308 & -0.363; 0.137 \\
Bader & -0.03; 0.443 & 0.176; 0.543 & 0.187; 0.598 \\
Mulliken & -0.052; -0.049 & -0.052; -0.042 & -0.040; -0.036 \\
DDAP & 0.224; 0.240 & 0.148; 0.233 & 0.194; 0.223 \\
EQeq & -0.119; 0.093 & 0.098; 0.161 & -0.047; -0.033 \\
\hline
\end{tabular}
\end{table}
\begin{table}[!h]
\caption{Range of variation of the atomic partial charges for atom Zn, calculated for the structures DMOF-1-ini, DMOF-1-opti1 and DMOF-1-opti2, using 5 different methods, namely REPEAT, Bader, Mulliken, DDAP, and EQeq.}
\centering
\label{tbl:tb2_charges}
\begin{tabular}{cccc}
\hline
Method & DMOF-1-ini & DMOF-1-opti1 & DMOF-1-opti2\\
\hline
REPEAT & 0.962; 0.968 & 0.881; 0.926 & 0.920; 0.922 \\
Bader & 1.251; 1.269 & 1.258; 1.285 & 1.074; 1.082 \\
Mulliken & 0.516; 0.519 & 0.502; 0.505 & 0.565; 0.568 \\
DDAP & 0.855; 0.856 & 0.810; 0.831 & 0.806; 0.809 \\
EQeq & 1.092; 1.143 & 1.072; 1.144 & 1.131; 1.132 \\
\hline
\end{tabular}
\end{table}
\begin{table}[!h]
\caption{Range of variation of the atomic partial charges for atom H2, calculated for the structures DMOF-1-ini, DMOF-1-opti1 and DMOF-1-opti2, using 5 different methods, namely REPEAT, Bader, Mulliken, DDAP, and EQeq.}
\centering
\label{tbl:tb3_charges}
\begin{tabular}{cccc}
\hline
Method & DMOF-1-ini & DMOF-1-opti1 & DMOF-1-opti2\\
\hline
REPEAT & 0.048; 0.126 & 0.024; 0.202 & 0.047; 0.147 \\
Bader & -0.158; 0.112 & -0.258; 0.052 &-0.236; 0.081 \\
Mulliken &0.067; 0.073 & 0.064; 0.080 & 0.063; 0.070 \\
DDAP & -0.086; -0.036 & -0.090; 0.004 & -0.087; -0.029 \\
EQeq & -0.030; 0.083 & -0.039; 0.112 & 0.035; 0.038 \\
\hline
\end{tabular}
\end{table}
We have discussed the influence of the method for calculating charges, but even more
interesting is the influence that the geometry of the framework has on the charges. When the
same method is employed, the slight variations in framework geometry that exist between
the three structures induce significant differences in the atomic charges. For example, in Table \ref{tbl:tb1_charges} we
see that the charge of atom C3 calculated with the EQeq method can vary from $-0.119$ to $0.093$
for the DMOF-1-ini structure, but for DMOF-1-opti1 there are no negatively charged C3
atoms. This is a weak point of force field-based calculations, which rely upon the validity of
the charges to provide an adequate description of the electrostatic interactions. The influence of
the geometry on the calculated charges is more marked for the Bader and REPEAT methods,
while the Mulliken method seems to be the one that minimises the spread of charges for atoms
that are symmetrically equivalent. The DDAP method also shows an acceptable spread of
charges, and taking into account the advantages and drawbacks of these two methods,
discussed in the previous section, we would suggest using the DDAP charges for the calculation of
atomic partial charges. If a screening of a large number of structures is to be performed, the use
of DDAP charges becomes unfeasible; in that case, the EQeq method provides reasonably good
charges at a low computational cost, so it would be the method of choice.
\subsection{Influence of the chosen charges on structural properties}
The effect of the charges on the calculation of adsorption heats, diffusion constants and
separation properties has already been treated in the literature, as shown in section 1. Here we
discuss how charges affect the structural behavior of MOFs. To do this, we have selected
IRMOF-1, which is known to show negative thermal expansion \cite{zhou2008origin,dubbeldam2007exceptional}. The atomic
charges reported by \citet{dubbeldam2007exceptional} were scaled by 0.95 and 1.05, and the thermal
behavior was studied by means of molecular dynamics. The framework was modelled by
molecular dynamics simulations in the isothermal-isobaric (NPT) ensemble (fully flexible cell,
using a Nos\'e-Hoover thermostat and a Parrinello-Rahman barostat). Intramolecular interactions
were taken into account employing the force field developed by \citet{dubbeldam2007exceptional}. The
external pressure was set to zero. The simulations were run for 5 ns, using an integration step
of 0.5 fs. Ewald summation was used to calculate the electrostatic energy in the crystalline
framework, and a cut-off radius of 12~\AA\ was used for short-range interactions. We have used the
RASPA code to carry out the simulations \cite{RASPA_code}.
\begin{figure}[!h]
\begin{center}
\centering
\includegraphics[width=0.40\textwidth]{Figure3a.png}
\includegraphics[width=0.38\textwidth]{Figure3b.png}
\caption{Left: Dependence of the cell volume on temperature for IRMOF-1. Right: Dependence of the thermal expansion coefficient on temperature for IRMOF-1.}
\label{fig:MOF5_NTE}
\end{center}
\end{figure}
In Figure \ref{fig:MOF5_NTE}-Left, we show the dependence of the cell volume on temperature in IRMOF-1, for
three different sets of charges. Since the charges are changed homogeneously over the whole unit
cell, and the charges do not affect the bond strengths, it is somewhat surprising that the small
changes introduced in the charges (5\%) produce a significant modification of the (negative)
thermal expansion of IRMOF-1. At each temperature, the cell volume is observed to depend
inversely on the amount of charge scaling, which is evidence of the
role of long-range (Coulombic) interactions in the overall structure of MOFs. However, the rate
of the structural changes with temperature depends directly on the charges, as revealed
by the behavior of the thermal expansion coefficient (Figure \ref{fig:MOF5_NTE}-Right).
This is probably due to a balance between elastic and entropic effects, as long-range forces compete with the
bonding interactions, which are not modified by the charges.
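For reference, the volumetric thermal expansion coefficient discussed here is $\alpha_V(T) = (1/V)\,\mathrm{d}V/\mathrm{d}T$, which can be estimated from a simulated $V(T)$ curve by finite differences. The volume data below are invented to mimic negative thermal expansion; they are not the actual IRMOF-1 simulation results:

```python
import numpy as np

# Estimate the volumetric thermal expansion coefficient
#   alpha_V(T) = (1/V) dV/dT
# from a volume-vs-temperature curve. The V(T) values are invented to
# mimic negative thermal expansion (volume shrinks on heating); they are
# NOT the IRMOF-1 simulation results discussed in the text.

T = np.array([100.0, 200.0, 300.0, 400.0, 500.0])            # K
V = np.array([17350.0, 17310.0, 17265.0, 17215.0, 17160.0])  # Angstrom^3

dVdT = np.gradient(V, T)   # central differences (one-sided at the ends)
alpha = dVdT / V           # K^-1
print(alpha)               # all entries negative -> negative thermal expansion
```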
\section{Conclusions}
We have reviewed the different methods available to calculate atomic partial charges in MOFs,
and we have also presented two examples of materials in which the choice of charges has a big
influence on the results obtained. The decision about which method is best is not a simple
one, and the choice will depend on factors such as the knowledge and experience of the
researcher, the codes available, the type of systems to be studied, etc.
Once a method has been chosen, it is important to check carefully that all charges are
chemically sound. And, if possible, it is desirable to compare the charges obtained with more
than one method. We also suggest charge calculations on structures optimized by different
approaches, as small structural differences might have a large impact on the resulting atomic
charges. We have also shown that not only molecular adsorption, separation and diffusion are
affected by the choice of charges, but also the structural properties, which is particularly
relevant for modeling systems with at least a certain degree of flexibility.
\section{Acknowledgments}
The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement nº [279520]), the Spanish \emph{Ministerio de Economía y Competitividad} (CTQ2013-48396-P), and the Netherlands Council for Chemical Sciences (NWO/CW) through a VIDI grant. SRG thanks the \emph{Ministerio de Economía y Competitividad} for his predoctoral fellowship.
\section{Introduction}
An infinite dimensional Lie group $G$ in Milnor's sense, with exponential map $\exp\colon \mg\supseteq \dom[\exp]\rightarrow G$, is said to have the \emph{strong Trotter property} \cite{HGM} \deff for each $\mu\in C^1([0,1],G)$ with $\mu(0)=e$ and $\dot\mu(0)\in \dom[\exp]$, we have
\begin{align}
\label{sddssddszz}
\textstyle\lim_n \mu(\tau/n)^n=\exp(\tau\cdot \dot\mu(0))\qquad\quad\forall\: \tau\in [0,\ell]
\end{align}
uniformly for each $\ell>0$. As already figured out in \cite{HGM}, this implies\footnote{Although in \cite{HGM} $\dom[\exp]=\mg$ is presumed, the proofs of the mentioned implications carry over to the situation considered in this paper -- provided, of course, that the definitions given in \cite{HGM} for the (strong) commutator- and the Trotter property are adapted in the obvious way.} the \emph{strong commutator property}, and also the \emph{Trotter-} and the \emph{commutator property}, which are relevant, e.g., in the representation theory of infinite dimensional Lie groups \cite{N1}.
More importantly, Theorem I in \cite{HGM} states that $G$ has the strong Trotter property if it is $R$-regular.
Now, $R$-regularity implies $C^0$-continuity of the evolution map, so that Theorem 1 in \cite{RGM} shows that $G$ is \emph{locally $\mu$-convex (constrained in \cite{RGM})}. This condition was originally introduced in \cite{HGGG}; it states that for each continuous seminorm $\uu$ on the modeling space $E$ of $G$, there exists a continuous seminorm $\uu\leq \oo$ on $E$ such that
\begin{align}
\label{opdspoopdpod}
(\uu\cp\chart)\big(\chart^{-1}(X_1)\cdot {\dots}\cdot \chart^{-1}(X_n)\big)\leq \oo(X_1)+{\dots}+\oo(X_n)
\end{align}
holds for all $X_1,\dots,X_n\in E$ with $\oo(X_1)+{\dots}+\oo(X_n) \leq 1$. Evidently, this generalizes the triangle inequality for locally convex vector spaces; and, in general (i.e., without any regularity presumption on $G$), it is equivalent to the statement that the evolution map is $C^0$-continuous on its domain, cf.\ Theorem 1 in \cite{RGM}.
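As a quick sanity check of the abelian comparison (a specialization, not a statement from \cite{HGGG}): for $(G,\cdot)\equiv(E,+)$ with $\chart=\mathrm{id}_E$, \eqref{opdspoopdpod} reads

```latex
\begin{align*}
\uu(X_1+{\dots}+X_n)\leq \oo(X_1)+{\dots}+\oo(X_n)\qquad\quad\text{for}\qquad\quad \oo(X_1)+{\dots}+\oo(X_n)\leq 1,
\end{align*}
```

which indeed holds with $\oo=\uu$ by the triangle inequality for seminorms, even without the normalization condition.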
In this paper, we show that this condition already suffices to ensure validity of \eqref{sddssddszz}; i.e.,
\begin{theorem}
\label{podspopods}
If $G$ is locally $\mu$-convex, then $G$ has the strong Trotter property.
\end{theorem}
In particular, this drops the presumptions made in \cite{HGM} on the domain of the evolution map, as well as
the completeness presumptions made in \cite{HGM} on $\mg$.
\section{Preliminaries}
\label{dsdssd}
In this section, we fix the notations, and discuss the properties of the product integral (evolution map) that we will need in Sect.\ \ref{dskdskjkjdskjdsds} to prove Theorem \ref{podspopods}. The proofs of the facts mentioned but not verified in this section, can be found, e.g., in Sect.\ 3 in \cite{RGM}.
\subsection{Lie Groups}
In the following, $G$ will denote an infinite dimensional Lie group in Milnor's sense \cite{HG,HA,MIL,KHN} that is modeled over the Hausdorff locally convex vector space $E$, with corresponding system of continuous seminorms $\SEM$.
We denote the Lie algebra of $G$ by $\mg$, fix a chart
\begin{align*}
\chart\colon G\supseteq \U\rightarrow \V\subseteq E\qquad\quad\text{with}\qquad\quad e\in \U\qquad\quad\text{and}\qquad\quad \chart(e)=0,
\end{align*}
and identify $\mg\cong E$ via $\dd_e\chart\colon \mg\rightarrow E$ -- Specifically, this means that we will write $\ppp(X)$ instead of $(\pp\cp\dd_e\chart)(X)$ for each $\pp\in \SEM$ and $X\in \mg$ in the following. We let $\mult\colon G\times G\rightarrow G$ denote the Lie group multiplication, $\RT_g:=\mult(\cdot, g)$ the right translation by $g\in G$, and $\Ad\colon G\times \mg\rightarrow \mg$ the adjoint action, i.e., we have
\begin{align*}
\Ad(g,X)\equiv \Ad_g(X):=\dd_e\conj_g(X)\qquad\quad\text{with}\qquad\quad \conj_g\colon G\ni h\mapsto g\cdot h\cdot g^{-1}
\end{align*}
for each $X\in \mg$, and $g\in G$.
\subsection{The Product Integral}
Let $\COMP\equiv\{[r,r']\subseteq \RR\: |\: r<r'\}$ denote the set of all proper compact intervals in $\RR$.
The \emph{right logarithmic derivative} is given by
\begin{align*}
\Der\colon C^1([r,r'],G)\rightarrow C^0([r,r'],\mg),\qquad \mu\mapsto \dd_\mu\RT_{\mu^{-1}}(\dot \mu)\qquad\qquad\forall\: [r,r']\in \COMP.
\end{align*}
We let
$\DIDE:= \bigsqcup_{K\in \COMP}\Der(C^1(K,G))$, with
$\DIDE_K:= \Der(C^1(K,G))$ for each $K\in \COMP$; and define
\begin{align*}
\EV\colon \DIDE_{[r,r']}\rightarrow C_*^{1}([r,r'],G),\qquad\Der(\mu)\mapsto \mu\cdot \mu(r)^{-1}\qquad\qquad\forall\: [r,r']\in \COMP.
\end{align*}
The \emph{product integral} is given by
\begin{align*}
\textstyle\innt_s^t\phi:= \EV\big(\phi|_{[s,t]}\big)(t)\in G\qquad\quad \forall \:[s,t]\subseteq \dom[\phi],\: \phi\in \DIDE;
\end{align*}
and we let $\innt\phi\equiv\innt_r^{r'}\phi$ for $\phi\in \DIDE$ with $\dom[\phi]=[r,r']$.
\begin{remark}
Evidently, for $[r,r']\equiv[0,1]$, $\innt \phi$ just equals the ``small evolution map'', usually denoted by $\EVE$ in the literature. Moreover, $\innt \phi$ equals the Riemann integral in the case that $(G,\cdot)\equiv(E,+)$ is the additive group of a locally convex vector space $E$ -- The formulas \ref{kdsasaasassaas}--\ref{subst} below then just generalize the well-known formulas for the Riemann integral.\hspace*{\fill}$\ddagger$
\end{remark}
\noindent
We have the following elementary identities:
\vspace{2pt}
\begingroup
\setlength{\leftmargini}{17pt}
{
\renewcommand{\theenumi}{\emph{\alph{enumi})}}
\renewcommand{\labelenumi}{\theenumi}
\begin{enumerate}
\item
\label{kdsasaasassaas}
\hspace{3pt}$\textstyle\innt_r^t \phi \cdot \innt_r^t\psi=\innt_r^t \phi+\Ad_{[\innt_r^\bullet\phi]^{-1}}(\psi)\qquad\quad\quad\hspace{10.6pt}\forall\: \phi,\psi\in \DIDE_{[r,r']},\: t\in [r,r']$.
\vspace{4pt}
\item
\label{kdskdsdkdslkds}
\hspace{1pt}$\textstyle\big[\!\innt_r^t \phi\big]^{-1} \big[\innt_r^t\psi\big]=\innt_r^t\Ad_{[\innt_r^\bullet\phi]^{-1}}(\psi-\phi)\qquad\quad\forall\: \phi,\psi\in \DIDE_{[r,r']},\: t\in [r,r']$.
\vspace{4pt}
\item
\label{pogfpogf}
\hspace{4pt}For $r=t_0<{\dots}<t_n=r'$ and $\phi\in \DIDE_{[r,r']}$, we have
\begin{align*}
\textstyle\innt_r^t\phi=\innt_{t_{p}}^t \phi\cdot \innt_{t_{p-1}}^{t_{p}} \phi \cdot {\dots} \cdot \innt_{r}^{t_1}\phi\qquad\quad\forall\:t\in (t_p,t_{p+1}],\: p=0,\dots,n-1.
\end{align*}
\vspace{-15pt}
\item
\label{subst}
\hspace{4pt}For $\varrho\colon [\ell,\ell']\rightarrow [r,r']$
of class $C^1$, we have
\begin{align*}
\textstyle\innt_r^{\varrho}\phi=\big[\innt_\ell^\bullet\dot\varrho\cdot \phi\cp\varrho\he\big]\cdot \big[\innt_r^{\varrho(\ell)}\phi\he\big]\qquad\quad\forall\:\phi\in \DIDE_{[r,r']}.
\end{align*}
\end{enumerate}}
\endgroup
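To make the abelian comparison from the above remark concrete (a sketch of the specialization, not a new identity): for $(G,\cdot)\equiv(E,+)$ one has $\Ad_g=\mathrm{id}_E$ and the product integral reduces to the Riemann integral, so that \ref{kdsasaasassaas} and \ref{kdskdsdkdslkds} become the familiar additivity rules

```latex
\begin{align*}
\textstyle\int_r^t \phi(s)\, ds + \int_r^t \psi(s)\, ds
  &=\textstyle \int_r^t \big(\phi(s)+\psi(s)\big)\, ds,\\
\textstyle-\int_r^t \phi(s)\, ds + \int_r^t \psi(s)\, ds
  &=\textstyle \int_r^t \big(\psi(s)-\phi(s)\big)\, ds.
\end{align*}
```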
\noindent
Next, for $X\in \mg$ with $\DIDE_{[0,1]}\ni \phi_X\colon [0,1]\ni t\mapsto X$, we define $\exp(X):= \innt \phi_X$. Evidently, we have $0\in \dom[\exp]$; and it is straightforward from \ref{subst} that\footnote{It even follows that $\RR\cdot X\subseteq \dom[\exp]$ holds; and that $\RR\ni t\mapsto \exp(t\cdot X)$ is a smooth Lie group homomorphism, cf., e.g., Remark 2.1) in \cite{RGM}.}, cf.\ Appendix \ref{App1}
\begin{align}
\label{spodpodspodspods}
X\in \dom[\exp]\qquad\quad\Longrightarrow\qquad\quad \RR_{\geq 0}\cdot X\subseteq \dom[\exp].
\end{align}
Finally, we let $\DP^0(\COMP,\mg)$ denote the set of all piecewise integrable curves; i.e., all maps $\phi\colon [r,r']\rightarrow \mg$ for some $[r,r']\in \COMP$, such that there exist $r=t_0<{\dots}<t_n=r'$ and $\phi[p]\in \DIDE^0_{[t_p,t_{p+1}]}$ with
\begin{align*}
\phi|_{(t_p,t_{p+1})}=\phi[p]|_{(t_p,t_{p+1})}\qquad\quad\forall\: p=0,\dots,n-1.
\end{align*}
In this situation, we define $\innt_r^r\phi:=e$, and let
\begin{align*}
\textstyle\innt_r^t\phi&\textstyle:=\innt_{t_{p}}^t \phi[p] \cdot \innt_{t_{p-1}}^{t_p} \phi[p-1]\cdot {\dots} \cdot \innt_{r}^{t_1}\phi[0]\qquad\quad \forall\: t\in (t_{p}, t_{p+1}].
\end{align*}
A standard refinement argument in combination with \ref{pogfpogf} then shows that this is well defined, i.e., independent of any choices we have made.
It is furthermore not hard to see that, for $\phi,\psi\in \DP^0([r,r'],\mg)$, we have
$\Ad_{[\innt_r^\bullet\phi]^{-1}}(\psi -\phi)\in \DP^0([r,r'],\mg)$ with
\begin{align}
\label{dfdssfdsfd}
\textstyle\big[\innt_r^t \phi\big]^{-1}\big[\innt_r^t \psi\big]=\innt_r^t \Ad_{[\innt_r^\bullet\phi]^{-1}}(\psi -\phi)
\qquad\quad\forall\: t\in [r,r'].
\end{align}
\subsection{Some Estimates}
We recall that, cf.\ Sect.\ 3.4.1 in \cite{RGM}
\begingroup
\setlength{\leftmargini}{17pt}
{
\renewcommand{\theenumi}{\roman{enumi})}
\renewcommand{\labelenumi}{\theenumi}
\begin{enumerate}
\item
\label{as1}
For each compact $\compact\subseteq G$, and each $\qq\in \SEM$, there exists some $\qq\leq \mm\in \SEM$, as well as $U\subseteq G$ open with $\compact\subseteq U$, such that
\begin{align*}
\qqq\cp \Ad_g\leq \mmm\qquad\quad\forall\: g\in U.
\end{align*}
\item
\label{as2}
Suppose that $\im[\mu]\subseteq \U$ holds for $\mu\in C^1([r,r'],G)$. Then, we have $\Der(\mu)=\dermapdiff(\chart\cp\mu,\partial_t(\chart\cp\mu))$,
for the smooth map
\begin{align*}
\dermapdiff\colon& \V\times E\rightarrow \mg,\qquad (x,X)\mapsto \dd_{\chartinv(x)}\RT_{[\chartinv(x)]^{-1}}(\dd_x\chartinv(X)).
\end{align*}
Let now $[r,r']\equiv[0,1]$; and suppose that $[0,\ell]\cdot [0,1/m]\subseteq [0,1]$ holds, for $\ell>0$ and $m\geq 1$. For each $\tau\in [0,\ell]$, we define $\mu_{\tau}\colon [0,1/m]\ni t\mapsto \mu(\tau\cdot t)$; and observe that
\begin{align}
\label{fdpopofdpofd}
\alpha\colon [0,\ell]\times [0,1/m]\rightarrow \mg,\qquad (\tau,t)\mapsto \Der(\mu_\tau)(t)
\end{align}
is continuous, because we have
\begin{align*}
\Der(\mu_{\tau})(t)=\dermapdiff((\chart\cp\mu_\tau)(t),(\partial_t(\chart\cp\mu_\tau))(t))=\dermapdiff((\chart\cp\mu)(\tau\cdot t),\tau\cdot (\partial_t(\chart\cp\mu))(t\cdot \tau))
\end{align*}
for each $\tau\in [0,\ell]$, and each $t\in [0,1/m]$.
\end{enumerate}}
\endgroup
\noindent
We say that $C^0([0,\ell],G)\supseteq\{\mu_n\}_{n\in \NN}\rightarrow \mu\in C^0([0,\ell],G)$ converges uniformly for $\ell>0$ \deff for each neighbourhood $U\subseteq G$ of $e$, there exists some $n_U\in\NN$ with
\begin{align*}
\mu_n(t)\in U\cdot \mu(t)\: \cap\: \mu(t)\cdot U\qquad \quad \forall\: n\geq n_U,\: t\in [0,\ell].
\end{align*}
It is straightforward to see that
\begin{lemma}
\label{fdfddfd}
A sequence $C^0([0,\ell],G)\supseteq\{\mu_n\}_{n\in \NN}\rightarrow \mu\in C^0([0,\ell],G)$ converges uniformly \deff for each neighbourhood $V\subseteq G$ of $e$, there exists some $n_V\in\NN$ with
\begin{align*}
\mu_n(t)\in \mu(t)\cdot V\qquad \quad \forall\: n\geq n_V,\: t\in [0,\ell].
\end{align*}
\end{lemma}
\begin{proof}
The proof is elementary, and can be found in Appendix \ref{App2}.
\end{proof}
\subsection{Continuity of the Integral}
\label{sssxyxyaaaayx}
As already mentioned in the introduction, Theorem 1 in \cite{RGM} shows that
local $\mu$-convexity \eqref{opdspoopdpod} (called constrainedness in \cite{RGM}) is equivalent to continuity of the product integral on $\DIDE\cap C^k([r,r'],\mg)$ for any $k\in \NN\sqcup\{\lip,\infty\}$ and $[r,r']\in \COMP$, w.r.t.\ the $C^0$-topology, i.e., w.r.t.\ the seminorms
\begin{align*}
\textstyle\ppp_\infty(\phi):=\sup_{t\in [r,r']}\ppp(\phi(t))\qquad\quad\forall\: \phi\in \DIDE\cap C^k([r,r'],\mg).
\end{align*}
It was furthermore shown in \cite{RGM} that \eqref{opdspoopdpod} implies that the product integral is continuous at zero on $\DP^0(\COMP,\mg)$ w.r.t.\ the $L^1$-topology; i.e., that, cf.\ Proposition 2 in \cite{RGM}
\begin{proposition}
\label{aaapofdpofdpofdpofd}
If $G$ is locally $\mu$-convex, then for each $\pp\in \SEM$, there exists some $\pp\leq \qq\in \SEM$ with
\begin{align*}
\textstyle\int\qqq(\phi(s))\:\dd s \leq 1\quad\:\text{for}\quad\:\phi\in \DP^0(\COMP,\mg)\qquad\quad\Longrightarrow\qquad\quad \textstyle(\pp\cp\chart)\big(\innt_r^\bullet\phi\big)\leq \int_r^\bullet \qqq(\phi(s))\:\dd s
\end{align*}
for $[r,r']\equiv\dom[\phi]$.
\end{proposition}
Then, using \eqref{dfdssfdsfd}, this generalizes to
\begin{lemma}
\label{podspodspods}
Suppose that $G$ is locally $\mu$-convex; and let $\compacto\subseteq G$ be compact. Then, for each $\pp\in \SEM$,
there exist $\pp\leq\mm\in \SEM$ and $O\subseteq G$ open with $\compacto\subseteq O$, such that for each $[r,r']\in \COMP$, we have
\begin{align*}
\textstyle(\pp\cp\chart)\big([\innt_r^\bullet\phi\big]^{-1}\big[\innt_r^\bullet \psi\big]\big)\leq \int_r^\bullet \mmm(\psi(s)-\phi(s))\:\dd s
\end{align*}
for each $\phi,\psi\in \DP^0([r,r'],\mg)$ with $\im[\innt_r^\bullet\phi] \subseteq O$ and $\int \mmm(\psi(s)-\phi(s))\:\dd s\leq 1$.
\end{lemma}
\begin{proof}
For $\pp\in \SEM$ fixed, we choose $\pp\leq \qq$ as in Proposition \ref{aaapofdpofdpofdpofd}.
Since $\compact:=\compacto^{-1}$ is compact, by \ref{as1}, there exists some $\qq\leq\mm\in \SEM$, as well as $O\subseteq G$ open with $\compacto\subseteq O$, such that
\begin{align*}
\qqq\cp\Ad_{g^{-1}}\leq \mmm\qquad\quad\forall\:g\in O
\end{align*}
holds.
For $\phi,\psi\in \DP^0([r,r'],\mg)$ with $\im[\innt_r^\bullet\phi]\subseteq O$ and $\int\mmm(\psi(s)-\phi(s))\:\dd s\leq 1$, we thus have
\begin{align}
\label{podspodspodsaaaa}
\qqq\big(\Ad_{[\innt_r^\bullet \phi]^{-1}}(\psi-\phi)\big)\leq \mmm(\psi-\phi).
\end{align}
Then, for $\chi:= \Ad_{[\innt_r^\bullet \phi]^{-1}}(\psi-\phi)$, we have $\int\qqq(\chi(s))\: \dd s\leq \int\mmm(\chi(s))\: \dd s\leq 1$; and obtain from Proposition \ref{aaapofdpofdpofdpofd} that
\vspace{-6pt}
\begin{align*}
\textstyle(\pp\cp\chart)\big(\innt_r^t\chi\big)\leq \int_r^t \qqq(\chi(s))\:\dd s\stackrel{\eqref{podspodspodsaaaa}}{\leq} \int_r^t \mmm(\psi(s)-\phi(s))\:\dd s\qquad\quad\forall\: t\in [r,r']
\end{align*}
holds.
Since $\innt_r^t\chi$ equals the right side of \eqref{dfdssfdsfd}, we have shown
\begin{align*}
\textstyle(\pp\cp\chart)\big(\big[\innt_r^\bullet \phi\big]^{-1}\big[\innt_r^\bullet \psi\big]\big)\leq \int_r^\bullet \mmm(\psi(s)-\phi(s))\:\dd s,
\end{align*}
which proves the claim.
\end{proof}
\section{The Strong Trotter Property}
\label{dskdskjkjdskjdsds}
We are now going to prove Theorem \ref{podspopods}. For this, we first observe that
\begin{lemma}
\label{poopo}
Suppose that $L\cdot \phi\subseteq \DP^0([0,1],\mg)$ holds, for $L\in \COMP$ and $\phi\colon [0,1]\rightarrow \mg$. Then,
\begin{align}
\label{kjfdkjdfkjfdfd}
\textstyle\Phi\colon L\times [0,1]\rightarrow G,\qquad (\tau,t)\mapsto \innt_0^t\tau\cdot \phi
\end{align}
is continuous; thus, has compact image.
\end{lemma}
\begin{proof}
Let $\tau\in L$, $t\in[0,1]$, and $h,h'\in [-1,1]$ be such that $\tau+[0,1]\cdot h\subseteq L$ and $t+[0,1]\cdot h'\subseteq [0,1]$ holds. Then,
\begin{align*}
\textstyle\mathrm{B}_{h}:=[\innt_0^t (\tau+h)\cdot \phi]\cdot [\innt_0^{t} \tau\cdot \phi]^{-1}\equiv[\innt_0^{t} \tau\cdot \phi]\cdot\underbrace{\textstyle[\innt_0^{t} \tau\cdot \phi]^{-1}\cdot [\innt_0^t (\tau+h)\cdot \phi]}_{\mathrm{C}_h}\cdot\: [\innt_0^{t} \tau\cdot \phi]^{-1}
\end{align*}
tends to $e$ for $h\rightarrow 0$, because $\mathrm{C}_h$ tends to $e$ for $h\rightarrow 0$ by Lemma \ref{podspodspods}, applied to $\compacto\equiv\im[\innt_0^\bullet\tau\cdot \phi]$. We obtain from \ref{pogfpogf} that
\begin{align*}
\textstyle\Phi(\tau+h,t+h')\cdot \Phi(\tau,t)^{-1} &\textstyle= \big[\underbrace{\textstyle\innt_t^{t+h'} (\tau+h)\cdot \phi}_{\mathrm{A}^+_{h'}}\big]\hspace{16pt}\cdot\:\:\mathrm{B}_{h}
\qquad\quad\text{holds for}\qquad\quad h'>0,\\
\textstyle\Phi(\tau+h,t+h')\cdot \Phi(\tau,t)^{-1}&\textstyle= \textstyle\big[\underbrace{\textstyle\innt_{t-|h'|}^{t} (\tau+h)\cdot \phi}_{A_{h'}^-}\big]^{-1}\cdot\:\: \mathrm{B}_{h}
\qquad\quad\text{holds for}\qquad\quad h'<0.\\[-20pt]
\end{align*}
Since the integrands are bounded, Proposition \ref{aaapofdpofdpofdpofd} shows that $\mathrm{A}^{\pm}_{h'}\rightarrow e$ for $h'\rightarrow 0$, uniformly in $h$; from which the claim is clear.
\end{proof}
Combining this with Lemma \ref{podspodspods}, we obtain
\begin{corollary}
\label{podspodspodsdd}
Suppose that $G$ is locally $\mu$-convex; and that $L\cdot \phi\subseteq \DP^0([0,1],\mg)$ holds, for $L\in \COMP$ and $\phi\colon [0,1]\rightarrow \mg$. Then, for each $\pp\in \SEM$,
there exists some $\pp\leq\mm\in \SEM$, such that
\begin{align*}
\textstyle(\pp\cp\chart)\big([\innt_0^\bullet \tau\cdot \phi\big]^{-1}\big[\innt_0^\bullet \psi\big]\big)\leq \int_0^\bullet \mmm(\psi(s)-\tau\cdot \phi(s))\: \dd s
\end{align*}
holds for each $\tau\in L$ and $\psi\in \DP^0([0,1],\mg)$ with $\int \mmm(\psi(s)-\tau\cdot \phi(s))\:\dd s\leq 1$.
\end{corollary}
\begin{proof}
Let $\Phi$ be defined by \eqref{kjfdkjdfkjfdfd}. Since Lemma \ref{poopo} shows that $\compacto\equiv\im[\Phi]$ is compact, the claim is clear from Lemma \ref{podspodspods}.
\end{proof}
We are ready for the
\begin{proof}[Proof of Theorem \ref{podspopods}]
We fix $\ell>0$, let $X:=\dot\mu(0)$; and choose $m \geq 1$ so large that $\mu([0,\ell/m])\subseteq \dom[\chart]\equiv \U$ holds. We obtain from \eqref{spodpodspodspods} that
\begin{align}
\label{lkdflkdflkdffkld}
\{X_\tau\equiv\tau\cdot X\:|\: \tau\in [0,\ell]\}\subseteq \dom[\exp]\qquad\quad\text{holds, implying}\qquad\quad [0,\ell]\cdot \phi_X\subseteq \DP^0([0,1],\mg);
\end{align}
so that $L\equiv [0,\ell]$, and $\phi\equiv \phi_X$ fulfill the assumptions of Corollary \ref{podspodspodsdd}.
Now,
\begingroup
\setlength{\leftmargini}{12pt}
{
\renewcommand{\theenumi}{{\rm \Alph{enumi})}}
\renewcommand{\labelenumi}{\theenumi}
\begin{itemize}
\item
\label{as100}
For $\tau\in [0,\ell]$ and $n\geq m$, we let
\begin{align*}
\chi_{\tau,n}:=\Der(\mu_{\tau})|_{[0,1/n]}\qquad\qquad\text{for}\qquad\qquad\mu_{\tau}\colon [0,1/m]\rightarrow G,\qquad t\mapsto \mu(\tau\cdot t);
\end{align*}
and define $\phi_{\tau,n}\in \DP^0([0,1],\mg)$ by (``we put $\chi_{\tau,n}$ $n$ times in a row'')
\begin{align*}
\phi_{\tau,n}|_{[p/n,(p+1)/n]}\colon [p/n,(p+1)/n]\ni t\mapsto \chi_{\tau,n}(t-p/n)\qquad\quad\forall\: p=0,\dots, n-1.
\end{align*}
\item
\label{as10}
For $\tau\in [0,\ell]$, $n\geq m$, and $0\leq p\leq n-1$, we apply \ref{subst} to $\varrho_p\colon [p/n,(p+1)/n]\ni t\mapsto t-p/n\in [0,1/n]$; and obtain
\begin{align}
\label{podspodsaaaaaaa}
\textstyle\innt \phi_{\tau,n}|_{[p/n,(p+1)/n]}=\innt \chi_{\tau,n}\cp\varrho_p=\innt\dot\varrho_p\cdot \chi_{\tau,n}\cp\varrho_p\stackrel{\ref{subst}}{=}\innt\chi_{\tau,n}=\mu_\tau(1/n)=\mu(\tau/n).
\end{align}
Then, \ref{pogfpogf} provides us with
\begin{align}
\label{as11}
\textstyle \innt\phi_{\tau,n} \stackrel{\ref{pogfpogf}}{=} \innt\phi_{\tau,n}|_{[(n-1)/n,1]}\cdot {\dots} \cdot\innt\phi_{\tau,n}|_{[0,1/n]}\stackrel{\eqref{podspodsaaaaaaa}}{=}\mu(\tau/n)^n.
\end{align}
\item
\label{as12}
For each $\tau\in [0,\ell]$ and $n\geq m$, we have
\begin{align*}
\textstyle\mmm_\infty(\phi_{\tau,n}-\phi_{X_\tau})=\sup_{t\in[0,1/n]}\mmm(\chi_{\tau,n}(t)-\tau\cdot X)\qquad\quad\forall\:\mm\in \SEM
\end{align*}
with $\chi_{\tau,n}(0)=\Der(\mu_{\tau})(0)=\dot\mu_{\tau}(0)=\tau\cdot \dot\mu(0)=\tau\cdot X$.
\vspace{3pt}
It thus follows from continuity of \eqref{fdpopofdpofd} in \ref{as2}\footnote{Observe that for each $\tau\in [0,\ell]$ and $n\geq m$, we have $\chi_{\tau,n}=\Der(\mu_{\tau})|_{[0,1/n]}$.}, that for each $\mm\in \SEM$ and $0<\epsilon\leq 1$, there exists some $n_{\mm,\epsilon}\geq m$ with
\begin{align}
\label{lkfdlkfd}
\nonumber\mmm_\infty(\phi_{\tau,n}&-\phi_{X_\tau})\leq \epsilon\qquad\qquad\hspace{26.2pt}\forall\:\tau\in [0,\ell],\: n\geq n_{\mm,\epsilon}\qquad\\[2pt]
\text{implying}\qquad\quad\textstyle\int \mmm(\phi_{\tau,n}(s)&-\tau\cdot \phi_X(s))\:\dd s\leq \epsilon\qquad\quad\forall\:\tau\in [0,\ell],\: n\geq n_{\mm,\epsilon}.
\end{align}
\end{itemize}}
\endgroup
\noindent
Let now $\pp\in \SEM$ and $0<\epsilon\leq 1$ be fixed.
We choose $\pp\leq \mm\in \SEM$ as in Corollary \ref{podspodspodsdd} for $L\equiv [0,\ell]$, and $\phi\equiv \phi_X$ there (recall that $[0,\ell]\cdot \phi_X\subseteq \DP^0([0,1],\mg)$ holds by \eqref{lkdflkdflkdffkld}); and let $n_{\mm,\epsilon}\geq m$ be as in \eqref{lkfdlkfd}. Since we have
\begin{align*}
\textstyle\innt_0^t \tau\cdot \phi_X=\exp(t\cdot \tau\cdot X)\qquad\quad\forall\: t\in [0,1],\:\tau\in [0,\ell],
\end{align*}
we obtain from \eqref{lkfdlkfd} and Corollary \ref{podspodspodsdd} that
\vspace{-8pt}
\begin{align*}
\textstyle(\pp\cp\chart)\big(\exp(t\cdot \tau\cdot X)^{-1}\cdot \innt_0^t \phi_{\tau,n}\big)\leq \int_0^t \mmm(\phi_{\tau,n}(s)-\tau\cdot \phi_X(s))\: \dd s\stackrel{\eqref{lkfdlkfd}}{\leq} \epsilon
\end{align*}
holds, for each $t\in [0,1]$, $\tau\in[0,\ell]$, and $n\geq n_{\mm,\epsilon}$.
It is thus clear that for each open neighbourhood $V\subseteq G$ of $e$, there exists some $n_V\geq m$ with
\begin{align*}
\textstyle\mu(\tau/n)^n\stackrel{\eqref{as11}}{=}\innt \phi_{\tau,n}\in \exp(\tau\cdot \dot\mu(0))\cdot V\qquad\quad\: \forall\:n\geq n_V,\: \tau\in [0,\ell];
\end{align*}
so that the claim is clear from Lemma \ref{fdfddfd}.
\end{proof}
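In words (assuming, as the proof implicitly does via $\Der(\mu_\tau)(0)=\dot\mu_\tau(0)$, that $\mu(0)=e$ holds): for each $C^1$-curve $\mu$ and each $\ell>0$, the above estimates show

```latex
\begin{align*}
\textstyle\lim_{n\rightarrow \infty}\mu(\tau/n)^n=\exp(\tau\cdot \dot\mu(0))\qquad\quad\text{uniformly for}\quad\tau\in [0,\ell],
\end{align*}
```

with uniform convergence understood in the sense of Lemma \ref{fdfddfd}; this is the strong Trotter property.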
\section*{Acknowledgements}
The author thanks K.-H. Neeb for raising the question whether the strong Trotter property holds in the locally $\mu$-convex context. He furthermore thanks K.-H. Neeb and H. Gl\"ockner for general remarks on a draft of the present article.
This work has been supported by the Alexander von Humboldt foundation of Germany.
\addtocontents{toc}{\protect\setcounter{tocdepth}{0}}
\section{Introduction}
Traffic is the pulse of a city that impacts the daily life of millions of people. One of the most fundamental questions for future smart cities is how to build an efficient transportation system. To address this question, a critical component is an accurate demand prediction model. The better we can predict demand on travel, the better we can pre-allocate resources to meet the demand and avoid unnecessary energy consumption. Currently, with the increasing popularity of taxi requesting services such as Uber and Didi Chuxing, we are able to collect massive demand data at an unprecedented scale. The question of how to utilize big data to better predict traffic demand has drawn increasing attention in AI research communities.
In this paper, we study the taxi demand prediction problem: how to predict the number of taxi requests for a region at a future timestamp by using historical taxi request data. In the literature, there has been a long line of studies on traffic data prediction, including traffic volume, taxi pick-ups, and traffic in/out flow volume. To predict traffic, time series prediction methods have frequently been used. Representatively, the autoregressive integrated moving average (ARIMA) model and its variants have been widely applied to traffic prediction~\cite{li2012prediction,moreira2013predicting,shekhar2008adaptive}. Building on time series prediction methods, recent studies further consider spatial relations~\cite{deng2016latent,tong2017sim} and external context data (e.g., venue, weather, and events)~\cite{pan2012utilizing,wu2016interpreting}. While these studies show that prediction can be improved by considering various additional factors, they still fail to capture the complex nonlinear spatial-temporal correlations.
Recent advances in deep learning have enabled researchers to model the complex nonlinear relationships and have shown promising results in computer vision and natural language processing fields~\cite{lecun2015deep}. This success has inspired several attempts to use deep learning techniques on traffic prediction problems. Recent studies \cite{zhang2016deep,zhang2016dnn} propose to treat the traffic in a city as an image and the traffic volume for a time period as pixel values. Given a set of historical traffic images, the model predicts the traffic image for the next timestamp. Convolutional neural network (CNN) is applied to model the complex spatial correlation. \citeauthor{yu2017deep}~\shortcite{yu2017deep} proposes to use Long Short Term Memory networks (LSTM) to predict loop sensor readings.
They show the proposed LSTM model is capable of modeling complex sequential interactions.
These pioneering attempts show superior performance compared with previous methods based on traditional time series prediction methods. However, none of them consider spatial relation and temporal sequential relation simultaneously.
In this paper, we harness the power of CNN and LSTM in a joint model that captures the complex nonlinear relations of both space and time. However, we cannot simply apply CNN and LSTM to the demand prediction problem: treating the demand over an entire city as an image and applying CNN to this image does not achieve the best result. We observe that including regions with weak correlations in the prediction of a target region actually hurts performance. To address this issue, we propose a novel local CNN method which only considers spatially nearby regions. The local CNN method is motivated by the First Law of Geography: ``near things are more related than distant things''~\cite{tobler1970computer}, and it is also supported by observations from real data that demand patterns are more correlated for spatially close regions.
While the local CNN method filters out weakly correlated remote regions, it fails to cover the case that two locations are spatially distant but similar in their demand patterns (i.e., close in the semantic space). For example, residential areas may have high demand in the morning when people commute to work, and commercial areas may have high demand on weekends. We propose to use a graph of regions to capture this latent semantics, where an edge represents the similarity of demand patterns for a pair of regions. Regions are then encoded into vectors via a graph embedding method, and these vectors are used as context features in the model. Finally, a fully connected neural network component is used for prediction.
Our method is validated via large-scale real-world taxi demand data from Didi Chuxing. The dataset contains taxi demand requests through Didi service in the city of Guangzhou in China over a two-month span, with about 300,000 requests per day on average. We conducted extensive experiments to compare with state-of-the-art methods and have demonstrated the superior performance of our proposed method.
In summary, our contributions are as follows:
\begin{itemize}
\item We proposed a unified multi-view model that jointly considers the spatial, temporal, and semantic relations.
\item We proposed a local CNN model that captures local characteristics of regions in relation to their neighbors.
\item We constructed a region graph based on the similarity of demand patterns in order to model the correlated but spatially distant regions. The latent semantics of regions are learnt through graph embedding.
\item We conducted extensive experiments on a large-scale taxi request dataset from Didi Chuxing. The results show that our method consistently outperforms the competing baselines.
\end{itemize}
\section{Related Work}
Problems of traffic prediction could include predicting any traffic related data, such as traffic volume (collected from GPS or loop sensors), taxi pick-ups or drop-offs, traffic flow, and taxi demand (our problem). The problem formulation process for these different types of traffic data is the same. Essentially, the aim is to predict a traffic-related value for a location at a timestamp. In this section, we will discuss the related work on traffic prediction problems.
The traditional approach is to use time series prediction method. Representatively, autoregressive integrated moving average (ARIMA) and its variants have been widely used in traffic prediction problem~\cite{shekhar2008adaptive,li2012prediction,moreira2013predicting}.
Recent studies further explore the utility of external context data, such as venue types, weather conditions, and event information~\cite{pan2012utilizing,wu2016interpreting,wang2017deepsd,tong2017sim}. In addition, various techniques have also been introduced to model spatial interactions. For example,~\citeauthor{deng2016latent}~\shortcite{deng2016latent} used matrix factorization on road networks to capture correlations among road-connected regions for predicting traffic volume. Several studies~\cite{tong2017sim,ide2011trajectory,zheng2013time} also propose to smooth the prediction differences for nearby locations and time points via regularization, exploiting spatial and temporal closeness. These studies assume traffic in nearby locations should be similar. However, all of these methods are based on time series prediction methods and fail to model the complex nonlinear relations of space and time.
Recently, the success of deep learning in the fields of computer vision and natural language processing~\cite{lecun2015deep,krizhevsky2012imagenet} motivates researchers to apply deep learning techniques on traffic prediction problems. For instance, \citeauthor{wang2017deepsd}~\shortcite{wang2017deepsd} designed a neural network framework using context data from multiple sources and predict the gap between taxi supply and demand.
The method uses extensive features, but does not model the spatial and temporal interactions.
A line of studies applied CNN to capture spatial correlation by treating the entire city's traffic as images. For example, \citeauthor{ma2017learning}~\shortcite{ma2017learning} utilized CNN on images of traffic speed for the speed prediction problem. \citeauthor{zhang2016dnn}~\shortcite{zhang2016dnn} and \citeauthor{zhang2016deep}~\shortcite{zhang2016deep} proposed to use residual CNN on images of traffic flow. These methods simply apply CNN to the whole city and use all regions for prediction. We observe that utilizing irrelevant regions (e.g., remote regions) for prediction of the target region might actually hurt the performance. In addition, while these methods do use traffic images of historical timestamps for prediction, they do not explicitly model the temporal sequential dependency.
Another line of studies uses LSTM for modeling sequential dependency.
\citeauthor{yu2017deep}~\shortcite{yu2017deep} proposed to apply Long-short-term memory (LSTM) network and autoencoder to capture the sequential dependency for predicting the traffic under extreme conditions, particularly for peak-hour and post-accident scenarios. However, they do not consider the spatial relation.
In summary, the biggest difference of our proposed method compared with literature is that we consider \emph{both} spatial relation and temporal sequential relation in a joint deep learning model.
\section{Preliminaries}
In this section, we first fix notation and define the taxi demand prediction problem. Following previous studies~\cite{zhang2016deep,wang2017deepsd}, we define the set of non-overlapping locations $L = \{l_1,l_2,..., l_i, ... ,l_N\}$ as rectangular partitions of a city,
and the set of time intervals as $\mathcal{I} = \{I_0, I_1, ..., I_t, ..., I_T\}$. The length of each time interval is set to 30 minutes. Alternatively, more sophisticated ways of partitioning can also be used, such as partitioning the space by road network~\cite{deng2016latent} or into hexagons. However, this is not the focus of this paper, and our methodology still applies. Given the set of locations $L$ and time intervals $\mathcal{I}$, we further define the following.
\textbf{Taxi request}:
A taxi request $o$ is defined as a tuple $(o.t, o.l, o.u)$, where $o.t$ is the timestamp, $o.l$ represents the location, and $o.u$ is the user identification number. The user identifications are used for filtering duplicated and spammer requests.
\textbf{Demand}:
The demand is defined as the number of taxi requests at one location per time point, i.e.,
$y_t^i = |\{o : o.t\in I_t \wedge o.l\in l_i \}| $, where $|\cdot|$ denotes the cardinality of the set. For simplicity, for the rest of the paper we use the index of time intervals $t$ to represent $I_t$, and the index of locations $i$ to represent $l_i$.
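The demand definition above can be sketched in a few lines (an illustrative Python sketch with a hypothetical data layout, not the authors' code; duplicate filtering via $o.u$ is approximated by counting each user at most once per region and interval):

```python
from collections import Counter

def compute_demand(requests, interval_len=1800):
    """Count taxi requests per (region, interval): y_t^i = |{o : o.t in I_t and o.l in l_i}|.

    `requests` is a list of (timestamp_sec, region_id, user_id) tuples;
    interval_len=1800 seconds corresponds to the 30-minute intervals above.
    """
    seen = set()
    demand = Counter()
    for t, region, user in requests:
        interval = int(t // interval_len)   # index of the time interval I_t
        key = (interval, region, user)
        if key in seen:                     # filter duplicated requests by user
            continue
        seen.add(key)
        demand[(interval, region)] += 1
    return demand
```

The returned counter maps each $(t, i)$ pair to the demand value $y_t^i$.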
\textbf{Demand prediction problem}:
The demand prediction problem aims to predict the demand at time interval $t+1$, given the data until time interval $t$. In addition to historical demand data, we can also incorporate context features such as temporal features, spatial features, meteorological features (refer to Data Description section for more details). We denote those context features for a location $i$ and a time point $t$ as a vector $ \mathbf{e}^i_t \in \mathbb{R}^r$, where $r$ is the number of features. Therefore, our final goal is to predict
\begin{equation*}
y_{t+1}^i = \mathcal{F}(\mathcal{Y}^L_{t-h,...,t}, \mathcal{E}^L_{t-h,...,t})
\end{equation*}
for $i \in L$, where $\mathcal{Y}^L_{t-h,...,t}$ are historical demands and $\mathcal{E}^L_{t-h,...,t}$ are context features for all locations $L$ for time intervals from $t-h$ to $t$, where $t-h$ denotes the starting time interval. We define our prediction function $\mathcal{F}(\cdot)$ on all regions and previous time intervals up to $t-h$ to capture the complex spatial and temporal interaction among them.
\section{Proposed DMVST-Net Framework}
\begin{figure*}[t]
\centering
\includegraphics[width=0.82\textwidth]{framework}
\caption{The Architecture of DMVST-Net.
(a). The spatial component uses a local CNN to capture spatial dependency among nearby regions. The local CNN includes several convolutional layers, with a fully connected layer at the end to obtain a low-dimensional representation. (b). The temporal view employs an LSTM model, which takes the representations from the spatial view and concatenates them with context features at the corresponding times.
(c). The semantic view first constructs a weighted graph of regions (with weights representing functional similarity). Nodes are encoded into vectors. A fully connected layer is used at the end for jointly training. Finally, a fully connected neural network is used for prediction.}
\label{fig:framework}
\end{figure*}
In this section, we provide details for our proposed Deep Multi-View Spatial-Temporal Network (DMVST-Net) framework, i.e., our prediction function $\mathcal{F}$. Figure~\ref{fig:framework} shows the architecture of our proposed method. Our proposed model has three views: spatial, temporal, and semantic view.
\subsection{Spatial View: Local CNN}
As we mentioned earlier, including regions with weak correlations to predict a target region actually hurts the performance. To address this issue, we propose a local CNN method which only considers spatially nearby regions. Our intuition is motivated by the First Law of Geography~\cite{tobler1970computer} - ``near things are more related than distant things''.
As shown in Figure~\ref{fig:framework}(a), at each time interval $t$, we treat one location $i$ together with its surrounding neighborhood as one $S \times S$ image (e.g., a $7\times 7$ image in Figure~\ref{fig:framework}(a)) with one channel of demand values (with $i$ at the center of the image), where the size $S$ controls the spatial granularity. We use zero padding for locations at the boundaries of the city. As a result, for each location $i$ and time interval $t$, we have an image as a single-channel tensor $\mathbf{Y}_t^{i} \in \mathbb{R}^{S\times S\times 1}$. The local CNN takes $\mathbf{Y}_t^{i}$ as input $\mathbf{Y}_t^{i,0}$ and feeds it into $K$ convolutional layers. The transformation at each layer $k$ is defined as follows:
\begin{equation}
\mathbf{Y}_t^{i,k}=f(\mathbf{Y}_t^{i,k-1}\ast \mathbf{W}_t^{k} +\mathbf{b}_t^{k}),
\end{equation}
where $\ast$ denotes the convolutional operation and $f(\cdot)$ is an activation function. In this paper, we use the rectifier function as the activation, i.e., $f(z)=\max(0, z)$. $\mathbf{W}_t^{k}$ and $\mathbf{b}_t^{k}$ are two sets of parameters in the $k^{th}$ convolution layer. Note that the parameters $\mathbf{W}_t^{1,...,K}$ and $\mathbf{b}_t^{1,...,K}$ are shared across all regions $i \in L$ to make the computation tractable.
After $K$ convolution layers, we use a flatten layer to transform the output $\mathbf{Y}_t^{i,K}\in \mathbb{R}^{ S \times S \times \lambda}$ to a feature vector $\mathbf{s}^i_t \in \mathbb{R}^{ S^2 \lambda}$ for region $i$ and time interval $t$. At last, we use a fully connected layer to reduce the dimension of spatial representations $\mathbf{s}^i_t$, which is defined as:
\begin{equation}
\hat{\mathbf{s}}^i_t=f(W^{fc}_t\mathbf{s}^i_t+b^{fc}_t),
\end{equation}
where $W^{fc}_t$ and $b^{fc}_t$ are two learnable parameter sets at time interval $t$. Finally, for each time interval $t$, we get the $\hat{\mathbf{s}}_t^i \in \mathbb{R}^d$ as the representation for region $i$.
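The construction of the local CNN inputs $\mathbf{Y}_t^i$ can be sketched with numpy alone (an illustrative sketch with our own function and variable names, not the authors' code):

```python
import numpy as np

def local_patches(demand_map, S=7):
    """Extract a zero-padded S x S neighborhood around every region.

    `demand_map` is an (H, W) array of demand values for one time interval;
    the output has shape (H, W, S, S, 1): one single-channel "image" per
    region, with the target region at the center of its patch.
    """
    assert S % 2 == 1, "S must be odd so the target region sits at the center"
    pad = S // 2
    padded = np.pad(demand_map, pad, mode="constant")  # zero padding at city boundary
    H, W = demand_map.shape
    patches = np.empty((H, W, S, S, 1))
    for i in range(H):
        for j in range(W):
            patches[i, j, :, :, 0] = padded[i:i + S, j:j + S]
    return patches
```

Each patch is then fed through the $K$ shared convolutional layers described above.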
\subsection{Temporal View: LSTM}
The temporal view models sequential relations in the demand time series. We propose to use a Long Short-Term Memory (LSTM) network as our temporal view component. LSTM~\cite{hochreiter1997long} is a neural network structure that models sequential dependencies by recursively applying a transition function to the hidden state vector of the input. It was proposed to address the exploding and vanishing gradient problems that classic Recurrent Neural Networks~(RNNs) suffer from when trained on long sequences~\cite{hochreiter2001gradient}.
LSTM learns sequential correlations stably by maintaining a \emph{memory cell} $\mathbf{c}_t$ in time interval $t$, which can be regarded as an accumulation of previous sequential information. In each time interval, LSTM takes as input $\mathbf{g}_t^i$, $\mathbf{h}^i_{t-1}$, and $\mathbf{c}^i_{t-1}$, and the new information is accumulated to the memory cell when the \emph{input gate} $\mathbf{i}^i_t$ is activated. In addition, LSTM has a \emph{forget gate} $\mathbf{f}^i_t$: if the forget gate is activated, the network can forget the previous memory cell $\mathbf{c}^i_{t-1}$. The \emph{output gate} $\mathbf{o}^i_t$ controls the output of the memory cell. In this study, the architecture of LSTM is formulated as follows:
\begin{equation}
\begin{aligned}
&\mathbf{i}^i_t=\sigma(\mathbf{W}_i\mathbf{g}_t^i+\mathbf{U}_i\mathbf{h}^i_{t-1}+\mathbf{b}_i),\\
&\mathbf{f}^i_t=\sigma(\mathbf{W}_f\mathbf{g}_t^i+\mathbf{U}_f\mathbf{h}^i_{t-1}+\mathbf{b}_f),\\
&\mathbf{o}^i_t=\sigma(\mathbf{W}_o\mathbf{g}_t^i+\mathbf{U}_o\mathbf{h}^i_{t-1}+\mathbf{b}_o),\\
&\mathbf{\theta}^i_t=\tanh(\mathbf{W}_g\mathbf{g}_t^i+\mathbf{U}_g\mathbf{h}^i_{t-1}+\mathbf{b}_g),\\
&\mathbf{c}^i_t=\mathbf{f}^i_t\circ \mathbf{c}^i_{t-1}+\mathbf{i}^i_t\circ \mathbf{\theta}^i_t,\\
&\mathbf{h}^i_t=\mathbf{o}^i_t\circ \tanh(\mathbf{c}^i_t).\\
\end{aligned}
\end{equation}
where $\circ$ denotes the Hadamard product and $\tanh$ is the hyperbolic tangent function; both are applied element-wise. $\mathbf{W}_a, \mathbf{U}_a, \mathbf{b}_a$ ($a\in\{i,f,o,g\}$) are all learnable parameters. The number of time intervals considered in the LSTM is $h$, and the output for region $i$ after $h$ time intervals is $\mathbf{h}^i_t$.
As Figure~\ref{fig:framework}(b) shows, the temporal component takes representations from the spatial view and concatenates them with context features. More specifically, we define:
\begin{equation}
\mathbf{g}_t^i=\hat{\mathbf{s}}^i_t \oplus \mathbf{e}^i_t,
\end{equation}
where $\oplus$ denotes the concatenation operator, therefore, $\mathbf{g}_t^i \in \mathbb{R}^{r+d}$.
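The gate equations above can be written out directly in numpy (a minimal single-step sketch with our own parameter layout, not the trained model):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(g, h_prev, c_prev, params):
    """One LSTM transition following the gate equations above.

    `params` is a triple (W, U, b) of dicts keyed by gate name a in
    {"i", "f", "o", "g"}, with W[a]: (d_h, d_in), U[a]: (d_h, d_h),
    b[a]: (d_h,).
    """
    W, U, b = params
    i = sigmoid(W["i"] @ g + U["i"] @ h_prev + b["i"])      # input gate
    f = sigmoid(W["f"] @ g + U["f"] @ h_prev + b["f"])      # forget gate
    o = sigmoid(W["o"] @ g + U["o"] @ h_prev + b["o"])      # output gate
    theta = np.tanh(W["g"] @ g + U["g"] @ h_prev + b["g"])  # candidate input
    c = f * c_prev + i * theta                              # memory cell update
    h = o * np.tanh(c)                                      # hidden state
    return h, c
```

Iterating this step over the $h$ intervals, with $\mathbf{g}_t^i$ as input at each step, yields the temporal-view output $\mathbf{h}_t^i$.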
\subsection{Semantic View: Structural Embedding}
Intuitively, locations sharing similar functionality may have similar demand patterns; e.g., residential areas may see high demand in the morning when people commute to work, and commercial areas may see high demand on weekends. Similar regions are not necessarily close in space. Therefore, we construct a graph of locations representing functional (semantic) similarity among regions.
We define the semantic graph of locations as $G=(V,E,D)$, where the nodes are the locations ($V=L$), $E \subseteq V\times V$ is the edge set, and $D$ assigns a similarity weight to each edge. We use Dynamic Time Warping (DTW) to measure the similarity $\omega_{ij}$ between node (location) $i$ and node (location) $j$:
\begin{equation}
\omega_{ij} = \exp(-\alpha {\rm DTW}(i, j)),
\end{equation}
where $\alpha$ is a parameter that controls the decay rate with distance (in this paper, $\alpha=1$), and ${\rm DTW}(i, j)$ is the dynamic time warping distance between the demand patterns of the two locations. We use the average weekly demand time series as the demand pattern of a location; the average is computed on the training data. The graph is fully connected, since the similarity is defined between every pair of regions.
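The edge-weight computation can be sketched as follows (ours; we use a textbook $O(nm)$ DTW with the absolute difference as local cost, which is an assumption since the paper does not specify the local cost):

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic-programming DTW distance between sequences a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])               # local cost (assumption)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def edge_weight(pattern_i, pattern_j, alpha=1.0):
    """Similarity weight omega_ij = exp(-alpha * DTW(i, j))."""
    return np.exp(-alpha * dtw(pattern_i, pattern_j))

# toy average weekly demand patterns for two regions (illustrative only)
p1 = [3.0, 5.0, 9.0, 4.0]
p2 = [3.0, 5.0, 9.0, 4.0]
w = edge_weight(p1, p2)   # identical patterns -> DTW = 0 -> weight 1
```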
In order to encode each node into a low dimensional vector and maintain the structural information, we apply a graph embedding method on the graph. For each node $i$ (location), the embedding method outputs the embedded feature vector $\mathbf{m}^i$. In addition, in order to co-train the embedded $\mathbf{m}^i$ with our whole network architecture, we feed the feature vector $\mathbf{m}^i$ to a fully connected layer, which is defined as:
\begin{equation}
\hat{\mathbf{m}}^i=f(W_{fe}\mathbf{m}^i+b_{fe}),
\end{equation}
where $W_{fe}$ and $b_{fe}$ are learnable parameters and $f(\cdot)$ is an activation function. In this paper, we use LINE for generating the embeddings~\cite{tang2015line}.
\subsection{Prediction Component}
Recall that our goal is to predict the demand at $t+1$ given the data till $t$. We join three views together by concatenating $\hat{\mathbf{m}}^i$ with the output $\mathbf{h}_t^i$ of LSTM:
\begin{equation}
\mathbf{q}_t^i=\mathbf{h}_t^i \oplus \hat{\mathbf{m}}^i.
\end{equation}
Note that the output $\mathbf{h}_t^i$ of the LSTM incorporates both the temporal and the spatial view. We then feed $\mathbf{q}_t^i$ to a fully connected network to obtain the final prediction $\hat{y}_{t+1}^i$ for each region. We define our final prediction function as:
\begin{equation}
\hat{y}_{t+1}^i=\sigma(W_{ff}\mathbf{q}^i_t+b_{ff}),
\end{equation}
where $W_{ff}$ and $b_{ff}$ are learnable parameters and $\sigma(x)=1/(1+e^{-x})$ is the sigmoid function. The output of our model lies in $[0,1]$, as the demand values are normalized; we later denormalize the prediction to obtain the actual demand values.
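A minimal sketch of this prediction step (ours; the toy dimensions are assumptions):

```python
import numpy as np

def predict(h, m_hat, W_ff, b_ff):
    """Joint prediction: concatenate the LSTM output h_t^i with the
    semantic embedding m_hat^i, then apply a sigmoid dense layer."""
    q = np.concatenate([h, m_hat])        # q_t^i = h_t^i (+) m_hat^i
    z = W_ff @ q + b_ff
    return 1.0 / (1.0 + np.exp(-z))       # sigmoid keeps the output in (0, 1)

rng = np.random.default_rng(1)
h, m_hat = rng.normal(size=5), rng.normal(size=6)   # toy sizes (assumptions)
W_ff, b_ff = rng.normal(size=(1, 11)), np.zeros(1)
y_hat = predict(h, m_hat, W_ff, b_ff)
```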
\subsection{Loss function}
In this section, we provide details about the loss function used for jointly training our proposed model. The loss function we used is defined as:
\begin{equation}
\label{eq:loss}
\mathcal{L}(\theta) = \sum_{i=1}^N((y^i_{t+1} - \hat{y}^i_{t+1})^2 + \gamma (\frac{y^i_{t+1} -\hat{y}^i_{t+1}}{y^i_{t+1}})^2),
\end{equation}
where $\theta$ denotes all learnable parameters in DMVST-Net and $\gamma$ is a hyperparameter. The loss function consists of two parts: a squared error term and a squared percentage error term. In practice, the mean squared error is dominated by samples with large demand values. To avoid the training being dominated by such samples, we additionally minimize the squared percentage error. Note that, in the experiments, all compared regression methods use the same loss function as defined in Eq.~\eqref{eq:loss} for a fair comparison. The training pipeline is outlined in Algorithm~\ref{alg:outline}. We use Adam~\cite{kingma2014adam} for optimization and implement our model with TensorFlow and Keras~\cite{chollet2015keras}.
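The loss of Eq.~\eqref{eq:loss} translates directly into code; the sketch below (ours; the function name and the value of $\gamma$ are illustrative) computes it over the regions:

```python
import numpy as np

def dmvst_loss(y_true, y_pred, gamma=0.5):
    """Squared error plus gamma times squared relative (percentage) error,
    summed over regions, as in the loss equation. gamma is a hyperparameter."""
    sq = (y_true - y_pred) ** 2
    rel = ((y_true - y_pred) / y_true) ** 2
    return float(np.sum(sq + gamma * rel))

y_true = np.array([0.5, 0.8])
y_pred = np.array([0.4, 0.8])
# region 1: 0.01 + 0.5 * (0.1 / 0.5)^2 = 0.03; region 2: 0
loss = dmvst_loss(y_true, y_pred, gamma=0.5)
```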
\begin{algorithm}[t]
\caption{Training Pipeline of DMVST-Net}
\label{alg:outline}
\KwIn{Historical observations: $\mathcal{Y}^L_{1,...,t}$;
Context features: $\mathcal{E}^L_{t-h,...,t}$;
Region structure graph $G=(V,E,D)$;
Length of the time period $h$;}
\KwOut{Learned DMVST-Net model}
Initialization\;
\For{$\forall i \in L$}{
Use LINE on $G$ and get the embedding result $\mathbf{m}^i$\;
\For{ $\forall t \in [h, T]$}{
$\mathcal{S}_{spa}=[\mathbf{Y}_{t-h+1}^i, \mathbf{Y}_{t-h+2}^i,..., \mathbf{Y}_t^i]$\;
$\mathcal{S}_{cox}=[\mathbf{e}^i_{t-h+1}, \mathbf{e}^i_{t-h+2},..., \mathbf{e}^i_t]$\;
Append $\langle\{\mathcal{S}_{spa}, \mathcal{S}_{cox}, \mathbf{m}^i\},y^i_{t+1}\rangle$ to $\Omega$\;}
}
Initialize all learnable parameters $\theta$ in DMVST-Net\;
\Repeat{stopping criteria is met}{
Randomly select a batch of instances $\Omega_{bt}$ from $\Omega$\;
Optimize $\theta$ by minimizing the loss function Eq.~\eqref{eq:loss} with $\Omega_{bt}$
}
\end{algorithm}
\section{Experiment}
\subsection{Dataset Description}
In this paper, we use a large-scale online taxi request dataset collected from Didi Chuxing, which is one of the largest online car-hailing companies in China. The dataset contains taxi requests from $02/01/2017$ to $03/26/2017$ for the city of Guangzhou.
There are $20\times 20$ regions in our data. The size of each region is $0.7km\times 0.7km$.
There are about $300,000$ requests per day on average. The context features used in our experiment are similar to those used in~\cite{tong2017sim}: temporal features (e.g., the average demand value in the last four time intervals), spatial features (e.g., longitude and latitude of the region center), meteorological features (e.g., weather condition), and event features (e.g., holidays).
In the experiment, the data from $02/01/2017$ to $03/19/2017$ ($47$ days) is used for training, and the data from $03/20/2017$ to $03/26/2017$ ($7$ days) is used for testing. We use half an hour as the length of a time interval. When testing, we use the previous 8 time intervals (i.e., 4 hours) to predict the taxi demand in the next time interval. In our experiment, we filter out samples with demand values less than 10, a common practice in industry, because in real-world applications people rarely care about such low-demand scenarios.
\subsection{Evaluation Metric}
We use Mean Absolute Percentage Error (MAPE) and Root Mean Square Error (RMSE) to evaluate our algorithm, defined as follows:
\begin{equation}
MAPE=\frac{1}{\xi}\sum_{i=1}^{\xi}\frac{|\hat{y}_{t+1}^i-y_{t+1}^i|}{y_{t+1}^i},
\end{equation}
\begin{equation}
RMSE=\sqrt{\frac{1}{\xi} \sum_{i=1}^\xi(\hat{y}_{t+1}^i-y_{t+1}^i)^2},
\end{equation}
where $\hat{y}_{t+1}^i$ and $y_{t+1}^i$ are the predicted value and the real value of region $i$ for time interval $t+1$, and $\xi$ is the total number of samples.
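These two metrics can be computed as follows (a minimal sketch of ours; the sample values are illustrative):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error over all samples."""
    return float(np.mean(np.abs(y_pred - y_true) / y_true))

def rmse(y_true, y_pred):
    """Root mean square error over all samples."""
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

y_true = np.array([10.0, 20.0, 40.0])
y_pred = np.array([12.0, 18.0, 40.0])
# MAPE = (2/10 + 2/20 + 0) / 3 = 0.1; RMSE = sqrt((4 + 4 + 0) / 3)
m, r = mape(y_true, y_pred), rmse(y_true, y_pred)
```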
\subsection{Methods for Comparison}
We compared our model with the following methods, tuning the parameters of each method and reporting its best performance.
\begin{itemize}
\item \textbf{Historical average (HA)}: Historical average predicts the demand using average values of previous demands at the location given in the same relative time interval (i.e., the same time of the day).
\item \textbf{Autoregressive integrated moving average (ARIMA)}: ARIMA is a well-known model for forecasting time series which combines moving average and autoregressive components for modeling time series.
\item \textbf{Linear regression (LR)}: We compare our method with different versions of linear regression methods: ordinary least squares regression (OLSR), Ridge Regression (i.e., with $\ell_2$-norm regularization), and Lasso (i.e., with $\ell_1$-norm regularization).
\item \textbf{Multiple layer perceptron~(MLP)}: We compare our method with a neural network of four fully connected layers. The numbers of hidden units are $128$, $128$, $64$, and $64$, respectively.
\item \textbf{XGBoost}~\cite{chen2016xgboost}: XGBoost is a powerful boosting tree based method and is widely used in data mining applications.
\item \textbf{ST-ResNet}~\cite{zhang2016deep}: ST-ResNet is a deep learning based approach for traffic prediction. The method constructs a city's traffic density map at different times as images. CNN is used to extract features from historical images.
\end{itemize}
We used the same context features for all regression methods above. For fair comparisons, all methods (except ARIMA and HA) use the same loss function as our method defined in Eq.~\eqref{eq:loss}.
We also studied the effect of different view components proposed in our method.
\begin{itemize}
\item \textbf{Temporal view}: For this variant, we used only the LSTM with context features as input. Note that if we do not use any context features and only use the demand value of the last time interval as input, the LSTM does not perform well; the context features are necessary for the LSTM to model the complex sequential interactions.
\item \textbf{Temporal view + Semantic view}: This method captures both temporal dependency and semantic information.
\item \textbf{Temporal view + Spatial (Neighbors) view}: In this variant, we used the demand values of nearby regions at time interval $t$ as $\hat{\mathbf{s}}^i_t$ and combined them with the context features as the input of the LSTM. This variant demonstrates that simply using neighboring regions as features cannot model the complex spatial relations as well as our proposed local CNN does.
\item \textbf{Temporal view + Spatial (LCNN) view}: This variant considers both temporal and local spatial views. The spatial view uses the proposed local CNN for considering neighboring relation. Note that when our local CNN uses a local window that is large enough to cover the whole city, it is the same as the global CNN method. We studied the performance of different parameters and show that if the size is too large, the performance is worse, which indicates the importance of locality.
\item \textbf{DMVST-Net}: Our proposed model, which combines spatial, temporal and semantic views.
\end{itemize}
\subsection{Preprocessing and Parameters}
We normalized the demand values of all locations to $[0,1]$ by Max-Min normalization on the training set. We used one-hot encoding to transform discrete features (e.g., holidays and weather conditions) and Max-Min normalization to scale the continuous features (e.g., the average demand value in the last four time intervals). As our method outputs a value in $[0,1]$, we applied the inverse of the Max-Min transformation fitted on the training set to recover the actual demand values.
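The Max-Min scaling and its inverse can be sketched as follows (ours; the sample values are illustrative):

```python
import numpy as np

def fit_max_min(train):
    """Learn the scaling bounds on the training set only."""
    return float(np.min(train)), float(np.max(train))

def normalize(x, lo, hi):
    return (x - lo) / (hi - lo)

def denormalize(x, lo, hi):
    """Inverse transform used to recover actual demand values."""
    return x * (hi - lo) + lo

train = np.array([10.0, 30.0, 50.0])
lo, hi = fit_max_min(train)            # bounds fitted on training data
scaled = normalize(train, lo, hi)      # values in [0, 1]
recovered = denormalize(scaled, lo, hi)
```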
All these experiments were run on a cluster with four NVIDIA P100 GPUs. The size of each neighborhood considered was set as $9\times 9$ (i.e., $S$ = 9), which corresponds to $6km \times 6km$ rectangles. For spatial view, we set $K=3$ (number of layers), $\tau=3\times 3$ (size of filter), $\lambda=64$ (number of filters used), and $d=64$ (dimension of the output). For the temporal component, we set the sequence length $h=8$ (i.e., 4 hours) for LSTM. The output dimension of graph embedding is set as $32$. The output dimension for the semantic view is set to $6$. We used Sigmoid function as the activation function for the fully connected layer in the final prediction component. Activation functions in other fully connected layers are ReLU. Batch normalization is used in the local CNN component. The batch size in our experiment was set to $64$. The first $90\%$ of the training samples were selected for training each model and the remaining $10\%$ were in the validation set for parameter tuning. We also used early-stop in all the experiments. The early-stop round and the max epoch were set to $10$ and $100$ in the experiment, respectively.
\subsection{Performance Comparison}
\subsubsection{Comparison with state-of-the-art methods.} Table~\ref{tab:baseline} shows the performance of the proposed method as compared to all other competing methods. DMVST-Net achieves the lowest MAPE ($0.1616$) and the lowest RMSE ($9.642$) among all the methods, which is $12.17\%$ (MAPE) and $3.70\%$ (RMSE) relative improvement over the best performance among baseline methods. More specifically, we can see that HA and ARIMA perform poorly (i.e., have a MAPE of $0.2513$ and $0.2215$, respectively), as they rely purely on historical demand values for prediction. Regression methods (OLSR, LASSO, Ridge, MLP and XGBoost) further consider context features and therefore achieve better performance. Note that the regression methods use the same loss function as our method defined in Eq.~\eqref{eq:loss}. However, the regression methods do not model the temporal and spatial dependency. Consequently, our proposed method significantly outperforms those methods.
Furthermore, our proposed method achieves $18.01\%$ (MAPE) and $6.37\%$ (RMSE) relative improvement over ST-ResNet. Compared with ST-ResNet, our proposed method further utilizes LSTM to model the temporal dependency, while at the same time considering context features. In addition, our use of local CNN and semantic view better captures the correlation among regions.
\begin{table}[t]
\begin{center}
\caption{Comparison with Different Baselines}
\begin{tabular}{l|c|c}
\hline
Method & MAPE & RMSE\\\hline
Historical average & 0.2513 & 12.167\\
ARIMA & 0.2215 & 11.932\\
Ordinary least square regression & 0.2063 &10.234\\
Ridge regression & 0.2061 & 10.224\\
Lasso & 0.2091& 10.327\\
Multiple layer perceptron & 0.1840 & 10.609\\
XGBoost& 0.1953 & 10.012\\
ST-ResNet & 0.1971 & 10.298\\\hline
DMVST-Net & \textbf{0.1616} & \textbf{9.642}\\\hline
\end{tabular}
\label{tab:baseline}
\end{center}
\end{table}
\subsubsection{Comparison with variants of our proposed method.} Table~\ref{tab:variants} shows the performance of DMVST-Net and its variants. First, both Temporal view + Spatial (Neighbor) view and Temporal view + Spatial (LCNN) view achieve a lower MAPE than the temporal view alone (reductions of $0.63\%$ and $6.10\%$, respectively), which demonstrates the effectiveness of considering the neighboring spatial dependency. Furthermore, Temporal view + Spatial (LCNN) view significantly outperforms Temporal view + Spatial (Neighbor) view, as the local CNN better captures the nonlinear spatial relations. On the other hand, Temporal view + Semantic view achieves a MAPE of $0.1708$ and an RMSE of $9.789$, both lower than the temporal view alone, demonstrating the effectiveness of our semantic view. Lastly, the performance is best when all views are combined.
\begin{table}[t]
\begin{center}
\caption{Comparison with Variants of DMVST-Net}
\begin{tabular}{l|c|c}
\hline
Method & MAPE & RMSE\\\hline
Temporal view & 0.1721 & 9.812\\
Temporal + Semantic view & 0.1708 & 9.789\\
Temporal + Spatial (Neighbor) view & 0.1710 & 9.796\\
Temporal + Spatial (LCNN) view & 0.1640 & 9.695\\
DMVST-Net & \textbf{0.1616} & \textbf{9.642}\\\hline
\end{tabular}
\label{tab:variants}
\end{center}
\end{table}
\subsection{Performance on Different Days}
Figure~\ref{fig:day} shows the performance of different methods on different days of the week. Due to space limitations, we only show MAPE here; the conclusions for RMSE are the same. We exclude the results of HA and ARIMA, as they perform poorly, and we show Ridge regression as it performs best among the linear regression models. The figure shows that our proposed DMVST-Net consistently outperforms the other methods on all seven days, which demonstrates that our method is robust.
Moreover, we can see that predictions on weekends are generally worse than on weekdays. Since the average number of demand requests is similar ($45.42$ and $43.76$ for weekdays and weekends, respectively), we believe the prediction task is harder for weekends as demand patterns are less regular. For example, we can expect that residential areas may have high demands in the morning hours on weekdays, as people need to transit to work. Such regular patterns are less likely to happen on weekends. To evaluate the robustness of our method, we look at the relative increase in prediction error on weekends as compared to weekdays, i.e., defined as $|\bar{wk}-\bar{wd}|/\bar{wd}$, where $\bar{wd}$ and $\bar{wk}$ are the average prediction error of weekdays and weekends, respectively. The results are shown in Table~\ref{tab:day}. For our proposed method, the relative increase in error is the smallest, at $4.04\%$.
At the same time, the temporal view alone (LSTM) has a relative increase in error of $4.77\%$, while the increase is more than $10\%$ for Ridge regression, MLP, and XGBoost. The more stable performance of LSTM can be attributed to its modeling of the temporal dependency. ST-ResNet has a more consistent performance (a relative increase in error of $4.41\%$), as it further models the spatial dependency. Finally, our proposed method is more robust than ST-ResNet.
\begin{table}[t!]
\begin{center}
\caption{Relative Increase in Error (RIE) on Weekends to Weekdays}
\scriptsize
\begin{tabular}{@{\hskip 0.04in}c@{\hskip 0.04in}|@{\hskip 0.04in}c@{\hskip 0.04in}|@{\hskip 0.04in}c@{\hskip 0.04in}|@{\hskip 0.04in}c@{\hskip 0.04in}|@{\hskip 0.04in}c@{\hskip 0.04in}|@{\hskip 0.04in}c@{\hskip 0.04in}|@{\hskip 0.04in}c@{\hskip 0.04in}}
\hline
Method & RIDGE & MLP & XGBoost & ST-ResNet & Temporal & DMVST-Net \\ \hline
\begin{tabular}[c]{@{}l@{}}RIE\end{tabular} & 14.91\% & 10.71\% & 16.08\% & 4.41\% & 4.77\% & \textbf{4.04\%} \\ \hline
\end{tabular}
\label{tab:day}
\end{center}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=2.8in]{days.pdf}
\caption{The Results of Different Days.}
\label{fig:day}
\end{figure}
\subsection{Influence of Sequence Length for LSTM}
In this section, we study how the sequence length for LSTM affects the performance. Figure~\ref{fig:lstmlen} shows the prediction error of MAPE with respect to the length. We can see that when the length is $4$ hours, our method achieves the best performance. The decreasing trend in MAPE as the length increases shows the importance of considering the temporal dependency. Furthermore, as the length increases to more than $4$ hours, the performance slightly degrades but mainly remains stable. One potential reason is that when considering longer temporal dependency, more parameters need to be learned. As a result, the training becomes harder.
\subsection{Influence of Input Size for Local CNN}
Our intuition was that applying the CNN locally avoids learning relations among weakly related locations. We verified this intuition by varying the input size $S$ of the local CNN: as $S$ becomes larger, the model may fit relations over a larger area. Figure~\ref{fig:depthsize} shows the performance of our method with respect to the size of the surrounding neighborhood map. With three convolutional layers, the method achieves the best performance when the size of the map is $9\times 9$. The prediction error increases as the size decreases to $5 \times 5$, possibly because locally correlated neighboring locations are no longer fully covered. Furthermore, the prediction error increases significantly (by more than $3.46\%$) as the size increases to $13 \times 13$ (where each area covers more than about $40\%$ of the space of Guangzhou). This suggests that locally significant correlations may be averaged out as the size increases. We also increased the number of convolutional layers to four and five, as the CNN then needs to cover a larger area, and observed similar trends in the prediction error, as shown in Figure~\ref{fig:depthsize}. The input size at which the method performs best remains consistent (i.e., a map of size $9\times 9$).
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[height=0.8\textwidth]{lstm_len}
\caption{\label{fig:lstmlen}}
\end{subfigure}
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[height=0.8\textwidth]{depth_size}
\caption{\label{fig:depthsize}}
\end{subfigure}
\caption{(\subref{fig:lstmlen}) MAPE with respect to sequence length for LSTM. (\subref{fig:depthsize}) MAPE with respect to the input size for local CNN.}
\end{figure}
\section{Conclusion and Discussion}
In this paper, we proposed a novel Deep Multi-View Spatial-Temporal Network (DMVST-Net) for predicting taxi demand. Our approach integrates the spatial, temporal, and semantic views, which are modeled by a local CNN, LSTM, and semantic graph embedding, respectively. We evaluated our model on a large-scale taxi demand dataset. The experimental results show that our proposed method significantly outperforms several competing methods.
As deep learning methods are often difficult to interpret, it is important to understand what contributes to the improvement; this is particularly important for policy makers. For future work, we plan to further investigate our approach for better interpretability. In addition, since the semantic information is only implicitly modeled in this paper, we plan to incorporate more explicit information (e.g., POI information) in future work.
\section{Acknowledgments}
The work was supported in part by NSF awards \#1544455, \#1652525,
\#1618448, and \#1639150. The views and conclusions contained in
this paper are those of the authors and should not be interpreted
as representing any funding agencies.
\bibliographystyle{aaai}
\fontsize{9.0pt}{10.0pt} \selectfont
\section*{Introduction}
A $d \times n$ integer matrix $A=(\bm{a}_{1},\bm{a}_{2},\ldots ,\bm{a}_{n})$ is called a {\em configuration} if there exists a vector $\bm{c}\in \mathbb{R}^d$ such that for all $1\leq{i}\leq{n}$, the inner product $\bm{a}_{i} \cdot \bm{c}$ is equal to $1$.
Let $K$ be a field and let $K[\bm{x}]=K[x_{1},x_{2},\ldots ,x_{n}]$
be a polynomial ring in $n$ variables. For an integer vector $\bm{b}=(b_{1},b_{2},\ldots ,b_{d})\in \mathbb{Z}^d$,
we define the Laurent monomial $\bm{t^{b}}=t_{1}^{b_{1}}t_{2}^{b_{2}}\ldots t_{d}^{b_{d}} \in K[t_1^{\pm 1},t_2^{\pm 1},\ldots,t_d^{\pm 1}]$ and
$K[A]=K[\bm{t}^{\bm{a}_{1}},\bm{t}^{\bm{a}_{2}},\ldots ,\bm{t}^{\bm{a}_{n}}]$. Let $\pi$ be a homomorphism
$\pi:K[\bm{x}]\rightarrow K[A]$, where $\pi (x_{i})=\bm{t}^{\bm{a}_{i}}$. The kernel of $\pi$ is called
the {\em toric ideal} of $A$ and is denoted by $I_{A}$.
Commutative algebraists are interested in the following properties:
\begin{enumerate}
\item The toric ideal $I_{A}$ is generated by quadratic binomials;
\item The toric ring $K[A]$ is Koszul;
\item There exists a monomial order satisfying that a Gr\"{o}bner basis of $I_{A}$ consists of quadratic binomials.
\end{enumerate}
The implication $(3)\Rightarrow(2)\Rightarrow(1)$ is true, but
both $(1)\Rightarrow(2)$ and $(2)\Rightarrow(3)$ are false in general
(for example, see \cite{Hibi, OhsugiHibi1}).
Several classes of toric ideals with
lexicographic/reverse lexicographic quadratic Gr\"{o}bner bases are known (for example, see
\cite{AHT,Dari, OhsugiHibi2, OhsugiHibi3, OhsugiHibi4, Shibata}).
In contrast, in \cite{AokiOhsugiHibiTakemura1, AokiOhsugiHibiTakemura2, OhsugiHibi5},
sorting monomial orders (which are not necessarily lexicographic or reverse lexicographic) are used to construct a quadratic Gr\"{o}bner basis.
The monomial orders appearing in the theory of toric fiber products \cite{Sulli} constitute another example
that is not necessarily lexicographic or reverse lexicographic.
The following conjecture was presented by Hibi.
\begin{Conjecture}
\label{conj}
Suppose that the toric ideal $I_A$ has a quadratic Gr\"{o}bner basis.
Then $I_A$ has either a lexicographic or reverse lexicographic
quadratic Gr\"{o}bner basis.
\end{Conjecture}
In the present paper, we will present a cut ideal of a graph as a counterexample to this conjecture.
Now, we define the cut ideal of a graph.
Let $G$ be a finite connected simple graph with the vertex set $V(G)=\{1,2,\ldots ,m\}$ and the edge set $E(G)=\{e_{1},e_{2},\ldots ,e_{r}\}$.
Given a subset $C$ of $V(G)$,
we define a vector $\delta_{C}=(d_{1},d_{2},\ldots ,d_{r})\in \{0,1\}^r$ by
\begin{equation*}
d_{i}=\begin{cases}
1 & \text{$|C\cap e_{i}|=1$,}\\
0 & \text{otherwise.}
\end{cases}
\end{equation*}
We consider the configuration
\begin{equation*}
A_G=
{\displaystyle
\begin{pmatrix}
\delta_{C_{1}} && \delta_{C_{2}} && \cdots && \delta_{C_{N}}\\
\\
1 && 1 && \cdots && 1\\
\end{pmatrix},
}
\end{equation*}
where $\{\delta_{C} \ | \ C \subset V(G) \} = \{\delta_{C_{1}}, \delta_{C_{2}},\ldots,\delta_{C_{N}}\}$ and $N=2^{m-1}$.
The toric ideal of $A_G$ is called the {\em cut ideal} of $G$ and is denoted by $I_G$
(see \cite{Sturm} for details).
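To illustrate the definitions, consider a small example (ours): the triangle $G=C_3$ with vertices $\{1,2,3\}$ and edges $e_1=\{1,2\}$, $e_2=\{2,3\}$, $e_3=\{1,3\}$. Since $\delta_C=\delta_{V(G)\setminus C}$, there are $N=2^{3-1}=4$ distinct cut vectors,
$$
\delta_{\emptyset}=(0,0,0),\quad
\delta_{\{1\}}=(1,0,1),\quad
\delta_{\{2\}}=(1,1,0),\quad
\delta_{\{3\}}=(0,1,1),
$$
and hence
$$
A_{C_3}=
\begin{pmatrix}
0 & 1 & 1 & 0\\
0 & 0 & 1 & 1\\
0 & 1 & 0 & 1\\
1 & 1 & 1 & 1
\end{pmatrix}.
$$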
We introduce important known results on the quadratic Gr\"{o}bner bases of
cut ideals.
An edge {\it contraction} for a graph $G$ is an operation that merges two vertices joined by the edge $e$
after removing $e$ from $G$.
A graph $H$ is called a {\it minor} of the graph $G$ if $H$ is obtained by deleting some edges and vertices and contracting some edges. In this paper, $K_{n},K_{m,n}$, and $C_{n}$ stand for the complete graph with $n$ vertices, the complete bipartite graph
on the vertex set $\{1, 2, \ldots, m\} \cup \{m+1,m+2,\ldots, m+n\}$ and the cycle of length $n$, respectively.
\begin{Proposition}[\cite{Engst}]
\label{quadgene}
Let $G$ be a graph.
Then $I_G$ is generated by quadratic binomials
if and only if $G$ is free of $K_4$ minors.
\end{Proposition}
\begin{Proposition}[\cite{Shibata}]
Let $G$ be a graph.
Then $K[A_G]$ is strongly Koszul if and only if $G$
is free of $(K_4, C_5)$ minors.
In addition, if $K[A_G]$ is strongly Koszul, then $I_G$ has a
quadratic Gr\"{o}bner basis.
\end{Proposition}
Nagel and Petrovi\'{c} \cite[Proposition 3.2]{Petro2} claimed that
if $G$ is a cycle, then $I_{G}$ has a (lexicographic) quadratic Gr\"{o}bner basis.
However, \cite[Propositions 2 and 3]{Petro1}, which are used in the proof of
\cite[Proposition 3.2]{Petro2}, contain some errors.
We will explain this in Section 2.
In contrast, the following problem is open.
\begin{Problem}
Classify the graphs whose cut ideals have a
quadratic Gr\"{o}bner basis.
\end{Problem}
This paper comprises Sections $1$ and $2$.
In Section 1, we show some results concerning the existence of a
lexicographic/reverse lexicographic quadratic Gr\"{o}bner basis
of cut ideals.
Then, we give a graph whose cut ideal is a counterexample to Conjecture~\ref{conj}.
In Section 2, we study the cut ideal of a cycle.
First, we point out an error in the lexicographic quadratic Gr\"{o}bner basis of cut ideals of cycles given in
\cite[Proposition 3]{Petro1} (and introduced in \cite{Petro2}).
Finally, we construct a lexicographic quadratic Gr\"{o}bner basis of the cut ideal of a cycle of length $\le 7$.
\section{Lexicographic and reverse lexicographic Gr\"{o}bner bases}
In this section, we present necessary conditions for cut ideals to have a lexicographic/reverse lexicographic quadratic Gr\"{o}bner basis.
Using these results, we present a graph whose cut ideal is a counterexample to Conjecture~\ref{conj}.
First, we study reverse lexicographic quadratic Gr\"{o}bner bases of cut ideals.
The following was proved in \cite[Theorem~1.3]{Sturm}.
\begin{Proposition}
\label{compressed}
Let $G$ be a graph.
Then the following conditions are equivalent{\rm :}
\begin{itemize}
\item[{\rm (i)}]
The graph $G$ is free of $K_5$ minors and has no induced cycles of length $\ge 5${\rm ;}
\item[{\rm (ii)}]
Any reverse lexicographic initial ideal of $I_G$ is squarefree{\rm ;}
\item[{\rm (iii)}]
There exists a reverse lexicographic order such that
the initial ideal of $I_G$ is squarefree.
\end{itemize}
\end{Proposition}
Since $A_G$ is a $(0,1)$ matrix, Proposition~\ref{compressed} gives us the following proposition.
\begin{Proposition}
\label{inducedcycle}
Suppose that a graph $G$ has an induced cycle of length $\ge 5$.
Then $I_G$ has no reverse lexicographic quadratic Gr\"{o}bner bases.
\end{Proposition}
\begin{proof}
Suppose that $G$ has an induced cycle of length $\ge 5$ and
that $I_G$ has a reverse lexicographic quadratic Gr\"{o}bner basis.
Since $A_G$ is a $(0,1)$ matrix, there exist no nonzero binomials
of the form $x_i^2 - x_j x_k$ in $I_G$.
It therefore follows that the initial ideal is generated
by squarefree monomials.
However, since $G$ has an induced cycle of length $\ge 5$,
the initial ideal with respect to any reverse lexicographic order
is not squarefree by Proposition~\ref{compressed}.
This is a contradiction.
\end{proof}
Second, we study the lexicographic quadratic Gr\"{o}bner bases of cut ideals.
For the complete bipartite graph $K_{2,3}$ on the vertex set
$\{1,2\} \cup \{3,4,5\}$, the configuration associated with $K_{2,3}$ is
$$A_{K_{2,3}}=
\left(
\begin{array}{cccccccccccccccc}
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
\hline
0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1\\
0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1\\
\hline
0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1\\
0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1\\
\hline
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1
\end{array}
\right).
$$
We show that $I_{K_{2,3}}$ has no lexicographic quadratic Gr\"{o}bner bases.
\begin{Proposition}
\label{k23}
The cut ideal of the complete bipartite graph $K_{2,3}$ is generated by quadratic binomials and
has no lexicographic quadratic Gr\"{o}bner bases.
\end{Proposition}
\begin{proof}
Since $K_{2,3}$ is free of $K_4$ minors,
$I_{K_{2,3}}$ is generated by quadratic binomials according to Proposition~\ref{quadgene}.
Let $<$ be a lexicographic order on $K[{\bf x}]$.
Suppose that the initial ideal of $I_{K_{2,3}}$ with respect to $<$ is quadratic.
Let ${\mathcal M}$ be the set of all monomials in $K[{\bf x}]$
and let $$S =
\{u \in {\mathcal M} \ | \ \pi (u) = t_1 t_2 t_3 t_4 t_5 t_6 t_7^2 \}
.$$
Then we have
$$
S=
\{
x_{1} x_{16}, x_{2} x_{15}, x_{3} x_{14}, x_{4} x_{13},
x_{5} x_{12}, x_{6} x_{11}, x_{7} x_{10}, x_{8} x_{9}
\}.
$$
Since $A_{K_{2,3}}$ has a symmetry group that is transitive on its columns,
we may assume that $x_{1} x_{16}$ is the smallest monomial in $S$
with respect to $<$.
It then follows that $x_{1} x_{16} \notin {\rm in}_<(I_{K_{2,3}})$.
We now consider the following 8 cubic binomials of $I_{K_{2,3}}$:
$$
\begin{array}{ccccc}
f_1 &=& x_{6} x_{7} x_{9} &- & x_{1} x_{5} x_{16},\\
f_2 &=& x_{5} x_{8} x_{10} &- & x_{1} x_{6} x_{16},\\
f_3 &=& x_{5} x_{8} x_{11} &- & x_{1} x_{7} x_{16},\\
f_4 &=& x_{6} x_{7} x_{12} &- & x_{1} x_{8} x_{16},\\
f_5 &=& x_{5} x_{10} x_{11} &- & x_{1} x_{9} x_{16},\\
f_6 &=& x_{6} x_{9} x_{12} &- & x_{1} x_{10} x_{16},\\
f_7 &=& x_{7} x_{9} x_{12} &- & x_{1} x_{11} x_{16},\\
f_8 &=& x_{8} x_{10} x_{11} &- & x_{1} x_{12} x_{16}.
\end{array}
$$
It is easy to see that there exist no nonzero binomials in $I_{K_{2,3}}$
of the form $x_1 x_i - x_j x_k$, $x_i x_{16} - x_j x_k$
for any $i \in \{5,\ldots, 12\}$.
Thus $x_{1} x_{16}, x_1 x_i , x_i x_{16} \notin {\rm in}_<(I_{K_{2,3}})$
for each $i \in \{5,\ldots, 12\}$.
Since ${\rm in}_<(I_{K_{2,3}})$ is generated by quadratic monomials,
it follows that the initial monomial of each $f_i$ is the first monomial.
However,
since each $f_i$ belongs to $R=K[x_1, x_5,x_6, \ldots, x_{12}, x_{16}]$
and since each variable in $R$ appears in the second monomial of $f_j$ for some $j$,
this contradicts the claim that $<$ is a lexicographic order.
\end{proof}
\begin{Remark}
Shibata \cite{Shibata} showed that the cut ideal of the complete bipartite graph $K_{2,m}$
has a quadratic Gr\"{o}bner basis with respect to a reverse lexicographic order.
\end{Remark}
Let $A=(\bm{a}_{1},\bm{a}_{2},\ldots ,\bm{a}_{n})$ be a $d \times n$ configuration and
let $B=(\bm{a}_{i_1},\bm{a}_{i_2},\ldots ,\bm{a}_{i_m})$ be a submatrix of $A$.
Then $K[B]$ is called a {\em combinatorial pure subring} of $K[A]$
if there exists a vector $\bm{c}\in \mathbb{R}^d$ such that
$$
\bm{a}_i \cdot \bm{c}
\left\{
\begin{array}{cc}
=1 & i \in \{i_1,i_2,\ldots, i_m\}, \\
\\
<1 & \mbox{otherwise.}
\end{array}
\right.
$$
That is, $K[B]$ is a combinatorial pure subring of $K[A]$
if and only if there exists a face $F$ of the convex hull of $A$ such that
$\{\bm{a}_{1},\bm{a}_{2},\ldots ,\bm{a}_{n}\} \cap F =
\{\bm{a}_{i_1},\bm{a}_{i_2},\ldots ,\bm{a}_{i_m}\}$.
It is known that a combinatorial pure subring $K[B]$ inherits numerous properties
of $K[A]$
(see \cite{HOH}).
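The defining condition can be checked directly by dot products; the following is a toy sketch with a small hypothetical configuration (not one appearing in this paper):

```python
# Hypothetical 2-dimensional configuration: each tuple is a column a_i.
A = [(0, 1), (1, 1), (2, 1), (0, 0), (1, 0)]
c = (0, 1)  # candidate vector c defining a supporting hyperplane

dot = lambda a, b: sum(x * y for x, y in zip(a, b))
assert all(dot(a, c) <= 1 for a in A)        # a_i . c never exceeds 1 on A
face = [i for i, a in enumerate(A) if dot(a, c) == 1]
print(face)  # [0, 1, 2]
```

The columns with $\bm{a}_i \cdot \bm{c} = 1$ are exactly those lying on the face cut out by $\bm{c}$, and they span the combinatorial pure subring.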
In particular, we have the following:
\begin{Proposition}
\label{cp}
Suppose that $K[B]$ is a combinatorial pure subring of $K[A]$.
If $I_A$ has a lexicographic quadratic Gr\"{o}bner basis,
then so does $I_B$.
\end{Proposition}
Suppose that a graph $H$ is obtained by an edge contraction from a graph $G$;
then it is known from \cite[Lemma 3.2 (2)]{Sturm} that $K[A_H]$ is a combinatorial pure subring of $K[A_G]$.
Thus we have the following from Propositions~\ref{k23} and \ref{cp}.
\begin{Proposition}
\label{k23contraction}
Let $G$ be a graph.
Suppose that $K_{2,3}$ is obtained by a sequence of contractions from $G$.
Then $I_G$ has no lexicographic quadratic Gr\"{o}bner bases.
\end{Proposition}
Let $G$ be a graph with $6$ vertices and $7$ edges, as shown in Fig.~\ref{6-1}.
\begin{center}
\includegraphics[width=35mm, pagebox=cropbox, clip]{6-1.pdf}
\\
Figure 1. A counterexample to Conjecture~\ref{conj}.
\label{6-1}
\\
\end{center}
Then the configuration $A_{G}$ is
\medskip
\begin{center}
$\left( \begin{smallmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1\\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1\\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1\\
0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0\\
0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1\\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1
\end{smallmatrix} \right)$.
\end{center}
\vspace{5truept}
The toric ring $K[A_{G}]$ contains six combinatorial pure subrings which are isomorphic to $K[A_{K_{2,3}}]$.
By considering weight vectors such that the reduced Gr\"{o}bner basis of $I_{K_{2,3}}$
is quadratic, we found a weight vector $\bm{w} \in {\mathbb R}^{32}$ such that the reduced Gr\"{o}bner basis of $I_G$
is also quadratic.
Let $\bm{w}
=(25, 24, 24, 45, 46, 44, 37, 37, 47, 47, 63, 107, 47, 25, 24,$\\ $46, 36,
33, 20, 26, 102, 87, 80, 103, 92, 35, 25, 26, 53, 37, 22, 27)$.
The reduced Gr\"{o}bner basis of $I_G$ with respect to $\bm{w}$ is quadratic:\\
$\{-x_{20}x_{31}+x_{19}x_{32},-x_{15}x_{3}+x_{14}x_{2},x_{28}x_{20}-x_{27}x_{19},x_{27}x_{31}+x_{28}x_{32},x_{18}x_{31}-x_{30}x_{19},\\
x_{3}x_{32}-x_{8}x_{19},x_{3}x_{31}-x_{7}x_{19},x_{2}x_{19}-x_{18}x_{3},-x_{15}x_{19}+x_{18}x_{14},-x_{26}x_{15}+x_{27}x_{14},\\
x_{27}x_{3}-x_{26}x_{2},x_{1}x_{19}-x_{17}x_{3},-x_{17}x_{2}+x_{1}x_{18},x_{2}x_{31}-x_{30}x_{3},-x_{15}x_{31}+x_{30}x_{14},\\
-x_{30}x_{20}+x_{18}x_{32},x_{7}x_{27}-x_{28}x_{8},x_{7}x_{20}-x_{3}x_{32},-x_{8}x_{31}+x_{7}x_{32},x_{2}x_{31}-x_{6}x_{19},\\
x_{3}x_{20}-x_{4}x_{19},-x_{1}x_{31}+x_{5}x_{19},x_{4}x_{31}-x_{3}x_{32},x_{27}x_{19}-x_{18}x_{26},-x_{6}x_{3}+x_{7}x_{2},\\
-x_{7}x_{15}+x_{6}x_{14},-x_{6}x_{20}+x_{2}x_{32},x_{2}x_{32}-x_{8}x_{18},x_{2}x_{31}-x_{7}x_{18},-x_{5}x_{3}+x_{1}x_{7},\\
-x_{5}x_{2}+x_{1}x_{6},x_{27}x_{3}-x_{4}x_{28},x_{28}x_{15}-x_{16}x_{27},-x_{8}x_{20}+x_{4}x_{32},x_{27}x_{31}-x_{30}x_{26},\\
x_{1}x_{32}-x_{10}x_{27},x_{2}x_{32}-x_{9}x_{27},x_{16}x_{20}-x_{15}x_{19},x_{5}x_{20}-x_{1}x_{32},-x_{15}x_{3}+x_{13}x_{1},\\
-x_{10}x_{2}+x_{9}x_{1},-x_{15}x_{31}+x_{16}x_{32},x_{17}x_{31}-x_{29}x_{19},x_{1}x_{31}-x_{10}x_{28},x_{2}x_{31}-x_{9}x_{28},\\
-x_{1}x_{32}+x_{17}x_{8},x_{1}x_{31}-x_{17}x_{7},x_{6}x_{32}-x_{8}x_{30},x_{6}x_{31}-x_{7}x_{30},x_{1}x_{31}-x_{29}x_{3},\\
-x_{29}x_{2}+x_{1}x_{30},x_{30}x_{2}-x_{6}x_{18},x_{2}x_{20}-x_{4}x_{18},-x_{29}x_{20}+x_{17}x_{32},-x_{6}x_{26}+x_{7}x_{27},\\
x_{5}x_{18}-x_{1}x_{30},-x_{1}x_{30}+x_{17}x_{6},x_{28}x_{14}-x_{16}x_{26},x_{1}x_{20}-x_{17}x_{4},x_{2}x_{32}-x_{4}x_{30},\\
x_{3}x_{32}-x_{9}x_{26},x_{29}x_{1}-x_{17}x_{5},x_{8}x_{3}-x_{4}x_{7},-x_{11}x_{19}+x_{10}x_{18},x_{15}x_{19}-x_{13}x_{17},\\
x_{10}x_{18}-x_{9}x_{17},-x_{7}x_{15}+x_{16}x_{8},-x_{11}x_{31}+x_{10}x_{30},-x_{29}x_{18}+x_{17}x_{30},-x_{11}x_{3}+x_{10}x_{2},\\
-x_{10}x_{15}+x_{11}x_{14},-x_{1}x_{30}+x_{11}x_{28},x_{8}x_{2}-x_{4}x_{6},x_{5}x_{32}-x_{29}x_{8},x_{5}x_{31}-x_{29}x_{7},\\
x_{15}x_{3}-x_{16}x_{4},x_{1}x_{8}-x_{5}x_{4},x_{7}x_{15}-x_{13}x_{5},-x_{10}x_{6}+x_{9}x_{5},x_{9}x_{14}-x_{13}x_{10},\\
x_{5}x_{30}-x_{29}x_{6},-x_{1}x_{32}+x_{29}x_{4},-x_{1}x_{32}+x_{11}x_{26},x_{15}x_{31}-x_{13}x_{29},-x_{10}x_{30}+x_{9}x_{29},\\
-x_{11}x_{7}+x_{10}x_{6},-x_{23}x_{15}+x_{11}x_{27},-x_{1}x_{32}+x_{23}x_{14},x_{9}x_{15}-x_{11}x_{13},-x_{1}x_{32}+x_{22}x_{15},\\
x_{23}x_{3}-x_{22}x_{2},x_{22}x_{14}-x_{10}x_{26},-x_{23}x_{26}+x_{22}x_{27},-x_{25}x_{15}+x_{13}x_{27},-x_{25}x_{14}+x_{13}x_{26},\\
-x_{27}x_{3}+x_{25}x_{1},x_{23}x_{19}-x_{22}x_{18},-x_{23}x_{31}+x_{22}x_{30},-x_{1}x_{30}+x_{23}x_{16},x_{2}x_{32}-x_{21}x_{15},\\
x_{24}x_{15}-x_{1}x_{30},-x_{2}x_{32}+x_{23}x_{13},-x_{3}x_{32}+x_{21}x_{14},x_{23}x_{3}-x_{21}x_{1},-x_{24}x_{27}+x_{23}x_{28},\\
x_{27}x_{19}-x_{25}x_{17},x_{1}x_{31}-x_{24}x_{14},x_{24}x_{20}-x_{23}x_{19},x_{23}x_{31}-x_{24}x_{32},-x_{23}x_{7}+x_{22}x_{6},\\
-x_{12}x_{15}+x_{11}x_{16},-x_{12}x_{27}+x_{1}x_{30},-x_{12}x_{14}+x_{10}x_{16},x_{1}x_{31}-x_{22}x_{16},x_{12}x_{20}-x_{10}x_{18},\\
-x_{12}x_{32}+x_{10}x_{30},-x_{3}x_{32}+x_{22}x_{13},-x_{24}x_{26}+x_{22}x_{28},x_{13}x_{28}-x_{25}x_{16},-x_{7}x_{27}+x_{25}x_{5},\\
x_{23}x_{19}-x_{21}x_{17},x_{3}x_{32}-x_{25}x_{10},-x_{24}x_{8}+x_{23}x_{7},x_{1}x_{31}-x_{12}x_{26},-x_{12}x_{8}+x_{10}x_{6},\\
x_{27}x_{31}-x_{25}x_{29},-x_{23}x_{3}+x_{24}x_{4},x_{2}x_{31}-x_{21}x_{16},-x_{23}x_{7}+x_{21}x_{5},-x_{12}x_{28}+x_{24}x_{16},\\
-x_{25}x_{9}+x_{21}x_{13},-x_{21}x_{10}+x_{22}x_{9},x_{2}x_{31}-x_{24}x_{13},-x_{23}x_{10}+x_{22}x_{11},x_{10}x_{2}-x_{12}x_{4},\\
x_{9}x_{16}-x_{12}x_{13},-x_{23}x_{31}+x_{21}x_{29},x_{2}x_{32}-x_{25}x_{11},x_{23}x_{9}-x_{21}x_{11},x_{21}x_{27}-x_{25}x_{23},\\
x_{21}x_{26}-x_{25}x_{22},x_{24}x_{11}-x_{12}x_{23},x_{24}x_{10}-x_{12}x_{22},x_{21}x_{28}-x_{24}x_{25},x_{2}x_{31}-x_{12}x_{25},\\
x_{24}x_{9}-x_{12}x_{21}\}$\vspace{5truept} \\
The weight order defined by $\bm{w}$ is neither lexicographic nor reverse lexicographic.
In fact,
no monomial order for which the reduced Gr\"{o}bner basis of $I_{G}$ consists of quadratic binomials
is lexicographic or reverse lexicographic.
\begin{Theorem}
Let $G$ be the graph of Fig.~$\ref{6-1}$.
Then $I_G$ has quadratic Gr\"{o}bner bases,
none of which is lexicographic or reverse lexicographic.
In particular, $I_G$ is a counterexample to Conjecture~$\ref{conj}$.
\end{Theorem}
\begin{proof}
Since $G$ has an induced cycle of length 5,
$I_G$ has no reverse lexicographic quadratic Gr\"{o}bner bases by
Proposition~\ref{inducedcycle}.
Moreover, since $K_{2,3}$ is obtained by
contraction of an edge of $G$,
$I_G$ has no lexicographic quadratic Gr\"{o}bner bases by
Proposition~\ref{k23contraction}.
\end{proof}
\section{Squarefree Veronese subrings and cut ideals of cycles}
In this section, we point out an error in the proof of
\cite[Propositions 2 and 3]{Petro1}
for the cut ideal of the cycle and
present a lexicographic order for which
the reduced Gr\"{o}bner basis of the cut ideal of the cycle of length $7$ consists of quadratic binomials.
First, we explain an error in the proof of \cite[Propositions 2 and 3]{Petro1}.
For each $m$-dimensional ($0,1$) vector $(i_{1},i_{2},\ldots ,i_{m})$,
we associate a variable $q_{i_{1}i_{2}\cdots i_{m}}$.
Let $K[q_{i_{1}i_{2}\ldots i_{m}}\ |\ i_{1},i_{2},\ldots ,i_{m}\in \{0,1\}]$
and $K[a_{i_{j}}^{(j)}\ | \ i_{j}\in \{0,1\} , j=1,\ldots ,m+1]$ be
polynomial rings over $K$.
Let
$$\varphi_{m}:
K[q_{i_{1}i_{2}\ldots i_{m}}\ |\ i_{1},i_{2},\ldots ,i_{m}\in \{0,1\}]\rightarrow K[a_{i_{j}}^{(j)}\ | \ i_{j}\in \{0,1\} , j=1,\ldots ,m+1]$$
be a homomorphism such that
$\varphi _{m}(q_{i_{1}i_{2}\ldots i_{m}})=a_{i_{1}}^{(1)}a_{i_{2}}^{(2)}\ldots a_{i_{m}}^{(m)}a_{i_{1}+i_{2}+\cdots +i_{m} ({\rm mod} \ 2)}^{(m+1)}$ and let $I_{m}$ be the kernel of $\varphi_{m}$.
According to \cite{Petro2}, the ideal $I_{m}$ is the cut ideal of the cycle of length $m+1$.
Let $G_{m}$ be the set of all quadratic
binomials
$$q_{i_{1}i_{2}\cdots i_{m}}q_{j_{1}j_{2}\cdots j_{m}}-q_{k_{1}k_{2}\cdots k_{m}}q_{l_{1}l_{2}\cdots l_{m}}\in I_{m}$$
satisfying one of the
following properties:
\begin{enumerate}
\item For some $1\leq{a}\leq{m}$ and $j\in \{0,1\}$,
\begin{equation*}
i_{a}=j_{a}=j=k_{a}=l_{a}
\end{equation*}
and the binomial
$$q_{i_{1}\ldots i_{a-1}i_{a+1}\ldots i_{m}}q_{j_{1}\ldots j_{a-1}j_{a+1}\ldots j_{m}}-q_{k_{1}\ldots k_{a-1}k_{a+1}\ldots k_{m}}q_{l_{1}\ldots l_{a-1}l_{a+1}\ldots l_{m}}$$
belongs to $I_{m-1}$;
\item For each $1\leq{b}\leq{m}$,
\begin{equation*}
i_{b}+j_{b}=1=k_{b}+l_{b}
\end{equation*}
and the binomial
$$q_{i_{1}\ldots i_{b-1}i_{b+1}\ldots i_{m}}q_{j_{1}\ldots j_{b-1}j_{b+1}\ldots j_{m}}-q_{k_{1}\ldots k_{b-1}k_{b+1}\ldots k_{m}}q_{l_{1}\ldots l_{b-1}l_{b+1}\ldots l_{m}}$$
belongs to $I_{m-1}$.
\end{enumerate}
In \cite[Proposition 2]{Petro1},
$I_{m}$ is claimed to be generated by $G_{m}$ for any $m \ge 4$.
However, this is incorrect for $m \ge 5$.
Now, we consider the quadratic binomial
\begin{equation*}
q=q_{10101}q_{01010}-q_{11111}q_{00000}\in I_{5}
\end{equation*}
and the binomial $q'=q_{1101}q_{0010}-q_{1111}q_{0000}$.
Then $q'$ does not belong to $I_4$ and hence $q\notin G_{5}$.
Let
\begin{equation*}
P=\left \{q_{i_{1}i_{2}i_{3}i_{4}i_{5}}q_{j_{1}j_{2}j_{3}j_{4}j_{5}}\ \middle |
\begin{array}{l}
i_{k}+j_{k}=1,
i_{k},j_{k}\in \{0,1\} \mbox{ for }
1\leq{k}\leq{5}
\end{array} \right \}.
\end{equation*}
By a similar discussion, it follows that no binomial $q=u-v$ with $u,v\in P$ belongs to $G_{5}$. Thus, $I_5$ is not generated by $G_5$.
This is the error in the proof of \cite[Proposition 2]{Petro1}.
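The membership claims above can be verified mechanically from the definition of $\varphi_m$: a binomial $u-v$ lies in $I_m$ exactly when both monomials map to the same product of the variables $a^{(j)}_{i_j}$. A small computational sketch (illustrative, not part of the original argument):

```python
from collections import Counter

def phi(bits):
    # Multiset of variables a^{(j)}_{i_j} in the image of q_{i_1...i_m}
    # under the monomial map phi_m defining the cut ideal of the (m+1)-cycle:
    # positions 1..m contribute a^{(j)}_{i_j}, and position m+1 the parity.
    m = len(bits)
    c = Counter((j + 1, b) for j, b in enumerate(bits))
    c[(m + 1, sum(bits) % 2)] += 1
    return c

def in_ideal(mono1, mono2):
    # u - v lies in the kernel I_m iff phi(u) = phi(v) as multisets.
    return sum((phi(b) for b in mono1), Counter()) == \
           sum((phi(b) for b in mono2), Counter())

# q = q_10101 q_01010 - q_11111 q_00000 lies in I_5 ...
print(in_ideal([(1, 0, 1, 0, 1), (0, 1, 0, 1, 0)],
               [(1, 1, 1, 1, 1), (0, 0, 0, 0, 0)]))   # True
# ... but dropping the second coordinate gives
# q' = q_1101 q_0010 - q_1111 q_0000, which is NOT in I_4:
print(in_ideal([(1, 1, 0, 1), (0, 0, 1, 0)],
               [(1, 1, 1, 1), (0, 0, 0, 0)]))         # False
```

The failure for $q'$ comes from the parity coordinate: both monomials of $q'$ have odd digit sums on one side and even on the other, so the factors $a^{(5)}_{i}$ disagree.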
Because of this error, an ideal that is strictly smaller than $I_m$ is considered in place of $I_m$ in
the proof of \cite[Proposition 3]{Petro1}.
Unfortunately, the reduced Gr\"{o}bner basis of $I_m$ with respect to
a lexicographic order considered in \cite[Proposition 3]{Petro1} is not quadratic for $m \ge 5$.
Table 1 shows, by degree, the number of binomials in the reduced Gr\"{o}bner basis of $I_5$ with respect to the lexicographic order of \cite{Petro1}.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|}\hline
degree & the number of binomials \\ \hline
2 & 195 \\
3 & 10 \\
4 & 2 \\ \hline
\end{tabular}
\caption{The number of binomials in the reduced Gr\"{o}bner basis of $I_5$.}
\end{center}
\end{table}
Thus the existence of a quadratic Gr\"{o}bner basis of the cut ideal of a cycle
is now an open problem.
However, we will show that there exists a lexicographic order such that
the reduced Gr\"{o}bner basis of the cut ideal of a cycle of length $7$
consists of quadratic binomials.
Let $G$ be the cycle of length 7.
Then we have
\begin{center}
{$A_{G}=
\begin{pmatrix}
\bm{0} & A & B & C \\
1 & \cdots & \cdots & \cdots\ 1
\end{pmatrix}
$},
\end{center}
where
\begin{center}
{$A=
\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1
\end{pmatrix}
$},
\end{center}
\vspace{5truept}
\begin{center}
{$B=\left(
\begin{smallmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 \\
1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 1 \\
0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 1 \\
0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 1 \\
0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 1
\end{smallmatrix}\right)
$},
\end{center}
\vspace{5truept}
\begin{center}
{$C=
\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 0 \\
1 & 1 & 1 & 1 & 1 & 0 & 1 \\
1 & 1 & 1 & 1 & 0 & 1 & 1 \\
1 & 1 & 1 & 0 & 1 & 1 & 1 \\
1 & 1 & 0 & 1 & 1 & 1 & 1 \\
1 & 0 & 1 & 1 & 1 & 1 & 1 \\
0 & 1 & 1 & 1 & 1 & 1 & 1
\end{pmatrix}$}.
\end{center}
The matrix $A$ is a configuration of the $(7,2)$-squarefree Veronese subring, and $B$ is a configuration of the $(7,4)$-squarefree Veronese subring.
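As a quick sanity check (not part of the argument), the columns of the $(7,2)$-squarefree Veronese configuration can be enumerated directly:

```python
from itertools import combinations

d, r = 7, 2
# Columns of the (d, r)-squarefree Veronese configuration:
# all 0/1 vectors of length d with exactly r ones.
cols = [tuple(1 if i in S else 0 for i in range(d))
        for S in combinations(range(d), r)]
print(len(cols))  # 21 = C(7, 2), matching the 21 columns of A
```

Replacing $r=2$ by $r=4$ gives $\binom{7}{4}=35$ columns, matching the matrix $B$.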
According to \cite[Theorem~$1.4$]{OhsugiHibi?},
there is a lexicographic order such that the reduced Gr\"{o}bner basis of the toric ideal of the $(d,2)$-squarefree Veronese subring
consists of quadratic binomials for any integer $d \ge 2$.
However, it is not known whether there is a lexicographic order such that
the reduced Gr\"{o}bner basis of $I_{B}$ consists of quadratic binomials. Now, we consider the following question:
\begin{Question}
If we use lexicographic orders such that the reduced Gr\"{o}bner bases of $I_A$ and $I_B$ consist of quadratic binomials,
do we obtain a lexicographic order such that the reduced Gr\"{o}bner basis of $I_{A_{G}}$ consists of quadratic binomials?
\end{Question}
To answer this question, we look for a lexicographic order $>_{1}$ such that the reduced Gr\"{o}bner basis of $I_{B}\subset K[y_{1},y_{2},\ldots ,y_{35}]$ consists of quadratic binomials.
For $i = 1,2,\ldots, 7$, let $B_i$ be the subconfiguration of $B$ consisting of all column vectors of $B$ whose $i$-th component is one.
We consider combining lexicographic orders such that the reduced Gr\"{o}bner bases of $I_{B_i}$ consist of quadratic binomials.
We write down the lexicographic order $>_{1}$:\vspace{5truept} \\
$y_{1}>y_{2}>y_{4}>y_{3}>y_{5}>y_{7}>y_{6}>y_{10}>y_{9}>y_{8}>y_{11}>y_{13}>y_{12}>y_{16}>y_{15}>y_{14}>y_{20}>
y_{19}>y_{18}>y_{17}>y_{21}>y_{23}>y_{22}>y_{26}>y_{25}>y_{24}>y_{30}>y_{29}>y_{28}>y_{27}>y_{35}>y_{34}>y_{33}>
y_{32}>y_{31}$.\vspace{5truept} \\
Next, we consider combining two lexicographic orders such that the reduced Gr\"{o}bner bases of $I_{A}$ and $I_{B}$ consist
of quadratic binomials.
We fix the order\\
$x_{23}>x_{24}>x_{26}>x_{25}>x_{27}>x_{29}>x_{28}>x_{32}>x_{31}>
x_{30}>x_{33}>x_{35}>x_{34}>x_{38}>x_{37}>x_{36}>x_{42}>x_{41}>x_{40}>x_{39}>x_{43}>x_{45}>x_{44}>x_{48}>x_{47}>x_{46}>
x_{52}>x_{51}>x_{50}>x_{49}>x_{57}>x_{56}>x_{55}>x_{54}>x_{53}$\\
which corresponds to the lexicographic order $>_{1}$
and look for the order such that
the reduced Gr\"{o}bner basis of $I_{A_{G}}$ consists of quadratic binomials
by modifying the order for $I_A$ using computational experiments.
A desired lexicographic order is\\
$x_{1}>x_{17}>x_{18}>x_{19}>x_{22}>x_{20}>x_{21}>x_{13}>x_{14}>x_{15}>x_{16}>x_{2}>x_{3}>x_{4}>x_{5}>x_{6}>x_{7}>
x_{8}>x_{9}>x_{10}>x_{11}>x_{12}>x_{23}>x_{24}>x_{26}>x_{25}>x_{27}>x_{29}>x_{28}>x_{32}>x_{31}>x_{30}>
x_{33}>x_{35}>x_{34}>x_{38}>x_{37}>x_{36}>x_{42}>x_{41}>x_{40}>x_{39}>x_{43}>x_{45}>x_{44}>x_{48}>x_{47}>x_{46}>
x_{52}>x_{51}>x_{50}>x_{49}>x_{57}>x_{56}>x_{55}>x_{54}>x_{53}>x_{58}>x_{59}>x_{60}>x_{61}>x_{62}>x_{63}>x_{64}$.\vspace{5truept} \\
The reduced Gr\"{o}bner basis of $I_{G}$ consists of $1050$ quadratic binomials.
Note that any cycle of length $\le 6$ is obtained by a sequence of contractions
from $G$.
Thus, we have the following.
\begin{Theorem}
Let $G$ be a cycle of length $\le 7$.
Then $I_G$ has a lexicographic quadratic Gr\"{o}bner basis.
\end{Theorem}
\section{Introduction}
\label{intro}
Survival analysis models the time it takes for death and other long-term events to occur, focusing on the distribution of survival times. Survival modeling examines the relationship between survival and one or more predictors, usually called $covariates$ in the survival-analysis literature. The standard modeled event is $death$, from which the name $survival$ $analysis$ and much of its terminology derives, but the scope and applications of survival analysis are much broader. Similar methods are used in other disciplines with different outcomes of interest: operating time of a machine to measure $reliability$, $event$-$history$ $analysis$ of marriage, divorce, and unemployment in sociology, and duration of contracts in actuarial sciences (survival time $T$ from the execution until the cancellation or completion of a contract).
The semi-parametric approach is one of three approaches found in survival analysis. It is an intermediate method between the parametric and non-parametric approaches. In the semi-parametric approach, the real probability distributions of observations are assumed to belong to a class of laws dependent upon parameters, while other parts are written as non-parametric functions. This approach is commonly used in survival data analysis (Cox $1972$; Cox \& Oakes $1984$).
By using the Cox regression model, we specifically aim to model the impact of predictors on the hazard function, which characterizes, for an individual $j$, the probability of dying or experiencing a particular outcome within a short interval of time, given that the individual has survived or has not experienced the outcome previously. It is useful for identifying the risk factors of a disease, comparing treatments, and estimating the probability of occurrence of an event such as death or relapse in a given identified individual with a vector of explanatory variables. Many extended versions of the Cox regression model have been implemented to take into account clustered data or groups within which the failure times may be correlated (Martinussen \& Scheike $2006$). These groups may represent such distinct entities as members of the same family, patients in the same hospital, or organs within an individual. These groups may also represent repeated timed observations in the same individual, including recurring symptoms of certain diseases or multiple relapses.
Grouping structures arise naturally in many statistical modeling problems. As addressed by Ma et al. ($2007$), complex diseases such as cancer are often caused by mutations in pathways involving multiple genes; therefore, it would be preferable to select groups of related genes together rather than individual genes separately if they operate on the same causal pathway.
In linear regression, many variable selection techniques have traditionally been used. Three examples are best subset and forward and backward stepwise selection, which produce a sparse model. Best subset regression finds for each $k\in \lbrace 1,..., p\rbrace$ the subsets of size $k$ that gives the smallest residual sum of squares. The question of how to choose $k$ involves the trade-off between bias and variance, along with the more subjective desire for parsimony. There are a number of criteria that one may use; typically, we choose the smallest model that minimizes an estimate of the expected prediction error.
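The brute-force search just described can be sketched in a few lines (the data here are synthetic, for illustration only):

```python
import itertools
import numpy as np

def best_subset(X, y, k):
    """Brute-force best-subset sketch: examine every size-k column subset
    and keep the one with the smallest residual sum of squares.
    The C(p, k) search explodes for large p."""
    best_cols, best_rss = None, float("inf")
    for cols in itertools.combinations(range(X.shape[1]), k):
        beta, _, _, _ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        rss = float(np.sum((y - X[:, cols] @ beta) ** 2))
        if rss < best_rss:
            best_cols, best_rss = cols, rss
    return best_cols, best_rss

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 6))
y = X[:, 0] - 2.0 * X[:, 2]          # noise-free: true model uses columns 0 and 2
sel, sel_rss = best_subset(X, y, 2)
print(sel)  # (0, 2)
```

Even in this toy setting the search visits all $\binom{p}{k}$ subsets, which illustrates why the method becomes infeasible as $p$ grows.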
However, this technique is often unsatisfactory for two reasons: 1) the number of "all possible subsets" grows exponentially with the number of predictors ($p$), so when the number of predictors ($p$) is large, searching all possible subsets is computationally intensive and inefficient; 2) subset selection is discontinuous, implying that an infinitesimally small change in the data can result in completely different estimates. This causes the subset selection method to be unstable and highly variable, especially in higher dimensions (Breiman $1995$; Fan \& Li $2001$).
Rather than search through all possible subsets (which becomes infeasible for $p$ much larger than $40$), we can seek a guided path through them. $Forward-stepwise$ $selection$ starts with the intercept and then sequentially adds into the model the predictor that most improves the fit. Forward-stepwise selection is a $greedy$ $algorithm$, producing a nested sequence of models. In this sense it might seem suboptimal compared to best subset selection, but there are a few reasons why it might be preferred. First, computationally: for large $p$ we cannot compute the best subset sequence, but we can always compute the forward-stepwise sequence (even when the number of predictors $p$ is greater than the sample size $n$).
Second, statistically: a price is paid in variance for selecting the best subset of each size; forward-stepwise selection is a more constrained search and will have lower variance but perhaps more bias.
$Backward-stepwise$ $selection$ starts with the full model and sequentially deletes the predictor that has the least impact on the fit. The candidate variable for dropping is the one with the smallest Z-score. Backward selection can only be used when the sample size $n$ is greater than the number of predictors $p$, while forward stepwise can always be used (Hastie et al. 2009). While useful in many contexts, stepwise techniques (forward and backward) for variable selection are still unsatisfactory in certain situations (Greenland 2008).
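A minimal sketch of the greedy forward-stepwise search, usable even when $p > n$ (synthetic data, illustration only):

```python
import numpy as np

def forward_stepwise(X, y, k):
    """Greedy forward-selection sketch: at each step add the predictor that
    most reduces the residual sum of squares."""
    def rss(cols):
        beta, _, _, _ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        return float(np.sum((y - X[:, cols] @ beta) ** 2))
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        j_best = min(remaining, key=lambda j: rss(selected + [j]))
        selected.append(j_best)
        remaining.remove(j_best)
    return selected

rng = np.random.default_rng(2)
X = rng.normal(size=(10, 20))        # p = 20 > n = 10: best subset is infeasible here
y = 3.0 * X[:, 5]                    # noise-free: column 5 carries the signal
print(forward_stepwise(X, y, 1))  # [5]
```

The nested sequence of models is produced one predictor at a time, which is the "guided path" mentioned above.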
Penalized regression techniques have been proposed to accomplish the same goals as the best subset selection and forward- and backward- stepwise selection but in a more stable, continuous, and computationally efficient fashion. These techniques include a $L_{1}$ absolute value "Least Absolute Shrinkage and Selection Operator" ("LASSO") penalty (Tibshirani $1996$, $1997$), and a $L_{2}$ quadratic ("ridge") penalty (Hoerl \& Kennard $1970$; Le Cessie \& van Houwelingen $1992$; Verweij \& Van Houwelingen $1994$). $L_{1}$ and $L_{2}$ penalized estimation methods shrink the estimates of the regression coefficients towards zero relative to the maximum likelihood estimates. The purpose of this shrinkage is to prevent overfitting due to either collinearity of the covariates or high dimensionality. Although both methods are shrinkage oriented, the effects of $L_{1}$ and $L_{2}$ penalization are quite different in practice. Applying a $L_{2}$ penalty tends to result in all small but non-zero regression coefficients. As a continuous shrinkage method, if there is high correlation between predictors, ridge regression achieves better predictive performance through a bias-variance trade-off that favors ridge over LASSO (Tibshirani $ 1996 $). However, ridge regression cannot produce a parsimonious model, as it produces coefficient values for each of the predictor variables. Applying a $L_{1}$ penalty tends to result in many regression coefficients shrunk exactly to zero and a few other regression coefficients with comparatively little shrinkage. Consequently, LASSO has become more popular due to its sparse output.
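The contrast between the two penalties is easiest to see under an orthonormal design, where both estimators have standard closed forms (a textbook fact, not specific to this paper):

```python
import numpy as np

def ridge_shrink(b, lam):
    # Ridge with an orthonormal design: proportional shrinkage,
    # coefficients never become exactly zero.
    return b / (1.0 + lam)

def lasso_shrink(b, lam):
    # LASSO with an orthonormal design: soft-thresholding,
    # small coefficients are set exactly to zero.
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

b = np.array([3.0, 0.4, -0.1])
print(ridge_shrink(b, 1.0))   # [ 1.5   0.2  -0.05] -- all coefficients survive
print(lasso_shrink(b, 0.5))   # [ 2.5   0.   -0.  ] -- small ones are zeroed
```

This is the mechanism behind the sparsity of LASSO output and the dense output of ridge regression described above.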
The $L_{1}$ penalty has been applied to other models including Cox regression (Tibshirani $1997$) and logistic regression (Lokhorst $1999$; Roth $2004$; Genkin et al. $2007$). Even though LASSO has been successfully utilized in many situations, its popularity and applications are still limited.
In the $p>n$ case, LASSO selects at most $n$ variables before it saturates because of the nature of the convex optimization problem. Moreover, LASSO is not well defined unless the bound on the $L_{1}$ norm of the coefficients is smaller than a certain value (Zou \& Hastie $2005$).
When predictors are categorical, the LASSO solution is not satisfactory, as it only selects individual dummy variables instead of whole factors and depends on how the dummy variables are coded (Meier et al. $2008$). This process results in models that are dependent upon how categories are defined and may produce findings that are artifacts of this arbitrary nature and use of breakpoints.
The group LASSO method is an extension of this popular model selection and shrinkage estimation $L_{1}$ penalty technique to address the problem of variable selection in high dimensions (i.e., the number of regressors $p$ is greater than the number of observations $n$). Group LASSO (Bakin $1999$; Cai $2001$; Antoniadis \& Fan $2001$; Yuan \& Lin $2006$; Meier et al. $2008$) handles these problems by extending the LASSO penalty to cover group variable structures.
Estimating coefficients in group LASSO is slightly different from standard LASSO because the constraints are now applied to each grouping of variables. In regular LASSO it is possible to have a different constraint for each coefficient. Group LASSO removes a set of explanatory variables in the model by shrinking its corresponding parameter to zero and keeping a subset of significant variables upon which the hazard function depends. As can be noticed, group LASSO penalizes each factor in much the same manner as the usual LASSO. In other words, the same tuning parameter $\lambda$ is used for each factor without assessing its relative importance. In a typical linear regression setting, it has been shown that such an excessive penalty applied to the relevant variables can degrade the estimation efficiency (Fan \& Li $2001$) and affect the selection consistency (Leng et al. $2006$; Yuan \& Lin $2006$; Zou $2006$). Therefore, it can reasonably be expected that group LASSO suffers the same drawback. For linear regression problems, Wang \& Leng ($2008$) proposed adaptive group LASSO, which allows for unique tuning parameter values to be used for separate factors. Such flexibility in turn produces different amounts of shrinkage for different factors. Intuitively, if a relatively large amount of shrinkage is applied to the zero coefficients and a relatively small amount is used for the nonzero coefficients, an estimator with better efficiency can be obtained.
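The groupwise shrinkage described here can be sketched via block soft-thresholding, the proximal operator of the (unweighted) group-LASSO penalty; a minimal illustration:

```python
import numpy as np

def group_soft_threshold(beta, groups, lam):
    """Block soft-thresholding: the proximal operator of the group-LASSO
    penalty lam * sum_g ||beta_g||_2 (per-group weights omitted for brevity)."""
    out = np.zeros_like(beta)
    for g in groups:
        norm = np.linalg.norm(beta[g])
        if norm > lam:
            # Shrink the whole group toward zero by a common factor.
            out[g] = beta[g] * (1.0 - lam / norm)
        # Groups with norm <= lam are removed entirely (set to zero).
    return out

beta = np.array([3.0, 4.0, 0.3, 0.4])
shrunk = group_soft_threshold(beta, [[0, 1], [2, 3]], 1.0)
print(shrunk)  # [2.4  3.2  0.   0. ] -- the weak group is dropped as a whole
```

Coefficients within a group are kept or removed together, which is exactly the factor-level selection behavior discussed above.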
In the classic semi-parametric Cox model, the study population is implicitly assumed to be homogeneous, meaning all individuals have the same risk of death. This assumption rarely holds true. Individuals within a group may possess a non-observed susceptibility to death from differential genetic predisposition to certain diseases or have common environmental exposures that influence time to the studied event. Another standard assumption in the analysis of survival data is that the individuals under observation are independent. This assumption may be violated in many cases. We may observe a relationship among individuals of the same group when they share unobserved risk factors. Typical groups sharing some risk factors include families, villages, hospitals, and repeated measurements on one individual. A simple model for dependent survival times that is a generalization of the proportional hazard model can be implemented using the concept of $frailty$, first proposed by Vaupel et al. ($1979$).
The frailty distributions that have been studied mostly belong to the power variance function family, a particular set of distributions introduced first by Tweedie ($1984$) and later independently studied by Hougaard ($1986$). The gamma, inverse Gaussian, positive stable, and compound Poisson distributions are all members of this group. Generally, the gamma distribution is used to model frailty, mostly for mathematical convenience. It has been demonstrated that its Laplace transform is a useful mathematical tool for several measures of dependence, and the $n^{th}$ derivative of its Laplace transform has a simple notation. To control the hidden heterogeneity and/or dependence among individuals with a group-related ``frailty'', we introduce into our model a random variable that follows a gamma distribution. In frailty modeling, the gamma distribution is typically parametrized with one parameter being used simultaneously for both shape and scale.
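As an illustration of this one-parameter gamma parametrization (shape and rate both equal to $\alpha$, so the frailty has unit mean and variance $1/\alpha$), the following sketch simulates clustered survival times; the exponential baseline hazard and covariate choices are hypothetical, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_frailty_data(n_clusters, cluster_size, beta, alpha):
    """Toy simulation of clustered survival times: exponential baseline
    hazard h_0 = 1 and a shared gamma frailty with shape = rate = alpha,
    so E[u] = 1 and Var[u] = 1/alpha."""
    times, cluster = [], []
    for i in range(n_clusters):
        u = rng.gamma(shape=alpha, scale=1.0 / alpha)   # shared within cluster i
        for _ in range(cluster_size):
            x = rng.normal()
            rate = u * np.exp(beta * x)                 # hazard u * exp(beta * x)
            times.append(rng.exponential(1.0 / rate))
            cluster.append(i)
    return np.array(times), np.array(cluster)

t, cid = simulate_frailty_data(n_clusters=200, cluster_size=5,
                               beta=0.5, alpha=2.0)
```

Individuals in the same cluster share one draw of $u$, which induces the within-group dependence that the frailty term is meant to capture.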
In this context, Fan \& Li ($2002$) proposed LASSO for the Cox proportional hazard frailty model. In this paper, we further improve this procedure by extending it to group LASSO for the Cox proportional hazard frailty model for censored survival times in high dimensions. Like classic LASSO, group LASSO shrinks and selects important predictors, taking into account group structure and known linkages between predictor variables that are supplied in the model. Additionally, allowance is made for a group-level frailty, previously described, that may be related to unmeasured but suspected background vulnerability or resilience to a particular disease outcome. This model algorithm, using group LASSO with the Cox proportional hazard frailty model, is most applicable in situations with the aforementioned characteristics. In this paper, we will provide a simulated situation and dataset that demonstrates how this method may be used.
\section{Methods}
\label{MaM}
\subsection{Model set-up}
Suppose that there are $n$ clusters and that the $i^{th}$ cluster has $J_i$ individuals and is associated with an unobserved shared frailty $u_i$ $(1\leq i\leq n)$. A vector $X_{ij}$ $(1\leq i \leq n, 1 \leq j \leq J_i)$ is associated with the survival time $T_{ij}$ of the $j^{th}$ individual in the $i^{th}$ cluster. Assume that we have independent and identically distributed survival data for a subject $j$ in the $i^{th}$ cluster: $(Z_{ij},\delta_{ij}, X_{ij}, u_i)$, with $\delta_{ij}= \mathds{1}_{\lbrace T_{ij}\leq C_{ij} \rbrace}$ the censoring status indicator, $C_{ij}$ the censoring time, and $Z_{ij} =\min(T_{ij}, C_{ij})$ the observed time for the individual $j$ of the cluster $i$. The corresponding likelihood function with a shared gamma frailty is given by:
\begin{equation}\label{eq:0}
L_n(\beta,H,\alpha)=\prod_{i=1}^{n} \prod_{j=1}^{J_i}\Bigg\{ h_{ij}\Big( Z_{ij}| u_i,X_{ij}\Big) ^{\delta_{ij}}S_{ij}\Big( Z_{ij}| u_i,X_{ij}\Big) \Bigg\} \prod_{i=1}^{n}g(u_i)
\end{equation}
with $S(t)=\exp(-H_0(t))$ a conditional survival function, $h(t| X,u)$ a conditional hazard function
of $T$ given $X$ and $u$, and $$ g(u)=\frac{\alpha^\alpha u^{\alpha-1}\exp (-\alpha u)}{\Gamma(\alpha)}$$ the density function of a gamma frailty $u$ with unit mean.
Consider the Cox proportional hazard with frailty model:
\begin{equation}\label{eq:1}
h_{ij}(t| X_{ij}, u_{i}) = h_{0}(t)u_{i}\exp(\beta^{\top}X_{ij})
\end{equation}
with $h_0(t)$ the baseline hazard function, $\beta$ the parameter vector of interest, and $H_0(t)=\int^t_0 h_0(\mu)\,d\mu$ the cumulative baseline hazard function. Then (\ref{eq:0}) becomes:
\begin{equation}\label{eq:2}
\prod_{i=1}^{n} \prod_{j=1}^{J_i} h_0(Z_{ij})^{\delta_{ij}}\exp(\beta^{\top}X_{ij})u_i^{\delta_{ij}}\exp \lbrace-H_0(Z_{ij})\exp(\beta^{\top}X_{ij})u_i\rbrace \prod_{i=1}^{n}g(u_i).
\end{equation}
The likelihood of the observed data is obtained by integrating (\ref{eq:2}) with respect to $u_1,...,u_n$.\\
$$\int_{u_1}\dots\int_{u_n}\prod_{i=1}^{n} \prod_{j=1}^{J_i}\Big\{ \big[h_0(Z_{ij})\exp(\beta^{\top}X_{ij})\big]^{\delta_{ij}}u_i^{\delta_{ij}}\exp \Big [-H_0(Z_{ij})\exp(\beta^{\top}X_{ij})u_i\Big ] \Big \}\prod_{i=1}^{n}g(u_i)\,du_n\dots\,du_1$$\\
$$=\prod_{i=1}^{n} \prod_{j=1}^{J_i} \big[h_0(Z_{ij})\exp(\beta^{\top}X_{ij})\big]^{\delta_{ij}}\times\underbrace{\int_{u_1}\dots\int_{u_n}\prod_{i=1}^{n}\Big \{ \prod_{j=1}^{J_i}u_i^{\delta_{ij}}\exp\Big [-H_0(Z_{ij})\exp(\beta^{\top}X_{ij})u_i \Big ]\Big \}\prod_{i=1}^{n}g(u_i)\,du_n\dots\,du_1}.$$\\
Let $A = \int_{u_1}\dots\int_{u_n}\Big \{ \prod_{i=1}^{n}u_i^{\sum_{j=1}^{J_i}\delta_{ij}}\exp \Big[-\sum_{j=1}^{J_i}H_0(Z_{ij})\exp(\beta^{\top}X_{ij})u_i\Big]\Big \} \prod_{i=1}^{n}g(u_i)\,du_n\dots\,du_1$\\
$=\int_{u_1}u_1 ^{A_1}\exp\Big[-\sum_{j=1}^{J_1}H_0(Z_{1j})\exp(\beta^{\top}X_{1j})u_1\Big]\frac{1}{\Gamma(\alpha)}\alpha^\alpha u_1^{\alpha-1}\exp(-\alpha u_1)\,du_1 \times \dots $, where the factors for $ i=2,\dots,n$ have the same form and $ A_i=\sum_{j=1}^{J_i}\delta_{ij}$.\\
\begin{equation}\label{eq}
L_n(\beta,\alpha,H_0)=\prod_{i=1}^{n} \prod_{j=1}^{J_i}\big[h_0(Z_{ij})\exp(\beta^{\top}X_{ij})\big]^{\delta_{ij}}\prod_{i=1}^n\frac{\alpha^\alpha}{\Gamma(\alpha)}\underbrace{\int_{u_i}u_i ^{(A_i+\alpha)-1}\exp\Big\{-\Big[\sum_{j=1}^{J_i}H_0(Z_{ij})\exp(\beta^{\top}X_{ij})+\alpha\Big] u_i\Big\}\,du_i}
\end{equation}
Evaluating the gamma integral,\\
$$L_n(\beta,\alpha,H_0)=\prod_{i=1}^{n} \prod_{j=1}^{J_i}\big[h_0(Z_{ij})\exp(\beta^{\top}X_{ij})\big]^{\delta_{ij}}\prod_{i=1}^n\frac{\alpha^\alpha}{\Gamma(\alpha)}\frac{\Gamma(A_i+\alpha)}{\Big[\sum_{j=1}^{J_i}H_0(Z_{ij})\exp(\beta^{\top}X_{ij})+\alpha\Big]^{A_i+\alpha}}$$ where $ A_i=\sum_{j=1}^{J_i}\delta_{ij}$, so that
\begin{equation}\label{eq:3}
L_n(\beta,\alpha,H_0)=\prod_{i=1}^n \frac{\alpha^\alpha\prod_{j=1}^{J_i}\big[h_0(Z_{ij})\exp(\beta^{\top}X_{ij})\big]^{\delta_{ij}}}{\Gamma(\alpha)\Big[\sum_{j=1}^{J_i}H_0(Z_{ij})\exp(\beta^{\top}X_{ij})+\alpha\Big]^{A_i+\alpha}}\Gamma(A_i+\alpha)
\end{equation}\\
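As a quick numerical sanity check of the marginalization step above, the gamma integral identity $\int_0^\infty u^{A_i+\alpha-1}e^{-cu}\,du=\Gamma(A_i+\alpha)/c^{A_i+\alpha}$, with $c=\sum_{j}H_0(Z_{ij})\exp(\beta^\top X_{ij})+\alpha$, can be verified directly; the parameter values in the following sketch are illustrative only:

```python
import math

# Numerically check the gamma-integral identity used to marginalize the frailty:
#   \int_0^\infty u^(A+alpha-1) exp(-c u) du = Gamma(A+alpha) / c^(A+alpha),
# where c = sum_j H_0(Z_ij) exp(beta'X_ij) + alpha.  Values below are illustrative.
A, alpha, c = 3, 1.5, 2.7

def integrand(u):
    return u ** (A + alpha - 1) * math.exp(-c * u)

# trapezoidal rule on [0, 60]; the integrand is negligible beyond this range
n_steps, upper = 200_000, 60.0
h = upper / n_steps
numeric = h * (sum(integrand(i * h) for i in range(1, n_steps)) + 0.5 * integrand(upper))
closed_form = math.gamma(A + alpha) / c ** (A + alpha)
assert abs(numeric - closed_form) < 1e-6
```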
The logarithm of the likelihood in (\ref{eq:3}) is given by
\begin{equation}\label{equ:4}
\begin{split}
\ell_n(\beta, \alpha, H_0)&=\sum_{i=1}^n\Bigg\{ \alpha\log\alpha+\sum_{j=1}^{J_i}\big[\beta^{\top}X_{ij}\delta_{ij}+\delta_{ij}\log h_0(Z_{ij})\big]+\log \Gamma(A_i+\alpha)-\log\Gamma(\alpha)\\
& -(A_i+\alpha)\log\big[\sum_{j=1}^{J_i}H_0(Z_{ij})\exp(\beta^{\top}X_{ij})+\alpha\big]\Bigg\}
\end{split}
\end{equation}
Retaining only the terms that involve $H_0$ (the symbol $\equiv$ denotes equality up to terms not depending on $H_0$),
\begin{equation}\label{eq:5}
\ell_n(\beta, \alpha, H_0) \equiv \sum_{i=1}^n \sum_{j=1}^{J_i} \delta_{ij}\log h_0(Z_{ij})-\sum_{i=1}^n(A_i+\alpha)\log\Bigg\{\sum_{j=1}^{J_i}H_0(Z_{ij})\exp(\beta^{\top}X_{ij})+\alpha\Bigg\}
\end{equation}
We formulate a profiled likelihood as follows: Consider the least informative nonparametric modeling for $H_0$ in which $H_0(Z)$ has a possible jump of size $\rho_l$ at the observed failure time $\tilde{Z_l}$. Then
\begin{equation}\label{eq:6}
\begin{split}
H_N(Z)&=\sum_{l=1}^N \rho_l \mathds{1}_{\lbrace\tilde{Z_l}\leq Z\rbrace}\\
h_N(Z_{ij})&=\prod_{l=1}^N \rho_l^{\mathds{1}_{\lbrace \tilde{Z_l} = Z_{ij}\rbrace}}
\end{split}
\end{equation} where $\tilde{Z_l},l=1,...,N$, are the pooled observed failure times. Substituting (\ref{eq:6}) in (\ref{eq:5}), we get:\\
\begin{equation}\label{eq:7}
\begin{split}
\ell_n(\beta, \alpha, H_N)& \equiv \sum_{i=1}^n \sum_{j=1}^{J_i} \delta_{ij}\Big(\sum_{l=1}^N \mathds{1}_{\lbrace \tilde{Z_l}= Z_{ij}\rbrace}\log \rho_l\Big)\\
&-\sum_{i=1}^n(A_i+\alpha)\log\Bigg\{\alpha+\sum_{j=1}^{J_i}\exp(\beta^{\top}X_{ij})\sum_{l=1}^N \rho_l \mathds{1}_{\lbrace \tilde{Z_l}\leq Z_{ij}\rbrace}\Bigg\}
\end{split}
\end{equation}\\
\begin{equation}
\begin{split}
\frac{\partial \ell_n(\beta, \alpha, H_N) }{\partial \rho_k}&=\sum_{i=1}^n \sum_{j=1}^{J_i} \delta_{ij}\mathds{1}_{\lbrace \tilde{Z_k} = Z_{ij}\rbrace}\frac{1}{\rho_k}\\
&-\sum_{i=1}^n(A_i+\alpha)\frac{\sum_{j=1}^{J_i}\exp(\beta^{\top}X_{ij})\mathds{1}_{ \lbrace \tilde{Z_k}\leq Z_{ij}\rbrace}}{\alpha+\sum_{j=1}^{J_i}\exp(\beta^{\top}X_{ij})\sum_{l=1}^N \rho_l \mathds{1}_{\lbrace \tilde{Z_l}\leq Z_{ij}\rbrace}}, k=1,...N
\end{split}
\end{equation}\\
Assume that no simultaneous events (``ties'') occur, so that exactly one event falls at each pooled failure time $\tilde{Z_k}$. Setting the derivative to zero then gives
\begin{equation}\label{eq:9}
\frac{1}{\rho_k}=\sum_{i=1}^n\frac{(A_i+\alpha)\sum_{j=1}^{J_i}\exp(\beta^{\top}X_{ij})\mathds{1}_{\lbrace\tilde{Z_k}\leq Z_{ij}\rbrace}}{\alpha+\sum_{j=1}^{J_i}\exp(\beta^{\top}X_{ij})\sum_{l=1}^N \rho_l \mathds{1}_{\lbrace\tilde{Z_l}\leq Z_{ij}\rbrace}}, k=1,...N
\end{equation}
The value of $\rho_k$ in (\ref{eq:9}) is obtained numerically with the algorithm described in Section \ref{sec:1}.
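A minimal numerical sketch of this fixed-point iteration is given below; the function name, synthetic data, and convergence settings are our own illustrative choices, not part of a reference implementation:

```python
import numpy as np

def profile_jumps(Z, delta, X, cluster, beta, alpha, tol=1e-10, max_iter=1000):
    """Fixed-point iteration for the jump sizes rho_k of the profiled
    baseline cumulative hazard (eq. 9).  Z, delta, cluster are flat arrays
    over individuals; `cluster` holds each individual's cluster index 0..n-1."""
    Z, delta = np.asarray(Z, float), np.asarray(delta, int)
    eta = np.exp(X @ beta)                    # exp(beta^T X_ij)
    Zt = np.sort(Z[delta == 1])               # pooled observed failure times
    at_risk = Zt[:, None] <= Z[None, :]       # N x m indicator 1{Zt_k <= Z_ij}
    n = cluster.max() + 1
    A = np.bincount(cluster, weights=delta, minlength=n)  # A_i = sum_j delta_ij
    rho = np.full(len(Zt), 1.0 / max(len(Zt), 1))         # equal initial jumps
    for _ in range(max_iter):
        H = at_risk.T @ rho                   # H_N(Z_ij) for every individual
        denom = alpha + np.bincount(cluster, weights=eta * H, minlength=n)
        rhs = at_risk @ (eta * ((A + alpha) / denom)[cluster])
        rho_new = 1.0 / rhs                   # invert eq. (9)
        if np.max(np.abs(rho_new - rho)) < tol:
            break
        rho = rho_new
    return rho
```

Starting from equal jump sizes $\rho_k^{(0)}=1/N$, the update is repeated until successive iterates agree to tolerance.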
\subsection{Group LASSO estimator for Cox regression with frailty}
The objective function in the Group LASSO for Cox model with frailty is
\begin{equation}\label{eq:9b}
Q_n(\beta, \lambda_n)=-\frac{1}{n}\ell_n(\alpha,\beta, H_N )+ \lambda_n\sum_{\substack{j=1}}^K \sqrt{p_j} \lVert \beta_{(j)} \rVert_2
\end{equation}
where $Q_n(\beta, \lambda_n) $ is the convex objective function to be minimized over the model parameter $\beta$ for a given optimal tuning parameter $\lambda_n$. This tuning parameter controls the amount of penalization. $\ell_n(\alpha,\beta, H_N )$ is the profiled partial log-likelihood from (\ref{eq:7}). The model parameter $\beta$ is decomposed into $K$ vectors $\beta_{(j)}, j=1,2,...,K$, which correspond to the $K$ covariate groups, respectively. The term $ \sqrt{p_j}$ adjusts for the varying group sizes, and $ \lVert . \rVert_2$ is the Euclidean norm.
The group LASSO estimator for Cox regression with frailty is defined as: \\
\begin{equation}\label{eq:10}
\hat{\beta}_n(\lambda_n)=\arg\min_{\beta}\left\lbrace -\frac{1}{n}\ell_n(\alpha,\beta, H_N )+ \lambda_n\sum_{\substack{j=1}}^K \sqrt{p_j} \lVert \beta_{(j)} \rVert_2 \right\rbrace
\end{equation}
This estimator does not in general have an explicit solution, owing to the non-differentiability of the penalty. Therefore, we use an iterative procedure to solve the minimization problem. Depending on the value of the tuning parameter $ \lambda_n$, the estimated coefficients within a given parameter group $j$ satisfy either $ \hat{\beta}_{(j)}= 0$ for all of its components or $ \hat{\beta}_{(j)}\neq 0$ for all of its components.
This all-or-nothing behaviour is a consequence of the non-differentiability of the Euclidean norm at $ \beta_{(j)} =0$. If all group sizes equal one, the procedure reduces to the standard LASSO.
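This group-wise behaviour is conveniently expressed through the proximal operator of the group penalty. The following small illustration (our own, with arbitrary numbers) shows how an entire weak group is zeroed out while a strong group is merely shrunk:

```python
import numpy as np

def group_soft_threshold(v, thresh):
    """Proximal operator of thresh * ||.||_2: shrinks the whole block v
    towards zero, and sets it exactly to zero when ||v||_2 <= thresh."""
    norm = np.linalg.norm(v)
    return np.zeros_like(v) if norm <= thresh else (1.0 - thresh / norm) * v

weak = group_soft_threshold(np.array([0.10, -0.05, 0.08]), thresh=0.5)
strong = group_soft_threshold(np.array([2.0, -1.5, 1.0]), thresh=0.5)
assert np.all(weak == 0.0)       # the whole group is dropped ...
assert np.all(strong != 0.0)     # ... or every component stays nonzero
```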
\subsection{Model selection - find an optimal tuning parameter $\lambda$}
It is necessary to have an automated method for selecting the tuning parameter $\lambda$, which controls the amount of penalization, so that it is optimal with respect to a specific criterion, such as the Akaike information criterion (AIC) (Akaike 1973), the Bayesian information criterion (BIC) (Schwarz 1978) or generalized cross-validation (GCV) (Craven and Wahba 1978). There is no easy or universally agreed upon way to find the optimal value for $\lambda$, or for any tuning parameter. In general, the selected value is based on optimizing some function, typically a loss function $\sum_{i=1}^n L(y_i ,\hat{f}(X_i))$, where $\hat{f}(X)$ is a prediction model fitted on a training subset of the data. Finding the value of $\lambda$ that performs best according to the metric of choice can be done through several methods, of which $k$-fold cross-validation (CV) is the most common. In $k$-fold CV we randomly split the data into $k$ so-called folds. For every fold $i = 1,\dots,k$, we fit a model on all available data except the data in that fold; this retained portion serves as the training set. With that model, we then predict the data in the held-out fold, known as the test set. For each fold we obtain an estimate of some metric with which to evaluate the model, such as a relevant loss function. As a final estimate of how the model performs, we take the average metric over all of the folds. The cross-validation error for a fold is naturally chosen to be the negative log-likelihood. An important drawback of $k$-fold CV is the computational burden: fitting a penalized proportional hazards model is computationally intensive, especially since the model has to be fit multiple times for each value of $\lambda$ we want to evaluate.
In this paper, choosing $k$ equal to $10$, we estimate $\lambda$ by minimizing the $k$-fold cross-validation (CV) error $$ CV_k(\lambda)=-\sum_{i=1}^{k}\ell_n^i\Big(\hat{\beta}_{(n-i)}(\lambda)\Big)/n$$ where $\hat{\beta}_{(n-i)}(\lambda)$ is the penalized estimate of $ \beta$ at $ \lambda$ with the $ i^{th}$ subset taken out as the test set and the remaining $k-1$ subsets kept as the training set, and $ \ell_n^i(\cdot)$ is the log partial likelihood evaluated on the $ i^{th}$ subset.
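The cross-validation loop just described can be sketched generically as follows; here `fit` and `neg_loglik` are placeholders standing in for the penalized Cox-frailty estimator and its partial log-likelihood, and the toy mean-estimation example is purely illustrative:

```python
import numpy as np

def cv_error(data, lam, fit, neg_loglik, k=10, seed=0):
    """Generic k-fold CV error CV_k(lambda): fit on k-1 folds, evaluate the
    negative log-likelihood on the held-out fold, and average over folds.
    `fit` and `neg_loglik` are placeholders for the penalized estimator and
    the (partial) log-likelihood of the model."""
    idx = np.random.default_rng(seed).permutation(len(data))
    folds = np.array_split(idx, k)
    err = 0.0
    for test in folds:
        train = np.setdiff1d(idx, test)       # all data except the held-out fold
        beta_hat = fit(data[train], lam)
        err += neg_loglik(data[test], beta_hat)
    return err / len(data)

# toy check with a mean-estimation stand-in for the Cox fit
data = np.random.default_rng(1).normal(2.0, 1.0, 100)
fit = lambda d, lam: d.mean() / (1.0 + lam)       # shrunken "estimate"
nll = lambda d, b: float(np.sum((d - b) ** 2))    # stand-in loss
errs = {lam: cv_error(data, lam, fit, nll) for lam in (0.0, 0.5, 2.0)}
assert errs[0.0] < errs[2.0]   # heavy shrinkage away from the truth costs more
```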
\section{ Algorithm}\label{sec:1}
To minimize (\ref{eq:9b}) we use the following procedure: We split (\ref{equ:4}) into two pseudo log-likelihood functions. One mainly depending on $\beta$ :
\begin{equation}\label{eq:12}
\ell_n^{(\beta)}(\beta, \alpha, H_N)\equiv\sum_{i=1}^n\sum_{j=1}^{J_i}\beta^{\top}X_{ij}\delta_{ij}-\sum_{i=1}^n(A_i+\alpha)\log\Bigg\{\sum_{j=1}^{J_i}H_N(Z_{ij})\exp(\beta^{\top}X_{ij})+\alpha\Bigg\}
\end{equation}
and the other mainly depending on $ \alpha$:
\begin{equation}\label{eq:13}
\ell_n^{(\alpha)}(\beta,\alpha,H_N)\equiv\sum_{i=1}^n\Bigg\{\alpha\log\alpha+\log \Gamma(A_i+\alpha)-\log\Gamma(\alpha)-(A_i+\alpha)\log\Big[\sum_{j=1}^{J_i}H_N(Z_{ij})\exp(\beta^\top X_{ij})+\alpha\Big]\Bigg\}
\end{equation}
Since the penalty term in (\ref{eq:9b}) depends only on $ \beta$, minimizing (\ref{eq:9b}) is equivalent to minimizing:
\begin{equation}\label{eq:14}
-\frac{1}{n}\ell_n^{(\beta)}(\beta, \alpha, H_N)+ \lambda_n\sum_{\substack{j=1}}^K \sqrt{p_j} \lVert \beta_{(j)} \rVert_2
\end{equation}
We cycle through the parameter groups and minimize (\ref{eq:14}) keeping all except the current parameter group fixed. The Block Co-ordinate Gradient Descent (BCGD) algorithm is applied to solve the non-smooth convex optimization problem in (\ref{eq:14}) (Yun et al. 2011). In principle this algorithm could also be used to optimize (\ref{eq:13}). However, (\ref{eq:13}) involves the first two derivatives
of the gamma function, which are awkward to handle numerically for certain values of $\alpha$. We use an approach similar to that of Fan \& Li (2002) to avoid this difficulty: we place a grid of possible values on the frailty parameter $ \alpha$ and find the minimum of (\ref{eq:13}) over this discrete grid, as suggested by Nielsen et al. (1992).
Denote by $Q_{\lambda_n}(\beta)= -\frac{1}{n}\ell_n^{(\beta)}(\beta, \alpha, H_N)+ \lambda_n\sum_{\substack{j=1}}^K \sqrt{p_j} \lVert \beta_{(j)} \rVert_2 $ the penalized objective function to be minimized, and by $ \nabla Q_{\lambda_n}(\beta)$ the gradient of its smooth part evaluated at $\beta$.\\
\begin {table}[!ht]
\caption {Block Co-ordinate Gradient (BCGD) Descent Algorithm} \label{tab:title}
\begin{tabular}{l|p{10cm}}\hline
Steps & Algorithm \\\hline
1. & For $j= 1,...,K$ \newline choose $ \hat{\beta}_{(j)}^{(0)}$ as initial values.\\
2. & For the $m^{th}$ iteration, $ \hat{\beta}_{(j)}^{(m+1)} \leftarrow \hat{\beta}_{(j)}^{(m)}-\gamma_n \nabla Q_{\lambda_n}(\hat{\beta}_{(j)}^{(m)}) $ with $ m =0,1,2,... $ and $ \gamma_n>0$ the step size computed following the Armijo rule \\
3.& For each $j$, repeat steps 2 until some convergence criterion is met\\ \hline
\end{tabular}
\end {table}
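The following is a compact sketch of such a block update with a backtracking (Armijo-style) sufficient-decrease check, applied to a toy separable quadratic standing in for $-\frac{1}{n}\ell_n^{(\beta)}$; function names, constants, and the example objective are illustrative only:

```python
import numpy as np

def prox(v, t):
    """Block soft-threshold: proximal operator of t * ||.||_2."""
    nrm = np.linalg.norm(v)
    return np.zeros_like(v) if nrm <= t else (1 - t / nrm) * v

def bcgd(groups, grad, smooth, lam, pj, iters=200, gamma0=1.0, c=0.5):
    """Toy block coordinate gradient descent: cycle over groups, take a
    gradient step on the smooth part, apply the group prox, and backtrack
    (Armijo-style) until a sufficient-decrease condition holds."""
    beta = [np.zeros(g) for g in groups]
    for _ in range(iters):
        for j in range(len(groups)):
            g = grad(beta, j)
            gamma = gamma0
            while True:
                cand = prox(beta[j] - gamma * g, gamma * lam * np.sqrt(pj[j]))
                d = cand - beta[j]
                new = beta[:j] + [cand] + beta[j + 1:]
                if smooth(new) <= smooth(beta) + g @ d + (0.5 / gamma) * (d @ d):
                    break
                gamma *= c                    # shrink the step and retry
            beta[j] = cand
    return beta

# toy objective: 0.5*||beta - b||^2 + lam * sum_j sqrt(p_j) ||beta_(j)||_2
b = [np.array([3.0, 4.0]), np.array([0.1, -0.1])]
smooth = lambda beta: 0.5 * sum(float((x - y) @ (x - y)) for x, y in zip(beta, b))
grad = lambda beta, j: beta[j] - b[j]
sol = bcgd([2, 2], grad, smooth, lam=0.5, pj=[1, 1])
assert np.all(sol[1] == 0.0)          # weak group zeroed out entirely
assert np.linalg.norm(sol[0]) > 0     # strong group retained, merely shrunk
```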
With BCGD, we propose the following algorithm to solve (\ref{eq:9b}).
\begin{tabular}{l|p{10cm}}\hline
Steps & Algorithm \\\hline
1. & For $j= 1,...,K$ \newline choose $ \hat{\beta}_{(j)}^{(0)},\hat{\alpha}^{(0)}, \hat{\rho}_{k}^{(0)}$, $k=1,...,N$, as initial values.\\
2. & For the $m^{th}$ iteration, $\hat{\rho}_{k}^{(m+1)}$ is updated from (\ref{eq:9}) with $ m =0,1,2,... $, and then $\hat{H}_{N}^{(m+1)}$ is computed from (\ref{eq:6})\\
3. & With $\hat{H}_{N}^{(m+1)}$ known, minimize (\ref{eq:14}) with respect to $ \hat{\beta}_{(j)}^{(m+1)}$ using the BCGD algorithm\\
4. & With $\left(\hat{H}_{N}^{(m+1)} ,\hat{\beta}_{(j)}^{(m+1)}\right)$ known, minimize (\ref{eq:13}) with respect to $ \hat{\alpha}^{(m+1)}$ over the grid of values as stated above\\
5. & Repeat steps 2 to 4 until some convergence criterion is met\\ \hline
\end{tabular}\\
\section{Theoretical consistency of the method}
Consider the penalized pseudo-partial likelihood estimator: $$ \hat{\beta}_n(\lambda_n)=\arg\min_{\beta}\left\lbrace -\frac{1}{n}\ell_n(\alpha,\beta, H_N )+ \lambda_n \sum_{\substack{j=1}}^K \sqrt{p_j} \lVert \beta_{(j)} \rVert_2 \right\rbrace $$ Denote by $\beta^0 $ the true value of the model parameter $\beta$. For every $ \varepsilon > 0$, we need to show that $\mathds{P}\left\lbrace \|\hat{\beta}_n(\lambda_n)-\beta^0\|<\varepsilon \right\rbrace\rightarrow 1$ as $ n \to \infty$. Under the regularity conditions (A)-(D) of Andersen and Gill (1982), by their Theorem 3.2 the following two results hold.
$$ n^{-1/2}\dot{\ell}_n(\beta^0)\overset{d}{\to}\mathcal{N}(0, \Sigma)$$
$$ -\frac{1}{n}\ddot{\ell}_n(\beta^\ast)\overset{\mathds{P}}{\to}\Sigma \hspace{3mm} \text{for all} \hspace{2mm}\beta^\ast\overset{\mathds{P}}{\to}\beta^0$$ Here $\dot{\ell}_n(\beta^0)$ and $ \ddot{\ell}_n(\beta^\ast)$ are the first- and second-order derivatives of $\ell_n(\beta)$, i.e., the score function and the Hessian matrix, evaluated at $\beta^0$ and $\beta^\ast$ respectively, and $\Sigma$ is the positive definite Fisher information matrix. The consistency theorem stated in this section builds on the two results above.
\pagebreak
\begin{Theorem}(Consistency)
Assume that $(X_{ij},T_{ij},C_{ij})$ are independently distributed random samples given $u_i$, which are i.i.d. from a gamma distribution, for $i=1,...,n$ and $j=1,...,J_i$, and that $ T_{ij}$ and $ C_{ij}$ are conditionally independent given $X_{ij}$. Under regularity conditions (A)-(D) in Andersen and Gill (1982), if $\lambda_n\to 0$ as $ n \to \infty$, then there exists a local minimizer $\hat{\beta}_n(\lambda_n)$ of $ Q_n(\beta, \lambda_n)$ such that $ \mathds{P}\left\lbrace \| \hat{\beta}_n(\lambda_n)-\beta^0\|<\varepsilon\right\rbrace\rightarrow 1$ as $n\to\infty$.
\end{Theorem}
Proof: Following the approach of Theorem 5.7 in Van der Vaart (1998), we first show that, with probability tending to one, $ Q_n(\beta, \lambda_n)>Q_n(\beta^0, \lambda_n)$ for all $\beta$ on the sphere $\|\beta-\beta^0\|=a$. By a Taylor expansion of $\ell_n$ around $\beta^0$,
$$ Q_n(\beta, \lambda_n)-Q_n(\beta^0, \lambda_n)=-\frac{1}{n}\left( \ell_n(\beta)-\ell_n(\beta^0)\right)+\lambda_n\sum_{j=1}^K \sqrt{p_j}\left( \| \beta_{(j)}\|-\| \beta^0_{(j)}\|\right) $$
$$ =-\frac{1}{n}\dot{\ell}_n(\beta^0)^\top\left(\beta-\beta^0 \right)+\frac{1}{2} \left(\beta-\beta^0 \right)^\top \Big(-\frac{1}{n}\ddot{\ell}_n(\beta^0)\Big) \left(\beta-\beta^0 \right)+o_p\left( \|\beta- \beta^0\|^2\right)+\lambda_n\sum_{j=1}^K \sqrt{p_j}\left( \| \beta_{(j)}\|-\| \beta^0_{(j)}\|\right) $$
$$ \geq-n^{-1/2}O_p(1) \|\beta- \beta^0\|+\frac{1}{2}\left(\beta-\beta^0 \right)^\top\left( \Sigma+o_p(1)\right)\left(\beta-\beta^0 \right)+o_p\left( \|\beta- \beta^0\|^2\right) -\lambda_n \sum_{j=1}^K \sqrt{p_j} \| \beta_{(j)}-\beta^0_{(j)}\|, $$
where the last line uses the two asymptotic results above and the reverse triangle inequality. Since $\lambda_n\rightarrow 0$ as $n\rightarrow\infty$, the quadratic term dominates the remaining terms on the sphere $\|\beta-\beta^0\|=a$, and it is strictly positive because $\Sigma$ is positive definite. Hence, with probability tending to one, $ Q_n(\beta, \lambda_n)>Q_n(\beta^0, \lambda_n)$ for all $\beta$ with $\|\beta-\beta^0\|=a$, so $Q_n(\cdot, \lambda_n)$ admits a local minimizer $\hat{\beta}_n(\lambda_n)$ inside the ball $\lbrace\beta:\|\beta- \beta^0\|<a\rbrace$. Taking $a\leq\varepsilon$,
$$ \left\lbrace \inf_{\beta:\|\beta- \beta^0\|=a}Q_n(\beta, \lambda_n)>Q_n(\beta^0, \lambda_n)\right\rbrace \subseteq \left\lbrace \|\hat{\beta}_n(\lambda_n)-\beta^0\|<\varepsilon\right\rbrace $$
$$ \Rightarrow \mathds{P}\left\lbrace \|\hat{\beta}_n(\lambda_n)-\beta^0\|<\varepsilon\right\rbrace \geq \mathds{P}\left\lbrace \inf_{\beta:\|\beta- \beta^0\|=a}Q_n(\beta, \lambda_n)>Q_n(\beta^0, \lambda_n)\right\rbrace\rightarrow 1. $$
\section{ Applications}
With the advent of molecular biology to study the relationship between genetics and disease outcomes such as cancer, and as exposure science improves for taking multiple pollutant or pathogen measurements in air, water and other media, it becomes possible for affected individuals, researchers and public health practitioners to generate large datasets with rich information, such that the number of predictors $p$ is greater than the sample size $n$. Statistical methods are needed to handle and analyze such data sets. In the case of genetic epidemiology, researchers are able to identify genes that act along identical or similar pathways and are able to group these genes together to understand associations with health outcomes and to calculate cumulative risk. In the case of exposure assessment, environmental health scientists now understand that pollution sources release multiple pollutants that contribute to the same morbidities. Examples include the many chemicals in tobacco smoke, vehicle emissions, and effluents from industrial plants. People experiencing diarrhea may have co-infection with multiple pathogenic agents, and understanding the nature of outbreaks may improve as water exposure science advances. Personalized medicine has opened the door to personalized public health as more information can be gathered at the individual level. By using group LASSO with group-level frailty in survival analysis, we will be better able to trace health outcomes back to sources that contribute multiple exposures of interest. Group LASSO's preferential shrinking towards zero of non-significant groups of predictors will produce sparse models that link back to pollution sources rather than individual chemical or biological exposures. This approach could be applied in land-use studies, brownfield risk assessment, and environmental impact assessments of new construction projects.
Group LASSO with the Cox proportional hazards frailty model will be part of the new paradigm of risk assessment that encompasses cumulative exposures (National Research Council of the National Academies 2009). For use with genetic epidemiology, as gene mapping and gene testing become increasingly cost effective, large cohort datasets will become available to more effectively establish associations between genetic and epigenetic markers and disease outcomes. As previously discussed, group LASSO with group frailty allows common pathways and mechanisms to be incorporated into the analysis while also including a frailty term to account for unmeasured susceptibility or resilience that exist in subpopulations.
\section{ Simulated data}
Data sets were simulated with sample size $m = \sum_{i=1}^{n}J_i$ (where $n$ is the number of observation clusters and $J_i$ is the number of observations in the $i^{th}$ cluster) fixed to $100$ and the number of predictors $p$ equal to $100$. Group sizes for both individuals (with respect to frailty) and predictors (with respect to variable groupings) were set to 10 arbitrarily, though this can easily be adjusted depending on the dataset. We simulated a design matrix of order $(m,p)$ whose rows $X_i\overset{i.i.d.}{\leadsto}\mathcal{N}(0,\Sigma)$ with covariance matrix $\Sigma_{i,j}=\rho^{| i-j|}$ and $\rho=0.5$. In practice, the assumption of a constant hazard function is rarely tenable. A more general form of the hazard function is given by the Weibull distribution, which is characterized by two positive parameters: the scale parameter $(\lambda>0)$ and the shape parameter $(\nu >0)$. Its corresponding baseline hazard function is $$ h_0(t)=\lambda\nu t^{\nu-1}$$ and, by inverse-transform sampling, the survival time for a shared-gamma frailty Cox model is $$ T=\left( -\frac{\log(U)}{\lambda G\exp(\beta^\top X)}\right)^{1/\nu}$$ with $ U\leadsto Uni[0,1]$ and $G\leadsto\Gamma(\alpha,\alpha)$. To account for censoring, we simulated censoring times from the exponential distribution $C\leadsto Exp(3)$. The observed time for each observation is the minimum of its survival time $T$ and its censoring time $C$.
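The simulation of the survival and censoring times can be sketched as follows; this is our own script, and the specific parameter values are illustrative rather than those used for the figures:

```python
import numpy as np

# Inverse-transform simulation of survival times for the shared-gamma-frailty
# Cox model with Weibull baseline hazard h0(t) = lam * nu * t^(nu-1), so that
# H0(t) = lam * t^nu and T = (-log(U) / (lam * G * exp(beta'X)))^(1/nu).
# Parameter values below are illustrative only.
rng = np.random.default_rng(42)
n_clusters, J, p = 10, 10, 5
lam, nu, alpha = 1.0, 1.5, 2.0
beta = np.concatenate([np.array([1.0, -1.0]), np.zeros(p - 2)])

X = rng.normal(size=(n_clusters * J, p))
G = np.repeat(rng.gamma(alpha, 1.0 / alpha, n_clusters), J)  # shared frailty, mean 1
U = rng.uniform(size=n_clusters * J)
T = (-np.log(U) / (lam * G * np.exp(X @ beta))) ** (1.0 / nu)
C = rng.exponential(1.0 / 3.0, n_clusters * J)               # rate-3 censoring times
Z, delta = np.minimum(T, C), (T <= C).astype(int)            # observed time, status
```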
The algorithms described in Section \ref{sec:1} were implemented to select the tuning parameter $\lambda$ that minimizes the $k$-fold CV criterion. The performance of group LASSO with the Cox proportional hazard frailty model is compared and contrasted with group SCAD and group MCP. Figures 1-3 show an example solution path for group LASSO, group SCAD, and group MCP, respectively.
\begin{figure}[h!]
\includegraphics[width = 4in]{Lasso2}
\centering
\caption{Group Lasso Solution path for simulated examples}
\label{fig:Sln path Lasso}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=4in]{SCAD2}
\centering
\caption{Group SCAD Solution path for simulated examples}
\label{fig:Sln path SCAD}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=4in]{MCP2}
\centering
\caption{Group MCP Solution path for simulated examples}
\label{fig:Sln path MCP}
\end{figure}
\pagebreak
Figures 4-6 compare the performance of the three methods over 100 simulations with summary measures of tuning parameter value choice, cross-validation error, and R-squared, respectively (remembering that this is a simulated data set). Some summary trends appear. Notably for these simulations, group lasso tends to pick a smaller tuning parameter value, centered around 0.03 compared with 0.09 for group SCAD and 0.10 for group MCP. R-squared performance for group lasso is significantly better, averaging around 0.18 compared with 0.05 for group SCAD and 0.03 for group MCP. Considering cross-validation error, the results are more similar, with group lasso demonstrating only slightly better performance (139 for group lasso compared with 151 for group SCAD and 156 for group MCP) in this set of simulations.
\begin{figure}[h!]
\includegraphics[width=4in]{Compare_Tuning}
\centering
\caption{Distribution of tuning parameter for each of the three methods over 100 simulations.}
\label{fig:Compare Tuning Parameter}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=4in]{Compare_CVE}
\centering
\caption{Distribution of cross-validation errors for each of the three methods over 100 simulations.}
\label{fig:Compare CVE}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=4in]{Compare_R2}
\centering
\caption{Distribution of R-squared values for each of the three methods over 100 simulations.}
\label{fig:Compare R-squared}
\end{figure}
\section{ Discussion }
The limitations of this methodology overlap with the limitations of LASSO. Group LASSO remains a penalization method that is not appropriate for all studies and circumstances and is outperformed at times by ridge regression, least-angle regression (LARS), and the non-negative garrotte (Yuan and Lin 2007). Even though group LASSO and group frailty make adjustments to account for clustering effects, this method requires a resolution of data and background knowledge that is not available for many data sets and research questions. Future research will continue to elucidate many of these scenarios and make such datasets more amenable to use with group LASSO.
While the group LASSO gives a sparse set of groups, if it includes a group in the model then all coefficients in that group will be nonzero. Sometimes we would like parsimony both between groups and within each group. For example, if the predictors are genes, we would like to identify particularly ``important'' genes within the pathways of interest. Toward this end, Friedman et al. (2010) proposed the ``sparse-group LASSO'', a regularized model for linear regression with combined $L_1$ and $L_2$ penalties. They discussed the sparsity and other regularization properties of the optimal fit for this model and showed that it has the desired effect of group-wise and within-group sparsity. Even though the group LASSO is an attractive method for variable selection, since it respects the grouping structure in the data, it is generally not selection consistent and can also select groups that are not important in the model (Wei and Huang 2011). To improve the selection results, these authors proposed an adaptive group LASSO method, a generalization of the adaptive LASSO that requires an initial estimator. They showed that the adaptive group LASSO is consistent in group selection under certain conditions when the group LASSO is used as the initial estimator.
In this context, interested researchers may look into the "sparse-group LASSO" or "adaptive group LASSO" for use with the Cox proportional hazard model with frailty when optimizing grouped variable selection.
\section*{ Acknowledgments}
Funding for the initial meeting of authors JCU, TML, and PN was provided through MMED - the Center for Inference and Dynamics of Infectious Diseases and funding provided through MIDAS-National Institute of General Medical Sciences under award U54GM111274.
\pagebreak
\section*{References}
\section{Introduction}
After the fundamental paper by Anderson \cite{PWA} it was believed for
a long time that in one-dimensional random potentials all eigenstates
are localized in the thermodynamic limit for arbitrarily weak disorder
\cite{mott,ishii,efetov}. If Azbel resonances \cite{azbel,azbel2}, which form
a set of measure zero, are neglected, the above statement is
rigorously speaking, valid only for spatially uncorrelated white-noise
randomness \cite{pastur}. Later it was realized
that the spatial correlations of the disorder potential can
profoundly influence Anderson localization \cite{JK,F,KM}.
In this case localization can be partially suppressed, at
least for weak disorder \cite{IK}. In this context a
delocalization-localization transition in 1D for {\it long-range}
correlated disorder potentials has been intensively discussed
in the literature \cite{ML1,ML2,ML3,ML4,stanley1,stanley2}. On the other hand, it was
found that models with specific short-range correlated disorder,
so-called dimer models, exhibit conducting states \cite{dimer1,dimer2,dimer3,dimer4}.
In recent years considerable efforts have been made to understand
the combined effects of disorder and interactions, which leads to the
phenomenon of \emph{many-body localization} (MBL)
\cite{BAA,MBL1,MBL3,MBL4,MBL5,MBL6,MBL7,MBL8}, see
Refs~\cite{MBLR1,MBLR2,MBLR3,AP,PV} for recent reviews.
The MBL transition generally occurs at finite energy densities and is
characterized by ergodicity breaking, the existence of an extensive
number of quasi-local integrals of motion in the localized
phase \cite{MBL7,MBL8,MBL9} and Poissonian level statistics. This is
reminiscent of Yang-Baxter integrable many-body
systems \cite{Korepinbook,Gaudinbook}, which also feature Poissonian level
statistics and extensive numbers of conservation laws. In Yang-Baxter
integrable systems the conserved charges are extensive but have
(quasi) local densities. An interesting question is then whether there
are any connections between Yang-Baxter integrability and MBL. An
example of a Yang-Baxter integrable model that is localized is
provided by disordered Richardson models \cite{BDS}. However, this
class of models is infinite-ranged whereas studies of MBL have focussed
on models like the spin-1/2 Heisenberg chain with a random white-noise
correlated magnetic field. Other forms of disorder such as
random exchange interactions \cite{DM,DSF} have been explored as well
\cite{VA,VFPP,nonabel2}, and there appears to be a widespread belief
that MBL behaviour is a rather generic phenomenon in the strong
disorder regime. Non-MBL behaviour has been found in a disordered
Hubbard chain \cite{PBZ}, but this could be related to the presence of
non-abelian symmetries \cite{nonabel,nonabel2}. Another little
explored issue is what effects correlations in the disorder have on
MBL \cite{hoyos1,MBLcorr2,hoyos2}.
In this work we study a Yang-Baxter integrable model of a
Heisenberg-like spin chain with tuneable randomness and abelian
symmetry. We employ a number of standard tools used to probe for
(many-body) localized
behaviour: inverse participation ratios, local quantum quench dynamics
and transport properties in energy eigenstates. All methods point to
the same conclusion: the model does not exhibit any traces of
localization irrespective of the magnitudes of the interactions and
disorder. On the contrary, we find that the model is an ideal
conductor for both spin and energy.
Moreover, we show in a
non-interacting limit that by deforming the model by tuning the
correlations between the random interaction parameters (the resulting
model is no longer Yang-Baxter integrable) it is possible to induce
localization. This suggests that the model we study here is a
particular example of a broader class of strongly disordered models in
one dimension that do not exhibit MBL.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.45\textwidth]{iXXZchain2}
\caption{Schematic representation of the disordered interacting spin
chain studied here. Three types of position-dependent interactions
with random parameters are present: a nearest-neighbour exchange
$J^{(1)}_{2j}$, a next-nearest neighbour coupling $J^{(2)}_{2j}$ and a
three-spin interaction $K_{2j}$. Explicit expressions for the various terms
in the Hamiltonian are given in the text. The ratios
$J^{(1)}_{2j}/J^{(2)}_{2j}$ and $J^{(1)}_{2j}/K_{2j}$ are correlated.}
\label{fig:system}
\end{center}
\end{figure}
\section{The Model}
The Hamiltonian of our integrable chain contains nearest-neighbour,
next-nearest neighbour and three-spin interactions with random
couplings, \emph{cf.} Fig.~\ref{fig:system}, and
can be expressed in the form
\begin{eqnarray}
H&=&\sum_{j=1}^{L/2}
J^{(1)}_{2j}\Big(
\left[\vec{\sigma}_{2j-1}\cdot\vec{\sigma}_{2j}\right]_{\Delta_{2j}}
+\left[\vec{\sigma}_{2j}\cdot\vec{\sigma}_{2j+1}\right]_{\Delta_{2j}}\Big)\nonumber\\
&&\quad+K_{2j}\Big(\left[\vec{\sigma}_{2j}\cdot\big(
\vec{\sigma}_{2j-1}\times\vec{\sigma}_{2j+1}\big)\right]_{\Delta_{2j}^{-1}}+\Delta^{-1}_{2j}
\Big)\nonumber\\
&&\quad+J^{(2)}_{2j}\big(\vec{\sigma}_{2j-1}\cdot\vec{\sigma}_{2j+1}-1\big)\ ,
\label{Hamil}
\end{eqnarray}
where $[\vec{\sigma}_j\cdot\vec{\sigma}_k]_\Delta=
\sigma^x_j\sigma^x_k+\sigma^y_j\sigma^y_k+\Delta
(\sigma^z_j\sigma^z_k-1)$. The exchange couplings are parametrized as
\begin{eqnarray}
J^{(1)}_{2j}&=&\frac{\sin^2\eta\cosh\xi_{2j}}{\sin^2\eta+\sinh^2\xi_{2j}}\ ,\
J^{(2)}_{2j}=\frac{\cos\eta\sinh^2\xi_{2j}}{\sin^2\eta+\sinh^2\xi_{2j}}\ ,\nonumber\\
K_{2j}&=&\frac{\sin\eta\cos\eta\sinh\xi_{2j}}{\sin^2\eta+\sinh^2\xi_{2j}}\ ,\
\Delta_{2j}=\frac{\cos\eta}{\cosh\xi_{2j}}\ ,
\end{eqnarray}
where $\xi_{2j}$ and $\eta$ (equivalently the anisotropy $\Delta=\cos\eta$) are free parameters of the
model. By construction we recover the spin-1/2 Heisenberg XXZ Hamiltonian if
we set all inhomogeneities to zero $\xi_{2}=\xi_4=\dots=\xi_L=0$.
In the following we mainly consider the case where $\xi_{2k}$
are independent random variables drawn from a flat
distribution
\begin{equation}
P_W(\xi)=\frac{1}{2W}\theta(W-|\xi|)\ .
\label{PD1}
\end{equation}
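As a quick numerical check of this parametrization (our own script, not part of the original analysis), one can draw $\xi$ from $P_W$ and verify, e.g., that the homogeneous point $\xi=0$ reproduces the XXZ values $J^{(1)}=1$, $J^{(2)}=K=0$, $\Delta=\cos\eta$, and that $|\Delta_{2j}|\leq|\cos\eta|$ for every draw:

```python
import math
import random

def couplings(xi, eta):
    """Exchange couplings J1, J2, K and anisotropy Delta for one bond,
    following the parametrization given in the text."""
    d = math.sin(eta) ** 2 + math.sinh(xi) ** 2
    J1 = math.sin(eta) ** 2 * math.cosh(xi) / d
    J2 = math.cos(eta) * math.sinh(xi) ** 2 / d
    K = math.sin(eta) * math.cos(eta) * math.sinh(xi) / d
    Delta = math.cos(eta) / math.cosh(xi)
    return J1, J2, K, Delta

eta, W = 0.6, 2.0   # illustrative anisotropy angle and disorder width
# homogeneous limit: the XXZ chain is recovered
assert couplings(0.0, eta) == (1.0, 0.0, 0.0, math.cos(eta))
# disordered couplings: xi drawn from the flat distribution P_W
random.seed(7)
draws = [couplings(random.uniform(-W, W), eta) for _ in range(1000)]
assert all(abs(D) <= abs(math.cos(eta)) for *_, D in draws)
```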
The derivation of the Hamiltonian \fr{Hamil} is summarized in
Appendix \ref{app:inhom}. The model \fr{Hamil} is a variant of a class
of disordered impurity models previously studied by Kl\"umper and Zvyagin
\cite{KZ1}, who in particular determined thermodynamic properties
\cite{KZ1,KZ2,Z1,Z11,Z2}. Yang-Baxter integrability imposes severe
restrictions on the form of the Hamiltonian. This results in all
three kinds of interactions involving the same random parameters and
can be viewed as short-range correlated disorder in a model of
interacting spins.
As a sufficiently strong next-nearest neighbour exchange can induce
dimerization our model can in some sense be considered as an
interacting analogue of the ``dimer models'' mentioned
above.
\subsection{Higher conservation laws}
\label{sec:HCL}
As shown in Appendix \ref{app:inhom} the Hamiltonian \fr{Hamil} is
related to the transfer matrix $\tau(\mu)$ of an inhomogeneous
six-vertex model. This connection is useful for constructing higher
conservation laws, which are a characteristic feature of Yang-Baxter
integrable models. In the case at hand they
can be obtained by taking logarithmic derivatives of the transfer
matrix at $\mu=0$
\begin{equation}
Q^{(n)}=i^{n}\frac{d^{n-1}}{d\mu^{n-1}}\Bigg|_{\mu=0}\ln\big(\tau(\mu)\big)\ ,\quad
n=2,3,\dots
\label{HCL}
\end{equation}
The Hamiltonian is by construction proportional to $Q^{(2)}$
\begin{equation}
H=-2i\sin\eta\ Q^{(2)}\ .
\end{equation}
Importantly the higher conservation laws are also (ultra)local in the
following sense: they can be expressed in the form
\begin{equation}
Q^{(n)}=\sum_{j=1}^L Q^{(n)}_j\ ,
\end{equation}
where $Q^{(n)}_j$ act non-trivially only on a finite number of
neighbouring sites. We note that the structure of these conservation
laws is very different from that of the ``l-bits'' in many-body
localized systems.
In the following we will make use of the first higher conservation law
$Q^{(3)}$. To that end we require an explicit expression in terms of
the L-operator \fr{Lop} and its derivatives. For the operator
$Q^{(2)}$ this is readily done:
\begin{equation}
Q^{(2)}=-\sum_{j=1}^{L/2}\Big[Q^{(2,1)}_{2j-1,2j}+Q^{(2,2)}_{2j-1,2j,2j+1}\Big]\ ,
\end{equation}
where
\begin{eqnarray}
\Big[Q^{(2,2)}_{1,2,3}\Big]_{\alpha_1\alpha_2\alpha_3}^{\beta_1\beta_2\beta_3}
&=&\Big[L(-x_2)\Big]^{\alpha_1c}_{\alpha_2d}
\Big[L'(0)\Big]^{c\beta_3}_{\alpha_3e}
\Big[L(x_2)\Big]^{e\beta_1}_{d\beta_2}\ ,\nonumber\\
\Big[Q^{(2,1)}_{1,2}\Big]_{\alpha_1\alpha_2}^{\beta_1\beta_2}
&=&\Big[L'(-x_{2})\Big]^{\alpha_{1}c}_{\alpha_{2}d}
\Big[L(x_{2})\Big]^{c\beta_{1}}_{d\beta_{2}}\ .
\end{eqnarray}
The conservation law $Q^{(3)}$ can be expressed as a sum of terms that
involve spin interactions on two, three, four and five neighbouring
sites respectively
\begin{eqnarray}
Q^{(3)}&=&-i\sum_{j=1}^{L/2}\Bigg[ Q^{(3,1)}_{2j-1,2j}+Q^{(3,2)}_{2j-1,2j,2j+1}\nonumber\\
&+&Q^{(3,3)}_{2j-1,2j,2j+1,2j+2}
+Q^{(3,4)}_{2j-1,2j,2j+1,2j+2,2j+3}\Bigg],\nonumber\\
\label{Q3}
\end{eqnarray}
where
\begin{eqnarray}
\Big(Q^{(3,1)}_{1,2}\Big)_{\alpha_1\alpha_2}^{\beta_1\beta_2}
&=&\Big[L''(-x_{2})\Big]^{\alpha_{1}c}_{\alpha_{2}d}
\Big[L(x_{2})\Big]^{c\beta_{1}}_{d\beta_{2}}\nonumber\\
&&-\Big[Q^{(2,1)}_{1,2}\ Q^{(2,1)}_{1,2}\Big]_{\alpha_{1}\alpha_{2}}^{\beta_{1}\beta_{2}}\ ,\nonumber\\
\Big(Q^{(3,2)}_{1,2,3}\Big)_{\alpha_1\alpha_2\alpha_3}^{\beta_1\beta_2\beta_3}&=&
2\Big[L'(-x_2)\Big]^{\alpha_1c}_{\alpha_2d}
\Big[L'(0)\Big]^{c\beta_3}_{\alpha_3e}
\Big[L(x_2)\Big]^{e\beta_1}_{d\beta_2}\nonumber\\
&&-
\Big[Q^{(2,1)}_{1,2}Q^{(2,2)}_{1,2,3}+
Q^{(2,2)}_{1,2,3}Q^{(2,1)}_{1,2}\Big]_{\alpha_1\alpha_2\alpha_3}^{\beta_1\beta_2\beta_3}\ ,\nonumber\\
Q^{(3,3)}_{1,2,3,4}&=&
Q^{(2,1)}_{3,4}\ Q^{(2,2)}_{1,2,3}-Q^{(2,2)}_{1,2,3}\ Q^{(2,1)}_{3,4}\ ,\nonumber\\
Q^{(3,4)}_{1,2,3,4,5}&=&Q^{(2,2)}_{3,4,5}\ Q^{(2,2)}_{1,2,3}-
Q^{(2,2)}_{1,2,3}\ Q^{(2,2)}_{3,4,5}\ .
\label{Qnm}
\end{eqnarray}
The operator $Q^{(3)}$ can be expressed in terms of Pauli matrices
using \fr{Lop}, but this is not particularly useful for our purposes.
\section{Non-interacting limit}
The particular case $\eta=\pi/2$ maps to non-interacting spinless
fermions by means of a Jordan-Wigner transformation \cite{LSM}. The
resulting Hamiltonian (\ref{Hamil}) is block-diagonal
$H=P_+H_++P_-H_-$, where
$P_\pm=\frac{1}{2}(1\pm(-1)^{F})$ are projection operators onto the subspaces
with even and odd numbers of fermions respectively ($F$ is the
fermion number operator). We find
\begin{equation}
H_+=\sum_{j<k=1}^Lc^\dagger_jA_{jk}c_k+{\rm h.c.}\ ,
\label{HFF}
\end{equation}
where $A_{1,L-1}=2i\tanh(\xi_{L})$,
$A_{1,L}=\frac{2}{\cosh(\xi_{L})}$ and
\begin{equation}
A_{2j\pm1,2j}=-\frac{2}{\cosh(\xi_{2j})},\
A_{2j-1,2j+1}=2i\tanh(\xi_{2j})\ .
\label{HFF2}
\end{equation}
Fermions tunnel between neighbouring sites with amplitudes that are
random apart from the constraints $A_{2j-1,2j}=A_{2j,2j+1}$.
In addition there is a next-nearest neighbour hopping on the
sublattice of all odd sites. Importantly the
corresponding tunneling amplitudes $A_{2j-1,2j+1}$ are not independent
random variables, but are related to the amplitudes $A_{2j-1,2j}$. The
fermion hopping \fr{HFF2} can therefore be thought of as realizing
a particular kind of \emph{correlated disorder}. As we will see, this
has important consequences for the physical properties of energy
eigenstates. Single-particle energy eigenstates are constructed as
$|\Psi_n\rangle=\sum_{j=1}^L\phi_{n,j}c^\dagger_j|0\rangle$, where
$\phi_n$ are the (orthonormal) eigenvectors of the matrix $A$ and
$|0\rangle$ is the state without fermions. In order to investigate
whether the model \fr{HFF} is localized we have determined the inverse
participation ratio of single-particle energy eigenstates
$I_n=\sum_{j=1}^L|\phi_{n,j}|^4$. We have considered several probability
distributions of the random parameters $\xi_{2j}$, all of which lead
to the same conclusion. We therefore focus on \fr{PD1}. In
Fig.~\ref{fig:ipr} we show normalized histograms of $I_n$ averaged
over 1000 disorder realizations for $W=3$ and two different system sizes.
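A minimal numerical sketch of this diagnostic is given below. It builds the single-particle hopping matrix from the bulk amplitudes \fr{HFF2} and computes the disorder-averaged inverse participation ratio; open boundary conditions are assumed for simplicity (the boundary terms $A_{1,L-1}$, $A_{1,L}$ of the periodic chain are dropped), which is an assumption of this sketch.

```python
import numpy as np

def hopping_matrix(L, W, rng):
    """Single-particle hopping matrix at the free-fermion point eta = pi/2,
    using the bulk amplitudes of Eq. (HFF2) with open boundary conditions
    (the periodic boundary terms are dropped -- a simplification)."""
    xi = rng.uniform(-W, W, size=L // 2)      # box-distributed xi_{2j}
    H = np.zeros((L, L), dtype=complex)
    for n, x in enumerate(xi):                # sites 2j-1, 2j, 2j+1
        a, b, c = 2 * n, 2 * n + 1, 2 * n + 2
        H[a, b] = -2.0 / np.cosh(x)           # A_{2j-1,2j}
        if c < L:
            H[b, c] = -2.0 / np.cosh(x)       # A_{2j,2j+1} (same amplitude)
            H[a, c] = 2.0j * np.tanh(x)       # correlated NNN tunneling
    return H + H.conj().T                     # Hermitian single-particle H

def mean_ipr(L, W, n_disorder, seed=0):
    """Disorder-averaged IPR I_n = sum_j |phi_{n,j}|^4 over all states."""
    rng = np.random.default_rng(seed)
    iprs = []
    for _ in range(n_disorder):
        _, phi = np.linalg.eigh(hopping_matrix(L, W, rng))
        iprs.append(np.sum(np.abs(phi) ** 4, axis=0))
    return float(np.mean(iprs))
```

For delocalized states the mean IPR should scale as $1/L$, which can be checked by comparing, e.g., `mean_ipr(64, 3.0, 100)` against `mean_ipr(128, 3.0, 100)`.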
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.45\textwidth]{IPR3.pdf}
\caption{Histograms of the inverse participation ratios for
single-particle energy eigenstates for system sizes $L=64$ (yellow)
and $L=128$ (blue) averaged over 1000 disorder realizations with
probability distribution for inhomogeneities given by the box distribution
with $W=3$. Inset: same for eigenstates of \fr{HFF}, \fr{HFF3}
with $s=0$, $s'=0.2$.}
\label{fig:ipr}
\end{center}
\end{figure}
We see that the inverse participation ratios are strongly peaked at a
value that we find to scale inversely with system size as $1/L$. This
indicates that the eigenstates are not localized.
At this point the question arises whether the model \fr{HFF},
\fr{HFF2} is delocalized as a result of fine tuning, or whether it is
representative of a broader class of theories. To investigate this
issue we have considered free fermion models of the type \fr{HFF}
with tunneling amplitudes
\begin{eqnarray}
A_{2j\pm1,2j}&=&-2|x_{2j}|,\nonumber\\
A_{2j-1,2j+1}&=&2is\ {\rm sgn}(x_{2j})\sqrt{1-x_{2j}^2}
+s'y_{2j},
\label{HFF3}
\end{eqnarray}
where we take $x_{2j}$ and $y_{2j}$ to be independent random
variables with probability distribution $P_1(x)$ \fr{PD1}. The tuning
parameters $0\leq s,s'\leq 1$ allow us to interpolate between the
``Yang-Baxter'' case in which the next-nearest neighbor tunneling
amplitudes $A_{2j-1,2j+1}$ are fixed in terms of the $A_{2j,2j+1}$ and
the limit in which they become independent random variables.
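The deformed amplitudes \fr{HFF3} can be sketched in the same single-particle language; as before, open boundary conditions are assumed here for simplicity, which is not the setup of the paper's periodic chain.

```python
import numpy as np

def hopping_matrix_deformed(L, s, sp, rng):
    """Single-particle hopping matrix for the deformed model (HFF3), with
    x_{2j} and y_{2j} drawn independently from the box distribution with
    W = 1.  Open boundary conditions are assumed (a simplification)."""
    x = rng.uniform(-1.0, 1.0, size=L // 2)
    y = rng.uniform(-1.0, 1.0, size=L // 2)
    H = np.zeros((L, L), dtype=complex)
    for n in range(L // 2):
        a, b, c = 2 * n, 2 * n + 1, 2 * n + 2
        H[a, b] = -2.0 * abs(x[n])                 # A_{2j-1,2j}
        if c < L:
            H[b, c] = -2.0 * abs(x[n])             # A_{2j,2j+1}
            # s tunes the correlated "Yang-Baxter" part of the NNN
            # amplitude, s' adds an independent random component.
            H[a, c] = (2.0j * s * np.sign(x[n]) * np.sqrt(1.0 - x[n]**2)
                       + sp * y[n])
    return H + H.conj().T
```

Diagonalizing this matrix for various $(s,s')$ and histogramming the IPRs reproduces the interpolation described in the text.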
We have analyzed IPRs for a range of values $s$ and
$s'$.
The results
suggest that for $s\approx 1$ and small values of $s'$, i.e. Hamiltonians
close to the Yang-Baxter point, eigenstates are
delocalized. On the other hand for small values of $s$ and $s'$, i.e. weak
uncorrelated next-nearest neighbour tunneling, the data
is more consistent with localization as shown in the inset of
Fig.~\ref{fig:ipr}. This suggests that the ``Yang-Baxter'' model
\fr{HFF}, \fr{HFF2} does not correspond to an isolated point in
parameter space but is representative of a delocalized region that arises
as a result of the correlation between the nearest neighbor and
next-nearest neighbor tunneling.
\subsection{Local Quantum Quench}
A second way of investigating localization properties in energy
eigenstates is by considering the spreading of correlations after a
local quantum quench. We prepare the system in an initial finite
energy density state and then flip two neighbouring spins. This
choice of initial state allows us to work in the even fermion
parity sector of the Hilbert space, $(-1)^{F}=1$. In order to
investigate the spreading of correlations we determine the expectation
value of the $z$-component of spin at site $\ell$. Using Wick's
theorem we obtain compact expressions for $S^{z}_{\ell}(t)$ that can be
evaluated numerically for systems of hundreds of spins. In
Fig.~(\ref{fig:beta1}) we show results for a representative example,
where a system of size $L=128$ is initially prepared in an energy eigenstate
corresponding to inverse temperature $\beta=1$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.23\textwidth]{beta1L128DP.pdf}\quad
\includegraphics[width=0.222\textwidth]{beta1L64_pure_DP_500.png}
\caption{
Left plot: $\langle S^z_\ell(t)\rangle$ averaged over $30$ disorder realizations from the box probability distribution for a system of size $L=128$
and initial thermal state with $\beta=1$. There is a clear
light cone effect. Right plot: the same for the modified free fermion model \fr{HFF3} with $s=s'=0$, $L=64$. The picture is consistent with localization.}
\label{fig:beta1}
\end{center}
\end{figure}
We see that the perturbation, which is initially localized at sites $L/2$ and
$L/2+1$, propagates ballistically through the system, as can be seen
from the presence of a ``light-cone'' outside of which our observable
remains negligibly small. The velocity characterizing this ballistic
propagation depends on the disorder distribution and can be determined
exactly in the thermodynamic limit. The spreading of a local
perturbation in energy eigenstates of the modified free fermion model
\fr{HFF3} can be analyzed in an analogous way. As shown in Fig.~\ref{fig:beta1},
for small values of $s$ and $s'$ the perturbation remains localized at sites
$L/2$ and $L/2+1$ in an extended time window even though a weak
light-cone effect occurs at early times. This again indicates that the
modified free fermion model is localized at small values of $s$, $s'$.
\section{Strongly interacting regime}
Examination of the IPR of \fr{Hamil} away
from the free fermion point for small system sizes $L=10, 12$ is
compatible with delocalized behaviour of energy eigenstates. We have also
studied the spreading of local perturbations in energy eigenstates.
(i) We have considered a single spin flip at an odd site on top of the
saturated ferromagnetic state. Representative results for the
subsequent dynamics on an $L=100$ site system are shown in
Fig.~(\ref{fig:quench}). There is a clear light-cone effect that
signals ballistic spreading of the perturbation. (ii) We have flipped two
neighbouring spins in the ground state, \emph{cf.}
Ref.~\onlinecite{ganahl} for a discussion of the analogous protocol in
the clean system. In this case, however, our numerics are limited
to small systems of up to $L=16$. We find that there
again is a clear light cone effect, see Fig.~\ref{fig:quench2}.
\begin{figure}[ht]
\includegraphics[width=0.48\textwidth]{magnon1}
\caption{Spreading of a single spin flip on top of
the saturated ferromagnetic state for $L=100$ and $\eta=0$ in
(\ref{Hamil}), averaged over 50 disorder realizations with
distribution $P_1(\xi)$.
}
\label{fig:quench}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=0.48\textwidth]{quenchD2.pdf}
\caption{Spreading of spin flips at two
neighbouring sites on top of the ground state for $N=16$ and
$\cos(\eta)=2$ in (\ref{Hamil}), averaged over 20 disorder realizations with
distribution $P_{20}(\xi)$.
}
\label{fig:quench2}
\end{figure}
\section{Bounds on spin and energy transport}
We will now demonstrate that the eigenstates of \fr{Hamil} exhibit
ballistic energy and spin transport for any anisotropy $\eta$ and
disorder strength $W$. We employ a combination of two methods:
the first is based on Mazur's inequality \cite{Mazur} and was
previously employed to establish the existence of a finite temperature
Drude weight in the clean case \cite{Zotos}, while the second is based
on the recently developed hydrodynamic approach to transport in
integrable models \cite{IN,IN2,hydro1,hydro2}. The starting point of
the first approach is the existence of a set of conserved quantities
$[H,{\cal Q}_n]=0$ that are orthogonal in the sense that $\langle
{\cal Q}_n\ {\cal Q}_m\rangle_\beta=\delta_{n,m} \langle {\cal
Q}_n^2\rangle_\beta.$ Here $\langle . \rangle_\beta$ denotes a
thermal expectation value. As the z-component of total spin
$\sigma^z=\sum_{j=1}^L\sigma_j^z$ is a conserved quantity in our model
we employ a magnetic field term to fix the magnetization in our
thermal ensemble. Given an operator $A=A^\dagger$ with $\langle
A\rangle_\beta=0$ the following inequality due to Mazur \cite{Mazur}
then holds
\begin{equation}
\lim_{T_0\to\infty}\frac{1}{T_0}\int_0^{T_0} dt\ \langle
A(t)A\rangle_\beta
\geq \sum_n\frac{\langle A{\cal Q}_{n}\rangle_\beta^2}{\langle
\big({\cal Q}_{n}\big)^2\rangle_\beta}\ .
\label{Mazur}
\end{equation}
A positive bound for the right-hand side of \fr{Mazur} implies that the
autocorrelation function of the operator $A$ does not decay to zero at
late times. This implies that the Fourier transform has a
non-vanishing (generalized) Drude weight
\begin{equation}
\frac{1}{L}\int_0^\infty dt\ \cos(\omega t) \langle
A(t)A\rangle_\beta= 2\pi D_A\delta(\omega)+\dots
\label{Drude}
\end{equation}
When $A$ is the spin current or the energy current
operator the non-decay of the autocorrelation functions shows that
the system is an ideal conductor of spin/energy.
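The logic of the Mazur bound is easy to verify on a toy example: for any Hermitian ``Hamiltonian'' with a nondegenerate spectrum, the infinite-time average of the $\beta=0$ autocorrelator reduces to the diagonal matrix elements in the energy eigenbasis, and is bounded below by the overlap with any conserved charge. The sketch below uses a random matrix rather than the spin chain itself, with $Q=H-\langle H\rangle$ as the single conserved charge (the subtraction mirrors the one applied to $Q^{(3)}$ in the text).

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8

def random_hermitian(d, rng):
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

H = random_hermitian(d, rng)              # toy "Hamiltonian"
A = random_hermitian(d, rng)
A -= np.trace(A).real / d * np.eye(d)     # enforce <A>_{beta=0} = 0

E, V = np.linalg.eigh(H)
Ae = V.conj().T @ A @ V                   # A in the energy eigenbasis

# Infinite-time average of <A(t)A> at beta = 0: for a nondegenerate
# spectrum only the diagonal elements (E_n = E_m) survive the average.
lhs = float(np.sum(np.abs(np.diag(Ae)) ** 2) / d)

# Mazur bound with the single conserved charge Q = H - <H>_{beta=0}.
Q = H - np.trace(H).real / d * np.eye(d)
rhs = float((np.trace(A @ Q).real / d) ** 2 / (np.trace(Q @ Q).real / d))
```

Here `lhs >= rhs` holds exactly, by the Cauchy--Schwarz inequality applied to the diagonal matrix elements.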
The Hamiltonian \fr{Hamil} has an extensive number of integrals of
motion $Q^{(n)}$ \fr{HCL}. The conservation laws relevant to us here have local
densities and we focus on the most local of these, $Q^{(3)}$, which
involves interactions between spins on at most five neighbouring
sites, \emph{cf.} eqn \fr{Q3}. We furthermore constrain our discussion
to infinite temperatures $\beta=0$. For local operators the
corresponding thermal average equals the expectation value in
typical energy eigenstates at the associated energy density, which
allows us to draw conclusions about the local properties of the
eigenstates of \fr{Hamil}. In order to use the Mazur inequality \fr{Mazur}
we carry out a subtraction ${\cal Q}_3=Q^{(3)}-\langle Q^{(3)}\rangle_{\beta=0}$,
which ensures that the expectation value of ${\cal Q}_3^2$ is
extensive, i.e. $\lim_{L\to\infty}L^{-1}\langle {\cal
Q}_3^2\rangle_{\beta=0}=a_1>0.$ The expression for $a_1$ is rather
cumbersome, so we do not report it here.
The spin and energy current operators $J^{S,E}$ associated with the
Hamiltonian \fr{Hamil} $H=\sum_jH_{2j-1,2j,2j+1}$ are obtained from
the continuity equations
\begin{eqnarray}
i\sum_{j=-\infty}^\ell[\sigma^z_j,H]&=&J^{\rm S}_\ell,\nonumber\\
i\sum_{j=-\infty}^\ell[H_{2j-1,2j,2j+1},H]&=&J^{\rm E}_{2\ell}\ .
\end{eqnarray}
Evaluating the commutators and then summing over all sites gives
\begin{eqnarray}
J^{\rm E}&=&4i\sin^2(\eta)\sum_j\Big[Q^{(3,3)}_{2j-1,2j,2j+1,2j+2}\nonumber\\
&&\qquad\qquad+Q^{(3,4)}_{2j-1,2j,2j+1,2j+2,2j+3}\Big] ,
\label{en-curr}
\end{eqnarray}
where $Q^{(3,3)}$ and $Q^{(3,4)}$ are given in \fr{Qnm}.
The spin current operator can be written in the form
\begin{widetext}
\begin{eqnarray}
J^{\rm S}&=&2\sum_{j}
J^{(1)}_{2j} \big( T^{xy}_{2j-1,2j}-T^{yx}_{2j-1,2j}
+T^{xy}_{2j,2j+1}-T^{yx}_{2j,2j+1}\big)
+2 J^{(2)}_{2j}\big(T^{xy}_{2j-1,2j+1}-T^{yx}_{2j-1,2j+1}\big)\nonumber\\
&-&\frac{2K_{2j}}{\Delta_{2j}}
\big(T^{xzx}_{2j-1,2j,2j+1}+T^{yzy}_{2j-1,2j,2j+1}\big)
+K_{2j}
\big(T^{zxx}_{2j-1,2j,2j+1}+T^{zyy}_{2j-1,2j,2j+1}
+T^{yyz}_{2j-1,2j,2j+1}+T^{xxz}_{2j-1,2j,2j+1}\big),
\label{spin-curr}
\end{eqnarray}
\end{widetext}
where we have defined
\begin{equation}
T^{\alpha_1\dots\alpha_n}_{j_1,\dots,j_n}=\prod_{k=1}^n\sigma^{\alpha_k}_{j_k}\ .
\end{equation}
We find that, in contrast to the homogeneous case, the energy current
is not conserved, i.e. $[H,J^{\rm E}]\neq 0$.
At infinite
temperature and finite magnetization ${\rm m}$ a tedious but
straightforward calculation gives the following result for the overlap
of the spin current with the third conserved charge
\begin{equation}
\langle J^{\rm S} {\cal Q}_3\rangle_{\beta=0}=
\frac{ \Delta}{1-\Delta^2}
\sum_{n} \frac{4 \text{m} \left(1-4
\text{m}^2\right)f(\xi_{2n})}{\big[\cosh(2\xi_{2n})-\cos(2\eta)\big]^3}\ ,
\end{equation}
where
\begin{eqnarray}
f(z)&=&\cos(2\eta)\cosh(6z)\nonumber\\
&-&2\big(\cos(4\eta)-\cos (2 \eta )+3\big) \cosh (4z)
\nonumber\\
&+&\big(6\cos(4 \eta )-\cos (6 \eta )+10\big) \cosh (2 z)\nonumber\\
&-&18 \cos (2 \eta )+2 \cos(4\eta )+6\ .
\end{eqnarray}
For a very large system we may replace the sum by an integral so that
\begin{eqnarray}
&&\langle J^{\rm S} {\cal Q}_3\rangle_{\beta=0}= a_{\rm S}L+o(L)\ ,\nonumber\\
&&a_{\rm S}=\frac{\Delta}{1-\Delta^2}
\int d\xi \frac{4 \text{m} \left(1-4 \text{m}^2\right)f(\xi)P(\xi)}{\big(\cosh (2 \xi)-\cos (2
\eta)\big)^3}.
\label{JQ}
\end{eqnarray}
Here $P(\xi)$ is the probability distribution on the random variables
$\xi_{2n}$. Importantly we have $a_{\rm S}\neq 0$ unless we fine-tune
the probability distribution. This in turn provides a positive bound
for the Mazur inequality
\begin{equation}
\lim_{L \to\infty}
\lim_{T_0\to\infty}\frac{1}{T_0L}\int_0^{T_0} dt\ \langle
J^{\rm S}(t)J^{\rm S}\rangle_{\beta=0}
\geq \frac{a_{\rm{S}}^{2}}{a_1}\ .
\end{equation}
In the case of the energy current for simplicity we consider the zero
magnetization sector ${\rm m}=0$. Applying Mazur's inequality we find
\begin{eqnarray}
&&\lim_{L \to\infty}
\lim_{T_0\to\infty}\frac{1}{T_0L}\int_0^{T_0} dt\ \langle
J^{\rm E}(t)J^{\rm E}\rangle_{\beta=0}\nonumber\\
&&\geq \lim_{L\to\infty}\frac{1}{L}\frac{\langle J^{\rm E}{\cal
Q}_3\rangle_{\beta=0}^2}{\langle {\cal Q}_3^2\rangle_{\beta=0}}
=\frac{64\big(2+2\cos(2\eta)\big)^2}{16\sin^4(\eta)a_1}\ .
\label{Ebound}
\end{eqnarray}
Interestingly the bound \fr{Ebound} is independent of the
inhomogeneities. The generalization to ${\rm m}\neq 0$ is
very tedious but straightforward and provides a non-zero bound as
well.
The above calculation proves that at energy densities corresponding to
infinite temperature the model \fr{Hamil} exhibits (i) a non-zero
Drude weight at any finite magnetization; (ii) ballistic energy transport.
\section{Spin and energy transport from generalized hydrodynamics}
Generalized Drude weights \fr{Drude} can be analyzed in full by means of
the approach introduced in Ref.~\onlinecite{IN}. The starting point is the
existence of a basis of local charges $\hat{Q}_i$ and associated
currents $J_i$. Using these charges a {\it generalized Gibbs ensemble}
is defined by the density matrix $\rho_{\rm GGE}\sim \exp(-\sum_{n}\mu_{n}\hat{Q}_{n})$,
where $\mu_{i}$ are ``chemical potentials''. The generalized Drude
weights $D_A$ are then obtained from appropriate expectation values in
this ensemble and are determined by using the thermodynamic Bethe
ansatz (TBA) method \cite{takahashi}. According to Ref.~\onlinecite{IN},
in integrable models $D_A$ can be expressed as
\begin{equation}
D_A=\sum_{n}\int d\lambda
\frac{\eta_n(\lambda)}{\rho_n^{\rm tot}(\lambda)}
\left(\frac{\epsilon'_{n}(\lambda)q^{\rm
eff}_{A}(\lambda)}{2\pi(1+\eta_n(\lambda))}
\right)^2 ,
\label{D}
\end{equation}
where $\eta_n(\lambda)=\bar{\rho}_{n}(\lambda)/\rho_{n}(\lambda)$ is
the ratio of hole and particle densities,
$\rho^{\rm tot}_{n}(\lambda,\{\xi_{2j}\})=\rho_{n}+\bar{\rho}_{n}$,
$\epsilon_{n}(\lambda)$ are the energies of $n$-string excitations over
the state of thermal equilibrium \cite{BEL} and
$q^{\rm eff}_{A}=\partial_{\mu_{A}}\log\eta_{n}$ are
effective transport charges. The implementation of this approach in
our ``inhomogeneous'' case reveals (see Appendix \ref{app:hydro} for
more details) that the disorder merely renormalizes the Drude weight
through the disorder-dependence of the velocity of the elementary
excitations over the equilibrium state under consideration, which
enters \fr{Drude} via the factor $1/\rho^{\rm tot}_n(\lambda)$. It
follows then that the disorder average can be exchanged with the
integration and summation in (\ref{D}). The disorder averaged Drude
weight is then given by
\begin{equation}
\overline{D}_A\!=\!\sum_{n}\int d\lambda
\overline{[\rho_n^{\rm tot}(\lambda)]^{-1}}
\eta_n(\lambda)
\left[\frac{\epsilon'_{n}(\lambda)q^{\rm
eff}_{A}(\lambda)}{2\pi(1+\eta_n(\lambda))}
\right]^2,
\end{equation}
where $\overline{[\rho_n^{\rm tot}(\lambda)]^{-1}}=
\int
P(\{\xi\})\frac{1}{\rho_{n}^{\rm tot}(\lambda,\{\xi\})}$
denotes the disorder average with probability distribution function $P(\xi)$.
As the total density $\rho^{\rm tot}_{n}(\lambda)$ is a positive
quantity this average is non-zero for generic $P(\xi)$. Therefore,
the Drude weight is only renormalized due to the disorder dependence
of string particle and hole densities. We note that in contrast to the
Mazur bound calculation the TBA approach takes into account the full
set of conserved quantities. These observations can be universally
extended to any integrable model with disorder of the type described here.
\section{Conclusions}
In this paper we studied a Yang-Baxter integrable interacting spin
system with controllable short-range correlated disorder. Using
a combination of diagnostics we have demonstrated the absence of
many-body localization.
We find that the model is in fact an ideal
conductor for both energy and magnetization. For particular parameter
values the model can be mapped to non-interacting fermions and we have
established the absence of Anderson localization in this case. In
contrast, a sufficiently strong deformation of the free-fermion Hamiltonian away
from the Yang-Baxter point shows signatures of localization. We expect
that in the interacting case small perturbations away from the
Yang-Baxter point will lead to diffusive behaviour, while
sufficiently strong deformations will be required to induce an MBL
transition.
\acknowledgments
We are grateful to M. Brockmann, J.-S. Caux and E. Ilievski for
collaboration in the early stages of this project. We thank
W. Buijsman, A. de Luca, A. Pal, S. Parameswaran and V. Yudson for
very helpful discussions. This work was supported by the EPSRC under
grant EP/N01930X (FHLE) and the Delta-ITP consortium (VG), a program
of the Netherlands Organization for Scientific Research funded by the
Dutch Ministry of Education, Culture and Science.
\section{Introduction}
Space situational awareness (SSA) has come into the spotlight in recent years because of the increasing dependence of mankind on space assets and the large number of debris objects that constantly endanger them. The plans to inject thousands of additional satellites as part of multiple technology-advancing mega-constellations into already densely populated orbits put further stress on SSA operations (\cite{MegaC}). In the event of a collision, the smallest of debris objects, traveling at relative velocities of several km/sec, can inflict significant damage on systems onboard space assets. In addition, each collision event can push the space environment closer to a cascade, otherwise known as the Kessler syndrome \cite{Kessler}, that can render space itself inaccessible to future generations. Mitigating this threat requires the integration of comprehensive SSA into operations.
In low Earth orbit (LEO), one of the most densely populated orbital regimes, atmospheric drag is the largest source of uncertainty in our ability to accurately predict the orbit of resident space objects (RSOs). Thermospheric mass density, which feeds directly into the drag model, is one of the major causes behind the uncertainty. The thermosphere is a highly dynamic environment with the Sun being the strongest driver of its variations. The thermosphere is readily heated by the Sun's irradiance, especially in the EUV spectrum, causing significant variations of mass density. Other sources of variations include solar rotation and cycle, diurnal and higher-order harmonics, magnetic storms and substorms, gravity waves, winds and tides, and long-term climate change. A review of the basic dynamics and drivers is given by \cite{Forbes}. A review of thermospheric mass density behavior in the context of satellite drag is provided by \cite{Emmert}.
Existing models for thermospheric mass density can be classified as either physics-based or empirical. Physics-based models are based on first principles and solve the fluid equations by discretizing over the volume of interest, resulting in hundreds of thousands of solved-for states. Solving the system of equations dynamically provides the models with good predictive/forecasting capabilities; however, the true predictive/forecasting potential cannot be unlocked until the data assimilation schemes are improved. On the other end of the spectrum, empirical models specify the average behavior of the thermosphere with parameterized functions formulated using measurements or observations from multiple sources. Empirical models are fast to evaluate due to their simplified mathematical formulation; however, they offer very limited forecasting abilities since they do not model the system dynamically. Since either type of model can have large biases and/or errors, data assimilation is almost always required and is a very active area of research.
This work develops methodology for a quasi-physical dynamic reduced order model (ROM) of the thermosphere that has the desired properties of both the physics-based and empirical models, and can be easily and readily integrated into SSA operations. A reduced order model facilitates real-time large ensemble evaluations for improved characterization of model uncertainty and collision probabilities at a significantly reduced cost. The approach can also provide insights into the model errors associated with the underlying dynamics, which will be a subject of future work. The new method uses a dynamic systems formulation that inherently provides predictive capabilities, an advantage and improvement over previous work using the Proper Orthogonal Decomposition (POD) or Empirical Orthogonal Function (EOF) approach (\cite{Mehta_POD,EOF1}). Observations can be introduced into the model through data assimilation, which will also be a subject of future work.
Modal decomposition or variance reductions methods, highly popular in the fluid dynamics community (\cite{MA_review}), provide a path to achieving an efficient method for estimating the state of the thermosphere. The main idea behind modal decomposition is that a large fraction of the total energy/variance of a system can be captured through projection along a handful of dominant directions or dimensions, referred to in this work as modes. The modes represent time-independent coherent structures that exist in the flow, which when combined with time-dependent coefficients can reproduce the total energy/variance of the system.
\cite{MA_review} provide an overview of existing modal decomposition methods applied to fluid flows. They classify the method as data-based or operator-based. Data-based approaches that include proper orthogonal decomposition (POD) and Dynamic Mode Decomposition (DMD) are ideal for cases where high-fidelity measurements are available with little to no knowledge of the system dynamics. Operator-based approaches that include the well-known Koopman analysis require knowledge of the system dynamics for order reduction. Even though we may have knowledge of the dynamic equations behind physics-based models, we choose a data-based approach to keep the development completely data-driven and for the ease of implementation.
\cite{Mehta_POD} presented a methodology for reduced order modeling and calibration of the upper atmosphere based on proper orthogonal decomposition (POD). They developed the methodology using the Naval Research Laboratory's MSIS model. Although MSIS is already a highly simplified implementation with small computational expense, it was an ideal platform for testing the methodology. The method was able to accurately identify the known dynamics in the MSIS model as POD modes or basis functions while providing an approach for calibration of the model. For more details, the reader is directed to \cite{Mehta_POD}.
The POD approach, also known as Empirical Orthogonal Functions (EOFs), has been previously applied to infer thermosphere dynamics from discrete measurements along the satellite's path [\cite{EOF1}, \cite{EOF2}]; however, the work has been restricted to 2-dimensional structures, typically at a constant altitude and does not constitute a model, per se. EOF based analysis has also been used by Sutton et al. (2012) to extract modes of the thermosphere from a physics-based model and use them as a replacement for the generic spherical harmonic modes commonly used in semi-empirical models. The methodology developed by \cite{Mehta_POD} is 3-dimensional and fully data-driven, i.e. the 3-dimensional coherent structures of the modes are derived using simulation output from a model over the volumetric grid of interest, and constitutes a model.
In this paper, we seek to develop a ROM for physics-based models that represent dynamic systems with exogenous inputs. Because model order reduction for the upper atmosphere requires simulation data covering a full solar cycle lasting over a decade, we develop a new method to keep the problem tractable. The new method is based on Dynamic Mode Decomposition with control (DMDc) (\cite{DMDc}), and uses the Hermitian Space of the problem towards a comprehensive quasi-physical dynamic ROM for thermospheric mass density using a dynamic system formulation that has not been attempted before. We call the new method Hermitian Space - Dynamic Mode Decomposition with control (HS-DMDc). The dynamic system formulation provides inherent predictive capabilities while significantly simplifying the process of assimilating data. We develop a proof of concept quasi-physical dynamic ROM for NCAR's Thermosphere-Ionosphere-Electrodynamics General Circular Model (TIE-GCM).
The paper is structured as follows: section \ref{s:Methods} describes the data-based variance reduction methods including POD, DMD and the newly developed HS-DMDc method. Section \ref{s:MD} provides details about the process of model development including method validation in section \ref{s:MeV} using yearly simulations for 2002 and 2009; developing a universally applicable model in section \ref{s:UM} using 12 years of TIE-GCM simulations covering a full solar cycle; and validation of the universal model in section \ref{s:UMV}. Section \ref{s:ML} discusses the model limitations with section \ref{s:FD} describing further development steps. Finally, section \ref{s:Conc} concludes the paper.
\section{Methodology}\label{s:Methods}
Data-driven, equation-free methods for order reduction of complex high-dimensional systems have become very popular in the fluids community (\cite{MA_review}). Most decomposition methods were originally developed as diagnostic tools for characterizing complex fluid flows by extracting physically interpretable spatial structures or modes and their associated temporal responses. The POD technique, first introduced to the fluids community by \cite{Lumley}, serves as the basis and motivation for the development of other modal decomposition techniques, including DMD, DMDc, and HS-DMDc. In this work, we will limit our discussion on modal decomposition techniques to POD, DMD, and the derivation of HS-DMDc. Readers are referred to \cite{DMDc} for details about DMDc.
\subsection{Proper Orthogonal Decomposition}\label{s:POD}
The two most commonly used formalisms of POD are the method of snapshots and singular value decomposition (SVD). The idea behind POD is the derivation of an optimal set of basis functions or modes for a given set of snapshots. Snapshots can be a collection of the model/simulation output or experimental data acquired over time. The POD framework seeks to split space and time by decomposing the energy/variance of a spatially and temporally varying field. Both formalisms provide an optimal set of orthogonal vectors; however, they differ in the manner in which the modes are derived. The method of snapshots, used in \cite{Mehta_POD}, decomposes the variance after removing the mean component in the following manner
\begin{equation}\label{e:POD1}
\tilde{\textbf{x}}({\bf s},{ t}) = \textbf{x}({\bf s},{t}) - \bar{\textbf{x}}({\bf s}) \approx \sum_{i=1}^{r} {c}_i({ t})\pmb{\phi}_{i}({\bf s})
\end{equation}
where $\textbf{x}({\bf s},{ t})$ is a random field (in this case the neutral mass density) on a spatial domain (in this case a uniform grid in local time, latitude, and altitude), $\bar{\textbf{x}}({\bf s})$ is the temporal mean, $\tilde{\textbf{x}}({\bf s},{t})$ is the fluctuation about the mean, ${\bf s}$ is the spatial vector (the number of spatial points saved per time snapshot, unfolded into a column vector of size $n$), and $t$ is the time. The POD modes, $\pmb{\phi}_{i}({\bf s})$, are purely spatial while the temporal response is given by the coefficients, ${c}_{i}({t})$, where $r$ is the number of modes used to construct the truncated low-order representation. The POD modes are derived using an eigendecomposition of the square correlation matrix (in the Hermitian space) that represents the distance between the snapshots. For details about the application of the POD method of snapshots to reduced order modeling of neutral mass density, the reader is referred to \cite{Mehta_POD}.
The SVD can be thought of as a generalization of the eigendecomposition (used in method of snapshots) to rectangular matrices. SVD decomposes a rectangular matrix ${\bf M} \in \mathbb{R}^{n \times m}$ as
\begin{equation}\label{e:SVD}
{\bf M}={\bf U} \mathbf{\Sigma} {\bf V}^T
\end{equation}
where ${\bf U} \in \mathbb{C}^{n \times n}$ and ${\bf V} \in \mathbb{C}^{m \times m}$ are unitary matrices, and $\mathbf{\Sigma} \in \mathbb{R}^{n \times m}$ is a diagonal matrix. The left singular vectors ${\bf U}$ that span the range of ${\bf M}$ are orthogonal and optimal, and represent the POD modes. In practice, one only needs to compute a reduced SVD corresponding to the non-zero singular values, henceforth referred to in this work as the economy SVD (E-SVD), of which there are at most $\min(n,m)$. The SVD formalism decomposes the snapshots directly (${\bf M}$ contains a series of snapshots) without taking away the mean component; therefore, the first mode or singular vector of ${\bf U}$ contains a strong mean component. The eigenvalue and singular value decompositions are closely related; the left singular vectors, ${\bf U}$, are the eigenvectors of ${\bf M}{\bf M}^{T}$ (the correlation matrix decomposed in the method of snapshots before taking away the mean), where $T$ denotes the conjugate transpose. Therefore, if the SVD is performed on the snapshots after subtracting the mean component, the POD modes from the method of snapshots and the SVD are equivalent. For more details about the eigenvalue and singular value decompositions, the reader is referred to \cite{MA_review}.
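As a concrete illustration of this equivalence, the following sketch (Python/NumPy, using a synthetic random snapshot matrix with assumed sizes) derives the POD modes both from an E-SVD of the mean-removed snapshots and from an eigendecomposition of the correlation matrix, and checks that they agree up to sign:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 20                       # spatial points, snapshots (assumed sizes)
X = rng.standard_normal((n, m))     # synthetic snapshot matrix (columns in time)

# Method of snapshots: remove the temporal mean first
Xc = X - X.mean(axis=1, keepdims=True)

# Formalism 1: economy SVD of the centered snapshot matrix
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Formalism 2: eigendecomposition of the n x n correlation matrix Xc Xc^T
lam, W = np.linalg.eigh(Xc @ Xc.T)
order = np.argsort(lam)[::-1]       # sort eigenpairs by decreasing energy
lam, W = lam[order], W[:, order]

# Eigenvalues of Xc Xc^T are the squared singular values of Xc, and the
# eigenvectors match the left singular vectors (the POD modes) up to sign
r = m - 1                           # centered data has rank at most m - 1
assert np.allclose(lam[:r], s[:r] ** 2)
for i in range(r):
    assert np.allclose(np.abs(W[:, i]), np.abs(U[:, i]))
```
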
\subsection{Dynamic Mode Decomposition}\label{s:DMD}
The SVD formalism of POD sits at the heart of DMD. The weakness of POD that DMD seeks to overcome is that the temporal coefficients of the POD modes generally contain a mix of frequencies and do not allow a formulation for forecast or prediction. \cite{Mehta_POD} overcame this using the fast Fourier transform (FFT) coupled with a Gaussian Process model for the coefficients. DMD can be thought of as an ideal combination of POD with Fourier transforms in time, resulting in DMD modes, each associated with a single frequency and a possible growth or decay rate. The idea behind DMD is the derivation of a best-fit linear dynamical system by fitting the time domain data or snapshots obtained from the underlying nonlinear dynamical system. Consider a continuous time dynamical system given as
\begin{equation}\label{e:DS1}
\frac{d{\bf x}}{dt}=F({\bf x},t;\mathbf{\Theta})
\end{equation}
where $F(\cdot)$ represents the dynamics, $\mathbf{\Theta}$ is the set of system parameters, and ${\bf x}$ is the state vector $\in \mathbb{R}^{n}$ (comprising the random field on the spatial vector ${\bf s}$). Because a closed form solution for the evolution of the dynamic system is generally not feasible, a numerical solution approach is typically used, as in the case of the physics-based models. DMD uses an equation-free approach (not operating on the physical dynamic equations) to construct an approximate locally linear dynamical system
\begin{equation}\label{e:DS2}
\frac{d{\bf x}}{dt}=\pmb{\mathcal{A}}{\bf x}
\end{equation}
under the assumption that the linear operator $\pmb{\mathcal{A}}$ is diagonalizable. Given the initial condition ${\bf x}(0)$, the above system has a well-known solution (\cite{EDE})
\begin{equation}\label{e:DS3}
{\bf x}(t)=\sum_{i=1}^{n}b_i \exp(\omega_i t)\pmb{\phi}_{i} = \mathbf{\Phi}\exp(\mathbf{\Omega}t){\bf b}
\end{equation}
where $\omega_i$ and $\pmb{\phi}_{i}$ are the eigenvalues and eigenvectors of the dynamic matrix $\pmb{\mathcal{A}}$, and $b_i$ are the coefficients of the initial condition in the eigenvector basis.
Since the snapshots are samples from the continuous system sampled in time ($\Delta t$), an analogous discrete-time system is given by
\begin{equation}\label{e:DS4}
{\bf x}_{k+1}={\bf A}{\bf x}_k
\end{equation}
where ${\bf A} \in \mathbb{R}^{n \times n}$ is the discrete time map of $\pmb{\mathcal{A}}$ such that
\begin{equation}\label{e:DS5}
{\bf A}=\exp(\pmb{\mathcal{A}}\Delta t)
\end{equation}
The solution to the discrete system can be given with the eigenvalues ($\lambda$) and eigenvectors ($\pmb{\phi}$) of ${\bf A}$ as
\begin{equation}\label{e:DS6}
{\bf x}_k \approx \sum_{i=1}^{r}b_i \lambda_i^k \pmb{\phi}_i = \mathbf{\Phi}\mathbf{\Lambda}^k{\bf b}
\end{equation}
where ${\bf b}$ again contains the initial-condition coefficients in the eigenvector basis. The eigenvector basis ($\pmb{\phi}$) in Eq.~\ref{e:DS6} represents dynamic modes that differ from the modes derived in POD and are not orthogonal. The low-order eigendecomposition of the matrix ${\bf A}$ produced by DMD represents an optimal fit to the measured trajectory in the least squares sense, minimizing $\| {\bf x}_{k+1} - {\bf A}{\bf x}_k \|$ across all snapshots. The above description of DMD is derived from \cite{DMD_book}, where the reader can find further details about the method.
\subsubsection{DMD Algorithm}
The DMD algorithm provides a solution to the discrete-time system in Eq.~\ref{e:DS4} by extracting an estimate of the dynamic matrix ${\bf A}$ by rearranging a series of outputs from a dynamical system or snapshots, ${\bf x}_k$, into time-shifted data matrices. Let ${\bf X}_1$ and ${\bf X}_2$ be the time-shifted matrix of snapshots such that
\begin{equation}\label{e:DMD1}
{\bf X}_1 = \left[ {\bf x}_1, \quad {\bf x}_2, \quad \cdots, {\bf x}_{m-1}\right], \quad \quad
{\bf X}_2 = \left[ {\bf x}_2, \quad {\bf x}_3, \quad \cdots, {\bf x}_{m}\right]
\end{equation}
where $m$ is the number of snapshots. The data matrices ${\bf X}_1$ and ${\bf X}_2$ can be related (${\bf X}_2$ is the time evolution of ${\bf X}_1$) through a best-fit linear model as in Eq.~\ref{e:DS4} such that
\begin{equation}\label{e:DMD2}
{\bf X}_2 = {\bf A}{\bf X}_1.
\end{equation}
The dynamic matrix ${\bf A}$ is estimated as ${\bf A}={\bf X}_2{\bf X}_1^{\dagger}$, where ${\bf X}_1^{\dagger}$ represents the pseudoinverse of the snapshot matrix ${\bf X}_1$. The pseudoinverse is calculated using an E-SVD such that ${\bf X}_1 = {{\bf U}_{r}}\mathbf{\Sigma}_{r} {\bf V}_{r}^T$ and ${\bf X}_1^{\dagger} = {\bf V}_{r}\mathbf{\Sigma}_{r}^{-1}{\bf U}_{r}^T$, where $r$ is the reduced rank. Since computing and storing the full order dynamic matrix, ${\bf A} \in \mathbb{R}^{n \times n}$, can be computationally infeasible when $n \gg 1$, a reduced order approximation of the dynamic matrix, $\tilde{{\bf A}} \in \mathbb{R}^{r \times r}$, is derived using a similarity transform with projection onto a reduced set of orthogonal basis vectors. Using a reduced set of the left singular vectors ${\bf U}_{r} \in \mathbb{R}^{n \times r}$ from the E-SVD of ${\bf X}_1$, we get
\begin{equation}\label{e:DMD3}
{\bf z}_k = {\bf U}_{r}^{\dagger}{\bf x}_k = {\bf U}_{r}^{T}{\bf x}_k
\end{equation}
where ${\bf z} \in \mathbb{R}^{r}$ is the reduced order state vector. Substituting Eq.~\ref{e:DMD3} into Eq.~\ref{e:DS4} gives
\begin{equation}\label{e:DMD4}
{\bf U}_{r}{\bf z}_{k+1} = {\bf A}{\bf U}_{r}{\bf z}_{k}
\end{equation}
Now, multiplying both sides of the above equation by ${\bf U}_{r}^{\dagger}$, we get
\begin{equation}\label{e:DMD5}
{\bf z}_{k+1} = {\bf U}_{r}^{\dagger}{\bf A}{\bf U}_{r}{\bf z}_{k} = {\bf U}_{r}^{T}{\bf A}{\bf U}_{r}{\bf z}_{k} = \tilde{{\bf A}}{\bf z}_{k}
\end{equation}
The reduced order state vector ${\bf z}$ represents the coefficients corresponding to the left singular vectors or POD modes.
The algorithm steps are given below:
\begin{enumerate}
\item Construct the snapshot matrices ${\bf X}_1$ and ${\bf X}_2$.
\item Perform the E-SVD ${\bf X}_1 = {\bf U}_{r}\mathbf{\Sigma}_{r} {\bf V}_{r}^T$ to compute the pseudoinverse ${\bf X}_1^{\dagger}= {\bf V}_{r}\mathbf{\Sigma}_{r}^{-1}{\bf U}_{r}^T$.
\item Compute the reduced order dynamic matrix $\tilde{{\bf A}} = {\bf U}_{r}^T{\bf A}{\bf U}_{r} = {\bf U}_{r}^T{\bf X}_2{\bf V}_{r}\mathbf{\Sigma}_{r}^{-1}$.
\item Compute the DMD modes as $\mathbf{\Phi}={\bf U_{r}}{\bf W}$, where ${\bf W}$ are the eigenvectors of $\tilde{{\bf A}}$ such that $\tilde{{\bf A}}{\bf W}={\bf W}\mathbf{\Lambda}$.
\end{enumerate}
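A minimal sketch of these steps (Python/NumPy, applied to a synthetic low-rank linear system whose subspace and eigenvalues are assumed for the test) is:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 30, 100, 4

# Synthetic linear system x_{k+1} = A x_k with assumed known eigenvalues,
# confined to an r-dimensional subspace spanned by P
P = np.linalg.qr(rng.standard_normal((n, r)))[0]
eigs_true = np.array([0.99, 0.95, 0.9, 0.8])
A_full = P @ np.diag(eigs_true) @ P.T

# Step 1: generate snapshots and build the time-shifted matrices X1, X2
snaps = [P @ rng.standard_normal(r)]
for _ in range(m - 1):
    snaps.append(A_full @ snaps[-1])
X = np.column_stack(snaps)
X1, X2 = X[:, :-1], X[:, 1:]

# Step 2: E-SVD of X1, keeping the r non-negligible singular values
U, s, Vt = np.linalg.svd(X1, full_matrices=False)
Ur, sr, Vr = U[:, :r], s[:r], Vt[:r].T

# Step 3: reduced order dynamic matrix A_tilde = Ur^T X2 Vr Sigma_r^{-1}
A_tilde = Ur.T @ X2 @ Vr @ np.diag(1.0 / sr)

# Step 4: DMD eigenvalues and modes
lam, W = np.linalg.eig(A_tilde)
Phi = Ur @ W

# The reduced operator recovers the eigenvalues of the underlying system
assert np.allclose(np.sort(lam.real)[::-1], eigs_true)
```
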
\subsection{Hermitian Space - Dynamic Mode Decomposition with control}\label{s:NM}
The HS-DMDc method developed here is an extension of DMD to dynamical systems with exogenous inputs. The method draws inspiration from the equation-free Dynamic Mode Decomposition with control (DMDc) algorithm derived by \cite{DMDc}, which also builds on DMD but can extract both the underlying dynamics and the input-output characteristics of a dynamical system. The method can be used to construct a ROM of the high-dimensional system for future state prediction under the influence of dynamics and external control. Unlike DMD, the snapshots include both the state and the input(s). The method characterizes the relationship between the future state, ${\bf x}_{k+1}$, the current state, ${\bf x}_k$, and the current input, ${\bf u}_k$, with a locally linearized model
\begin{equation}\label{e:DMDc1}
{\bf x}_{k+1} = {\bf A}{\bf x}_k + {\bf B}{\bf u}_k
\end{equation}
where ${\bf x} \in \mathbb{R}^{n}$, ${\bf u} \in \mathbb{R}^{p}$, ${\bf A} \in \mathbb{R}^{n \times n}$, and ${\bf B} \in \mathbb{R}^{n \times p}$. The dynamic matrix ${\bf A}$ describes the unforced dynamics of the system and the input matrix ${\bf B}$ characterizes the effect of the input ${\bf u}_k$ on the state ${\bf x}_{k+1}$.
The difference between the HS-DMDc and DMDc algorithms is the formalism used in the computation of the pseudoinverse and the left singular vectors. In order for the developed model (estimates of the dynamic and input matrices, $\bf A$ and $\bf B$) to be applicable for all space weather conditions, the simulated snapshots need to represent the full range of inputs. Because the solar cycle lasts over a decade, this requires a large data set of more than $m \approx$ 400,000 snapshots at a 0.25 hr resolution. A 5 degree grid resolution in TIE-GCM results in a state vector of size $n \approx$ 75,000, while a 2.5 degree grid resolution results in $n \approx$ 300,000.
Large data sets have motivated extensions to DMD even beyond E-SVD (\cite{Hemati,Erichson}), but these have been limited to systems with no exogenous inputs. The theoretical computational complexity of the full rank SVD of ${\bf X}_1 \in \mathbb{R}^{n \times m}$ used in DMDc is $O(mn^2)$ with $n\leq m$, making its application intractable for the problem at hand. The use of E-SVD reduces the complexity to $O(mnr)$ (\cite{SVD_Lit}) by computing only the first $r$ singular values and vectors. HS-DMDc reduces the computation of the pseudoinverse ($^\dagger$) to the Hermitian space by performing an eigendecomposition of the correlation matrix, ${\bf X}_1 {\bf X}_1^T \in \mathbb{R}^{n \times n}$, reducing the full rank complexity to $O(n^3)$. The complexity can be reduced to $O(n^2r)$ using an economy eigendecomposition (E-ED). In theory, the computation of the correlation matrix ${\bf X}_1 {\bf X}_1^T$ also introduces linear scaling with $m$, i.e., $O(mn^2)$. Although formulating the problem in the Hermitian space is a fairly common practice, motivated in part by the method of snapshots formalism of POD, it is important to note that using an eigendecomposition to compute the singular values and vectors can be more sensitive to numerical roundoff errors.
Because in practice the cost depends on several factors, we perform a simple representative numerical study to highlight the cost savings from HS-DMDc. The study is performed using Matlab$\textsuperscript{\textregistered}$ on a Macbook Pro: 3.1GHz Intel i7 with 16GB of RAM. The study uses the \texttt{svds} and \texttt{eigs} functions to compute the first 20 most energetic POD modes for a state size of $n =$ 10,000; the number of snapshots $m$ is varied from 10,000 to 150,000. The computational cost comparison is shown in Figure \ref{f:Numerical_Cost}. In this case, HS-DMDc offers cost savings of close to an order of magnitude for $m =$ 100,000, with the savings growing to two orders of magnitude for $m =$ 150,000. We expect similar, if not better, savings for increasing $n$ and/or $r$.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{Numerical_Cost.eps}
\caption{Numerical computational cost comparison between HS-DMDc and DMDc.}
\label{f:Numerical_Cost}
\end{figure}
HS-DMDc uses the same time-shifted snapshot matrices as defined for DMD; however, because the system now includes external control defined as $\mathbf{\Upsilon}=[{\bf u}_1,\;{\bf u}_2,\; \cdots,\; {\bf u}_{m-1}]$, Eq.~\ref{e:DMD2} is modified such that
\begin{equation}\label{e:DMDc2}
{\bf X}_2 = {\bf Z}\mathbf{\Psi}
\end{equation}
where ${\bf Z}$ and $\mathbf{\Psi}$ are the augmented operator and data matrices, respectively:
\begin{equation}\label{e:DMDc3}
{\bf Z} \triangleq \begin{bmatrix} {\bf A} & {\bf B}
\end{bmatrix} \quad \text{and} \quad \mathbf{\Psi} \triangleq \begin{bmatrix} {\bf X}_1 \\
\mathbf{\Upsilon}
\end{bmatrix}
\end{equation}
The goal again is to estimate the dynamic and input matrices while minimizing $\| {\bf X}_{2} - {\bf Z}\mathbf{\Psi} \|$. The augmented operator matrix is solved for just as in DMD, ${\bf Z}={\bf X}_2\mathbf{\Psi}^{\dagger}$; however, the pseudoinverse of $\mathbf{\Psi}$ is computed in the Hermitian space using E-ED such that $\mathbf{\Psi}^{\dagger}=\mathbf{\Psi}^T(\mathbf{\Psi}\mathbf{\Psi}^T)^{-1}$ and $(\mathbf{\Psi}\mathbf{\Psi}^T)^{-1}=(\hat{{\bf U}}_{\hat{r}}\hat{\mathbf{\Xi}}_{\hat{r}}\hat{{\bf U}}_{\hat{r}}^{T})^{-1}=\hat{{\bf U}}_{\hat{r}}\hat{\mathbf{\Xi}}_{\hat{r}}^{-1}\hat{{\bf U}}_{\hat{r}}^T$. The orthogonal basis vectors $\hat{{\bf U}}_{\hat{r}} \in \mathbb{R}^{(n+p) \times \hat{r}}$, equivalent to the left singular vectors of an SVD of $\mathbf{\Psi}$, are eigenvectors of the correlation matrix $\mathbf{\Psi}\mathbf{\Psi}^T$ (such that $\mathbf{\Psi}\mathbf{\Psi}^T\hat{{\bf U}}_{\hat{r}}=\hat{{\bf U}}_{\hat{r}}\hat{\mathbf{\Xi}}_{\hat{r}}$), where $p$ is the number of inputs and $\hat{r}$ is the low rank truncation value for the E-ED of $\mathbf{\Psi}$.
The dynamic and input matrices can then be estimated as
\begin{equation}\label{e:DMDc4}
{\bf A} = {\bf X}_2\mathbf{\Psi}^T\hat{{\bf U}}_{\hat{r}}\hat{\mathbf{\Xi}}_{\hat{r}}^{-1}\hat{{\bf U}}_{\hat{r},1}^T \qquad \text{and} \qquad {\bf B} = {\bf X}_2\mathbf{\Psi}^T\hat{{\bf U}}_{\hat{r}}\hat{\mathbf{\Xi}}_{\hat{r}}^{-1}\hat{{\bf U}}_{\hat{r},2}^T
\end{equation}
where $\hat{\mathbf{\Xi}}_{\hat{r}} \in \mathbb{R}^{\hat{r} \times \hat{r}}$ is the diagonal matrix of eigenvalues and $\hat{{\bf U}}_{\hat{r}}^T = [\hat{{\bf U}}_{\hat{r},1}^T \; \hat{{\bf U}}_{\hat{r},2}^T]$ with $\hat{{\bf U}}_{\hat{r},1} \in \mathbb{R}^{n \times \hat{r}}$ and $\hat{{\bf U}}_{\hat{r},2} \in \mathbb{R}^{p \times \hat{r}}$. Again, the reduced order or low rank approximations for the dynamic and input matrices are achieved through projection onto a truncated POD basis. This, however, requires an additional E-ED in the Hermitian space for either ${\bf X}_1$ or ${\bf X}_2$ since $\hat{{\bf U}}_{\hat{r}}$ is defined in the input space and the projection is performed in the output space. Substituting Eq.~\ref{e:DMD3} into Eq.~\ref{e:DMDc1}, we get
\begin{equation}\label{e:DMDc5}
{\bf U}_{r}{\bf z}_{k+1} = {\bf A}{\bf U}_{r}{\bf z}_{k} + {\bf B}{\bf u}_k
\end{equation}
where $\mathbf{U}_{r} \in \mathbb{R}^{n \times r}$ are the orthogonal eigenvectors such that ${\bf X}_1{\bf X}_1^T{\bf U}_{r}={\bf U}_{r}\mathbf{\Xi}_{r}$, and $r$ is the low rank truncation value such that $\hat{r} > r$. Multiplying both sides by ${\bf U}_{r}^{\dagger}$, we get
\begin{equation}\label{e:DMDc6}
{\bf z}_{k+1} = {\bf U}_{r}^{\dagger}{\bf A}{\bf U}_{r}{\bf z}_{k} + {\bf U}_{r}^{\dagger}{\bf B}{\bf u}_k
= \tilde{{\bf A}}{\bf z}_{k} + \tilde{{\bf B}}{\bf u}_k
\end{equation}
The reduced order state vector again represents the coefficients of the POD modes. The reduced order approximations for the dynamic and input matrices are then computed as
\begin{equation}\label{e:DMDc7}
\tilde{{\bf A}} = {\bf U}_{r}^T{\bf A}{\bf U}_{r} = {\bf U}_{r}^T{\bf X}_2\mathbf{\Psi}^T\hat{{\bf U}}_{\hat{r}}\hat{\mathbf{\Xi}}_{\hat{r}}^{-1}\hat{{\bf U}}_{\hat{r},1}^T{\bf U}_{r} \qquad \text{and} \qquad \tilde{{\bf B}} = {\bf U}_{r}^T{\bf B} = {\bf U}_{r}^T{\bf X}_2\mathbf{\Psi}^T\hat{{\bf U}}_{\hat{r}}\hat{\mathbf{\Xi}}_{\hat{r}}^{-1}\hat{{\bf U}}_{\hat{r},2}^T
\end{equation}
where $\mathbf{\Xi}_{r} \in \mathbb{R}^{r \times r}$ is the diagonal matrix of eigenvalues.
\subsubsection{HS-DMDc Algorithm}\label{s:NA}
\begin{enumerate}
\item Construct the data matrices ${\bf X}_1$, ${\bf X}_2$, $\mathbf{\Upsilon}$, and $\mathbf{\Psi}$.
\item Perform E-ED in the Hermitian space, $\mathbf{\Psi}\mathbf{\Psi}^T=\hat{{\bf U}}_{\hat{r}}\hat{\mathbf{\Xi}}_{\hat{r}}\hat{{\bf U}}_{\hat{r}}^T$, to compute the pseudoinverse $\mathbf{\Psi}^{\dagger}=\mathbf{\Psi}^T(\mathbf{\Psi}\mathbf{\Psi}^T)^{-1} = \mathbf{\Psi}^T(\hat{{\bf U}}_{\hat{r}}\hat{\mathbf{\Xi}}_{\hat{r}}\hat{{\bf U}}_{\hat{r}}^{\dagger})^{-1}=\mathbf{\Psi}^T\hat{{\bf U}}_{\hat{r}}\hat{\mathbf{\Xi}}_{\hat{r}}^{-1}\hat{{\bf U}}_{\hat{r}}^T$. Choose the truncation value $\hat{r}$.
\item Perform a second E-ED in the Hermitian space, ${\bf X}_1{\bf X}_1^T={\bf U}_{r}\mathbf{\Xi}_{r}{\bf U}_{r}^T$, to derive the POD modes (${\bf U}_{r}$) for reduced order projection. Choose the truncation value $r$ such that $\hat{r} > r$.
\item Compute the reduced order dynamic and input matrices: $\tilde{{\bf A}} = {\bf U}_{r}^T{\bf X}_2\mathbf{\Psi}^T\hat{{\bf U}}_{\hat{r}}\hat{\mathbf{\Xi}}_{\hat{r}}^{-1}\hat{{\bf U}}_{\hat{r},1}^T{\bf U}_{r}$ and $\tilde{{\bf B}} = {\bf U}_{r}^T{\bf X}_2\mathbf{\Psi}^T\hat{{\bf U}}_{\hat{r}}\hat{\mathbf{\Xi}}_{\hat{r}}^{-1}\hat{{\bf U}}_{\hat{r},2}^T$.
\item Compute the DMD modes as $\mathbf{\Phi}={\bf U}_{r}{\bf W}$, where ${\bf W}$ are the eigenvectors of $\tilde{{\bf A}}$ such that $\tilde{{\bf A}}{\bf W}={\bf W}\mathbf{\Lambda}$.
\end{enumerate}
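The algorithm steps above can be sketched on a synthetic controlled linear system whose dynamics are confined to a low-dimensional subspace (Python/NumPy; all matrices, dimensions, and truncation values below are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, p = 20, 300, 2                # state size, snapshots, inputs (assumed)
r, r_hat = 4, 6                     # truncation values with r_hat > r

# Synthetic controlled system whose dynamics live on an r-dimensional subspace
P = np.linalg.qr(rng.standard_normal((n, r)))[0]
A_true = P @ np.diag([0.9, 0.8, 0.7, 0.6]) @ P.T
B_true = P @ rng.standard_normal((r, p))

# Step 1: snapshots driven by random inputs, x_{k+1} = A x_k + B u_k
Ups = rng.standard_normal((p, m - 1))          # input matrix Upsilon
X = np.zeros((n, m))
X[:, 0] = P @ rng.standard_normal(r)
for k in range(m - 1):
    X[:, k + 1] = A_true @ X[:, k] + B_true @ Ups[:, k]
X1, X2 = X[:, :-1], X[:, 1:]
Psi = np.vstack([X1, Ups])                     # augmented data matrix

# Step 2: E-ED of Psi Psi^T for the pseudoinverse of Psi
lam_h, Uh = np.linalg.eigh(Psi @ Psi.T)
idx = np.argsort(lam_h)[::-1][:r_hat]
lam_h, Uh = lam_h[idx], Uh[:, idx]
Uh1, Uh2 = Uh[:n], Uh[n:]                      # state and input partitions

# Step 3: E-ED of X1 X1^T for the POD projection basis
lam_x, Ux = np.linalg.eigh(X1 @ X1.T)
Ur = Ux[:, np.argsort(lam_x)[::-1][:r]]

# Step 4: reduced order dynamic and input matrices
core = X2 @ Psi.T @ Uh @ np.diag(1.0 / lam_h)
A_red = Ur.T @ core @ Uh1.T @ Ur
B_red = Ur.T @ core @ Uh2.T

# One-step prediction in the reduced space recovers the true next snapshot
z_next = A_red @ (Ur.T @ X1[:, 100]) + B_red @ Ups[:, 100]
assert np.allclose(Ur @ z_next, X2[:, 100])
```

Because the synthetic dynamics are exactly rank-deficient, the truncated reconstruction is exact here; for real density snapshots the truncation introduces the approximation errors discussed below.
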
\section{Model Development}\label{s:MD}
We use the TIE-GCM to perform the simulations for obtaining the snapshots. TIE-GCM is commonly used for modeling the Ionosphere-Thermosphere environment. It is a comprehensive, first-principles based, three-dimensional, time-dependent, numerical simulation model of the Earth's upper atmosphere, including the thermosphere. It uses a finite differencing scheme to obtain a self-consistent solution for the three-dimensional momentum, energy, and continuity equations for neutral and ion constituents. The model can be run on either a low- or a high-resolution grid. The model uses spherical geographic coordinates: latitude from -87.5 to 87.5 degrees (low resolution) and -88.75 to 88.75 degrees (high resolution), longitude from -180 to 180 degrees, with the vertical direction using a log-pressure coordinate system. The low and high resolution grids use increments of 5 and 2.5 degrees, respectively, in latitude and longitude \cite{TIE-GCM}.
In this work, we run the model with standard inputs. The Heelis model of convection electric fields \cite{Heelis} is used, driven by the geomagnetic index Kp. Absorption of and ionization from solar ultraviolet is parameterized, driven by proxy in the form of the radio flux measured at a wavelength of 10.7 cm (F10.7). At the lower boundary of the model around 97 km, migrating diurnal and semidiurnal tidal fields are specified by the Global Scale Wave Model (GSWM) \cite{Hagan}, with eddy diffusivity specified in accordance with \cite{Qian}.
We define the spatial decomposition of the thermosphere using a coarser version of the TIE-GCM grid. We reduce the grid to 24, 20, and 16 partitions in the geographic longitude, latitude, and altitude dimensions, respectively, in order to keep the problem computationally tractable. In addition, for this proof-of-concept, we use a little more than 105,000 snapshots with 1 hr resolution in time. Since TIE-GCM uses the log-pressure coordinate for the vertical dimension, the geometric height of the upper boundary can vary from $\sim$450-700 km depending on solar activity. For the current work, we set the altitude range at 100-450 km. Extending the model to higher altitudes will be a subject for future work. We convert from a log-pressure to a geometric height grid. We use density in the log scale for model development since its variance is much more uniform \cite{Emmert and Picone}. We then convert the azimuthal variable from longitude to local time since the local time variations are much larger than the longitudinal variations.
Previous experience \cite{Mehta_POD} suggests that in order for the developed ROM to be applicable for all input conditions, the snapshots need to include simulation output covering the full input domain. However, as a first step, it is important to gauge the effectiveness of HS-DMDc for the problem of interest under an observed input scenario. Therefore, we first apply the HS-DMDc algorithm to TIE-GCM simulations performed with observed inputs for the years 2002 and 2008, representing high and low solar activity conditions, respectively.
\subsection{Method Validation}\label{s:MeV}
Figure~\ref{f:2002_2008_Inputs} shows the variation of $F_{10.7}$ and $K_p$ for years 2002 and 2008. The 27-day period is clearly visible in $F_{10.7}$ for both years, while the $K_p$ variation seems purely stochastic. The year 2002 was slightly more active geomagnetically, having some of the largest geomagnetic storms of the solar cycle, with an average $K_p$ close to 3, whereas the average for 2008 is closer to unity. Also, the $K_p$ values reach extremely high values ($>6$) during 2002, albeit very rarely.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{2002_2008_Inputs.eps}
\caption{$F_{10.7}$ and $K_p$ inputs for years 2002 and 2008.}
\label{f:2002_2008_Inputs}
\end{figure}
We apply the HS-DMDc algorithm to snapshots collected every hour for both years. The choice of the truncation value $r$ is subjectively constrained by $\hat{r}>r$. While the choice of $\hat{r}$ needs to ensure that all important modal excitations by the inputs are captured, the choice of $r$ is a trade-off between accuracy and model parameter estimation for data assimilation, which will be a subject of future work. We use four inputs ($p = 4$): the solar flux proxy $F_{10.7}$, the geomagnetic index $K_p$, the time of day in universal time ($UT$), and the day of the year ($DOY$). Figure~\ref{f:EV_2002_2008} shows the contribution of the first 20 POD modes to the total energy upon which the full state dynamics matrix is projected for order reduction, while Figure~\ref{f:Modes_2002_2008} shows the first 5 POD modes sampled at 450 km altitude for 2002 and 2008. Mode 1, which includes a strong component of the mean, accounts for about 98\% of the total energy. Qualitative analysis of the modes is beyond the scope of the current work, but quick observations can be made based on the comparison of the modes for 2002 and 2008. Temperature, through $F_{10.7}$, dominates the variations; the modes for 2002 in Figure~\ref{f:Modes_2002_2008} are similar to those derived by \cite{EOF1} using CHAMP and GRACE derived density observations. Modes 2 and 3 for 2002 represent annual and semi-annual variations; however, mode 2 is also influenced by solar activity ($F_{10.7}$), which drives the temperature variations. It is important to note that the dominant modes are different for 2002 and 2008 due to the difference in solar activity levels. Mode 2 for 2008 also seems solar activity driven, with a weak diurnal component. Mode 3 for 2008 seems to represent an annual variation, but also with an influence of solar activity.
\begin{figure}[htb!]
\centering
\includegraphics[width=\textwidth]{EV_2002_2008.eps}
\caption{The contribution of the first 20 POD modes to the total energy for 2002 and 2008. The first 20 modes capture more than 99\% of the total energy.}
\label{f:EV_2002_2008}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=\textwidth]{Modes_2002_2008.eps}
\caption{First 5 POD modes for 2002 and 2008 derived using HS-DMDc at 450 km altitude.}
\label{f:Modes_2002_2008}
\end{figure}
For both years, the first 20 modes capture 99\% of the total energy. Next, we perform a sensitivity analysis to understand the impact of the truncation values $\hat{r}$ and $r$. We vary $r$ over values of 3, 5, and 10 and, because $\hat{r}>r$, vary $\hat{r}$ over values of 10, 15, and 20. Figures~\ref{f:2002_170_140_FE} and \ref{f:2008_5_245_FE} show the results of the sensitivity analysis for 2002 and 2008, respectively. The forecast error represents the state propagation error, given the initial condition and inputs, and is computed as the root mean square percentage error over the 3-dimensional spatial grid. The top panels in the figures cover inputs close to the mean for the year, while the bottom panels cover inputs at the extremes in either $F_{10.7}$, $K_p$, or both.
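The forecast-error metric just described can be sketched as follows (Python/NumPy; the grid shape follows the spatial decomposition above, while the density values are hypothetical placeholders):

```python
import numpy as np

def forecast_error(rho_pred, rho_true):
    """Root mean square percentage error over the spatial grid (a sketch of
    the forecast-error metric described in the text)."""
    pct = 100.0 * (rho_pred - rho_true) / rho_true
    return np.sqrt(np.mean(pct ** 2))

# Hypothetical example on the 24 x 20 x 16 grid used in this work: a uniform
# 5% over-prediction of density yields a 5% forecast error
rho_true = np.full((24, 20, 16), 1.0e-12)
rho_pred = 1.05 * rho_true
assert np.isclose(forecast_error(rho_pred, rho_true), 5.0)
```
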
As expected, state propagation using the locally linear approximation of the dynamic matrix in time causes the solution to depart from the true solution; however, the forecast error after 1 day for each case is close to or below 10\%. The forecast errors seem to band based on the number of modes used for order reduction, $r$, with very small sensitivity to $\hat{r}$. A value of $\hat{r}=$ 20 seems to capture all of the important excitations by the inputs, with the forecast errors for $\hat{r}=$ 15 and 20 being nearly indistinguishable. The rest of the paper will present results using $\hat{r}=$ 20 and $r=$ 10, unless otherwise specified. The forecast errors rise faster for decreasing values of $r$. Figures~\ref{f:2002_170_140_FE} and \ref{f:2008_5_245_FE} also show that the errors rise and fall sharply around extreme values of $K_p$, possibly because the snapshots do not cover high $K_p$ values often enough and/or the model becomes highly non-linear. Within the training data, the current approach shows performance comparable to the POD-based approach discussed in \cite{Mehta_POD} for the case with low geomagnetic activity. However, the current approach is much simpler and provides a natural extension for forecasting. The model performance at very high values of $K_p$ is discussed in a later section as a limitation of the model.
\begin{figure}[htb!]
\centering
\includegraphics[width=\textwidth]{2002_170_140_FE.eps}
\caption{Forecast error in reproducing the 2002 simulation snapshots using the ROM developed with the same 2002 snapshots for different combinations of [$\hat{r}, r$].}
\label{f:2002_170_140_FE}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=\textwidth]{2008_5_245_FE.eps}
\caption{Forecast error in reproducing the 2008 simulation snapshots using the ROM developed with the same 2008 snapshots for different combinations of [$\hat{r}, r$].}
\label{f:2008_5_245_FE}
\end{figure}
To assess the ability of the technique to specify density outside the range of conditions under which it was tuned, we attempt to reproduce the 2008 (low solar activity) snapshots using the 2002 model (high solar activity) and vice versa. Figure~\ref{f:Cross_Prediction} shows the results of this cross prediction. As observed, the forecast errors are much larger than the errors in Figures~\ref{f:2002_170_140_FE} and \ref{f:2008_5_245_FE} and, in general, rise when propagating away from the initial condition. Further investigation into the cause of the large errors affirms previous knowledge that the developed models are only applicable for conditions captured by the snapshots. Figure~\ref{f:Cross_Prediction} shows errors that converge (or come close to converging) to large steady state values, which represent the inability to match the variations caused by solar activity (scaling by $F_{10.7}$) because the mean component in mode 1 only applies to the solar activity levels covered by the snapshots. In other words, the model captures most of the dynamics but is biased in the absolute scale. An exception is observed in Figure~\ref{f:Cross_Prediction}(d), where the error rises quickly due to the bias in the absolute density between high and low levels of solar activity, but falls when the geomagnetic storm significantly increases the density, taking it close to 2002 levels. The error rises again after the storm has passed.
\begin{figure}[htb!]
\centering
\includegraphics[width=\textwidth]{Cross_Prediction.eps}
\caption{Cross prediction forecast error: 2002 snapshot reproduction with the model generated using 2008 snapshots and vice versa.}
\label{f:Cross_Prediction}
\end{figure}
\subsection{Universal Model}\label{s:UM}
As mentioned previously, in order for the developed model to be universally applicable, the snapshots need to cover the full range of input conditions. Covering the entire range of $F_{10.7}$ with true variations, along with the condition that the snapshot matrices be dynamically continuous, requires TIE-GCM simulations for more than 10 years (a full solar cycle), which in itself can be computationally expensive. Therefore, we first develop two models using input sampling strategies designed to ensure that the snapshots cover all possible input conditions with only one year of TIE-GCM simulations. We then develop a third model that uses 12 years of TIE-GCM simulations covering the last solar cycle, from 1997 to 2009.
The standard methods used for dynamic system identification are not expected to work for the problem at hand since \textit{time} is an input, e.g., for annual and semiannual variations. Therefore, one straightforward strategy to ensure adequate coverage is to use oscillatory functions (e.g., sine or cosine) for both $F_{10.7}$ and $K_p$. We first attempted a sine function for $F_{10.7}$ and $K_p$ with periods of 27 and 4 days, respectively. However, inspection of the POD modes showed that an oscillatory $K_p$ resulted in artificial modes with periods of 4 days, causing large model errors (defined as the error at the current time, $k$, given the initial condition and control at the previous time, $k-1$) and forecast errors. This sits well with the understanding that $F_{10.7}$, for the most part, scales the atmosphere (contraction and expansion) without significantly modifying the global distribution, whereas $K_p$ very sensitively impacts the global distribution captured by the spatial modes.
With that knowledge, we perform two additional runs of TIE-GCM to collect snapshots, both using an oscillatory $F_{10.7}$: Sim1, with $K_p$ sampled from observed distributions, and Sim2, with $K_p$ sampled from normal distributions, one each for high and low solar activity. Recognizing that persistence is a strong indicator of geomagnetic activity, at least on relatively short timescales (i.e., 3 hours or less), we developed a strategy to construct a random time series of $K_p$ that preserves this quality. For our $K_p$ sampling, we analyzed historical 3-hourly $K_p$ values dating back to 1947 obtained from NOAA's National Center for Environmental Information (NCEI). These values were then separated into low, medium, and elevated solar activity levels based on the corresponding daily $F_{10.7}$ values with the following respective binning thresholds: $F_{10.7} < 120$ sfu, $120 \leq F_{10.7} < 180$ sfu, and $ F_{10.7} \geq 180$ sfu. Within each of these three bins, a conditional probability distribution was constructed for $K_p(k)$ at the current time step, $k$, based on the previous 3-hourly value, $K_p(k-1)$. Given the 28 discrete values that $K_p$ can take, this method requires construction of 84 separate histograms. A time series can then be created by randomly drawing from these histograms, knowing the current $F_{10.7}$ and seeding with an initial $K_p$ value. This method produces a time series in which both the conditional and overall distributions of $K_p$ resemble those of the historical $K_p$ indices.
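The conditional sampling procedure can be sketched as follows; for brevity the sketch uses a single activity bin and a hypothetical persistent random walk as a stand-in for the historical NCEI record (the actual analysis uses three $F_{10.7}$ bins, i.e., 84 histograms):

```python
import numpy as np

rng = np.random.default_rng(4)
n_levels = 28                      # the 28 discrete Kp values (0o, 0+, ..., 9o)

# Hypothetical stand-in for the historical record: a persistent random walk
# on the Kp levels (the actual analysis uses NCEI data from 1947 onward)
hist = np.empty(5000, dtype=int)
hist[0] = 5
for k in range(1, hist.size):
    hist[k] = np.clip(hist[k - 1] + rng.integers(-2, 3), 0, n_levels - 1)

# Conditional histograms: P(Kp(k) = j | Kp(k-1) = i), one row per level;
# a single activity bin is used here (the paper builds 3 bins x 28 = 84)
counts = np.ones((n_levels, n_levels))         # smoothing for unvisited rows
for prev, cur in zip(hist[:-1], hist[1:]):
    counts[prev, cur] += 1
trans = counts / counts.sum(axis=1, keepdims=True)

# Draw a new time series by sampling the conditional distributions
series = [5]                                   # seed Kp level
for _ in range(1000):
    series.append(rng.choice(n_levels, p=trans[series[-1]]))
series = np.array(series)
assert series.min() >= 0 and series.max() < n_levels
```
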
Sim2 uses two normal distributions: $[\mu,\sigma]=[0,2.5]$ for $F_{10.7} < 120$ sfu and $[\mu,\sigma]=[3,3]$ for $F_{10.7} > 120$ sfu, where $\mu$ is the mean and $\sigma$ is the standard deviation. We apply several operators to the distributed samples to make them more physically realistic. We convert the sampled negative values to positive using the absolute operator. In addition, samples above nine are converted to a value equal to nine. We then subtract half of the maximum value from samples with values above 4.5 when $F_{10.7}<120$. This is because active geomagnetic conditions are typically observed for an active Sun. Sim3 uses the observed $F_{10.7}$ and $K_p$ for the period from 1997 to 2009. Figure~\ref{f:Sim123_Inputs} shows the distribution of observed $K_p$ since 1947, simulated inputs for cases Sim1 and Sim2, and the variation of the observed $F_{10.7}$ and $K_p$ from 1997 to 2009 used in Sim3. As observed, the $K_p$ distribution for Sim2 pushes the tail further to the right with more samples in the 4-5 range. This allows us to expand the region where the model is applicable, as will be discussed in section \ref{s:ML} on model limitations. While the peak of the observed $K_p$ distribution lies close to a value of 1.5, the peaks for Sim1 and Sim2 lie close to values of 2 and 1, respectively.
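The Sim2 clipping operators described above can be sketched as follows (hypothetical Python, assuming the stated means, standard deviations, and thresholds; not the authors' code):

```python
import numpy as np

def sim2_kp(f107, rng=None):
    """Sample Sim2-style Kp values, one per F10.7 value."""
    if rng is None:
        rng = np.random.default_rng(0)
    f107 = np.asarray(f107, dtype=float)
    low = f107 < 120                              # low solar activity mask
    mu = np.where(low, 0.0, 3.0)
    sigma = np.where(low, 2.5, 3.0)
    kp = np.abs(rng.normal(mu, sigma))            # fold negative samples to positive
    kp = np.minimum(kp, 9.0)                      # cap at the maximum Kp value of 9
    damp = low & (kp > 4.5)                       # quiet Sun rarely has active conditions
    kp[damp] -= 4.5                               # subtract half of the maximum value
    return kp
```

After the final operator, low-activity samples never exceed 4.5, consistent with the physical argument given in the text.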
\begin{figure}[htb!]
\centering
\includegraphics[width=\textwidth]{Sim123_Inputs.eps}
\caption{(a) Histogram of the observed daily $K_p$ values from 1947-2017. (b,e) Simulated oscillatory $F_{10.7}$ input for Sim1 and Sim2. (c,f) Simulated $K_p$ input for Sim1 and Sim2. (d,g) Histograms of the simulated $K_p$ inputs for Sim1 and Sim2. (h) and (i) show the $F_{10.7}$ and $K_p$ variation from 1997-2009 used in Sim3, respectively.}
\label{f:Sim123_Inputs}
\end{figure}
Figure \ref{f:Sim123_EVs} shows the normalized POD eigenvalues that represent the contribution to the total energy for the three cases. For all three cases, the first mode captures close to 97\%, with the first 3 modes capturing more than 98\% of the total energy. Figure~\ref{f:Sim123_POD_Modes} shows the first 5 POD modes for the three cases. As previously mentioned, qualitative analysis of the dynamics is beyond the scope of the current work; therefore, we will only make some quick observations. Mode 1 again contains a strong mean component, with the first 3 modes being almost identical across the three cases. Figure~\ref{f:Sim123_POD_Coefficients} shows the coefficients corresponding to the first 5 modes. The Sim3 POD coefficients shown are for the year 2002. Modes 1 and 2 have a strong correlation (positive and negative, respectively) with $F_{10.7}$ and the same 27-day period representing the variations caused by solar activity. The third mode represents annual variation. Mode 4 for Sim1 and Sim3 and mode 5 for Sim3 represent semi-annual variations. Mode 5 for Sim1 and Sim2 and mode 4 for Sim3 also contain a very weak semi-annual trend.
\begin{figure}[htb!]
\centering
\includegraphics[width=\textwidth]{Sim123_EVs.eps}
\caption{The contribution of the first 20 POD modes to the total energy for Sim1, Sim2, and Sim3. The reduced order dynamic matrix $\tilde{{\bf A}}$ is obtained by projecting the full order dynamic matrix ${\bf A}$ onto the POD modes. The first 20 modes capture more than 99\% of the total energy.}
\label{f:Sim123_EVs}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=\textwidth]{Sim123_POD_Modes.eps}
\caption{First 5 POD modes for Sim1, Sim2, and Sim3 derived using HS-DMDc at 450 km altitude.}
\label{f:Sim123_POD_Modes}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=\textwidth]{Sim123_POD_Coefficients.eps}
\caption{Coefficients for first 5 POD modes for Sim1, Sim2, and Sim3. Sim3 coefficients are for the year 2002.}
\label{f:Sim123_POD_Coefficients}
\end{figure}
\subsection{Universal Model Validation}\label{s:UMV}
We validate the three (Sim1, Sim2, and Sim3) models with the real case simulations performed for 2002 and 2008. Because TIE-GCM output for 2002 and 2008 is incorporated through the snapshots into the Sim3 model, we perform additional TIE-GCM simulations for the year 1996 to validate the models with an independent dataset. Figure~\ref{f:Sim123_Validation} shows the forecast error using the Sim1, Sim2, and Sim3 models for different initial conditions in 2002, 2008, and 1996. We use the initial conditions for 2002 and 2008 and choose two initial conditions for 1996 that represent mean inputs and extreme conditions, as previously described in section~\ref{s:MeV}. As observed, the Sim1 and Sim2 ROMs generally perform well in keeping the forecast error close to or below 10\% after 24 hours and close to or below 15\% after 48 hours. An exception is shown in Figure~\ref{f:Sim123_Validation}d, where, much like in Figure~\ref{f:Cross_Prediction}d, the forecast error is initially very large, falls with the onset of the storm, and begins to rise again after the storm due to the dynamic propagation. The large errors in the first 24-48 hours occur because the $F_{10.7}$ value, and hence the density, for the first 48 hours is outside or on the boundary of the input range incorporated in the Sim1 and Sim2 snapshots. The error falls with the onset of the storm because the storm increases the density and brings it into the range of values incorporated into Sim1 and Sim2. It is important to note that the Sim1 and Sim2 absolute errors in Figure~\ref{f:Sim123_Validation}d are much smaller than the errors in Figure~\ref{f:Cross_Prediction}d because of the sampling strategies used. The errors can be reduced by expanding the range of the inputs.
Sim3 consistently outperforms Sim1 and Sim2 in every scenario, including the validation using the independent dataset of 1996. The Sim3 ROM generally performs very well in keeping the forecast error close to or below 5\% after 24 hours and close to or below 10\% after 48 hours. Sim3 also performs well in Figure~\ref{f:Sim123_Validation}d because it incorporates the 2008 TIE-GCM output into the model through the snapshots. The model errors for Sim3 are also as good as or better than those for Sim1 and Sim2. For all three models, the model and forecast errors rise sharply during a storm event. The models can be improved either by incorporating more storm-time snapshots into the development or through data assimilation, which will be the subject of future work. Storm-time performance is a limitation of the current version of the model.
Both the forecast and model errors for all three models can be further reduced by increasing the truncation order $r$. As alluded to previously, the choice of $r$ is a trade-off between accuracy and the practicality of estimating model parameters (modal coefficients) for data assimilation. By definition, the idea behind modal decomposition is to capture a significant portion of the energy/variance with a small number of modes, ideally fewer than 5. Even for application to reduced order modeling of the upper atmosphere, close to 98\% of the total energy is captured by the first 5 modes. When assimilating measurements, using a large number of modes can lead to overfitting. However, certain dynamics that are important but do not dominate the energy/variance may not be captured in the first 5-10 modes. This argument and trade-off will be discussed in detail in future work on the development of the framework for data assimilation for the ROM.
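The energy-based choice of the truncation order $r$ can be illustrated with a short sketch (a hypothetical helper, not from the paper):

```python
import numpy as np

def truncation_order(singular_values, energy=0.99):
    """Smallest r such that the first r POD modes capture `energy` of the total.

    The POD eigenvalues (mode energies) are the squared singular values of
    the snapshot matrix, so the cumulative energy fraction is computed from
    their running sum.
    """
    e = np.asarray(singular_values, dtype=float) ** 2
    frac = np.cumsum(e) / e.sum()
    return int(np.searchsorted(frac, energy) + 1)
```

Raising the requested energy fraction monotonically increases $r$, which is exactly the accuracy-versus-parsimony trade-off discussed above.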
\begin{figure}[htb!]
\centering
\includegraphics[width=\textwidth]{Sim123_Validation.eps}
\caption{Validation of the Sim1, Sim2, and Sim3 models. ME: Model Error, defined as the error at the current time $k$ given the initial condition and control at the previous time $k-1$.}
\label{f:Sim123_Validation}
\end{figure}
\subsection{Dynamic modes }\label{s:DM}
Although the DMD modes (computed in step 5 of HS-DMDc in section \ref{s:NA}) do not play a role in the development of the ROM, we compute them because they provide insights into the dynamics embedded within the dynamic matrix ${\bf A}$. While detailed qualitative analysis of the dynamics is beyond the scope of this work, the DMD eigenvalues provide a measure of thermosphere dynamics on a wide range of time scales. Figure~\ref{f:Sim123_DMD_EVs} shows the discrete-time eigenvalues of the reduced order dynamic matrix for the models corresponding to 2002, 2008, Sim1, Sim2, and Sim3. The eigenvalues are a mix of real values and complex conjugate pairs. As observed, there is some overlap of the DMD eigenvalues, representing the common set of dynamics between the models. The eigenvalues most likely do not overlap exactly because of dynamics that differ as a function of the input conditions. The angle from the real axis for an eigenvalue represents the single frequency of the dynamic mode, whereas the magnitude (distance from the origin) represents the decay rate for the given mode. As observed, most eigenvalues are clustered close to the unit circle; a magnitude greater than unity corresponds to an unstable or growing mode, a value of unity corresponds to a mode with no growth or decay, and a value less than unity corresponds to a decaying mode. An eigenvalue on the real line corresponds to pure scaling with no oscillatory properties.
The discrete-time eigenvalues in Figure~\ref{f:Sim123_DMD_EVs} can be converted to continuous time using the relation in Eq.~\ref{e:DS5}. Because the DMD eigenvalues and eigenvectors computed with the eigendecomposition are not returned in any particular order, we sort the non-zero ($|\Lambda| > 10^{-3}$) eigenvalues and eigenvectors using a custom criterion: the total contribution of each DMD mode, calculated by projecting the eigenvectors onto the first 10 POD modes scaled by the POD eigenvalues. Figure~\ref{f:Sim123_DMD_TPs} shows the continuous-time time period (1/frequency; the value of Inf for the time period corresponding to a zero frequency is set to 0) and growth rates of the dynamic modes ordered using this sorting. Figure~\ref{f:Sim3_DMD_Modes} shows the first 5 dynamic modes for Sim3. To avoid redundancy, we show only one mode for each complex conjugate pair. Dynamic modes for the other models are not shown to save space. Corresponding to the eigenvalues, the dynamic modes can be real or complex; the real modes or eigenvalues represent pure scaling with no dynamic or oscillatory properties. The real and imaginary parts combine to give the modes their dynamic properties.
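Assuming the standard discrete-to-continuous conversion $\lambda_c = \ln(\lambda_d)/\Delta t$ (cf. Eq.~\ref{e:DS5}), the growth rates and time periods plotted here can be computed as in the following sketch (illustrative Python; the convention of setting the Inf period at zero frequency to 0 follows the text):

```python
import numpy as np

def continuous_spectrum(eigs, dt=1.0):
    """Growth rates and time periods from discrete-time DMD eigenvalues.

    dt is the snapshot spacing; growth is in 1/dt units and the period is
    in dt units (e.g., hours if dt is in hours).
    """
    lam = np.log(np.asarray(eigs, dtype=complex)) / dt   # lambda_c = ln(lambda_d)/dt
    growth = lam.real                                    # ~0 => neutrally stable mode
    freq = lam.imag / (2.0 * np.pi)                      # cycles per unit time
    # period = 1/|freq|, with the zero-frequency Inf mapped to 0 as in the text
    period = np.divide(1.0, np.abs(freq), out=np.zeros_like(freq), where=freq != 0)
    return growth, period
```

A unit-circle eigenvalue at angle $2\pi\,\Delta t/T$ maps to a neutrally stable mode of period $T$, while a real eigenvalue inside the unit circle maps to a pure (non-oscillatory) decay.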
The first dynamic mode is real with a corresponding zero frequency and represents scaling of the atmosphere with $F_{10.7}$. Mode 2 represents a conjugate pair and corresponds to scaling and long-term variations driven by $F_{10.7}$. The time period of close to 4 years seems to be a result of the variation of $F_{10.7}$, with a dip at close to 4 years (at the beginning of 2001). We expect that the conjugate pair would manifest as independent real modes in the absence of the dip in $F_{10.7}$ (the first 5 modes for 2002 are all real). Mode 3 represents a conjugate pair combining semi-annual variations and higher-order harmonics of the daily period. The combination has a time period of close to 70 days. Mode 4 represents a conjugate pair that seems to correspond to the 27-day variations in $F_{10.7}$. The next three modes, including mode 5 in Figure~\ref{f:Sim3_DMD_Modes}, seem to represent short-term variations. All the dynamic modes that represent scaling are embedded into the multi-frequency POD modes that correspond to scaling of the thermosphere. All the dynamic modes are stable with negligible decay rates.
\begin{figure}[htb!]
\centering
\includegraphics[width=\textwidth]{Sim123_DMD_EVs.eps}
\caption{Discrete-time DMD eigenvalues of the reduced order dynamic matrix $\tilde{{\bf A}}$ for 2002, 2008, Sim1, Sim2, and Sim3.}
\label{f:Sim123_DMD_EVs}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=\textwidth]{Sim123_DMD_TPs.eps}
\caption{Continuous-time time period (1/frequency) and growth rates for the DMD modes of the reduced order dynamic matrix $\tilde{{\bf A}}$ for 2002, 2008, Sim1, Sim2, and Sim3, ordered by the custom criterion described in the text. Time periods (Inf) corresponding to zero frequency are set to zero.}
\label{f:Sim123_DMD_TPs}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=\textwidth]{Sim3_DMD_Modes_New.eps}
\caption{The real and imaginary parts for the DMD modes of the reduced order dynamic matrix $\tilde{{\bf A}}$ for Sim3 at 450 km altitude.}
\label{f:Sim3_DMD_Modes}
\end{figure}
\section{Model Limitations}\label{s:ML}
In this section, we discuss the limitations of the quasi-physical dynamic reduced order model for TIE-GCM developed as part of this work. We call it TIE-GCM-ROM-v1.0. The developed model has two major limitations:
\begin{enumerate}
\item The current version of the model is only applicable for geographic altitudes between 100 and 450 km.
\item The current version of the model is recommended for use in a $K_{p}$ range from 0 to 5. Application outside of this input range requires further development and/or data assimilation; without these, the model can produce significantly large errors.
\end{enumerate}
The model error is on the order of a few percent (3-5\%), which is a function of the number of POD modes used in the reduced order approximation of the dynamic and input matrices (Step 3 of the algorithm described in section \ref{s:NA}).
\section{Further Development}\label{s:FD}
Further development will seek to address the limitations discussed in the previous section while advancing the methodology for modeling the neutral chemical species for a physically self-consistent model and developing the framework for data assimilation. Because the geographic altitude modeled by TIE-GCM is a function of $F_{10.7}$, the extension to higher altitudes will require either extrapolation to higher altitudes under assumptions during low levels of solar activity or development of multiple models valid for smaller ranges of $F_{10.7}$. We will also attempt to expand the model's applicability to times of extreme geomagnetic storms. Future work will involve developing the techniques and framework for data assimilation with the ROM.
\section{Conclusions} \label{s:Conc}
Accurate specification of the thermosphere is important in the context of atmospheric drag, the largest source of uncertainty for orbit prediction in low Earth orbit (LEO), pertinent to space situational awareness. Most existing models can be classified as either empirical (fast to evaluate but with limited forecasting ability) or physics-based (potentially good forecasting ability, but requiring dedicated parallel resources for real-time application and data assimilative capabilities that have not yet been developed).
In this work, we develop a quasi-physical dynamic reduced order model (ROM) to overcome the limitations of both empirical and physics-based models. The ROM is developed using modal decomposition methodology, with a newly developed Hermitian Space - Dynamic Mode Decomposition with control (HS-DMDc) algorithm based on Dynamic Mode Decomposition (DMD). The ROM formulation is also expected to simplify the framework for data assimilation; the development of which will be the subject of future work. We call the developed ROM TIE-GCM-ROM-v1.0. We validate the model by reproducing TIE-GCM output for the years of 2002 and 2008, corresponding to high and low solar activity, respectively, and an independent TIE-GCM simulation in 1996 not used in the development. Results show that the ROM performs well in serving as a reduced order substitute for TIE-GCM with the ability to maintain low forecast error ($\sim$5\%) for a minimum of 24 hours. The reduced order model evaluation requires very little computational resources with a full day forecast taking only a fraction of a second.
\section{Acknowledgments}
The first two authors wish to acknowledge support of this work by the Air Force's Office of Scientific Research under Contract Number FA9550-18-1-0149 issued by Erik Blasch. The authors wish to acknowledge useful conversations related to DMD and reduced order modeling with Humberto Godinez of Los Alamos National Laboratory. The authors also wish to thank the anonymous reviewers for their helpful comments. The model can be downloaded at the University of Minnesota Digital Conservancy (https://conservancy.umn.edu/) with a search for TIEGCM-ROM.
\section{Introduction}
In this paper all knots will be oriented and we write equality for orientation preserving ambient isotopy.
For a knot $K$ we write $r(K)$ for $K$ with orientation reversed and $m(K)$ for the mirror image of $K$.
It is known that the knot quandle $\mathcal{Q}(K)$ distinguishes distinct oriented knots $K_1$ and $K_2$ if and only if $K_2 \neq rm(K_1)$ \cite{Joyce,Mat}. The knot group $\pi_K$ cannot distinguish $K$ from $s(K)$ for any
$s \in \{ r,m,rm \}$ (see, e.g., \cite{BZ}). It follows that neither the set of (quandle) homomorphisms ${\rm Hom}_{\rm Qnd}(\mathcal{Q}(K), Q)$ from $\mathcal{Q}(K)$ to a quandle $Q$ nor the set ${\rm Hom}_{\rm Gp}(\pi_K,G)$ of (group) homomorphisms from $\pi_K$ to a group $G$ is a complete invariant of oriented knots.
In the case of quandles a stronger invariant (the {\it 2-cocycle invariant} or {\it 2-cocycle state-sum invariant}) was obtained in \cite{CJKLS}
using a 2-cocycle $\varphi$ for a finite quandle $Q$ with coefficients in an abelian group $\Lambda$. One defines a mapping
$$B_\varphi: {\rm Hom}_{\rm Qnd}(\mathcal{Q}(K), Q) \rightarrow \Lambda:\quad \rho \mapsto B_\varphi(\rho)$$ whose fibers determine a partition of ${\rm Hom}_{\rm Qnd}(\mathcal{Q}(K), Q)$ indexed by $\Lambda$. Since $Q$ is finite this partition can be expressed as
an element $\Phi_Q^\varphi(K)$ of the group ring $\Z[\Lambda]$. See \cite{CDS} for evidence that the 2-cocycle invariant might be a complete invariant for oriented knots.
In the case of groups the knot group {\it peripheral system} $(\pi_K, m_K, l_K)$, where $(m_K, l_K)$ is a meridian-longitude pair, is a complete invariant of oriented knots (see \cite{BZ}). Using this, Eisermann \cite{Eis-colpoly} defined the {\it knot coloring polynomial} for a pointed finite group $(G,x)$ corresponding to a peripheral system $(\pi_K, m_K, l_K)$ as
$$P_G^x(K) = \sum_{\rho} \rho (l_K) $$
where the sum is taken over all homomorphisms $\rho: \pi_K \rightarrow G$ with $\rho(m_K) = x$.
It turns out that longitude images lie in $\Lambda = C(x) \cap G'$ and hence $P_G^x(K)$ is an element of the group ring $\Z[\Lambda]$. Eisermann shows in \cite{Eis-colpoly} that when $G$ is finite and $\Lambda$ is abelian a knot coloring polynomial can be expressed as a 2-cocycle invariant over the conjugation quandle $x^G$ and, conversely, a 2-cocycle invariant for a finite quandle $Q$ is a specialization of a knot coloring
polynomial for $G = {\rm Inn}(\tilde{Q})$, where $\tilde{Q}$ is the abelian extension corresponding to the given 2-cocycle. In particular, any knots distinguishable by 2-cocycle invariants are distinguishable by knot coloring polynomials. We note, however, that in general the price one pays for this is a group much larger than the quandle.
In case $G$ is infinite the coefficients of $P_G^x(K)$ may be infinite; in that case we replace it by the {\it longitudinal mapping}
$$ \mathcal{L}_G^x(K) : {\rm Hom}_{\rm Gp}(\pi_K, m_K; G,x) \rightarrow \Lambda, \quad \rho \mapsto \rho(l_K),$$
where ${\rm Hom}_{\rm Gp}(\pi_K, m_K; G,x)$ is the set of homomorphisms $\rho: \pi_K \rightarrow G$ with $\rho(m_K) = x$. If $G$ is a topological group, $\mathcal{L}_G^x (K)$ may be thought of as a topological analogue of the 2-cocycle invariant or the knot coloring polynomial. This is the invariant we examine for the case $G = SU(2)$ in this paper. We find $\mathcal{L}_G^x (K)$ when $K$ is a torus knot $T(2,n)$ for odd $n \geq 3$ and when $K$ is the figure eight knot $4_1$.
Let $Q$ be any quandle (possibly infinite) and let
$T$ be a 1-tangle diagram whose closure is the knot $K$.
Denote the initial arc of $T$ by $0$ and the terminal arc by $n$.
For arbitrary fixed $e \in Q$ let ${\rm Col}_Q^e(T)$ denote the set of colorings of $T$ by quandle $Q$
such that $C(0)=e$.
Furthermore, by Lemma 2.2 in \cite{CDS}, for $C \in {\rm Col}_Q^e(T)$, $b = C(n)$ satisfies $R_b = R_e$. That is, $b$ lies in the fiber $F_e = {\rm inn}^{-1}(R_e)$. We define the mapping
$$\Psi_Q^e(K) : {\rm Col}_Q^e (T) \rightarrow F_e, \quad C \mapsto C(n).$$
In the appendix we show that if $Q$ is the generalized Alexander quandle ${\rm GAlex}(G',f_x)$ constructed from the pointed group $(G,x)$ where $f_x (u) = x^{-1}ux$, then $ \mathcal{L}_G^x(K)$ is equivalent to $\Psi_Q^e(K)$ where $e = 1$. This gives a way to construct the longitudinal mapping without use of a meridian-longitude pair.
\section{Basic Definitions}
In this section we briefly review some definitions and examples.
More details can be found, for example, in \cite{CKS}.
If $X$ is a set with a binary operation $*$ the {\it right translation} ${R}_a:X \rightarrow X$, by $a \in X$, is defined
by ${ R}_a(x) = x*a$ for $x \in X$.
The magma $(X,*)$ is a {\it quandle} if each right translation $R_a$ is an automorphism of $(X,*)$ and every element of $X$ is idempotent.
A {\it quandle homomorphism} between two quandles $X, Y$ is
a map $f: X \rightarrow Y$ such that $f(x*_X y)=f(x) *_Y f(y) $, where
$*_X$ and $*_Y$
denote
the quandle operations of $X$ and $Y$, respectively.
A {\it quandle isomorphism} is a bijective quandle homomorphism, and
two quandles are {\it isomorphic} if there is a quandle isomorphism
between them.
The set of quandle homomorphisms from $X$ to $Y$ is denoted by ${\rm Hom}_{\rm Qnd}(X, Y)$.
A quandle epimorphism $f: X \rightarrow Y$ is a {\it covering}~\cite{Eis-unknot}
if $f(x)=f(y)$ implies $a*x=a*y$ for all $a, x, y \in X$.
For a quandle $(X,*)$, since $R_a$ for each $a \in X$ is an automorphism, one may define the binary
operation $\bar{*}$ by $x\ \bar{*}\ y=R^{-1}_y(x)$. This gives a quandle structure on $X$, called the {\it dual} quandle.
The subgroup of ${\rm Sym}(X)$ generated by the permutations ${ R}_a$, $a \in X$, is
called the {\it {\rm inner} automorphism group} of $X$, and is
denoted by ${\rm Inn}(X)$.
The map ${\rm inn}: X \rightarrow {\rm inn}(X) \subset {\rm Inn}(X)$
(which is a quandle under conjugation)
defined by ${\rm inn}(x)=R_x$ is called the {\it inner representation}.
An inner representation is a covering.
A quandle is {\it indecomposable} if ${\rm Inn}(X)$ acts transitively on $X$. We use {\it indecomposable}
here rather than {\it connected} to avoid confusion with the topological sense of the word.
A quandle is {\it faithful} if the mapping ${\rm inn}: X \rightarrow {\rm Inn}(X)$ is an injection.
As in Joyce \cite{Joyce}, given a group $G$
and $f \in {\rm Aut}(G)$, a quandle operation is defined on $G$ by
$x*y=f(xy^{-1}) y ,$ $ x,y \in G$. We call such a quandle a {\it generalized Alexander quandle} and denote it by ${\rm GAlex}(G,f)$.
If $G$ is abelian, such a quandle is known as an {\it Alexander quandle} or {\it affine quandle}.
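For a concrete check, the quandle axioms for a small Alexander quandle on $\Z_n$ with $f(x)=tx$ ($t$ invertible mod $n$), i.e. $x*y = t(x-y)+y \bmod n$, can be verified by brute force. The following is an illustrative Python sketch, not part of the paper:

```python
# Alexander quandle on Z_n with automorphism f(x) = t*x (t invertible mod n)
def make_alexander(n, t):
    def op(x, y):
        return (t * (x - y) + y) % n
    return op

def is_quandle(X, op):
    """Brute-force check of the quandle axioms on a finite magma (X, op)."""
    idem = all(op(x, x) == x for x in X)                          # x*x = x
    bij = all(len({op(x, a) for x in X}) == len(X) for a in X)    # each R_a bijective
    dist = all(op(op(x, y), z) == op(op(x, z), op(y, z))          # R_z a homomorphism
               for x in X for y in X for z in X)
    return idem and bij and dist
```

When $t$ is not invertible (e.g. $t=0$, so $x*y=y$), the right translations fail to be bijections and the check correctly rejects the structure.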
\begin{figure}[htb]
\begin{center}
\includegraphics[width=3in]{Xing}
\end{center}
\caption{Colored crossings, positive (left) and negative (right)}
\label{Xing}
\end{figure}
\bigskip
Let $D$ be a diagram of a knot $K$, and ${\cal A}(D)$ be the set of arcs of $D$.
A {\it coloring} of a knot diagram $D$ by a quandle $X$
is a map $C: {\cal A}(D) \rightarrow X$ satisfying the condition depicted in Figure~\ref{Xing}
at every positive (left) and negative (right) crossing.
The set of colorings of $D$ by $X$ is denoted by ${\rm Col}_X(D)$. There is a bijection from ${\rm Hom}_{\rm Qnd}(\mathcal{Q}(K), X) $ to ${\rm Col}_X(D)$. The cardinality $| {\rm Col}_X(D) |$ is a knot invariant
(e.g. see \cite{CKS}).
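As a quick illustration of the invariant $| {\rm Col}_X(D) |$, one can count colorings of the standard trefoil diagram by the dihedral quandle $R_3$ (with $x*y = 2y - x \bmod 3$), recovering the classical Fox $3$-colorings. This is an illustrative Python sketch; the three crossing relations encode the standard alternating diagram:

```python
from itertools import product

def trefoil_colorings(n=3):
    """Count colorings of the standard trefoil diagram by the dihedral quandle R_n."""
    count = 0
    for a, b, c in product(range(n), repeat=3):
        # at each crossing the outgoing under-arc equals (under * over) = 2*over - under
        if ((2 * b - a) % n == c and
                (2 * c - b) % n == a and
                (2 * a - c) % n == b):
            count += 1
    return count
```

For $n=3$ one finds 9 colorings (3 trivial and 6 nontrivial), while for $n=5$ only the 5 trivial colorings survive, so the count depends on the quandle as well as the knot.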
A $1$-{\it tangle}, or a {\it long knot}, is a properly embedded arc in a $3$-ball, and the equivalence of
long knots
is defined by ambient isotopies of the $3$-ball fixing the
boundary.
A diagram of a $1$-tangle is defined in a
manner similar to a knot diagram, from a regular projection to a disk by
specifying crossing information. An orientation
of a $1$-tangle is specified by an arrow on a diagram. A knot
diagram is obtained from a $1$-tangle diagram by closing the end points by a
trivial arc outside of a disk. This procedure is called the {\it closure} of a
$1$-tangle. If a $1$-tangle is oriented, then the closure inherits the
orientation.
Two diagrams of the same $1$-tangle are related by Reidemeister moves.
There is a bijection between knots and $1$-tangles for classical knots, and invariants of 1-tangles give rise to
invariants of knots, see, for example, \cite{Eis-unknot,Nieb}.
A quandle coloring of an oriented $1$-tangle diagram is defined in a manner
similar to those for knots. We do not require that the end points receive the
same color for a quandle coloring of $1$-tangle diagrams. However this will be the case for a conjugation quandle.
For a quandle $Q$ and $x \in Q$,
denote by ${\rm Col}_Q^{x}(T)$ the set of colorings of a $1$-tangle $T$ by $Q$ with the initial arc colored by $x$.
\bigskip
\section{Computation of the Longitudinal Mapping} \label{sec:poly}
For convenience we often identify the diagram of a tangle $T$ with the tangle itself.
\begin{definition}{\rm
({\it Wirtinger code} \, Eisermann \cite{Eis-unknot}) \label{Wirtinger}
Label the arcs of a $1$-tangle $T$ by integers, ${\cal A}(T)=\{ 0, \ldots, n \}$,
such that $0$ and $n$ are the initial and terminal arcs, respectively,
and the remaining arcs are labeled in order when traveled along the tangle from $0$ to $n$.
At the end of arc number $i-1$, we undercross arc $\kappa(i) =\kappa i$ and continue on arc number $i$.
Let $\epsilon(i) = \epsilon i$ be the sign of crossing $i$.
Note that these are maps $\kappa : \{1,\dots, n \} \rightarrow \{0, \dots, n \}$ and $\epsilon : \{1, \dots, n \} \rightarrow \{1,-1\}.$ The pair $(\kappa,\epsilon)$ is called the {\it Wirtinger code} of the diagram $T$.
}
\end{definition}
The $1$-tangle group $\pi_T$ with diagram $T$ and Wirtinger code $(\kappa,\epsilon)$ allows the presentation
$$ \pi_T = \langle x_0,x_1,\dots,x_n \ | \ r_1, \dots, r_n \rangle, \text{ where }
r_i \text{ is the relation } x_i = x_{\kappa i}^{-\epsilon i} x_{i-1} x_{\kappa i}^{\epsilon i}.$$
As in \cite{Eis-colpoly} we choose the meridian
\begin{eqnarray*}\label{meridian}
m_T = x_0
\end{eqnarray*}
and the (preferred) longitude
\begin{eqnarray*}\label{long2}
l_T =
x_0^{-w(T)} x_{\kappa 1} ^{\epsilon 1}x_{\kappa 2}^{\epsilon 2} \cdots x_{\kappa n} ^{\epsilon n}.
\end{eqnarray*}
See Remark 3.13 of \cite{BZ} for this form of the longitude.
The knot group $\pi_K$ is isomorphic to $\pi_T$.
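Given images of the generators $x_0,\dots,x_n$ in a finite group, the Wirtinger relations and the longitude word above can be evaluated directly from the code $(\kappa,\epsilon)$. The following is a hypothetical Python helper (not from the paper), with permutations as tuples $p$ where $p[i]$ is the image of $i$, and `compose(p, q)` applying $q$ first:

```python
def compose(p, q):
    """Product p*q in the convention 'apply q first, then p'."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def power(p, k):
    r = tuple(range(len(p)))
    q = p if k >= 0 else inverse(p)
    for _ in range(abs(k)):
        r = compose(r, q)
    return r

def longitude_image(kappa, eps, x):
    """Check the Wirtinger relations and return rho(l_T) = x_0^{-w} prod x_{ki}^{ei}."""
    for i in range(1, len(x)):
        k, e = kappa[i - 1], eps[i - 1]
        conj = compose(power(x[k], -e), compose(x[i - 1], power(x[k], e)))
        assert x[i] == conj, f"Wirtinger relation {i} fails"
    w = sum(eps)                       # writhe of the diagram
    l = power(x[0], -w)
    for k, e in zip(kappa, eps):
        l = compose(l, power(x[k], e))
    return l
```

As a sanity check, for the trefoil with a representation to $S_3$ sending the meridian to a transposition, the longitudinal group $\Lambda = C(x) \cap G'$ is trivial, so $\rho(l_T)$ must be the identity; the same holds for the unknot with a single positive kink.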
For a pointed {\it finite} group $(G,x)$, Eisermann defined the {\it knot coloring polynomial} of $K$ to be
$$
P_G^x(K) = \sum_{\rho} \rho(l_T),
$$
where the sum is taken over all homomorphisms $\rho: \pi_T \rightarrow G$ with $\rho(m_T) = x$. It turns out (see \cite{Eis-colpoly}) that the values $\rho(l_T)$ lie in the {\it longitudinal group}
$\Lambda = C(x) \cap G'$ where $C(x)$ is the centralizer of $x$ and $G'$ is the commutator subgroup of $G$. Thus $P^x_G(K)$ lies in the group ring $\Z[\Lambda]$.
Let ${\rm Rep}_G^x(T)$ be the set of
homomorphisms $\rho: \pi_T \rightarrow G$ with $\rho(m_T) = x$,
and ${\rm Col}^x_Q(T)$ be the set of colorings ${ C}$ by a quandle $Q$ such that ${ C}(0)=x$, where
$0$ is the initial arc of $T$.
There is a bijection between $ {\rm Rep}_G^x(T)$ and ${\rm Col}^x_Q(T)$ where $Q$ is the conjugacy class $x^G$ of $x$
under the product $a*b = b^{-1}ab$.
We wish to extend Eisermann's knot coloring polynomial to groups not necessarily finite.
\begin{definition}
{\rm
Let $(G,x)$ be any pointed group. Let $K$ be a knot and $T$ be a $1$-tangle corresponding to $K$.
We define the knot invariant $$\mathcal{L}_G^x(K): {\rm Rep}^x_G(T) \rightarrow \Lambda, \quad \rho \mapsto \rho(l_T). $$
We call it the {\em longitudinal mapping}. When there is no chance of confusion we write $\mathcal{L}$ in place of $\mathcal{L}_G^x(K)$. We shall say that two such longitudinal mappings
$\mathcal{L}_1: {\rm Rep}^x_G(T_1) \rightarrow \Lambda$ and
$\mathcal{L}_2: {\rm Rep}^x_G(T_2) \rightarrow \Lambda$ are {\it equivalent} if there is a bijection
$\beta: {\rm Rep}^x_G(T_1) \rightarrow {\rm Rep}^x_G(T_2)$ such that $\mathcal{L}_1 = \mathcal{L}_2 \beta.$ Clearly the longitudinal mapping $\mathcal{L}: {\rm Rep}_G^x(T) \rightarrow \Lambda$
is a knot invariant up to equivalence of mappings, and if $G$ is a topological group then $\mathcal{L}$ is continuous; in this case $\beta$ must be a homeomorphism. See Rubinsztein \cite{Rub} for the topology on ${\rm Col}_Q(T)$.
}
\end{definition}
\begin{remark}{\rm
See the appendix for a definition of $\mathcal{L}$ that does not depend on the meridian-longitude pair $(m_T, l_T)$.}
\end{remark}
For a finite group $G$ the knot coloring polynomial is
$P^x_G(K)=\sum_{v \in \Lambda} |\mathcal{L}^{-1}(v)| v$.
Thus $\mathcal{L}$ can be seen as an analogue of the knot coloring polynomial
defined for topological quandles.
Since the knot coloring polynomial is a generalization of the quandle $2$-cocycle invariant (see Theorem 3.24 in \cite{Eis-colpoly}), the invariant $\mathcal{L}$ is a generalization of the quandle $2$-cocycle invariant.
See \cite{CDS,Eis-colpoly} for more details of relations among these invariants.
A similar but different invariant using longitudes was considered in \cite{Nieb}.
\begin{remark}
{\rm
Note that the group $\Lambda$ acts on the set of homomorphisms $\rho: \pi_K \rightarrow G$ with $\rho(m_K) = x$ by setting $\rho^g(a) = g^{-1}\rho(a)g$ for $a \in G$. Since $g \in C(x)$ it follows that $\rho^g(m_K) = x$. Hence if $\Lambda $ is abelian, then $\mathcal{L}$ is constant on the orbits of this action by $\Lambda$.
In our application, $\Lambda $ is abelian.
Thus, for example, for a two-bridge knot with diagram $T$, suppose the arcs $x_0$ and $x_1$ in the above notation are the two bridges. Then $\mathcal{L} (\rho)$ is completely determined by $\rho(x_0) = \rho(m_K) = x \in G$ and $\rho(x_1)$.
}
\end{remark}
\begin{proposition} \label{mirror} $\mathcal{L}_G^x(rm(K))(\rho) = \mathcal{L}_G^x(K)(\rho)^{-1}$ for all $\rho \in {\rm Rep}^x_G(T) $. \end{proposition}
\begin{proof} This is immediate from the fact that if $(\pi_K, m_K, l_K)$ is a peripheral system for knot $K$ then $(\pi_K, m_K, {l_K}^{-1})$ is a peripheral system for the knot $rm(K)$ (\cite{Kawauchi}, Chapter 6).
\end{proof}
\section{Background for ${\rm SU(2)}$ } \label{sec:SU(2)}
For the remainder of the paper, we examine the invariant $\mathcal{L}$ for $(G,x) =({\rm SU(2)},\mbf{x})$ with various choices of $\mbf{x}$. We represent ${\rm SU(2)}$ by the group of unit quaternions, that is,
$$ {\rm SU(2)} = \{ a + b\mathbf{i} + c\mathbf{j}+ d\mathbf{k}: a^2+b^2+c^2+d^2 = 1 \}.$$
The group ${\rm SO(3)}$ will also be of use. Elements of ${\rm SO(3)}$ will be denoted by $\rot{\theta}{v}$, $\theta \in \R$, $\mathbf{v} \in S^2$. If $\mathbf{u} \in \R^3$, $\mathbf{u}\rot{\theta}{v}$ is the vector obtained by rotating $\mathbf{u}$ about $\mathbf{v}$ by $\theta$ radians using the right-hand rule.
We represent elements of $\R^3$ as pure quaternions $\mathbf{u} = u_1\mathbf{i}+ u_2\mathbf{j} + u_3\mathbf{k}$ and we identify the set of pure unit quaternions with the sphere $S^2.$
Then each element of ${\rm SU(2)}$ can be represented in the form
$$ \Exp{\theta}{u} = \cos(\theta) + \sin(\theta)\mathbf{u}, \quad
\mathbf{u} \in S^2,\quad 0 \leq \theta < 2\pi.$$
Note that a pure quaternion $\mathbf{u}$ satisfies $\mathbf{u}^2 = -1$ and hence the quaternions $\Exp{\theta}{u}$ for fixed $\mathbf{u}$ behave just like complex numbers $\Exp{\theta}{i}= \cos(\theta) + \sin(\theta) \mathbf{i}$.
From \cite{DJ} (Section 1.2) the conjugacy classes of ${\rm SU(2)}$
are given by
$$
\tilde{C}_\theta = \{ \Exp{\theta}{u} : \ \mathbf{u} \in S^2\},
$$
for $0 \le \theta \le \pi.$
In this case $\tilde{C}_0 = \{1\}$, $\tilde{C}_{\pi} = \{-1\}$ and for $0 < \theta < \pi$, $\tilde{C}_\theta$ is a sphere. This also follows from Lemma~\ref{lem:conjugation} below.
It is known (see for example \cite{Kuipers}, Theorem 5.1) that for $\mbf{u},\mbf{v} \in S^2$ and $\theta \in \R$,
$$ \Exp{\theta}{u}\mathbf{v}\Exp{-\theta}{u} = \mathbf{v} \rot{2\theta}{u}.$$
The double covering homomorphism $\phi: {\rm SU(2)} \rightarrow {\rm SO(3)}$ may be defined by
$$\mathbf{v} \phi(\mathbf{q})= \mathbf{q}^{-1}\mathbf{v}\mathbf{q}.$$
In this case if $\mathbf{q} = \Exp{\theta}{u}$, then $\phi(\mathbf{q}) = \rot{-2\theta}{\mathbf{u}}$, the rotation by $-2\theta$ radians about the unit
vector $\mathbf{u}$. We must take $\phi(\mathbf{q})$ to be $\mathbf{q}^{-1}\mathbf{v}\mathbf{q}$ instead of $\mathbf{q}\mathbf{v}\mathbf{q}^{-1}$ since we write the rotation operator on the right of the argument.
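These identities are easy to confirm numerically. The following Python sketch (the helper names are ours, not from the paper) represents quaternions as $4$-tuples and checks the conjugation formula $\Exp{\theta}{u}\,\mathbf{v}\,\Exp{-\theta}{u} = \mathbf{v} \rot{2\theta}{u}$ against the Rodrigues rotation formula:

```python
import math

# Quaternions as tuples (a, b, c, d) = a + b i + c j + d k; helper names are ours.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qexp(theta, u):
    """Exp(theta){u} = cos(theta) + sin(theta) u, for a pure unit quaternion u."""
    s = math.sin(theta)
    return (math.cos(theta), s*u[0], s*u[1], s*u[2])

def rot(p, psi, axis):
    """Rodrigues formula: rotate p about the unit vector axis by psi, right-hand rule."""
    d = sum(a*b for a, b in zip(axis, p))
    cx = (axis[1]*p[2]-axis[2]*p[1], axis[2]*p[0]-axis[0]*p[2], axis[0]*p[1]-axis[1]*p[0])
    return tuple(p[i]*math.cos(psi) + cx[i]*math.sin(psi)
                 + axis[i]*d*(1-math.cos(psi)) for i in range(3))

theta, u, v = 0.7, (0.0, 0.0, 1.0), (1.0, 0.0, 0.0)
q = qexp(theta, u)
qbar = (q[0], -q[1], -q[2], -q[3])          # inverse of a unit quaternion
conj = qmul(qmul(q, (0.0,) + v), qbar)[1:]  # vector part of Exp(theta){u} v Exp(-theta){u}
assert all(abs(a - b) < 1e-12 for a, b in zip(conj, rot(v, 2*theta, u)))
```

Here $\mathbf{v} = \mathbf{i}$ is rotated about $\mathbf{u} = \mathbf{k}$ by $2\theta = 1.4$ radians, so `conj` equals $(\cos 1.4, \sin 1.4, 0)$.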
\begin{lemma}\label{lem:conjugation} For fixed $\theta, \beta \in \R $ and $\mathbf{u,v} \in S^2$ we
have
$${\Exp{-\beta}{v}}\Exp{\theta}{u} \Exp{\beta}{v} =\Exp{\theta}{w},$$
where $\mathbf{w} = \mathbf{u} \rot{-2\beta}{v}$.
\end{lemma}
\begin{proof} We compute:
$$\begin{array}{lllll}
\Exp{-\beta}{v}\Exp{\theta}{u} \Exp{\beta}{v}
&= &
\Exp{-\beta}{v}(\cos(\theta) + \sin(\theta) \mathbf{u}) \Exp{\beta}{v}
&= &
\cos(\theta) + \sin(\theta){\Exp{-\beta}{v}} \mathbf{u}\Exp{\beta}{v} \\
&= &
\cos(\theta) + \sin(\theta) \mathbf{u}\rot{-2\beta}{v}
&= & \Exp{\theta}{w},
\end{array}
$$
where $\mathbf{w} = \mathbf{u}\rot{-2\beta}{v}.$
\end{proof}
Since ${\rm SO(3)}$ acts transitively on $S^2$, from Lemma~\ref{lem:conjugation} we have:
\begin{corollary}\label{cor:conj}
The conjugacy class of $\mbf{x} = \Exp{\theta}{u}$ has the form
$$ \mbf{x}^{\rm SU(2)} = \{ \Exp{\theta}{v}: \ \mathbf{v} \in S^2 \}.$$
\end{corollary}
\begin{definition}\label{def:S^2}
{\rm For $ 0 < \psi <2 \pi$ we denote by $S^2_\psi$ the quandle with underlying set $S^2$ and product
$\mathbf{u}*\mathbf{v} =\mathbf{u} \rot{\psi}{v}$,
for $\mathbf{u},\mathbf{v} \in S^2$. We call this a {\it spherical quandle}.
}
\end{definition}
\begin{lemma} \label{lem:QuandleIsomorphism}
For $0 < \theta < \pi$ the mapping $\mbf{u} \mapsto \Exp{\theta}{u}$ is an isomorphism from quandle $S_{\psi}^2$ with $\psi = 2\pi - 2\theta$ to the conjugacy class $\tilde{C}_\theta = \{ \Exp{\theta}{u}: \ \mathbf{u} \in S^2 \}$ considered as a quandle under conjugation:
$\mbf{p*q=q^{-1} p q}$.
\end{lemma}
\begin{proof}
The result follows from Lemma~\ref{lem:conjugation} by taking $\beta = \theta.$
\end{proof}
\begin{lemma}
${\rm SU(2)}$ is a perfect group, that is, it is its own commutator subgroup.
\end{lemma}
\begin{proof} By \cite{Porteous}, Prop.~10.24 every unit quaternion $\mbf{q}$ has the form
$\mbf{q=aba^{-1}b^{-1}}$ for non-zero quaternions $\mbf{a}$ and $\mbf{b}$. The same holds
if we normalize $\mbf{a}$ and $\mbf{b}$.
\end{proof}
\begin{lemma}\label{lem:Lambda}
If $\mbf{x}=\Exp{\theta}{u}$ for $0 < \theta < \pi$ then the centralizer $C(\mbf{x})$ is the circle group:
$$\{ \Exp{\beta}{u} : \ 0 \leq \beta < 2\pi \}.$$
Hence, the longitudinal group for $({\rm SU(2)},\mbf{x})$ is given by
$$\Lambda = C(\mbf{x}) \cap {\rm SU(2)}' = C(\mbf{x}) = \{ \Exp{\beta}{u} : \ 0 \leq \beta < 2\pi \}.$$
\end{lemma}
\begin{proof} This follows from Lemma~\ref{lem:conjugation}, the fact that for $0<\beta<\pi$ and $\mathbf{u}, \mathbf{v} \in S^2$,
$ \mathbf{u} \rot{\beta}{v} = \mathbf{u}$ if and only if $\mathbf{v} = \pm \mathbf{u}$, together with the fact that
$$\{ \Exp{\beta}{u} : \ 0 \leq \beta < 2\pi \}=\{ \Exp{\beta}{(-u)} : \ 0 \leq \beta < 2\pi \}.$$
\end{proof}
\begin{remark}
{\rm
It is easy to see that for $\mbf{x}= \Exp{\theta}{u}$ the conjugacy classes $x^{\rm SU(2)} = \tilde{C}_\theta$ and $(-\mbf{x})^{\rm SU(2)} =\tilde{C}_{\theta + \pi}$
are isomorphic via $ \mathbf{q} \mapsto -\mathbf{q}$ as conjugation quandles. Note also that $\mathbf{u} \mapsto -\mathbf{u}$ leaves the longitude invariant.
Thus for our purposes it suffices to consider only those $\mbf{x} = \Exp{\theta}{u}$ for $ 0 < \theta < \pi$.
Note that $\rot{\psi}{v} =\rot{-\psi}{-v}$. It follows that $S^2_\psi$ is isomorphic to
$S^2_{-\psi}$ via $\mbf{u} \mapsto -\mbf{u}$. Thus when coloring knots by the family of quandles $S^2_\psi$ we
may restrict $\psi$ to the interval $(0,\pi]$, and for the quandles $\tilde{C}_\theta$ we may restrict $\theta$ to
the interval $[\pi/2, \pi)$.
}
\end{remark}
Fix $\theta \in (0, \pi)$ and $\mbf{x}=\mbf{x}_\theta=\Exp{\theta}{i}$, where $\mathbf{i} = (1,0,0)$; we are interested in computing $\mathcal{L}_\theta = \mathcal{L}_{{\rm SU(2)}}^{\mbf{x}_\theta}.$
\section{Knot Colorings by the Spherical Quandles $S^2_\psi$}
Knot group representations in ${\rm SU(2)}$ were studied in Klassen~\cite{Klassen}, in particular for all torus knots and twist knots.
We present explicit colorings of torus knots $T(2, n)$ and the figure eight knot in this section and we compute the longitudinal mappings of these knots in the next section.
Fix $\psi \in (0, 2 \pi)$ and as above denote by $\rot{\psi}{v}$ the rotation by $\psi$ about ${\mathbf v}$.
Then the quandle structure on $Q=S^2_\psi$ is as defined in Definition~\ref{def:S^2}, for ${\mathbf u}, {\mathbf v } \in S^2$, by
${\mathbf u} *{\mathbf v }={\mathbf u} \rot{\psi}{v} $ with right action of the rotation.
Denote by $\langle {\mathbf u} , {\mathbf v} \rangle$ the inner product of ${\mathbf u}, {\mathbf v}$ in $\R^3$, so that $S^2_\psi=\{ {\mathbf u } \in \R^3 \ | \ \langle {\mathbf u}, {\mathbf u }\rangle =1 \}$.
We also denote the length of the shortest spherical geodesic segment between ${\mathbf u} , {\mathbf v} \in S^2_\psi$ by $d({\mathbf u}, {\mathbf v} ) = \arccos(\langle {\mathbf u} , {\mathbf v } \rangle )$,
and we denote the (directed) spherical angle at a vertex $\mathbf{v}$ formed by
three unit vectors ${\mathbf u}, {\mathbf v} , {\mathbf w } \in S^2_\psi$
by $\angle({\mathbf u} {\mathbf v } {\mathbf w} )= \psi$ if ${\mathbf w} ={\mathbf u}\rot{ \psi}{v}$ and $0 \le \psi < 2\pi$.
Let $\kappa : \{1,\dots, n \} \rightarrow \{0, \dots, n \}$ and $\epsilon : \{1, \dots, n \} \rightarrow \{1,-1\}$ be the Wirtinger code of a tangle diagram $T$ as described in Definition~\ref{Wirtinger}.
We observe that the coloring condition depicted in Figure~\ref{Xing} is formulated as follows.
Let $Q= S^2_\psi$. Then a coloring $\rho \in {\rm Col}_Q^{\mbf{x}}(T)$ corresponds to a sequence of points
$$ (\rho(0), \ldots, \rho(n) ) \in {(S^2_\psi)}^{n+1}$$ satisfying
\begin{eqnarray*} \label{ColEqns}
\rho(i) = \rho(i-1) \rot{\psi}{\rho(\kappa(i))}^{\epsilon(i)}, \quad i \in \{1,\ldots,n \} .
\end{eqnarray*}
Thus we have the following, as stated in \cite{Klassen}:
\begin{lemma}\label{lem:color}
For a coloring of a knot diagram by $Q= S^2_\psi$,
consider a crossing with the colors $({\mathbf a}, {\mathbf b})$ as depicted in Figure~\ref{Xing}.
Then ${\mathbf c}={\mathbf a} *{\mathbf b} $
if and only if $d({\mathbf a}, {\mathbf b} ) =d ({\mathbf b}, {\mathbf c })$ and
$\angle({\mathbf a }{\mathbf b} {\mathbf c })=\psi$.
In particular, any orientation preserving isometry of the sphere takes a coloring to a coloring.
\end{lemma}
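A quick numerical check of Lemma~\ref{lem:color} (with hypothetical helper names): for $\mathbf{c} = \mathbf{a} * \mathbf{b}$ in $S^2_\psi$, the rotation about $\mathbf{b}$ preserves the spherical distance to $\mathbf{b}$, and the directed spherical angle at $\mathbf{b}$ recovers $\psi$.

```python
import math

def rot(p, psi, axis):  # Rodrigues rotation about a unit axis, right-hand rule
    d = sum(a*b for a, b in zip(axis, p))
    cx = (axis[1]*p[2]-axis[2]*p[1], axis[2]*p[0]-axis[0]*p[2], axis[0]*p[1]-axis[1]*p[0])
    return tuple(p[i]*math.cos(psi) + cx[i]*math.sin(psi)
                 + axis[i]*d*(1-math.cos(psi)) for i in range(3))

def d_sph(u, v):  # spherical distance d(u, v) = arccos(<u, v>)
    return math.acos(max(-1.0, min(1.0, sum(a*b for a, b in zip(u, v)))))

psi = 2.1
a, b = (0.6, 0.8, 0.0), (0.0, 0.6, 0.8)
c = rot(a, psi, b)                           # c = a * b in S^2_psi
assert abs(d_sph(a, b) - d_sph(b, c)) < 1e-12

# The directed spherical angle at b taking a to c (right-hand rule) equals psi.
dot = lambda x, y: sum(p*q for p, q in zip(x, y))
cross = lambda x, y: (x[1]*y[2]-x[2]*y[1], x[2]*y[0]-x[0]*y[2], x[0]*y[1]-x[1]*y[0])
ap = tuple(ai - dot(a, b)*bi for ai, bi in zip(a, b))   # component of a orthogonal to b
cp = tuple(ci - dot(c, b)*bi for ci, bi in zip(c, b))
angle = math.atan2(dot(b, cross(ap, cp)), dot(ap, cp)) % (2*math.pi)
assert abs(angle - psi) < 1e-12
```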
\begin{corollary}\label{cor:rotcolor}
For any coloring $\rho \in {\rm Col}_Q^x (T)$ such that $(\rho(0), \ldots, \rho(n) ) \in {(S^2_\psi)}^{n+1}$,
$$ (\rho(0) \rot{\phi}{ {x} }, \ldots, \rho(n) \rot{\phi}{ {x} })$$ defines a coloring in $ {\rm Col}_Q^x (T)$ for all $\phi \in [0, 2\pi)$.
\end{corollary}
\begin{remark}
{\rm
As $\psi$ varies, we have a continuous family $\{ S^2_\psi : \psi \in (0, 2 \pi) \}$ of quandles.
This leads to a continuous family of knot colorings by $\tilde{C}_\theta$,
where $\theta=\pi - \psi/2$.
The longitudinal mapping invariant, then, can be seen as a continuous family of invariants $\mathcal{L}^{\mbf{x}_\theta}_{{\rm SU(2)}}$ over $\theta$.
}
\end{remark}
Let $T$ be a tangle corresponding to a 2-bridge knot $K$.
Then we may choose a diagram of $T$ with two bridges, i.e., with two arcs $x_0$ and $x_1$
such that $x_0$ is the initial arc of $T$ and the colors of $x_0$ and $x_1$ uniquely determine the colors of all arcs of $T$.
Let $Q=S^2_\psi$, and fix $\mbf{x}=\mbf{i}=(1,0,0)$.
Thus for all elements $\rho \in {\rm Col}^{\mbf{x}}_Q(T)$, we have $\rho (x_0)=\mbf{x}$, as $x_0$ is the initial arc of $T$.
Let $E \subset S^2_\psi$ be half of the equator,
$$ E=\{ ( \cos \phi , \sin \phi, 0) :
0 \leq \phi \leq \pi \} . $$
\begin{lemma}\label{lem:circles}
Let $Q=S^2_\psi$ and let $\mbf{x} \in {\rm SU(2)}$, $T$, $x_0$, and $x_1$ be as above.
Suppose that the number $h$ of elements $\rho \in {\rm Col}^x_Q(T)$ such that $\rho(x_1) \in E$ and $\rho(x_1) \neq \rho (x_0)$
is finite.
Then ${\rm Col}^x_Q(T)$ is homeomorphic to $h$ copies of $S^1$.
\end{lemma}
\begin{proof}
This follows from Corollary~\ref{cor:rotcolor}.
\end{proof}
\begin{remark}
{\rm
In \cite{Klassen}, Klassen determined the non-abelian representations of knot groups in ${\rm SU(2)}$, up to the conjugation action, for torus knots and twist knots.
For each $\psi$, the set ${\rm Col}^x_Q(T) \cap E$, with $Q = S^2_\psi$, corresponds to Klassen's representatives.
Thus the sets ${\rm Col}^x_Q(T)$ are known from the paper \cite{Klassen}. We determine explicit colorings of $T(2,n)$ and the figure eight knot by $S_\psi^2$ in the next two subsections and compute the longitudinal mappings for these knots in the next section.}
\end{remark}
\subsection{Colorings of the torus knots $T(2,n)$ by $S^2_{\psi}$}
Let $n=2k+1$, and label the arcs of $T(2, n)$ by ${u}_i$ as in Figure~\ref{t2nu}.
For later convenience in computing the longitude, we use the notation
${u}_i={q}_{2i}$ and ${u}_{k+i}={q}_{2i-1}$ for $i=0, \ldots, k$ as depicted in Figure~\ref{t2nu}. Note that the subscripts on the $\mbf{u}$'s correspond to the labeling of the Wirtinger code (Definition~\ref{Wirtinger}).
\begin{figure}[htb]
\begin{center}
\includegraphics[width=2.5in]{t2nu}
\end{center}
\caption{Arc labeling diagram for $T(2,n)$}
\label{t2nu}
\end{figure}
Let $\mbf{p}_i$, $i=0, \ldots, n-1$ (subscripts taken modulo $n$), be a set of points on $S^2$ that are the vertices of a spherical regular $n$-gon
arranged in counterclockwise order,
for example, $$\mbf{p}_i=(\sqrt{1-r^2} \cos ( (2 \pi / n) i ), \sqrt{1-r^2} \sin ( (2 \pi / n) i ) , r )$$ where $ r \in (-1, 1)$.
Then the side lengths $d(\mbf{p}_i , \mbf{p}_{i+1})$ and the angles $\angle{\mbf{p}_{i-1}\mbf{p}_i \mbf{p}_{i+1} }$ are constant.
\begin{lemma} \label{lem:color_criterion}
Let $n=2k+1$.
Let $C_h$ be the map ${\cal A}(T(2,n)) \rightarrow S^2_{\psi}$ defined by $C_h( {q}_i ) = \mbf{p}_{hi}$ where the subscripts are
taken modulo $n$. If $\psi=\angle{\mbf{p}_{(i-1)h}\mbf{p}_{ih} \mbf{p}_{(i+1)h} }$, then $C_h$ defines a coloring of $T(2,n)$.
\end{lemma}
\begin{proof}
From Figure~\ref{t2nu}, $C_h( {q}_i)$, $i=0, \ldots, n-1$, gives rise to a non-trivial coloring if the following equations are satisfied:
$C_h ( {q}_{i-1} )*C_h ( {q}_i)= C_h( {q}_{i+1}) $ for all $i$, where the subscripts are taken modulo $n$.
Since the lengths $d( \mbf{p}_{ih} , \mbf{p}_{(i+1)h} )$ and the angles $\angle{\mbf{p}_{(i-1)h}\mbf{p}_{ih} \mbf{p}_{(i+1)h} }$ are constant,
the conditions for a coloring in Lemma~\ref{lem:color} are satisfied.
\end{proof}
\begin{figure}[h]
\begin{center}
\includegraphics[width=2.1in]{t27star71.png}
\includegraphics[width=2.1in]{t27star72.png}
\includegraphics[width=2.1in]{t27star73.png}
\end{center}
\caption{Spherical regular star polygons for $n = 7$}
\label{t25regpentagon}
\end{figure}
\begin{example}
{\rm
For $n=7$ and $h=1$, $2$, $3$ respectively,
the points corresponding to the colorings
are illustrated in Figure~\ref{t25regpentagon}.
For each $h=1,2,3$, the ranges of $\psi$ are computed from Lemma~\ref{lem:thetavalues} below as
$(5/7)\pi< \psi < (9/7)\pi$ ,
$(3/7)\pi< \psi < (11/7)\pi$, and
$(1/7)\pi< \psi < (13/7)\pi$.
}
\end{example}
\begin{lemma}\label{lem:thetavalues}
Let $n=2k+1$.
For $h=1, \ldots, k$,
there exists a regular star $n$-gon with vertices $\mbf{p}_{ih}$, $i=1, \ldots, n-1$, with $\psi=\angle{\mbf{p}_{(i-1)h}\mbf{p}_{ih} \mbf{p}_{(i+1)h} }$ if and only if $$ ( n-2h ) \pi /n < \psi < (n+ 2h ) \pi / n . $$
\end{lemma}
\begin{proof}
\begin{sloppypar}
Assume that there exists such a regular star $n$-gon with
$\psi=\angle{\mbf{p}_{(i-1)h}\mbf{p}_{ih} \mbf{p}_{(i+1)h} }$.
The angle $\angle{\mbf{p}_{(i-1)h}\mbf{p}_{ih} \mbf{p}_{(i+1)h} }$ decreases as the side length $d(\mbf{p}_{ih} , \mbf{p}_{(i+1)h})$ decreases,
and hence the lower bound of such $\psi$ is the corresponding angle $\angle{\mbf{p}_{(i-1)h}\mbf{p}_{ih} \mbf{p}_{(i+1)h} }$
for a planar, infinitesimal regular $n$-gon formed by the $\mbf{p}_{ih}$.
\end{sloppypar}
For the planar regular $n$-gon with vertices $\mbf{p}_i$, $i=0, \ldots, n-1$ in this cyclic order,
the angle $\angle \mbf{p}_{i-1} \mbf{p}_i \mbf{p}_{i+1}$ equals $ [(n-2)/n] \pi$ since there are $n-2$ triangles in a regular $n$-gon.
This angle $\angle \mbf{p}_{n-1} \mbf{p}_0 \mbf{p}_{1}$ at $\mbf{p}_0$
is divided equally into the angles $\angle{\mbf{p}_i \mbf{p}_0 \mbf{p}_{i+1}}$ inscribed by $\mbf{p}_i $ and $\mbf{p}_{i+1}$ for each $i$, hence
$\angle{\mbf{p}_i \mbf{p}_0 \mbf{p}_{i+1}}= \pi / n$.
The angles $\angle \mbf{p}_1 \mbf{p}_0 \mbf{p}_h$ and $\angle \mbf{p}_{n-h} \mbf{p}_0 \mbf{p}_{n-1}$ each consist of $(h-1)$ parts of $\pi / n$.
Hence the lower bound is computed as
$$\angle \mbf{p}_{h} \mbf{p}_{0} \mbf{p}_{n-h} = \angle \mbf{p}_{n-1} \mbf{p}_0 \mbf{p}_1 - ( \angle \mbf{p}_1 \mbf{p}_0 \mbf{p}_h + \angle \mbf{p}_{n-h} \mbf{p}_0 \mbf{p}_{n-1} )
= [ \ (n-2) - 2(h-1) \ ] \pi /n = ( n-2h ) \pi /n .$$
See Figure~\ref{angles}.
Since the bounds are symmetric about $\pi$, we obtain the upper bound of
$$ \pi+ (\pi - ( n-2h ) \pi /n ) = (n+ 2h ) \pi / n $$
as desired.
\end{proof}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=1.5in]{angles}
\end{center}
\caption{Angles of $C_h$}
\label{angles}
\end{figure}
\begin{corollary}\label{cor:T2ncoloring}
For $n = 2k+1$ there is a non-trivial coloring of $T(2,n)$ by $S_\psi^2$ if and only if
$$ ( n-2h ) \pi /n < \psi < (n+ 2h ) \pi / n, $$
for some $h = 1, \dots, k$.
\end{corollary}
\begin{proof} Immediate from Lemma~\ref{lem:color_criterion} and Lemma~\ref{lem:thetavalues}.
\end{proof}
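Corollary~\ref{cor:T2ncoloring} can be spot-checked numerically. The Python sketch below (helper names hypothetical) builds the vertices for $n=5$, $h=1$ at height $r=0.3$, confirms that the directed vertex angle $\psi$ is constant, that the coloring relations of Lemma~\ref{lem:color_criterion} close up, and that $\psi$ lies in the stated interval:

```python
import math

n, h, r = 5, 1, 0.3
s = math.sqrt(1 - r*r)
p = [(s*math.cos(2*math.pi*i/n), s*math.sin(2*math.pi*i/n), r) for i in range(n)]

dot = lambda x, y: sum(a*b for a, b in zip(x, y))
cross = lambda x, y: (x[1]*y[2]-x[2]*y[1], x[2]*y[0]-x[0]*y[2], x[0]*y[1]-x[1]*y[0])

def rot(u, psi, axis):  # Rodrigues rotation about a unit axis, right-hand rule
    cx, d = cross(axis, u), dot(axis, u)
    return tuple(u[i]*math.cos(psi) + cx[i]*math.sin(psi)
                 + axis[i]*d*(1-math.cos(psi)) for i in range(3))

def angle_at(a, b, c):  # directed spherical angle taking a to c around b
    ap = tuple(ai - dot(a, b)*bi for ai, bi in zip(a, b))
    cp = tuple(ci - dot(c, b)*bi for ci, bi in zip(c, b))
    return math.atan2(dot(b, cross(ap, cp)), dot(ap, cp)) % (2*math.pi)

angles = [angle_at(p[(i-1) % n], p[i], p[(i+1) % n]) for i in range(n)]
psi = angles[0]
assert all(abs(x - psi) < 1e-9 for x in angles)      # constant vertex angle
for i in range(n):                                   # coloring relations close up
    q = rot(p[(i-1) % n], psi, p[i])
    assert all(abs(a - b) < 1e-9 for a, b in zip(q, p[(i+1) % n]))
assert (n - 2*h)*math.pi/n < psi < (n + 2*h)*math.pi/n   # the stated bound
```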
\begin{remark}\label{rem:Ino}
{\rm
For fixed $n$ and $h$ as $\psi$ ranges over the interval
$(\, ( n-2h ) \pi /n , \pi \, ]$ continuously,
the polygons formed by the lengths $d( \mbf{p}_{ih} , \mbf{p}_{(i+1)h} )$
continuously change from an infinitesimal polygon to a polygon on the equator.
As $\psi$ approaches the lower bound $ ( n-2h ) \pi / n $, the polygon converges to a planar polygon.
The coloring condition holds for the Euclidean rotational quandles investigated in \cite{Ino}, in which Inoue proved
that there exists a non-trivial coloring by planar rotational quandles if and only if the Alexander polynomial has
a root on the unit circle $S^1 \subset {\mathbb C}$. The Alexander polynomial of $T(2,n)$ is a
factor of $x^{2n} - 1$.
}
\end{remark}
\begin{remark}\label{rem:t2nFox}
{\rm
In \cite{Klassen}, ${\rm SU(2)}$ representations up to conjugacy are studied.
Furthermore, in \cite{HK}, under certain conditions satisfied by $T(2,n)$ and twist knots,
the representations are deformations of dihedral representations at $\psi = \pi$.
These results are seen in the above continuous family of star polygons.
They start from infinitesimal planar polygons and converge to the equatorial ``polygons'' that correspond to
Fox colorings by dihedral quandles.
}
\end{remark}
\begin{proposition}
Let $Q=S^2_\psi$ and $T$ be a tangle of $T(2,n)$ as depicted in Figure~\ref{t2nu}.
For $n = 2k+1$ and $h=1, \ldots, k$, if $ ( n-2h ) \pi /n < \psi \leq ( n-2h +2 ) \pi /n $ then
$${\rm Col}^{\mbf{x}}_Q(T) \cong \sqcup_{i=1}^{h} S^1,$$
a disjoint union of $h$ circles.
\end{proposition}
\begin{proof}
By Lemma~\ref{lem:thetavalues}, if $\psi$ is in the stated range, then for any $h' \leq h$, $h'$ satisfies the condition stated
in Lemma~\ref{lem:thetavalues}.
In Figure~\ref{t2nu}, the arcs ${q}_0$ and ${q}_1$ are taken as $x_0$ and $x_1$ in Lemma~\ref{lem:circles}.
Hence in the notation in Lemma~\ref{lem:circles}, ${\rm Col}^x_Q(T) \cap E$ consists of $h$ points, and
the result follows from Lemma~\ref{lem:circles}.
\end{proof}
\subsection{Colorings of the figure eight knot by $S_\psi^2$}
In this subsection we describe the colorings of a figure eight knot by the spherical quandle $S^2_\psi$.
\begin{lemma}\label{lem:fig8color}
A sequence $U=(\mbf{u}_0, \mbf{u}_1, \mbf{u}_2, \mbf{u}_3)$ defines a coloring if and only if the following conditions are satisfied in $S^2_\psi$:
$ d( \mbf{u}_1 , \mbf{u}_2)= d( \mbf{u}_2 , \mbf{u}_0 )= d( \mbf{u}_0 , \mbf{u}_3), \ d( \mbf{u}_0 , \mbf{u}_1) = d( \mbf{u}_1 , \mbf{u}_3)= d( \mbf{u}_3 , \mbf{u}_2) , $ and
$\angle(\mbf{u}_0 \mbf{u}_2 \mbf{u}_1)=\angle(\mbf{u}_0 \mbf{u}_1 \mbf{u}_3)=\angle(\mbf{u}_2 \mbf{u}_3 \mbf{u}_1)=\angle(\mbf{u}_2 \mbf{u}_0 \mbf{u}_3)=\psi$.
\end{lemma}
\begin{proof}
Direct inspection of Figure~\ref{figeight} gives the following:
$$ \mbf{u}_0 * \mbf{u}_2 = \mbf{u}_1, \
\mbf{u}_0 * \mbf{u}_1 = \mbf{u}_3, \
\mbf{u}_2 * \mbf{u}_3 = \mbf{u}_1, \
\mbf{u}_2 * \mbf{u}_0=\mbf{u}_3, $$
where $\mbf{u}_4=\mbf{u}_0$ and the equalities are derived from the crossings.
By Lemma~\ref{lem:color} the statement follows.
\end{proof}
\begin{figure}[h]
\begin{center}
\includegraphics[width=2.1in]{figeight}
\end{center}
\caption{Colorings of the figure eight knot }
\label{figeight}
\end{figure}
\begin{lemma}\label{lem:fig8sol}
For $\psi = 2 \pi / 3$ and $\psi = 4 \pi / 3$ there is a unique solution $U$ to the equations in Lemma~\ref{lem:fig8color}
such that $$\mbf{u}_0=\mbf{x_\psi}=(1,0,0) = \mbf{i} \text{\ and \ } \mbf{u}_2 = (\cos(\beta), \sin(\beta),0) .$$ The solution $U$ forms a regular spherical tetrahedron. In this case $\beta = \arccos \left( - 1/3 \right).$
For $2 \pi / 3 < \psi < 4\pi/3$, there are two nontrivial solutions $U$ to the equations in Lemma~\ref{lem:fig8color}
such that $$\mbf{u}_0=\mbf{x_\psi}=(1,0,0) = \mbf{i} \text{\ and \ } \mbf{u}_2 = (\cos(\beta), \sin(\beta),0) .$$
The solutions are determined by the two values $\mbf{u}_2= ( \cos(\beta_i), \sin(\beta_i), 0)$, $i=1,2$, where
\begin{eqnarray*}
\beta_1 &=& \pi - \arccos\left( \frac{ -1+\sqrt{4\, \cos^2(\psi)-4\, \cos(\psi)-3 }}{ 2\, (\cos (\psi) -1 )} \right), \\
\beta_2 &=& \arccos\left( \frac{ 1+\sqrt{4\, \cos^2(\psi)-4\, \cos (\psi) -3 }}{ 2\, (\cos (\psi) -1 )} \right).
\end{eqnarray*}
\end{lemma}
\begin{proof} This comes directly from {\it Maple} computations. The Maple worksheets can be found at \cite{Maple}.
\end{proof}
\begin{remark} {\rm Note that by Corollary~\ref{cor:rotcolor} it suffices to restrict $\beta$ to the interval $(0,\pi]$.}
\end{remark}
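Reading the expressions for $\beta_1, \beta_2$ in Lemma~\ref{lem:fig8sol} with radicand $4\cos^2(\psi)-4\cos(\psi)-3$ and denominator $2(\cos(\psi)-1)$ (our reading of the grouping), one can check numerically that the two branches are real exactly for $\psi \in [2\pi/3, 4\pi/3]$, coincide at the endpoints with the tetrahedral angle $\arccos(-1/3)$, and at $\psi = \pi$ give $2\pi/5$ and $4\pi/5$, consistent with Fox $5$-colorings:

```python
import math

def beta1(psi):
    c = math.cos(psi)
    rad = max(0.0, 4*c*c - 4*c - 3)   # clamp tiny negatives at the endpoints
    return math.pi - math.acos((-1 + math.sqrt(rad)) / (2*(c - 1)))

def beta2(psi):
    c = math.cos(psi)
    rad = max(0.0, 4*c*c - 4*c - 3)
    return math.acos((1 + math.sqrt(rad)) / (2*(c - 1)))

# At psi = 2pi/3 and 4pi/3 the radicand vanishes and both branches coincide
# with the tetrahedral angle arccos(-1/3).
for psi in (2*math.pi/3, 4*math.pi/3):
    assert abs(4*math.cos(psi)**2 - 4*math.cos(psi) - 3) < 1e-12
    assert abs(beta1(psi) - math.acos(-1/3)) < 1e-6
    assert abs(beta2(psi) - math.acos(-1/3)) < 1e-6

# At psi = pi the two branches give the Fox 5-coloring angles 2pi/5 and 4pi/5.
assert abs(beta1(math.pi) - 2*math.pi/5) < 1e-9
assert abs(beta2(math.pi) - 4*math.pi/5) < 1e-9
```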
\begin{remark}
{\rm
{\it Maple} computations give the above exact solutions.
It was also pointed out by Shin Satoh (via personal communication) that the spherical laws of sine and cosine, together with
the area formula that a spherical triangle with angles $\alpha, \beta, \gamma$ has area
$\alpha + \beta + \gamma - \pi$, yield the solutions.
}
\end{remark}
\begin{figure}[h]
\begin{center}
\includegraphics[width=2.5in]{graph_beta1,beta2.png}
\end{center}
\caption{The graphs of $\beta_i$, $i=1,2$, representing colorings of the figure eight knot }
\label{fig8graph}
\end{figure}
\begin{remark}
{\rm
The solutions $\beta_i$, $i=1,2$, in Lemma~\ref{lem:fig8sol}
are plotted in Figure~\ref{fig8graph} for $\psi \in [2 \pi / 3, 4\pi/3]$. Each angle $\beta_i$ is $0$ outside of this interval; hence the colorings are trivial for $\psi$ outside this interval.
}
\end{remark}
\begin{figure}[h]
\begin{center}
\includegraphics[width=2.1in]{fig8.png}
\includegraphics[width=2.1in]{fig8A.png}
\includegraphics[width=2.1in]{fig8B.png}
\includegraphics[width=2.1in]{fig8C.png}
\end{center}
\caption{Colorings for the figure eight knot by $S_\psi^2$ for $\psi=2\pi / 3, 7\pi / 9$, $19 \pi / 20$ and $\pi.$}
\label{fig8}
\end{figure}
\begin{remark}
{\rm
The solutions $U$ in Lemma~\ref{lem:fig8sol} for $\psi=2\pi / 3, 7\pi / 9, 19 \pi / 20$ and $ \pi $
form vertices of spherical tetrahedra as depicted in Figure~\ref{fig8}.
We recall that the figure eight knot is non-trivially colorable by the tetrahedral quandle
(the solution $U$ at $\psi=2 \pi/3$) and the dihedral quandle $R_5$ (Fox 5-colorable).
Note also that since the minimal diagram in Figure~\ref{figeight} has only four arcs, four colors in $R_5$ are used for
non-trivial colorings. Up to mirror symmetry, there are two choices of the element $\mbf{u}_2$ in $R_5$ for a fixed
element $\mbf{u}_0$.
As in Remark~\ref{rem:t2nFox}, there is a continuous family of solutions as $\psi$ varies from $2 \pi /3$ to $\pi$.
A single regular tetrahedral coloring bifurcates into two branches of solutions as in Lemma~\ref{lem:fig8sol},
and these converge to the two Fox colorings, as described in \cite{HK}.
Animations of this situation can be found at \url{http://shell.cas.usf.edu/~saito/SphericalQuandle/}.
}
\end{remark}
\begin{remark}
{\rm
More generally, Klassen~\cite{Klassen} described the representations of knot groups in ${\rm SU(2)}$
for twist knots ${\rm Tw}_m$, $m>0$, and proved that, up to conjugation, the space of non-abelian representations consists of $m/2$ circles if $m$ is even,
and of $\lfloor m/2 \rfloor$ circles and a single open arc if $m$ is odd.
The cases $m=1$ and $m=2$ correspond to the trefoil and the figure eight knot, respectively.
}
\end{remark}
\begin{remark}
{\rm
It is well known that the Alexander polynomial of ${\rm Tw}_m$ for odd $m$ is given by
$\Delta_{{\rm Tw}_m}(t)=(m+1) t^2 - 2mt + (m+1)$.
Direct calculations show that $\Delta_{{\rm Tw}_m}(t)$ has a pair of conjugate roots on $S^1 \subset {\mathbb C}$, and by \cite{Ino},
there is a nontrivial coloring by a planar rotational quandle for
$\psi={\rm arg} (\alpha)$, where $\alpha$ is such a root. Let $\alpha$ be the root with smaller argument.
Then for odd $m$ there is a non-trivial coloring of ${\rm Tw}_m$ by $S^2_\psi$ for
${\rm arg} (\alpha) < \psi < {\rm arg} (\bar{\alpha}) $.
}
\end{remark}
\section{Longitudinal Mapping Invariant Values }
In this section we determine the invariant values
$ \mathcal{L}_\theta$ for the torus knots $T(2,n)$ and the figure eight knot.
\subsection{Torus knots $T(2,n)$}
We use the labeling of the diagram of $T(2,n)$ in Figure~\ref{t2nu}, where
$n=2k+1$ is odd.
\begin{lemma} For $n = 2k+1$, $T(2,n)$ is non-trivially colored by $\tilde{C}_\theta$ if and only if, for some $h = 1, \dots, k$,
$${\frac { \left( n-2h \right) \pi}{2n}} < \theta <
{\frac { \left( n+2h \right) \pi}{2n}}.$$
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:QuandleIsomorphism} for $0 < \theta < \pi$ the quandle $S_{\psi}^2$, $\psi =2\pi -2\theta$, is isomorphic to the conjugacy class $\tilde{C}_\theta = \{ \Exp{\theta}{u}: \ \mathbf{u} \in S^2 \}$ considered as a quandle under conjugation:
$\mbf{p*q=q^{-1} p q}$. Clearly the isomorphism $\mbf{u} \mapsto \Exp{\theta}{u}$ takes a coloring to a coloring. By Corollary~\ref {cor:T2ncoloring}
for $n = 2k+1$ there is a non-trivial coloring of $T(2,n)$ by $S_\psi^2$ if and only if for some $h = 1, \dots, k$ we have
$$ ( n-2h ) \pi /n < \psi < (n+ 2h ) \pi / n, $$
since $\psi =2\pi -2\theta$, this is equivalent to
$${\frac { \left( n-2h \right) \pi}{2n}} < \theta <
{\frac { \left( n+2h \right) \pi}{2n}}.$$
\end{proof}
\begin{lemma}\label{lem:toruscolor}
Let $n=2k+1$, $k\geq 1$.
Let $G$ be a group.
Let $q_i$, $i=0, 1, \ldots, n-1$, be the colors of the arcs, as depicted in Figure~\ref{t2nu},
of a coloring of the diagram by $G$.
Then the $q_i $ satisfy
$q_{i+1}=q_{i}^{-1} q_{i-1} q_{i}$ for $i=1, \ldots, n-1$ and
$q_{n}=q_0$.
\end{lemma}
We thank Razvan Teodorescu for the idea of the following proof.
\begin{lemma}\label{lem:longitude}
Let $n=2k+1$, $k\geq 1$,
and $G$ be a group.
For a coloring $C$ of the diagram of $T(2,n)$ in Lemma~\ref{lem:toruscolor},
let $q=q_0 q_1$.
Then
the longitude is given by
$\mathcal{L}(C) = q_0^{-2n} q^n$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:toruscolor},
we have $q_i q_{i+1} = q_{i+1} q_{i+2} $ for $i=0, \ldots, n-2$, and $q_{n-1} q_0=q_0 q_1$.
Note that $q= q_i q_{i+1} $ for all $i$.
For any coloring $C$,
from Figure~\ref{t2nu}, we compute the longitude as
$$\mathcal{L}(C) = q_0^{-n} \ ( q_1 q_3 \cdots q_{2k -1} ) \ ( q_0 q_2 \cdots q_{2k} ) .$$
To evaluate this, we compute
$$q_0^{2n} \mathcal{L}(C) = q_0^{n} \ ( q_1 q_3 \cdots q_{2k -1} ) \ ( q_0 q_2 \cdots q_{2k} ) .$$
Since $q_0 q_1= q_1 q_2$, we have
\begin{eqnarray*}
q_0^{2n} \mathcal{L}(C) &=& (q_0 \cdots q_0) \ ( q_0 q_1 ) \ ( q_3 \cdots q_{2k -1} ) \ ( q_0 q_2 \cdots q_{2k} ) \\
&=& (q_0 \cdots q_0) \ ( q_1 q_2 ) \ ( q_3 \cdots q_{2k -1} ) \ ( q_0 q_2 \cdots q_{2k} ) .
\end{eqnarray*}
Further applying $q_0 q_1= q_1 q_2$ and $q_2 q_3= q_3 q_4$, we obtain
\begin{eqnarray*}
&=& (q_0 \cdots q_0) \ ( q_0 q_1 ) \ ( q_2 q_3 ) \ ( q_5 \cdots q_{2k -1} ) \ ( q_0 q_2 \cdots q_{2k} ) \\
&=& (q_0 \cdots q_0) \ ( q_1 q_2 )\ ( q_3 q_4 ) \ ( q_5 \cdots q_{2k -1} ) \ ( q_0 q_2 \cdots q_{2k} ) .
\end{eqnarray*}
Inductively we obtain
$$ (q_0 \cdots q_0) \ ( q_1 q_2 q_3 q_4 \cdots q_{2k} ) \ ( q_0 q_2 \cdots q_{2k} ) .$$
There are $k+1$ copies of $q_0$ in the first factor, $(q_i)_{i=1}^{2k}$ in the second factor, and consecutive even terms in the third factor.
Then we continue with
\begin{eqnarray*}
&=& (q_0 \cdots q_0)\ ( q_1 q_2 q_3 q_4 \cdots q_{2k-1} )\ (q_{2k} q_0) \ ( q_2 \cdots q_{2k} ) \\
&=& (q_0 \cdots q_0)\ ( q_1 q_2 q_3 q_4 \cdots q_{2k-1} )\ (q_0 q_1 ) \ ( q_2 \cdots q_{2k} ) \\
&=& (q_0 \cdots q_0)\ ( q_1 q_2 q_3 q_4 \cdots q_{2k-1} )\ (q_0 q_1 q_2 ) \ ( q_3 \cdots q_{2k} ) \cdots.
\end{eqnarray*}
In the last line, the middle run beginning with $(q_0 q_1)$ keeps absorbing the leading terms of the right factor one at a time.
Inductively, we obtain
$ q_0^{2n} \mathcal{L}(C) = ( \prod_{i=0}^{n-1} q_i )^2 = q^n$.
Hence we obtain $ \mathcal{L}(C) = q_0^{-2n} q^n $.
\end{proof}
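Lemma~\ref{lem:longitude} can be verified numerically for the trefoil ($n=3$, $k=1$) at the dihedral coloring, reading the odd-index product in the proof as $q_1 q_3 \cdots q_{2k-1}$ so that the longitude word has $n$ letters. A Python sketch with hypothetical helper names:

```python
import math

def qmul(p, q):  # quaternion product on 4-tuples (a, b, c, d)
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qinv(q):  # inverse of a unit quaternion
    return (q[0], -q[1], -q[2], -q[3])

def qpow(q, m):
    r = (1.0, 0.0, 0.0, 0.0)
    for _ in range(abs(m)):
        r = qmul(r, q)
    return r if m >= 0 else qinv(r)

# Dihedral (theta = pi/2) coloring of the trefoil: pure unit quaternions at 120 degrees.
n, k = 3, 1
qs = [(0.0, math.cos(2*math.pi*i/n), math.sin(2*math.pi*i/n), 0.0) for i in range(n)]
for i in range(n):  # q_{i+1} = q_i^{-1} q_{i-1} q_i, indices mod n
    rhs = qmul(qinv(qs[i]), qmul(qs[(i-1) % n], qs[i]))
    assert all(abs(a - b) < 1e-12 for a, b in zip(qs[(i+1) % n], rhs))

# Longitude word q_0^{-n} (q_1 q_3 ... q_{2k-1}) (q_0 q_2 ... q_{2k}); for n = 3
# this is q_0^{-3} q_1 q_0 q_2, and it agrees with the closed form q_0^{-2n} q^n.
word = qmul(qpow(qs[0], -n), qmul(qs[1], qmul(qs[0], qs[2])))
q = qmul(qs[0], qs[1])
closed = qmul(qpow(qs[0], -2*n), qpow(q, n))
assert all(abs(a - b) < 1e-12 for a, b in zip(word, closed))
assert all(abs(a - b) < 1e-12 for a, b in zip(word, (1.0, 0.0, 0.0, 0.0)))
```

Both expressions evaluate to $1$ here, in accordance with Theorem~\ref{thm:t2n} at $\theta = \pi/2$.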
\begin{figure}[htb]
\begin{center}
\includegraphics[width=3.2in]{t2nloops}
\end{center}
\caption{Colored diagram for $T(2,n)$ with loops}
\label{t2nloops}
\end{figure}
\begin{remark}
{\rm
\begin{sloppypar}
In the proof of Lemma~\ref{lem:longitude}, once the computation of
$ \mathcal{L}(C) = q_0^{-2n} ( \prod_{i=0}^{n-1} q_i )^2 $ is obtained, we found a diagrammatic method of
obtaining the same formula. Specifically, from the diagram in Figure~\ref{t2nloops}, we can read off the longitude directly
as $q_0^{2n} \mathcal{L}(C) = ( \prod_{i=0}^{n-1} q_i )^2 $.
\end{sloppypar}
}
\end{remark}
It is noteworthy that in the following theorem, the longitudinal mapping depends only on $\theta$, and not
on the different colorings $C$ corresponding to $\theta$.
\begin{theorem}\label{thm:t2n}
For any non-trivial coloring $C$ of $T(2,n)$, the value of the longitudinal mapping for $({\rm SU(2)}, \mbf{x})$ where $\mbf{x} =\Exp{\theta}{i}$ is given by
$$\mathcal{L}(C) = \Exp{(\pi-2 n \theta)}{i} =-\cos ( 2n \theta) + \sin(2 n \theta)\mathbf{i},$$
and for the mirror image $m(T(2,n))$ the value of the longitudinal mapping is given by
$$\mathcal{L}(C) = \Exp{(2n \theta-\pi )}{i} =-\cos ( 2n \theta) - \sin(2 n \theta)\mathbf{i}.$$
\end{theorem}
\begin{proof}
In the case of $G={\rm SU(2)}$ in Lemma~\ref{lem:longitude}, we show that $\mbf{q}^{n}=-1$, where
$\mbf{q}=\mbf{q}_0 \mbf{q}_1= \mbf{q}_i \mbf{q}_{i+1}$ for all $i$.
Since
$$\mbf{q}^{-1} \mbf{q}_i \mbf{q} = (\mbf{q}_{i} \mbf{q}_{i+1})^{-1} \mbf{q}_i (\mbf{q}_i \mbf{q}_{i+1} ) = \mbf{q}_{i+2}, $$ we have
$\mbf{q}^{-n} \mbf{q}_i \mbf{q}^n = \mbf{q}_i$ for every $i$.
Then $\mbf{q}^n$ is in $C(\mbf{q}_i)$ for every $i$.
For a non-trivial coloring, there are at least two $ \mbf{q}_i $ and $ \mbf{q}_j$ that do not commute,
hence by Lemma~\ref{lem:Lambda}, $C( \mbf{q}_i ) \cap C( \mbf{q}_j) = \{ \pm 1\}$,
so that
$\mbf{q}^n= \pm 1$.
For each $\theta$, we have $\mbf{q}^n=\pm 1$, and $\mbf{q}^n$ is continuous with respect to $\theta$.
By Corollary~\ref{cor:conj}, for $\theta=\pi/2$, we have
$S_{\pi}^2$ isomorphic to the conjugacy class $\tilde{C}_{\pi/2} $.
In this case, the colorings by $S_{\pi}^2$ up to the action of rotations about $\mbf{x}$
(cf. Corollary~\ref{cor:rotcolor})
are equivalent to Fox colorings by a dihedral quandle $R_m$ for some $m$.
In \cite{Klassen}, it was shown that the non-abelian representations of knot groups of torus knots $T(r,s)$ up to conjugacy consist of $(r-1)(s-1)/2$ open arcs. In our case the result implies that
the set of non-trivial colorings of $T(2, n)$ consists of $n-1$ open arcs each of which contains
a coloring by the dihedral quandle $R_n$.
Hence the fact $\mbf{q}^n=-1$ follows if it is proved for colorings by $\tilde{C}_{\pi/2}$.
Let $\theta=\pi/2$; then $\mbf{q}_0= \Exp{\frac{\pi}{2}}{i} = {\mathbf i}$. In this case $\mbf{q}_1=\cos (2 \pi m / n) {\mathbf i} + \sin (2 \pi m / n) {\mathbf j}$
for some $m$.
Then we compute
$$\mbf{q}= \mbf{q}_0 \mbf{q}_1 = - \cos (2 \pi m / n) + \sin (2 \pi m / n) {\mathbf k} = \Exp{(\pi - 2 \pi m / n) }{\mathbf k}.$$
Hence we obtain
$$\mbf{q}^n= \Exp{ ( \pi - 2 \pi m / n) n } {\mathbf k} = \Exp{\pi (n - 2 m ) } {\mathbf k} = -1$$
since $n$ is odd, as desired.
The result for $m(T(2,n))$ follows immediately from the result for $T(2,n)$ via Proposition~\ref{mirror} and the known fact that $r(T(2,n)) = T(2,n)$.
\end{proof}
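As a numerical sanity check of Theorem~\ref{thm:t2n} away from the dihedral point $\theta = \pi/2$, the Python sketch below (hypothetical helper names) builds the $h=1$ coloring of $T(2,5)$ from a spherical pentagon at height $r=0.3$, confirms the Wirtinger relations and $\mbf{q}^n = -1$, and checks the closed form of the longitude, with the axis $\mathbf{i}$ replaced by the first vertex of the pentagon:

```python
import math

def qmul(p, q):  # quaternion product on 4-tuples (a, b, c, d)
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qinv(q):
    return (q[0], -q[1], -q[2], -q[3])

def qpow(q, m):
    r = (1.0, 0.0, 0.0, 0.0)
    for _ in range(abs(m)):
        r = qmul(r, q)
    return r if m >= 0 else qinv(r)

def qexp(theta, u):  # Exp(theta){u}
    s = math.sin(theta)
    return (math.cos(theta), s*u[0], s*u[1], s*u[2])

dot = lambda x, y: sum(a*b for a, b in zip(x, y))
cross = lambda x, y: (x[1]*y[2]-x[2]*y[1], x[2]*y[0]-x[0]*y[2], x[0]*y[1]-x[1]*y[0])

def angle_at(a, b, c):  # directed spherical angle taking a to c around b
    ap = tuple(ai - dot(a, b)*bi for ai, bi in zip(a, b))
    cp = tuple(ci - dot(c, b)*bi for ci, bi in zip(c, b))
    return math.atan2(dot(b, cross(ap, cp)), dot(ap, cp)) % (2*math.pi)

n, r = 5, 0.3
s = math.sqrt(1 - r*r)
p = [(s*math.cos(2*math.pi*i/n), s*math.sin(2*math.pi*i/n), r) for i in range(n)]
psi = angle_at(p[n-1], p[0], p[1])
theta = math.pi - psi/2                  # psi = 2 pi - 2 theta
qs = [qexp(theta, v) for v in p]         # a genuine non-dihedral coloring

for i in range(n):                       # Wirtinger relations close up
    rhs = qmul(qinv(qs[i]), qmul(qs[(i-1) % n], qs[i]))
    assert all(abs(a - b) < 1e-9 for a, b in zip(qs[(i+1) % n], rhs))

q = qmul(qs[0], qs[1])
assert all(abs(a - b) < 1e-9 for a, b in zip(qpow(q, n), (-1.0, 0.0, 0.0, 0.0)))
L = qmul(qpow(qs[0], -2*n), qpow(q, n))
expected = qexp(math.pi - 2*n*theta, p[0])   # Exp(pi - 2n theta) about the axis of q_0
assert all(abs(a - b) < 1e-9 for a, b in zip(L, expected))
```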
\subsection{Figure eight knot}
The following lemma is immediate from Lemma~\ref{lem:fig8sol}, the fact that $S_\psi^2$ is isomorphic to $ \tilde{C}_\theta$ when $\psi = 2\pi-2\theta$, and the fact that the isomorphism $\mbf{u} \mapsto \Exp{\theta}{u}$ takes colorings to colorings.
\begin{lemma} The figure 8 knot is non-trivially colored by $\tilde{C}_\theta$ if and only if
$$\frac{\pi}{3} \le \theta \le \frac{2\pi}{3}.$$
In this case there are two solutions for each $\theta \in ( \frac{\pi}{3} , \frac{2\pi}{3})$,
corresponding to the values of $\beta_1$ and $\beta_2$ in Lemma~\ref{lem:fig8sol}.
The colorings for $\theta = \frac{\pi}{3}$ and $\theta = \frac{2\pi}{3}$ are the same.
\end{lemma}
Let $C(i) = \mbf{u}_i$ be a coloring for the figure 8 knot by $\tilde{C}_\theta$ for $\theta \in [ \pi / 3, 2 \pi / 3 ]$,
as shown in Figure~\ref{figeight}. Then from the definition of the longitude
we obtain the following.
\begin{lemma}
$\mathcal{L}_\theta(C) = \mbf{u}_2 \mbf{u}_3^{-1} \mbf{u}_0 \mbf{u}_1^{-1}$.
\end{lemma}
{\it Maple} computations give the following.
\begin{proposition} If $C$ is a coloring of the figure 8 knot by $\tilde{C}_\theta$ then
$${\mathcal L}_\theta(C) = (\cos \left( 4\,\theta \right) -\cos \left( 2\,\theta \right) -1)
\pm \sqrt {-1+2\,\cos \left( 4\,\theta \right) -4\,\cos \left( 2\,\theta
\right) } \, ( \sin \left( 2\,\theta \right) ) \, {\bf i}.$$ The sign $\pm$ depends on the choice of $\beta_i$, $i = 1,2,$ in Lemma~\ref{lem:fig8sol}.
\end{proposition}
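The value at $\theta = 2\pi/3$ (the regular tetrahedral coloring, with $\psi = 2\pi/3$ and $\beta = \arccos(-1/3)$) can be confirmed independently: the formula above gives ${\mathcal L}_\theta(C) = \cos(4\theta) - \cos(2\theta) - 1 = -1$ there, with vanishing imaginary part. A Python sketch with hypothetical helper names:

```python
import math

def qmul(p, q):  # quaternion product on 4-tuples (a, b, c, d)
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qinv(q):
    return (q[0], -q[1], -q[2], -q[3])

def qexp(theta, u):  # Exp(theta){u}
    s = math.sin(theta)
    return (math.cos(theta), s*u[0], s*u[1], s*u[2])

theta = 2*math.pi/3                 # psi = 2 pi - 2 theta = 2 pi / 3
beta = math.acos(-1/3)
Q0 = qexp(theta, (1.0, 0.0, 0.0))
Q2 = qexp(theta, (math.cos(beta), math.sin(beta), 0.0))
Q1 = qmul(qinv(Q2), qmul(Q0, Q2))   # u0 * u2 = u1 as conjugation in C_theta
Q3 = qmul(qinv(Q1), qmul(Q0, Q1))   # u0 * u1 = u3
# Remaining crossing relations: u2 * u3 = u1 and u2 * u0 = u3.
for lhs, rhs in ((qmul(qinv(Q3), qmul(Q2, Q3)), Q1),
                 (qmul(qinv(Q0), qmul(Q2, Q0)), Q3)):
    assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))

L = qmul(Q2, qmul(qinv(Q3), qmul(Q0, qinv(Q1))))
assert all(abs(a - b) < 1e-9 for a, b in zip(L, (-1.0, 0.0, 0.0, 0.0)))
```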
The longitude ${\mathcal L}_\theta (C) $ may be written as $ {\bf e}^{\phi {\bf i} } $,
where $\phi$ is given in terms of the two-argument $\arctan$ by
$$ \phi = \arctan \left( \pm \sqrt {4\, \left( \cos \left( 2\,\theta \right)
\right) ^{2}-4\,\cos \left( 2\,\theta \right) -3} \, ( \sin \left( 2\,
\theta \right) ) \, ,\ 2\, \left( \cos \left( 2\,\theta \right) \right) ^{2}
-\cos \left( 2\,\theta \right) -2 \right) . $$
The graph of $\phi$ as a function of $\theta$ is given in Figure~\ref{Longitude1_Longitude2}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=2.8in]{Longitude1_Longitude2.png}
\end{center}
\caption{The graph of $\phi$ where ${\mathcal L}_\theta (C) = {\bf e}^{\phi {\bf i} }$ for the figure 8 knot.}
\label{Longitude1_Longitude2}
\end{figure}
\section{Concluding Remarks}
In this paper, the knot coloring polynomial defined by Eisermann~\cite{Eis-colpoly} with finite quandles
is generalized to topological quandles as the longitudinal mapping invariant of long knots, which in turn can be
thought of as a generalization of the quandle 2-cocycle invariant defined in \cite{CJKLS} for finite quandles.
Such generalizations for topological quandles have long been called for, and we propose one in this paper.
The invariant values are concretely evaluated for torus knots of closed 2-braids $T(2,n)$ and the figure eight knot.
The following questions, for example, remain to be investigated:
determine the coloring spaces for other knots, in particular knots with more than 2 bridges;
determine the $\theta$-values with non-trivial colorings;
determine the invariant values;
investigate relations to other invariants;
investigate continuous cohomology theories of topological quandles, and relate them to the invariant
discussed in this paper.
\section{Introduction}
We are interested in training reinforcement learning (RL) agents
to use the Internet (e.g., to book flights or reply to emails) by directly
controlling a web browser. Such systems could expand the capabilities
of AI personal assistants \citep{stone2014amazon}, which are currently limited to interacting
with machine-readable APIs, rather than the much larger world of human-readable
web interfaces.
Reinforcement learning agents could learn to accomplish tasks using these
human-readable web interfaces through trial-and-error \citep{sutton1998reinforcement}.
But this learning process can be very slow in tasks with sparse reward,
where the vast majority of naive action sequences lead to no reward signal \citep{vecerik2017leveraging, nair2017overcoming}.
This is the case for many web tasks, which involve a large action space (the
agent can type or click anything) and require a well-coordinated sequence of actions to succeed.
A common countermeasure in RL is to pre-train the agent to mimic expert
demonstrations via behavioral cloning \citep{pomerleau1991efficient, kim2013learning}, encouraging it to take similar actions in
similar states. But in environments with diverse and complex states such as websites,
demonstrations may cover only a small slice of the state space, and it is
difficult to generalize beyond these states (overfitting). Indeed, previous work has found
that warm-starting with behavioral cloning often fails to improve over pure RL \citep{shi2017wob}.
At the same time, simple strategies to combat overfitting (e.g. using fewer
parameters or regularization) cripple the
policy's flexibility \citep{bitzer2010using}, which is required for complex spatial and structural reasoning in user interfaces.
In this work, we propose a different method for leveraging demonstrations.
Rather than training an agent to directly mimic them, we use demonstrations to
\emph{constrain exploration}. By pruning away bad exploration directions, we
can accelerate the agent's ability to discover sparse rewards. Furthermore,
because the agent
is not directly exposed to demonstrations, we are free to use a sophisticated
neural policy with a reduced risk of overfitting.
\begin{figure}\center
\begin{minipage}[c]{0.56\textwidth}
\includegraphics[width=\textwidth]{approach-overview}
\end{minipage}
\hspace{1.5em}
\begin{minipage}[c]{0.38\textwidth}\small
\textbf{Preprocessing:}
\begin{algorithmic}
\ForAll{demonstrations $d$}
\State Induce workflow lattice from $d$
\EndFor
\end{algorithmic}
\vspace{.5em}
\textbf{Every iteration:}
\begin{algorithmic}
\State Observe an initial environment state
\State $\piw$ samples a workflow from a lattice
\State Roll out an episode $e$ from the workflow
\State Use $e$ to update $\piw$
\If{$e$ gets reward $+1$}
\State Add $e$ to replay buffer
\EndIf
\end{algorithmic}
\vspace{.5em}
\textbf{Periodically:}
\begin{algorithmic}
\If{replay buffer size $>$ threshold}
\State Sample episodes from replay buffer
\State Update $\pin$ with sampled episodes
\EndIf
\State Observe an initial environment state
\State $\pin$ rolls out episode $e$
\State Update $\pin$ and critic $V$ with $e$
\If{$e$ gets reward $+1$}
\State Add $e$ to replay buffer
\EndIf
\end{algorithmic}
\end{minipage}
\caption{
\emph{Workflow-guided exploration (WGE).} After inducing workflow lattices from demonstrations,
the workflow policy $\piw$ performs exploration by sampling episodes from sampled workflows.
Successful episodes are saved to a replay buffer,
which is used to train the neural policy $\pin$.
}\label{fig:approach-overview}
\end{figure}
To constrain exploration, we employ the notion of a ``workflow'' \citep{deka2016erica}. For instance,
given an expert demonstration of how to forward an email, we might infer the
following workflow:
\begin{center}
Click an email title
$\to$ Click a ``Forward'' button\\
$\to$ Type an email address into a textbox
$\to$ Click a ``Send'' button
\end{center}
This workflow is more \emph{high-level} than an actual policy: it does not tell us exactly
which email to click or which textbox to type into, but it helpfully constrains
the set of actions at each time step. Furthermore, unlike a policy, it does not depend
on the environment state: it is just a sequence of steps that can be followed blindly.
In this sense, a workflow is \emph{environment-blind}.
The actual policy certainly should not be environment-blind,
but for exploration, we found environment-blindness to be a good inductive bias.
To leverage workflows, we propose the \emph{workflow-guided exploration} (WGE) framework
as illustrated in \reffig{approach-overview}:
\begin{enumerate}
\item For each demonstration, we extract a lattice of workflows that are
consistent with the actions observed in the demonstration (\refsec{workflows}).
\item We then define a \emph{workflow exploration policy} $\piw$ (\refsec{workflow-policy}),
which explores by first selecting a
workflow, and then sampling actions that fit the workflow. This policy
gradually learns which workflow to select through
reinforcement learning.
\item Reward-earning episodes discovered during exploration enter a replay buffer,
which we use to train a more powerful and expressive neural network policy $\pin$
(\refsec{neural-policy}).
\end{enumerate}
A key difference between the web and traditional RL domains such as robotics
\citep{atkeson1997robot} or game-playing \citep{bellemare2013arcade} is that the state space involves a mix of
structured (e.g. HTML) and unstructured inputs (e.g. natural language and
images). This motivates us to propose a novel neural network policy (\domnet),
specifically designed to perform flexible relational reasoning over the tree-structured HTML
representation of websites.
We evaluate \emph{workflow-guided exploration} and \domnet
on a suite of web interaction tasks, including the MiniWoB benchmark of
\citep{shi2017wob}, the flight booking interface for Alaska Airlines, and a
new collection of tasks that we constructed to
study additional challenges such as noisy environments, variation in
natural language, and longer time horizons.
Compared to previous results on MiniWoB \citep{shi2017wob}, which used 10
minutes of demonstrations per task (approximately 200 demonstrations on
average), our system achieves much higher success rates and establishes new
state-of-the-art results with only 3--10 demonstrations per task.
\section{Setup}
In the standard reinforcement learning setup,
an agent learns a policy $\pi(a|s)$ that
maps a state $s$ to a probability distribution
over actions $a$.
At each time step $t$,
the agent observes an environment state $s_t$ and chooses an action $a_t$,
which leads to a new state $s_{t+1}$ and a reward $r_t = r(s_t, a_t)$.
The goal is to maximize the expected return $\E[R]$,
where $R = \sum_t \gamma^t r_{t+1}$ and $\gamma$ is a discount factor.
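As a minimal sketch of this quantity, the discounted return of a finite episode can be accumulated backwards over its reward list (the function name and list encoding are our own):

```python
def discounted_return(rewards, gamma=0.99):
    """Return G = sum_t gamma^t * r_t, accumulated from the end of the
    episode backwards."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# Sparse-reward episode: all reward arrives at the final step, so the
# return from the start is just the discounted final reward.
episode_rewards = [0.0] * 9 + [1.0]
assert abs(discounted_return(episode_rewards) - 0.99 ** 9) < 1e-12
```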
Typical reinforcement learning agents
learn through trial-and-error:
rolling out episodes $(s_1, a_1, \dots, s_T, a_T)$
and adjusting their policy based on the results of those episodes.
We focus on settings where the reward is delayed and sparse.
Specifically, we assume that (1) the agent receives reward only at
the end of the episode, and (2) the reward is high (e.g., $+1$)
for only a small fraction of possible trajectories and is uniformly low (e.g., $-1$) otherwise.
With large state and action spaces,
it is difficult for the exploration policy to find episodes with positive rewards,
which prevents the policy from learning effectively.
We further assume that the agent is given a goal $g$,
which can either be
a structured key-value mapping
(e.g., \{task: forward, from: Bob, to: Alice\}) or
a natural language utterance
(e.g., \emph{``Forward Bob's message to Alice''}).
The agent's state $s$ consists of the goal $g$ and the current state of the web page,
represented as a tree of elements (henceforth \emph{DOM tree}).
We restrict the action space to click actions \texttt{Click(e)}
and type actions \texttt{Type(e,t)},
where \texttt{e} is a leaf element of the DOM tree,
and \texttt{t} is a string from the goal $g$ (a value from a structured goal,
or consecutive tokens from a natural language goal).
\reffig{workflow-lattice} shows an example episode for an email processing task.
The agent receives $+1$ reward if the task is completed correctly,
and $-1$ reward otherwise.
\section{Inducing workflows from demonstrations}\label{sec:workflows}
Given an expert demonstration $d = (\D s_1, \D a_1, \dots, \D s_T, \D a_T)$,
we would like to explore actions $a_t$ that are ``similar'' to the demonstrated actions $\D a_t$.
Workflows capture this notion of similarity
by specifying a set of similar actions at each time step.
Formally, a workflow $z_{1:T}$ is a sequence of workflow steps,
where each step $z_t$
is a function that takes a state $s_t$ and returns
a constrained set $z_t(s_t)$ of similar actions.
We use a simple compositional constraint language (\refapp{constraint-language}) to describe workflow steps.
For example,
with $z_t = \texttt{Click(Tag("img"))}$,
the set $z_t(s_t)$ contains click actions on any DOM element in $s_t$ with tag \texttt{img}.
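As a concrete (hypothetical) rendering of this semantics, a workflow step can be modeled as a function from a state to a set of actions; the class and function names below are our own illustration, not the paper's constraint-language implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DomElement:
    tag: str
    text: str = ""

@dataclass(frozen=True)
class Click:
    element: DomElement

def click_tag(tag):
    """Workflow step Click(Tag(tag)): maps a state (here, a list of leaf
    DOM elements) to the set of click actions on elements with that tag."""
    def step(state):
        return {Click(e) for e in state if e.tag == tag}
    return step

# Click(Tag("img")) constrains the action set to clicks on <img> elements.
state = [DomElement("img"), DomElement("div", "Send"), DomElement("img", "logo")]
z_t = click_tag("img")
assert len(z_t(state)) == 2
```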
\begin{figure}\center
\includegraphics[width=\columnwidth]{workflow-lattice}
\caption{
From each demonstration, we induce a workflow lattice
based on the actions in that demonstration.
Given a new environment, the workflow policy samples a workflow
(a path in the lattice, as shown in bold) and then samples actions that fit
the steps of the workflow.
}\label{fig:workflow-lattice}
\end{figure}
We induce a set of workflows from each demonstration
$d = (\D s_1, \D a_1, \dots, \D s_T, \D a_T)$ as follows.
For each time step $t$, we enumerate a set $Z_t$ of all possible workflow steps $z_t$
such that $\D a_t \in z_t(\D s_t)$. The set of workflows
is then the cross product $Z_1 \times \dots \times Z_T$ of the steps.
We can represent the induced workflows as paths
in a \emph{workflow lattice} as illustrated in \reffig{workflow-lattice}.
To handle noisy demonstrations where some actions are unnecessary
(e.g., when the demonstrator accidentally clicks on the background),
we add shortcut steps
that skip certain time steps.
We also add shortcut steps for any consecutive actions
that can be collapsed into a single equivalent action
(e.g., collapsing two type actions on the same DOM element
into a single \texttt{Type} step).
These shortcuts allow the lengths of the induced workflows to differ from the length of the demonstration.
We henceforth ignore these shortcut steps to simplify the notation.
The induced workflow steps are not equally effective.
For example in \reffig{workflow-lattice},
the workflow step \texttt{Click(Near(Text("Bob")))} (Click an element near text ``Bob'')
is too specific to the demonstration scenario,
while \texttt{Click(Tag("div"))} (Click on any \texttt{<div>} element)
is too general and covers too many irrelevant actions.
The next section describes how the workflow policy $\piw$
learns which workflow steps to use.
\section{Workflow exploration policy}\label{sec:workflow-policy}
Our workflow policy interacts with the environment to generate an episode in the following manner.
At the beginning of the episode,
the policy conditions on the provided goal $g$,
and selects a demonstration $d$ that carried out a similar goal:
\begin{equation}
d \sim p(d|g) \propto \exp[\text{sim}(g, g_d)]
\end{equation}
where $\text{sim}(g,g_d)$ measures the similarity between $g$ and the goal $g_d$ of demonstration $d$.
In our tasks, we simply let $\text{sim}(g,g_d)$ be 1 if the structured
goals share the same keys, and $-\infty$ otherwise.
Then, at each time step $t$ with environment state $s_t$, we sample a workflow step $z_t$
according to the following distribution:
\begin{equation}
z_t \sim \piw(z | d,t) \propto \exp(\psi_{z,t,d}),
\end{equation}
where each $\psi_{z,t,d}$ is a separate scalar parameter to be learned.
Finally, we sample an action $a_t$ uniformly from the set $z_t(s_t)$:
\begin{equation}
a_t \sim p(a | z_t, s_t) = \frac{1}{|z_t(s_t)|}.
\end{equation}
The overall probability of exploring an episode $e = (s_1, a_1, \dots, s_T, a_T)$ is then:
\begin{equation}
p(e | g) = p(d | g) \prod_{t=1}^{T} p(s_t | s_{t-1}, a_{t-1}) \sum_{z} p(a_t | z, s_t) \piw(z | d,t)
\end{equation}
where $p(s_t | s_{t-1}, a_{t-1})$ is the (unknown) state transition probability.
Note that $\piw(z | d, t)$ is not a function of the environment states $s_t$ at all. Its decisions
only depend on the selected demonstration and the current time $t$.
This \emph{environment-blindness}
means that the workflow policy uses far fewer parameters than a
state-dependent policy, enabling it to learn more quickly and preventing
overfitting. Due to \emph{environment-blindness}, the workflow policy cannot
solve the task, but it quickly learns certain good behaviors, which
can help the neural policy learn.
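Putting the sampling stages together (workflow step, then action), one exploration step can be sketched in plain Python; the state/action encoding and function names are illustrative assumptions, not the paper's implementation:

```python
import math
import random

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sample_episode_step(psi_t, steps, state, rng=random):
    """Sample a workflow step z_t with probability proportional to
    exp(psi_{z,t,d}), then sample an action uniformly from z_t(state)."""
    z = rng.choices(steps, weights=softmax(psi_t), k=1)[0]
    actions = sorted(z(state))  # z(state) is the constrained action set
    return rng.choice(actions)

# One <img> element and one <div>: with a single Click(Tag("img")) step,
# the sampled action must be the click on the <img> element.
steps = [lambda state: {("click", e) for e in state if e[0] == "img"}]
action = sample_episode_step([0.0], steps, [("img", ""), ("div", "Send")])
assert action == ("click", ("img", ""))
```

Note that the step distribution depends only on the time index and the learned logits, never on the state, matching the environment-blindness of $\piw$.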
To train the workflow policy, we use a variant of the REINFORCE algorithm \citep{williams1992simple,sutton1998reinforcement}.
In particular, after rolling out an episode $e = (s_1, a_1, \dots, s_T, a_T)$,
we approximate the gradient using the unbiased estimate
\begin{equation}
\sum_t (G_t - v_{d,t})\nabla_\psi \log \sum_{z} p(a_t | z, s_t) \piw(z | d,t),
\end{equation}
where $G_t$ is the return at time step $t$ and $v_{d,t}$ is a baseline term for variance reduction.
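For a single time step, the inner term $\log \sum_{z} p(a_t | z, s_t) \piw(z | d,t)$ has a simple closed-form gradient in the logits $\psi$; the sketch below (our own variable names) computes it:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def grad_log_marginal(psi_t, q):
    """Gradient of log sum_z q_z * softmax(psi)_z with respect to psi,
    where q_z = p(a_t | z, s_t) equals 1/|z(s_t)| if the taken action
    a_t lies in z(s_t) and 0 otherwise."""
    pi = softmax(psi_t)
    m = sum(qz * pz for qz, pz in zip(q, pi))
    return [pz * (qz / m - 1.0) for qz, pz in zip(q, pi)]

# The REINFORCE-style update then scales this by (G_t - v_{d,t}).
g = grad_log_marginal([0.0, 0.0], [1.0, 0.0])
assert abs(sum(g)) < 1e-12  # softmax score gradients sum to zero
assert g[0] > 0 > g[1]      # the step consistent with a_t is upweighted
```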
Sampled episodes from the workflow policy that receive a positive reward
are stored in a replay buffer, which will be used for training the neural policy $\pin$.
\section{Neural policy}\label{sec:neural-policy}
As outlined in \reffig{approach-overview},
the neural policy is learned using both on-policy and off-policy updates (where episodes are drawn from the replay buffer).
Both updates use A2C, the synchronous version of the advantage actor-critic
algorithm \citep{mnih2016asynchronous}.
Since only episodes with reward +1 enter the replay buffer, the off-policy updates
behave similarly to supervised learning on optimal trajectories.
Furthermore, successful episodes discovered during on-policy exploration are also added
to the replay buffer.
\paragraph{Model architecture.}
We propose \domnet, a neural architecture that captures the spatial and hierarchical
structure of the DOM tree.
As illustrated in \reffig{neural-architecture},
the model first embeds the DOM elements and the input goal,
and then applies a series of attentions on the embeddings to finally produce a
distribution over actions $\pin(a|s)$ and a value function $V(s)$, the critic.
We highlight our novel DOM embedder,
and defer other details to \refapp{architecture-details}.
We design our DOM embedder to capture the various interactions
between DOM elements, similar to recent work in graph embeddings
\citep{kipf2017semi,pham2017column,hamilton2017inductive}.
In particular, DOM elements that are ``related''
(e.g., a checkbox and its associated label)
should pass their information to each other.
To embed a DOM element $e$,
we first compute the \emph{base embedding} $v_\mathrm{base}^e$
by embedding and concatenating its attributes
(tag, classes, text, etc.).
In order to capture the relationships between DOM elements, we next compute two
types of \emph{neighbor embeddings}:
\begin{enumerate}
\item We define \emph{spatial neighbors} of $e$ to be
any element $e'$ within 30 pixels from $e$,
and then sum up their base embeddings to get the
\emph{spatial neighbor embedding} $v_\mathrm{spatial}^e$.
\item We define \emph{depth-$k$ tree neighbors} of $e$ to be
any element $e'$ such that the least common ancestor of $e$ and $e'$ in the DOM tree
has depth at most $k$.
Intuitively, tree neighbors of a higher depth are more related.
For each depth $k$, we apply a learnable affine transformation $f$ on the base embedding of
each depth-$k$ tree neighbor $e'$, and then apply max pooling to get
$v_{\mathrm{tree}[k]}^e = \max f(v_\mathrm{base}^{e'})$.
We let the \emph{tree neighbor embedding} $v_\mathrm{tree}^e$ be the
concatenation of $v_{\mathrm{tree}[k]}^e$ for $k = 3, 4, 5, 6$.
\end{enumerate}
Finally, we define the \emph{goal matching embedding} $v_\mathrm{match}^e$
to be the sum of the embeddings of all words in $e$ that also appear in the goal.
The final embedding $v_\mathrm{DOM}^e$ of $e$ is the concatenation
of the four embeddings $[v_\mathrm{base}^e; v_\mathrm{spatial}^e; v_\mathrm{tree}^e; v_\mathrm{match}^e]$.
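A sketch of the two neighbor embeddings in numpy: the 30-pixel radius and the depths $k = 3, \dots, 6$ follow the text, while the array shapes, the precomputed LCA-depth matrix, and the function names are our own assumptions:

```python
import numpy as np

def spatial_neighbor_embedding(base, positions, idx, radius=30.0):
    """Sum of base embeddings of elements within `radius` pixels of
    element `idx` (its spatial neighbors)."""
    out = np.zeros(base.shape[1])
    for j in range(base.shape[0]):
        if j != idx and np.linalg.norm(positions[idx] - positions[j]) <= radius:
            out += base[j]
    return out

def tree_neighbor_embedding(base, lca_depth, idx, W, b, depths=(3, 4, 5, 6)):
    """Concatenation over k of the max-pooled affine transform of the
    base embeddings of depth-k tree neighbors (elements whose LCA with
    element `idx` has depth at most k)."""
    parts = []
    for k in depths:
        nbrs = [j for j in range(base.shape[0])
                if j != idx and lca_depth[idx, j] <= k]
        if nbrs:
            parts.append((base[nbrs] @ W + b).max(axis=0))  # affine f, max pool
        else:
            parts.append(np.zeros(W.shape[1]))
    return np.concatenate(parts)

# Toy check: 3 elements with 4-dimensional base embeddings.
base = np.eye(3, 4)
positions = np.array([[0.0, 0.0], [10.0, 0.0], [100.0, 0.0]])
v_spatial = spatial_neighbor_embedding(base, positions, 0)
assert np.allclose(v_spatial, base[1])  # only the element 10px away qualifies
```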
\section{Experiments}
\subsection{Task setups}
We evaluate our approach on three suites of interactive web tasks:
\begin{enumerate}
\item \emph{MiniWoB}: the MiniWoB benchmark of \citet{shi2017wob}
\item \emph{MiniWoB++}: a new set of tasks that we constructed
to incorporate additional challenges not present in
MiniWoB, such as stochastic environments and variation in natural language.
\item \emph{Alaska}: the mobile flight booking interface for Alaska Airlines,
inspired by the FormWoB benchmark of \citet{shi2017wob}.
\end{enumerate}
We describe the common task settings of the MiniWoB and MiniWoB++ benchmarks,
and defer the description of the Alaska benchmark to \refsec{alaska}.
\paragraph{Environment.}
Each task contains a 160px $\times$ 210px
environment and a goal specified in text.
The majority of the tasks return a single sparse reward at
the end of the episode; either $+1$ (success) or $-1$ (failure). For greater
consistency among tasks, we disabled \emph{all} partial rewards in our
experiments.
The agent has access to the environment via a Selenium web driver interface.
The public MiniWoB benchmark\footnote{\url{http://alpha.openai.com/miniwob/}}
contains 80 tasks. We filtered for the 40 tasks that
only require actions in our action space,
namely clicking on DOM elements and typing strings from the input goal.
Many of the excluded tasks involve somewhat specialized reasoning,
such as being able to compute the angle between two lines, or solve algebra problems.
For each task,
we used Amazon Mechanical Turk to collect 10 demonstrations,
which record all mouse and keyboard events along with
the state of the DOM when each event occurred.
\paragraph{Evaluation metric.}
We report \emph{success rate}: the percentage of test episodes with reward $+1$.
Since we have removed partial rewards, success rate is a linear scaling of the average reward,
and is equivalent to the definition of success rate in \citet{shi2017wob}.
\subsection{Main results} \label{sec:main-results}
\newcommand{\textsc{Shi17}\xspace}{\textsc{Shi17}\xspace}
\newcommand{\textsc{DOMnet+BC+RL}\xspace}{\textsc{DOMnet+BC+RL}\xspace}
\newcommand{\textsc{DOMnet+WGE}\xspace}{\textsc{DOMnet+WGE}\xspace}
\begin{figure}\center
\includegraphics[width=\textwidth]{resultsPlotOne}
\includegraphics[width=\textwidth]{resultsPlotTwo}
\includegraphics[scale=.8]{legend}
\caption{
Success rates of different approaches
on the MiniWoB tasks. \textsc{DOMnet+WGE}\xspace outperforms \textsc{Shi17}\xspace on all but two tasks and
effectively solves a vast majority.
}\label{fig:main-results-chart}
\end{figure}
\begin{table}\center\small
\begin{tabular}{@{}ll@{}r|rrr}
\textbf{Task} & \textbf{Description} & \textbf{Steps} & \textbf{BC+RL} &
\textbf{$\piw$ only} & \textbf{WGE} \\ \hline
click-checkboxes & Click 0--6 specified checkboxes & 7 & 98 & 81 & \textbf{100} \\
click-checkboxes-large$^+$ & \dots 5--12 targets & 13 & 0 & 43 & \textbf{84} \\
click-checkboxes-soft$^+$ & \dots specifies synonyms of the targets & 7 & 51 & 34& \textbf{94} \\
click-checkboxes-transfer$^+$ & \dots training data has 0--3 targets & 7 & \textbf{64} & 17 & \textbf{64} \\
multi-ordering$^+$ & Fill a form with varying field orderings & 4 & 5 & 78 & \textbf{100} \\
multi-layout$^+$ & \dots Fill a form with varying UI layouts & 4 & 99 & 9 & \textbf{100} \\
social-media & Do an action on the specified Tweet & 2 & 15 & 2 & \textbf{100} \\
social-media-all$^+$ & \dots on all matching Tweets & 12 & \textbf{1} & 0
& 0 \\
social-media-some$^+$ & \dots on specified no. of matching Tweets & 12 & 2 &
3 & \textbf{42} \\ \hline
email-inbox & Perform tasks on an email inbox & 4 & 43 & 3 & \textbf{99} \\
email-inbox-nl$^+$ & \dots natural language goal & 4 & 28 & 0 & \textbf{93} \\
\end{tabular}
\caption{Results on additional tasks. ($+$ = MiniWoB++,
Steps = task length as the maximum number of steps needed for a perfect policy to complete the task)
}
\label{tab:analysis}
\end{table}
We compare the success rates across the MiniWoB tasks of the following approaches:
\begin{itemize}
\item \textsc{Shi17}\xspace: the system from \citet{shi2017wob}, pre-trained
with behavioral cloning on 10 minutes of demonstrations (approximately 200
demonstrations on average) and fine-tuned with RL. Unlike \domnet, this system
primarily uses a pixel-based representation of the state.\footnote{It
is augmented with filters that activate on textual elements which overlap with
goal text.}
\item \textsc{DOMnet+BC+RL}\xspace: our proposed neural policy, \domnet, but
pre-trained with behavioral cloning on 10 demonstrations
and fine-tuned with RL, like \textsc{Shi17}\xspace.
During behavioral cloning,
we apply early stopping based on the reward on a validation set.
\item \textsc{DOMnet+WGE}\xspace: our proposed neural policy, \domnet, trained with
workflow-guided exploration on 10 demonstrations.
\end{itemize}
For \textsc{DOMnet+BC+RL}\xspace and \textsc{DOMnet+WGE}\xspace, we report the test success rate at the time step where
the success rate on a validation set reaches its maximum.
The results are shown in \reffig{main-results-chart}.
By comparing \textsc{Shi17}\xspace with \textsc{DOMnet+BC+RL}\xspace, we can roughly evaluate the contribution
of our new neural architecture \domnet, since the two share the same training procedure (BC+RL).
While \textsc{Shi17}\xspace also uses the DOM tree to compute text alignment features
in addition to the pixel-level input,
our \domnet uses the DOM structure more explicitly.
We find \textsc{DOMnet+BC+RL}\xspace to empirically improve the success rate over \textsc{Shi17}\xspace on most tasks.
By comparing \textsc{DOMnet+BC+RL}\xspace and \textsc{DOMnet+WGE}\xspace, we find that
workflow-guided exploration enables \domnet to
perform even better on the more difficult tasks, which we analyze in the next section.
Some of the workflows that the workflow policy $\piw$ learns
are shown in \refapp{example-workflows}.
\subsection{Analysis}\label{sec:analysis}
\subsubsection{MiniWoB++ benchmark}\label{sec:plusplus}
We constructed and released the MiniWoB++ benchmark of tasks to
study additional challenges a web agent might encounter, including:
longer time horizons (click-checkboxes-large),
``soft'' reasoning about natural language (click-checkboxes-soft),
and stochastically varying layouts (multi-orderings, multi-layouts).
\reftab{analysis} lists the tasks and their time horizons (number of steps
needed for a perfect policy to carry out the longest goal) as a crude measure
of task complexity.
We first compare the performance of \domnet trained with BC+RL (baseline)
and \domnet trained with WGE (our full approach).
The proposed WGE model outperforms the BC+RL model by an average of 42\%
absolute success rate.
We analyzed their behaviors and noticed two common failure modes of training
with BC+RL that are
mitigated by instead training with WGE:
\begin{enumerate}
\item The BC+RL model has a tendency to take actions that prematurely
terminate the episode (e.g., hitting ``Submit'' in click-checkboxes-large
before all required boxes are checked).
One likely cause is that these actions occur across all
demonstrations, while other non-terminating actions
(e.g., clicking different checkboxes)
vary across demonstrations.
\item The BC+RL model occasionally
gets stuck in cyclic behavior such as repeatedly checking and unchecking the
same checkbox. These failure modes stem from overfitting to parts of the
demonstrations, which WGE avoids.
\end{enumerate}
Next, we analyze the workflow policy $\piw$ learned by WGE.
The workflow policy $\piw$ by itself is too simplistic to work well at test time for several reasons:
\begin{enumerate}
\item Workflows ignore environment state and therefore cannot respond to the differences
in the environment, such as the different layouts in multi-layouts.
\item The workflow constraint language lacks the expressivity to specify certain
actions, such as clicking on synonyms of a particular word in click-checkboxes-soft.
\item The workflow policy lacks expressivity to select the correct workflow for a given goal.
\end{enumerate}
Nonetheless the workflow policy $\piw$ is sufficiently constrained to discover reward some
of the time, and the neural policy $\pin$ is able to learn the right behavior from such
episodes.
As such, the neural policy can achieve high success rates
even when the workflow policy $\piw$ performs poorly.
\subsubsection{Natural language inputs}\label{sec:nlp}
While MiniWoB tasks provide structured goals,
we can also apply our approach to natural language goals.
We collected a training dataset using the overnight data collection technique \citep{wang2015overnight}.
In the email-inbox-nl task,
we collected natural language templates
by asking annotators to paraphrase the task goals
(e.g., \emph{``Forward Bob's message to Alice''} $\to$
\emph{``Email Alice the email I got from Bob''})
and then abstracting out the fields
(\emph{``Email \texttt{<TO>} the email I got from \texttt{<FROM>}''}).
During training, the workflow policy $\piw$
receives states with both the structured goal
and the natural language utterance generated from a random template,
while the neural policy $\pin$ receives only the utterance.
At test time, the neural policy is evaluated on unseen utterances.
The results in \reftab{analysis} show that
the WGE model can learn to understand natural language goals (93\% success rate).
Note that the workflow policy needs access to the structured inputs
only because our constraint language for workflow steps operates on structured inputs.
The constraint language
could potentially be modified to work with utterances directly
(e.g., \texttt{After("to")} extracts the utterance word after \emph{``to''}),
but we leave this for future work.
\subsubsection{Scaling to real world tasks}\label{sec:alaska}
We applied our approach on the Alaska benchmark,
a more realistic flight search task on the Alaska Airlines mobile site
inspired by the FormWoB task in \citet{shi2017wob}.
In this task, the agent must complete the flight search form
with the provided information (6--7 fields).
We ported the web page to the MiniWoB framework
with a larger 375px $\times$ 667px screen,
replaced the server backend with
a surrogate JavaScript function, and clamped the environment date to March 1, 2017.
Following \citet{shi2017wob},
we give partial reward based on the fraction of correct fields in the submitted form
if all required fields are filled in.
Despite this partial reward, the reward is still extremely sparse:
there are over 200 DOM elements (compared to $\approx$ 10--50 in MiniWoB tasks),
and a typical episode requires at least 11 actions
involving various types of widgets such as autocompletes and date pickers.
The probability that a random agent gets positive
reward is less than $10^{-20}$.
We first performed experiments on Alaska-Shi17, a clone of the original Alaska
Airlines task in \citet{shi2017wob}, where the goal always specifies
a roundtrip flight (two airports and two dates).
On their dataset,
our approach, using only 1 demonstration, achieves an average reward of 0.97,
compared to their best result of 0.57, which uses around 80 demonstrations.
Our success motivated us to test on a more difficult version of the task which
additionally requires selecting flight type (a checkbox for one-way flight),
number of passengers (an increment-decrement counter),
and seat type (hidden under an accordion).
We achieve an average reward of 0.86
using 10 demonstrations. This demonstrates our method can handle long horizons
on real-world websites.
\subsubsection{Sample efficiency}\label{sec:sample}
\begin{figure}\center
\includegraphics[width=\textwidth]{sample_efficiency}
\caption{
Comparison between \textsc{DOMnet+BC+RL}\xspace and \textsc{DOMnet+WGE}\xspace on several of the most difficult tasks,
evaluated on test reward. \textsc{DOMnet+WGE}\xspace trained on 10 demonstrations outperforms
\textsc{DOMnet+BC+RL}\xspace even with 1000 demonstrations.
}\label{fig:sample_efficiency}
\end{figure}
To evaluate the demonstration efficiency of our approach, we compare \textsc{DOMnet+WGE}\xspace
with \textsc{DOMnet+BC+RL}\xspace trained on increased numbers of demonstrations. We compare \textsc{DOMnet+WGE}\xspace
trained on $10$ demonstrations with \textsc{DOMnet+BC+RL}\xspace on $10$, $100$, $300$, and $1000$
demonstrations. The test rewards\footnote{We report test
reward since success rate is artificially high in
the Alaska task due to partial rewards.} on several of the hardest tasks are
summarized in \reffig{sample_efficiency}.
Increasing the number of demonstrations improves the performance of BC+RL, as
it helps prevent overfitting. However, on every evaluated task, WGE trained
with only $10$ demonstrations still achieves much higher test reward than
BC+RL with $1000$ demonstrations. This corresponds to an over 100x sample
efficiency improvement of our method over behavioral cloning in terms of the
number of demonstrations.
\section{Discussion}
\paragraph{Learning agents for the web.}
Previous work on learning agents for web interactions falls into two main
categories. First, simple programs may be specified by the user \citep{yeh2009sikuli}
or may be inferred from demonstrations \citep{allen2007plow}. Second, soft
policies may be learned from scratch or ``warm-started'' from demonstrations
\citep{shi2017wob}. Notably, sparse rewards prevented \citet{shi2017wob} from
successfully learning, even when using a moderate number of demonstrations.
While policies have proven to be more difficult to learn, they have the
potential to be expressive and flexible. Our work takes a step in this
direction.
\paragraph{Sparse rewards without prior knowledge.}
Numerous works attempt to address sparse rewards without incorporating any
additional prior knowledge. Exploration methods \citep{osband2016deep,
chentanez2005intrinsically, weber2017imagination} help the agent better
explore the state space to encounter more reward; shaping rewards
\citep{ng1999policy} directly modify the reward function to encourage certain
behaviors; and other works \citep{jaderberg2016reinforcement,
andrychowicz2017hindsight} augment the reward signal with additional
unsupervised reward. However, without prior knowledge, helping the agent
receive additional reward is difficult in general.
\paragraph{Imitation learning.}
Various methods have been proposed to leverage additional signals from experts.
For instance,
when an expert policy is available, methods such as
\textsc{DAgger} \citep{ross2011reduction} and
\textsc{AggreVaTe} \citep{ross2014reinforce,sun2017deeply}
can query the expert policy
to augment the dataset for training the agent.
When only expert \emph{demonstrations} are available,
inverse reinforcement learning methods
\citep{abbeel2004apprenticeship,ziebart2008maximum,finn2016guided,ho2016generative,baram2017end}
infer a reward function from the demonstrations without using reinforcement signals from the environment.
The usual method for incorporating both demonstrations and reinforcement signals
is to pre-train the agent with demonstrations before applying RL.
Recent work extends this technique by
(1) introducing different objective functions and regularization during pre-training,
and (2) mixing demonstrations and rolled-out episodes during RL updates
\citep{hosu2016playing,hester2018deep,vecerik2017leveraging,nair2017overcoming}.
Instead of training the agent on demonstrations directly,
our work uses demonstrations to \emph{guide exploration}. The core idea is to
explore trajectories that lie in a ``neighborhood'' surrounding an expert
demonstration. In our case, the neighborhood is defined by a workflow, which
only permits action sequences analogous to the demonstrated actions.
Several previous works also explore neighborhoods of demonstrations via reward shaping
\citep{brys2015reinforcement, hussein2017deep} or off-policy sampling \citep{levine2013guided}.
One key distinction of our work is that we define neighborhoods in terms of action
similarity rather than state similarity. This distinction is particularly
important for the web tasks: we can easily and intuitively describe how two
actions are analogous (e.g., ``they both type a username into a textbox''),
while it is harder to decide if two web page states are analogous (e.g., the
email inboxes of two different users will have completely different emails, but
they could still be analogous, depending on the task).
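The action-similarity neighborhood described above can be illustrated with a minimal sketch (hypothetical names such as \texttt{Action} and \texttt{type\_into}; this is not the released implementation): a workflow is a list of predicates over actions, induced from one demonstration, and exploration samples only actions permitted by the current workflow item.

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    kind: str        # e.g. "click" or "type"
    target: str      # element role, e.g. "textbox[username]"
    text: str = ""

def type_into(role):
    """Workflow item: 'type something into an element with this role'."""
    return lambda a: a.kind == "type" and a.target == role

def click_on(role):
    return lambda a: a.kind == "click" and a.target == role

# Workflow induced from a hypothetical login demonstration.
workflow = [type_into("textbox[username]"),
            type_into("textbox[password]"),
            click_on("button[login]")]

def sample_episode(candidate_actions, workflow, rng=random):
    """For each workflow step, pick a random action permitted by that step."""
    episode = []
    for item in workflow:
        allowed = [a for a in candidate_actions if item(a)]
        if not allowed:
            return None          # workflow inapplicable in this state
        episode.append(rng.choice(allowed))
    return episode

candidates = [Action("type", "textbox[username]", "alice"),
              Action("type", "textbox[password]", "pw123"),
              Action("click", "button[login]"),
              Action("click", "link[forgot]")]
ep = sample_episode(candidates, workflow)
print([a.kind for a in ep])      # ['type', 'type', 'click']
```

Note that the predicates constrain actions, not states, which is the key distinction drawn above.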
\paragraph{Hierarchical reinforcement learning.}
Hierarchical reinforcement learning (HRL) methods decompose complex tasks into
simpler subtasks that are easier to learn. Main HRL frameworks include
abstract actions \citep{sutton1999between, konidaris2007building,
hauser2008using}, abstract partial policies \citep{parr1998reinforcement},
and abstract states \citep{roderick2017deep, dietterich1998maxq,
li2006towards}. These frameworks require varying amounts of prior knowledge.
The original formulations required programmers to manually specify the
decomposition of the complex task, while \citet{andreas2016modular} only
requires supervision to identify subtasks, and \citet{bacon2017option,
daniel2016hierarchical} learn the decomposition fully automatically, at the
cost of performance.
Within the HRL methods, our work is closest to \citet{parr1998reinforcement}
and the line of work on constraints in robotics \citep{phillips2016learning,
perez2017c}.
The work in \citet{parr1998reinforcement} specifies partial policies, which
constrain the set of possible actions at each state, similar to our workflow
items. In contrast to previous instantiations of the HAM (hierarchies of abstract machines) framework
\citep{andre2003programmable, marthi2005concurrent}, which require programmers to
specify these constraints manually, our work automatically induces
constraints from user demonstrations, which do not require special
skills to provide. \citet{phillips2016learning, perez2017c} also resemble our
work, in learning constraints from demonstrations, but differ in the way they
use the demonstrations. Whereas our work uses the learned constraints for
exploration, \citet{phillips2016learning} only uses the constraints for
planning and \citet{perez2017c} build a knowledge base of constraints to use
at test time.
\paragraph{Summary.}
Our workflow-guided framework represents a judicious combination of demonstrations, abstractions, and
expressive neural policies.
We leverage the targeted information of demonstrations and the inductive bias of workflows.
But this is only used for exploration, protecting the expressive neural policy from overfitting.
As a result, we are able to learn rather complex policies from a very sparse reward signal and very few demonstrations.
\paragraph{Acknowledgments.}
This work was supported by NSF CAREER Award IIS-1552635.
\paragraph{Reproducibility.}
Our code and data are available at \url{https://github.com/stanfordnlp/wge}.
Reproducible experiments are available on the CodaLab platform at
\url{https://worksheets.codalab.org/worksheets/0x0f25031bd42f4aabbc17625fe1484066/}.
\section{INTRODUCTION}
The R$_{1/3}$Sr$_{2/3}$FeO$_3$ (R = rare earth) family is reported to show a crossover between localized and itinerant behavior by variation of the size of the rare earth ion \cite{Park1999}. For R = La, Pr and Nd, a 2Fe$^{4+}$ $\rightarrow$ Fe$^{3+}$ + Fe$^{5+}$ charge disproportionation (CD) accompanied by Fe$^{3+}$/Fe$^{5+}$ charge ordering (CO), a magnetic ordering, and a metal-insulator (MI) transition was reported to occur at 200, 180 and 165 K, respectively. For smaller rare earth ions no MI transition is observed, the compounds being purely insulating below room temperature.
\begin{figure}[htbp]
\subfigure[]{
\label{Crys}
\includegraphics[width=0.27\textwidth]{Crys.eps}}
\subfigure[]{
\label{ResistivityM}
\includegraphics[width=0.19\textwidth]{ResistivityM.eps}}
\caption{(Color online) (a) The crystal structure of La$_{1/3}$Sr$_{2/3}$FeO$_3$. The rhombohedral space group \hmn{R-3c} is shown in the hexagonal setting (unit cell in pink dotted lines) and the rhombohedral setting (unit cell in grey dotted lines). The hexagonal [001] direction is equivalent to the rhombohedral [111]. Purple balls denote La or Sr atoms and green balls denote Fe atoms. For clarity, oxygen atoms are not shown. (b) The temperature evolution of the resistivity and magnetic susceptibility. The straight line drawn in the resistivity plot is a guide to the eye.}
\label{CrysRM}
\end{figure}
\begin{figure*}[htbp]
\subfigure[]{
\label{magnetic_structure_P3_221}
\includegraphics[width=0.45\textwidth]{magStructureP3221.eps}}
\subfigure[]{
\label{P3221_refinement}
\includegraphics[width=0.51\textwidth]{P3221refinement.eps}}
\caption{(Color online) (a) The helical magnetic structure of La$_{1/3}$Sr$_{2/3}$FeO$_3$ at 2 K. (b) The Rietveld refinement of the neutron diffraction data of La$_{1/3}$Sr$_{2/3}$FeO$_3$ collected on HRPT at 2 K ($\lambda$ = 1.89 {\AA}), based on the helical model. The top and bottom rows of ticks below the pattern are the Bragg peak positions for the nuclear and magnetic scattering, respectively. }
\label{helical}
\end{figure*}
The MI transition for R = La, Pr and Nd was explained by CD and CO. For R = La, the CO was found to occur using M\"ossbauer spectroscopy \cite{Takano1981} and electron microscopy \cite{Li1997}. On the basis of the CO sequence ...-Fe$^{5+}$-Fe$^{3+}$-Fe$^{3+}$-..., the magnetic structure of this compound was reported to be \hmn{P-3m1} \cite{Battle1990} or \hmn{P1} \cite{Yang2003} from neutron diffraction studies performed at 50 K and 15 K, respectively. The former does not appear to be a correct solution, since the presence of the rotoinversion \hmn{-3} is incompatible with the claimed collinear magnetic structure, whose collinear moments lie in the \textit{ab}-plane in the \hmn{R-3c} metric; the latter might be a correct solution, but it carries no symmetry restrictions in space group \hmn{P1}. Moreover, the presence of Fe$^{5+}$ below $T_{\rm{MI}}$ is not consistent with X-ray absorption data \cite{Blasco2008}, and resonant X-ray scattering measurements indicate that the CD is not significant \cite{Martin2009}. Furthermore, the R = Eu sample is reported to show a change of the M\"ossbauer response across the magnetic ordering transition similar to that of the R = La compound \cite{Huang2009}, which is surprising given the absence of an MI transition. The change of the M\"ossbauer response for both compounds was then ascribed to long-range magnetic ordering with two types of magnetic interactions \cite{Blasco2010}. Therefore, the magnetic structure and the associated change of the M\"ossbauer spectra are still not well understood.
In this paper we report neutron powder diffraction and M\"ossbauer spectroscopy studies in the temperature range 2-300 K for La$_{1/3}$Sr$_{2/3}$FeO$_3$. New models of magnetic structure are presented and their general implications and compatibility with the results of a local probe technique, $^{57}$Fe M\"ossbauer spectroscopy, are discussed.
\section{EXPERIMENTAL DETAILS}
The polycrystalline sample used in this study was prepared by solid-state reactions. Stoichiometric amounts of dried La$_2$O$_3$, SrCO$_3$ and Fe$_2$O$_3$ were mixed thoroughly by hand in an agate mortar, placed in an alumina crucible and annealed at 1473 K for 40 h in a muffle furnace in air. The obtained powder was then ground, pressed into a pellet and sintered at 1673 K for 40 h. The sintering was repeated once with intermediate grinding. To ensure the oxygen stoichiometry, the sample was further annealed under oxygen flow at 873 K for 72 h. Phase purity was checked by laboratory X-ray powder diffraction. The oxygen content was verified by thermogravimetric H$_2$ reduction analysis performed on a Netzsch STA 449C analyser. The resistivity and bulk magnetic properties were measured using a Quantum Design Physical Property Measurement System. The resistivity was measured on cooling and subsequent heating using the four-probe method. The magnetic susceptibility was measured using zero-field-cooled (ZFC) and field-cooled (FC) protocols.
The neutron diffraction data were collected at the Swiss Spallation Neutron Source (SINQ), Paul Scherrer Institute. Approximately 1 g of sample powder was loaded into a 6-mm-diameter vanadium can, and the measurements were performed on the High-Resolution Powder Diffractometer for Thermal Neutrons (HRPT) \cite{Fischer2000} using $\lambda$ = 1.89 {\AA} and 1.15 {\AA} at 230 K and 2 K, and on the Cold Neutron Powder Diffractometer (DMC) using $\lambda$ = 2.46 {\AA} at a series of temperatures between 300 and 1.7 K. An absolute comparison at the 10$^{-3}$ level of the crystal lattice parameters obtained from these two instruments is not possible, because of systematic uncertainties related to wavelength calibration and peak-shape parameters. The neutron diffraction data were analyzed by Rietveld refinement using the \emph{FULLPROF} suite \cite{Juan1993} with its internal tables of neutron scattering lengths and magnetic form factors. The symmetry analysis was done using the \textit{ISODISTORT} tool \cite{Campbell2006}, the BasIreps option of the \emph{FULLPROF} suite \cite{Juan1993}, and the software tools of the Bilbao crystallographic server \cite{Aroyo2011}.
\begin{table*}[htbp]
\caption{The matrices and basis vectors of the small irreducible representations for Fe in 6b position and \textbf{k}=(0,0,1), where $a = -\tfrac{1}{2}+\tfrac{\sqrt{3}}{2}i$, $b = -\tfrac{1}{2}-\tfrac{\sqrt{3}}{2}i$, $p = \tfrac{\sqrt{3}}{2}+\tfrac{1}{2}i$, $q = i$.
\label{Basireps table} }
\begin{ruledtabular}
\begin{tabular}{ccccccccc}
\multirow{2}{*}{}& \multirow{2}{*}{$\{1|000\}$} & \multirow{2}{*}{$\{3^{+}_{00z}|000\}$} & \multirow{2}{*} {$\{3^{-}_{00z}|000\}$} & \multirow{2}{*}{$\{m_{x-xz}|00\tfrac{1}{2}\}$} & \multirow{2}{*}{$\{m_{x2xz}|00\tfrac{1}{2}\}$} & \multirow{2}{*}{$\{m_{2xxz}|00\tfrac{1}{2}\}$}
& \multicolumn{2}{c}{Basis vector}\\
\cline{8-9}
&&&&&&&\textrm{Fe}$(0,0,0)$ & \textrm{Fe}$(0,0,1/2)$ \\
\colrule
$\Lambda$$_{1}$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $(0,0,1)$ & $(0,0,-1)$\\
$\Lambda$$_{2}$ & $1$ & $1$ & $1$ & $-1$ & $-1$ & $-1$ & $(0,0,1)$ & $(0,0,1)$\\
$\Lambda$$_{3}$ & $\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ &
$\begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix}$ &
$\begin{bmatrix} b & 0 \\ 0 & a \end{bmatrix}$ &
$\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$ &
$\begin{bmatrix} 0 & b \\ a & 0 \end{bmatrix}$ &
$\begin{bmatrix} 0 & a \\ b & 0 \end{bmatrix}$ &
\makecell{$(p^*,q^*,0)$ \\ $(0,0,0)$ \\ $(0,0,0)$ \\ $(p,q,0)$} &
\makecell{$(0,0,0)$ \\ $(q,p,0)$ \\ $(q^*,p^*,0)$ \\ $(0,0,0)$}\\
\end{tabular}
\end{ruledtabular}
\end{table*}
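As a quick consistency check of Table \ref{Basireps table} (our own verification, not part of the refinement), the $\Lambda_3$ matrices can be confirmed to satisfy the group relations: $(3^{+}_{00z})^3$ is the identity, each mirror squares to the pure lattice translation (0,0,1), whose phase $e^{-2\pi i\,\mathbf{k}\cdot\mathbf{t}}$ is unity for \textbf{k} = (0,0,1), and $3^{+}_{00z}\cdot m_{x-xz}$ reproduces the tabulated matrix for $m_{2xxz}$. A short Python sketch:

```python
import cmath

# Lambda_3 matrices from the table: a = exp(2*pi*i/3), b = conj(a).
a = cmath.exp(2j * cmath.pi / 3)   # -1/2 + sqrt(3)/2 i
b = a.conjugate()                  # -1/2 - sqrt(3)/2 i

def mul(A, B):
    """2x2 complex matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

E   = [[1, 0], [0, 1]]
R3p = [[a, 0], [0, b]]             # {3+_00z | 000}
M1  = [[0, 1], [1, 0]]             # {m_x-xz | 00 1/2}

assert close(mul(R3p, mul(R3p, R3p)), E)         # threefold axis: (3+)^3 = 1
assert close(mul(M1, M1), E)                     # mirror^2 -> translation (0,0,1), phase = 1
assert close(mul(R3p, M1), [[0, a], [b, 0]])     # 3+ . m_x-xz = tabulated m_2xxz
print("Lambda_3 representation matrices are consistent")
```

The dimension count also closes: two Fe atoms with three moment components give six degrees of freedom, matching $1\Lambda_1 \oplus 1\Lambda_2 \oplus 2\Lambda_3$.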
The $^{57}$Fe M\"ossbauer spectra were recorded in transmission geometry using a constant-acceleration spectrometer with a 25 mCi $^{57}$Co source in a Rh matrix. The velocity scale was calibrated with a metallic iron foil at room temperature. The data were analyzed with a least squares fitting program assuming Lorentzian peaks in the first-order approximation \cite{Caer}. Isomer shifts are given with respect to $\alpha$-Fe at room temperature.
\begin{figure}[bp]
\centering
\leavevmode
\includegraphics[width=0.50\textwidth]{Moment_lattice.eps}
\caption{(Color online) The temperature evolution of (a) lattice parameters and (b) total magnetic moment and its components of Fe obtained from the Rietveld refinement of the neutron diffraction data of La$_{1/3}$Sr$_{2/3}$FeO$_3$ collected on DMC, based on the helical model. The straight lines drawn in (a) are guides to the eye. If not visible, the error bars are smaller than the plotting symbols. See the text for details. }
\label{Moment&lattice}
\end{figure}
\section{RESULTS AND DISCUSSION}
\subsection{Electric and magnetic properties}
La$_{1/3}$Sr$_{2/3}$FeO$_3$ crystallizes in the \hmn{R-3c} space group at room temperature (see Fig. \ref{Crys}). In Fig. \ref{ResistivityM}, the temperature evolution of the resistivity, $R(T)/R(250~K)$, is presented. A change of slope is visible at about 200 K, below which the material becomes more insulating. At this temperature a charge disproportionation is expected to take place. The transition observed here is less pronounced than that reported in Refs. \cite{Park1999, Zhao2000} on bulk samples, but it is very similar to that seen in thin films \cite{Okamoto2010}. This difference may arise from the oxygen stoichiometry of the sample; in our sample the oxygen stoichiometry is 3.02 $\pm$ 0.02.
An antiferromagnetic (AFM)-like transition is clearly observed at $T_{\rm{N}}$ $\sim$ 200 K in the DC magnetic susceptibility $\chi(T)$ measurement (see Fig. \ref{ResistivityM}), i.e., at the same temperature where the change of slope in $R(T)/R(250~K)$ is observed. The data measured in ZFC mode diverge from those measured in FC mode below $T_{\rm{N}}$, suggesting that a spin-glass state or weak ferromagnetism develops at low temperatures.
\subsection{Magnetic and crystal structure}
\subsubsection{Symmetry analysis}
The neutron powder diffraction pattern shows the appearance of additional peaks below $\sim$ 200 K, which we interpret as magnetic scattering given the existence of a peak in the macroscopic magnetic susceptibility at this temperature (see Fig. \ref{ResistivityM}). A representation-theory analysis was performed to determine the magnetic structure at low temperatures, as presented below.
\begin{table*} [htbp]
\caption{Crystal and magnetic structure parameters of La$_{1/3}$Sr$_{2/3}$FeO$_3$ in (a) the parent paramagnetic space group \hmn{R-3c} (No. 167, hexagonal setting) at 300 K and in the magnetically ordered state at 2 K in the Shubnikov magnetic space group (b) \hmn{P3_{2}21} (No. 154.41) or (c) \hmn{C2/c} (No. 15.85). See the text for more details.
\label{Structure parameters} }
\begin{ruledtabular}
\begin{tabular}{lccc}
& (a) \hmn{R-3c}, T = 300 K & (b) \hmn{P3_{2}21}, T = 2 K \footnote{Crystal structure parameters in the Shubnikov magnetic space group are derived from the parent group, according to the basis transformation from \hmn{R-3c} to \hmn{P3_{2}21} with a linear part (1,1,0), (-1,0,0), (0,0,1) and an origin shift (2/3,2/3, 1/12) and that from \hmn{R-3c} to \hmn{C2/c} with a linear part (1,-1,0), (1,1,0), (0,0,1) and an origin shift (0,0,0). The lattice parameters and the atomic displacement parameters B for \hmn{C2/c} are further refined.} & (c) \hmn{C2/c}, T = 2 K \footnotemark[1] \\
\colrule
a ({\AA})& 5.48217(8)& 5.47754 & 9.49162(19)\\
b ({\AA})& & & 5.47680(11)\\
c ({\AA})& 13.40521(23) & 13.36215 & 13.36393(13)\\
\textrm{\textbf{La1/Sr1}} & & & \\
\textrm{Wyckoff position} & 6\textit{a} & 3\textit{b} & 4\textit{e} \\
\textrm{x, y, z} & 0, 0, 1/4 & 1/3, 0, 1/6 & 0, 0, 1/4 \\
B ({\AA}$^2$)& 0.686(21) & & 0.311(15)\\
\textrm{\textbf{La2/Sr2}} & & & \\
\textrm{Wyckoff position} & & 3\textit{a} & 8\textit{f} \\
\textrm{x, y, z} & & 1/3, 0, 2/3 & 1/3, 0, -1/12 \\
B ({\AA}$^2$)& & & 0.311(15) \\
\textrm{\textbf{Fe1}} & & & \\
\textrm{Wyckoff position} & 6\textit{b} & 6\textit{c} & 4\textit{a} \\
\textrm{x, y, z} & 0, 0, 0 & 1/3, 0, 11/12 & 0, 0, 0 \\
B ({\AA}$^2$) & 0.441(18) & 0.217 & 0.202(13)\\
M$_x$($\mu_{\rm{B}}$), M$_y$($\mu_{\rm{B}}$), M$_z$($\mu_{\rm{B}}$) & & 1.46(7), 3.67(2), 1.32(2) & 3.26(3), 0, 0 \\
\textrm{\textbf{Fe2}} & & & \\
\textrm{Wyckoff position}& & & 8\textit{f} \\
\textrm{x, y, z} & & & 1/3, 0, 2/3 \\
B ({\AA}$^2$) & & & 0.202(13)\\
M$_x$($\mu_{\rm{B}}$), M$_y$($\mu_{\rm{B}}$), M$_z$($\mu_{\rm{B}}$) & & & -3.67(2), 0, 0 \\
\textrm{\textbf{O1}} & & & \\
\textrm{Wyckoff position}& 18\textit{e} & 6\textit{c} & 8\textit{f} \\
x, y, z & 0.51812(18), 0, 1/4 & 1/3, 0.47410, 1/6 & 0.26295,0.26295, 1/4 \\
B ({\AA}$^2$) & 1.057(19) & & 0.572(12) \\
\textrm{\textbf{O2}} & & & \\
\textrm{Wyckoff position} & & 3\textit{b} & 8\textit{f} \\
\textrm{x, y, z} & & 0.80743, 0, 1/6 & -0.07038, 0.26295, 7/12\\
B ({\AA}$^2$) & & & 0.572(12)\\
\textrm{\textbf{O3}} & & & \\
\textrm{Wyckoff position} & & 6\textit{c} & 8\textit{f}\\
\textrm{x, y, z} & & 1/3, 0.52590, 2/3 & 0.59628, 0.26295, -1/12 \\
B ({\AA}$^2$) & & & 0.572(12)\\
\textrm{\textbf{O4}} & & & \\
\textrm{Wyckoff position} & & 3\textit{a} & 4\textit{e} \\
\textrm{x, y, z} & & 0.85923, 0, 2/3 & 0, 0.47410, 1/4 \\
B ({\AA}$^2$) & & & 0.572(12) \\
\textrm{\textbf{O5}} & & & \\
\textrm{Wyckoff position} & & & 8\textit{f} \\
\textrm{x, y, z} & & & 1/3, 0.47410, -1/12 \\
B ({\AA}$^2$) & & & 0.572(12) \\
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{figure*}[htp]
\subfigure[]{
\label{magnetic_structure_C2_c}
\includegraphics[width=0.45\textwidth]{magStructureC2c.eps}}
\subfigure[]{
\label{C2c_refinement}
\includegraphics[width=0.51\textwidth]{C2crefinement.eps}}
\caption{(Color online) (a) The collinear magnetic structure of La$_{1/3}$Sr$_{2/3}$FeO$_3$ at 2 K. The green balls denote Fe$^{5+}$ and the blue balls denote Fe$^{3+}$. (b) The Rietveld refinement of the neutron diffraction data of La$_{1/3}$Sr$_{2/3}$FeO$_3$ collected on HRPT at 2 K ($\lambda$ = 1.89 {\AA}), based on the collinear model. The ticks below the pattern are the Bragg peak positions for the nuclear and magnetic scattering. }
\label{collinear}
\end{figure*}
The magnetic order is considered to be characterized by a propagation vector \textbf{k} = (0,0,1) in the \hmn{R-3c} metric, as determined from the Le Bail fit. This is a model-free fit in which peak matching is tested with a certain propagation vector included as an additional phase. The propagation vector found here is at the $\Lambda$ point of the Brillouin zone, $\Lambda$ = (0,0,g), where g can take any value by symmetry, i.e. it is in general incommensurate. In this case it is considered to be locked to (0,0,1). It should be noted that this is not equivalent to the $\Gamma$ point (0,0,0) because of the presence of the R-centering translations. In the primitive rhombohedral unit cell the propagation vector is \textbf{k$_p$} = (1/3, 1/3, 1/3). For \textbf{k} = $\Lambda$ in \hmn{R-3c} there are three possible small irreducible representations (irreps) of the \textbf{k}-vector group: $\Lambda_1$, $\Lambda_2$ and $\Lambda_3$, which are one-, one- and two-dimensional, respectively (we use the nomenclature for irreps tabulated in \cite{Campbell2006}). For Fe in the 6\textit{b} (0,0,0) position the magnetic representation decomposes as $\Gamma_{mag} = 1 \Lambda_1 \oplus 1 \Lambda_2 \oplus 2 \Lambda_3$. These irreps and the corresponding basis vectors are listed in Table \ref{Basireps table}. The irreps $\Lambda_{1}$ and $\Lambda_{2}$ force the spins to be directed only along the \textit{c}-axis and have to be rejected because of the presence of a strong (001) magnetic peak in our experimental data. The solution is inevitably $\Lambda_{3}$. For this irrep all the basis vectors are in the \textit{ab}-plane. The irrep $\Lambda_3$ is two-dimensional and enters twice in the magnetic representation. This, together with the fact that the \textbf{k}-vector (0,0,g) is not equivalent to (0,0,-g) by symmetry, allows the symmetry to be reduced even down to the space group \hmn{P1}. There are 14 possible Shubnikov groups for a magnetic ordering according to the irrep mLD3, as found by the \emph{ISODISTORT} software.
Among them there are four maximal subgroups: \hmn{P3_{2}21}, \hmn{P3_{2}2'1}, \hmn{C2/c} and \hmn{C2'/c'}. In the following we restrict the consideration to the maximal subgroups, for two reasons. Firstly, as will be shown below, the goodness of fit for some of them is as good as that of a Le Bail fit. Secondly, the latter two groups allow two Fe sites, which could be compatible with the CO. It is worth noting that the trigonal space groups \hmn{P3_{2}21} and \hmn{P3_{2}2'1} have enantiomorphic pairs that give an equivalent description, namely \hmn{P3_{1}21} and \hmn{P3_{1}2'1}, respectively. The choice of space group between the pairs implies a particular domain choice. The enantiomorphic pair group corresponds to an equivalent structure related by the lost inversion center, and could equally have been used to describe the proposed magnetic structure.
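The relation quoted above between the hexagonal propagation vector \textbf{k} = (0,0,1) and its primitive rhombohedral form \textbf{k$_p$} = (1/3, 1/3, 1/3) follows from projecting \textbf{k} onto the primitive rhombohedral lattice vectors. A short check of this arithmetic (our own illustration, assuming the standard obverse setting):

```python
from fractions import Fraction as F

# Primitive rhombohedral lattice vectors expressed in hexagonal
# fractional coordinates (obverse setting).
A_RH = [
    [F(2, 3),  F(1, 3),  F(1, 3)],   # a1_rh
    [F(-1, 3), F(1, 3),  F(1, 3)],   # a2_rh
    [F(-1, 3), F(-2, 3), F(1, 3)],   # a3_rh
]

def hex_to_rhombohedral(k_hex):
    """Components of a propagation vector on the primitive rhombohedral basis:
    k_p,i = k_hex . a_i(rh)."""
    return [sum(ai * ki for ai, ki in zip(row, k_hex)) for row in A_RH]

k_hex = [F(0), F(0), F(1)]
print([str(x) for x in hex_to_rhombohedral(k_hex)])   # ['1/3', '1/3', '1/3']
```

Since only the third component of \textbf{k} is nonzero, the result is independent of the obverse/reverse choice.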
\subsubsection{Helical model}
We first consider the most symmetric solution \hmn{P3_{2}21} for the irrep mLD3, which is generated by the order parameter (OP) direction mLD3 (0,0,a,0) \cite{Campbell2006}. It fits the neutron diffraction data nicely ($\chi^2$ = 2.039, $R_{\rm{mag}}$ = 3.39$\%$). The magnetic R-factors are the same as those obtained for the Le Bail fit of the magnetic peaks, where all peak intensities are treated independently; this implies that the model cannot be improved any further. This model allows the presence of a secondary OP from the one-dimensional irrep mGM1+ in addition to the primary OP of mLD3, which results in an additional spin component along the \textit{c}-axis (see Ref. \cite{Gallego2016} for a general description of the symmetry concepts). This is a good example of a case where the combined irrep approach, with the restriction coming from a particular magnetic space group consistent with the primary irrep, enables direct detection of an additional secondary component in the spin arrangement from the different irrep mGM1+. In the traditional approach, which uses only irrep basis functions and is restricted in principle to the single irrep mLD3, this additional AFM canting would be impossible. The fit is considerably improved when the secondary mode mGM1+ is taken into account, as evidenced by the above goodness-of-fit indicators in comparison to $\chi^2$ = 5.170, $R_{\rm{mag}}$ = 10.50$\%$ when only the single irrep mLD3 is considered. The contribution from the secondary mode overlaps with that from the nuclear diffraction, but there is no correlation between them in the present case. Firstly, owing to the wide Q-range and only one free structure parameter (the \textit{x}-position of the oxygen atom), all nuclear contributions are practically fixed. Secondly, there are some peaks with a significant contribution from the \textit{c}-axis canting which are extinct for the nuclear phase due to the \hmn{R-3c} symmetry, for instance the (011) peak at 2$\theta$ = 24.4$^\circ$.
We note that the intensity of the above peak (and of the other similar ones) is also zero for the main mLD3 component, which ensures convergence of the fit with the secondary mode.
In this model, the Fe cations are chirally arranged in the unit cell; all the moment directions are dictated by symmetry: the projection of the moments onto the \textit{ab}-plane propagates helically along the \textit{c}-axis with the \textbf{k}-vector $\Lambda$, and the moment projections onto the \textit{c}-axis are antiferromagnetically stacked (see Fig. \ref{helical}). Only a single Fe site is allowed by symmetry, with a magnetic moment of 3.46(2) $\mu_{\rm{B}}$ at 2 K (see Fig. \ref{Moment&lattice}). This model appears to exclude long-range CO or CD of the Fe ions.
The magnetic moment of Fe obtained from the refinement of the DMC data evolves with temperature and shows a first-order-like transition at $T_{\rm{N}}$, with no significant change below $T_{\rm{N}}$. The moments obtained at 40 K and 20 K are comparable to the averaged moment from the study of Battle \textit{et al.} ($\sim$3.31 $\mu_{\rm{B}}$ at 50 K) \cite{Battle1990} but much higher than that of Yang and coworkers ($\sim$2.43 $\mu_{\rm{B}}$ at 15 K) \cite{Yang2003}. The lattice parameters obtained from the refinement of the DMC data show a discontinuity at $T_{\rm{N}}$, which in this scenario may be ascribed to magnetostriction effects.
In the second trigonal group, \hmn{P3_{2}2'1}, the in-plane helical configuration is similar to that of \hmn{P3_{2}21}, but the secondary spin component is mGM2+ (ferromagnetic (FM) along the \textit{c}-axis) and does not yield a convergent fit to the data.
\begin{figure}[bp]
\centering
\leavevmode
\includegraphics[width=0.50\textwidth]{C2c_1p15.eps}
\caption{(Color online) The Rietveld refinement of the neutron diffraction data of La$_{1/3}$Sr$_{2/3}$FeO$_3$ collected on HRPT at 2 K ( $\lambda$ = 1.15 {\AA}), based on the collinear \hmn{C2/c} model. The ticks below the pattern are the Bragg peak positions for the nuclear and magnetic scattering.}
\label{C2c_1p15}
\end{figure}
\subsubsection{Collinear model}
Since CO was reported in the literature for this material \cite{Park1999}, we also studied less symmetric models that could be compatible with CO. The maximal symmetric subgroups would be \hmn{C2/c} and \hmn{C2'/c'}, generated by the OP directions mLD3 (0,0,a,a) and (a,-a,0,0), respectively, based on the propagation vector star (+$\Lambda$,-$\Lambda$). Both groups produce similar descriptions of the experimental data: an amplitude modulation with two independent Fe moments. Both groups can produce the same spin configuration, with, however, different moment directions: for \hmn{C2/c} the moments are along the \textit{a}-axis (shown in Fig. \ref{collinear}), while for \hmn{C2'/c'} they are along the \textit{b}-axis (not shown). The spins of the Fe ions are aligned collinearly. The couplings are FM between ions of different charge and AFM between those of the same charge. In both cases this spin configuration is generated by the mLD3 and mGM3+ irreps, the latter being a secondary OP. The contribution of the $\Gamma$ point is important, because it not only improves the fitting of the magnetic peaks, but also allows the proposed CO sequence. The magnetic configurations for the other spin components are similar but not identical in the two groups, and releasing them does not give a convergent fit, as we explain below in this section. In the following we show only the results for the case of \hmn{C2/c}. The Fe position splits from 6\textit{b} in \hmn{R-3c} into 4\textit{a} and 8\textit{f} in \hmn{C2/c} (see Table \ref{Structure parameters}). When only mLD3 is considered, the fitting of the magnetic peaks of the neutron diffraction pattern at 2 K is poor ($\chi^2$ = 3.093, $R_{\rm{mag}}$ = 9.39$\%$). It gives an AFM spin configuration similar to that shown in Fig. \ref{collinear}; however, the CO sequence suggested by the relative moment sizes is ...-Fe$^{3+}$-Fe$^{5+}$-Fe$^{5+}$-..., which is not consistent with the previous studies \cite{Battle1990,Yang2003,Park1999}.
The mGM3+ mode may give moments along the \textit{a}-axis. When it is taken into account, the magnetic peaks can be fitted well (see Fig. \ref{collinear}). The fit yields $\chi^2$ = 4.140, $R_{\rm{mag}}$ = 4.26$\%$, slightly worse than that for the helical model. In this model, there are two Fe sites dictated by the space-group symmetry, with the CO sequence ...-Fe$^{5+}$-Fe$^{3+}$-Fe$^{3+}$-..., where Fe$^{3+}$ and Fe$^{5+}$ correspond to the 8\textit{f} and 4\textit{a} positions in the \hmn{C2/c} symmetry, respectively. The refined magnetic moments of the nominal Fe$^{5+}$ and Fe$^{3+}$ are 3.26(3) $\mu_{\rm{B}}$ and 3.67(2) $\mu_{\rm{B}}$, respectively. The average magnetic moment is 3.53 $\mu_{\rm{B}}$, comparable to the value of the single moment obtained from the helical model.
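The quoted average follows directly from the site multiplicities (one 4\textit{a} moment per two 8\textit{f} moments). A trivial verification of our arithmetic, not part of the refinement:

```python
# Site-averaged Fe moment in the C2/c collinear model: the 4a (nominal Fe5+)
# and 8f (nominal Fe3+) sites occur in a 1:2 ratio.
m_4a, m_8f = 3.26, 3.67                 # refined moments, in mu_B
avg = (1 * m_4a + 2 * m_8f) / 3
print(round(avg, 2))                    # 3.53, close to 3.46(2) from the helical fit
```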
We also studied in more detail the possibility of other components for the Fe spins, in particular a \textit{c}-axis canting similar to that in the helical model. Components of the magnetic moment are possible along the \textit{a}-, \textit{b}- and \textit{c}-axes in \hmn{C2/c}. The AFM configuration for the Fe$^{5+}$ and Fe$^{3+}$ spins shown in Fig. \ref{collinear} is possible only along the \textit{a}-axis; the component along the \textit{b}-axis is FM, and the component along \textit{c} is AFM. In the \hmn{C2/c} group, similarly to \hmn{P3_{2}21}, it is possible to have secondary symmetry modes from the $\Gamma$ point, mGM3+ (as explained above in this section) and/or mGM1+. The irrep mGM1+ gives the same AFM structure of the \textit{c}-component as in the helical model for the Fe1 and Fe2 sites together. We attempted a fit with this model, but it did not converge. Convergence could not be reached either when the component along the \textit{b}-axis was additionally released for refinement.
\begin{table*}[htbp]
\caption{The hyperfine parameters of La$_{1/3}$Sr$_{2/3}$FeO$_3$ at 300 K, 200 K and 4 K: line width $\Gamma$, isomer shift $\delta$, apparent quadrupole splitting 2$\epsilon$, and hyperfine field $H$. }
\label{Hyperfine parameters}
\begin{ruledtabular}
\begin{tabular}{cccccc}
& \textrm{Proportion (\%)}& $\Gamma$ (mm/s) & $\delta$ (mm/s) & 2$\epsilon$ (mm/s) & $H$ (T)\\
\colrule
300 K & 100 & 0.37(2) & 0.13(2) & - & -\\
200 K & 100 & 0.35(2) & 0.20(2) & - & -\\
\makecell{4 K, site A\\4 K, site B} & \makecell{64.9\\33.3} & \makecell{0.36(2)\\0.31(2)} & \makecell{0.38(2)\\-0.02(2)} & \makecell{0.00(2)\\0.00(2)} & \makecell{46.4(2)\\26.5(2)}\\
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{figure}[bp]
\centering
\leavevmode
\includegraphics[width=0.50\textwidth]{Mossbauer.eps}
\caption{(Color online) The M\"ossbauer spectra of La$_{1/3}$Sr$_{2/3}$FeO$_3$ measured at 300 K, 200 K and 4 K.}
\label{Mossbauer}
\end{figure}
The magnetic moment on the Fe$^{3+}$ site is larger than that on the Fe$^{5+}$ site, but the difference in amplitude is much smaller than expected for the 3+ and 5+ valences. Thus this model does not support an ideal CO, but it does not exclude a partial CD. In the present case of the propagation vector \textbf{k$_p$} = (1/3, 1/3, 1/3), a deviation of the crystal structure from the paramagnetic \hmn{R-3c} symmetry should result in additional satellite reflections appearing at the same positions as the magnetic satellites. This makes the separation of the nuclear and magnetic contributions more difficult. However, the intensities of the structural satellites would not be suppressed by the magnetic form factor at large values of the momentum transfer Q. A neutron diffraction pattern measured at $\lambda$ = 1.15 {\AA} allows us to go up to Q$_{max}$ = 11 {\AA}$^{-1}$. A detailed inspection of the measured pattern did not reveal any separate isolated diffraction peaks allowed in the \hmn{C2/c} space group at high Q (see Fig. \ref{C2c_1p15}). We also tried releasing the atomic positions in the \hmn{C2/c} model from the ideal average positions given by the \hmn{R-3c} paramagnetic group (see Table \ref{Structure parameters}), but we were not able to obtain a convergent fit.
\subsection{Hyperfine structure}
The $^{57}$Fe M\"ossbauer spectra recorded at 300 K, 200 K and 4 K are shown in Fig. \ref{Mossbauer}. The spectrum at 300 K comprises a single broad line. Since the deviation from local cubic symmetry at the Fe site is very weak, it has been fitted using a singlet. The fitted hyperfine parameters are given in Table \ref{Hyperfine parameters}. Note that fits using a doublet yield an isomer shift ($\delta$ $\sim$ 0.13 mm/s) identical to that obtained with a singlet, and a very small quadrupolar interaction ($\Delta$ $\sim$ 0.12 mm/s) with a slightly reduced linewidth ($\Gamma$ $\sim$ 0.32 mm/s). The spectrum recorded at 200 K, just above $T_{\rm{MI}}$, can be fitted in the same manner as the room-temperature one. The $\delta$ value above $T_{\rm{MI}}$ lies between those expected for Fe$^{3+}$ and Fe$^{4+}$. It thus agrees with the formal charge Fe$^{3.66+}$ deduced from the chemical formula.
The spectrum recorded at 4 K comprises two sextets of unequal intensity that correspond to two M\"ossbauer sites, A and B. The fitted hyperfine parameters are given in Table \ref{Hyperfine parameters}. They are in good agreement with those of previous M\"ossbauer studies \cite{Takano1981,Battle1988,Gallagher1967,Chernyshev2014}. In addition, minor non-magnetic contributions in the central part of the velocity scale were also taken into account; however, they represent less than 2\% of the resonant area.
The two sextets have different isomer shifts and hyperfine fields and thus correspond to different Fe charge states. In contrast with conclusions drawn from previous M\"ossbauer studies \cite{Takano1981,Battle1988}, the less intense sextet (Fe B site, $\sim$ 34\%) does not correspond to the rare Fe$^{5+}$ charge state, because its isomer shift ($\sim$ -0.02 mm/s) is not negative enough. The $\delta$ and \textit{H} values are, however, lower than those of Fe$^{4+}$ in SrFeO$_3$ ($\delta$ $\sim$ 0.146 mm/s; $H$ $\sim$ 33.1 T) \cite{Gallagher1964}, suggesting that site B corresponds to a non-integer charge state intermediate between Fe$^{4+}$ and Fe$^{5+}$. Similarly, the hyperfine parameters of site A ($\delta$ $\sim$ 0.38 mm/s; $H$ $\sim$ 46.4 T), whose spectral weight is twice that of the Fe B site, do not correspond to those of `pure' Fe$^{3+}$ as in $\alpha$-Fe$_2$O$_3$ ($\delta$ $\sim$ 0.48 mm/s; $H$ $\sim$ 54 T) \cite{Yoshida2013}. This suggests that the Fe A site also has a non-integer charge state, slightly higher than trivalent. We hence conclude that the charge difference below $T_{\rm{MI}}$ is rather limited, involving two iron sites with non-integer charge states Fe$^{(3.66-\zeta)+}$ and Fe$^{(3.66+2\zeta)+}$ for Fe A and Fe B, respectively. Although the Fe charge states cannot be determined precisely, we estimate $0.2 < \zeta < 0.5$. This agrees with the conclusion of the M\"ossbauer study in Ref. \cite{Chernyshev2014}. From electronic spectroscopy data, Herrero-Martin \textit{et al.} \cite{Martin2009} also concluded that the charge segregation is modest. However, they also concluded that the higher charge state has twice the spectral weight of the lower one, which is not consistent with the present and past \cite{Takano1981,Battle1988,Gallagher1967,Chernyshev2014} M\"ossbauer data.
\subsection{Discussion}
The collinear model seems to be consistent with the present and previous results of M\"ossbauer spectroscopy \cite{Takano1981,Chernyshev2014} and the previous electron diffraction study \cite{Li1997}. The M\"ossbauer data can be simply analyzed by considering that the Fe$^{3.66+}$ disproportionates below $T_{\rm{MI}}$ into two Fe sites, Fe$^{(3.66-\zeta)+}$ and Fe$^{(3.66+2\zeta)+}$, in the ratio 2:1. This agrees with the collinear model with the two Fe sites in the 8\textit{f} and 4\textit{a} positions corresponding to M\"ossbauer sites A and B, respectively. However, the magnetic moments obtained from the refinement of the neutron diffraction data are not fully consistent with the fitted hyperfine field values. The hyperfine field is built up from several contributions: the Fermi contact field (valence and core), the dipolar field and the orbital field \cite{Novak2010}. Although only the core contribution to the Fermi contact field scales with the magnetic moment, the hyperfine-field to magnetic-moment ratio generally lies in the 10-15 T/$\mu_{\rm{B}}$ range for Fe. Hence, the Fe moments deduced from the M\"ossbauer hyperfine fields at sites A (8\textit{f}) and B (4\textit{a}) lie in the ranges $\sim$ 3.1-4.6 $\mu_{\rm{B}}$ and $\sim$ 1.8-2.7 $\mu_{\rm{B}}$, respectively. The refined magnetic moment at the 4\textit{a} position is significantly higher (3.26 $\mu_{\rm{B}}$), which would imply a conversion factor as low as $\sim$ 8 T/$\mu_{\rm{B}}$.
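The moment ranges quoted above follow from dividing the fitted hyperfine fields by the assumed 10-15 T/$\mu_{\rm B}$ conversion factor; a minimal numerical sketch (all inputs taken from the text and Table \ref{Hyperfine parameters}):

```python
# Convert fitted hyperfine fields to Fe moment ranges using the commonly
# assumed 10-15 T per Bohr magneton proportionality for iron.
H_A, H_B = 46.4, 26.5          # hyperfine fields in tesla (sites A and B)
conv_lo, conv_hi = 10.0, 15.0  # T / mu_B

range_A = (H_A / conv_hi, H_A / conv_lo)  # ~ (3.1, 4.6) mu_B for site A
range_B = (H_B / conv_hi, H_B / conv_lo)  # ~ (1.8, 2.7) mu_B for site B

# The refined neutron moment at the 4a (site B) position, 3.26 mu_B,
# would instead imply a conversion factor of only ~8 T / mu_B:
factor_4a = H_B / 3.26
print(range_A, range_B, factor_4a)
```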
The helical model appears inconsistent with the above results; however, it cannot be fully ruled out. One possibility is that electronic relaxations occur between two charge states at all temperatures. Above the transition the relaxations are fast, so a single state is observed; below $T_{\rm{MI}}$ they slow down beyond the M\"ossbauer probing time ($10^{-7}$ s), so the two charge states are resolved. In this scenario, the charge separation below $T_{\rm{MI}}$ would only be apparent, and only a single Fe site would be observed by neutron diffraction. The mean hyperfine field ($\sim$ 39.6 T) and the refined iron moment in the helical model (3.46 $\mu_{\rm{B}}$) yield a conversion factor of $\sim$ 11.4 T/$\mu_{\rm{B}}$, which lies in the usual 10-15 T/$\mu_{\rm{B}}$ range. The other possibility is that at low temperatures the Fe cations locally adopt two different valences, detectable by M\"ossbauer spectroscopy and electron diffraction, while the CO is not long-ranged. Moreover, the electrons are partially itinerant below $T_{\rm{MI}}$, which makes a short-range CO more likely. It is also worth adding that a helical model is considered to be energetically more favorable than a collinear AFM state \cite{Shraiman1989,Luscher2007}. A spiral structure was proposed for the spin-glass state of La$_{2-x}$Sr$_x$CuO$_4$ \cite{Luscher2007}. In La$_{1/3}$Sr$_{2/3}$FeO$_3$, such a spin-glass ground state is also possible (see Fig. \ref{ResistivityM}).
Next we would like to point out some implications of the one-Fe helical model. It suggests that the first-order-like MI transition is driven purely by magnetic ordering. This is in qualitative agreement with a recent experimental observation \cite{Devlin2014}: a negative magnetoresistance and a sign reversal of the Hall effect below $T_{\rm{MI}}$ are reported for R = La, and the exotic low-temperature transport properties are ascribed to the unusually long-range periodicity of the AFM ordering.
\section{SUMMARY AND CONCLUSIONS}
The low-temperature magnetic structure of La$_{1/3}$Sr$_{2/3}$FeO$_3$ has been revisited by neutron powder diffraction and complementary M\"ossbauer spectroscopy. Based on the symmetry analysis, two crystallographic magnetic models are proposed: a chiral helical model with maximal symmetry \hmn{P3_{2}21} and a collinear \hmn{C2/c} or \hmn{C2'/c'} model. We found that both models fit the neutron diffraction pattern at 2 K equally well. The less symmetric \hmn{C2/c} or \hmn{C2'/c'} model allows charge ordering of the Fe ions, but our experimental data do not show any evidence of the expected structural distortion. The M\"ossbauer spectroscopy results appear to support the collinear model but cannot fully rule out the helical one. The latter model suggests that the metal-insulator transition is of magnetic origin. Polarised neutron diffraction on single crystals is needed to verify the validity of either model.
\begin{acknowledgments}
F.L. and R.Y. acknowledge the financial support from the SNSF (Schweizerischer Nationalfonds zur F\"orderung der Wissenschaftlichen Forschung) (Grant No. 200021\_157009). R.Y. also acknowledges the financial support from Horizon 2020 (Grant No. 654000). We thank Prof. M. Kenzelmann for fruitful discussions. The work was partially performed at the Swiss Spallation Neutron Source SINQ (PSI).
\end{acknowledgments}
\section{Introduction}
Several cosmological models and simulations predict that high-redshift quasars (QSOs) hosting supermassive ($M_{\rm BH} \gtrsim 10^9M_{\odot}$) black holes (SMBHs) reside in the most massive dark matter halos and that their environments harbor galaxy overdensities formed by hierarchical merging of many galaxies \citep[e.g.,][]{Romano-Diaz11,Costa14}. In contrast, some other simulations and studies suggest that such QSOs may not necessarily be in the most massive halos \citep[e.g.,][]{Overzier09,Angulo12,Fanidakis13,Orsi16}. To investigate the environment in which QSOs hosting SMBHs reside, it is essential to actually observe galaxies around QSOs and examine whether they exhibit overdensities. Meanwhile, galaxy overdensities at early cosmic epochs, often called protoclusters, are thought to be progenitors of the massive clusters of galaxies seen in the present-day universe \citep{Overzier16}. Hence, by observing such galaxy overdensities, their associated galaxies and QSOs, if any, over cosmic time, we can understand how clusters, galaxies and SMBHs have formed and evolved and how overdense environments have affected galaxy formation and evolution.
Some observations to date have found significant galaxy overdensities around QSOs at various epochs $z\sim 2$--6 \citep[e.g.,][]{Steidel05,Kim09,Capak11,Swinbank12,Husband13,Morselli14,Balmaverde17,Decarli17}. Conversely, other observations revealed average galaxy densities or even underdensities around $z\sim 2$--7 QSOs \citep[e.g.,][]{Francis04,Kashikawa07,Kim09,Banados13,Simpson14,Mazzucchelli17,Kikuta17,Uchiyama17}. This would imply that QSOs may not always be hosted by the most massive halos and/or the densest environments.
However, some of these observations typically probed areas of at most tens of arcmin$^2$ around QSOs \citep{Kim09,Banados13,Mazzucchelli17,Simpson14}. In regions close to a luminous QSO, the QSO's intense ultraviolet (UV) radiation may be able to suppress galaxy formation by evaporating gas in dark halos before it cools and forms stars (QSO negative feedback), possibly resulting in a lack of observed galaxies even if an underlying halo excess exists \citep[e.g.,][]{Efstathiou92,Thoul96,Benson02,Kashikawa07,Okamoto08}. The QSO radiation also ionizes neutral hydrogen in the surrounding intergalactic medium (IGM), which would otherwise shield the galaxy-forming gas in halos from the QSO radiation. At the reionization epoch, the fraction of residual neutral hydrogen in the IGM may vary significantly from one line of sight to another. Thus, various combinations of QSO radiation strength and residual neutral hydrogen, which change from site to site, may cause the wide variety of galaxy densities (from overdensities to underdensities) observed in the close vicinities of $z \gtrsim 6$ QSOs \citep{Kim09}. To overcome this issue, we have to observe galaxy sky distributions and number densities over much wider areas around $z \gtrsim 6$ QSOs.
A few previous studies have observed wide areas (hundreds of arcmin$^2$) around $z \gtrsim 6$ QSOs. \citet{Utsumi10} imaged the $z=6.417$ QSO, CFHQS J2329--0301, hosting a black hole with $M_{\rm BH} \sim 2.5 \times 10^8M_{\odot}$ \citep{Willott10b}, using the 8.2m Subaru Telescope with its wide-field ($27' \times 34'$) prime focus camera, Suprime-Cam \citep{Miyazaki02}, in the broadband $i'$, $z'$ and $z_{\rm R}$ filters. They claim that there is a possible large-scale ($\sim 6 \times 6$ physical Mpc$^2$) overdensity of $z'$-band dropout Lyman break galaxies (LBGs) around the QSO. However, as already pointed out by \citet{Willott11}, most of them are visible in the $i'$-band images, and thus some of them might be located at $z<6$ (unless they have some detectable flux left in the spectral trough bluewards of Ly$\alpha$) because the red edge of the $i'$-band is $\lambda \sim 8500$\AA, corresponding to Ly$\alpha$ at $z\sim6$.
Meanwhile, \citet{Morselli14} and \citet{Balmaverde17} imaged $23' \times 25'$ areas around four $z=5.95$--6.41 QSOs all hosting $M_{\rm BH} = 1.0$--$4.9 \times 10^9M_{\odot}$ SMBHs using the Large Binocular Camera at the Large Binocular Telescope or the Wide-field InfraRed Camera (WIRCam) at the Canada-France-Hawaii Telescope (CFHT) and broadbands. They found that all the QSO fields show LBG ($i$-band dropout) densities higher than those in their comparison blank sky fields on a large scale ($\sim 8 \times 8$ physical Mpc$^2$). Hence, these QSOs might live in massive halos embedded in large scale (tens of physical Mpc$^2$) galaxy overdensities.
Nonetheless, these wide-field studies as well as many other previous smaller-field surveys observed only color-selected LBG candidates around $z \gtrsim 6$ QSOs. Their potential redshifts span a wide range of $\Delta z \sim 1$, as they are detected by the broadband filter dropout technique. On the other hand, a galaxy overdensity at $z\sim6$ seems to have a size of $\Delta z < 0.1$ \citep[e.g., see][for the case of a spectroscopically confirmed $z\sim6$ protocluster]{Toshikawa12,Toshikawa14}. Thus, even if we find an apparent LBG overdensity around a QSO in a projected sky plane, some of the LBGs may not be associated with the overdensity and the QSO. Also, as LBGs tend to be a relatively massive, older galaxy population, the previous studies missed young, low-mass galaxies such as Ly$\alpha$ emitters (LAEs) around QSOs. As LAEs are usually observed by detecting their Ly$\alpha$ emission in a narrowband filter, their redshifts span a very narrow range $\Delta z \sim 0.1$. Hence, if we find overdensities of LAEs around QSOs, they are likely associated with the QSOs, possibly forming protoclusters. However, until recently, no QSO at $z>6$ whose redshift matches the bandpass of a narrowband filter targeting $z>6$ Ly$\alpha$ emission had been found, and studies of LAEs around QSOs using narrowbands had been carried out only up to $z\sim 5.7$ \citep[e.g.,][]{Kashikawa07,Banados13,Kikuta17,Mazzucchelli17}. This has made it difficult to reliably estimate real number densities of galaxies of various ages and masses in a narrow redshift range around $z>6$ QSOs.
Another interesting aspect that has been missed when observing $z>6$ QSOs is the possible impact of galaxy overdensities on the structure of cosmic reionization. There are competing theories arguing whether ionization of neutral hydrogen occurs more rapidly in denser environments where galaxies are clustering, or not \citep[e.g.,][]{Morales10}. If $z>6$ QSOs are associated with galaxy overdensities, they can be laboratories to examine environmental effects on reionization if we can estimate galaxy densities around the QSOs accurately and have any observational probe of reionization. LAEs can be the probe as their Ly$\alpha$ luminosity function (LF) could decline or their spatial distribution could modulate as neutral hydrogen absorbs or scatters Ly$\alpha$ photons from LAEs \citep{Rhoads01,McQuinn07}. However, no previous study has observed LAEs around $z>6$ QSOs and investigated the reionization state in overdense environments.
Ultimately, if we observe sky areas much larger than the QSO radiation fields (which could suppress formation of low mass galaxies within $\sim 1$--3 physical Mpc from the QSOs; e.g., see \citet{Kashikawa07} and Section \ref{Feedback} in this paper) and detect both LBGs and LAEs (galaxies with wide ranges of masses and ages) with a wide-field camera and a narrowband ($\Delta z \sim 0.1$) matched to the QSO redshifts, we can reveal whether $z > 6$ QSOs are embedded in galaxy overdensities (the most massive halos) and how overdense environments affect early galaxy formation and reionization.
Very recently, \citet{Goto17} made a custom narrowband filter for the Subaru Telescope Suprime-Cam whose bandpass matches the redshift of the $z=6.417$ QSO, CFHQS J2329--0301 \citep{Willott10b}, which was observed by \citet{Utsumi10} (see above). Despite their wide-field ($27' \times 34'$) narrowband imaging, they did not detect any LAEs around the QSO. They mentioned that the QSO UV radiation could suppress formation of LAEs in lower-mass halos ($< 10^{10}M_{\odot}$) within $\sim1$ physical Mpc from the QSO. However, they could not explain why they did not detect any LAEs over most of their survey area, which is probably not affected by the QSO radiation. The QSO may not reside in any galaxy overdensity environment even on a large scale, as it hosts a relatively less massive black hole with $M_{\rm BH} \sim 2.5 \times 10^8M_{\odot}$ \citep{Willott10b}. However, even if this is the case, it still cannot explain why \citet{Goto17} did not detect any LAEs in their entire survey area. One possibility is the shallowness of their images, especially the narrowband image, which may have resulted in a potentially lower line sensitivity than expected. This is because some ($\sim$ 1/4) of their narrowband exposures were taken under poor transparency and because the strong skyline existing within the bandpass wavelengths of their narrowband filter may possibly reduce the sensitivity (Goto et al.~2017, private communication). Another possibility is that the range over which the QSO UV radiation effectively suppresses formation of LAEs is wider than expected.
To clarify the environments in which QSOs hosting SMBHs reside at the reionization epoch $z\gtrsim6$, we have to conduct wide-field observations of both LAEs and LBGs around QSOs hosting an SMBH with $M_{\rm BH} \geq 10^9M_{\odot}$, down to a sufficiently deep flux limit for which we already know how many LAEs and LBGs to expect in the absence of a QSO, either from previous LAE/LBG studies or from carefully designed equivalent observations of LAEs/LBGs in a comparison sky field where there is no QSO.
Recently, \citet{Venemans13} discovered a $z = 6.6145\pm0.0001$ QSO, VIKING J030516.92--315056.0 (hereafter J0305--3150), hosting a $M_{\rm BH} \sim 1.0 \times 10^9M_{\odot}$ SMBH. This redshift has been reliably measured from the [CII] line detected by the Atacama Large Millimeter/submillimeter Array \citep{Venemans16} and fortunately fits in the bandpass of the narrowband filter NB921 ($\lambda_{\rm c} = 9196$\AA~and $\Delta \lambda_{\rm FWHM} = 132$\AA~corresponding to $z\sim6.51$--6.62 Ly$\alpha$ emission; see Figures \ref{FilterTransmission} and \ref{NB921_LyaPeakDist}) of the Subaru Telescope Suprime-Cam. This gives us the first opportunity for a wide-field (hundreds of arcmin$^2$) narrowband and broadband search for both LAEs and LBGs around a QSO hosting an SMBH in the reionization epoch at $z>6$. At this moment, J0305--3150 is the highest redshift QSO for which such observations are possible. Moreover, the red-sensitive CCDs of the Suprime-Cam allow us to detect faint LAEs and LBGs at $z\sim6.6$ to fairly deep limits with modest amounts of observing time.
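The quoted redshift coverage of NB921 for Ly$\alpha$ follows directly from the filter parameters; a minimal sketch (rest-frame Ly$\alpha$ wavelength 1215.67 \AA; central wavelength and FWHM taken from the text):

```python
# Map the NB921 bandpass onto Lyman-alpha redshift:
# z = lambda_obs / lambda_rest - 1, evaluated at the FWHM edges.
LYA_REST = 1215.67           # angstrom, rest-frame Lyman-alpha
lam_c, fwhm = 9196.0, 132.0  # angstrom, NB921 center and FWHM

z_lo = (lam_c - fwhm / 2) / LYA_REST - 1
z_hi = (lam_c + fwhm / 2) / LYA_REST - 1
z_qso = 6.6145               # [CII] redshift of J0305-3150

print(f"NB921 covers Lya at z = {z_lo:.2f}-{z_hi:.2f}")  # ~6.51-6.62
assert z_lo < z_qso < z_hi   # the QSO redshift falls inside the bandpass
```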
\begin{figure}
\epsscale{1.17}
\plotone{f1.eps}
\caption{Transmission curves of the Suprime-Cam broadband and narrowband filters used for our study ($i'$, $z'$ and NB921; solid curves) as well as the broadbands we did not use but \citet{Taniguchi05} additionally used to select $z\sim6.6$ LAEs in SDF ($B$, $V$ and $R_c$; dashed curves, see Section \ref{LAE-Selections} for details). The transmission curves include the CCD quantum efficiency (MIT-Lincoln Laboratory CCDs for $B$, $V$ and $R_c$, and Hamamatsu CCDs for $i'$, $z'$ and NB921), the reflection ratio of the telescope primary mirror, the transmission of the prime focus optics and the atmospheric transmission (airmass $\sec z=1.2$). The OH night sky lines are also overplotted with the dotted curve.\label{FilterTransmission}}
\end{figure}
\vspace*{0.5cm}
In addition, the NB921 filter can detect $z\sim6.6$ LAE candidates in a very narrow redshift range ($\Delta z \sim0.1$) with a fairly low contamination rate. \citet{Kashikawa06,Kashikawa11} spectroscopically identified 42 out of 58 $z\sim6.6$ LAE candidates that \citet{Taniguchi05} photometrically detected in the Subaru Deep Field \citep[SDF,][]{Kashikawa04} using NB921 and found that only $\sim 2$--19\% of them are contaminants. Hence, by observing the region around the $z=6.61$ QSO J0305--3150 in the NB921 filter and using the same LAE selection criteria, we can photometrically detect LAE candidates that are mostly real LAEs at the redshifts very close to that of the QSO. Also, these previous studies have constructed the robust $z\sim6.6$ LAE sample by using the Suprime-Cam broadband and NB921 imaging of the SDF \citep{Taniguchi05,Kashikawa06,Kashikawa11}. The SDF is a general blank field, and there is no $z\sim6.6$ QSO, no over/underdensity of $z\sim6.6$ LAEs and LBGs and no clustering of them (we show this in the subsequent sections). Hence, we can use the SDF and the LAE sample (and the LBG sample we construct in this paper) in this field as the control field and the control sample, the rigorous baseline that can be compared with the LAEs and the LBGs we detect around the $z=6.61$ QSO J0305--3150 to reveal the potential LAE/LBG overdensities, if any.
Meanwhile, it should also be noted that although the redshift of the $z=6.61$ QSO J0305--3150 is in the bandpass of the NB921 filter, it is on the red side of the bandpass, where the sensitivity to LAEs is lower. Figure \ref{NB921_LyaPeakDist} shows the transmission curve of the NB921 filter and the observed wavelength distribution of the Ly$\alpha$ line peaks of the $z \sim 6.6$ LAEs in the SDF (control field) previously detected in the NB921 imaging by \citet{Taniguchi05} and spectroscopically confirmed by \citet{Kashikawa06,Kashikawa11}. As seen in the figure, the NB921 filter has a better sensitivity to LAEs on the blue side of its bandpass, as it can detect both the Ly$\alpha$ emission and more of the UV continuum flux, while on the red side it detects the Ly$\alpha$ emission and less of the UV continuum flux. Hence, we have to keep in mind that we might miss some fraction of the LAEs around the $z=6.61$ QSO, especially those located on the far side of the QSO.
In this paper, we present the result of our Subaru Suprime-Cam wide-field broadband and narrowband NB921 search for overdensities of both LAEs and LBGs around the $z=6.61$ QSO J0305--3150. This paper is organized as follows. In Section 2, we describe our observations of the QSO field, the data reduction and the control field (SDF) data. Then, in Section 3, we select LAE and LBG candidates in the QSO and the control fields. We compare sky distributions, number densities and clustering of the LAE and LBG candidates in the QSO field with those in the control field to investigate the possibility of the existence of any galaxy overdensities around the QSO in Section 4. We summarize and conclude our study in Section 5. In Appendix A, we confirm the nonexistence of $z\sim6.6$ QSOs in the control field. Finally, in Appendix B, we examine the contamination rate of our photometric LAE samples due to not imposing non-detections in the broadbands bluewards of $z\sim6.6$ Ly$\alpha$ as a part of the LAE selection criteria. Throughout, we adopt AB magnitudes \citep{Oke74} and a concordance cosmology with $(\Omega_m, \Omega_{\Lambda}, h)=(0.3, 0.7, 0.7)$, unless otherwise specified.
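For reference, the adopted cosmology fixes the comoving geometry of the survey; the sketch below integrates $dz/E(z)$ numerically to get the line-of-sight comoving distance to the QSO redshift (a rough illustration only; the step count and the trapezoidal scheme are arbitrary choices, not anything used in the paper's analysis):

```python
import math

# Line-of-sight comoving distance in the adopted flat
# (Omega_m, Omega_L, h) = (0.3, 0.7, 0.7) cosmology:
# D_C = (c/H0) * integral_0^z dz' / E(z'),  E(z) = sqrt(Om*(1+z)^3 + OL).
C_KM_S = 299792.458
H0 = 70.0          # km/s/Mpc, i.e. h = 0.7
OM, OL = 0.3, 0.7

def comoving_distance(z, n=10000):
    """Trapezoidal integration of dz / E(z); returns Mpc."""
    dz = z / n
    inv_e = [1.0 / math.sqrt(OM * (1 + i * dz) ** 3 + OL) for i in range(n + 1)]
    integral = dz * (sum(inv_e) - 0.5 * (inv_e[0] + inv_e[-1]))
    return C_KM_S / H0 * integral

d_c = comoving_distance(6.61)
print(f"D_C(z=6.61) ~ {d_c:.0f} Mpc")  # roughly 8.5 comoving Gpc
```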
\begin{figure}
\epsscale{1.5}
\hspace*{-2cm}
\plotone{f2.eps}
\caption{The transmission curve of the Subaru Suprime-Cam narrowband NB921 filter (dashed curve) and observed wavelength distribution of the Ly$\alpha$ line peaks of the $z \sim 6.6$ LAEs in the SDF (Control Field) previously detected in the NB921 imaging by \citet{Taniguchi05} and spectroscopically confirmed by \citet{Kashikawa06,Kashikawa11} (solid line). The top axis indicates the redshift of Ly$\alpha$ emission ($z_{{\rm Ly}\alpha}$) corresponding to the wavelength at the bottom axis. The vertical dotted and solid lines are the central wavelength (9196\AA) of the NB921 filter and the redshift of the QSO J0305--3150, respectively.\label{NB921_LyaPeakDist}}
\end{figure}
\vspace*{0.5cm}
\begin{deluxetable*}{ccccccccc}
\tabletypesize{\scriptsize}
\tablecaption{Summary of the Imaging Data of the $z\sim 6.6$ QSO Field and the Control Field SDF\label{ImagingData}}
\tablewidth{510pt}
\tablehead{
\colhead{Field} & \colhead{Band} & \colhead{$t_{\rm exp}$$^{\rm c}$} & \colhead{PSF Size$^{\rm d}$} & \colhead{Area$^{\rm e}$} & \colhead{$m_{\rm lim}$$^{\rm f}$} &$N_{\rm LAE}$$^{\rm g}$ & $N_{\rm LBG}$$^{\rm g}$ & \colhead{Observation Date}\\
\colhead{} & \colhead{} & \colhead{(min)} & \colhead{(arcsec)} & \colhead{(arcmin$^2$)} & \colhead{(mag)} & & & \colhead{}
}
\startdata
QSO$^{\rm a}$ & $i'$ & 128 & 0.91 (0.91) & 697 & 27.0 & 14 & 53 & 2014 Aug 25/27\\
& $z'$ & 220 & 0.91 (0.83) & 697 & 26.5 & & & 2014 Aug 26/27\\
& NB921 & 380 & 0.91 (0.77) & 697 & 26.5 & & & 2014 Aug 22/24/25\\
\hline
SDF$^{\rm b}$ & $i'$ & 801 & 0.98 & 876 & 27.4 & 63 & 32 & 2002 Apr 11/14, May 6, 2003 Mar 31, Apr 2/24/25/29/30\\
& $z'$ & 504 & 0.98 & 876 & 26.6 & & & 2002 Apr 9/14, 2003 Mar 7, Apr 1/28\\
& NB921 & 899 & 0.98 & 876 & 26.5 & & & 2002 Apr 9/11/14, May 6, 2003 Mar 7/8, Apr 24\\
\enddata
\tablenotetext{a}{The images of the QSO field were taken by Suprime-Cam with Hamamatsu red-sensitive fully depleted CCDs \citep{Kamata08}.}
\tablenotetext{b}{The SDF public version 1.0 images \citep{Kashikawa04} taken by Suprime-Cam with MIT-Lincoln Laboratory (MIT-LL) CCDs \citep{Miyazaki02}.}
\tablenotetext{c}{Total exposure times. The differences in exposure times between the images of the QSO field and the Control Field SDF to reach the similar depths originate from the different CCDs of the Suprime-Cam used for the observations of each field.}
\tablenotetext{d}{The FWHM of PSFs. The original images were convolved to have the common PSFs for the aperture photometry purpose. The ones in the parentheses are the original PSF FWHMs of the QSO field images before the convolution.}
\tablenotetext{e}{The image area finally used for our science analysis.}
\tablenotetext{f}{The $3\sigma$ limiting magnitude measured in a $2''$ diameter aperture.}
\tablenotetext{g}{The numbers of LAE and LBG candidates detected in each field.}
\end{deluxetable*}
\section{Observation and Data}
\subsection{The $z=6.61$ QSO Field\label{QSOQSOField}}
We imaged the field centered at the $z=6.61$ QSO J0305--3150 with Subaru Telescope Suprime-Cam in the broadbands $i'$ and $z'$ as well as the narrowband NB921. We aimed to reach depths in these bands that are as similar as possible to those of the control field (SDF) images. The observations were carried out during dark nights on 2013 November 27, and 2014 August 22 and 24--27. The sky conditions were partly clear/cloudy with a seeing of $\sim 0.''8$--$1.''3$ in 2013 and photometric with a seeing of $\sim 0.''5$--$0.''9$ in 2014. We took 240, 120--240 and 1200 second individual exposure frames with the $i'$, $z'$ and NB921 bands, respectively, using eight-point dithering patterns.
We reduced the exposure frames using the software SDFRED2 \citep{Ouchi04,Yagi02} in the same standard manner as in \citet{Kashikawa04} and \citet{Ota08}, including bias subtraction, flat-fielding, distortion correction, matching of point spread functions (PSFs) between the CCD chips, sky subtraction and masking of the shadow of the auto guider probe. Then, the dithered exposure frames were matched and stacked. We did not eventually use the exposures taken in 2013 as they were obtained under poor transparency conditions. The integration times of these stacked $i'$, $z'$ and NB921 images amount to 2.1, 3.7 and 6.3 hours, respectively. The $i'$ and $z'$ images were then registered to the NB921 image by using the positions of common stellar objects detected in these images. Finally, we corrected the astrometry of the $i'$, $z'$ and NB921 images by matching pixel positions of the stars in the images to the coordinates of them in the USNO-B1.0 catalog \citep{Monet03} with the WCSTools version 3.8.1 \citep{Mink99}.
Meanwhile, images of the spectrophotometric standard star GD71 \citep{Oke90} taken in all the bands during the observations in 2014 were used to calibrate the photometric zero points. We checked the zero points by comparing the colors of stellar objects detected in the $i'$, $z'$ and NB921 images of the QSO field and 175 Galactic stars calculated from spectra given in \citet{GunnStryker83}\footnote[1]{Taken from ftp://ftp.stsci.edu/cdbs/grid/gunnstryker/} in the $z'-{\rm NB921}$ versus ${\rm NB921}-i'$ diagram. We selected the stellar objects in the QSO field by running SExtractor version 2.8.6 \citep{BA96} on the images and using the criteria with the SExtractor parameters {\tt CLASS\_STAR} (stellarity) $> 0.98$ and {\tt FLAGS} $=0$ (no blending with neighboring objects). We found that the sequence of the stellar objects in the QSO field was offset from that of the \citet{GunnStryker83}'s Galactic stars by $\sim +0.15$--0.20 mag in $z'-{\rm NB921}$ and $\sim -0.15$--0.20 mag in ${\rm NB921}-i'$. Thus, we corrected the zero point of only the NB921 image by $+0.20$ mag and did not correct those of the $i'$ and $z'$ band images, by which the colors of the two stellar sequences became consistent within $\sim 0.05$ mag. Finally, the zero points of the QSO field images are ($i'$, $z'$, NB921) $=$ (33.52, 32.30, 32.19) mag ADU$^{-1}$. The summary of our imaging data is given in Table \ref{ImagingData}.
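The internal consistency of the adopted correction can be seen with simple arithmetic: a single shift of the NB921 zero point moves the two colors by equal and opposite amounts (a minimal sketch; the mid-range offset values below are illustrative stand-ins for the measured $\pm 0.15$--0.20 mag locus offsets, not fitted quantities):

```python
# Shifting the NB921 zero point by +delta makes every NB921 magnitude
# fainter by delta, so (z' - NB921) decreases by delta while
# (NB921 - i') increases by delta.
delta = 0.20          # mag, adopted NB921 zero-point correction

offset_z_nb = +0.175  # illustrative locus offset in z' - NB921
offset_nb_i = -0.175  # illustrative locus offset in NB921 - i'

residual_z_nb = offset_z_nb - delta  # remaining offset after correction
residual_nb_i = offset_nb_i + delta
print(residual_z_nb, residual_nb_i)  # both within the quoted ~0.05 mag
```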
\subsection{The Control Field -- Subaru Deep Field\label{ControlField}}
The objective of this study is to investigate how different the galaxy environment around the $z=6.61$ QSO is from a general field where there is no $z \sim 6.6$ QSO, no over/underdensity of LAEs/LBGs and no clustering of these galaxies. We choose the SDF as our comparison general field (hereafter ``Control Field'') because the previous studies have established a robust sample of $z\sim6.6$ LAEs with low contamination in this field \citep{Kodaira03,Taniguchi05,Kashikawa06,Kashikawa11}. Furthermore, it has been shown that the sky distribution of these LAEs is quite homogeneous and does not show any clustering, based on two-point angular correlation function (ACF), two-dimensional Kolmogorov-Smirnov test and void probability function analyses \citep{Kashikawa06}. The SDF also corresponds to one pointing of the Suprime-Cam and has a sky area and survey volume comparable to those of the QSO field (see Table \ref{ImagingData}).
In addition, we should also note and address the following four points when we adopt the SDF as the Control Field. (i) We confirm that there are no $z \sim 6.6$ QSOs in the SDF, as shown in Appendix A. (ii) Our SDF $z\sim6.6$ LAE sample is slightly different from that in the previous studies of \citet{Taniguchi05} and \citet{Kashikawa06,Kashikawa11} in that we use LAE selection criteria without the $B$, $V$ and $R_c$ bands (see Figure \ref{FilterTransmission} and the criteria (\ref{Criteria-1}) and (\ref{Criteria-2}) in Section \ref{LAE-Selections}). However, we confirm that the contamination rate is still low in Appendix B and that the LAE candidates still exhibit a sky distribution without any over/underdensity or clustering, as shown in Sections \ref{NdensityContours} and \ref{ACF}. (iii) We newly detect LBG candidates in the SDF and confirm that their sky distribution shows no over/underdensity or clustering, as shown in Sections \ref{NdensityContours} and \ref{ACF}. (iv) The photometric zero points of the $i'$, $z'$ and NB921 images of the SDF were calibrated by \citet{Kashikawa04} in a way similar to what we did for the QSO field images in Section \ref{QSOQSOField}, using colors of the \citet{GunnStryker83}'s Galactic stars. Using these photometric zero points, we confirm that the colors of stellar objects in the SDF are consistent with those of the \citet{GunnStryker83}'s Galactic stars and the stellar objects in the $z=6.6$ QSO field within $\sim 0.05$ mag in the $z'-{\rm NB921}$ versus ${\rm NB921}-i'$ diagram. Thus, the calibrations of the photometric zero points are consistent between the QSO and Control fields.
\begin{figure*}
\epsscale{1.17}
\plottwo{Newf3a.eps}{f3b.eps}
\caption{Detection completeness of the NB921 (left) and $z'$-band (right) images of the QSO field and the Control Field (SDF) per 0.5 mag bin. We can see that the completenesses in the NB921 or $z'$ band are almost the same between the two fields.\label{Completeness}}
\end{figure*}
\section{Galaxy Candidate Selection}
\subsection{Photometry, Object Catalogs and Image Depths\label{Photometry}}
In order to detect LAEs and LBGs in the QSO field, we performed photometry to make the NB921-detected and $z'$-detected object catalogs. The original PSFs of the $i'$, $z'$ and NB921 images were $0.''91$, $0.''83$ and $0.''77$, respectively. We convolved the PSFs of the $z'$ and NB921 images to match that of the $i'$ band image (the worst of the three) because we have to calculate the $i' - z'$ and $z' -$ NB921 colors of objects by measuring the $i'$, $z'$ and NB921 magnitudes using the same aperture in order to select LAE and LBG candidates (see Section \ref{LAE-Selections}--\ref{LBG-Selections}). As shown in Table \ref{ImagingData}, the final PSFs ($0.''91$) of the QSO field images are slightly better than, but comparable to, those of the Control Field images ($0.''98$).
We used the SExtractor version 2.8.6 \citep{BA96} for source detection and photometry. The Suprime-Cam CCDs have a pixel size of $0.''202$ pixel$^{-1}$. We considered an area larger than five contiguous pixels with a flux (mag arcsec$^{-2}$) greater than $2\sigma$ (i.e. two times the background rms) to be an object. Object detection was first made in the NB921 ($z'$) image, and then photometry was performed in the $i'$, $z'$ and NB921 images to detect LAE (LBG) candidates, using the double-image mode. We measured $2''$ diameter aperture magnitudes of detected objects with the {\tt MAG\_APER} parameter and total magnitudes with {\tt MAG\_AUTO}. We used a $2''$ aperture, about twice the PSF of the $i'$, $z'$ and NB921 images of both the QSO and Control fields, to measure colors of objects, especially faint ones, with a good signal-to-noise (S/N) ratio. The NB921-detected and $z'$-detected object catalogs were constructed by combining the photometry in all the $i'$, $z'$ and NB921 bands.
Meanwhile, we also measured the limiting magnitudes of the images by placing $2''$ apertures in random blank positions excluding the low S/N regions near the edges of the images (see Section \ref{LAE-Selections} and Section \ref{LBG-Selections} for the details of removing such edge regions). They are ($i'$, $z'$, NB921) $=$ (27.0, 26.5, 26.5) at $3\sigma$. As shown in Table \ref{ImagingData}, these depths are only slightly shallower than or comparable to those of the $i'$, $z'$ and NB921 images of the Control Field.
On the other hand, for the Control Field, we use the SDF public version 1.0 images and NB921-detected and $z'$-detected object catalogs\footnote[2]{Available from http://soaps.nao.ac.jp/SDF/v1/index.html} \citep{Kashikawa04} to detect LAE and LBG candidates as the control samples. We do not use the SDF $i'$ and $z'$ band images deeper than these public images \citep{Poznanski07,Graur11,Toshikawa12}. This is because the well-established $z\sim6.6$ LAE sample constructed by the previous studies by the SDF project \citep{Kodaira03,Taniguchi05,Kashikawa06,Kashikawa11} is based on these public images and catalogs and also because the depths of the public $i'$, $z'$ and NB921 images are comparable to those of the $z=6.61$ QSO field images, as shown in Table \ref{ImagingData}.
Based on \cite{Schlegel98}, we estimate the Galactic extinction to be $E(B-V)=0.0123$ ($A_{\lambda}=0.026$, 0.018 and 0.018 mag for the $i'$, $z'$ and NB921 bands) in the QSO field and $E(B-V)=0.0173$ ($A_{\lambda}=0.036$, 0.026 and 0.026 mag for the $i'$, $z'$ and NB921 bands) in the Control Field, respectively. As the amount of Galactic extinction in each band in each field and the difference between the two fields are negligibly small, we do not correct the magnitudes of detected objects for Galactic extinction.
\subsection{Detection Completeness\label{CompletenessSec}}
The fraction of real objects in an image that we can reliably detect by photometry depends on the magnitudes and blending of the objects. To examine what fraction of objects in the NB921 and $z'$ images SExtractor detects or fails to detect toward fainter magnitudes, we measured the detection completeness of our photometry, as it is important to correct for it when we derive the number counts of LAEs and LBGs and the Ly$\alpha$ LFs of LAEs later (see Sections \ref{NC} and \ref{LyaLFSec}).
Using the IRAF task {\tt starlist}, we first created $\sim 10,000$ artificial objects with the same PSFs as the real objects and random but uniform spatial and magnitude distributions, ranging from 20 to 27 mag. We spread them over the NB921 and $z'$ images of the QSO field by using the IRAF task {\tt mkobject}, allowing them to blend with one another and with real objects. Then, SExtractor was run for source detection in exactly the same way as in our actual photometry. Finally, we calculated the ratio of the number of detected artificial objects to that of created ones to obtain the detection completeness. We repeated this procedure ten times and averaged the obtained completeness. The result is shown in Figure \ref{Completeness}. The completenesses of the QSO field images are $\sim 52$\% at our LAE detection limit of NB921 $= 26.0$ and $\sim 36$\% at our LBG detection limit of $z'= 26.1$ (see Sections \ref{LAE-Selections} and \ref{LBG-Selections} for the LAE and LBG detection limits).
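The procedure above amounts to a Monte Carlo completeness measurement: inject artificial sources, re-run the detection, and record the recovered fraction per magnitude bin. The following minimal sketch illustrates the idea with a single-pixel detection model in place of the actual IRAF/SExtractor pipeline; the zero point and noise level are hypothetical, tuned only so that the 50\% point lands near mag 26, loosely echoing the measured completeness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical image parameters (illustrative, not the actual survey values).
ZERO_POINT = 28.5   # mag corresponding to 1 count
SKY_SIGMA = 5.0     # background rms in counts
N_SIM = 200         # artificial sources per magnitude bin

def inject_and_recover(mags):
    """Recovered fraction per input magnitude.

    Each simulated source is a single pixel added to pure Gaussian noise;
    'detection' means exceeding 2x the background rms, echoing the
    2-sigma detection threshold used in the actual photometry.
    """
    fractions = []
    for mag in mags:
        flux = 10.0 ** (-0.4 * (mag - ZERO_POINT))
        noise = rng.normal(0.0, SKY_SIGMA, size=N_SIM)
        fractions.append(np.mean(flux + noise > 2.0 * SKY_SIGMA))
    return np.array(fractions)

mags = np.arange(20.0, 27.5, 0.5)
completeness = inject_and_recover(mags)   # ~1 at bright mags, ~0.5 near mag 26
```

A real measurement, as described above, injects PSF-convolved sources into the actual images and re-runs SExtractor with identical parameters, so that blending and correlated noise are included.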
We also estimated the detection completeness of the NB921 and $z'$ band images of the Control Field in the same manner and show the results in Figure \ref{Completeness}. As seen in the figure, the detection completenesses of the NB921 and $z'$ images of the QSO field are fairly comparable to those of the Control Field, as expected from the same/similar limiting magnitudes of the QSO and Control field NB921 and $z'$ images (see Table \ref{ImagingData}). This also means that differences in the impact of object blending on the completeness between the QSO and Control field images are negligibly small. Consequently, we can fairly compare LAEs and LBGs selected in the QSO field and the Control Field with comparable detection completeness. The detection completeness is corrected for when the number counts of LAEs and LBGs and the Ly$\alpha$ LFs of LAEs are derived in Sections \ref{NC} and \ref{LyaLFSec}.
\subsection{Selection of $z \simeq 6.6$ LAE Candidates in the QSO Field\label{LAE-Selections}}
We use photometric criteria similar to the ones adopted for the previous NB921 $z\sim6.6$ LAE survey in the Control Field SDF \citep{Taniguchi05,Kashikawa06,Kashikawa11} to newly select $z\sim6.6$ LAE candidates in both the $z=6.61$ QSO field and the Control Field. We choose to use these criteria because they have already been proven to be reliable, yielding the robust $z\sim6.6$ LAE sample in the Control Field with a low contamination rate ($\sim 2$--19\%) confirmed by spectroscopy, as mentioned earlier in Section 1 \citep[see also][]{Kashikawa11}. Another reason is that to investigate the sky distribution, number density and clustering of LAEs around the QSO, we compare the LAE sample in the QSO field with that in the SDF. Hence, we should use exactly the same selection criteria to detect LAEs in both fields for a fair and rigorous comparison.
We use the following criteria (all the magnitudes are those measured in a $2''$ aperture) to newly select $z\sim6.6$ LAE candidates in both the QSO and Control fields.
\begin{eqnarray}
i' - z' > 1.3 \nonumber\\
z' - {\rm NB921} > 1.0 \nonumber\\
z' - {\rm NB921} > 3\sigma_{\rm SDF} \nonumber\\
{\rm NB921} \leq 26.0
\label{Criteria-1}
\end{eqnarray}
or else
\begin{eqnarray}
i' > i'_{2\sigma, {\rm SDF}} \nonumber\\
z' > i'_{2\sigma, {\rm SDF}} - 1.3 \nonumber\\
z' - {\rm NB921} > 1.0 \nonumber\\
z' - {\rm NB921} > 3\sigma_{\rm SDF} \nonumber\\
{\rm NB921} \leq 26.0
\label{Criteria-2}
\end{eqnarray}
As the $z\sim6.6$ Ly$\alpha$ emission appears in the middle of the $z'$ band (see the relative locations of the NB921 and $z'$ bands in wavelength in Figure \ref{FilterTransmission}) and fluxes of LAEs bluewards of Ly$\alpha$ are absorbed by the IGM \citep{Madau95}, they should have red $i' - z'$ colors. Also, the LAEs should show NB921 flux excess against the continuum band ($z'$ band). The criterion $z' - $ NB921 $> 3\sigma_{\rm SDF}$ means the $3\sigma$ NB921 flux excess against the $z'$ band flux in the SDF $z'$ and NB921 images: $z' - $ NB921 $> -2.5 \log [(f_{\rm NB} - 3 \sqrt{\sigma_{z', {\rm SDF}}^2 + \sigma_{\rm NB, {\rm SDF}}^2})/f_{\rm NB}]$. Here, $f_{\rm NB}$ is the NB921 flux. $\sigma_{z', {\rm SDF}}$ and $\sigma_{\rm NB, SDF}$ are the fluxes corresponding to the $1 \sigma$ limiting magnitudes of the SDF $z'$ and NB921 images, respectively. Also, $i'_{2\sigma, {\rm SDF}}=27.87$ is the $2\sigma$ limiting magnitude of the SDF $i'$ band image. We limit our LAE sample to NB921 $\leq 26.0$ (5$\sigma$ in both QSO and Control fields).
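The $3\sigma$ excess criterion can be made concrete with a short numerical sketch. The $1\sigma$ limiting magnitudes and the magnitude zero point below are hypothetical stand-ins, not the actual SDF values; with these assumed depths the $3\sigma$ track is small for bright sources but rises above the fixed $z'-{\rm NB921} > 1.0$ cut near the NB921 $= 26.0$ sample limit, the behavior traced by the solid error curves in Figure \ref{CMDand2CD}.

```python
import numpy as np

# Hypothetical 1-sigma limiting magnitudes standing in for the SDF values.
Z_1SIG = 27.7      # z' band
NB_1SIG = 27.7     # NB921 band
M0 = 31.4          # arbitrary magnitude zero point (cancels in the color)

def mag_to_flux(m):
    return 10.0 ** (-0.4 * (m - M0))

def excess_threshold(nb_mag):
    """3-sigma z'-NB921 excess threshold at a given NB921 magnitude,
    following the criterion quoted in the text:
    -2.5 log10[(f_NB - 3*sqrt(sigma_z'^2 + sigma_NB^2)) / f_NB].
    Valid only while f_NB exceeds the 3-sigma combined noise flux."""
    f_nb = mag_to_flux(nb_mag)
    noise = np.hypot(mag_to_flux(Z_1SIG), mag_to_flux(NB_1SIG))
    return -2.5 * np.log10((f_nb - 3.0 * noise) / f_nb)

# For a bright source the threshold is small, so the fixed cut
# z'-NB921 > 1.0 dominates; near NB921 = 26.0 the 3-sigma track takes over.
bright = excess_threshold(24.0)
faint = excess_threshold(26.0)
```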
Note that we adopted $\sigma_{z', {\rm SDF}}$, $\sigma_{\rm NB, SDF}$ and $i'_{2\sigma, {\rm SDF}}$, not the $1\sigma$ fluxes and $2\sigma$ magnitude of our QSO field images, even though the depths of the QSO field $i'$ and $z'$ band images are slightly shallower than those of the SDF images (see Table \ref{ImagingData}). As shown in Figure \ref{Completeness}, for the $z'$ and NB921 images, the differences in detection completeness (including the effect of object blending) between the QSO field and the SDF at $< 27$ mag are negligible. Hence, the difference in depths does not cause any significant bias on the selection of LAEs in the QSO field against that in the SDF.
Meanwhile, using Equations (6) and (7) in their paper, \citet{Taniguchi05} estimated that the narrowband excess criterion $z'-{\rm NB921} > 1.0$ corresponds to a rest-frame Ly$\alpha$ equivalent width (EW) threshold of EW$_0({\rm Ly}\alpha) > 7$\AA~for an LAE with its Ly$\alpha$ emission located at the center of the NB921 bandpass (i.e., $\lambda_{{\rm Ly}\alpha} = 9196$\AA~or $z_{{\rm Ly}\alpha} = 6.5625$). If we assume that the redshift of an LAE is the same as that of the $z=6.61$ QSO (i.e., $z_{{\rm Ly}\alpha} = 6.61$, replacing $\Delta\lambda_{\rm NB921}/2$ in Equation (7) in \citet{Taniguchi05} by $9196{\rm \AA}+\Delta\lambda_{\rm NB921}/2 - 1216{\rm \AA}(1+z_{{\rm Ly}\alpha})$), the EW threshold is EW$_0({\rm Ly}\alpha) > 15$\AA. Hence, the NB921 image of the QSO field has a better sensitivity to LAEs at the front side of the $z=6.61$ QSO, as discussed in Section 1 and Figure \ref{NB921_LyaPeakDist}.
The criteria (\ref{Criteria-1}) and (\ref{Criteria-2}) are exactly the same as the ones previously used by \citet{Taniguchi05} to detect $z\sim 6.6$ LAEs in the SDF, except that they additionally used null detections in the wavebands bluewards of $z\sim6.6$ Ly$\alpha$ to reduce contamination from low-$z$ interlopers: $B > B_{3\sigma}$, $V > V_{3\sigma}$ and $R_c > R_{c 3\sigma}$, where $B$, $V$ and $R_c$ are the magnitudes in the $B$, $V$ and $R_c$ band filters for Suprime-Cam, respectively (see Figure \ref{FilterTransmission}), and $B_{3\sigma}$, $V_{3\sigma}$ and $R_{c 3\sigma}$ are the $3\sigma$ limiting magnitudes of the $B$, $V$ and $R_c$ images of the SDF, respectively. As we did not take $B$, $V$ and $R_c$ band images of the $z=6.61$ QSO field, we use the above LAE selection criteria (\ref{Criteria-1}) and (\ref{Criteria-2}) without $B > B_{3\sigma}$, $V > V_{3\sigma}$ and $R_c > R_{c 3\sigma}$ to select $z\sim 6.6$ LAEs in the QSO field and re-select those in the Control Field SDF for a fair comparison. This could increase contamination. However, in Appendix B, we have evaluated the increase in the number of contaminants due to the lack of the $B$, $V$ and $R_c$ bands and confirmed that it is small.
\begin{figure*}
\epsscale{1.17}
\includegraphics[angle=0,scale=0.69]{Newf4a.eps}
\includegraphics[angle=0,scale=0.69]{Newf4b.eps}
\includegraphics[angle=0,scale=0.69]{Newf4c.eps}
\hspace{0.4cm}
\includegraphics[angle=0,scale=0.69]{Newf4d.eps}
\caption{Upper panels: $z' - {\rm NB921}$ color as a function of NB921 ($2''$ aperture) magnitude of all the objects detected in our NB921 images of the QSO and Control fields (shown by dots). The solid curves show the $3\sigma$ error track of $z' - {\rm NB921}$ color of the objects in our Control Field, SDF, not that of the objects in the QSO field (see Section \ref{LAE-Selections} for details). The horizontal solid lines are a part of our LAE color selection criteria, $z'-{\rm NB921} > 1.0$. The vertical solid lines indicate the limiting magnitude, ${\rm NB921} = 26.0$ ($5\sigma$ in both QSO and Control fields). The selected $z\sim6.6$ LAE candidates are denoted by the triangles with the arrows showing the $1\sigma$ limits on $z'-{\rm NB921}$ colors. By the open circles, we denote the LAB candidate, VIKING-z66LAB, found near the highest galaxy density peak in the south-west of the QSO field and the spectroscopically confirmed $z=6.541$ LAB, SDF J132415.7+273058 found in the Control Field SDF \citep{Kodaira03,Taniguchi05} (see Figures \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma}, \ref{NumberDensityContours-QSO-SDF-mean-and-sigma_in_SDF} and \ref{IsophotalArea} and Section \ref{LAB}). Lower panels: $z'-{\rm NB921}$ versus $i'-z'$ plots of all the objects detected in our NB921 images of the QSO and Control fields (shown by dots). The upper right rectangles surrounded by the solid line indicate parts of our LAE selection criteria (\ref{Criteria-1}), $i'-z'>1.3$ and $z'-{\rm NB921} > 1.0$ (see Section \ref{LAE-Selections} for details). The selected $z\sim6.6$ LAE candidates are denoted by the triangles with the arrows showing the $1\sigma$ limits on their colors. The LAE candidates with their $z'$ band magnitudes fainter than $1\sigma$ are placed at $i'-z'=2.8$. The LAB candidate, VIKING-z66LAB, and the LAB, SDF J132415.7+273058, are denoted by the open circles. 
Our $z' - {\rm NB921}$ versus $i'-z'$ diagram of the $z\sim6.6$ LAEs in the Control Field plotted here looks slightly different from the one plotted in Figure 3 in \citet{Taniguchi05} as we adopt the $1\sigma$ limits for $i'-z'$ colors while they did not.\label{CMDand2CD}}
\end{figure*}
To select $z\sim6.6$ LAE candidates in the $z=6.61$ QSO field, we applied the criteria (\ref{Criteria-1}) and (\ref{Criteria-2}) to our NB921-detected object catalog made in Section \ref{Photometry}. However, the criteria yielded a large number of objects, and most of them are located in the noisy regions near the edges of the NB921 image where the S/N is low. This implies that most of them could be noise. Such low S/N edge regions originate from the dithering of the exposure frames taken during the observations.
To examine if they are noise, we created the negative NB921 image by multiplying each pixel value by $-1$, performed source detection by running SExtractor and limited the detected objects to NB921 $\leq 26.0$. Each edge of the NB921 image was dominated by negative detections, which are considered noise. These edge regions coincide with the locations where most of the sources selected with the $z\sim6.6$ LAE candidate criteria (\ref{Criteria-1}) and (\ref{Criteria-2}) are distributed. Hence, we trimmed these edge regions off the original NB921 image. Then, running SExtractor on this NB921 image as well as the $i'$ and $z'$ images with the same edge regions trimmed, we constructed the NB921-detected object catalog again and applied the criteria (\ref{Criteria-1}) and (\ref{Criteria-2}) to it. In this process, if the $i'$ and/or $z'$ band magnitudes of a source in the catalog are fainter than $1\sigma$ (i.e., $i'> i'_{1\sigma, {\rm SDF}}$ and/or $z'> z'_{1\sigma, {\rm SDF}}$), we replaced them by $i'_{1\sigma, {\rm SDF}}$ and/or $z'_{1\sigma, {\rm SDF}}$.
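The negative-image test rests on a simple idea: real astrophysical sources are always positive fluctuations, so any detection in the sign-flipped image must be noise, and regions where negative detections concentrate are unreliable. A minimal sketch of the logic, with a mock image and a single-pixel threshold detector standing in for SExtractor (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Mock science image: Gaussian sky noise plus three positive point sources.
img = rng.normal(0.0, 1.0, size=(200, 200))
for x, y in [(50, 50), (120, 80), (30, 160)]:
    img[y, x] += 20.0   # sources far above the noise

def count_detections(image, nsigma=5.0):
    """Count pixels brighter than nsigma times a robust (MAD) noise estimate."""
    sigma = 1.4826 * np.median(np.abs(image - np.median(image)))
    return int(np.sum(image > nsigma * sigma))

n_pos = count_detections(img)          # real sources (+ rare noise spikes)
n_neg = count_detections(-1.0 * img)   # sign-flipped image: noise only
```

In the real data, image regions where the negative-detection count approaches the positive one are noise-dominated; those are the edge regions that were trimmed.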
To further remove spurious sources, we visually inspected $i'$, $z'$ and NB921 images of each source that satisfies the selection criteria. We especially removed obviously spurious sources such as columns of bad pixels, pixels saturated with bright stars, noise events of deformed shapes, and scattering pixels having anomalously large fluxes. Finally, we were left with 14 sources that are the final $z\sim 6.6$ LAE candidates in the QSO field. The color-magnitude ($z'-{\rm NB921}$ versus NB921) and two color ($z'-{\rm NB921}$ versus $i'-z'$) diagrams of the LAE candidates and all the NB921-detected objects in the QSO field are plotted in Figure \ref{CMDand2CD}.
To remove the noise in the process of the LAE candidate selection, we trimmed the low S/N edge regions off the NB921 image. After this, the total area of the image became 697 arcmin$^2$ (or $29.'8 \times 23.'4$). The comoving distance along the line of sight corresponding to the redshift range $6.51 \leq z \leq 6.62$ for LAEs covered by the FWHM of the NB921 filter is 41 Mpc. Therefore, we have probed a total of $\sim 1.7 \times 10^5$ Mpc$^3$ volume for our $z\sim6.6$ LAE selection in the QSO field.
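The quoted depth and volume follow directly from the comoving geometry. The sketch below reproduces them for an assumed flat $\Lambda$CDM cosmology with $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_m = 0.3$; these parameter values are an assumption made here for illustration.

```python
import numpy as np

# Assumed flat LambdaCDM parameters (hypothetical here; the paper's
# adopted cosmology is stated elsewhere).
H0, OMEGA_M = 70.0, 0.3
C_KMS = 299792.458
D_H = C_KMS / H0   # Hubble distance in Mpc

def comoving_distance(z, n=20001):
    """Line-of-sight comoving distance in Mpc (trapezoidal integration)."""
    zs = np.linspace(0.0, z, n)
    inv_e = 1.0 / np.sqrt(OMEGA_M * (1.0 + zs) ** 3 + (1.0 - OMEGA_M))
    return D_H * np.sum(0.5 * (inv_e[1:] + inv_e[:-1]) * np.diff(zs))

z_lo, z_hi = 6.51, 6.62   # Lya redshift range covered by the NB921 FWHM

# Radial depth of the slice: ~41 Mpc comoving.
depth_mpc = comoving_distance(z_hi) - comoving_distance(z_lo)

# Transverse comoving scale at the slice center, and the survey volume
# for the trimmed 697 arcmin^2 field: ~1.7e5 Mpc^3.
d_c = comoving_distance(0.5 * (z_lo + z_hi))
mpc_per_arcmin = d_c * np.deg2rad(1.0 / 60.0)
volume_mpc3 = 697.0 * mpc_per_arcmin ** 2 * depth_mpc
```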
The limiting magnitude of the $i'$-band image of the QSO field ($i'_{2\sigma, {\rm QSO}}=27.4$ mag at $2\sigma$) is shallower than that of the Control Field ($i'_{2\sigma, {\rm SDF}} = 27.87$ mag). Thus, the criteria $i' > i'_{2\sigma, {\rm SDF}}$ and $z' > i'_{2\sigma, {\rm SDF}} - 1.3$ in the LAE selection criteria (\ref{Criteria-2}) might be too stringent for selecting LAEs in the QSO field and could result in missing some LAEs. To investigate this issue, we repeat the LAE selection with the criteria (\ref{Criteria-1}) and (\ref{Criteria-2}) but using $i' > i'_{2\sigma, {\rm QSO}}$ and $z' > i'_{2\sigma, {\rm QSO}} - 1.3$. We detect 4 additional objects after removing spurious sources by visual inspection. All of them are faintly visible in the $i'$-band image. Usually, a large majority of $z=6.6$ LAEs are not visible in the $i'$-band due to IGM absorption of their fluxes at the wavelengths bluewards of $z=6.6$ Ly$\alpha$, while a small minority of $z=6.6$ LAEs, especially bright ones, are visible in the $i'$-band if detectable fluxes are left at the wavelengths bluewards of $z=6.6$ Ly$\alpha$ \citep[e.g.,][]{Taniguchi05}. Thus, we cannot tell whether the 4 additionally detected objects are $z=6.6$ LAEs or lower-$z$ interlopers. Nonetheless, even if we assume that all of them are $z=6.6$ LAEs, we stick to only the LAE candidates selected using $i'_{2\sigma, {\rm SDF}}$ for the following two reasons. (1) Adding the four objects to our LAE sample would not change the conclusions of this study. More specifically, the LAE number density excess contours shown later in Figures \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma} and \ref{NumberDensityContours-QSO-SDF-mean-and-sigma_in_SDF} would not change much because of the locations of the four objects in the QSO field; ($\Delta$DEC [arcmin], $\Delta$RA [arcmin]) $=$ (4.8, 2.6), (17.7, 2.0), (25.0, 0.2) and (25.9, 7.8).
(2) We want to use exactly the same LAE selection criteria for both QSO and Control fields for consistency despite the difference in depth in the $i'$-band between the two fields.
\subsection{Selection of LBGs in the QSO Field\label{LBG-Selections}}
In addition to LAEs, we also investigate LBGs around the $z=6.61$ QSO. To detect them, we first examined the expected colors of LBGs and potential contaminants and determined the LBG selection criteria. Figure \ref{i-z_vs_redshift} shows $i'-z'$ color (convolved with the Suprime-Cam $i'$ and $z'$ filters) as a function of redshift of model LBGs as well as other types of galaxies and M/L/T type dwarf stars that can be contaminants. We modeled LBGs with power-law spectra $f_{\lambda} \propto \lambda^{\beta}$ with UV continuum slopes $\beta=-3$ to 0, IGM absorption applied using the \citet{Madau95} prescription and no Ly$\alpha$ emission. The $z\sim 6.6$ LBGs are expected to have colors of $i'-z' \sim 2.5$--3.1. In Figure \ref{i-z_vs_redshift}, we also plot colors of E (elliptical), Sbc, Scd and Im (irregular) galaxies using \citet{Coleman80} template spectra as well as M/L/T dwarfs (types M3--M9.5, L0--L9.5 and T0--T8) using their real spectra provided by \citet{Burgasser04,Burgasser06a,Burgasser06b,Burgasser08,Burgasser10} and \citet{Kirkpatrick10} at the SpeX Prism Spectral Libraries\footnote[3]{http://pono.ucsd.edu/{\textasciitilde}adam/browndwarfs/spexprism/library.html}. While low-$z$ Sbc, Scd and Im galaxies show bluer colors of $i'-z' \lesssim 1.1$, low-$z$ ellipticals could have red colors up to $i' - z' \sim 2.0$ due to their 4000\AA~Balmer breaks. Moreover, M/L/T dwarfs exhibit a wide range of colors $i'-z' \sim 0.5$--3.5.
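The model colors can be sketched numerically. The snippet below uses crude top-hat stand-ins for the Suprime-Cam $i'$ and $z'$ bandpasses and replaces the \citet{Madau95} prescription with complete absorption blueward of Ly$\alpha$, so the resulting colors illustrate the dropout behavior only, not the exact values quoted above:

```python
import numpy as np

# Crude top-hat stand-ins for the Suprime-Cam i' and z' bandpasses (Angstrom).
I_BAND = (7000.0, 8500.0)
Z_BAND = (8500.0, 10000.0)
LAM = np.linspace(6500.0, 10500.0, 8001)
LYA = 1215.67

def model_color(z, beta=-2.0):
    """Illustrative i'-z' of a power-law continuum f_lam ~ lam**beta with
    complete IGM absorption blueward of Lya (a crude stand-in for the
    Madau 1995 prescription; band means of f_lam, not true AB magnitudes)."""
    flam = (LAM / 9000.0) ** beta
    flam = np.where(LAM < LYA * (1.0 + z), 0.0, flam)
    means = []
    for lo, hi in (I_BAND, Z_BAND):
        sel = (LAM >= lo) & (LAM <= hi)
        means.append(max(flam[sel].mean(), 1e-12))
    return -2.5 * np.log10(means[0] / means[1])

# At z ~ 6.6 the i' band lies entirely blueward of Lya, so the source
# drops out of i' and the color becomes extremely red; at z ~ 5 it does not.
```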
Based on this color information, we selected the LBG candidates in the QSO field by applying the following $i'$-dropout criteria (all the magnitudes are those measured in a $2''$ aperture) to the $z'$-detected object catalog constructed in Section \ref{Photometry}.
\begin{eqnarray}
i' > i'_{2\sigma, {\rm SDF}} \nonumber\\
25.0 \leq z' \leq 26.1 \nonumber\\
i' - z' > 1.8
\label{Criteria-3}
\end{eqnarray}
If an LBG is at $z\sim6.6$ (i.e., if it is associated with the $z=6.61$ QSO), then its Lyman break is located at $\sim9240$\AA~in the middle of the $z'$ band wavelength range (see Figure \ref{FilterTransmission}). As fluxes of the LBG bluewards of the $z\sim6.6$ Lyman break should be absorbed by IGM, we require the null detection ($< 2\sigma$) in $i'$ band (the first criterion where $i'_{2\sigma, {\rm SDF}}=27.87$ mag is the $2\sigma$ limiting magnitude of the $i'$ band image of the Control Field SDF). This limits the expected redshift of the selected LBGs to $z>6$ as the red edge of the Suprime-Cam $i'$ band is at $\sim8500$\AA~corresponding to $z\simeq 6$ Ly$\alpha$.
Also, we limit our LBG sample to $z'\leq 26.1$ where $z'=26.1$ is the 4$\sigma$ (5$\sigma$) limiting magnitude of the $z'$ band image of the QSO field (Control Field SDF). We adopt $z' \leq 26.1$ ($5\sigma$ in the SDF) as well as $i'_{2\sigma, {\rm SDF}}$, not the $5\sigma$ and $2\sigma$ limiting magnitudes of the QSO field images, because we should fairly compare the LBGs in the QSO field and the Control Field by selecting them with exactly the same criteria, although the depths of the QSO field images are slightly shallower than those of the SDF images (see Table \ref{ImagingData}). As shown in Figure \ref{Completeness}, as for the $z'$ band images, the difference in detection completeness (including the effect of object blending) between the QSO field and the SDF at $< 27$ mag is small and almost negligible. Hence, the difference in depth does not cause any significant bias on the selection of LBGs in the QSO field against that in the Control Field.
The two criteria, $i' > i'_{2\sigma, {\rm SDF}} = 27.87$ and $z' \leq 26.1$, automatically require that the LBG candidates should have an $i' - z' > 1.77$ color. Figure \ref{i-z_vs_redshift} shows that the $z\sim 6.6$ LBGs are expected to have colors of $i'-z' \sim 2.5$--3.1 due to their Lyman breaks. To cover the possible variety of $i'-z'$ colors of real LBGs, we adopt the inclusive color cut of $i' - z' > 1.8$. This color cut, together with $i' > i'_{2\sigma, {\rm SDF}}$, detects LBGs at $6 < z < 7$. This has been previously independently confirmed by \citet{Toshikawa12}, who also used a similar color cut $i' - z' > 1.5$ to select and study $i'$-dropout galaxies in the SDF. They produced various galaxy spectra using the population synthesis model of \citet{BC03} and determined a color cut of $i' - z' > 1.5$ which detects Lyman breaks at $5.6 \lesssim z \lesssim 6.9$.
Figure \ref{i-z_vs_redshift} shows that low redshift ellipticals could in principle be contaminants having colors of $i' - z' > 1.8$ due to their 4000\AA~Balmer breaks. However, \citet{Ota05} and \citet{Toshikawa12} have already shown that low-$z$ ellipticals actually have $i' - z' < 1.5$ colors and do not contaminate the $i'$-dropout LBG samples, by examining $i' - z'$ colors of $z \sim 1$--4 extremely red objects (EROs consisting of old ellipticals and dusty starbursts) detected in the Suprime-Cam $i'$ and $z'$ bands by \citet{Miyazaki03}. Also, \citet{Toshikawa12,Toshikawa14} carried out spectroscopy of 31 $i' - z' > 1.5$ $i'$-dropouts and detected no low redshift ellipticals. Therefore, we consider contamination of our $i' - z' > 1.8$ LBG sample by low redshift ellipticals negligible.
\begin{figure}
\epsscale{1.22}
\plotone{f5.eps}
\caption{$i'-z'$ color (Suprime-Cam $i'$ and $z'$ bands) as a function of redshift of model LBGs, various types of galaxies and M/L/T type dwarf stars. The colors of LBGs are calculated assuming power-law spectra $f_{\lambda} \propto \lambda^{\beta}$ with several different UV continuum slopes $\beta$ and no Ly$\alpha$ emission. We applied IGM absorption to the spectra using the \citet{Madau95} prescription. The colors of E (elliptical), Sbc, Scd and Im (irregular) galaxies are calculated using \citet{Coleman80} template spectra. Also, colors of M/L/T dwarfs (types M3--M9.5, L0--L9.5 and T0--T8) are calculated using their observed spectra provided by \citet{Burgasser04,Burgasser06a,Burgasser06b,Burgasser08,Burgasser10} and \citet{Kirkpatrick10} at the SpeX Prism Spectral Libraries (see footnote 3). The horizontal and vertical dashed lines denote our color cut $i'-z' > 1.8$ for $z>6$ LBG selection and the corresponding lower redshift cut $z=6$, respectively. The vertical solid line is the redshift of the QSO.\label{i-z_vs_redshift}}
\end{figure}
\begin{figure*}
\includegraphics[angle=0,scale=0.69]{f6a.eps}
\includegraphics[angle=0,scale=0.69]{f6b.eps}
\caption{Left: $i' - z'$ color as a function of $z'$ ($2''$ aperture) magnitude of all the objects detected in the $z'$ band image of the $z=6.61$ QSO field (shown by dots). The horizontal solid line is a part of our LBG selection criteria, $i' - z' > 1.8$. The vertical solid line indicates the limiting magnitude, $z' = 26.1$. The selected LBG candidates in the QSO field are denoted by the filled circles with the arrows showing the $1\sigma$ limits on $i' - z'$ colors. The LAB candidate, VIKING-z66LAB, found near the highest galaxy density peak in the south-west of the QSO field (see Figures \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma} and \ref{NumberDensityContours-QSO-SDF-mean-and-sigma_in_SDF} and Section \ref{LAB}) is also plotted by the open circle and labeled so. Its $i'-z'$ color is comparable to those of the LBG candidates, but it is not selected as an LBG candidate as its $z'$ band magnitude is slightly fainter than the limiting magnitude. Right: The same diagram as the left panel but for all the objects (dots) and the LBG candidates (circles) detected in the $z'$ band image of the Control Field SDF.\label{CMD_LBGs_both_fields}}
\end{figure*}
Moreover, we also limit our LBG sample to $z' \geq 25.0$ to reduce the contamination by dwarf stars. Figure \ref{i-z_vs_redshift} shows that M/L/T dwarfs can have colors of $i' - z' > 1.8$ and could thus contaminate our LBG sample. Again, \citet{Toshikawa12,Toshikawa14}, who studied $i' - z' > 1.5$ $i'$-dropouts in the SDF, estimated that the contamination rate of M/L/T dwarfs in their sample was high at $z' < 25.0$ but as low as only $\sim6$\% at $z' \geq 25.0$. As mentioned earlier, they also carried out spectroscopy of 31 $i' - z' > 1.5$ $i'$-dropouts and detected no dwarfs, either. Hence, we adopt the criterion $z' \geq 25.0$ in our LBG selection criteria (\ref{Criteria-3}) to reduce the number of contaminating M/L/T dwarfs in our sample. In practice, however, no $i' - z' > 1.8$ object exists at $z' < 25.0$ in the QSO field.
To avoid spurious LBG detections at the low S/N edge regions (due to dithering) of the $z'$ band image of the QSO field, we performed a negative image test with the $z'$ band image. This is the test similar to the one we did for the NB921 image of the QSO field when selecting the $z\sim 6.6$ LAE candidates (see Section \ref{LAE-Selections}). Based on this test, we identified the borders of the low S/N edge regions and trimmed them off the $z'$ band image. Running SExtractor on this $z'$ band image as well as the $i'$ and NB921 band images with the same edge regions trimmed, we then created the $z'$-detected object catalog again and applied the LBG selection criteria (\ref{Criteria-3}) to it.
To further remove the spurious sources, we visually inspected $i'$, $z'$ and NB921 images of each source that satisfies the selection criteria in the same way as we did for the selection of $z\sim6.6$ LAE candidates. Finally, we were left with 53 $z>6$ LBG candidates in the QSO field. We show their color-magnitude diagram ($i'-z'$ versus $z'$) in the left panel of Figure \ref{CMD_LBGs_both_fields}.
The limiting magnitude of the $i'$-band image of the QSO field ($i'_{2\sigma, {\rm QSO}}=27.4$ mag at $2\sigma$) is shallower than that of the Control Field ($i'_{2\sigma, {\rm SDF}} = 27.87$ mag). Thus, the criterion $i' > i'_{2\sigma, {\rm SDF}}$ in the LBG selection criteria (\ref{Criteria-3}) might be too stringent for selecting LBGs in the QSO field and could result in missing some LBGs. To investigate this issue, we repeat the LBG selection with the criteria (\ref{Criteria-3}) but relaxing the first criterion to $i' > i'_{2\sigma, {\rm QSO}}=27.4$ mag. We detect 12 more objects after removing spurious sources by visual inspection. Out of the 12, 8 are faintly visible in the $i'$-band image, so they are not $z >$ 6 LBGs but either $z < 6$ LBGs or interlopers.
The remaining four are not seen in the $i'$-band image and thus could be $z > 6$ LBGs. Thus, at most only four LBG candidates are missed. Nonetheless, we stick to only the LBG candidates selected using $i' > i'_{2\sigma, {\rm SDF}}$ for the following four reasons. (1) If the $i'$-band image of the QSO field were as deep as that of the Control Field, the four objects might become visible and thus not be $z > 6$ LBG candidates. (2) Adding the four objects to our LBG sample would not change the conclusions of this study. More specifically, the LBG number density excess contours shown later in Figures \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma} and \ref{NumberDensityContours-QSO-SDF-mean-and-sigma_in_SDF} would not change much because of the locations of the four objects in the QSO field; ($\Delta$DEC [arcmin], $\Delta$RA [arcmin]) $=$ (21.0, 17.5), (9.6, 4.3), (6.9, 9.0) and (12.8, 16.4). (3) We want to use exactly the same LBG selection criteria for both QSO and Control fields for consistency despite the difference in depth in the $i'$-band between the two fields. (4) The majority (8/12) of the extra objects detected using the relaxed criterion $i' > i'_{2\sigma, {\rm QSO}}=27.4$ mag are visible in the $i'$-band image and are not the $z>6$ LBGs we want to include in our LBG sample. Thus, this criterion is not stringent enough.
\subsection{The LAE and LBG Samples in the Control Field\label{SDFsample}}
In Appendix B, we have examined the validity of our $z \sim 6.6$ LAE selection criteria (\ref{Criteria-1}) and (\ref{Criteria-2}) using only the $i'$, $z'$ and NB921 images of the SDF and not using the SDF $B$, $V$ and $R_c$ images. Using these criteria, we have successfully re-selected the same 58 photometric $z\sim 6.6$ LAE candidates as the ones the previous work by the SDF project \citep{Taniguchi05} had selected and also 5 additional objects (cases 1--5 in Figure \ref{woBVR_Objects}). As we show in Figures \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma} and \ref{ACF_LAEs_LBGs_SDF_vs_QSOField} and Sections \ref{NdensityContours} and \ref{ACF}, these 63 $z\sim6.6$ LAE candidates in the SDF neither show any significant over/underdensity nor clustering. Hence, we adopt the 63 SDF LAE candidates as our final Control Field LAE sample. We show their color-magnitude ($z'-{\rm NB921}$ versus NB921) and two-color ($z'-{\rm NB921}$ versus $i'-z'$) diagrams in Figure \ref{CMDand2CD}.
On the other hand, in exactly the same way as we constructed the QSO field LBG sample in Section \ref{LBG-Selections}, we selected 32 LBG candidates in the Control Field by applying the selection criteria (\ref{Criteria-3}) to the SDF public version 1.0 $z'$-detected object catalog. As we show in Figures \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma} and \ref{ACF_LAEs_LBGs_SDF_vs_QSOField} and Sections \ref{NdensityContours} and \ref{ACF}, the LBG candidates in the SDF show neither any significant over/underdensity nor clustering. Hence, we adopt these SDF LBG candidates as our final Control Field LBG sample. We show their color-magnitude diagram ($i'-z'$ versus $z'$) in the right panel of Figure \ref{CMD_LBGs_both_fields}. The numbers of the LAE and LBG candidates detected in the QSO and Control fields are listed in Table \ref{ImagingData}.
\section{Result and Discussion\label{Result}}
As we have constructed the LAE and LBG samples in the QSO and Control fields in a consistent manner, with comparable depths, completeness and effective survey areas/volumes, in the subsequent sections we derive and compare their sky distributions, number density contours, number counts and clustering properties in the two fields to elucidate whether the $z=6.61$ QSO resides in a galaxy overdensity indicative of a massive halo.
\subsection{Sky Distributions and Number Density Excess Contours of LAEs and LBGs\label{NdensityContours}}
If there are any galaxy overdensities in the vicinity of the QSO, we can identify them by comparing the sky distributions and surface number density excess contours of the LAEs and the LBGs over the entire QSO and Control fields, as the observed sky areas covering these fields are quite large (one Suprime-Cam FoV each, $\sim 34' \times 27'$ or $\sim 11 \times 9$ physical Mpc$^2$ at $z=6.6$). We can draw surface number density excess contours by measuring the number density of galaxies at many positions in an image, deriving its mean and dispersion $\sigma$, and plotting the mean, mean $\pm$ $1 \sigma$, mean $\pm$ $2 \sigma$ and so on. Previous studies that found no significant galaxy overdensities around QSOs derived the contours based on the mean and $\sigma$ measured in the field in which the contours are drawn \citep[e.g.,][]{Kashikawa07,Kikuta17}. In our study, we have the Control Field, which includes no QSO at $z \sim 6.6$ but has comparable depth, completeness and area to those of the $z=6.61$ QSO field. Hence, in the subsequent sections, we first show the number density excess contours of the LAEs and the LBGs in each of the QSO and Control fields based on the mean and the $\sigma$ measured in each field, just like previous studies. Then, we show the same contours of the LAEs and the LBGs in the QSO field but based on the mean and the $\sigma$ measured in the Control Field. In this way, we try to elucidate how much more significant the number density excesses of the LAEs and the LBGs in the QSO field are than those in a general blank field.
\begin{figure*}
\includegraphics[angle=0,scale=0.68]{Newf7a.eps}
\includegraphics[angle=0,scale=0.75]{Newf7b.eps}
\hspace*{-1.2cm}
\includegraphics[angle=0,scale=0.73]{f7c.eps}
\caption{Sky distributions and surface number density excess contours of the LAEs (red triangles and contours) and the LBGs (blue circles and contours) in the $z=6.61$ QSO field (left panel) and the Control Field SDF (right panel). The symbols for the LAEs (LBGs) are scaled with their NB921 ($z'$) total magnitudes. East (North) is up, and South (East) is to the left in the QSO (Control) field. The black solid line rectangle in the Control Field corresponds to the size of the QSO field. In each field, the surface number densities of LAEs or LBGs were measured by randomly distributing 100,000 comoving 8 Mpc radius circles and counting the numbers of LAEs or LBGs in them. Then, we derived their mean and dispersion $\sigma$ for LAEs or LBGs in each field. In both panels, the red and blue dashed contours indicate the mean numbers of LAEs and LBGs in a circle, respectively. The dotted contours denote mean$-1\sigma$ number deficits, while the solid thin to thick contours show mean$+1\sigma$, mean$+2\sigma$ and mean$+3\sigma$ number excesses. We do not plot the mean$-1\sigma$ contour of the LAEs in the QSO field as it is below zero. The orange square in the left panel is the location of the $z=6.61$ QSO. The grey shade shows the ``proximity'' region within 21 comoving Mpc ($\sim3$ physical Mpc) in projection from the QSO, where the 21 comoving Mpc is half of the comoving distance along the line of sight corresponding to the redshift range $\Delta z \sim 0.1$ for LAEs covered by the FWHM of the NB921 filter. If an LAE/LBG in this region is also located within $\Delta z \sim 0.05$ from the QSO, it is likely associated with the QSO.
The large black open square around the QSO in the left panel is the $\sim 12$ arcmin$^2$ field of view (FoV) of the {\it Hubble Space Telescope} (HST) Advanced Camera for Surveys (ACS) often used by previous studies to search for galaxy overdensities around $z \gtrsim 6$ QSOs, which resulted in finding a variety of galaxy densities \citep[e.g.,][]{Kim09,Simpson14}. The black open circles in both panels show the locations of a bright extended LAB candidate, VIKING-z66LAB (QSO field), and an LAB, SDF J132415.7+273058 (Control Field) (see Sections \ref{NC} and \ref{LAB}).\label{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma}}
\end{figure*}
\subsubsection{Density Contours in the QSO and Control Fields Based on the Mean and the Dispersion in Each Field\label{NdensityContours_ownMeanSigma}}
Figure \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma} shows sky distributions of the LAEs and the LBGs and their surface number density excess contours in the QSO and the Control fields. In each of the QSO and the Control fields, we spread circles of an 8 comoving Mpc radius to 100,000 homogeneously distributed random positions, measured the number of LAEs or LBGs in each circle and derived the mean and the dispersion $\sigma$ for the LAEs or the LBGs. Then, we drew the surface number density contours of mean$- 1\sigma$, mean, mean$+ 1\sigma$, mean$+ 2\sigma$, mean$+ 3\sigma$, ... for the LAEs or the LBGs in each field in Figure \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma}.
We chose 8 comoving Mpc for the radius of the circle as it is not too small to encompass LAEs or LBGs in sparse regions in each field (especially those in the right half of the QSO field; see Figure \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma}) and not too large compared to the size of each field. The number of circles (100,000) is large enough to cover the entirety of each field. The numbers of LAEs or LBGs counted in circles near the edges of the images (within 8 comoving Mpc of the edges) may affect the derived means and $\sigma$'s to some extent because parts of those circles fall outside the image. To examine this effect, we recalculated the means and $\sigma$'s by spreading circles over the image while avoiding the regions within 8 comoving Mpc of the edges. For both LAEs and LBGs, the means and $\sigma$'s changed only negligibly from those derived including the edge regions, and the resulting number density excess contours look very similar. Hence, we did not apply any corrections to the numbers of LAEs or LBGs counted in circles near the image edges.
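The circle-counting procedure described above can be sketched as follows. This is a minimal illustration, not the actual analysis code; `positions`, the field geometry and the unit conversion are hypothetical placeholders, with all lengths assumed to be in the same comoving units:

```python
import numpy as np

def density_mean_sigma(positions, field_size, radius, n_circles=100_000, seed=0):
    """Mean and dispersion of source counts in randomly placed circles.

    positions  : (N, 2) array of source x, y coordinates
    field_size : (width, height) of the field, in the same units as positions
    radius     : circle radius (e.g., 8 comoving Mpc converted to these units)
    """
    rng = np.random.default_rng(seed)
    centers = rng.uniform([0.0, 0.0], field_size, size=(n_circles, 2))
    # Squared distance from every circle center to every source.
    d2 = ((positions[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2)
    counts = (d2 <= radius ** 2).sum(axis=1)  # sources inside each circle
    return counts.mean(), counts.std()
```

The returned mean and $\sigma$ would then set the contour levels (mean, mean$\pm1\sigma$, mean$+2\sigma$, ...) drawn over a grid of circle positions.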
As seen in the right panel of Figure \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma}, both the LAEs and the LBGs in the Control Field exhibit no significant overdensity but show mostly mean $\pm$ $1 \sigma$ densities with a few small peaks of $2\sigma$ number density excess. This means that a general blank sky field like this Control Field is almost flat, with only $1 \sigma$ fluctuations in the surface number density distributions of LAEs and LBGs on a $\sim 9 \times 11$ physical Mpc$^2$ ($\sim 67 \times 86$ comoving Mpc$^2$) scale.
On the other hand, the left panel of Figure \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma} shows the sky distributions and number density contours of the LAEs and LBGs in the QSO field. The $z=6.61$ QSO is located at the center of the field. We define and show the ``proximity'' region around the QSO by the gray shade in a similar way to that used in the previous study of environments around $z\sim5$ QSOs/radio galaxy conducted by \cite{Kikuta17}. \cite{Kikuta17} observed two QSOs and a radio galaxy at $z\sim5$ with the Subaru Suprime-Cam and narrow/broadband filters, detected LAE and LBG candidates in the QSO/radio galaxy fields and analyzed their sky distributions and number density contours in a similar way to ours. They defined the proximity regions around the QSOs/radio galaxy whose sizes (3 or 5 physical Mpc) were determined by the FWHMs of the two narrowband filters they used ($\sim72$\AA~or 120\AA) and are sufficiently small to detect galaxies associated with the QSOs/radio galaxy.
In the left panel of Figure \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma}, we also show the proximity region within 21 comoving Mpc ($\sim 3$ physical Mpc) distances in projection from the $z=6.61$ QSO with the gray shade. Here, the 21 comoving Mpc is half of the comoving distance along the line of sight corresponding to the redshift range $\Delta z \sim 0.1$ for LAEs covered by the FWHM of the NB921 filter. Hence, if an LAE or an LBG in this region is also located within $\Delta z \sim 0.05$ from the QSO (though we need to take the spectrum of it to know this), it is likely associated with the QSO. We find 3 LAE candidates and 11 LBG candidates in the proximity region. In this region, the number densities of both LAE and LBG candidates show only mean to mean$+ 1\sigma$ values. Hence, we see apparently no significant overdensities of LAEs and LBGs in the proximity of the QSO.
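The 21 comoving Mpc half-depth follows from integrating $c/H(z)$ over the redshift range covered by the NB921 FWHM. A minimal sketch, where the flat $\Lambda$CDM parameters and the exact redshift interval are our assumptions for illustration and not necessarily the values adopted in this paper:

```python
import math

C_KMS = 299792.458            # speed of light [km/s]
H0, OM, OL = 70.0, 0.3, 0.7   # assumed flat LCDM parameters

def hubble(z):
    """H(z) for flat LCDM [km/s/Mpc]."""
    return H0 * math.sqrt(OM * (1.0 + z) ** 3 + OL)

def comoving_depth(z_lo, z_hi, n=1000):
    """Line-of-sight comoving distance between z_lo and z_hi [Mpc],
    via trapezoidal integration of c/H(z)."""
    h = (z_hi - z_lo) / n
    f = [C_KMS / hubble(z_lo + i * h) for i in range(n + 1)]
    return h * (sum(f) - 0.5 * (f[0] + f[-1]))

# The NB921 FWHM corresponds to roughly dz ~ 0.1 in Lya redshift around z = 6.6;
# the interval below is an assumed example.
depth = comoving_depth(6.55, 6.66)
half_depth = depth / 2.0   # of order the ~21 cMpc quoted in the text
```

In practice a library such as `astropy.cosmology` would be used for this; the hand-rolled integrator above only serves to make the arithmetic explicit.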
However, the sky distributions and number density contours of the LAEs and the LBGs in the entire QSO field look quite different from those in the Control Field on a larger scale. In the right (north) half of the QSO field, LAEs and LBGs are very sparse mostly exhibiting the densities between mean and mean$- 1\sigma$ (underdensity at $1\sigma$ level). Meanwhile, in the left (south) half of the QSO field, the number densities of both LAEs and LBGs are between mean and mean$+ 3\sigma$ and higher than those in the Control Field on a large scale. Both of these LAE and LBG overdensities in the QSO field also show filamentary structures side by side extending from east to west (top to bottom in the figure). The LAE structure also extends from south to north (left to right in the figure). The LAE structure includes a $3\sigma$ density peak between east and west while the LBG structure contains a $3\sigma$ density peak in the west. These large scale structures of LAEs and LBGs with weak overdensities partly include the proximity region of the QSO and the QSO itself though the densities of LAEs and LBGs in the proximity region are mean to mean$+ 1\sigma$. Hence, the QSO might possibly be associated with the filamentary large scale structures of LAEs and LBGs, and such structures seem to be highly biased compared to the relatively flat large scale structures of LAEs and LBGs with low density fluctuations in a general blank field like our Control Field.
Interestingly, we found that there is a bright (${\rm NB921}_{\rm total} \sim 23.78$ mag) and extended (diameter $\gtrsim 3''$ or 16 physical kpc) Ly$\alpha$ Blob (LAB) candidate near the $3\sigma$ LBG overdensity peak in the west (hereafter VIKING-z66LAB, see Section \ref{LAB} for further details). This LAB candidate is one of the LAE candidates we detected by the LAE selection criteria (\ref{Criteria-1}) and (\ref{Criteria-2}) and the most extended one. LABs often show evidence of galaxy interaction or merging within their extended Ly$\alpha$ clouds \citep[e.g.,][]{Ouchi09} and tend to be found in/around dense environments like protoclusters or galaxy overdensity regions \citep[e.g.,][]{Steidel00,Chapman04,Colbert06,Matsuda11,Bridge12,Yang12,Yang14,Badescu17}. VIKING-z66LAB appears to show a sign of interaction or merging of two sources in the $z'$-band image (see Figure \ref{LABFig}). Thus, the existence of this LAB candidate could support the validity of the overdensity of LBGs.
For comparison, in the left panel of Figure \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma}, we also depict the $\sim 12$ arcmin$^2$ field of view (FoV) of the {\it Hubble Space Telescope} (HST) Advanced Camera for Surveys (ACS) often used by previous studies to search for galaxy overdensities around $z \gtrsim 6$ QSOs, which resulted in finding high, average and even low galaxy densities \citep[e.g.,][]{Kim09,Simpson14}. The ACS FoV includes only one LBG candidate in our QSO field and is much smaller than the size of the QSO proximity region and the large scale structures of LAEs and LBGs. This suggests that it is difficult to see a positional relation between a $z>6$ QSO and the large scale spatial and number density distributions of galaxies within a small FoV, and that large area imaging with a wide-field camera is essential to exploring the large scale galaxy environment of a QSO. This was also pointed out by \citet{Morselli14}, who observed $z\sim6$ $i$-dropout LBGs around four $z\gtrsim6$ QSOs with the wide-field ($\sim 23' \times 25'$) Large Binocular Camera (LBC) on the Large Binocular Telescope. Three of the four QSO fields had also been previously observed by \citet{Kim09} using the HST ACS (one pointing each) and were known to show an overdensity, an average density and an underdensity of $i$-dropout LBGs, respectively, within the ACS FoV. However, \citet{Morselli14} found that the LBG number densities in all four QSO fields are higher than that in a blank field when seen in the LBC FoV and emphasized that wide-field imaging can capture possible large-scale galaxy overdensities around QSOs.
\begin{figure*}
\epsscale{1.18}
\plotone{Newf8.eps}
\caption{Sky distributions and surface number density excess contours of the LAEs (red triangles and contours) and the LBGs (blue circles and blue or color coded contours) in the $z=6.61$ QSO field. The symbols for the LAEs (LBGs) are scaled with their NB921 ($z'$) total magnitudes as in Figure \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma}. The surface number densities of LAEs or LBGs were measured by randomly distributing 100,000 comoving 8 Mpc radius circles in the QSO field and counting the numbers of LAEs or LBGs in them. Then, we drew the contours of mean$_{\rm SDF}-1\sigma_{\rm SDF}$ (dotted contours), mean$_{\rm SDF}$ (dashed contour), and mean$_{\rm SDF}+1\sigma_{\rm SDF}$, mean$_{\rm SDF}+2\sigma_{\rm SDF}$, mean$_{\rm SDF}+3\sigma_{\rm SDF}$ ... number excesses (solid contours) for the LAEs or the LBGs. Note that mean$_{\rm SDF}$ and $\sigma_{\rm SDF}$ are the mean number of LAEs or LBGs in a circle and the dispersion, both measured in the Control Field SDF in the right panel of Figure \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma}, not the mean and the dispersion measured in the QSO field. In this way, we can elucidate how much more significant the number density excesses of the LAEs or the LBGs in the QSO field are than those in the Control Field. The orange square and the grey shade are the $z=6.61$ QSO and its proximity region. The large black open square around the QSO is the FoV of the HST ACS often used by previous studies to search for galaxy overdensities around $z \gtrsim 6$ QSOs, resulting in finding high, average and even low galaxy densities. The black open circle is a bright extended LAB candidate VIKING-z66LAB (see Sections \ref{NC} and \ref{LAB}).\label{NumberDensityContours-QSO-SDF-mean-and-sigma_in_SDF}}
\end{figure*}
\subsubsection{Density Contours in the QSO Field Based on the Mean and the Dispersion in the Control Field\label{NdensityContours_SDFMeanSigma}}
The number density excess contours of the LAEs and the LBGs in the $z=6.61$ QSO field shown in the left panel of Figure \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma} are based on the means and the $\sigma$'s of the LAEs and the LBGs measured in the QSO field itself. However, as we discussed above, the spatial and number density distributions of the LAEs and the LBGs in the QSO field are quite different from, and apparently highly biased compared to, those in the Control Field. Hence, it might not be ideal to use the means and the $\sigma$'s of the LAEs and LBGs measured in the QSO field to draw the number density excess contours, as previous studies did.
On the other hand, the Control Field represents a blank field without any extremely biased galaxy spatial and number density distributions as seen in Figure \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma} and described in Section \ref{NdensityContours_ownMeanSigma}. Also, as we will show in Section \ref{NC}, the cosmic variance of $z\sim6.6$ LAEs in the Control Field is only $\sigma_v \sim 0.19$. Moreover, using the Subaru Suprime-Cam and the NB921 filter, \citet{Ouchi10} detected 207 $z\sim 6.6$ LAE candidates in the $\sim 1.0$ deg$^2$ Subaru/{\it XMM-Newton} Deep Survey (SXDS) field (5 Suprime-Cam pointings). They compared their surface number density (number/0.5 mag/arcmin$^2$ versus NB921 magnitude) with that of the 58 $z\sim 6.6$ LAE candidates \citet{Taniguchi05} detected in the SDF (Control Field) and showed that they are almost consistent \citep[see Figure 5 in][]{Ouchi10}. Hence, the area of the Control Field is large enough not to be affected much by cosmic variance, and it is more appropriate to use its means and $\sigma$'s to plot the number density excess contours. Therefore, we also draw the contours of the LAEs and the LBGs in the QSO field based on the means and the $\sigma$'s of LAEs and LBGs measured in the Control Field; i.e., mean$_{\rm SDF}-1\sigma_{\rm SDF}$, mean$_{\rm SDF}$, mean$_{\rm SDF}+1\sigma_{\rm SDF}$, mean$_{\rm SDF}+2\sigma_{\rm SDF}$, ... and so on in Figure \ref{NumberDensityContours-QSO-SDF-mean-and-sigma_in_SDF}.
In this figure, we see shapes of the LAE and LBG number density contours similar to those in the left panel of Figure \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma}, but the LAE contours show much lower densities while the LBG contours exhibit higher significances of overdensities compared to the Control Field. The large scale structure of LAEs in the QSO field traces only mean$_{\rm SDF}-1\sigma_{\rm SDF}$ to mean$_{\rm SDF}$, equivalent to the mean or even an underdensity in the Control Field. Conversely, the number density of LBGs in their filamentary structure varies from mean$_{\rm SDF}+ 1\sigma_{\rm SDF}$ to the highest peak at the mean$_{\rm SDF}+7\sigma_{\rm SDF}$ level in the west (lower left in the figure). The LAB candidate VIKING-z66LAB is located in a region of $\sim1$--$3\sigma_{\rm SDF}$ excess of LBGs. There is also a $\sim5$--$6\sigma_{\rm SDF}$ excess of LBGs at the eastern part of the LBG large scale structure. Even in the right (northern) half of the QSO field, where LAEs and LBGs are relatively sparse, LBGs mostly exhibit densities typical of the Control Field (mean$_{\rm SDF}$ to mean$_{\rm SDF}+1\sigma_{\rm SDF}$). In this region, the density of the LAEs is entirely below the mean of the Control Field ($\leq$ mean$_{\rm SDF}-1\sigma_{\rm SDF}$).
The large scale structures of the LAEs and the LBGs again partly include the QSO itself and its proximity region. The density of the LBGs in the proximity region varies from mean$_{\rm SDF}$ to mean$_{\rm SDF}+ 4\sigma_{\rm SDF}$, and is mean$_{\rm SDF}+ 1\sigma_{\rm SDF}$ to mean$_{\rm SDF}+ 2\sigma_{\rm SDF}$ right at the position of the QSO. Hence, the number density of LBGs is moderately high in the vicinity of the QSO compared to that in a general field (Control Field). However, the density of the LAEs in the QSO proximity region is entirely below mean$_{\rm SDF}$, including at the position of the QSO. Thus, the number density of LAEs in the vicinity of the QSO is below the mean of a general field.
In summary, the sky distributions and number density excess contours of the LAE and LBG candidates lead to three important implications. (1) The number density of the LAE candidates in the proximity of the $z=6.6$ QSO is below the mean density of the LAE candidates in a general blank field at the $\sim 1\sigma$ level. (2) The number density of the LBG candidates in the proximity of the $z=6.6$ QSO varies from the mean value to a $4\sigma$ excess of the LBG candidate density in a general blank field. (3) The $z=6.6$ QSO is included in the filamentary large scale structure of LBG candidates and might be associated with the structure, but is not located exactly at the highest density peaks of LBG candidates in the structure. Therefore, there still remains the possibility that galaxy (LBG) overdensities exist in the proximity of the QSO. However, we cannot tell whether this is real unless we spectroscopically confirm the redshifts of a significant fraction of the LBG candidates in both the QSO and Control fields and draw the contours of mean$_{\rm SDF} - 1\sigma_{\rm SDF}$, mean$_{\rm SDF}$, mean$_{\rm SDF} + 1\sigma_{\rm SDF}$, mean$_{\rm SDF} + 2\sigma_{\rm SDF}$ ... of the LBGs at the confirmed redshifts close to that of the QSO.
\begin{figure*}
\includegraphics[angle=0,scale=0.69]{Final_f9a.eps}
\includegraphics[angle=0,scale=0.69]{Final_f9b.eps}
\caption{Left: ACFs of the $z\sim6.6$ LAE candidates in the Control Field SDF and the QSO field shown by open and filled triangles, respectively. Error bars show the $1\sigma$ Poisson errors. There are no LAE-LAE pairs in the QSO field with separation angles corresponding to the first three $\theta$ bins. Right: ACFs of the LBG candidates in the Control Field SDF and the QSO field shown by open and filled circles, respectively. Error bars show the $1\sigma$ Poisson errors.\label{ACF_LAEs_LBGs_SDF_vs_QSOField}}
\end{figure*}
\subsection{Clustering of LAEs and LBGs\label{ACF}}
We have seen that the sky and number density distributions of the LAE and LBG candidates in the QSO field are quite extreme, showing large scale overdensity structures mostly in one half of the entire field. This is in stark contrast to a general blank field (Control Field), where LAE and LBG candidates are distributed much more uniformly with small fluctuations. Namely, the LAE and LBG candidates in the QSO field look much more clustered than those in the Control Field. To further quantitatively investigate this trend, we derive and compare the two-point angular correlation functions (ACFs) of the LAE and LBG candidates in the QSO and Control fields in Figure \ref{ACF_LAEs_LBGs_SDF_vs_QSOField}. The ACF is an estimator of clustering strength, defined by \citet{LandySzalay93} as
\begin{equation}
\omega (\theta) = \frac{DD(\theta) - 2DR(\theta) + RR(\theta)}{RR(\theta)}
\label{ACF-equation}
\end{equation}
where $DD(\theta)$, $DR(\theta)$, and $RR(\theta)$ are the numbers of galaxy--galaxy, galaxy--random and random--random pairs having angular separations between $\theta$ and $\theta+\delta\theta$, respectively. We generated 100,000 random points in each of the QSO and the Control fields to reduce the Poisson noise in the random pair counts and normalized $DD(\theta)$, $DR(\theta)$, and $RR(\theta)$ by the total number of pairs in each pair count. We created the random points with exactly the same boundary conditions as the LAE or LBG samples in the QSO and Control fields by avoiding the regions of their NB921 or $z'$-band images that were masked, removed or trimmed during the data reduction and the LAE/LBG selection.
We estimated the Poisson errors for the ACFs \citep{LandySzalay93} as
\begin{equation}
\sigma_{\omega} (\theta) = \frac{1+\omega (\theta)}{\sqrt{DD(\theta)}}
\label{ACF-Poisson-error}
\end{equation}
For a large number of sources, the Poisson errors tend to underestimate true errors for an ACF compared to the bootstrap or the Jackknife technique \citep{Ling86,Harikane16,Harikane17}. However, in the case of our galaxy samples, the numbers of LAE and LBG candidates are small, and the Poisson errors would not underestimate the errors of the ACFs much \citep[see e.g.,][]{Khostovan17}.
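The Landy-Szalay estimator and its Poisson error above can be sketched directly in code. This is a minimal illustration with brute-force pair counts, adequate for samples of tens of objects; the coordinates, bins and sample sizes are hypothetical:

```python
import numpy as np

def pair_separations(a, b=None):
    """All pairwise separations within point set a, or between sets a and b."""
    if b is None:
        d = np.sqrt(((a[:, None, :] - a[None, :, :]) ** 2).sum(-1))
        return d[np.triu_indices(len(a), k=1)]  # unique pairs only
    return np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)).ravel()

def landy_szalay(data, rand, bins):
    """Landy-Szalay w(theta) with Poisson errors (1 + w) / sqrt(DD)."""
    dd, _ = np.histogram(pair_separations(data), bins=bins)
    dr, _ = np.histogram(pair_separations(data, rand), bins=bins)
    rr, _ = np.histogram(pair_separations(rand), bins=bins)
    nd, nr = len(data), len(rand)
    # Normalize each raw count by its total number of pairs.
    DD = dd / (nd * (nd - 1) / 2.0)
    DR = dr / (nd * nr)
    RR = rr / (nr * (nr - 1) / 2.0)
    safe_rr = np.where(RR > 0, RR, 1.0)
    w = np.where(RR > 0, (DD - 2.0 * DR + RR) / safe_rr, 0.0)
    err = np.where(dd > 0, (1.0 + w) / np.sqrt(np.maximum(dd, 1)), np.inf)
    return w, err
```

For an unclustered sample drawn from the same distribution as the randoms, $w(\theta)$ scatters around zero, which is the behavior seen for the Control Field samples.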
Figure \ref{ACF_LAEs_LBGs_SDF_vs_QSOField} shows the ACFs of the LAE and LBG candidates in the QSO and the Control fields. Both the LAE and LBG candidates in the QSO field exhibit clustering signals while those in the Control Field do not. In the QSO field, the LAEs cluster over a wide range of angular separations, 8--20 comoving Mpc, while the LBGs cluster on smaller separations, 4--8 comoving Mpc. This result, together with the implications obtained in the previous sections, suggests that the LAEs and LBGs are clustering on different angular scales and forming large scale structures separately, which contain the QSO and its proximity region at their near-edge locations as well as a few high density clumps/peaks. However, it should be noted that the different clustering angular scales of the LAEs and the LBGs in the QSO field could simply be due to the different volumes or redshift ranges probed by the LAE and LBG selections. In any case, the clustering properties of the LAE and LBG candidates in the QSO field are clearly different from those in a general blank field (Control Field).
\subsection{Number Counts of LAEs and LBGs\label{NC}}
So far, we have found that the distributions of LAE and LBG candidates are spatially quite different between the QSO and the Control fields. Next, we examine whether there are any differences in the number counts of LAEs or LBGs per brightness (surface number density per total NB921 or $z'$-band magnitude) between these fields.
When we performed photometry in Section \ref{Photometry}, we used the {\tt MAG\_AUTO} parameter of SExtractor to measure the total NB921 and $z'$ magnitudes of the NB921-detected and $z'$-detected objects. However, {\tt MAG\_AUTO} does not always measure the total magnitude of an object accurately, especially when the object has close neighbors and/or blends with them or with noise. Hence, we visually inspected all the LAE and LBG candidates in the QSO and Control fields, also checked their SExtractor parameter {\tt FLAGS} and split them into isolated LAEs/LBGs and blended LAEs/LBGs. We checked the SExtractor {\tt FLAGS} value of each LAE/LBG candidate because {\tt FLAGS} $=$ 1 means that an object has neighbors or bad pixels affecting its {\tt MAG\_AUTO} photometry while {\tt FLAGS} $=$ 2 means that an object was originally blended with another one but deblended by SExtractor when performing photometry \citep{BA96}. We defined the isolated LAEs/LBGs as those (1) having {\tt FLAGS} $=$ 0 and (2) visually appearing not to blend with any objects or noise. We considered the LAE/LBG candidates not satisfying either or both of the conditions (1) and (2) to be blended.
For the isolated LAEs/LBGs, we adopted the {\tt MAG\_AUTO} measurements as their total NB921/$z'$ magnitudes. For the blended LAEs/LBGs, we applied aperture corrections to their $2''$ aperture NB921/$z'$ magnitudes (those measured by the SExtractor {\tt MAG\_APER} parameter) to estimate their total NB921/$z'$ magnitudes. We estimated the aperture corrections by taking the medians of the differences between the total and $2''$ aperture NB921/$z'$ magnitudes of the isolated LAEs/LBGs in each field. We found 7 (31) LAE and 29 (14) LBG candidates isolated in the QSO (Control) field. The aperture corrections were $-0.06$ ($-0.24$) mag for the NB921 magnitudes of the LAEs and $-0.14$ ($-0.14$) mag for the $z'$ magnitudes of the LBGs in the QSO (Control) field. The origin of the difference between the NB921 aperture corrections for the LAEs in the QSO and the Control fields is not clear but possibly comes from a combination of differences in the PSF sizes of the NB921 images ($0.''91$ and $0.''98$) and in the intrinsic LAE sizes (mostly in Ly$\alpha$ emission) between the two fields.
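The aperture-correction bookkeeping above can be sketched as follows (hypothetical magnitude lists; in practice the inputs are the SExtractor {\tt MAG\_AUTO} and {\tt MAG\_APER} measurements of the isolated and blended objects):

```python
import statistics

def aperture_correction(total_iso, aper_iso):
    """Median of (total - 2'' aperture) magnitude offsets of the isolated objects."""
    return statistics.median(t - a for t, a in zip(total_iso, aper_iso))

def total_from_aperture(aper_blended, correction):
    """Estimated total magnitudes of the blended objects."""
    return [m + correction for m in aper_blended]
```

The median (rather than the mean) keeps the correction robust against a few isolated objects with unusual profiles.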
On the other hand, as mentioned earlier and described in Section \ref{LAB}, we found that one of the LAE candidates in the QSO field is a bright extended LAB candidate, VIKING-z66LAB, whose angular diameter is $\gtrsim 3''$ (see Figure \ref{LABFig} for its images and see Figures \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma} and \ref{NumberDensityContours-QSO-SDF-mean-and-sigma_in_SDF} for its location in the QSO field). Also, the LAEs in the Control Field include one bright extended LAB whose angular diameter is $\gtrsim 3''$. This is the $z=6.541$ LAE, SDF J132415.7+273058, previously spectroscopically confirmed by \citet{Kodaira03} (see Figure \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma} for its location in the Control Field). They have {\tt FLAGS} $> 0$ and are not isolated. In addition, as they are exceptionally much more extended than other LAE candidates whose NB921 magnitudes were used to estimate the aperture correction values, the aperture correction method would underestimate the total NB921 magnitudes of VIKING-z66LAB and SDF J132415.7+273058. Hence, we estimated their total NB921 magnitudes in different ways as follows.
VIKING-z66LAB has 4 neighbors and more or less blends with all of them, resulting in {\tt FLAGS} $=$ 3 ($=1+2$) in the NB921 image convolved to have a PSF FWHM of $0.''91$ for the $2''$ aperture photometry. Hence, we ran SExtractor on the ${\rm PSF}=0.''77$ NB921 image of the QSO field, which is the original image before the convolution for the aperture photometry. In this image, VIKING-z66LAB slightly blends with one of the four neighbors and has only {\tt FLAGS} = 2. This means that VIKING-z66LAB originally blended with the neighbor but SExtractor automatically corrected the parts of the {\tt MAG\_AUTO} elliptical aperture contaminated by the neighbor by mirroring the opposite, cleaner side of the measurement ellipse \citep{BA96}. If the {\tt MAG\_AUTO} measurement had also been affected over more than 10\% of the integrated area due to the blending with the neighbor, SExtractor would have also returned {\tt FLAGS} = 1 (resulting in {\tt FLAGS} = 1 + 2 = 3). Thus, we adopt the NB921 magnitude of VIKING-z66LAB measured by {\tt MAG\_AUTO} on the ${\rm PSF}=0.''77$ NB921 image as its total NB921 magnitude (NB921$_{\rm total}=23.78$), as it would be more robust and reliable than the one measured by the aperture correction method. Meanwhile, SDF J132415.7+273058 has three neighbors and slightly blends with two of them, resulting in {\tt FLAGS} $=$ 1 in the NB921 image of the Control Field SDF (PSF FWHM $=0.''98$). We used a $4''$ aperture that mostly encompasses the entirety of SDF J132415.7+273058 but minimizes contamination by the fluxes of the neighbors to measure its total NB921 magnitude, which is NB921$_{\rm total}=23.69$.
\begin{figure*}
\includegraphics[angle=0,scale=0.69]{Newf10a.eps}
\includegraphics[angle=0,scale=0.69]{Newf10b.eps}
\caption{Left: Surface number densities of the $z \sim6.6$ LAE candidates in the Control Field SDF (black open triangle) and the QSO field (red filled triangle) as a function of NB921 total magnitude. The data points for the two samples are slightly horizontally shifted for clarity. The error bars include Poisson errors \citep{Gehrels86} and cosmic variance. Detection completeness is corrected for each 0.5 mag bin by using the data in the left panel of Figure \ref{Completeness}. For the bins where no LAE candidate is detected, the upper limits are shown by the arrows. Right: Surface number densities of the $z > 6$ LBG candidates in the Control Field SDF (black open circle) and the QSO field (blue filled circle) as a function of $z'$-band total magnitude. The data points for the two samples are slightly horizontally shifted for clarity. Error bars include the Poisson error and cosmic variance. Detection completeness is corrected for each 0.2 mag bin by using the data in the right panel of Figure \ref{Completeness}.\label{NumberCounts}}
\end{figure*}
We derived the surface number densities of the LAE (LBG) candidates in the QSO and the Control fields by counting their numbers in each 0.5 mag NB921 (0.2 mag $z'$) total magnitude bin, correcting the counts for the NB921 ($z'$) detection completeness estimated in Section \ref{CompletenessSec} and shown in Figure \ref{Completeness}, and dividing them by the effective survey area of each of the QSO and the Control fields. For the error of the number density in each bin, we include Poisson errors for small number statistics and cosmic variance estimated in the same way as in \citet{Ota08,Ota10}. We use the Poisson upper and lower limits listed in the second columns of Tables 1 and 2 in \citet{Gehrels86}. For the cosmic variance $\sigma_v$ estimate, \citet{Ota08,Ota10} used the relation $\sigma_v = b \sigma_{\rm DM}$, adopting a bias parameter of $b=3.4\pm1.8$ derived from the sample of 515 $z\sim5.7$ LAEs detected by \citet{Ouchi05} in the $\sim 1.0$ deg$^2$ SXDS field and the dark matter variance $\sigma_{\rm DM}=0.053$ at $z=6.6$ obtained by using an analytic cold dark matter model \citep{Sheth99,MoWhite02} and their survey volumes. In this study, we use a bias parameter of $b=3.6\pm0.7$ derived from the sample of 207 $z\sim6.6$ LAEs detected by \citet{Ouchi10} in the SXDS field. As our QSO field and Control Field (SDF) survey volumes are similar to those of \citet{Ota08,Ota10} (all of them are volumes based on one Suprime-Cam pointing), we adopt the same $\sigma_{\rm DM}=0.053$ value as they used. This gives a cosmic variance estimate of $\sigma_v \sim 0.19$ for each of the QSO and the Control fields. We also corrected the errors for the detection completeness estimated in Section \ref{CompletenessSec} and shown in Figure \ref{Completeness}.
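The cosmic-variance estimate $\sigma_v = b\,\sigma_{\rm DM}$ can be checked directly; the quadrature combination with the Poisson term below is an assumed illustration (the text does not spell out how the two error terms are combined):

```python
import math

# Cosmic variance as described above: sigma_v = b * sigma_DM, with
# b = 3.6 +/- 0.7 (Ouchi et al. 2010 z~6.6 LAE sample) and sigma_DM = 0.053.
b = 3.6
sigma_dm = 0.053
sigma_v = b * sigma_dm
print(f"sigma_v = {sigma_v:.2f}")  # 0.19, as quoted in the text

# ASSUMED combination (not specified in the text): fractional Poisson error
# and cosmic variance added in quadrature for a bin containing n_obj objects.
def fractional_error(n_obj, sv=sigma_v):
    poisson = 1.0 / math.sqrt(n_obj)  # symmetric Gaussian approximation
    return math.sqrt(poisson ** 2 + sv ** 2)

print(f"{fractional_error(10):.2f}")
```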
In the left panel of Figure \ref{NumberCounts}, we compare surface number densities of LAE candidates per total NB921 magnitude (0.5 mag bin) in the QSO and the Control fields. The LAE number densities are consistent between the two fields at the brighter (23.0--24.5 mag) and the faintest (25.5--26.0 mag) NB921 magnitudes. On the other hand, the LAE number density in the QSO field is significantly lower than that in the Control Field at the intermediate NB921 magnitudes (24.5--25.5 mag) beyond statistical errors and cosmic variance. This deficit of LAEs in the QSO field is also clearly visible when we compare sky distributions of LAEs in the QSO and Control fields in Figure \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma} (see the sizes and corresponding NB921 total magnitudes of the red triangle symbols in the figure).
We also confirm that the brightest LAE candidate (the one in the NB921 = 23.5--24.0 mag bin) is not located in any LBG overdense region in the QSO field and thus seems unrelated to the overdense environment (see Figures \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma} and \ref{NumberDensityContours-QSO-SDF-mean-and-sigma_in_SDF}). However, note that the bright extended LAB candidate VIKING-z66LAB (the second brightest LAE candidate, also in the NB921 = 23.5--24.0 mag bin) is located close to the LBG overdensity region containing the highest density peak in the lower left (south-west) of the QSO field and might be associated with the overdense environment (see Figures \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma} and \ref{NumberDensityContours-QSO-SDF-mean-and-sigma_in_SDF} and Section \ref{LAB}). This is consistent with the observational trend that LABs have often been found in/around dense environments such as protoclusters \citep[e.g.,][]{Steidel00,Chapman04,Colbert06,Matsuda11,Bridge12,Yang12,Yang14,Badescu17}. In contrast, as mentioned earlier, the $z=6.541$ LAB, SDF J132415.7+273058, lies in a region of average LAE and LBG density in the Control Field. Thus, LABs can also be found in normal environments in general fields, and the positional relation between VIKING-z66LAB and the overdensity region in the QSO field could alternatively be a chance alignment.
On the other hand, the right panel of Figure \ref{NumberCounts} compares the surface number densities of LBG candidates per total $z'$ magnitude (0.2 mag bin) in the QSO and the Control fields. In contrast to the case of LAEs, there is a clear excess of the faintest ($z'$ = 25.8--26.0 mag) LBGs in the QSO field compared to the Control Field. Though consistent within errors, there is also a trend of excess in LBGs at $z'$ = 25.4--25.8 mag in the QSO field relative to the Control Field. Otherwise, the LBG number densities in the two fields are consistent at $z'$ = 25.2--25.4 mag, while the QSO field exhibits a deficit of LBGs at the brightest magnitudes $z'$ = 25.0--25.2 mag (though consistent within the errors). This agrees with the fact that the fainter LBG candidates at $z'$ = 25.4--26.0 mag form the overdensities in the QSO field, as seen in Figures \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma} and \ref{NumberDensityContours-QSO-SDF-mean-and-sigma_in_SDF} (see the sizes and corresponding $z'$-band total magnitudes of the blue circle symbols in the figures).
\subsection{Ly$\alpha$ Luminosity Functions of LAEs\label{LyaLFSec}}
Many of the LBG candidates are expected to be bright in the rest-frame UV continuum and very faint or undetected in Ly$\alpha$ emission because of their dropout selection method. Some fraction of LBGs may exhibit strong Ly$\alpha$ emission. However, at $z>6$, the fraction of Ly$\alpha$ emitting LBGs is observed to be low, possibly due to the attenuation of Ly$\alpha$ emission by neutral hydrogen \citep{Stark10,Stark11,Pentericci11,Pentericci14,Ono12,Schenker12,Schenker14,Tilvi14,Caruana12,Caruana14,Treu12,Treu13,Furusawa16}. Thus, the $z'$ band mostly detects the UV continua of the LBG candidates, except for the low fraction of LBGs exhibiting strong Ly$\alpha$ emission. Hence, their surface number density as a function of $z'$-band magnitude shown in the right panel of Figure \ref{NumberCounts} mostly reflects the surface number density as a function of UV luminosity alone, with low contamination by Ly$\alpha$ emitting LBGs; we cannot remove the contamination by such Ly$\alpha$ fluxes.
On the other hand, most of the LAE candidates are expected to be bright in Ly$\alpha$ emission and very faint in the UV continuum because of their narrowband NB921 excess selection method. Hence, the NB921 band detects both their Ly$\alpha$ emission and UV continua (except for LAEs with an undetectably faint UV continuum). Thus, their surface number density as a function of NB921 magnitude shown in the left panel of Figure \ref{NumberCounts} reflects the number density as a function of a mixture of Ly$\alpha$ and UV luminosities. However, we can estimate Ly$\alpha$ and UV luminosities of the LAE candidates separately from their NB921 and $z'$ band total magnitudes and derive the number density as a function of only Ly$\alpha$ luminosity. This is because these bands both cover $z\sim6.6$ Ly$\alpha$ emission and UV continuum redwards of it. To see the trend of the number density of LAEs as a function of only Ly$\alpha$ luminosity, we derive and compare Ly$\alpha$ luminosity functions (LFs) of the LAE candidates in the QSO and Control fields in Figure \ref{LyaLF}.
We followed the same method as the one used by \citet{Kashikawa11} to estimate Ly$\alpha$ luminosities of our $z\sim6.6$ LAE candidates from their NB921 and $z'$ magnitudes. \citet{Kashikawa11} used the following formula to estimate the Ly$\alpha$ line flux ($f_{\rm line}$ in erg s$^{-1}$ cm$^{-2}$) and the rest frame UV continuum flux density ($f_c$ in erg s$^{-1}$ cm$^{-2}$ Hz$^{-1}$ in observer's frame) of the $z \sim 6.6$ LAEs they detected in the SDF (Control Field) from their narrowband NB921 (NB) and broadband $z'$ (BB) magnitudes, $m_{\rm NB}$ and $m_{\rm BB}$:
\begin{equation}
m_{\rm NB,BB} + 48.6 = -2.5\log\frac{\int^{\nu_{{\rm Ly}\alpha}}_0 (f_c + f_{\rm line})T_{\rm NB,BB}d\nu/\nu}{\int T_{\rm NB,BB}d\nu/\nu}
\label{Eqn_LyaUVLum}
\end{equation}
where $\nu_{{\rm Ly}\alpha}$ is the observed frequency of Ly$\alpha$, and $T_{\rm NB}$ and $T_{\rm BB}$ are the transmission curves of the NB921 (NB) and $z'$ (BB) filters as a function of observed frequency, respectively (see Figure \ref{FilterTransmission}). \citet{Kashikawa11} used the $2''$ aperture NB921 and $z'$ magnitudes of each LAE for $m_{\rm NB}$ and $m_{\rm BB}$. They also used the central frequency of the NB921 filter for $\nu_{{\rm Ly}\alpha}$ if an LAE was not spectroscopically identified. Moreover, they assumed that the SED of an LAE has a constant $f_c$ (i.e., a flat continuum), a $\delta$-function Ly$\alpha$ emission profile (i.e., a flux value of $f_{\rm line}$ at $\nu_{{\rm Ly}\alpha}$ and 0 otherwise) and zero flux at wavelengths bluewards of Ly$\alpha$ due to the IGM absorption. Also, if an LAE was not detected in the $z'$ band, the $z'$-band $1\sigma$ limiting magnitude was used for $m_{\rm BB}$.
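Under these assumptions, Equation (\ref{Eqn_LyaUVLum}) written for the two bands is a $2\times2$ linear system in $(f_c, f_{\rm line})$: each band contributes $10^{-0.4(m+48.6)}\,B = f_c A + f_{\rm line}\,T(\nu_{{\rm Ly}\alpha})/\nu_{{\rm Ly}\alpha}$, with $A=\int_0^{\nu_{{\rm Ly}\alpha}} T\,d\nu/\nu$ and $B=\int T\,d\nu/\nu$. A minimal sketch of the solve (any filter curves passed in are meant as hypothetical top hats here, not the real NB921/$z'$ bandpasses):

```python
import numpy as np

# Sketch of how Equation (1) determines f_c and f_line from two bands.
# With a flat continuum, a delta-function line at nu_lya and zero flux
# bluewards of Ly-alpha, each band gives one LINEAR equation in (f_c, f_line):
#   10**(-0.4*(m + 48.6)) * B = f_c * A + f_line * T(nu_lya)/nu_lya.

def _trapz(y, x):
    """Simple trapezoidal integral (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def band_coefficients(nu, T, nu_lya):
    """Return (A, T(nu_lya)/nu_lya, B) for one filter curve T(nu)."""
    B = _trapz(T / nu, nu)                               # normalization integral
    A = _trapz(np.where(nu <= nu_lya, T / nu, 0.0), nu)  # continuum redward of the line
    return A, float(np.interp(nu_lya, nu, T)) / nu_lya, B

def solve_fc_fline(m_nb, m_bb, nb_filter, bb_filter, nu_lya):
    """Solve the 2x2 linear system for (f_c, f_line)."""
    rows, rhs = [], []
    for m, (nu, T) in ((m_nb, nb_filter), (m_bb, bb_filter)):
        A, C, B = band_coefficients(nu, T, nu_lya)
        rows.append([A, C])
        rhs.append(10.0 ** (-0.4 * (m + 48.6)) * B)
    f_c, f_line = np.linalg.solve(np.array(rows), np.array(rhs))
    return f_c, f_line
```

Given synthetic magnitudes generated from known $(f_c, f_{\rm line})$, the solve recovers them exactly, which is the self-consistency the method relies on.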
\citet{Kashikawa11} compared the Ly$\alpha$ fluxes of 45 spectroscopically identified $z\sim6.6$ LAEs in the Control Field estimated photometrically from their $2''$ aperture NB921 and $z'$ magnitudes using Equation (\ref{Eqn_LyaUVLum}) with those measured from their spectra, and confirmed that they are in fairly good agreement within a factor of two (see Figure 5 in their paper and the right panel of Figure \ref{LyaLF} in the present paper). Because they used a large spectroscopic $z\sim6.5$ LAE sample spanning bright to faint Ly$\alpha$ luminosities, the validity of the method was statistically well established.
We used the central frequency of the NB921 filter for $\nu_{{\rm Ly}\alpha}$, the $2''$ aperture NB921 and $z'$ magnitudes of our $z\sim6.6$ LAE candidates in the QSO and Control fields and Equation (\ref{Eqn_LyaUVLum}) to estimate their Ly$\alpha$ fluxes ($f_{\rm line}$). Then, we converted the fluxes to Ly$\alpha$ luminosities. Also, we estimated the number density of LAE candidates by dividing their observed differential numbers in each Ly$\alpha$ luminosity bin by the effective survey volume of the QSO field or the Control Field \citep[see Section \ref{LAE-Selections} and][for the details of the survey volumes]{Taniguchi05}. Moreover, we estimated the errors on the LAE number densities including the Poisson errors and cosmic variance in the same way as we did in Section \ref{NC}. Finally, we corrected the number densities and the errors for the detection completeness estimated in Section \ref{CompletenessSec} and shown in Figure \ref{Completeness} by number weighting according to the NB921 magnitude.
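The flux-to-luminosity step is $L({\rm Ly}\alpha) = 4\pi D_{\rm L}^2(z)\,f_{\rm line}$. A sketch of the conversion; the flat $\Lambda$CDM parameters ($H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m = 0.3$) are an assumption for illustration and may differ slightly from the adopted cosmology:

```python
import numpy as np

# Flux-to-luminosity conversion: L(Lya) = 4*pi*D_L(z)^2 * f_line.
C_KM_S = 2.998e5     # speed of light, km/s
MPC_CM = 3.086e24    # cm per Mpc

def luminosity_distance_cm(z, h0=70.0, om=0.3):
    """Luminosity distance via a simple trapezoidal comoving-distance integral."""
    zz = np.linspace(0.0, z, 100001)
    inv_e = 1.0 / np.sqrt(om * (1.0 + zz) ** 3 + (1.0 - om))
    d_c_mpc = (C_KM_S / h0) * np.sum((inv_e[1:] + inv_e[:-1]) / 2.0) * (zz[1] - zz[0])
    return (1.0 + z) * d_c_mpc * MPC_CM

def lya_luminosity(f_line, z=6.6):
    """Convert a Ly-alpha line flux (erg/s/cm2) to a luminosity (erg/s)."""
    return 4.0 * np.pi * luminosity_distance_cm(z) ** 2 * f_line

# A line flux of 1e-17 erg/s/cm2 corresponds to L(Lya) ~ 5e42 erg/s at z ~ 6.6.
print(f"{lya_luminosity(1e-17):.2e}")
```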
\begin{figure*}
\epsscale{1.5}
\hspace*{-2cm}
\includegraphics[angle=0,scale=0.9]{f11a.eps}
\hspace*{-2cm}
\includegraphics[angle=0,scale=0.9]{f11b.eps}
\caption{(Left) The differential Ly$\alpha$ LFs of the $z\sim6.6$ LAE candidates in the Control Field SDF and the QSO field with their Ly$\alpha$ luminosities $L({\rm Ly}\alpha)_{\rm phot}$'s photometrically estimated from their NB921 and $z'$-band total magnitudes (black open and red filled triangles for the Control and QSO fields, respectively) or $L({\rm Ly}\alpha)_{\rm phot}$'s photometrically estimated from their NB921 and $z'$-band $2''$ aperture magnitudes (grey open and orange filled triangles for the Control and QSO fields, respectively). The data points of the LFs are slightly horizontally shifted from each other for clarity. The errors include both Poisson error and cosmic variance. For the Ly$\alpha$ luminosity bins where no LAE candidate is detected, upper limits are shown by the arrows. We also plot previous measurements of the $z\sim6.6$ LAE Ly$\alpha$ LFs (their best-fit Schechter functions assuming the faint end slope of $\alpha=-1.5$) by \citet[][K11, the magenta dashed curve]{Kashikawa11} and \citet[][O10, the green solid curve, their data points are also shown by the green filled circles]{Ouchi10}. (Right) Ratios of a photometrically estimated Ly$\alpha$ luminosity $L({\rm Ly}\alpha)_{\rm phot}$ to a spectroscopically measured Ly$\alpha$ luminosity $L({\rm Ly}\alpha)_{\rm spec}$ as a function of $L({\rm Ly}\alpha)_{\rm spec}$ of the $z\sim 6.6$ LAEs spectroscopically confirmed in the Control Field SDF. $L({\rm Ly}\alpha)_{\rm phot}$'s were estimated by us from either NB921 and $z'$-band total or $2''$ aperture magnitudes of the LAEs using Equation (\ref{Eqn_LyaUVLum}) while $L({\rm Ly}\alpha)_{\rm spec}$'s were measured from spectra of the LAEs by \citet{Taniguchi05}, \citet{Kashikawa06} and \citet{Kashikawa11}. The horizontal solid line corresponds to $L({\rm Ly}\alpha)_{\rm phot}/L({\rm Ly}\alpha)_{\rm spec} = 1.0$. 
The vertical dashed line shows the Ly$\alpha$ luminosity $\log L({\rm Ly}\alpha)_{\rm spec}$/[erg s$^{-1}$] $= 42.6$ below which the photometric measurements based on NB921 and $z'$-band total magnitudes largely overestimate real Ly$\alpha$ luminosities measured from spectra such that $L({\rm Ly}\alpha)_{\rm phot}/L({\rm Ly}\alpha)_{\rm spec} \sim 1.5$--3.3 or $\log [L({\rm Ly}\alpha)_{\rm phot}/L({\rm Ly}\alpha)_{\rm spec}] \sim 0.2$--0.5. This overestimation causes many LAEs to move from the faintest Ly$\alpha$ luminosity bin to the next three brighter bins in the Ly$\alpha$ LF of LAEs in the Control Field, making the photometrically estimated LF (black open triangles) higher in the three bins and lower in the faintest bin than the mostly spectroscopically estimated LF (magenta dashed curve) in the left panel.\label{LyaLF}}
\end{figure*}
In the left panel of Figure \ref{LyaLF}, we compare the differential Ly$\alpha$ LFs of the LAE candidates in the QSO and the Control fields (orange filled and grey open triangles, respectively). We also plot the previous measurement of the $z\sim6.6$ LAE Ly$\alpha$ LF (its best-fit Schechter function assuming the faint end slope of $\alpha=-1.5$) in the Control Field SDF derived by \citet{Kashikawa11}. They used spectroscopically measured Ly$\alpha$ fluxes for the spectroscopically identified LAEs ($\sim 80$\% of the LAE candidates) and Ly$\alpha$ fluxes photometrically inferred from Equation (\ref{Eqn_LyaUVLum}) for the remaining unidentified LAE candidates ($\sim 20$\% of the LAE candidates). We see that their LF (magenta dashed curve in the figure) and our LF in the Control Field are almost consistent. This suggests that we successfully reproduced photometrically the LF in the Control Field that was accurately measured by spectroscopy. However, strictly speaking, the number density of LAEs in the second faintest bin of our LF is larger than that of \citet{Kashikawa11}'s spectroscopic LF for the following two reasons. (1) The second faintest bin of our LF includes four out of the five additional LAE candidates detected without imposing non-detections in wavebands ($B$, $V$ and $R_c$) bluewards of $z\sim6.6$ Ly$\alpha$ (the case 1--5 objects in Figure \ref{woBVR_Objects}, see Sections \ref{SDFsample} and \ref{lackBVR}) while the \citet{Kashikawa11} LF does not contain them. 
(2) For the faintest LAEs with spectroscopically measured Ly$\alpha$ luminosities $\log L({\rm Ly}\alpha)_{\rm spec}$/[erg s$^{-1}$] $\sim$ 42.4--42.6 in the faintest LF bin, the Ly$\alpha$ luminosities $L({\rm Ly}\alpha)_{\rm phot}$ photometrically estimated with Equation (\ref{Eqn_LyaUVLum}) and $2''$ aperture NB921 and $z'$ magnitudes tend to be overestimated by $\log [L({\rm Ly}\alpha)_{\rm phot}/L({\rm Ly}\alpha)_{\rm spec}] \sim 0.1$--0.3 dex (see the right panel of Figure \ref{LyaLF}). This causes some LAEs in the faintest bin to move to the second faintest bin, decreasing the LAE number density in the faintest bin and increasing it in the second faintest bin when comparing the photometric LF with the LF mostly spectroscopically derived by \citet{Kashikawa11}. Nonetheless, despite the larger LAE number in the second faintest bin, our photometric LF is overall well consistent with \citet{Kashikawa11}'s spectroscopic LF. Thus, we presume that our photometric LF of LAE candidates in the QSO field estimated by using Equation (\ref{Eqn_LyaUVLum}) also well reproduces the realistic LF in the QSO field that would ideally be derived by spectroscopic measurements of Ly$\alpha$ luminosities of LAEs.
In the left panel of Figure \ref{LyaLF}, we see that the LFs in the QSO and Control fields (orange filled and grey open triangles) are consistent at the four brightest Ly$\alpha$ luminosity bins $\log L({\rm Ly}\alpha)$/[erg s$^{-1}] = 42.8$--43.6 within statistical error and cosmic variance. Although consistent within errors, there is a trend that the number density of LAEs in the QSO field is lower than that in the Control Field at the intermediate Ly$\alpha$ luminosities $\log L({\rm Ly}\alpha)$/[erg s$^{-1}] = 42.8$--43.2. Moreover, the number density of the LAEs in the QSO field is lower than that in the Control Field beyond statistical error and cosmic variance at the fainter Ly$\alpha$ luminosities $\log L({\rm Ly}\alpha)$/[erg s$^{-1}] = 42.4$--42.8. Therefore, the number density of the LAEs at the intermediate to faint Ly$\alpha$ luminosities is lower in the QSO environment than in a general blank field. This is consistent with the trend we see when comparing the sky distributions and number density contours of LAEs in the QSO and Control fields in Figure \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma}.
Moreover, in the left panel of Figure \ref{LyaLF}, we also plot the $z\sim6.6$ LAE Ly$\alpha$ LF (its data points and best-fit Schechter function assuming the faint end slope of $\alpha=-1.5$) derived by \citet{Ouchi10}, although the LAE selection criteria used to select $\sim$ 78\% of their LAEs are slightly different from ours (they did not impose $z'-{\rm NB921}>3\sigma$ to select LAEs in the SXDS field). Their LF is based on the $z\sim6.6$ LAEs detected in the SDF and SXDS fields (6 Suprime-Cam pointings) whose area is 6 times larger than that of our Control Field SDF. Thus, their LF represents a more typical trend with a reduced field-to-field variance effect. \citet{Kashikawa11}'s LF in the Control Field is consistent with \citet{Ouchi10}'s LF. As our photometrically derived Control Field LF is mostly consistent with \citet{Kashikawa11}'s LF (see text above), it is also compatible with \citet{Ouchi10}'s LF. This means that the Control Field represents a typical field for $z\sim6.6$ LAEs, which supports the validity of our choice of the SDF as a control field for this study. This further ensures that the LAE number density is significantly lower in the QSO field than in a typical blank field.
Finally, for comparison, we also photometrically derive the Ly$\alpha$ LFs of LAE candidates in the QSO and Control fields by using their ``total'' NB921 and $z'$ magnitudes (rather than $2''$ aperture magnitudes) for $m_{\rm NB}$ and $m_{\rm BB}$ in Equation (\ref{Eqn_LyaUVLum}) to estimate their Ly$\alpha$ fluxes $f_{\rm line}$ and then their Ly$\alpha$ luminosities $L({\rm Ly}\alpha)_{\rm phot}$. We do this because $2''$ aperture magnitudes may underestimate the Ly$\alpha$ $+$ UV continuum flux of each LAE, possibly missing some flux that falls outside the $2''$ aperture. We plot the derived Ly$\alpha$ LFs in the QSO and Control fields (red filled and black open triangles, respectively) in the left panel of Figure \ref{LyaLF}. As for the QSO field, the LFs derived from total and $2''$ aperture NB921 and $z'$ magnitudes (red and orange filled triangles) are well consistent with each other. Although consistent within errors, there is a trend that the number density of LAEs in the total magnitude LF is lower in the faintest Ly$\alpha$ luminosity bin and higher in the next two bins than that in the $2''$ aperture magnitude LF. This is because total magnitudes and Equation (\ref{Eqn_LyaUVLum}) give Ly$\alpha$ luminosities higher than those estimated using $2''$ aperture magnitudes, making some LAEs move from the faintest bin to the next two bins when comparing the total magnitude LF with the $2''$ aperture magnitude LF.
On the other hand, as for the Control Field, the LFs derived from total and $2''$ aperture NB921 and $z'$ magnitudes (black and grey open triangles) are consistent only at the two brightest Ly$\alpha$ luminosity bins and the second faintest bin. The number density of LAEs in the total magnitude LF is lower in the faintest Ly$\alpha$ luminosity bin and higher in the third and fourth faintest bins than that in the $2''$ aperture magnitude LF. Also, the number densities of LAEs in the second to fourth faintest Ly$\alpha$ luminosity bins in the total magnitude LF are higher than those of the LF mostly spectroscopically derived by \citet{Kashikawa11}. This is mainly for the following reason. For the faintest LAEs with spectroscopically measured Ly$\alpha$ luminosities $\log L({\rm Ly}\alpha)_{\rm spec}$/[erg s$^{-1}$] $\sim$ 42.4--42.6 in the faintest LF bin, the Ly$\alpha$ luminosities $L({\rm Ly}\alpha)_{\rm phot}$ photometrically estimated with Equation (\ref{Eqn_LyaUVLum}) and total NB921 and $z'$ magnitudes tend to be overestimated by $\log [L({\rm Ly}\alpha)_{\rm phot}/L({\rm Ly}\alpha)_{\rm spec}] \sim 0.2$--0.5 dex (see the right panel of Figure \ref{LyaLF}). This causes many LAEs in the faintest bin to move to the second to fourth faintest bins, decreasing the LAE number density in the faintest bin and increasing it in the second to fourth faintest bins when comparing the total magnitude LF with the LF spectroscopically derived by \citet{Kashikawa11}. Despite the difference between the LFs derived from total and $2''$ aperture NB921 and $z'$ magnitudes, both LFs in the QSO and Control fields show almost the same trend that the number density of LAEs with intermediate to faint Ly$\alpha$ luminosities is lower in the QSO field than in the Control Field.
\subsection{Can QSO Feedback Suppress the Formation of LAEs and LBGs?\label{Feedback}}
In Section \ref{NdensityContours} and Figures \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma} and \ref{NumberDensityContours-QSO-SDF-mean-and-sigma_in_SDF}, we found that LAE candidates are sparse (3 LAEs) while LBG candidates are more numerous (12 LBGs) in the QSO proximity region ($< 3$ physical Mpc from the QSO). Also, the proximity region is located at the lower density edge of the large scale structures of the LAE and LBG candidates. The UV radiation from the QSO might have suppressed the formation of LAEs (lower mass galaxies) while leaving the formation of LBGs (higher mass galaxies) unaffected, eventually producing these biased sky and density distributions of LAEs and LBGs. To examine this scenario, we quantitatively estimate the strength of the QSO radiation and check whether it could have affected the formation of LAEs and LBGs in the proximity region.
We follow the same method as \cite{Kashikawa07}. We assume that the QSO spectrum can be approximated by a power law, $F^Q_{\nu} \propto \nu^{\beta}$, where $\beta$ is the UV continuum slope of the QSO spectrum. Then, the local flux density at the Lyman limit frequency, $\nu_{\rm L}$, at a radius $r$ from the QSO is given by
\begin{equation}
F^Q_{\nu}(\nu_{\rm L}, r) = \frac{L_{\nu}(\nu_{\rm L})}{4 \pi r^2}
\label{QSO-flux}
\end{equation}
where $L_{\nu}(\nu_{\rm L}) = 4 \pi D_{\rm L}^2 F^Q_{\nu}(\nu_{\rm L}, D_{\rm L}) (1+z)$ is the QSO luminosity at $\nu_{\rm L}$ and $D_{\rm L}$ is the luminosity distance. We estimate the continuum slope $\beta$ of the $z=6.61$ QSO from its magnitudes at the rest frame UV wavelengths by using
\begin{equation}
\beta = \frac{m_1 - m_2}{2.5 \log (\lambda_{c,1}/\lambda_{c,2})}
\label{UV-slope}
\end{equation}
where $m_1$, $m_2$, $\lambda_{c,1}$ and $\lambda_{c,2}$ are the apparent magnitudes and central wavelengths of broadband filters 1 and 2. The QSO was observed in the VISTA $Y$, $J$, $H$ and $K_{\rm s}$ bands in the VIKING survey. All of them cover the rest-frame UV wavelengths redwards of Ly$\alpha$ and include neither the Ly$\alpha$ emission nor the continuum trough bluewards of Ly$\alpha$. The central wavelengths are 1.020, 1.252, 1.645 and 2.147 $\mu$m for the $Y$, $J$, $H$ and $K_{\rm s}$ bands, respectively\footnote[4]{http://casu.ast.cam.ac.uk/surveys-projects/vista/technical/filter-set}. \citet{Venemans13} measured the magnitudes of the QSO in these bands to be $Y=20.89$, $J=20.68$, $H=20.72$ and $K_{\rm s}=20.27$ AB mag. We calculate $\beta$'s from the combinations of ($Y$, $J$), ($J$, $H$) and ($H$, $K_{\rm s}$) using Equation (\ref{UV-slope}) and adopt the average of the three, $\beta = -0.79$, as the UV continuum slope of the QSO.
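The adopted slope can be reproduced directly from Equation (\ref{UV-slope}) and the quoted VIKING magnitudes:

```python
import math

# Reproducing the UV continuum slope estimate from Equation (3) with the
# VIKING Y, J, H, Ks magnitudes of the QSO quoted in the text.
mags = {"Y": (20.89, 1.020), "J": (20.68, 1.252),
        "H": (20.72, 1.645), "Ks": (20.27, 2.147)}  # (AB mag, central wavelength / um)

def slope(b1, b2):
    """beta = (m1 - m2) / (2.5 * log10(lambda1 / lambda2))."""
    (m1, l1), (m2, l2) = mags[b1], mags[b2]
    return (m1 - m2) / (2.5 * math.log10(l1 / l2))

betas = [slope("Y", "J"), slope("J", "H"), slope("H", "Ks")]
beta = sum(betas) / len(betas)
print(f"beta = {beta:.2f}")  # -0.79, matching the adopted value
```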
From this $\beta$ and the absolute magnitude of the QSO at a rest-frame wavelength of 1450\AA, $M_{1450} = -25.96$ AB mag, measured by \citet{Venemans13}, we obtain $L_{\nu}(\nu_{\rm L}) \sim 8.5 \times 10^{31}$ erg s$^{-1}$ Hz$^{-1}$. For the radius of the proximity region, $r = 3$ physical Mpc, we obtain $F^Q_{\nu}(\nu_{\rm L}, 3 {\rm pMpc}) \sim 7.9 \times 10^{-20}$ erg s$^{-1}$ cm$^{-2}$ Hz$^{-1}$ from Equation (\ref{QSO-flux}).
We assume the UV intensity in the form of a power-law spectrum,
\begin{equation}
J(\nu) = J_{21} \times (\nu/\nu_{\rm L})^{\alpha} \times 10^{-21}~{\rm erg~s^{-1}~cm^{-2}~Hz^{-1}~sr^{-1}}
\label{UV-intensity}
\end{equation}
where $J_{21}$ is an isotropic UV intensity at the Lyman limit and $\alpha$ is the continuum slope. As $J(\nu_{\rm L}) = F^Q_{\nu}(\nu_{\rm L}, r)/4 \pi$, we obtain $J_{21} \sim 6.3$ for the UV intensity at the edge of the proximity region ($r = 3$ physical Mpc).
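The two steps above, $F = L_{\nu}/(4\pi r^2)$ followed by $J = F/4\pi$, can be checked numerically from the quoted $L_{\nu}(\nu_{\rm L}) \sim 8.5 \times 10^{31}$ erg s$^{-1}$ Hz$^{-1}$:

```python
import math

# Proximity-zone intensity: F = L_nu / (4*pi*r^2), then J_21 = F / (4*pi)
# expressed in units of 1e-21 erg/s/cm2/Hz/sr.
MPC_CM = 3.086e24   # cm per Mpc
L_NU = 8.5e31       # erg/s/Hz at the Lyman limit, as quoted in the text

def j21_at(r_pmpc):
    r = r_pmpc * MPC_CM
    flux = L_NU / (4.0 * math.pi * r ** 2)    # erg/s/cm2/Hz
    return flux / (4.0 * math.pi) / 1.0e-21   # mean intensity in J_21 units

print(f"J_21(3 pMpc) = {j21_at(3.0):.1f}")  # 6.3, as quoted
print(f"J_21(1 pMpc) = {j21_at(1.0):.1f}")  # ~56, inverse-square scaling
```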
On the other hand, \citet{Calverley11} measured the UV background (UVB) at $z=4.6$--6.4 using the QSO proximity effect. They derived the correlation between the HI photoionization rate due to the UVB and redshift, $\log \Gamma_{\rm bkg} \sim -0.87 z - 7.7$ (see Figure 10 in their paper). This gives $\log \Gamma_{\rm bkg} \sim -13.45$ at $z=6.61$, corresponding to a UVB intensity of $J_{21} \sim 0.013$ (using Equation (8) in their paper and Equation (\ref{UV-intensity}) in this paper). Hence, the QSO radiation is $\sim$485 times stronger than the UVB at $r = 3$ physical Mpc from the $z=6.61$ QSO. Meanwhile, the LAE candidate nearest to the QSO is located at a projected distance of $r \sim 1$ physical Mpc from the QSO, which is the minimum possible distance between the observed LAE and the QSO. At $r \sim 1$ physical Mpc, we estimate the QSO radiation intensity to be $J_{21} \sim 56.4$.
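The background comparison follows directly from the \citet{Calverley11} fit and the two intensity values quoted above:

```python
# UV-background comparison: Gamma_bkg from the Calverley et al. (2011) fit
# log Gamma ~ -0.87*z - 7.7, evaluated at z = 6.61, and the ratio of the QSO
# intensity at 3 pMpc (J_21 ~ 6.3) to the background (J_21 ~ 0.013).
z = 6.61
log_gamma_bkg = -0.87 * z - 7.7
print(f"log Gamma_bkg = {log_gamma_bkg:.2f}")  # -13.45, as quoted

j21_qso, j21_bkg = 6.3, 0.013
print(f"QSO/UVB = {j21_qso / j21_bkg:.0f}")    # 485, as quoted
```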
Does this affect the formation of LAEs and LBGs? We examine this by using Figure 8 in \citet{Kashikawa07} that predicts the delay time $t_{\rm delay}$ of star formation as a function of radiation intensity $J_{21}$ for a given virial mass of a halo $M_{\rm vir}$. \citet{Kashikawa07} derived this relation between $t_{\rm delay}$, $J_{21}$ and $M_{\rm vir}$ by performing radiation-hydrodynamic simulations to examine the effect of radiation of the $z=4.87$ QSO they observed on star formation. In the simulations, gas was set to collapse (in the absence of thermal pressure) at $z=4.87$. We assume that the same relation also holds in the case of gas that collapses at $z=6.61$. When $J_{21} \sim 6.3$ ($r = 3$ physical Mpc from the $z=6.61$ QSO), star formation is suppressed in a halo with $M_{\rm vir} < 10^{10} M_{\odot}$. If $J_{21} \sim 56.4$ ($r = 1$ physical Mpc from the QSO), star formation is suppressed in a halo with $M_{\rm vir} < 3 \times 10^{10} M_{\odot}$.
\begin{figure*}
\epsscale{1.17}
\plottwo{Newf12a.eps}{f12b.eps}
\caption{Isophotal area as a function of NB921 total magnitude of the $z\sim6.6$ LAE candidates (open circles) and point sources (gray dots), measured by the SExtractor parameter {\tt ISOAREA\_IMAGE} in the PSF FWHM $= 0.''77$ NB921 image of the QSO field (left panel) and the PSF FWHM $= 0.''98$ NB921 image of the Control Field SDF (right panel). The isophotal area is defined as the area of the pixels with values above the $2\sigma$ sky fluctuation in each image. The point sources were selected with the SExtractor parameters {\tt CLASS\_STAR} (stellarity) $> 0.9$ and {\tt FLAGS} $=0$. The $z\sim6.6$ LAB candidate, VIKING-z66LAB, and the spectroscopically confirmed $z=6.541$ LAB, SDF J132415.7+273058 \citep{Kodaira03,Taniguchi05}, are denoted by the red filled circles. The $z\sim6.6$ LAE candidates marked with red crosses are located in very noisy regions (blended with noise) in the NB921 image, and their isophotal area measurements are unreliable.\label{IsophotalArea}}
\end{figure*}
\begin{figure}
\epsscale{1.18}
\plotone{newf13.eps}
\caption{The $10'' \times 10''$ $i'$, $z'$ and NB921 images of the LAB candidate, VIKING-z66LAB, found near the most overdense region in the QSO field. It is extended in NB921 (Ly$\alpha$ + UV continuum) with its angular diameter $\gtrsim 3''$ or $\gtrsim 16$ physical kpc. It appears that at least two objects are interacting at the position of VIKING-z66LAB in the $z'$-band image. All the images have been convolved to have the same PSF FWHM of $0.''91$. This PSF size is displayed in the NB921 image to clearly show how extended VIKING-z66LAB is.\label{LABFig}}
\end{figure}
\begin{deluxetable*}{ccccccccccc}
\tabletypesize{\scriptsize}
\tablecaption{Photometric Properties of the $z\sim 6.6$ LAB candidate VIKING-z66LAB in the QSO Field\label{LABPhotometry}}
\tablewidth{510pt}
\tablehead{
\colhead{R.A.(J2000)} & \colhead{Decl.(J2000)} & \colhead{$i'_{2''}$} & \colhead{$z'_{2''}$} & \colhead{$z'_{\rm total}$$^{\rm b}$} & \colhead{NB921$_{2''}$} & \colhead{NB921$_{\rm total}$$^{\rm c}$} & \colhead{stellarity$^{\rm d}$} & \colhead{$L({\rm Ly}\alpha)$$^{\rm e}$} & \colhead{$M_{\rm UV}$$^{\rm e}$} & \colhead{EW$_0$$^{\rm f}$}\\
\colhead{} & \colhead{} & \colhead{(mag)} & \colhead{(mag)} & \colhead{(mag)} & \colhead{(mag)} & \colhead{(mag)} & \colhead{} & \colhead{(erg s$^{-1}$)} & \colhead{(mag)} & \colhead{(\AA)}
}
\startdata
03:04:41.091 & $-$32:04:27.30 & $>$28.1$^{\rm a}$ & 26.16 & 25.96 & 24.41 & 23.78 & 0.01 & $2.6 \times 10^{43}$ & $-20.63$ & 164\\
\enddata
\tablecomments{Units of coordinate are hours: minutes: seconds (right ascension) and degrees: arcminutes: arcseconds (declination) using J2000.0 equinox. The $i'_{2''}$, $z'_{2''}$ and NB921$_{2''}$ are the aperture magnitudes (we first detected VIKING-z66LAB in the NB921 image and then measured the $2''$ aperture magnitudes in the $i'$, $z'$ and NB921 images using the SExtractor double image mode). VIKING-z66LAB has colors of $i'-z' > 1.94$ ($1\sigma$ limit) and $z'-{\rm NB921} =1.75$ calculated from those $2''$ aperture magnitudes and/or its $1\sigma$ limit.}
\tablenotetext{a}{$1\sigma$ limit.}
\tablenotetext{b}{The total $z'$-band magnitude. We detected VIKING-z66LAB in the $z'$-band image, then measured the $2''$ aperture $z'$-band magnitude (in this case, 26.10 mag) using the SExtractor single image mode and finally applied the aperture correction of $-0.14$ mag to obtain $z'_{\rm total}$. We used the aperture correction estimated for LBGs as VIKING-z66LAB is more likely a Ly$\alpha$ emitting LBG (see text). We did not use the $z'_{2''}=26.16$ measured by the SExtractor double image mode to estimate $z'_{\rm total}$ because SExtractor adopts the detection position (X,Y) in the NB921 image as the position of the $2''$ aperture placed in the $z'$-band image; in this case VIKING-z66LAB is not located exactly at the center of the $2''$ aperture, so that some fraction of its flux is lost outside the aperture. When we use the SExtractor single image mode, the $2''$ aperture is placed at the detection position in the $z'$-band image and VIKING-z66LAB is located at the center of the $2''$ aperture, minimizing the flux loss.}
\tablenotetext{c}{The total NB921 magnitude measured by SExtractor {\tt MAG\_AUTO} in the ${\rm PSF}=0.''77$ NB921 image of the QSO field (the original image before the convolution for the $2''$ aperture photometry). See Section \ref{NC} for the details.}
\tablenotetext{d}{The star/galaxy classifier index measured in the ${\rm PSF}=0.''77$ NB921 image and given as {\tt CLASS\_STAR} parameter by SExtractor. It is 0 for a galaxy, 1 for a star, or any intermediate value for more ambiguous objects \citep{BA96}.}
\tablenotetext{e}{The Ly$\alpha$ luminosity $L({\rm Ly}\alpha)$ and the absolute UV continuum magnitude $M_{\rm UV}$ estimated from the NB921$_{\rm total}$ and $z'_{\rm total}$ magnitudes by using the equation (\ref{Eqn_LyaUVLum}).}
\tablenotetext{f}{The rest-frame Ly$\alpha$ equivalent width calculated from $L({\rm Ly}\alpha)$ and $M_{\rm UV}$.}
\end{deluxetable*}
Using a semi-analytic model of galaxy formation, \citet{Garel15} predicted that halo masses of typical ($L({\rm Ly}\alpha) = 10^{42}$--$10^{43}$ erg s$^{-1}$) and bright ($L({\rm Ly}\alpha) = 10^{43}$--$10^{44}$ erg s$^{-1}$) LAEs at $z=6.6$ are $\log (M_{\rm h}/M_{\odot}) = 10.8_{-0.2}^{+0.4}$ and $\log (M_{\rm h}/M_{\odot}) = 11.5_{-0.1}^{+0.1}$, respectively. Meanwhile, \citet{Ouchi10} estimated the halo mass of $z\sim 6.6$ LAEs from the clustering of 207 $z\sim 6.6$ LAE candidates detected over a 1 deg$^2$ area of the SXDS field in the Suprime-Cam NB921 band to the same depth as our observations of the $z=6.61$ QSO (NB921 $<$ 26.0 or $L({\rm Ly}\alpha) > 2.5 \times 10^{42}$ erg s$^{-1}$). They estimated the minimum, average and maximum halo masses to be $\log (M_{\rm h}^{\rm min}/M_{\odot}) = 9.9_{-0.6}^{+0.4}$, $\log (M_{\rm h}/M_{\odot}) = 10.3_{-0.4}^{+0.4}$ and $\log (M_{\rm h}^{\rm max}/M_{\odot}) = 11.1_{-0.4}^{+0.3}$, respectively. More recently, based on halo occupation distribution (HOD) models, \citet{Ouchi17} also estimated the halo mass of $z\sim 6.6$ LAEs from 873 $z\sim 6.6$ LAE candidates detected using the early data of the Hyper Suprime-Cam Subaru Strategic Program survey to a brighter limit (NB921 $<$ 25.0 or $L({\rm Ly}\alpha) > 7.9 \times 10^{42}$ erg s$^{-1}$ or $\gtrsim L^*$) over a 21.2 deg$^2$ area of sky. They estimated the minimum and average halo masses to be $\log (M_{\rm h}^{\rm min}/M_{\odot}) = 9.1_{-1.9}^{+0.7}$ and $\log (\langle M_{\rm h} \rangle /M_{\odot}) = 10.8_{-0.5}^{+0.3}$, respectively. Taken together, $z\sim 6.6$ LAEs could have halo masses from $<10^{10} M_{\odot}$ to $\sim 10^{11} M_{\odot}$. Thus, formation of $M_{\rm h} < 1$--$3 \times 10^{10} M_{\odot}$ LAEs could be suppressed by the QSO radiation in the QSO proximity region while that of LAEs hosted by higher mass halos is not. The three LAE candidates seen in projection in the QSO proximity region could have $M_{\rm h} > 1$--$3 \times 10^{10} M_{\odot}$. 
Otherwise, they may be located at $r > 3$ physical Mpc in front of the QSO, nearly along the line-of-sight direction, as the NB921 band has better sensitivity to foreground LAEs due to the redshift of the QSO, as mentioned in Section 1 and Figure \ref{NB921_LyaPeakDist}.
On the other hand, our LBG candidates in the QSO field have the rest-frame UV continuum magnitudes (at 1250\AA) of $M_{\rm UV} \leq -20.8$ converted from the limiting magnitude for the LBG selection $z' \leq 26.1$ assuming that LBGs are at $z=6.61$. \citet{Garel15} predicted that halo masses of bright ($-20.8 > M_{1500} > -23.3$) LBGs at $z=6.6$ are $\log (M_{\rm h}/M_{\odot}) = 11.5_{-0.1}^{+0.2}$. Meanwhile, based on the HOD models, \citet{Harikane17} estimated the halo mass of $z\sim6.8$ LBGs with $M_{\rm UV} < -19.5$ to be $\log (M_{\rm h}/M_{\odot}) = 11.00_{-0.08}^{+0.07}$. As the halo masses of the LBGs are sufficiently high (i.e., $M_{\rm h} > 3 \times 10^{10} M_{\odot}$), their formation would not be suppressed by the radiation from the $z=6.61$ QSO even if they are within the QSO proximity region. In addition, as the redshift range of the LBG candidates spans $6 < z < 6.9$, if some of them are far from the QSO in the line-of-sight direction, their formation is of course not affected by the QSO radiation.
All the discussions above can explain the fact that there are far fewer LAE candidates than LBG candidates within the QSO proximity region. Moreover, this can also be caused by the difference in the probed volume between LAEs and LBGs ($\Delta z \sim 0.1$ for LAEs and $\Delta z \sim 0.9$ for LBGs) as more galaxies are detected in a larger volume. However, the sparsity of LAE candidates in the QSO proximity region compared to the number of LAE candidates outside the proximity region may imply that formation of some of the lower mass LAEs was possibly suppressed by the QSO radiation.
\subsection{A Bright Extended Ly$\alpha$ Blob Candidate near the Most Overdense Region in the QSO Field\label{LAB}}
We found a $z\sim6.6$ bright extended LAB candidate, VIKING-z66LAB, near the most overdense region consisting of LBG candidates located in the south-west of the QSO field (see Figures \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma} and \ref{NumberDensityContours-QSO-SDF-mean-and-sigma_in_SDF} for its location in the QSO field). VIKING-z66LAB was detected as one of the $z\sim6.6$ LAE candidates in the QSO field and satisfies the LAE selection criteria (\ref{Criteria-1}). Figure \ref{CMDand2CD} shows the locations of VIKING-z66LAB in the $z'-{\rm NB921}$ versus NB921 color-magnitude diagram and the $z'-{\rm NB921}$ versus $i'-z'$ two-color diagram. VIKING-z66LAB is also identified as an object with a very bright NB921 magnitude and a size in the NB921 band (Ly$\alpha$ + UV continuum) much more extended than stellar sources and any other LAE candidates in the isophotal area versus NB921 magnitude diagram in Figure \ref{IsophotalArea}. Figure \ref{LABFig} also shows that VIKING-z66LAB is much more extended than the PSF FWHM size in the NB921 image.
VIKING-z66LAB also has a very red color of $i'-z'>1.94$ comparable to the $i'-z'>1.8$ color cut of the LBG selection criteria (\ref{Criteria-3}). However, it was not selected as an LBG candidate as it has a $2''$ aperture magnitude of $z'=26.16$ and is slightly fainter than the limiting magnitude $z'=26.1$ for the LBG selection criteria (\ref{Criteria-3}). Hence, VIKING-z66LAB is more likely a Ly$\alpha$ emitting LBG with strong Ly$\alpha$ emission and a faint UV continuum. Indeed, it has a high Ly$\alpha$ luminosity of $L({\rm Ly}\alpha) \sim 2.6 \times 10^{43}$ erg s$^{-1}$ and a faint UV continuum magnitude of $M_{\rm UV} \sim -20.63$ mag, both estimated from its total NB921 and $z'$ magnitudes using equation (\ref{Eqn_LyaUVLum}). Figure \ref{CMD_LBGs_both_fields} shows the location of VIKING-z66LAB in the $i'-z'$ versus $z'$ color-magnitude diagram. Also, the photometric properties of VIKING-z66LAB are shown in Table \ref{LABPhotometry}.
Figure \ref{LABFig} shows the images of VIKING-z66LAB in $i'$, $z'$ and NB921 bands. It is not detected in $i'$, faintly detected in $z'$, and bright and extended in NB921 (Ly$\alpha$ + UV continuum) with its angular diameter $\gtrsim 3''$ or $\gtrsim 16$ physical kpc. This size in NB921 is comparable to those of the previously found $z\sim6.6$ LABs also detected in NB921 with Subaru Suprime-Cam such as Himiko and CR7 \citep{Ouchi09,Sobral15}. The HST Wide Field Camera 3 (WFC3) near-infrared high resolution images of Himiko and CR7 revealed that each of them consists of three objects merging or interacting within a large Ly$\alpha$ cloud \citep{Ouchi13,Sobral15}. VIKING-z66LAB appears to consist of two sources merging or interacting in the $z'$-band as seen in Figure \ref{LABFig}. Thus, VIKING-z66LAB might turn out to be a merger system if seen in higher resolution images because it is located near the most overdense region in the QSO field, and because merging of galaxies tends to occur frequently in/around such dense environments. Therefore, if VIKING-z66LAB is found to be a multiple merger system at $z\sim6.6$ by follow-up high resolution imaging and spectroscopy, this would support the reality of the most overdense region of the LBG candidates in the south-west of the QSO field, which is a part of the larger scale structure of LBG candidates also containing the $z=6.61$ QSO and its proximity region.
On the other hand, as mentioned in Section \ref{NC}, there is also the comparably bright (NB921$_{\rm total}$ = 23.69) and extended ($\gtrsim 3''$) $z=6.541$ LAB, SDF J132415.7+273058 \citep{Kodaira03,Taniguchi05}, in the average LAE and LBG density region in the Control Field SDF as seen in Figure \ref{NumberDensityContours-QSO-SDF-each-own-mean-and-sigma}. This LAB can also be identified as an object more extended than stellar sources in the isophotal area versus NB921 magnitude diagram in Figure \ref{IsophotalArea}. \citet{Jiang13} carried out high resolution observations of this LAB in the HST WFC3 near-infrared bands and found that it does not show multiple components or tails but is extended and elongated. They also found that there is a misalignment between the positions of its Ly$\alpha$ and UV continuum emission in this LAB, with the Ly$\alpha$ position close to that of the fainter component of the UV continuum emission. Hence, they concluded that this LAB could be at the end of the merging process. If this is the case, this means that galaxy merging also occurs at $z\sim6.6$ in an average galaxy density environment in a general blank field, thereby forming an LAB. Thus, the positional relation between VIKING-z66LAB and the highest overdensity region in the QSO field could be alternatively interpreted as a chance coincidence. To examine this, follow-up spectroscopy of the LBG candidates in/around the highest overdensity region is required.
\section{Summary and Conclusion}
We conducted Subaru Suprime-Cam $i'$, $z'$ and NB921 band imaging of a sky area of $\sim 700$ arcmin$^2$ around the $z=6.61$ QSO J0305--3150 hosting an $M_{\rm BH} \sim 1 \times 10^9 M_{\odot}$ SMBH and detected both LAE and LBG candidates in the QSO field. In the same way and to comparable depths, area and completeness, we also detected LAE and LBG candidates as a control sample in the Control Field (SDF), a general blank sky field where we confirm that there are neither $z \sim 6.6$ QSOs, clustering of LAEs and LBGs, nor over/underdensities of them. This allows us, for the first time, to probe galaxies with a wide range of masses and ages around a $z>6$ QSO in a large sky area, and to elucidate potential large scale galaxy overdensities by measuring galaxy densities around the QSO accurately using the control sample as a rigorous baseline for comparison. This makes up for the shortcomings of previous studies, which probed only LBGs (biased to massive older galaxies) in small areas (only tens of arcmin$^2$) around $z>6$ QSOs without consistently constructed control samples as a baseline, possibly causing the puzzling results of a wide variety of galaxy densities found right around $z>6$ QSOs. We compare sky distributions, surface number density contours, number counts, Ly$\alpha$ LFs and ACFs of the LAEs/LBGs in the $z=6.61$ QSO and Control fields.
The sky distributions and the number density contours indicate that LAE and LBG candidates are spread on a large scale mostly over a $\sim 30 \times 60$ comoving Mpc$^2$ area in the southern half of the QSO field. Over this area, the number density of LAEs is almost equivalent to the mean to mean$-1\sigma$ density of LAEs in the Control Field. Conversely, over this area, LBGs exhibit a filamentary overdensity structure running from east to west. The LBG structure contains several 3--$7\sigma$ high density excess clumps. On the other hand, LAEs and LBGs are very sparse in the northern half of the QSO field, both showing number densities equivalent to the mean to mean$-1\sigma$ densities of LAEs and LBGs in the Control Field.
The QSO and its proximity region (projected circular region around the QSO equivalent to the size of the NB921 filter's FWHM) could be part of the large scale LBG structure but are located near its edge and not exactly at the highest density peaks. In this proximity region of the QSO, LAEs show lower number densities while LBGs exhibit average to $4\sigma$ excess number densities compared to the Control Field. Thus, the QSO may be part of a moderate galaxy overdensity at most. If this environment reflects a halo mass, the QSO may be in a moderately massive halo, not the most massive one.
The number counts of LAEs in the NB921 total magnitude and the Ly$\alpha$ LF of LAEs in the QSO field are consistent with those in the Control Field within statistical errors and cosmic variance at the faintest (NB921 $=$ 25.5--26.0 and $\log L({\rm Ly}\alpha)$/[erg s$^{-1}$] $=$ 42.4--42.6) and bright (NB921 $=$ 23.0--24.5 and $\log L({\rm Ly}\alpha)$/[erg s$^{-1}$] $=$ 43.2--43.6) magnitudes and Ly$\alpha$ luminosities. However, the number counts and the Ly$\alpha$ LF of LAEs are lower in the QSO field than the Control Field at the intermediate (NB921 $=$ 24.5--25.5 and $\log L({\rm Ly}\alpha)$/[erg s$^{-1}$] $=$ 42.6--43.2) magnitudes and Ly$\alpha$ luminosities. This is consistent with the fact that the sky distribution and the number density contours of LAEs in the QSO field exhibit only those equivalent to the mean to mean$-1\sigma$ densities of LAEs in the Control Field.
Meanwhile, the number counts of LBGs in the $z'$ band total magnitude show a clear excess of the faintest ($z'=25.8$--26.0) LBGs in the QSO field over the Control Field. At brighter magnitudes ($z'=25.0$--25.8), the number counts are consistent between the QSO and Control fields within statistical errors and cosmic variance. However, though consistent within the uncertainties, there is a sign that the number counts of LBGs in the QSO field tend to be higher than those in the Control Field even at intermediate to faint magnitudes ($z'=25.4$--25.8). This suggests that the high density clumps seen in the large scale structure of LBGs in the QSO field comprise mainly relatively fainter LBGs. We confirm this trend in the sky distribution and the number density contours of LBGs in the QSO field, where LBGs are plotted with symbols whose sizes are proportional to the brightness ($z'$ band magnitudes) of the LBGs.
Moreover, the ACFs indicate that in the QSO field the LAEs cluster over a wide range of angular scales of $\sim 8$--20 comoving Mpc while the LBGs cluster on small angular scales of $\sim 4$--8 comoving Mpc. The highest LBG density clump located in the west of the LBG large scale structure includes a bright (NB921$_{\rm total} =$ 23.78) and extended (diameter $\gtrsim 3''$ or 16 physical kpc) LAB candidate. As LABs are often found in/around overdense environments such as protoclusters, the highest density clump could be a protocluster at $z\sim 6.6$. This might support the validity of the highest density clump and the large scale LBG overdense structure it is associated with.
All of these phenomena observed in the QSO field are in stark contrast to the Control Field, where the number density distributions of LAEs and LBGs over the field are almost flat within the mean $\pm 1\sigma$ fluctuations, and both LAEs and LBGs exhibit no clustering signals in their ACFs. Hence, the QSO field is quite different from, and seems to be more biased than, a general blank field in terms of large scale galaxy spatial and density distributions and clustering.
We also investigate the possible effect of the QSO UV radiation on the formation of LAEs and LBGs. We find that star formation of the LAEs hosted by halos at the lowest mass end ($M_{\rm h}<1$--$3 \times 10^{10} M_{\odot}$) could be suppressed by the QSO UV radiation within $<$ 3 physical Mpc from the QSO (i.e., within the proximity region), but that of LAEs with higher halo masses and of LBGs would not be suppressed. This can explain the fact that there are far fewer LAE candidates than LBG candidates within the QSO proximity region. This may also be caused by the difference in the probed volume between LAEs and LBGs ($\Delta z \sim 0.1$ for LAEs and $\Delta z \sim 0.9$ for LBGs) as more galaxies are detected in a larger volume. However, the sparsity of LAE candidates in the QSO proximity region compared to the number of LAE candidates outside the QSO proximity region may imply that formation of some of the lower mass LAEs was possibly suppressed by the QSO radiation.
Our result presented in this paper is based on only one QSO. To see whether it represents a universal trend of environments of high redshift QSOs or whether any diversity exists, we need to observe more QSOs. Also, the redshift of the $z=6.61$ QSO J0305--3150 we observed is on the red side of the bandpass of the NB921 filter, where the sensitivity to LAEs is lower than nominal (see Figure \ref{NB921_LyaPeakDist}). Hence, we might have missed some fraction of LAEs around the $z=6.61$ QSO, especially those located on the far side of the QSO. This might result in our finding of average or low LAE density in the proximity of the QSO. Therefore, it is important to find $z\sim6.6$ QSOs whose redshifts fall on the blue side of the bandpass of the NB921 filter, where the sensitivity to LAEs reaches its peak. There are several ongoing high redshift ($z>6$) QSO searches exploiting wide area multiwavelength survey data. It is possible that those searches will find QSOs whose redshifts best match the bandpass of the NB921 filter. In fact, \citet{Venemans15} and \citet{Banados16} have already found such a QSO at $z=6.5412$, PSO J036.5078+03.0498, from the Pan-STARRS1 survey. Even though Subaru Suprime-Cam has been decommissioned recently, a similar NB921 filter is also available for its currently working successor Hyper Suprime-Cam, which has a seven times wider FoV \citep{Konno17,Ouchi17,Shibuya17a,Shibuya17b}. Hence, if appropriate QSOs are found, it is possible to investigate galaxy densities around $z>6$ QSOs more accurately over even much larger volumes. This will yield a more general picture of the variety of $z>6$ QSO environments.
\acknowledgments
We are grateful to the staff at the Subaru Telescope for their support during our observations. We thank Kazuhiro Shimasaku for his helpful comments. We thank Akie Ichikawa and Tomoe Takeuchi for helping us conduct our observations, Tomoki Morokuma and Masao Hayashi for providing us the $R_c$, $i'$, $z'$ and $J$ band images of the SDF, Masaru Ajiki for providing us the detailed information about selecting the $z=6.6$ LAEs in the SDF, and Tomotsugu Goto and Yousuke Utsumi for providing us the information about their narrowband observations and data analysis. We also thank our referee for carefully reading and examining the manuscript and providing very valuable comments and suggestions that helped us improve the paper significantly. This research has benefitted from the SpeX Prism Spectral Libraries, maintained by Adam Burgasser at http://pono.ucsd.edu/{\textasciitilde}adam/browndwarfs/spexprism. K.O. acknowledges the Kavli Institute Fellowship at the Kavli Institute for Cosmology in the University of Cambridge supported by the Kavli Foundation. B.P.V. and F.W. acknowledge funding through the ERC grant ``Cosmic Dawn''. R.O. received support from CNPq (400738/2014-7) and FAPERJ (E-26/202.876/2015). D.R. acknowledges support from the National Science Foundation under grant number AST-1614213 to Cornell University. The authors recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
\vspace{5mm}
\facility{Subaru (Suprime-Cam)}
\software{SDFRED2 \citep{Yagi02,Ouchi04}, IRAF (http://iraf.noao.edu/), SExtractor \citep{BA96}}
\section{INTRODUCTION} \label{sec:intro}
The controller design of safety critical dynamical systems, such as power systems, autonomous vehicles, industrial robots, and chemical reactors, requires simultaneous satisfaction of performance specifications and multiple safety constraints \cite{chesi2011domain, romdlony2016stabilization, chesi2008analysis}. Violation of safety constraints might result in system failures and injuries. The problem of safe stabilization, i.e., to stabilize the system while staying in a given safe set, poses a serious challenge to the controller design task.
The formal design for stabilization of nonlinear dynamical systems is oftentimes achieved using Control Lyapunov Functions (CLFs). Meanwhile, the safety of dynamical systems can be established with barrier certificates, which guarantee that the state of the system never enters specified unsafe regions \cite{prajna2007framework}. Barrier certificates are useful tools for safety verification in autonomous dynamical systems, see \cite{prajna2007framework, sloth2012compositional}, and references therein. In control dynamical systems, barrier certificates can provably enforce dynamical safety constraints in various applications, e.g., adaptive cruise control \cite{xu2016correctness}, bipedal walking \cite{hsu2015control}, and multi-agent robotics \cite{wang2016multiobj, wang2017multidrone}. It is important to note that safe stabilization is not guaranteed in the intersection of the Domain of Attraction (DoA) and the safe region: since the safety and stabilization objectives might be in conflict, a common control that satisfies both objectives does not necessarily exist \cite{romdlony2014uniting, xu2016control}.
In order to simultaneously achieve safety and stabilization of dynamical systems, a number of control design methods have been proposed in the literature to unite CLFs with barrier certificates. For example, a barrier function was explicitly incorporated into the design phase of the CLF \cite{tee2009barrier, romdlony2014uniting}, which results in a single feedback control law if a ``control Lyapunov barrier function'' inequality is satisfied. However, no feedback controller can be designed with this approach if the two objectives are in conflict. The condition for multiple barrier constraints to be compatible with each other was characterized in \cite{xu2016control, wang2016multiobj}. To deal with conflicting safety and stabilization objectives, an optimization based controller was developed in \cite{ames2014CBF} such that safety is strictly guaranteed while convergence to the goal is relaxed when a conflict occurs.
In contrast to the aforementioned methods, this paper deals with the conflict between the safety and stabilization objectives by finding a region of safe stabilization, which is both contractive to the equilibrium and safe with respect to state constraints. The region of safe stabilization is a subset of the intersection of the Domain of Attraction (DoA) and the safe region. Similar to the problem of estimating the DoA, it is usually not easy to obtain the exact region of safe stabilization for arbitrary dynamics. Thus, a good approximation algorithm to compute the region of safe stabilization is needed. For instance, safe stabilization funnels were designed to be sublevel sets of the Lyapunov function in \cite{majumdar2013control}. In this paper, we will present an approximation algorithm based on barrier certificates, which generates an estimate of the region that is strictly larger than the estimate based on Lyapunov sublevel set. In contrast to \cite{ames2014CBF,xu2016correctness}, no relaxation on the Lyapunov constraint is needed when it is united with the permissive barrier certificates, because the certificates and the Lyapunov constraint are always compatible by construction.
Estimating the region of safe stabilization is closely related to estimating the DoA of an equilibrium state, except for the extra consideration of safety constraints. Among the various DoA approximation methods proposed in the literature, methods using sublevel sets of Lyapunov-like functions, such as quadratic Lyapunov functions \cite{tibken00cdc} and rational polynomial Lyapunov functions \cite{chesi13auto}, have proven effective \cite{parrilo00cit}. Further improvements on the Lyapunov sublevel set based methods are developed in \cite{henrion2014tac,valmorbida14acc, han2016estimating} to reduce the conservativeness with invariant sets. In this paper, the set invariance property is established with barrier certificates, which are allowed to take arbitrary shapes rather than the sublevel set of the Lyapunov function. This method leads to a non-conservative estimate of the DoA.
The contribution of this paper is threefold. First, permissive barrier certificates that are guaranteed compatible with the Lyapunov function are synthesized to ensure simultaneous stabilization and safety enforcement of control dynamical systems. Second, iterative search algorithms to compute permissive barrier certified region of safe stabilization are developed based on sum-of-squares (SOS) programs. Third, barrier certificates are used to construct a non-conservative estimate of DoA by allowing the contractive region to take arbitrary shapes.
The rest of the paper is organized as follows. Preliminary results on barrier certificates are briefly revisited in Section \ref{sec:prelim}. Barrier certificates for DoA estimation and safe stabilization are the topics of Sections \ref{sec:auto} and \ref{sec:control}, respectively. Conclusions are discussed in Section \ref{sec:conclude}.
\section{Preliminaries: Barrier Certificates for Dynamical Systems}\label{sec:prelim}
Preliminary results on barrier certificates are revisited here to set the stage for DoA estimation and safe stabilization. More specifically, applications of barrier certificates in safety verification of autonomous systems and safe controller synthesis for control dynamical systems will be discussed.
\subsection{Barrier Certificates for Autonomous Dynamical Systems}
Using the invariant set principle, barrier certificates can certify that state trajectories starting from an initial set $\mathcal{X}_0$ do not enter an unsafe set $\mathcal{X}_u$. Consider an autonomous system
\begin{equation}\label{eqn:sysauto}
\dot{x} = f(x),
\end{equation}
where $x\in\mathcal{X}$, and $f$ is locally Lipschitz continuous. Both $\mathcal{X}_0$ and $\mathcal{X}_u$ are subsets of $\mathcal{X}$. The barrier certificate \cite{prajna2007framework}, $h(x):\mathbb{R}^n\to\mathbb{R}$, needs to satisfy
\begin{eqnarray}
h(x)\geq 0, &\forall x\in \mathcal{X}_0, \nonumber\\
h(x)< 0, &\forall x\in \mathcal{X}_u, \nonumber \\
\frac{\partial h(x)}{\partial x}f(x) \geq 0, & \forall x\in\mathcal{X}, \label{eqn:dh0}
\end{eqnarray}
so that the safety of the system is guaranteed.
The condition (\ref{eqn:dh0}) is often too restrictive, since it requires $h(x)$ to be non-decreasing along all system trajectories. A more permissive barrier certificate is presented in \cite{ames2014CBF,xu2016correctness}, where the condition (\ref{eqn:dh0}) is relaxed to
\begin{equation}
\frac{\partial h(x)}{\partial x}f(x) \geq -\kappa(h(x)), \forall x\in\mathcal{X},\label{eqn:dh1}
\end{equation}
where $\kappa:\mathbb{R}\to\mathbb{R}$ is an extended class-$\mathcal{K}$ function (strictly increasing and $\kappa(0)=0$). Let the certified safe area be defined as $\mathcal{C}=\{x\in\mathcal{X}~|~h(x)\geq0\}$. By allowing $h(x)$ to decrease in the interior of the safe set $\mathcal{C}$, at a rate bounded by $\kappa(h(x))$, this barrier certificate ensures the forward invariance of $\mathcal{C}$ in a non-conservative manner.
\begin{figure}[h]
\centering
\resizebox{3.2in}{!}{\includegraphics{Fig/compare_barrier_v4.eps}}
\caption{Comparison of two types of barrier certificates. The barrier certified safe region based on (\ref{eqn:dh1}) (area between the solid green lines) is significantly larger than the safe region based on (\ref{eqn:dh0}) (area between the dashed red lines). }
\label{fig:cpbarrier}
\end{figure}
The difference between these two types of barrier certificates can be illustrated with a simple example. Using the SOS technique described in \cite{prajna2007framework}, we can compute the certified safe regions for both barrier certificates.
Consider a 2D autonomous dynamical system,
\begin{equation*}
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix}
= \begin{bmatrix} x_2 \\ -x_1+\frac{1}{3}x_1^3-x_2 \end{bmatrix}.
\end{equation*}
The initial and unsafe sets are specified as $\mathcal{X}_0=\{x~|~0.25 - (x_1-1.5)^2 - (x_2+1)^2\geq 0\}$ and $\mathcal{X}_u = \{x~|~0.25 - (x_1+1.4)^2 - (x_2+1.6)^2\geq 0\}$, respectively. Both types of barrier certificates are illustrated in Fig. \ref{fig:cpbarrier}. The barrier certified safe region generated with (\ref{eqn:dh1}) is much larger than that generated with (\ref{eqn:dh0}), which means that (\ref{eqn:dh1}) allows for a significantly more permissive safety certificate than (\ref{eqn:dh0}).
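The SOS computation itself requires a dedicated solver, but the safety claim in this example can be sanity-checked by direct simulation. The following sketch (our own illustration, not part of the original SOS analysis) integrates trajectories started on the boundary of $\mathcal{X}_0$ and verifies that none of them enters $\mathcal{X}_u$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Right-hand side of the 2D example system:
#   x1' = x2,  x2' = -x1 + x1^3/3 - x2
def f(t, x):
    return [x[1], -x[0] + x[0]**3 / 3.0 - x[1]]

# Initial set X0: disk of radius 0.5 centered at (1.5, -1);
# unsafe set Xu: disk of radius 0.5 centered at (-1.4, -1.6).
x0_center = np.array([1.5, -1.0])
xu_center = np.array([-1.4, -1.6])
radius = 0.5

# Integrate from points on the boundary of X0 and record the closest
# approach of any trajectory to the center of Xu.
min_dist = np.inf
for theta in np.linspace(0.0, 2.0 * np.pi, 24, endpoint=False):
    start = x0_center + radius * np.array([np.cos(theta), np.sin(theta)])
    sol = solve_ivp(f, (0.0, 20.0), start, max_step=0.01)
    dist = np.linalg.norm(sol.y.T - xu_center, axis=1).min()
    min_dist = min(min_dist, dist)

# No sampled trajectory enters the unsafe disk, consistent with the
# existence of a barrier certificate separating X0 from Xu.
print(min_dist > radius)
```

Simulation of sampled trajectories is only a necessary check, not a proof; the barrier certificate is what certifies safety for all initial conditions in $\mathcal{X}_0$.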
\subsection{Barrier Certificates for Control Dynamical Systems}
Consider a control-affine dynamical system
\begin{equation} \label{eqn:sys2}
\dot{x} = f(x) + g(x)u,
\end{equation}
where $x\in \mathcal{X}$ and $u\in U$ are the state and control of the system, and $f$ and $g$ are both locally Lipschitz continuous. The safe set $\mathcal{C}=\{x\in\mathcal{X}~|~h(x)\geq0\}$ is defined as a superlevel set of a smooth function $h:\mathcal{X}\to \mathbb{R}$.
Barrier certificates can be designed to regulate the controller $u$, such that the safety constraint is never violated. The barrier certificate for a control system is designed with a control barrier function (CBF). The function $h(x)$ is a CBF if there exists an extended class-$\mathcal{K}$ function $\kappa$ such that
\begin{equation}
\sup_{u\in U} \left\{\frac{\partial h(x)}{\partial x}f(x) + \frac{\partial h(x)}{\partial x}g(x) u +\kappa(h(x))\right\}\geq 0, \forall x\in \mathcal{X}. \nonumber
\end{equation}
With $h(x)$, barrier certificates for (\ref{eqn:sys2}) are defined as
\begin{equation}
K(x) = \left\{u\in U~\middle|~\frac{\partial h(x)}{\partial x}f(x) + \frac{\partial h(x)}{\partial x}g(x) u +\kappa(h(x))\geq 0\right\}. \nonumber
\end{equation}
By constraining the controller $u$ in $K(x)$, the state trajectory will never leave the safe set $\mathcal{C}$ \cite{ames2014CBF,xu2016correctness}.
The stabilization task can be encoded into a control Lyapunov function (CLF) $V(x)$. Since a common control that satisfies both the CBF and the CLF does not necessarily exist, a typical way to unite the pre-designed CLF and CBF is to use a QP-based controller \cite{xu2016correctness,ames2014CBF,nguyen2016exponential}, i.e.,
\begin{equation}
\label{eqn:QPclfcbf}
\begin{aligned}
& u^* &= \:\: \underset{ u\in\mathbb{R}^{n}}{\text{argmin}}
&\:\:J( u) + k_\delta\delta^2\\
& \text{s.t.}
& \frac{\partial V(x)}{\partial x}g(x) u &\leq -\frac{\partial V(x)}{\partial x}f(x) + \delta, \\
&
& -\frac{\partial h(x)}{\partial x}g(x) u &\leq \frac{\partial h(x)}{\partial x}f(x) + \kappa(h(x)),
\end{aligned}
\end{equation}
where $\delta$ is a CLF relaxation factor, such that the non-negotiable safety constraint is always satisfied. However, simultaneous stabilization and safety enforcement are not guaranteed. In this paper, instead of relaxing the stabilization term, we will compute an estimate of the region of safe stabilization with permissive barrier certificates, such that both the stabilization and safety constraints are strictly respected.
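To make the trade-off in the QP above concrete, here is a minimal numerical sketch for a single integrator $\dot{x}=u$ (so $f=0$, $g=I$), with $V(x)=\|x\|^2$, $h(x)=\|x-c\|^2-r^2$, and $\kappa(s)=s$; the weight $k_\delta$ and the use of a generic SLSQP call rather than a dedicated QP solver are our illustrative choices, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import minimize

# Obstacle: disk of radius r centered at c; k_delta penalizes the
# CLF relaxation delta (illustrative value).
c, r, k_delta = np.array([0.0, 1.0]), 0.5, 100.0

def clf_cbf_qp(x):
    V, dV = x @ x, 2 * x            # CLF and its gradient
    h, dh = (x - c) @ (x - c) - r**2, 2 * (x - c)  # CBF and its gradient
    cost = lambda z: z[:2] @ z[:2] + k_delta * z[2]**2  # J(u) = |u|^2
    cons = [
        # relaxed CLF constraint:  dV . u <= -V + delta
        {'type': 'ineq', 'fun': lambda z: -dV @ z[:2] - V + z[2]},
        # hard CBF constraint:     dh . u >= -h   (kappa(s) = s)
        {'type': 'ineq', 'fun': lambda z: dh @ z[:2] + h},
    ]
    res = minimize(cost, np.zeros(3), constraints=cons, method='SLSQP')
    return res.x[:2], res.x[2]

# Far from the obstacle the CBF is inactive: u steers toward the origin
# and the relaxation delta stays near zero.
u_free, d_free = clf_cbf_qp(np.array([2.0, 0.0]))
# With the obstacle between the state and the origin the CBF becomes
# active: u is clipped for safety and delta grows.
u_clip, d_clip = clf_cbf_qp(np.array([0.0, 2.0]))
print(u_free, d_free)
print(u_clip, d_clip)
```

In the second query the hard CBF constraint clips $u$ and the slack $\delta$ becomes positive, i.e., safety is kept while stabilization is relaxed; this is exactly the behavior that the permissive barrier certificates of this paper avoid by construction.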
\section{DoA Estimation with Barrier Certificates for Autonomous Dynamical Systems} \label{sec:auto}
Computing estimates of the region of safe stabilization is closely related to computing estimates of the DoA, because both try to maximize the volume of the region of interest where certain matrix inequalities are satisfied. In this section, we will show that the DoA estimate derived with barrier certificates is strictly larger than the maximum contractive sublevel set of the Lyapunov function. An iterative optimization algorithm based on SOS programs is provided to numerically compute the most permissive barrier certificates for polynomial systems. Building upon the results developed in this section, permissive barrier certificates for safe stabilization will be presented in Section \ref{sec:control}.
\subsection{Expanding Estimate of DoA with Barrier Certificates}
Assume the system (\ref{eqn:sysauto}) is locally asymptotically stable at the origin. Let $\psi(t;x_0)$ denote the state trajectory of the system (\ref{eqn:sysauto}) starting from $x_0$. The DoA of the origin is defined as the set of all initial states which eventually converge to the origin as time goes to infinity,
$$\mathcal{D}=\{x_0\in\mathcal{X}~|~\lim_{t\to\infty}\psi(t;x_0) = 0\}.$$
A commonly used method to estimate the DoA is to compute the sublevel set of a given Lyapunov function $V(x)$. This Lyapunov function should be positive definite, and its derivative should be locally negative definite. Let $\mathcal{V}(c) = \{x\in\mathcal{X}~|~V(x)\leq c\}$ be a sublevel set of $V(x)$. The largest inner estimate of the DoA using the sublevel set of the Lyapunov function can be computed with
\begin{equation}
\label{eqn:vcmaxopt}
\begin{aligned}
& c^* = & \underset{ c\in\mathbb{R}}{\text{max}}
&\:\:c\\
& \text{s.t.}
& -\frac{\partial V(x)}{\partial x}f(x)& > 0,\quad \forall x\in\mathcal{V}(c)\setminus \{0\}.
\end{aligned}
\end{equation}
The estimate $\mathcal{V}(c^*)$ is straightforward to compute, but it is often conservative compared with invariant set based methods, because the shape of the estimate is restricted to a sublevel set of the given Lyapunov function.
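As a concrete (non-certified) counterpart to (\ref{eqn:vcmaxopt}), the sketch below bisects on $c$ and grid-checks the negativity of $\dot{V}$ on the sublevel set. The system and Lyapunov function are illustrative assumptions, and grid sampling can be optimistic compared with an SOS certificate, which verifies the condition everywhere rather than at sample points.

```python
import numpy as np

def f(x1, x2):
    # Assumed illustrative system: a damped oscillator with cubic stiffness
    return x2, -x1 - x2 - x1**3

def V(x1, x2):
    # Assumed quartic Lyapunov function
    return x1**2 + x1*x2 + x2**2 + x1**4 + x2**4

def Vdot(x1, x2):
    f1, f2 = f(x1, x2)
    return (2*x1 + x2 + 4*x1**3)*f1 + (x1 + 2*x2 + 4*x2**3)*f2

xs = np.linspace(-2.0, 2.0, 201)
X1, X2 = np.meshgrid(xs, xs)

def inside_ok(c):
    # Check Vdot < 0 on the sampled sublevel set, excluding the origin
    mask = (V(X1, X2) <= c) & (X1**2 + X2**2 > 1e-4)
    return bool(np.all(Vdot(X1, X2)[mask] < 0))

lo, hi = 0.0, 5.0          # bisect for the largest grid-feasible c
for _ in range(40):
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if inside_ok(mid) else (lo, mid)
print(round(lo, 2))        # sampling-based (optimistic) estimate of c
```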
Next, we will show that the estimate of DoA can be further expanded using barrier certificates and the given Lyapunov function. This is achieved by allowing the barrier certificates to take an arbitrary shape instead of the sublevel set of $V(x)$. The most permissive barrier certified region $\mathcal{C}=\{x\in\mathcal{X}~|~h(x)\geq 0\}$ can be computed as,
\begin{equation}
\label{eqn:bmaxopt}
\begin{aligned}
\quad \quad h^*(x) = \underset{ h(x)\in\mathcal{P}}{\text{argmax}} \quad
&\mu(\mathcal{C})\\
\text{s.t.}
-\frac{\partial V(x)}{\partial x}f(x)&> 0,\quad &\forall x\in\mathcal{C}\setminus \{0\},\\
\frac{\partial h(x)}{\partial x}f(x) &\geq - \kappa(h(x)),\quad &\forall x\in\mathcal{C},
\end{aligned}
\end{equation}
where $\mu(\mathcal{C})$ is the volume of $\mathcal{C}$. The largest estimate of the DoA with barrier certificates is achieved with $\mathcal{C}^*=\{x\in\mathcal{X}~|~h^*(x)\geq 0\}$. By maximizing the volume of the barrier certified region, $\mathcal{C}^*$ is guaranteed to be larger than $\mathcal{V}(c^*)$. This fact can be shown with the following lemma.
\vspace{0.1in}
\begin{lemma} \label{lm:bdoa}
Given an autonomous system (\ref{eqn:sysauto}) that is locally asymptotically stable at the origin, the estimate of the DoA with barrier certificates is no smaller than the estimate with the sublevel set of the Lyapunov function, i.e., $\mu(\mathcal{V}(c^*))\leq \mu(\mathcal{C}^*)$.
\end{lemma}
\begin{proof}
The largest inner estimate of DoA using the sublevel set of a given Lyapunov function is $\mathcal{V}(c^*) = \{x\in\mathcal{X}~|~V(x)\leq c^*\}$. A candidate barrier certificate can be designed as $\bar{h}(x) = c^*-V(x)$, and the corresponding certified safe region is $\bar{\mathcal{C}}=\{x\in\mathcal{X}~|~\bar{h}(x)\geq 0\}$. The time derivative of $\bar{h}(x)$ is
\begin{equation*}
\frac{\partial \bar{h}(x)}{\partial x}f(x) = -\frac{\partial V(x)}{\partial x}f(x), \quad \forall x\in\bar{\mathcal{C}},
\end{equation*}
which is always nonnegative within $\bar{\mathcal{C}}$. By definition, $\bar{h}(x)$ is also nonnegative in $\bar{\mathcal{C}}$, i.e.,
\begin{equation*}
\frac{\partial \bar{h}(x)}{\partial x}f(x) \geq 0 \geq -\kappa(\bar{h}(x)), \quad \forall x\in\bar{\mathcal{C}},
\end{equation*}
which means $\bar{h}(x)$ is a valid barrier certificate and a feasible solution to (\ref{eqn:bmaxopt}). But $\bar{h}(x)$ is not necessarily the optimal solution. So we have $\mu(\mathcal{V}(c^*)) = \mu(\bar{\mathcal{C}}) \leq \mu(\mathcal{C}^*)$.
\end{proof}
\vspace{0.1in}
\textit{Remark $1$}:~ With \textit{Lemma} \ref{lm:bdoa}, (\ref{eqn:vcmaxopt}) can be reformulated into an optimization problem similar to (\ref{eqn:bmaxopt}), i.e.,
\begin{equation*}
\label{eqn:vcopt2}
\begin{aligned}
& c^* = & \underset{ c\in\mathbb{R}}{\text{max}}
&\:\:c\\
& \text{s.t.}
& -\frac{\partial V(x)}{\partial x}f(x)& > 0,\, &\forall x\in\mathcal{V}(c)\setminus \{0\}, \\
&
& \frac{\partial (c-V(x))}{\partial x}f(x)& \geq -\kappa(c-V(x)),\, &\forall x\in\mathcal{V}(c).
\end{aligned}
\end{equation*}
We can see that (\ref{eqn:vcmaxopt}) also searches for a maximum barrier certificate, but the shape of the certified region is constrained to be a sublevel set of $V(x)$. Since no specific shape of the certified region is required, (\ref{eqn:bmaxopt}) is more permissive than (\ref{eqn:vcmaxopt}). In addition, $h(x)$ is allowed to decrease within the estimated DoA rather than being required to increase monotonically.
\vspace{0.1in}
The fact that $\mathcal{C}^*$ is an inner estimate of the DoA can be established with the following theorem.
\vspace{0.1in}
\begin{theorem} \label{thm:bdoa}
Given an autonomous dynamical system (\ref{eqn:sysauto}) that is locally asymptotically stable at the origin, the estimate of the DoA with barrier certificates, $\mathcal{C}^*$, is a subset of the true DoA $\mathcal{D}$. And $\mathcal{C}^*$ is guaranteed to be non-empty.
\end{theorem}
\begin{proof}
Given an arbitrary initial state $x_0\in\mathcal{C}^*$, the trajectory of the state $\psi(t; x_0), t\in[0,\infty),$ is guaranteed to be contained within $\mathcal{C}^*$, due to the forward invariance property of barrier certificates.
By the construction of $\mathcal{C}^*$ in (\ref{eqn:bmaxopt}), $\frac{\mathrm{d} V(\psi(t; x_0))}{\mathrm{d} t}$ is negative definite for $\psi(t; x_0)\in\mathcal{C}^*$. Therefore, $V(\psi(t; x_0))$ is strictly decreasing along the trajectory $\psi(t; x_0), t\in[0,\infty),$ except at $0_n$. Since $V(x_0)$ is bounded and $0_n$ is the only equilibrium point in $\mathcal{C}^*$, it follows that $\lim_{t\to\infty}\psi(t; x_0) = 0_n$. By the definition of the DoA, $x_0\in\mathcal{D}$ for any $x_0\in\mathcal{C}^*$, which means $\mathcal{C}^*\subseteq \mathcal{D}$.
It is shown in \cite{chesi2011domain} that $\mathcal{V}(c^*)$ is non-empty. From Lemma \ref{lm:bdoa}, $\mu(\mathcal{V}(c^*)) \leq \mu(\mathcal{C}^*)$, thus $\mathcal{C}^*$ is also non-empty.
\end{proof}
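The convergence mechanism in the proof (forward invariance plus a strictly decreasing $V$) can be illustrated numerically. The polynomial system below is an assumption for the sketch, and forward Euler is used only as a rough integrator.

```python
import numpy as np

# Assumed illustrative polynomial system (a damped oscillator with cubic
# stiffness) and an assumed quartic Lyapunov function
f = lambda x: np.array([x[1], -x[0] - x[1] - x[0]**3])
V = lambda x: x[0]**2 + x[0]*x[1] + x[1]**2 + x[0]**4 + x[1]**4

x, dt = np.array([0.5, -0.5]), 1e-3
traj_V = [V(x)]
for _ in range(20000):            # 20 seconds of simulated time
    x = x + dt*f(x)               # forward Euler step
    traj_V.append(V(x))

# V decreases along the trajectory and the state approaches the origin
print(traj_V[-1] < traj_V[0], np.linalg.norm(x) < 1e-2)  # True True
```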
\vspace{0.1in}
\subsection{Iterative Search of Permissive Barrier Certificates}
The optimization problem (\ref{eqn:bmaxopt}) is difficult to solve for general systems, since checking non-negativity is often computationally intractable \cite{papachristodoulou2002construction}. However, if non-negativity constraints are relaxed to SOS constraints, (\ref{eqn:bmaxopt}) can be converted to a numerically efficient convex optimization problem. To this end, we will restrict (\ref{eqn:sysauto}) to polynomial dynamical systems.
Let $\mathcal{P}$ be the set of polynomials in $x\in\mathbb{R}^n$. A polynomial $l(x)$ is nonnegative if $l(x)\geq 0, \forall x\in\mathbb{R}^n$, and can be written in Square Matrix Representation (SMR) \cite{chesi2011domain} as $Z^T(x)QZ(x)$, where $Z(x)$ is a vector of monomials and $Q\in\mathbb{R}^{k\times k}$ is a symmetric coefficient matrix. A polynomial $p(x)$ is an SOS polynomial if $p(x)=\sum_{i=1}^{m}p_i^2(x)$ for some $p_i(x)\in\mathcal{P}$; $\mathcal{P}^\text{SOS}$ denotes the set of SOS polynomials. When written in SMR form, an SOS polynomial $p(x)$ admits a positive semidefinite coefficient matrix $Q\succeq 0$. The trace and determinant of a square matrix $A\in\mathbb{R}^{n\times n}$ are denoted by $\text{trace}(A)$ and $\text{det}(A)$, respectively.
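A small numerical illustration of the SMR/SOS connection; the polynomial and monomial basis below are assumptions chosen for the example.

```python
import numpy as np

# SMR sketch with assumed monomial vector Z(x) = [x1, x2]:
# p(x) = x1^2 - 2*x1*x2 + 2*x2^2 = Z(x)^T Q Z(x) = (x1 - x2)^2 + x2^2
Q = np.array([[1.0, -1.0],
              [-1.0, 2.0]])

# p is SOS iff some SMR matrix Q is positive semidefinite;
# here we simply check this particular Q via its eigenvalues.
eigvals = np.linalg.eigvalsh(Q)
is_sos_certified = bool(np.all(eigvals >= -1e-12))
print(is_sos_certified)  # True: p admits an SOS decomposition
```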
Since the proposed method is an under-approximation method, we would like to maximize the volume of $\mathcal{C}$ so that the best estimate of the DoA can be achieved. However, the objective $\textrm{max}(\textrm{vol}(\mathcal{C}))$ is non-convex and usually cannot be described by an explicit mathematical expression. A typical workaround adopted in the literature is to use $\textrm{trace}(Q)$ as a surrogate for the volume, where $h(x)=Z(x)^TQZ(x)$. Following \cite{chesi2011domain}, we maximize $\textrm{trace}(Q)$ to obtain the largest $\mathcal{C}$.
To deal with nonnegativity constraints over semialgebraic sets, we will introduce the Positivstellensatz (P-satz).
\begin{lemma}(\cite{putinar1993positive})
For polynomials $a_1,\dots,a_m$, $b_1,\dots,b_l$ and $p$, define a set
\begin{equation*}
\begin{array}{rcl}
\mathcal{B}&=&\{x\in \mathbb{R}^n: a_i(x)=0,~\forall i=1,\dots,m, \\
&& b_j(x)\geq 0,~\forall j=1,\dots, l\}.
\end{array}
\end{equation*}
Let $\mathcal{B}$ be compact. The condition $p(x)> 0,\forall x \in \mathcal{B}$ holds if the following condition is satisfied:
\begin{equation*}
\left\{
\begin{array}{l}
\exists r_1,\dots,r_m \in \mathcal{P},~ s_1,\dots, s_l \in \mathcal{P}^{\text{SOS}},\\
p-\sum^{m}_{i=1}r_i a_i-\sum^{l}_{j=1}s_j b_j \in \mathcal{P}^{\text{SOS}}.
\end{array}
\right.
\end{equation*}
\label{l:psatz}
\end{lemma}
This lemma provides an important perspective: any polynomial $p(x)$ that is strictly positive on $\mathcal{B}$ lies in the cone generated by the $a_i$ and $b_j$. Using the P-satz and the SMR form of $h(x)$, (\ref{eqn:bmaxopt}) can be formulated as an SOS program,
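A toy instance of the certificate structure in the lemma; all choices below, including the multiplier $s$, are assumptions made for illustration.

```python
import numpy as np

# Toy P-satz certificate: show p(x) = x^2 + 1 > 0 on
# B = {x : b(x) = 1 - x^2 >= 0} via the SOS multiplier s(x) = 1/2.
s = 0.5
# p(x) - s*b(x) = (1 + s)*x^2 + (1 - s), with SMR basis Z(x) = [1, x]:
Q = np.array([[1.0 - s, 0.0],
              [0.0, 1.0 + s]])
certified = bool(np.all(np.linalg.eigvalsh(Q) > 0))
print(certified)  # True: p - s*b is SOS, hence p > 0 on B
```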
\begin{equation}
\label{eqn:boptsos0}
\begin{aligned}
&& \underset{ \substack{h(x)\in\mathcal{P},~L_1(x)\in\mathcal{P}^{\text{SOS}}\\ L_2(x)\in\mathcal{P}^{\text{SOS}}}}{\text{max}} \quad
\text{Trace}(Q)&\\
& \text{s.t.}
& -\frac{\partial V(x)}{\partial x}f(x) - L_1(x) h(x)&\in \mathcal{P}^{\text{SOS}},\\
&
& \frac{\partial h(x)}{\partial x}f(x) + \gamma h(x) - L_2(x) h(x)&\in \mathcal{P}^{\text{SOS}},
\end{aligned}
\end{equation}
where the linear function $\kappa(s)=\gamma s$ is adopted. The SOS program (\ref{eqn:boptsos0}) involves bilinear decision variables. It can be solved efficiently by splitting it into several smaller SOS programs, which leads to the following iterative search algorithm.
\vspace{0.1in}
\textit{Remark $2$}:~ Notice that (\ref{eqn:boptsos0}) requires an initial value of $h(x)$ to start with. From \textit{Lemma} \ref{lm:bdoa}, a good initial value can be picked as $\bar{h}(x)=c^*-V(x)$. The SOS program is then guaranteed to generate a barrier certificate no worse than $\bar{h}(x)$.
\vspace{0.1in}
\textbf{Algorithm 1}:

\textit{Step 1: Calculate an initial value for $h(x)$.}

Specify a Lyapunov function $V(x)$, and find $c^*$ using the bilinear search method, i.e.,
\begin{equation*}
\label{eqn:vcopt}
\begin{aligned}
& c^* = & \underset{ c\in\mathbb{R}, L(x)\in \mathcal{P}^\text{SOS}}{\text{max}}
&\:\:c\\
& \text{s.t.}
& -\frac{\partial V(x)}{\partial x}f(x) - L(x)(c-V(x)) & \in \mathcal{P}^{\text{SOS}}.
\end{aligned}
\end{equation*}
Set the initial value for $h(x)$ as $\bar{h}(x)=c^*-V(x)$.

\textit{Step 2: Fix $h(x)$, and search for $L_1(x)$ and $L_2(x)$.}

Using the $h(x)$ from the previous step, we search for the $L_1(x)$ and $L_2(x)$ that give the largest margin on the barrier constraint. This is achieved by solving
\begin{equation*}
\label{eqn:boptsos}
\begin{aligned}
&& \underset{ \substack{\epsilon\geq 0,~L_1(x)\in\mathcal{P}^{\text{SOS}}\\L_2(x)\in\mathcal{P}^{\text{SOS}}}}{\text{max}} \quad
\epsilon &\\
& \text{s.t.}
& -\frac{\partial V(x)}{\partial x}f(x) - L_1(x) h(x)&\in \mathcal{P}^{\text{SOS}},\\
&
& \frac{\partial h(x)}{\partial x}f(x) + \gamma h(x) - L_2(x) h(x) - \epsilon &\in \mathcal{P}^{\text{SOS}}.
\end{aligned}
\end{equation*}

\textit{Step 3: Fix $L_1(x)$ and $L_2(x)$, and search for $h(x)$.}

With the $L_1(x)$ and $L_2(x)$ from the previous step, we search for the most permissive barrier certificate. The barrier certificate is written in the SMR form $h(x)=Z(x)^TQZ(x)$, and is computed by maximizing the trace of $Q$,
\begin{equation*}
\begin{aligned}
&& \underset{ \substack{h(x)\in\mathcal{P}}}{\text{max}} \quad
\text{trace}(Q) &\\
& \text{s.t.}
& -\frac{\partial V(x)}{\partial x}f(x) - L_1(x) h(x)&\in \mathcal{P}^{\text{SOS}},\\
&
& \frac{\partial h(x)}{\partial x}f(x) + \gamma h(x) - L_2(x) h(x) &\in \mathcal{P}^{\text{SOS}}.
\end{aligned}
\end{equation*}
The search terminates when $\text{trace}(Q)$ stops increasing; otherwise, go back to \textit{Step 2}.
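The alternation in Steps 2 and 3 can be sketched as follows; the two solver hooks are hypothetical placeholders for calls to an SOS toolbox such as SOSTOOLS.

```python
# Skeleton of the alternating search in Algorithm 1. The two solver hooks
# are hypothetical placeholders; a real implementation would solve the SOS
# programs of Steps 2 and 3 with an SOS toolbox.
def iterative_search(h0, solve_multipliers, solve_certificate, tol=1e-4):
    """Alternate Step 2 / Step 3 until trace(Q) stops increasing."""
    h, prev_trace = h0, float("-inf")
    while True:
        L1, L2 = solve_multipliers(h)           # Step 2: fix h
        h, trace_Q = solve_certificate(L1, L2)  # Step 3: fix L1, L2
        if trace_Q <= prev_trace + tol:         # no improvement: stop
            return h
        prev_trace = trace_Q

# Dummy solvers mimicking a search whose trace(Q) saturates at 3.0
state = {"t": 0.0}
solve_m = lambda h: (None, None)
def solve_c(L1, L2):
    state["t"] = min(state["t"] + 1.0, 3.0)
    return state["t"], state["t"]
print(iterative_search(0.0, solve_m, solve_c))  # 3.0
```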
\vspace{0.1in}
\textit{Remark $3$}:~ In \textit{Step 2}, the common approach is to search only for feasible $L_1(x)$ and $L_2(x)$. However, many feasible choices of $L_1(x)$ and $L_2(x)$ exist. By maximizing the margin $\epsilon$ of the barrier constraint, better choices of $L_1(x)$ and $L_2(x)$ are obtained. This expands the feasible space of $h(x)$ for the optimization in \textit{Step 3}, which helps speed up the optimization procedure.
\vspace{0.1in}
\subsection{Simulation Results for Autonomous Dynamical Systems}
\textbf{Algorithm 1} is implemented on two examples of autonomous dynamical systems. In the simulations, the MATLAB toolboxes SeDuMi, SMRSOFT \cite{chesi2011domain}, SOSTOOLS \cite{prajna2002introducing}, and YALMIP \cite{lofberg2005yalmip} are used to solve the semidefinite and SOS programming problems.
\vspace{0.1in}
\textit{Example $1$}:~ Consider the two-dimensional autonomous system
\begin{equation*}
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix}
= \begin{bmatrix} x_2 \\ -x_1-x_2 -x_1^3 \end{bmatrix},
\end{equation*}
which has a locally stable equilibrium at the origin. A fourth-order Lyapunov function for this system can be picked as $V(x) = x_1^2+x_1x_2+x_2^2+x_1^4+x_2^4$. Using the sublevel set of $V(x)$, the largest estimate of the DoA is $$\mathcal{A}_1 = \{x\in\mathbb{R}^2~|~V(x)\leq 0.9759\}.$$ With the iterative search algorithm for barrier certificates, a larger estimate of the DoA can be obtained as
\begin{eqnarray*}
\mathcal{A}_2 = \{x\in\mathbb{R}^2~|~h(x) = 0.0428+0.0033x_1^2-0.1396x_1x_2\\+0.0206x_2^2 -0.0976x_1^4-0.0913x_2^4-0.0079x_1^3x_2\\+0.0061x_1x_2^3+0.0779x_1^2x_2^2 \geq 0\}.
\end{eqnarray*}
For comparison under the same conditions, the order of the barrier certificate is also restricted to fourth order. As illustrated in Fig. \ref{fig:doaeg1}, the barrier certificate expands the estimate of the DoA significantly.
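A quick spot check using the coefficients reported above confirms that $\mathcal{A}_2$ reaches states outside $\mathcal{A}_1$; the test point is an arbitrary choice.

```python
# Spot check with the reported coefficients: (1, -1) lies in A2 but not A1.
def V(x1, x2):
    return x1**2 + x1*x2 + x2**2 + x1**4 + x2**4

def h(x1, x2):
    return (0.0428 + 0.0033*x1**2 - 0.1396*x1*x2 + 0.0206*x2**2
            - 0.0976*x1**4 - 0.0913*x2**4 - 0.0079*x1**3*x2
            + 0.0061*x1*x2**3 + 0.0779*x1**2*x2**2)

print(V(1.0, -1.0) > 0.9759, h(1.0, -1.0) >= 0)  # True True
```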
\begin{figure}[h]
\centering
\resizebox{3.2in}{!}{\includegraphics{Fig/unboundedDA_Comp2.eps}}
\caption{Estimates of DoA for a two-dimensional autonomous dynamical system. The barrier certified DoA estimate (region enclosed by the dashed blue curve) is significantly larger than the Lyapunov sublevel set based DoA estimate (region enclosed by the solid green curve).}
\label{fig:doaeg1}
\end{figure}
Note that the Lyapunov function in this example is arbitrarily picked; one can also compute the maximal Lyapunov function \cite{chesi2011domain} and show that the barrier certified DoA is larger, as seen in \cite{wang2017safe}.
\vspace{0.1in}
\textit{Example $2$}:~ Consider the three-dimensional system
\begin{eqnarray*}
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} =
\begin{bmatrix}
-x_1+x_2x_3^2 \\ -x_2 \\ -x_3
\end{bmatrix},
\end{eqnarray*}
which has a locally stable equilibrium at the origin. A Lyapunov function for this system can be picked as $V(x) = x_1^2+x_2^2+x_3^2$. The largest estimate of DoA based on the sublevel set of Lyapunov function is $$\mathcal{A}_1 = \{x\in\mathbb{R}^3~|~V(x)\leq 8\}.$$ With barrier certificates, the largest estimate of the DoA is
\begin{eqnarray*}
\mathcal{A}_2 = \{x\in\mathbb{R}^3~|~h(x) = 7.9999-1.2828x_3^2-0.2850x_1^2 \\ -0.5652x_2^2-0.6685x_1x_2 \geq 0\}.
\end{eqnarray*}
The barrier certificate is restricted to the same order as $V(x)$. Both estimates of the DoA are illustrated in Fig.~\ref{fig:doa3Dauto}. Since both regions are ellipsoids, their volumes can be calculated analytically. With the barrier certificate, the volume of the estimated region is increased by $\frac{\mu(\mathcal{A}_2)-\mu(\mathcal{A}_1)}{\mu(\mathcal{A}_1)}=297.4\%$.
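The reported volume increase can be reproduced from the standard ellipsoid volume formula; the matrices below collect the quadratic coefficients of $V(x)$ and $-h(x)$ stated above.

```python
import numpy as np

# Ellipsoid volumes: {x : x^T A x <= c} has volume (4/3)*pi*c^{3/2}/sqrt(det A)
A1, c1 = np.eye(3), 8.0                    # A1 from V(x) <= 8
A2 = np.array([[0.2850, 0.6685/2, 0.0],    # A2 from the quadratic part of -h(x)
               [0.6685/2, 0.5652, 0.0],
               [0.0, 0.0, 1.2828]])
c2 = 7.9999

vol = lambda A, c: (4.0/3.0)*np.pi*c**1.5/np.sqrt(np.linalg.det(A))
increase = (vol(A2, c2) - vol(A1, c1))/vol(A1, c1)
print(round(100*increase, 1))  # close to the reported 297.4
```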
\begin{figure}[h]
\centering
\resizebox{3.2in}{!}{\includegraphics{Fig/fig3_v4.eps}}
\caption{Estimates of DoA for a three-dimensional autonomous dynamical system. The black and blue ellipsoids represent the largest estimate of DoA based on the Lyapunov function sublevel set and barrier certificates, respectively. }
\label{fig:doa3Dauto}
\end{figure}
From these two examples, we can see that the barrier certificate based method provides a more permissive estimate of the DoA than the Lyapunov sublevel set based method.
\section{Safe Stabilization of Control Dynamical Systems}\label{sec:control}
Permissive barrier certificates are developed in this section to maximize the estimated region of safe stabilization, where the system state is both stabilized and contained within the safe set. Based on the DoA estimation method for autonomous systems in Section \ref{sec:auto}, the safe stabilization of control dynamical systems is addressed.
We will consider the safe stabilization problem described by (\ref{eqn:QPclfcbf}) for a locally stabilizable control-affine dynamical system (\ref{eqn:sys2}). Note that the locally stabilizable assumption ensures that an invariant and compact set for initial DoA estimation exists. Instead of relaxing the stabilization term with $\delta$ to resolve conflicts, we will synthesize a permissive barrier certificate with the maximum volume possible that strictly respects both the stabilization and safety constraints. This permissive barrier certificate can be found using
\begin{equation}
\label{eqn:bumaxopt}
\begin{aligned}
\quad \quad h^*(x) = \underset{ h(x)\in\mathcal{P}, u(x)\in\mathcal{P}}{\text{argmax}} \quad
&\mu(\mathcal{C})\\
\text{s.t.} \hspace{0.2in}
-\frac{\partial V(x)}{\partial x}f(x) -\frac{\partial V(x)}{\partial x}g(x)u(x) &> 0, &\forall x\in\mathcal{C}\setminus \{0\},\\
\frac{\partial h(x)}{\partial x}f(x) +\frac{\partial h(x)}{\partial x}g(x) u(x) + \kappa(h(x))&\geq 0, &\forall x\in\mathcal{C},
\end{aligned}
\end{equation}
where $\mu(\mathcal{C})$ is the volume of the certified safe region $\mathcal{C}=\{x\in\mathcal{X}~|~h(x)\geq 0\}$. Note that (\ref{eqn:bumaxopt}) is a semi-infinite program that generates a feedback controller $u(x)$ for every $x\in\mathcal{C}$, while (\ref{eqn:QPclfcbf}) only produces a point-wise optimal controller.
To enforce the safety constraints, it is required that the barrier certified region is contained within the complement of the unsafe region, i.e., $\mathcal{C}\subseteq \mathcal{X}_u^c$. For generality, the unsafe region is encoded with multiple polynomial inequalities,
\begin{equation}\label{eqn:xu}
\mathcal{X}_u=\{x\in\mathcal{X}~|~q_i(x)< 0 ~\text{for some}~ i\in\mathcal{M}\},
\end{equation}
where $q_i(x)$ are polynomials, and $\mathcal{M}=\{1,2,...,M\}$ is the index set of all the safety constraints.
Similar to \textit{Lemma} \ref{lm:bdoa}, we can show that the region of safe stabilization estimated with barrier certificates is larger than the estimated region with Lyapunov sublevel set in \cite{majumdar2013control}.
\vspace{0.1in}
\begin{lemma} \label{lm:bdoau}
Given a dynamical control system (\ref{eqn:sys2}) that is locally stabilizable at the origin, the barrier certified estimate of the region of safe stabilization is no smaller than the estimate obtained with the sublevel set of the Lyapunov function, i.e., $\mu(\mathcal{V}(c^*))\leq \mu(\mathcal{C}^*)$.
\end{lemma}
\begin{proof}
The proof is similar to that of Lemma \ref{lm:bdoa}.
\end{proof}
\vspace{0.1in}
In order to maximize the volume of the safe operating region, the barrier certificate is rewritten into SMR form, i.e., $h(x)=Z(x)^TQZ(x)$. Using the Real P-satz, the optimization problem (\ref{eqn:bumaxopt}) is formulated into a SOS program,
\begin{equation}
\label{eqn:boptsosu}
\begin{aligned}
& & \underset{ \substack{h(x)\in\mathcal{P}, ~u(x)\in\mathcal{P}\\L_1(x)\in\mathcal{P}^{\text{SOS}},~ L_2(x)\in\mathcal{P}^{\text{SOS}}\\J_i(x)\in\mathcal{P}^{\text{SOS}}, i \in\mathcal{M}} }{\text{max}} \quad
\text{Trace}(Q)&\\
& \text{s.t.}
& -\frac{\partial V(x)}{\partial x}(f(x) + g(x)u(x)) - L_1(x) h(x)&\in \mathcal{P}^{\text{SOS}},\\
&
& \frac{\partial h(x)}{\partial x}(f(x) + g(x)u(x)) + \gamma h(x) - L_2(x) h(x)&\in \mathcal{P}^{\text{SOS}}, \\
&
& -h(x) + J_i(x)q_i(x) \in \mathcal{P}^{\text{SOS}}, \forall i &\in \mathcal{M}.
\end{aligned}
\end{equation}
The optimal barrier certificate obtained by solving the SOS program (\ref{eqn:boptsosu}) is denoted by $h^*(x)$. The corresponding controller is $u^*(x)$. The following theorem shows that guaranteed safe stabilization can be achieved within the barrier certified region $\mathcal{C}^*$.
\vspace{0.1in}
\begin{theorem}
Given a dynamical control system (\ref{eqn:sys2}) that is locally stabilizable at the origin, a Lyapunov function $V(x)$, an unsafe region $\mathcal{X}_u$ in (\ref{eqn:xu}), and the solution $h^*(x)$ to (\ref{eqn:boptsosu}), for any initial state $x_0$ in $\mathcal{C}^*=\{x\in\mathcal{X}~|~h^*(x)\geq 0\}$, there always exists a controller that drives the system to the origin without violating safety constraints.
\end{theorem}
\begin{proof}
Starting from any state $x_0\in\mathcal{C}^*$, the state trajectory of the system (\ref{eqn:sys2}) is denoted by $\psi(t; x_0)$ when the controller $u^*(x)$ from (\ref{eqn:boptsosu}) is applied.
By Real P-satz, the second constraint in (\ref{eqn:boptsosu}) implies that the barrier constraint in (\ref{eqn:bumaxopt}) is always satisfied, which ensures that the state trajectory $\psi(t; x_0)$ is always contained in $\mathcal{C}^*$.
Similarly, the first constraint in (\ref{eqn:boptsosu}) implies that $\frac{\mathrm d V(\psi(t; x_0))}{\mathrm d t}$ is always negative in $\mathcal{C}^*$ except at the origin. Thus $\lim_{t\to \infty}\psi(t; x_0)=0$.
The third constraint in (\ref{eqn:boptsosu}) ensures that ``if $-q_i(x)>0$, then $-h(x)>0$''. Considering the contrapositive of this statement, we have ``if $h(x)\geq 0$, then $q_i(x)\geq 0$''. This statement holds for any state $x\in\mathcal{C}^*$ and any safety constraint $i\in\mathcal{M}$, which means $\mathcal{C}^*\subseteq \mathcal{X}_u^c$. Because $\psi(t; x_0)$ is contained in $\mathcal{C}^*$, $\psi(t; x_0)$ is also contained in the safe space $\mathcal{X}_u^c$.
Combining these statements above, the controller $u^*(x)$ from (\ref{eqn:boptsosu}) will drive any state in $\mathcal{C}^*$ to the origin without violating any safety constraint.
\end{proof}
\vspace{0.1in}
\textit{Remark $4$}:~ With the generated permissive barrier certificates, it is guaranteed by construction that the QP-based controller (\ref{eqn:QPclfcbf}) is always feasible when $\delta$ is set to zero. This is because $u^*(x)$ is always a feasible solution for any $x\in\mathcal{C}^*$. The advantage of using a QP-based controller (\ref{eqn:QPclfcbf}) instead of $u^*(x)$ is that it minimizes the control effort by leveraging the part of nonlinear dynamics that contributes to stabilization.
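For a scalar input, the QP in (\ref{eqn:QPclfcbf}) with $\delta=0$ reduces to projecting $0$ onto an interval, which makes the feasibility logic of the remark easy to see. The sketch below is illustrative, with hypothetical constraint data.

```python
# Minimal sketch for a scalar input u with delta = 0: the QP
#   min u^2  s.t.  a_i * u <= b_i
# reduces to projecting 0 onto the feasible interval for u.
# (Constraint data below are hypothetical placeholders.)
def scalar_clf_cbf_qp(constraints):
    """constraints: list of (a, b) pairs encoding a*u <= b."""
    lo, hi = -float("inf"), float("inf")
    for a, b in constraints:
        if a > 0:
            hi = min(hi, b/a)
        elif a < 0:
            lo = max(lo, b/a)
        elif b < 0:              # 0*u <= b with b < 0 is infeasible
            return None
    if lo > hi:
        return None              # stabilization and safety conflict
    return min(max(0.0, lo), hi) # feasible u closest to 0

# CLF constraint 2u <= -1 together with CBF constraint -u <= 3:
print(scalar_clf_cbf_qp([(2.0, -1.0), (-1.0, 3.0)]))  # -0.5
```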
\vspace{0.1in}
The optimization problem (\ref{eqn:boptsosu}) contains bilinear decision variables and requires a feasible initial barrier certificate. It can be split into several SOS programs and solved with the following iterative search algorithm.
\vspace{0.1in}
\textbf{Algorithm 2}:

\textit{Step 1: Calculate an initial guess for $h(x)$.}

Specify a Lyapunov function $V(x)$, and find $c^*$ using bilinear search
\begin{equation*}
\begin{aligned}
& c^* = & \underset{\substack{ c\in\mathbb{R}^+, ~u(x)\in \mathcal{P},~ L(x)\in\mathcal{P}^\text{SOS} \\ J_i(x)\in\mathcal{P}^\text{SOS}, ~i\in\mathcal{M} } }{\text{max}}
&\:\:c\\
& \text{s.t.}
& -\frac{\partial V(x)}{\partial x}(f(x)+g(x)u(x)) - L(x)(c-V(x)) & \in \mathcal{P}^{\text{SOS}}, \\
&
& -(c-V(x)) +J_i(x)q_i(x) \in \mathcal{P}^{\text{SOS}}, i &\in \mathcal{M}.
\end{aligned}
\end{equation*}
With the result of the bilinear search, set the initial guess for the barrier certificate as $\bar{h}(x)=c^*-V(x)$.

\textit{Step 2: Fix $h(x)$, and search for $u(x)$, $L_1(x)$, and $L_2(x)$.}

Using the $h(x)$ from the previous step, we search for feasible $u(x)$, $L_1(x)$, and $L_2(x)$ while maximizing the barrier constraint margin $\epsilon$:
\begin{equation*}
\begin{aligned}
&& \underset{ \substack{\epsilon\geq 0,~u(x)\in\mathcal{P}\\L_1(x)\in\mathcal{P}^{\text{SOS}},~L_2(x)\in\mathcal{P}^{\text{SOS}}}}{\text{max}} \quad
\epsilon &\\
& \text{s.t.}
& \hspace{-0.2in} -\frac{\partial V(x)}{\partial x}(f(x)+g(x)u(x)) - L_1(x) h(x)&\in \mathcal{P}^{\text{SOS}},\\
&
& \hspace{-0.2in} \frac{\partial h(x)}{\partial x}(f(x)+g(x)u(x)) + \gamma h(x) - L_2(x) h(x) - \epsilon &\in \mathcal{P}^{\text{SOS}}.
\end{aligned}
\end{equation*}

\textit{Step 3: Fix $u(x)$, $L_1(x)$, and $L_2(x)$, and search for $h(x)$.}

Rewrite the barrier certificate into SMR form $h(x)=Z(x)^TQZ(x)$. With the $u(x)$, $L_1(x)$, and $L_2(x)$ from the previous step, we can search for the maximum volume barrier certificate that respects all the safety constraints,
\begin{equation*}
\begin{aligned}
&& \underset{ \substack{h(x)\in\mathcal{P} \\ J_i(x)\in\mathcal{P}^\text{SOS}, ~i\in\mathcal{M} }}{\text{max}} \quad
\text{trace}(Q) &\\
& \text{s.t.}
& -\frac{\partial V(x)}{\partial x}(f(x)+g(x)u(x)) - L_1(x) h(x)&\in \mathcal{P}^{\text{SOS}},\\
&
& \frac{\partial h(x)}{\partial x}(f(x)+g(x)u(x)) + \gamma h(x) - L_2(x) h(x) &\in \mathcal{P}^{\text{SOS}}, \\
&
& -h(x) +J_i(x)q_i(x) \in \mathcal{P}^{\text{SOS}}, i &\in \mathcal{M}.
\end{aligned}
\end{equation*}
Terminate when $\text{trace}(Q)$ stops increasing; otherwise, go back to \textit{Step 2}.
\vspace{0.1in}
\textit{Remark $5$}:~ In \textit{Step 2}, the safety constraints $q_i(x)\geq 0, i\in\mathcal{M}$, do not need to be included, because the $h(x)$ from the previous step already satisfies them.
\vspace{0.1in}
\textit{Remark $6$}:~ To avoid unbounded control inputs, an additional constraint can be added to limit the magnitude of the coefficients of the polynomial controller $u(x)$.
\vspace{0.1in}
This iterative search algorithm is implemented on two control dynamical systems to achieve safe stabilization.
\textit{Example $3$}:~ Consider the simple two-dimensional mechanical dynamical system,
\begin{equation}\label{eqn:egu1}
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix}
x_2 \\ -x_1\end{bmatrix} + \begin{bmatrix} 0 \\ 1\end{bmatrix}u,
\end{equation}
where $x=[x_1, x_2]^T\in \mathbb{R}^2$ and $u\in\mathbb{R}$ are the state and control of the system. A Lyapunov function $V(x) = x_1^2+x_1x_2+x_2^2$ can be picked for the system.
The unsafe area is encoded with polynomial inequalities, $\mathcal{X}_u=\{x\in\mathbb{R}^2~|~q_i(x)< 0, i=1,2,3\}$, where
\begin{eqnarray*}
q_1(x) = (x_1-3)^2+(x_2-1)^2-1 < 0, \\
q_2(x) = (x_1+3)^2+(x_2+4)^2-1 < 0, \\
q_3(x) = (x_1+4)^2+(x_2-5)^2-1 < 0.
\end{eqnarray*}
The largest estimate of the region of safe stabilization with the sublevel set of $V(x)$ is $$\mathcal{A}_1 = \{x\in\mathbb{R}^2~|~V(x)\leq 5.8628\}.$$ With the barrier certificate, this estimate can be enlarged to
\begin{eqnarray*}
\mathcal{A}_2 =\{x\in\mathbb{R}^2~|~h(x)=0.5189-0.0669x_1-0.1196x_2\\ -0.0546x_1^2-0.0630x_1x_2-0.0294x_2^2\geq 0\}.
\end{eqnarray*}
For comparison purposes, the barrier certificate is restricted to be a second-order polynomial. These estimates are illustrated in Fig. \ref{fig:doau1}. By allowing the barrier certificate not to be centered at the equilibrium, the estimate of the region of safe stabilization is expanded significantly.
\begin{figure}[h]
\centering
\resizebox{3.2in}{!}{\includegraphics{Fig/fig5_CLFCBF2.eps}}
\caption{Region of safe stabilization estimates for system (\ref{eqn:egu1}). The red circles represent unsafe regions. The magenta vector field represents the system dynamics when $u^*(x)$ is applied. The barrier certified region of safe stabilization (dashed blue ellipse) is significantly larger than the estimated region (solid green ellipse) with Lyapunov sublevel set based methods. }
\label{fig:doau1}
\end{figure}
\vspace{0.1in}
\textit{Example $4$}:~ Consider the three-dimensional system with multiple inputs,
\begin{equation}\label{eqn:egu2}
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix}
= \begin{bmatrix} x_2-x_3^2 \\ x_3-x_1^2+u_1 \\ -x_1 -2x_2 -x_3+x_2^3+u_2 \end{bmatrix},
\end{equation}
where $x=[x_1, x_2, x_3]^T\in \mathbb{R}^3$ and $u=[u_1, u_2]^T\in\mathbb{R}^2$ are the state and control of the system.
\begin{figure}[h]
\centering
\resizebox{3.2in}{!}{\includegraphics{Fig/fig6_3D_v2.eps}}
\caption{Region of safe stabilization estimates for system (\ref{eqn:egu2}). The red spheres represent unsafe regions. The barrier certified region of safe stabilization (blue ellipsoid) is significantly larger than the region (black ellipsoid) obtained with Lyapunov sublevel sets.}
\label{fig:doau2}
\end{figure}
A Lyapunov function for the system is picked to be $$V(x)=5x_1^2+10x_1x_2+2x_1x_3+10x_2^2+6x_2x_3+4x_3^2.$$ The unsafe region $\mathcal{X}_u=\{x\in\mathbb{R}^3~|~q_i(x) < 0, i=1,2,3,4\}$ is represented with polynomial inequalities
\begin{eqnarray*}
q_1(x) &=& (x_1-2)^2+(x_2-1)^2+(x_3-2)^2-1 < 0, \\
q_2(x) &=& (x_1+1)^2+(x_2+2)^2+(x_3+1)^2-1 < 0, \\
q_3(x) &=& (x_1+0)^2+(x_2-0)^2+(x_3-6)^2-9 < 0, \\
q_4(x) &=& (x_1+0)^2+(x_2+0)^2+(x_3+5)^2-9 < 0.
\end{eqnarray*}
The region of safe stabilization estimated with the Lyapunov sublevel set is $$\mathcal{A}_1 = \{x\in \mathbb{R}^3~|~V(x)\leq 13.0124\}.$$ Using the iterative search algorithm, the most permissive barrier certificate is
\begin{eqnarray*}
\mathcal{A}_2 = \{x\in\mathbb{R}^3~|~h(x)=114.3555+1.4686x_1+7.2121x_2\\ +19.8479x_3-24.5412x_3^2-14.7734x_1^2-26.0129x_1x_2\\ -15.5440x_1x_3-28.3492x_2^2-27.5651x_2x_3\geq 0\}.
\end{eqnarray*}
The results for region of safe stabilization estimates are shown in Fig. \ref{fig:doau2}. In both examples, the Lyapunov sublevel set search terminates as soon as the boundary of one safety constraint is reached, while the barrier certificate search terminates when all safety boundaries are touched. This also demonstrates the non-conservativeness of barrier certificates.
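As in Example 3, a spot check with the reported coefficients shows that $\mathcal{A}_2$ contains states outside $\mathcal{A}_1$; the test point is an arbitrary choice.

```python
# Spot check with the reported coefficients: (0, 0, 2) satisfies
# h(x) >= 0 (inside A2) but V(x) > 13.0124 (outside A1).
def V(x1, x2, x3):
    return (5*x1**2 + 10*x1*x2 + 2*x1*x3 + 10*x2**2
            + 6*x2*x3 + 4*x3**2)

def h(x1, x2, x3):
    return (114.3555 + 1.4686*x1 + 7.2121*x2 + 19.8479*x3
            - 24.5412*x3**2 - 14.7734*x1**2 - 26.0129*x1*x2
            - 15.5440*x1*x3 - 28.3492*x2**2 - 27.5651*x2*x3)

print(h(0, 0, 2) >= 0, V(0, 0, 2) > 13.0124)  # True True
```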
\section{Conclusions}\label{sec:conclude}
A theoretical framework to generate permissive barrier certified regions of safe stabilization was developed in this paper to strictly ensure simultaneous stabilization and safety enforcement of dynamical systems. Iterative search algorithms using SOS programming techniques were designed to compute the most permissive barrier certificates. In addition, the proposed barrier certificate based method significantly expands the DoA estimate for both autonomous and control dynamical systems. The effectiveness of the iterative search algorithms was demonstrated with simulation results.
Iterative algorithms were developed in this paper to cope with the non-convexity of the barrier certified region maximization problems \eqref{eqn:boptsos0} and \eqref{eqn:boptsosu}. To obtain less conservative results, a promising direction is to synthesize convex finite-dimensional LMIs rather than a bilinear matrix inequality, using moment theory and the occupation measure \cite{henrion2014tac}, to which our future efforts will be devoted.
\addtolength{\textheight}{-12cm}
\bibliographystyle{abbrv}
\section{Introduction}
In this paper, we consider finite simple graphs. For graph-theoretical terminologies and notation not defined here, we follow \cite{Bondy}.
For a graph $G$, we use $\kappa'(G)$ to denote the $edge$-$connectivity$ of $G$. The $complement$ of a graph $G$ is denoted by $G^c$. For $X\subseteq E(G^c)$, $G+X$ is the graph with vertex set $V(G)$ and edge set $E(G)\cup X$. We will use $G+e$ for $G+\{e\}$. The $floor$ of a real number $x$, denoted by $\lfloor x\rfloor$, is the greatest integer not larger than $x$; the $ceiling$ of a real number $x$, denoted by $\lceil x\rceil$, is the least integer greater than or equal to $x$. For two integers $n$ and $k$, we define $(_k^n)=\frac{n!}{k!(n-k)!}$ when $k\leq n$ and $(_k^n)=0$ when $k>n$.
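The binomial convention above (zero when $k>n$) matches Python's \texttt{math.comb}, which gives a quick way to sanity-check computations in this notation:

```python
from math import comb, floor, ceil

# math.comb follows the same convention: (n choose k) = 0 when k > n
print(comb(5, 2), comb(2, 5))  # 10 0
print(floor(2.7), ceil(2.7))   # 2 3
```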
Given a graph $G$, Matula \cite{Matula} defined the $strength$ $\overline{\kappa}'(G)$ of $G$ as $max\{\kappa'(G'): G'\subseteq G\}$. For a positive integer $k$, the graph $G$ is $k$-$edge$-$maximal$ if $\overline{\kappa}'(G)\leq k$ but for any edge $e\in E(G^c)$, $\overline{\kappa}'(G+e)>k$. Mader \cite{Mader} and Lai \cite{Lai} proved the following results.
\begin{thm}
Let $k\geq1$ be an integer, and $G$ be a $k$-edge-maximal graph on $n>k+1$ vertices. Each of the following holds.
(i) (Mader \cite{Mader}) $|E(G)|\leq(n-k)k+(_2^k)$. Furthermore, this bound is best possible.
(ii) (Lai \cite{Lai}) $|E(G)|\geq (n-1)k-\lfloor\frac{n}{k+2}\rfloor(_2^k)$. Furthermore, this bound is best possible.
\end{thm}
In \cite{Anderson} and \cite{Lin}, $k$-edge-maximal digraphs are investigated, and upper and lower bounds on the sizes of $k$-edge-maximal digraphs are determined, respectively. Motivated by these results, we study $k$-edge-maximal hypergraphs in this paper.
Let $H=(V,E)$ be a hypergraph, where $V$ is a finite set and $E$ is a set of non-empty subsets of $V$, called edges. An edge of cardinality 2 is just a graph edge. For a vertex $u\in V$ and an edge $e\in E$, we say $u$ is $incident$ $with$ $e$ or $e$ is $incident$ $with$ $u$ if $u\in e$.
If all edges of $H$ have the same cardinality $r$, then $H$ is an $r$-$uniform$ $hypergraph$; if $E$ consists of all $r$-subsets of $V$, then $H$ is a $complete$ $r$-$uniform$ $hypergraph$, denoted by $K_n^r$, where $n=|V|$.
For $n<r$, the complete $r$-uniform hypergraph $K_n^r$ is just the hypergraph with $n$ vertices and no edges.
The $complement$ of an $r$-uniform hypergraph $H=(V,E)$, denoted by $H^c$, is the $r$-uniform hypergraph with vertex set $V$ and edge set consisting of all $r$-subsets of $V$ not in $E$. A hypergraph $H'=(V',E')$ is called a $subhypergraph$ of $H=(V,E)$, denoted by $H'\subseteq H$, if $V'\subseteq V$ and $E'\subseteq E$.
For $X\subseteq E(H^c)$, $H+X$ is the hypergraph with vertex set $V(H)$ and edge set $E(H)\cup X$; for $X'\subseteq E(H)$, $H-X'$ is the hypergraph with vertex set $V(H)$ and edge set $E(H)\setminus X'$. We use $H+e$ for $H+\{e\}$ and $H-e'$ for $H-\{e'\}$ when $e\in E(H^c)$ and $e'\in E(H)$.
For $Y\subseteq V(H)$, we use $H[Y]$ to denote the hypergraph $induced$ by $Y$, where $V(H[Y])=Y$ and $E(H[Y])=\{e\in E(H): e\subseteq Y\}$. $H-Y$ is the hypergraph induced by $V(H)\setminus Y$.
For a hypergraph $H=(V,E)$ and two disjoint vertex subsets $X, Y\subseteq V$, let $E_H[X,Y]$ be the set of edges intersecting both $X$ and $Y$ and $d_H(X,Y)=|E_H[X,Y]|$. We use $E_H(X)$ and $d_H(X)$ for $E_H[X,V\setminus X]$ and $d_H(X,V\setminus X)$, respectively. If $X=\{u\}$, we use $E_H(u)$ and $d_H(u)$ for $E_H(\{u\})$ and $d_H(\{u\})$, respectively. We call $d_H(u)$ the $degree$ of $u$ in $H$. The $minimum$ $degree$ $\delta(H)$ of $H$ is defined as $min\{d_H(u): u\in V\}$; the $maximum$ $degree$ $\Delta(H)$ of $H$ is defined as $max\{d_H(u): u\in V\}$. When $\delta(H)=\Delta(H)=k$, we call $H$ $k$-$regular$.
For a nonempty proper vertex subset $X$ of a hypergraph $H$, we call $E_H(X)$ an $edge$-$cut$ of $H$. The $edge$-$connectivity$ $\kappa'(H)$ of a hypergraph $H$ is $min\{d_H(X):\O\neq X\subsetneqq V(H)\}$. By definition, $\kappa'(H)\leq \delta(H)$. We call a hypergraph $H$ $k$-$edge$-$connected$ if $\kappa'(H)\geq k$. A hypergraph is connected if it is 1-edge-connected. A maximal connected subhypergraph of $H$ is called a $component$ of $H$. An $r$-uniform hypergraph $H=(V,E)$ is $k$-$edge$-$maximal$ if every subhypergraph of $H$ has edge-connectivity at most $k$, but for any edge $e\in E(H^c)$, $H+e$ contains at least one subhypergraph with edge-connectivity at least $k+1$. Since $\kappa'(K_n^r)=(_{r-1}^{n-1})$, we note that $H$ is a complete $r$-uniform hypergraph if $H$ is a $k$-edge-maximal $r$-uniform hypergraph such that $(_{r-1}^{n-1})\leq k$, where $n=|V(H)|$.
For results on the connectivity of hypergraphs, see [2,4,5] for references.
The main goal of this research is to determine, for given integers $n$, $k$ and $r$, the extremal sizes of a $k$-edge-maximal $r$-uniform hypergraph on $n$ vertices.
Section 2 below is devoted to the study of some properties of $k$-edge-maximal $r$-uniform hypergraphs. In Section 3, we give the upper bound of the sizes of $k$-edge-maximal $r$-uniform hypergraphs and characterize the $k$-edge-maximal $r$-uniform hypergraphs attaining this bound. We obtain the lower bound of the sizes of $k$-edge-maximal $r$-uniform hypergraphs and show that this bound is best possible in Section 4.
\section{Properties of $k$-edge-maximal $r$-uniform hypergraphs}
For a $1$-edge-maximal $r$-uniform hypergraph $H$ with $n=|V(H)|$, we can verify that $\lceil\frac{n-1}{r-1}\rceil\leq|E(H)|\leq n-r+1$. If $H$ is the hypergraph with vertex set $V(H)=\{v_1,\cdots,v_n\}$ and edge set $E(H)=\{e_1,\cdots,e_{n-r+1}\}$, where $e_i=\{v_1,\cdots,v_{r-1},v_{r-1+i}\}$
for $i=1,\cdots,n-r+1$, then $H$ is a $1$-edge-maximal $r$-uniform hypergraph with $|E(H)|=n-r+1$. If $H$ is the hypergraph with vertex set $V(H)=\{v_1,\cdots,v_n\}$ and edge set $E(H)=\{e_1,\cdots,e_{s}\}$, where $s=\lceil\frac{n-1}{r-1}\rceil$, $e_i=\{v_{(i-1)(r-1)+1},\cdots,v_{i(r-1)},v_{n}\}$
for $i=1,\cdots,s-1$ and $e_s=\{v_{n-r+1},\cdots,v_{n-1},v_{n}\}$, then $H$ is a $1$-edge-maximal $r$-uniform hypergraph with $|E(H)|=\lceil\frac{n-1}{r-1}\rceil$.
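The two extremal examples above are easy to generate explicitly. The following small script (ours, purely illustrative; the names `star_like` and `path_like` are not from the paper) sketches both constructions on vertex set $\{1,\dots,n\}$ and checks only the edge counts and vertex coverage, not $1$-edge-maximality itself.

```python
from math import ceil

def star_like(n, r):
    # e_i = {v_1, ..., v_{r-1}, v_{r-1+i}} for i = 1, ..., n-r+1
    core = tuple(range(1, r))
    return [core + (r - 1 + i,) for i in range(1, n - r + 2)]

def path_like(n, r):
    # s = ceil((n-1)/(r-1)) edges, all containing the vertex v_n
    s = ceil((n - 1) / (r - 1))
    edges = [tuple(range((i - 1) * (r - 1) + 1, i * (r - 1) + 1)) + (n,)
             for i in range(1, s)]
    edges.append(tuple(range(n - r + 1, n + 1)))  # e_s = {v_{n-r+1}, ..., v_n}
    return edges
```

For $n=7$, $r=3$ these produce $n-r+1=5$ and $\lceil\frac{n-1}{r-1}\rceil=3$ edges, respectively.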
Thus, from now on, we always assume $k\geq2$.
\noindent{\bf Definition 1.} For two integers $k$ and $r$ with $k,r\geq2$, define $t=t(k,r)$ to be the largest integer such that $(^{t-1}_{r-1})\leq k$. That is, $t$ is the integer satisfying $(^{t-1}_{r-1})\leq k<(^{t}_{r-1})$.
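Definition 1 determines $t=t(k,r)$ uniquely, since $(^{t}_{r-1})$ is strictly increasing in $t$ for $t\geq r-1$. As an illustration (outside the paper), one can compute $t$ as follows; the helper name `t_of` is ours, and Python's `math.comb` follows the same convention $(_k^n)=0$ when $k>n$.

```python
from math import comb

def t_of(k, r):
    """Largest t with C(t-1, r-1) <= k, i.e. C(t-1, r-1) <= k < C(t, r-1)."""
    t = r  # C(r-1, r-1) = 1 <= k, so t >= r for every k >= 1
    while comb(t, r - 1) <= k:
        t += 1
    return t
```

For graphs ($r=2$) this recovers $t=k+1$, as used in Corollary 3.3.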
\begin{lem}
Let $H=(V,E)$ be a $k$-edge-maximal $r$-uniform hypergraph on $n$ vertices, where $k,r\geq2$.
Assume $n\geq t$ when $(^{t-1}_{r-1})= k$ and $n\geq t+1$ when $(^{t-1}_{r-1})<k$, where $t=t(k,r)$. Then $\kappa'(H)=\overline{\kappa}'(H)=k$.
\end{lem}
\noindent{\bf Proof.} Since $H$ is $k$-edge-maximal, we have $\kappa'(H)\leq\overline{\kappa}'(H)\leq k$. In order to complete the proof, we only need to show that $\kappa'(H)\geq k$.
Let $X$ be a minimum edge-cut of $H$, and let $H_1$ be a component of $H-X$ with the minimum number of vertices and $H_2=H-V(H_1)$. Denote $n_1=|V(H_1)|$ and $n_2=|V(H_2)|$. Thus we have $X=E_{H}[V(H_1), V(H_2)]$, $n=n_1+n_2$ and $n_1\leq n_2$. To prove the lemma, we consider the following two cases.
\noindent{\bf Case 1.} $E_{H^c}[V(H_1), V(H_2)]\neq\O$.
Pick an edge $e\in E_{H^c}[V(H_1), V(H_2)]$. Since $H$ is $k$-edge-maximal, we have $\overline{\kappa}'(H+e)>k$. Let $H'\subseteq H+e$
be a subhypergraph such that $\kappa'(H')\geq k+1$. Since $\overline{\kappa}'(H)\leq k$, we have $e\in E(H')$. It follows that $(X\cup\{e\})\cap E(H')$ is an edge-cut of $H'$. Thus $|X|+1\geq |(X\cup\{e\})\cap E(H')|\geq \kappa'(H')\geq k+1$, implying $|X|\geq k$. Thus $\kappa'(H)\geq k$.
\noindent{\bf Case 2.} $E_{H^c}[V(H_1), V(H_2)]=\O$.
Since $E_{H^c}[V(H_1), V(H_2)]=\O$, we know that $E_{H}[V(H_1), V(H_2)]$ consists of all $r$-subsets of $V(H)$ intersecting both $V(H_1)$ and $V(H_2)$. Thus
$$|E_{H}[V(H_1),V(H_2)]|=\sum_{s=1}^{r-1}(_s^{n_1})(_{r-s}^{n_2})=
(_r^{n})-(_r^{n_1})-(_r^{n_2}). $$
Let $g(x)=(_r^{x})+(_r^{n-x})$. It is routine to verify that $g(x)$ is a decreasing function when $1\leq x\leq n/2$. If $n_1\geq2$, then as $H$ is connected we have $r\leq n_1\leq n/2$. Thus
$$\kappa'(H)=|E_{H}[V(H_1),V(H_2)]|=
(_r^{n})-(_r^{n_1})-(_r^{n_2})\geq (_r^{n})-(_r^{2})-(_r^{n-2})>(_{r-1}^{n-1})\geq \delta(H), \eqno(1) $$
which contradicts $\kappa'(H)\leq\delta(H)$. Thus, we assume $n_1=1$. Now we have
$$\kappa'(H)=|E_{H}[V(H_1),V(H_2)]|=
(_r^{n})-(_r^{n_1})-(_r^{n_2})= (_r^{n})-(_r^{1})-(_r^{n-1})=(_{r-1}^{n-1})\geq \delta(H), $$
which implies $\kappa'(H)=\delta(H)=(_{r-1}^{n-1})$ and so $H$ is a complete $r$-uniform hypergraph. Since $n\geq t$ when $(^{t-1}_{r-1})= k$ and $n\geq t+1$ when $(^{t-1}_{r-1})<k$, we have $\kappa'(H)=(_{r-1}^{n-1})\geq k$.
$\Box$
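The two counting facts used in Case 2, namely the identity for the number of $r$-subsets crossing a bipartition and inequality (1), can be sanity-checked numerically. The following sketch (ours, not part of the proof) does so for a few small parameter choices; `math.comb` returns $0$ when the lower index exceeds the upper, matching the paper's convention.

```python
from math import comb

def crossing(n1, n2, r):
    # Number of r-subsets of an (n1 + n2)-set meeting both sides of a bipartition
    return sum(comb(n1, s) * comb(n2, r - s) for s in range(1, r))

# Identity from Case 2: crossing(n1, n2, r) = C(n, r) - C(n1, r) - C(n2, r)
for n1, n2, r in [(3, 5, 3), (4, 4, 2), (2, 9, 4), (1, 6, 3)]:
    n = n1 + n2
    assert crossing(n1, n2, r) == comb(n, r) - comb(n1, r) - comb(n2, r)

# Inequality (1): C(n,r) - C(2,r) - C(n-2,r) > C(n-1, r-1) for 2 <= r <= n/2
for n, r in [(4, 2), (6, 2), (6, 3), (10, 4)]:
    assert comb(n, r) - comb(2, r) - comb(n - 2, r) > comb(n - 1, r - 1)
```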
\begin{lem}
Suppose that $H=(V,E)$ is a $k$-edge-maximal $r$-uniform hypergraph, where $k,r\geq2$. Let $X\subseteq E(H)$ be a minimum edge-cut of $H$ and let $H_1$ be a union of some but not all components of $H-X$. Then $H_1$ is a $k$-edge-maximal $r$-uniform hypergraph.
\end{lem}
\noindent{\bf Proof.} If $H_1$ is complete, then $H_1$ is $k$-edge-maximal by definition. Thus assume $H_1$ is not complete.
For any edge $e\in E(H_1^c)\subseteq E(H^c)$, $H+e$ has a subhypergraph $H'$ with $\kappa'(H')\geq k+1$. Since $X$ is a minimum edge-cut of $H$, we have $|X|=\kappa'(H)\leq \overline{\kappa}'(H)\leq k$. Thus $X\cap E(H')=\O$. As $e\in E(H')\cap E(H_1^c)$, we conclude that $H'$ is a subhypergraph of $H_1+e$, and so $\overline{\kappa}'(H_1+e)\geq k+1$. Since $\overline{\kappa}'(H_1)\leq\overline{\kappa}'(H)\leq k$, it follows that $H_1$ is a $k$-edge-maximal $r$-uniform hypergraph.
$\Box$
\begin{lem}
Let $H=(V,E)$ be a $k$-edge-maximal $r$-uniform hypergraph on $n$ vertices, where $k,r\geq2$. Assume $n\geq t$ when $(^{t-1}_{r-1})= k$ and $n\geq t+1$ when $(^{t-1}_{r-1})<k$, where $t=t(k,r)$. Let $X\subseteq E(H)$ be a minimum edge-cut of $H$ and let $H_1$ be a union of some but not all components of $H-X$. If $r\leq|V(H_1)|\leq n-2$, then $|V(H_1)|\geq t$. Moreover, if $H_1$ is complete, then $|V(H_1)|=t$; if $H_1$ is not complete, then $|V(H_1)|\geq t+1$.
\end{lem}
\noindent{\bf Proof.} By Lemmas 2.1 and 2.2, we have $|X|=\kappa'(H)=k$ and $H_1$ is a $k$-edge-maximal $r$-uniform hypergraph, respectively. If $H_1$ is not complete, then for any $e\in E(H_1^c)$ there is a subhypergraph $H_1'$ of $H_1+e$ such that $\kappa'(H_1')\geq k+1$. Since $(^{t-1}_{r-1})\leq k$ and $\delta(H_1')\geq \kappa'(H_1')\geq k+1$, we have
$|V(H_1)|\geq|V(H_1')|\geq t+1$.
Now we assume $H_1$ is a complete $r$-uniform hypergraph. Let $H_2=H-V(H_1)$. If $n_1=|V(H_1)|<t$, then, in order to ensure that each vertex in $H_1$ has degree at least $k$ in $H$ (because $\delta(H)\geq \kappa'(H)=k$), we must have $n_1=t-1$ and $k=(^{t-1}_{r-1})$. Moreover, each vertex in $H_1$ is incident with exactly $(^{t-2}_{r-2})$ edges in $E_H[V(H_1),V(H_2)]$, and thus $d_H(u)=k$ for each $u\in V(H_1)$. By (1), there is an edge $e$ intersecting both $V(H_1)$ and $V(H_2)$ with $e\notin X$. Since $n_1\geq r$, there is a vertex $w\in V(H_1)$ not incident with $e$. Then $d_{H+e}(w)=k$. This implies $w$ is not contained in any $(k+1)$-edge-connected subhypergraph of $H+e$. But then each vertex in $V(H_1)\setminus\{w\}$ has degree at most $k$ in $(H+e)-w$, and thus no vertex in $V(H_1)\setminus\{w\}$ is contained in a $(k+1)$-edge-connected subhypergraph of $H+e$. This shows that there is no $(k+1)$-edge-connected subhypergraph in $H+e$, a contradiction. Thus we have $n_1\geq t$. If $n_1> t$, then $\kappa'(H_1)=(^{n_1-1}_{r-1})\geq (^{t}_{r-1})>k$, contrary to the assumption that $H$ is $k$-edge-maximal. Therefore, $n_1\leq t$, and thus $n_1=t$ holds.
$\Box$
\section{The upper bound of the sizes of $k$-edge-maximal $r$-uniform hypergraphs}
\noindent{\bf Definition 2.} Let $n,k,r$ be integers such that $k,r\geq2$ and $n\geq t$, where $t=t(k,r)$. A hypergraph $H\in\mathcal{M}(n;k,r)$ if and only if it is constructed as follows:
($i$) Start from the complete hypergraph $H_0\cong K_t^r$;
($ii$) If $n-t=s=0$, then $H_s=H_0$. If $n-t=s\geq1$, then we construct, recursively, $H_i$ from $H_{i-1}$ by adding a new vertex $v_i$ and $k$ new edges containing $v_i$ and intersecting $V(H_{i-1})$ for $i=1,\cdots,s$;
($iii$) Set $H=H_s$.
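By Definition 2, every $H\in\mathcal{M}(n;k,r)$ has exactly $(^{t}_{r})+(n-t)k$ edges: $(^{t}_{r})$ from $K_t^r$ and $k$ for each of the $n-t$ added vertices. A one-line sketch of this count (the function name is ours):

```python
from math import comb

def size_of_M(n, k, r, t):
    """Edge count of any H in M(n; k, r) from Definition 2:
    C(t, r) edges of K_t^r, plus k new edges per added vertex."""
    return comb(t, r) + (n - t) * k
```

For graphs ($r=2$, $t=k+1$) this equals Mader's bound $(^{k}_{2})+(n-k)k$, as in Corollary 3.3.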
It is known that $\kappa'(H)\leq\delta(H)$ holds for any hypergraph $H$. If $\kappa'(H)=\delta(H)$, then we say $H$ is $maximal$-$edge$-$connected$.
An edge-cut $X$ of $H$ is $peripheral$ if there exists a vertex $v$ such that $X=E_H(v)$. A hypergraph $H$ is $super$-$edge$-$connected$ if every minimum edge-cut of $H$ is peripheral. By definition, every super-edge-connected hypergraph is maximal-edge-connected.
\begin{lem}
Let $k$ and $r$ be integers with $k,r\geq2$. If $n\geq t$ when $(^{t-1}_{r-1})= k$ and $n\geq t+1$ when $(^{t-1}_{r-1})<k$, where $t=t(k,r)$, then for any $H\in \mathcal{M}(n;k,r)$, we have
(i) $\delta(H)=k$;
(ii) $H$ is super-edge-connected; and
(iii) $H$ is $k$-edge-maximal.
\end{lem}
\noindent{\bf Proof.} Let $H=H_s$, where $H_s$ is recursively constructed from $H_0,\cdots,H_{s-1}$ as in Definition 2. Then $V(H_s)=V(H_0)\cup\{v_1,\cdots,v_s\}$. We will prove this lemma by induction on $n$.
($i$) If $n=t$ and $(^{t-1}_{r-1})= k$, then $H\cong K_t^r$ and $\delta(H)=(^{t-1}_{r-1})=k$. If $n=t+1$ and $(^{t-1}_{r-1})<k$, then $H$ is obtained from $K_t^r$ by adding a new vertex $v_1$ and $k$ edges with cardinality $r$ such that each added edge is incident with $v_1$. Let $k=(^{t-1}_{r-1})+i$. As $(^{t-1}_{r-1})<k<(^{t}_{r-1})$, we have $1\leq i\leq (^{t-1}_{r-2})-1$.
If there exists a vertex $u\in V(K_t^r)$ such that at most $i-1$ edges are incident with both $u$ and $v_1$ in $H$, then by $k=(^{t-1}_{r-1})+i$, we have $|E_H[\{v_1\}, V(H)\setminus\{u,v_1\}]|>(^{t-1}_{r-1})$. But this cannot happen because $|V(H)\setminus\{u,v_1\}|=t-1$. Thus for any vertex $u\in V(K_t^r)$, there are at least $i$ edges incident with both $u$ and $v_1$ in $H$. This implies $d_H(u)\geq (^{t-1}_{r-1})+i=k$ for any $u\in V(K_t^r)$. As $d_H(v_1)=k$, we have $\delta(H)=k$.
Now we assume $n\geq t+1$ when $(^{t-1}_{r-1})= k$ and $n\geq t+2$ when $(^{t-1}_{r-1})<k$. Since $H=H_s$ is obtained from $H_{s-1}$ by adding a new vertex $v_s$ and $k$ edges with cardinality $r$ such that each added edge is incident with $v_s$, then by the induction assumption that $\delta(H_{s-1})=k$, we obtain $\delta(H)=\delta(H_s)=k$.
($ii$) If $n=t$ and $(^{t-1}_{r-1})= k$, then $H\cong K_t^r$ and $|E_H[X,V(H)\setminus X]|>\delta(H)=k$ for any $X\subseteq V(H)$ with $2\leq|X|\leq n-2$ by (1). Thus $H$ is super-edge-connected.
If $n=t+1$ and $(^{t-1}_{r-1})<k$, then $H$ is obtained from $K_t^r$ by adding a new vertex $v_1$ and $k$ edges with cardinality $r$ such that each added edge is incident with $v_1$. Let $k=(^{t-1}_{r-1})+i$. As $(^{t-1}_{r-1})<k<(^{t}_{r-1})$, we have $1\leq i\leq (^{t-1}_{r-2})-1$.
In order to prove that $H$ is super-edge-connected, we only need to verify that $d_H(X)>k$ for any $X\subseteq V(H)\setminus\{v_1\}$ with $2\leq |X|\leq |V(H)|-2$. If $|X|\leq |V(H)|-3$, then $|E_{K_t^r}[X, V(K_t^r)\setminus X]|> (^{t-1}_{r-1})$ by (1). Since for any vertex $u\in V(K_t^r)$, there are at least $i$ edges incident with both $u$ and $v_1$ in $H$ (by the proof of $(i)$), we have $|E_{H}(X)\cap E_{H}(v_1)|\geq i$. Thus $d_H(X)=|E_{K_t^r}[X, V(K_t^r)\setminus X]|+|E_{H}(X)\cap E_{H}(v_1)|> (^{t-1}_{r-1})+i=k$. Assume $|X|= |V(H)|-2$ and $V(H)\setminus X=\{v_1,w\}$. If $r\geq3$, then $d_H(X)=|E_{K_t^r}[X, V(K_t^r)\setminus X]|+|E_{H}(X)\cap E_{H}(v_1)|=(^{t-1}_{r-1})+k>k$. If $r=2$, then $d_H(X)=|E_{K_t^r}[X, V(K_t^r)\setminus X]|+|E_{H}(X)\cap E_{H}(v_1)|\geq (^{t-1}_{r-1})+k-1>k$.
Now we assume $n\geq t+1$ when $(^{t-1}_{r-1})= k$ and $n\geq t+2$ when $(^{t-1}_{r-1})<k$. On the contrary, assume $H_s$ is not super-edge-connected. Then there is a minimum edge-cut $X=E_{H_s}[V(J_1), V(J_2)]$ of $H_s$ with $|X|\leq \delta(H_s)=k$, where $J_1$ is a component of $H_s-X$ and $J_2=H_s-V(J_1)$ with $min\{|V(J_1)|, |V(J_2)|\} \geq2$. Without loss of generality, assume $v_{s}\in V(J_1)$. If $E_{H_s}(v_s)\cap X\neq\O$, then as $X\neq E_{H_s}(v_s)$, $X-E_{H_s}(v_s)$ is an edge-cut of $H_{s-1}$, and so $\kappa'(H_{s-1})\leq |X-E_{H_s}(v_s)|< k$, contradicting the induction assumption that $H_{s-1}$ is super-edge-connected. It follows that $E_{H_s}(v_s)\cap X=\O$ and so $X=E_{H_{s-1}}[V(J_1-v_s), V(J_2)]$ is an edge-cut of $H_{s-1}$. Since $H_{s-1}$ is super-edge-connected, we conclude that either $|V(J_1-v_s)|=1$ or $|V(J_2)|=1$. If $|V(J_2)|=1$, then this contradicts $min\{|V(J_1)|, |V(J_2)|\} \geq2$. If $|V(J_1-v_s)|=1$, then $|V(J_1)|=2$, $r=2$ and $k=1$, contrary to $k\geq2$.
($iii$) If $n=t$ and $(^{t-1}_{r-1})= k$, then $H\cong K_t^r$ is $k$-edge-maximal by the definition.
If $n=t+1$ and $(^{t-1}_{r-1})<k$, let $k=(^{t-1}_{r-1})+i$. As $(^{t-1}_{r-1})<k<(^{t}_{r-1})$, we have $1\leq i\leq (^{t-1}_{r-2})-1$.
In order to prove that $H$ is $k$-edge-maximal, it suffices to verify that
$\overline{\kappa}'(H+e)\geq k+1$ for any $e\in E(H^c)$. By Definition 2, $H+e$ is obtained from $K_t^r$ by adding a new vertex $v_1$ and $k+1$ edges with cardinality $r$ such that each added edge is incident with $v_1$.
If there exists a vertex $u\in V(K_t^r)$ such that at most $i$ edges are incident with both $u$ and $v_1$ in $H+e$, then by $k=(^{t-1}_{r-1})+i$, we have $|E_{H+e}[\{v_1\}, V(H)\setminus\{u,v_1\}]|>(^{t-1}_{r-1})$. But this cannot happen because $|V(H+e)\setminus\{u,v_1\}|=t-1$. Thus for any vertex $u\in V(K_t^r)$, there are at least $i+1$ edges incident with both $u$ and $v_1$ in $H+e$. This implies $d_{H+e}(u)\geq (^{t-1}_{r-1})+i+1=k+1$ for any $u\in V(K_t^r)$. Since $d_{H+e}(v_1)=k+1$, we have $\delta(H+e)=k+1$.
For any edge-cut $W$ of $H+e$, if $W$ is peripheral, then $|W|\geq\delta(H+e)=k+1$. Suppose $W$ is not peripheral; then $W-e$ is a non-peripheral edge-cut of $H$. Since $H$ is super-edge-connected, $|W|\geq|W-e|\geq\delta(H)+1=k+1$. Thus $\overline{\kappa}'(H+e)\geq \kappa'(H+e)\geq k+1$.
Now we assume $n\geq t+1$ when $(^{t-1}_{r-1})= k$ and $n\geq t+2$ when $(^{t-1}_{r-1})<k$. On the contrary, assume $H_s$ is not $k$-edge-maximal. Then there is an edge $e\in E(H_s^c)$
such that $\overline{\kappa}'(H_s+e)\leq k$. If $e\in E(H_{s-1}^c)$, then by induction assumption, $\overline{\kappa}'(H_{s-1}+e)\geq k+1$, a contradiction. Hence $e\notin E(H_{s-1}^c)$. Since $H_s$ is obtained from $H_{s-1}$ by adding a new vertex $v_s$ and $k$ edges incident with $v_s$, we have $e\in E_{H_{s}+e}(v_s)$.
Let $Y=E_{H_{s}+e}[V(F_1),V(F_2)]$ be a minimum edge-cut of $H_{s}+e$ with $|Y|\leq k$, where $F_1$ is a component of $(H_{s}+e)-Y$ and $F_2=(H_{s}+e)-V(F_1)$. Since $H_s$ is super-edge-connected, we have $\kappa'(H_s)=\delta(H_s)=k$, and so $e\notin Y$ and $Y\neq E_{H_s}(v_s)$. This implies $Y\subseteq E(H_s)$. Without loss of generality, assume that $v_s\in V(F_1)$. Since $H_{s-1}$ is super-edge-connected, we have $\kappa'(H_{s-1})=\delta(H_{s-1})=k$. If $Y\cap E_{H_s}(v_s)\neq\O$, then as
$Y\neq E_{H_s}(v_s)$, $Y-E_{H_s}(v_s)$ is an edge-cut of $H_{s-1}$. It follows that $\kappa'(H_{s-1})\leq |Y-E_{H_s}(v_s)|<k=\kappa'(H_{s-1})$, a contradiction. Hence we must have $Y\cap E_{H_s}(v_s)=\O$, and so $Y\subseteq E(H_s)-E_{H_s}(v_s)=E(H_{s-1})$. Since $H_{s-1}$ is super-edge-connected, there exists a vertex $w\in V(H_{s-1})$ such that $Y=E_{H_{s-1}}(w)$. As $N_{H_s}(v_s)\cup\{v_s\}\subseteq V(F_1)$, we have $V(F_2)=\{w\}$.
Let $H'=H_s-w$. Then $e\in E((H')^c)$. If $w\in V(H_s)\setminus V(H_0)$, then $H'\in \mathcal{M}(n-1;k,r)$. If $w\in V(H_0)$, then by $d_{H_s}(w)=|Y|=k$, we have $d_{H_1}(w)=k$. By Definition 2,
there are exactly $k-(^{t-1}_{r-1})$ edges containing $\{w,v_1\}$ in $H_1$ and $|E_{H_1}[\{v_1\},V(H_0)\setminus \{w\}]|=(^{t-1}_{r-1})$.
Thus the hypergraph induced by $(V(H_0)\setminus\{w\})\cup\{v_1\}$ in $H_s$ is complete, and so $H'\in \mathcal{M}(n-1;k,r)$. By the induction assumption, $\overline{\kappa}'(H'+e)\geq k+1$, and so $\overline{\kappa}'(H_s+e)\geq\overline{\kappa}'(H'+e)\geq k+1$, contrary to $\overline{\kappa}'(H_s+e)\leq k$.
$\Box$
\begin{thm}
Let $H$ be a $k$-edge-maximal $r$-uniform hypergraph on $n$ vertices, where $k,r\geq2$. If $n\geq t$, where $t=t(k,r)$, then each of the following holds.
(i) $|E(H)|\leq (^{t}_{r})+(n-t)k$.
(ii) $|E(H)|=(^{t}_{r})+(n-t)k$
if and only if $H\in \mathcal{M}(n;k,r)$.
\end{thm}
\noindent{\bf Proof.} By Definition 2, we have $|E(H)|=(^{t}_{r})+(n-t)k$ if $H\in \mathcal{M}(n;k,r)$.
We will prove the theorem by induction on $n$. If $n=t$, then since $H$ is $k$-edge-maximal and $(^{t-1}_{r-1})\leq k$, we have $H\cong K_t^r$. Thus
$|E(H)|=(^{t}_{r})+(n-t)k$ and
$H\in \mathcal{M}(n;k,r)$.
Now suppose $n>t$.
We assume that if $t\leq n'<n$ and if $H'$ is a $k$-edge-maximal $r$-uniform hypergraph with $n'$ vertices, then $|E(H')|\leq (^{t}_{r})+(n'-t)k$ and $H'\in \mathcal{M}(n';k,r)$ if $|E(H')|=(^{t}_{r})+(n'-t)k$.
Let $X$ be a minimum edge-cut of $H$. By Lemma 2.1, we have $|X|=k$. We consider two cases in the following.
\noindent{\bf Case 1.} There is a component, say $H_1$, of $H-X$ such that $|V(H_1)|=1$.
Let $H_2=H-V(H_1)$. By Lemma 2.2, $H_2$ is $k$-edge-maximal. Since $|V(H_2)|=n-1\geq t$, by induction assumption, we have $|E(H_2)|\leq (^{t}_{r})+(n-1-t)k$ and $H_2\in \mathcal{M}(n-1;k,r)$ if $|E(H_2)|=(^{t}_{r})+(n-1-t)k$. Thus $|E(H)|=|E(H_2)|+k\leq (^{t}_{r})+(n-t)k$. If $|E(H)|=(^{t}_{r})+(n-t)k$, then $|E(H_2)|=(^{t}_{r})+(n-1-t)k$ and $H_2\in \mathcal{M}(n-1;k,r)$. Thus, by $|V(H_1)|=1$ and $|X|=k$, we have
$H\in \mathcal{M}(n;k,r)$ if $|E(H)|=(^{t}_{r})+(n-t)k$.
\noindent{\bf Case 2.} Each component of $H-X$ has at least two vertices.
Let $H_1$ be a component of $H-X$ and $H_2=H-V(H_1)$. By Lemma 2.2, both $H_1$
and $H_2$ are $k$-edge-maximal. Assume $n_1=|V(H_1)|$ and $n_2=|V(H_2)|$. Then $n_1+n_2=n$. Since each edge contains $r$ vertices, we have $n_1,n_2\geq r$. By Lemma 2.3, we have $n_1,n_2\geq t$. By induction assumption, we have $|E(H_i)|\leq (^{t}_{r})+(n_i-t)k$ and $H_i\in \mathcal{M}(n_i;k,r)$ if $|E(H_i)|=(^{t}_{r})+(n_i-t)k$ for $i\in \{1,2\}$. Thus
\begin{align*}
|E(H)|&=|E(H_1)|+|E(H_2)|+k\\
&\leq(^{t}_{r})+(n_1-t)k+(^{t}_{r})+(n_2-t)k+k\\
&=(^{t}_{r})+(n_1+n_2-t)k+(^{t}_{r})-(t-1)k\\
&\leq(^{t}_{r})+(n_1+n_2-t)k+(^{t}_{r})-(t-1)(^{t-1}_{r-1})\\
&=(^{t}_{r})+(n_1+n_2-t)k+\Big(\frac{t}{r}-(t-1)\Big)(^{t-1}_{r-1})\\
&\leq(^{t}_{r})+(n-t)k.
\end{align*}
If $|E(H)|=(^{t}_{r})+(n-t)k$, then $\frac{t}{r}-(t-1)=0$ and $k=(^{t-1}_{r-1})$, which imply $t=r=2$ and $k=1$, contrary to $k\geq2$.
Thus $|E(H)|<(^{t}_{r})+(n-t)k$ holds.
$\Box$
If $r=2$, then $H$ is a graph and $t=k+1$. Mader's \cite{Mader} result for the upper bound of the sizes of $k$-edge-maximal graphs is a corollary of Theorem 3.2.
\begin{cor} (Mader \cite{Mader})
Let $G$ be a $k$-edge-maximal graph with $n$ vertices, where $k\geq2$. If $n\geq k+1$, then we have $|E(G)|\leq (^{k+1}_{2})+(n-k-1)k=(^{k}_{2})+(n-k)k$. Furthermore, $|E(G)|=(^{k}_{2})+(n-k)k$
if and only if $G\in \mathcal{M}(n;k,2)$.
\end{cor}
\section{The Lower bound of the sizes of $k$-edge-maximal $r$-uniform hypergraphs}
\begin{thm}
Let $H$ be a $k$-edge-maximal $r$-uniform hypergraph with $n$ vertices, where $k,r\geq2$. If $n\geq t$, where $t=t(k,r)$, then we have $|E(H)|\geq (n-1)k -((t-1)k-(^{t}_{r}))\lfloor\frac{n}{t}\rfloor$.
\end{thm}
\noindent{\bf Proof.}
We will prove the theorem by induction on $n$. If $n=t$, then since $H$ is $k$-edge-maximal and $(^{t-1}_{r-1})\leq k$, we have $H\cong K_t^r$. Thus
$|E(H)|=(^{t}_{r})=(n-1)k -((t-1)k-(^{t}_{r}))\lfloor\frac{n}{t}\rfloor$.
Now suppose $n>t$.
We assume that if $t\leq n'<n$ and if $H'$ is a $k$-edge-maximal $r$-uniform hypergraph with $n'$ vertices, then $|E(H')|\geq (n'-1)k -((t-1)k-(^{t}_{r}))\lfloor\frac{n'}{t}\rfloor$.
Let $X$ be a minimum edge-cut of $H$. By Lemma 2.1, we have $|X|=k$. We consider two cases in the following.
\noindent{\bf Case 1.} There is a component, say $H_1$, of $H-X$ such that $|V(H_1)|=1$.
Let $H_2=H-V(H_1)$. By Lemma 2.2, $H_2$ is $k$-edge-maximal. Since $|V(H_2)|=n-1\geq t$, by induction assumption, we have
$|E(H_2)|\geq (n-2)k -((t-1)k-(^{t}_{r}))\lfloor\frac{n-1}{t}\rfloor$. Thus
\begin{align*}
|E(H)|&=|E(H_2)|+k\\
&\geq(n-1)k -((t-1)k-(^{t}_{r}))\lfloor\frac{n-1}{t}\rfloor\\
&\geq(n-1)k -((t-1)k-(^{t}_{r}))\lfloor\frac{n}{t}\rfloor,
\end{align*}
\noindent where the last inequality holds because $(t-1)k-(^{t}_{r})\geq (t-1)(^{t-1}_{r-1})-\frac{t}{r}(^{t-1}_{r-1})\geq0$.
\noindent{\bf Case 2.} Each component of $H-X$ has at least two vertices.
Let $H_1$ be a component of $H-X$ and $H_2=H-V(H_1)$. By Lemma 2.2, both $H_1$
and $H_2$ are $k$-edge-maximal. Assume $n_1=|V(H_1)|$ and $n_2=|V(H_2)|$. Then $n_1+n_2=n$. Since each edge contains $r$ vertices, we have $n_1,n_2\geq r$. By Lemma 2.3, we have $n_1,n_2\geq t$. By induction assumption, we have $|E(H_i)|\geq(n_i-1)k -((t-1)k-(^{t}_{r}))\lfloor\frac{n_i}{t}\rfloor$ for $i\in \{1,2\}$. Thus
\begin{align*}
|E(H)|&=|E(H_1)|+|E(H_2)|+k\\
&\geq(n_1-1)k -((t-1)k-(^{t}_{r}))\lfloor\frac{n_1}{t}\rfloor+(n_2-1)k -((t-1)k-(^{t}_{r}))\lfloor\frac{n_2}{t}\rfloor+k\\
&=(n-1)k-((t-1)k-(^{t}_{r}))\Big(\lfloor\frac{n_1}{t}\rfloor+
\lfloor\frac{n_2}{t}\rfloor\Big)\\
&\geq(n-1)k-((t-1)k-(^{t}_{r}))\lfloor\frac{n_1+n_2}{t}\rfloor\\
&=(n-1)k-((t-1)k-(^{t}_{r}))\lfloor\frac{n}{t}\rfloor.
\end{align*}
The theorem thus holds.
$\Box$
\noindent{\bf Definition 3.} Let $k, t, r$ be integers such that $t>r>2$, $k=(^{t-1}_{r-1})$ and $kr\geq 2t$. Assume $n=st$, where $s\geq2$. For any tree $T$ with $V(T)=\{v_1,\cdots,v_s\}$, we define a family of $r$-uniform hypergraphs
$\mathcal{N}(T)$ as follows. Firstly, we replace each $v_i$ by a complete $r$-uniform hypergraph $K_t^r(i)$ with $t$ vertices. Then whenever there is an edge $v_iv_j\in E(T)$, we add a set $E_{ij}$ of $k$ edges with cardinality $r$ such that ($i$) $e\subseteq V(K_t^r(i))\cup V(K_t^r(j))$, $e\cap V(K_t^r(i))\neq\O$ and $e\cap V(K_t^r(j))\neq\O$ for any $e\in E_{ij}$, and
($ii$) each vertex in $V(K_t^r(i))\cup V(K_t^r(j))$ is incident with some edge in $E_{ij}$ (we can do this because $kr\geq 2t$).
\begin{thm}
If $H\in \mathcal{N}(T)$, then $H$ is a $k$-edge-maximal $r$-uniform hypergraph.
\end{thm}
\noindent{\bf Proof.} By definition, $\overline{\kappa}'(H)\leq k$. We will prove the theorem by induction on $s$. If $s=2$, then $|V(H)|=2t$ and $\delta(H)\geq (^{t-1}_{r-1})+1=k+1$. Since $K_t^r(1)$ and $K_t^r(2)$ are super-edge-connected and $\delta(H)\geq (^{t-1}_{r-1})+1=k+1$, each edge-cut of $H$ except for $E_H[V(K_t^r(1)),V(K_t^r(2))]$ has cardinality at least $k+1$. For any $e\in E(H^c)$, we have $e\in E_{H^c}[V(K_t^r(1)),V(K_t^r(2))]$. Thus every edge-cut of $H+e$ has cardinality at least $k+1$, that is, $\kappa'(H+e)\geq k+1$. This shows $\overline{\kappa}'(H+e)\geq \kappa'(H+e)\geq k+1$, and thus $H$ is $k$-edge-maximal.
Now suppose $s\geq 3$. We assume that each hypergraph constructed as in Definition 3 with fewer than $st$ vertices is $k$-edge-maximal. In the following, we will show that each $H$ in $\mathcal{N}(T)$ with $st$ vertices is also $k$-edge-maximal.
By contradiction, assume that there is an edge $e\in E(H^c)$ such that $\overline{\kappa}'(H+e)\leq k$. Let $E_{H+e}[X,V(H)\setminus X]$ be an edge-cut in $H+e$ with cardinality at most $k$. Since $K_t^r(i)$ is super-edge-connected for $1\leq i\leq s$ and $\delta(H)\geq k+1$, edge-cuts in $H$ with cardinality at most $k$ are these $E_{ij}$, where $v_iv_j\in E(T)$.
Thus $E_{H+e}[X,V(H)\setminus X]=E_{ij}$ for some $1\leq i,j\leq s$ with $v_iv_j\in E(T)$. Then $e\subseteq V(H_i)$, where $H_i$ is a component of $H-E_{ij}$. Since $H_i\in \mathcal{N}(T_i)$, where $T_i$ is a component of $T-v_iv_j$, by the induction assumption, $H_i+e$ contains a subhypergraph $H'$ with $\kappa'(H')\geq k+1$. But $H'$ is also a subhypergraph of $H+e$, contrary to $\overline{\kappa}'(H+e)\leq k$.
$\Box$
For any $H\in \mathcal{N}(T)$, we have $|E(H)|=(n-1)k -((t-1)k-(^{t}_{r}))\lfloor\frac{n}{t}\rfloor$. By Theorem 4.2, $H$ is $k$-edge-maximal. Thus, the lower bound given in Theorem 4.1 is best possible.
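The equality $|E(H)|=(n-1)k -((t-1)k-(^{t}_{r}))\lfloor\frac{n}{t}\rfloor$ for $H\in\mathcal{N}(T)$ follows by counting $s$ copies of $K_t^r$ plus $k$ edges per tree edge, with $n=st$. A small numerical check (ours, outside the paper) with $t=5$, $r=3$, $k=(^{4}_{2})=6$, which satisfies $t>r>2$ and $kr\geq 2t$:

```python
from math import comb

def lower_bound(n, k, r, t):
    # Bound of Theorem 4.1
    return (n - 1) * k - ((t - 1) * k - comb(t, r)) * (n // t)

def size_of_N(s, k, r, t):
    # s blocks K_t^r plus k edges for each of the s - 1 tree edges
    return s * comb(t, r) + (s - 1) * k

t, r = 5, 3
k = comb(t - 1, r - 1)          # k = C(4, 2) = 6
for s in [2, 3, 7]:
    assert size_of_N(s, k, r, t) == lower_bound(s * t, k, r, t)
```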
\vspace{1cm}
\section{Introduction}
Let $X$ be any complex algebraic variety of dimension $d$ and let $0\in X$ be an isolated singular point.
A classical way to analyze the geometry of $X$ near its singular point is to consider its (archimedean) \emph{link}, which is defined by embedding the complex analytic germ $(X,0)$ in the germ of a complex affine space $(\bb{C}^n,0)$ and taking the intersection with the boundary of a small ball around the origin.
More precisely, if $z_1,...,z_n$ are coordinates for $\bb{C}^n$ at zero, the intersection of $X$ with any sphere centered at $0$ of small enough radius $\varepsilon>0$ is transversal, so that
$
\text{L}_\bb{C}^\varepsilon(X,0) = \big\{ x\in X(\bb{C}) \text{ s.t. } \sum_{i=1}^n|z_i(x)|_\bb{C}^2 = \varepsilon \big\}
$
is a smooth manifold of real dimension $2d-1$.
Its diffeomorphism type does not depend on the embedding nor on $\varepsilon$, provided that $\varepsilon$ is small enough, and we define the link of $(X,0)$ to be this diffeomorphism type.
Note that the topology of a neighborhood of $0$ in $X$ is completely determined by its link, since one can show that the intersection of $X$ with a small ball is homeomorphic to the cone over $\text{L}_\bb{C}^\varepsilon(X,0)$.
The complex structure on $X$ also induces a canonical contact structure on the link which has attracted a lot of attention recently, see for example \cite{CaubelNemethiPopescu-Pampu2006,McLean2015}.
When the algebraic variety $X$ is defined over an algebraically closed field $k$, then
a non-archimedean version of the link can be defined as follows.
Endow $k$ with the trivial absolute value $|\cdot|$, that is the one such that $|k^\times|=1$, and denote by $X^\mathrm{an}$ the non-archimedean analytic space associated with $X$, in the sense of Berkovich~\cite{berkovich:book}.
Then, the space $\NL^\varepsilon(X,0) = \big\{ x\in X^\mathrm{an} \text{ such that } \max_{i}|z_i(x)| = \varepsilon \big\}$, with the topology induced from the one of $X^\mathrm{an}$, does not depend on the embedding nor on $\varepsilon\in\left]0,1\right[$.
We will call it the \emph{non-archimedean link} of $(X,0)$ and we will simply denote it by $\NL(X,0)$.
Observe that, thanks to the non-archimedean triangular inequality, the equation $\max_{i}|z_i(x)| = \varepsilon$ defines the boundary of the ball of radius $\varepsilon$ in the non-archimedean analytification of $\bb{C}^n$, making the definition of the non-archimedean link completely analogous to the classical one.
Concretely, $\NL(X,0)$ is the set of semi-valuations $v$ on the complete local ring $\widehat{\mathcal O_{X,0}}$ of $X$ at $0$ that are normalized by the condition $\min_{f\in{\mathfrak M}}v(f)=1$, where $\mathfrak M$ is the maximal ideal of $\widehat{\mathcal O_{X,0}}$, endowed with the pointwise convergence topology.
The homotopy type of the non-archimedean link $\NL(X,0)$ is well understood in terms of the resolutions of singularities of $(X,0)$, whenever those exist.
Recall that any resolution of singularities $\pi\colon X_\pi\to X$ of $(X,0)$ whose exceptional divisor $\pi^{-1}(0)$ has simple normal crossing singularities gives rise to a \emph{dual simplicial complex} $\Delta_\pi$, which is a finite simplicial complex encoding the incidence relations between the components of $\pi^{-1}(0)$.
It follows from the work of Thuillier \cite{thuillier:geometrietoroidale} that $\Delta_\pi$ can be embedded in $\NL(X,0)$ and that there is a deformation retraction of the latter onto the former.
Since every connected finite simplicial complex is the dual complex of an isolated normal singularity by Koll\'ar \cite{kollar:links}, the homotopy type of $\NL(X,0)$ can be arbitrarily complicated.
However, de Fernex--Koll\'ar--Xu \cite{defernex-kollar-xu:dualcomplex} have proved that $\Delta_\pi$ is contractible for isolated log terminal singularities.
On the other hand, the topology of non-archimedean links is poorly understood and has been analyzed in depth only in the case of surfaces.
One can show that $\NL(\bb{A}^2_k,0)$ is a compact real tree, that is a union of segments which does not contain any non-trivial loop, see~\cite{berkovich:book,jonsson:berkovich,favre-jonsson:valtree}.
Its structure is however quite intricate since it has a dense set of ramification points (corresponding to points of \emph{type 2} as in~\cite{berkovich:book}), and the set of branches at such a point
is naturally parameterized by $\bb{P}^1(k)$, which may be uncountable. When $k$ is a countable field, $\NL(\bb{A}^2_k,0)$ is metrizable and homeomorphic to the Wa\.{z}ewski universal dendrite by~\cite{HrushovskiLoeserPoonen2014}.
The non-archimedean link of a surface singularity $(X,0)$ can be obtained by gluing copies of $\NL(\bb{A}^2_k,0)$ to a finite graph. This picture has enabled
de Felipe \cite{deFelipe2017} to completely describe the homeomorphism types of non-archimedean links of surface singularities, but
her result shows that the topology of $\NL(X,0)$ fails to encode much information about the singularity. For example, $\NL(X,0)$ is homeomorphic to $\NL\big(\bb{A}^2_k,0\big)$ when $(X,0)$ is a rational singularity, or when $X$ admits a good resolution whose exceptional locus is irreducible. The topology of $\NL(X,0)$ thus forgets the function fields of exceptional components of a resolution. In order to characterize interesting classes of singularities one needs to retain some of this information.
To do so, we consider the sheaf on $\NL(X,0)$ which is induced by the sheaf of analytic functions on $X^\mathrm{an}$.
The resulting ringed space was studied by the first author in \cite{fantini:normspaces}.
Note that we cannot expect $\NL(X,0)$, with this additional analytic structure, to be isomorphic to a proper subspace of itself, as any such isomorphism would have to send the endpoints of $\NL(X,0)$ to endpoints, forcing it to be surjective.
Indeed, already in the case of a smooth point of a surface, the non-archimedean link is isomorphic to a proper subspace of itself only after removing finitely many endpoints (more precisely, finitely many points of \emph{type 1}, that is, the endpoints corresponding to semi-valuations with nontrivial kernel).
Whenever such an isomorphism exists we will say that the non-archimedean link $\NL(X,0)$ is \emph{self-similar} (see the condition \ref{condition_prime} on page~\pageref{conditiondag}).
In this paper we show that the non-archimedean link $\NL(X,0)$ of a normal surface singularity $(X,0)$ is self-similar if and only if $(X,0)$ is a \emph{sandwiched singularity}.
Over the complex numbers, sandwiched singularities were defined by Spivakovsky in \cite{spivakovsky:sandsingdesingsurfNashtransf} as those normal surface singularities whose complex analytic germs dominate bimeromorphically a smooth germ; in \emph{loc.\ cit.} they play a crucial role in the proof of the desingularization of surfaces via Nash transformations.
Several authors have further contributed to the study of sandwiched singularities, for example their deformation theory has been investigated by T. de Jong and van Straten \cite{deJongvanStraten1998}, while their Milnor fibers have been described by N\'emethi and Popescu-Pampu \cite{NemethiPopescu-Pampu2010}.
In order to work over an algebraically closed field $k$ of arbitrary characteristic we will need to replace complex analytic germs with formal germs, that is, we will work with complete local rings; a precise definition will be given in Section~\ref{section_preliminariessandwiched}.
Our main result is the following theorem, which gives several characterizations of sandwiched singularities.
\begin{letthm}\label{mainthm}
Let $(X,0)$ be a normal surface singularity.
The following are equivalent:
\begin{enumerate}[label=(\roman{enumi})]
\item \label{condition_sandwiched}
$(X,0)$ is sandwiched;
\item \label{condition_strongly_selfsim}
there exists a finite set $T$ of type 1 points of $\NL(X,0)$ such that every point of $\NL(X,0)$ that is not of type 2 has a basis of neighborhoods each isomorphic to $\NL(X,0)\setminus T$;
\item \label{condition_valtree}
there exists a finite set $T$ of type 1 points of $\NL(X,0)$ such that $\NL(X,0)\setminus T$ is isomorphic to an open subspace of $\NL(\bb{A}^2_k,0)$;
\item \label{condition_selfsim}
there exists a finite set $T$ of type 1 points of $\NL(X,0)$ such that every open subset of $\NL(X,0)$ contains an open subset isomorphic to $\NL(X,0) \setminus T$;
\item \label{condition_graph}
there exists a good resolution of $(X,0)$ whose weighted dual graph is self-similar;
\item \label{condition_kato}
there exists a proper birational morphism of algebraic $k$-surfaces $\pi\colon X'\to X$ which is not an isomorphism above $0$, together with a point $p\in \pi^{-1}(0)$ and an isomorphism of complete local rings $\widehat{\mathcal O_{X',p}} \cong \widehat{\mathcal O_{X,0}}$.
\end{enumerate}
\end{letthm}
Both \ref{condition_strongly_selfsim} and \ref{condition_selfsim} can be interpreted as self-similarity properties for $\NL(X,0)$, and imply the condition~\ref{condition_prime}.
In \ref{condition_graph}, a vertex of a dual graph is weighted by the genus and the self-intersection of the corresponding component; such a weighted graph is said to be \emph{self-similar} if it is isomorphic to a graph modification of itself, see Section~\ref{section_graphs} for more details.
A datum like the one of \ref{condition_kato} will be called a \emph{Kato datum} for $(X,0)$.
Let us now illustrate the main ingredients of the proof of Theorem~\ref{mainthm}, which requires a combination of methods from resolution of singularities, non-archimedean analytic geometry, formal geometry, and combinatorics.
Begin by observing that any sandwiched singularity can be obtained by performing a composition of point blowups $(Y,D) \to (\bb{A}^2_k,0)$, followed by the contraction of a connected divisor $E$ on $Y$ that is supported on $D$.
This procedure yields two maps $Y\to X \to \bb{A}^2_k$ showing that the contracted surface $X$, whose singular point $0$ is the image of $E$ under the contraction map, is ``sandwiched'' between two smooth surfaces, which justifies the terminology.
By picking any point $y$ in $E$ and performing on it the same sequence of blowups and contraction (see Subsection~\ref{sec:164} for a detailed explanation of how this can be done), one obtains a surface $X'$ above $Y$ with a singular point $p$ such that $\widehat{\mathcal O_{X',p}} \cong \widehat{\mathcal O_{X,0}}$, that is, a Kato datum. This proves the implication \ref{condition_sandwiched} $\implies$ \ref{condition_kato}.
In Sections \ref{section_preliminarieslink} and \ref{section_links_surfaces} we study in detail the structure of non-archimedean links.
In particular, with any modification $(Y',D')$ of $(X,0)$ is associated a \emph{center map} $\cent_{Y'}\colon \NL(X,0)\to D'$, and if $y'$ is a closed point of $D'$ then its inverse image $\cent_{Y'}^{-1}(y')$ is an open subspace of $\NL(X,0)$ that is isomorphic to the complement of finitely many points of type 1 in the non-archimedean link $\NL(Y',y')$ of $y'$ in $Y'$ (see Proposition~\ref{proposition_propertiesNL}).
When applied to the Kato datum $X'\to X$ above, this shows that $\NL(X,0)$ contains a strict open subspace that is isomorphic to the complement of finitely many points of type 1 in $\NL(X,0)$ itself, since $\widehat{\mathcal O_{X',p}} \cong \widehat{\mathcal O_{X,0}}$ implies that $\NL(X',p)$ and $\NL(X,0)$ are isomorphic; that is, condition~\ref{condition_prime} holds.
Similar arguments based on the study of the structure of non-archimedean links yield the implications \ref{condition_sandwiched} $\implies$ \ref{condition_valtree} $\implies$ \ref{condition_strongly_selfsim} $\implies$ \ref{condition_selfsim} of Theorem~\ref{mainthm}, as explained in Section~\ref{section_preliminariessandwiched}.
Showing that singularities with self-similar links also have self-similar weighted dual graphs, that is, the implication \ref{condition_prime} $\implies$ \ref{condition_graph} of Theorem~\ref{mainthm}, is a slightly more delicate matter, undertaken in Section~\ref{section_proof_main}.
We prove an extension result for morphisms of a punctured disc into $\NL(X,0)$, Proposition~\ref{lem:keyboundary}, and deduce that we can assume that the boundary $\partial U$ of an open subset $U$ of $\NL(X,0)$ isomorphic to the complement of finitely many points of type 1 in $\NL(X,0)$ consists only of finitely many points of type 2.
We then use other results on the structure of $\NL(X,0)$, proven in Subsection \ref{subsection_existencemodifications}, to produce a (formal) modification $(Y',D')$ of $(X,0)$ together with a closed point $y'$ of $D'$ such that $U\cong \cent_{Y'}^{-1}(y')$.
Finally, we show that the dual graph associated with $(X,0)$ is self-similar by carefully choosing compatible resolutions of $(Y',y')$ and $(X,0)$.
The proof of the remaining implication, \ref{condition_graph} $\implies$ \ref{condition_sandwiched}, requires two distinct steps that we believe to be of independent interest.
First of all, we prove a purely combinatorial result, Theorem~\ref{thm:graph-sdw}, showing that every self-similar weighted graph is \emph{sandwiched}, which means that it can be embedded in a graph modification of the trivial graph (the dual graph of the blowup of $\bb{A}^2_k$ at $0$).
We then show in Theorem~\ref{thm:extend-spiv} that, if $(X,0)$ is a normal surface singularity admitting a good resolution whose associated weighted dual graph is sandwiched, then $(X,0)$ is a sandwiched singularity.
Over the complex numbers this result was originally proven by Spivakovsky in \cite{spivakovsky:sandsingdesingsurfNashtransf}, but his proof relies on plumbing techniques for complex analytic spaces; we proceed in a similar way, using an analogue of plumbing in formal geometry.
\medskip
\begin{center}
$\diamond$
\end{center}
\medskip
Since sandwiched singularities can be characterized in terms of their non-archimedean links, it is also natural to look for a characterization of them in terms of their archimedean links.
We have not been able to find a self-similar property reminiscent of Theorem~\ref{mainthm}, \ref{condition_strongly_selfsim} or \ref{condition_selfsim}.
However, building on Theorem~\ref{mainthm}, \ref{condition_kato}, in Section~\ref{section_complexanalytic} we observe that links of sandwiched singularities are exactly those arising on a specific class of smooth compact complex surfaces.
To state our results precisely, we need to introduce some terminology. A compact complex surface $S$ contains a \emph{global spherical shell} if it contains a biholomorphic copy of a neighborhood of the $3$-sphere in $\bb{C}^2$ that does not disconnect $S$. Surfaces containing a global spherical shell have been completely described by Kato~\cite{kato:cptcplxmanifoldsGSS} (see also the subsequent work of G.~Dloussky~\cite{dloussky:phdthesis}).
They are non-K\"ahler compact surfaces of Kodaira dimension $-\infty$ that play a special role in the Kodaira classification of compact complex surfaces, see the introduction of \cite{MR2726099}.
Primary Hopf surfaces are the most emblematic examples of such surfaces: they are obtained as the orbit space of a contracting germ of biholomorphism of $(\bb{C}^2,0)$, and are diffeomorphic to the product of spheres $\mathbb S^3\times \mathbb S^1$. Any surface containing a global spherical shell is a deformation of a modification of a primary Hopf surface.
\begin{letthm}\label{thm3}
Let $(X,0)$ be a complex sandwiched
singularity, and choose a local embedding $X \subset \bb{C}^n$.
Then, for any $\varepsilon$ small enough, there exist a smooth compact complex surface $S$ having a global spherical shell and a holomorphic embedding
\[
\imath : X\cap \{z\in \bb{C}^n,\, \varepsilon/2 < \|z\| < 2 \varepsilon\} \longrightarrow S
\]
such that $S \setminus \LCep(X,0)$ is connected.
\end{letthm}
Observe that the link $\LCep(X,0)$ is contained in the domain $X\cap \{z\in \bb{C}^n,\, \varepsilon/2 < \|z\| < 2 \varepsilon\}$, so that the property in the previous theorem can be phrased as an embedding property for the archimedean link of a sandwiched singularity. Following the terminology of M.~Kato, this says that
$\LCep(X,0)$ can be realized as a real-analytic global strongly pseudoconvex $3$-fold
in a surface containing a global spherical shell, see Section~\ref{section_complexanalytic} for a discussion of these notions.
We prove that this property characterizes sandwiched singularities.
\begin{letthm}\label{thm2}
Let $(X,0)$ be a complex normal surface singularity, and choose a local embedding $X \subset \bb{C}^n$.
Suppose that for some small enough $\varepsilon>0$, the archimedean link $\LCep (X,0)$ can be realized as a real-analytic global strongly pseudoconvex $3$-fold in a compact complex surface $S$.
Then we are in exactly one of the following situations:
\begin{enumerate}[label=(\roman{enumi})]
\item\label{item:thm2a}
$(X,0)$ is a weighted homogeneous singularity which is not rational, and $S$ is an elliptic surface of Kodaira dimension either $0$ or $1$;
\item\label{item:thm2b}
$(X,0)$ is a quotient singularity, and $S$ is a secondary Hopf surface;
\item\label{item:thm2c}
$(X,0)$ is a sandwiched singularity, and $S$ carries a global spherical shell.
\end{enumerate}
\end{letthm}
An elliptic surface is a compact complex surface carrying an elliptic fibration~\cite[p.200]{barth-hulek-peters-vanderven:compactcomplexsurfaces}. A
Hopf surface is a compact complex surface whose universal cover is biholomorphic to $\bb{C}^2\setminus\{0\}$. When its fundamental group is infinite cyclic, it is a primary Hopf surface in the previous sense.
Otherwise it admits a non-trivial finite cyclic cover by a primary Hopf surface, in which case it is called a secondary Hopf surface.
A secondary Hopf surface is never elliptic and does not contain any global spherical shell, so that the three classes of surfaces arising in Theorem~\ref{thm2} are pairwise disjoint.
Observe that Theorems~\ref{thm3} and~\ref{thm2} immediately imply the following result.
\begin{letcor}\label{cor2}
A normal surface singularity is sandwiched if and only if its archimedean link can be realized as a
real-analytic global strongly pseudoconvex $3$-fold in a compact complex surface containing a global spherical shell.
\end{letcor}
\noindent {\bf Acknowledgements.} We would like to warmly thank B. Teissier, who asked us about a characterization of singularities having self-similar Riemann-Zariski spaces.
This paper is a tentative answer to his question in the framework of normalized Berkovich analytic spaces.
\section{Preliminaries on non-archimedean links}\label{section_preliminarieslink}
In this section we recall the construction of the non-archimedean link $\NL(X,Z)$ of a subscheme $Z$ in an algebraic variety $X$ from \cite{fantini:normspaces} (where it is called normalized non-archimedean link).
\subsection{Berkovich analytifications}
We begin by recalling the definition of the Berkovich analytification of an algebraic variety, following \cite{berkovich:book}.
Let $K$ be a field complete with respect to a non-archimedean absolute value $|\cdot|$, that is, an absolute value such that $|a+b|\leq \max\{|a|,|b|\}$ for every $a$ and $b$ in $K$.
We denote by $K^\circ =\{ a\in K \mid |a| \le 1\}$ the valuation ring of $K$.
In the sequel $K$ will either be the field of Laurent series $k((t))$ with a $t$-adic absolute value for some algebraically closed field $k$,
or any field endowed with the trivial absolute value, that is the absolute value such that $|K^\times|=1$.
Let $X=\Spec(A)$ be an affine algebraic variety over $K$.
The \emph{analytification} $X^\mathrm{an}$ of $X$ is defined as the following set of multiplicative semi-norms:
\[
X^{\mathrm{an}}=\Big\{x\colon A\to \bb{R}_{\geq0} \,\big|\, x(ab)=x(a)x(b), x(a+b)\leq x(a)+x(b), x(c)=|c| \, \forall a,b\in A, c\in K\Big\},
\]
with the topology of pointwise convergence, that is, the topology induced by the product topology of $(\bb{R}_{\geq0})^A$.
The definition extends by gluing to any algebraic variety over $K$ (and more generally to any $K$-scheme of finite type).
If $x$ is a point of $X^\mathrm{an}$ then its kernel $\ker x$ is a prime ideal of $A$, and $x$ induces an absolute value on the quotient domain $A/\ker x$, which extends uniquely to its fraction field, the residue field of $X$ at $\ker x$. The completion of this residue field with respect to the absolute value is a complete valued field extension of $K$ which will be denoted by $\mathscr H(x)$ and called the \emph{complete residue field} of $X^\mathrm{an}$ at $x$.
If $f$ is an algebraic function on $X$, we will denote by $f(x)$ its image in $\mathscr H(x)$, and by $|f(x)|\in\bb{R}_{\geq0}$ the image of $f(x)$ through the absolute value of $\mathscr H(x)$.
More generally, $X^\mathrm{an}$ comes equipped with a sheaf of $K$-algebras of \emph{analytic functions}, consisting of the functions
which can be written locally as a uniform limit of rational functions without poles.
If $U$ is an open subspace of $X^\mathrm{an}$, then an analytic function $f\in\mathcal O_{X^\mathrm{an}}(U)$ can be evaluated in a point $x$ of $U$, yielding an element $f(x)\in \mathscr H(x)$, and therefore a positive real number $|f(x)|$.
An analytic function $f$ on $U$ is said to be \emph{bounded} if $|f(x)|\leq1$ for every $x$ in $U$.
Bounded analytic functions form a subsheaf $\mathcal O^\circ_{X^\mathrm{an}}$ of $K^\circ$-modules of $\mathcal O_{X^\mathrm{an}}$.
Moreover, $X^\mathrm{an}$ can be endowed with an additional Grothendieck topology.
We will not discuss this last aspect in the rest of the paper since we will only be considering open subspaces of analytic spaces.
\subsection{The center map}
For the remainder of this section we work with an algebraic variety $X$ over a trivially valued and algebraically closed field $k$.
Any point $x$ of $X^\mathrm{an}$ comes together with a morphism $\alpha\colon \Spec\big(\mathscr H (x)\big)\to X$.
We say that $x$ \emph{has center} on $X$ if $\alpha$ extends to a morphism $\overline{\alpha} \colon \Spec\big(\mathscr H (x)^\circ\big)\to X$, where $\mathscr H (x)^\circ$ is the valuation ring of $\mathscr H (x)$.
The \emph{center} of $x$ is then defined as the point $\mathrm{c}_X(x)=\overline{\alpha}(s)$ of $X$, where $s$ is the closed point of $\Spec\big(\mathscr H (x)^\circ\big)$.
This coincides with the notion, classical in valuation theory, of the center of the valuation ring $\mathscr H(x)^\circ$.
Observe that, since $X$ is separated by hypothesis, whenever $x$ has center on $X$ then its center $\mathrm{c}_X(x)$ is a well defined point of $X$.
If moreover $X$ is proper over $k$, then by the valuative criterion of properness every point of $X^\mathrm{an}$ has center on $X$, but this is not true in general.
The \emph{center map} $\mathrm c_X\colon \big\{x\in X^\mathrm{an} \text{ having center on } X\big\}\to X$ is anticontinuous, which means that $\mathrm c_X^{-1}(Z)$ is open whenever $Z$ is a closed subvariety of $X$.
The notion of center describes the property for a point of $X^\mathrm{an}$ to be close to a point of $X$, so whenever $Z$ is a closed subvariety of $X$ the subset $\mathrm c_X^{-1}(Z)$ of $X^\mathrm{an}$ can be thought of as a tubular neighborhood of $Z$ in $X^\mathrm{an}$.
The complement $\mathrm c_X^{-1}(Z)\setminus Z^\mathrm{an}$ of $Z^\mathrm{an}$ in the tubular neighborhood can then be thought of as a \emph{punctured tubular neighborhood} of $Z$ in $X^\mathrm{an}$.
\begin{rmk}\label{rmk:thu}
In the terminology of \cite{thuillier:geometrietoroidale}, the punctured tubular neighborhood $\mathrm c_X^{-1}(Z)\setminus Z^\mathrm{an}$ coincides with the analytic space $\big(\widehat{X/Z}\big)_\eta$ associated with the formal completion $\widehat{X/Z}$ of $X$ along $Z$.
\end{rmk}
\subsection{Non-archimedean links}
We fix an algebraic variety $X$ over $k$, and a nonempty and nowhere dense closed subscheme $Z$ of $X$.
\smallskip
An element $\lambda$ of $\bb{R}_{>0}$ acts on a point $x$ of $X^\mathrm{an}$ by raising the semi-norm $x$ to the power $\lambda$.
Indeed, the condition for $x^\lambda$ to belong to $X^\mathrm{an}$, namely that it restricts to the trivial absolute value on $k$, is satisfied since the trivial absolute value is invariant under exponentiation by elements of $\bb{R}_{>0}$.
Observe that as an abstract field the completed residue field $\mathscr H(x)$ is isomorphic to $\mathscr H(x^\lambda)$, but the absolute value of the latter is obtained by raising to the power $\lambda$ the one of the former.
Therefore, neither the abstract valuation ring $\mathscr H(x)^\circ$ nor the morphism of schemes $\Spec\big(\mathscr H(x)\big)\to X$ associated with $x$ change when replacing $x$ by $x^\lambda$.
It follows that the action of $\bb{R}_{>0}$ on $X^\mathrm{an}$ induces an action on the punctured tubular neighborhood $\mathrm c_X^{-1}(Z)\setminus Z^\mathrm{an}$.
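In explicit terms (a straightforward reformulation of the discussion above), on an affine chart $\Spec(A)$ of $X$ the action is given by the formula
\[
x^\lambda(f)=x(f)^\lambda \qquad \text{for every } f\in A.
\]
Multiplicativity of $x^\lambda$ is immediate; moreover $x^\lambda$ is still trivial on $k$, since the trivial absolute value takes only the values $0$ and $1$, both of which are fixed by $r\mapsto r^\lambda$, and the ultrametric inequality $x(f+g)\leq\max\{x(f),x(g)\}$, which holds at every point of $X^\mathrm{an}$ because the restriction of $x$ to $k$ is non-archimedean, is preserved when raising to the power $\lambda$.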
\smallskip
We define the \emph{non-archimedean link} $\NL(X,Z)$ of $Z$ in $X$ as the quotient of the punctured tubular neighborhood of $Z$ in $X^\mathrm{an}$ by this action:
\[
\NL(X,Z) = \big( \mathrm{c}_X^{-1}(Z)\setminus Z^{\mathrm{an}} \big) \big/ \bb{R}_{>0}.
\]
We endow $\NL(X,Z)$ with the quotient topology, and see it as a ringed space by endowing it with the following two sheaves.
The \emph{sheaf of analytic functions} $\mathcal O_{\NL(X,Z)}$ on $\NL(X,Z)$ is by definition the push-forward to $\NL(X,Z)$ of the sheaf of analytic functions $\mathcal O_{\mathrm c_X^{-1}(Z)\setminus Z^\mathrm{an}}$ on the Berkovich analytic space $\mathrm c_X^{-1}(Z)\setminus Z^\mathrm{an}$ via the quotient map.
Analogously, the \emph{sheaf of bounded analytic functions} $\mathcal O_{\NL(X,Z)}^\circ$ on $\NL(X,Z)$ is the push-forward of the sheaf of bounded analytic functions $\mathcal O_{\cent_X^{-1}(Z)\setminus Z^\mathrm{an}}^\circ$.
Both are local sheaves of $k$-algebras and $\mathcal O_{\NL(X,Z)}^\circ$ is a subsheaf of $\mathcal O_{\NL(X,Z)}$.
We will say more about the analytic structure of $\NL(X,Z)$ in the subsequent subsections.
Observe that $\NL(X,Z)$ is a compact topological space by \cite[Proposition 5.9]{fantini:normspaces}.
It is worth noticing that by Remark~\ref{rmk:thu} the space $\mathrm c_X^{-1}(Z)\setminus Z^{\mathrm{an}} $ only depends on the formal completion of $X$ along $Z$, so the same is true for $\NL(X,Z)$.
In particular, if $0$ and $0'$ are closed points of algebraic $k$-varieties $X$ and $X'$ respectively, then the non-archimedean links $\NL(X,0)$ and $\NL(X',0')$ are isomorphic (as locally ringed spaces) if and only if the corresponding complete local rings $\widehat{\mathcal O_{X,0}}$ and $\widehat{\mathcal O_{X',0'}}$ are isomorphic.
It follows that if $k=\bb{C}$ then analytically isomorphic singularities have isomorphic non-archimedean links.
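To illustrate the definition, it may help to work out the simplest possible example, that of a smooth point on a curve (this computation is not needed in the sequel). Take $X=\bb{A}^1_k=\Spec k[u]$ and $Z=\{0\}$. A multiplicative semi-norm $x$ on $k[u]$ that is trivial on $k$, has center $0$, and does not lie in $\{0\}^\mathrm{an}$ must have trivial kernel and $|u(x)|=r$ for some $r\in{]0,1[}$, and one checks that it is then given by
\[
|f(x)| = r^{\mathrm{ord}_0 f} \qquad \text{for every } f \in k[u]\setminus\{0\}.
\]
Thus the punctured tubular neighborhood $\mathrm c_X^{-1}(0)\setminus\{0\}^\mathrm{an}$ is homeomorphic to the interval $]0,1[$ of such norms, the action of $\lambda\in\bb{R}_{>0}$ sends the norm of parameter $r$ to the one of parameter $r^\lambda$, and $\NL(\bb{A}^1_k,0)$ is a single point, corresponding to the valuation $\mathrm{ord}_0$. In particular, non-archimedean links only become topologically interesting in dimension at least $2$.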
One important feature of non-archimedean links is their invariance under modifications.
We use the following non-conventional terminology: we define a \emph{modification} of $(X,Z)$ to be a pair $(Y,D)$, where $Y$ is a normal algebraic $k$-variety and $D$ is a Cartier divisor of $Y$, together with a proper morphism $\pi\colon Y\to X$ which is an isomorphism away from $D$ and such that $D=\pi^{-1}(Z)$ is the (schematic) inverse image of $Z$ under $\pi$.
\medskip
The following result is then a consequence of the valuative criterion of properness.
\begin{prop}[{\cite[Proposition 1.11]{thuillier:geometrietoroidale}}]\label{proposition_invariance_modifications}
If $\pi\colon(Y,D)\to(X,Z)$ is a modification of $(X,Z)$, then $\pi$ induces an isomorphism of locally ringed spaces $\NL(Y,D)\stackrel{\sim}{\to}\NL(X,Z)$.
\end{prop}
The map $\mathrm{c}_X$ induces an anticontinuous map $\mathrm{c}_X\colon \NL(X,Z)\to Z$ which we still call the \emph{center map}.
Thanks to the result above, we also have a center map $\mathrm c_Y\colon \NL(X,Z)\to D$ associated with any modification $(Y,D)$ of $(X,Z)$.
\subsection{Analytic structure of non-archimedean links}
We fix an algebraic variety $X$ over $k$, and a nonempty and nowhere dense closed subscheme $Z$ of $X$.
We now explain some properties of the ringed space structure of the non-archimedean link $\NL(X,Z)$.
\smallskip
Some caution is needed when working with analytic functions on $\NL(X,Z)$, as they cannot be evaluated at points of $\NL(X,Z)$, given that such a point is only a $\bb{R}_{>0}$-equivalence class of semi-norms.
However, it is possible to say whether the value of a function $f\in\mathcal O_{\NL(X,Z)}(U)$ at a point $x\in U$ lies in $\{0\}$, $]0,1[$, $\{1\}$, or $]1,+\infty[$, as those are the orbits of $\bb{R}_{\geq0}$ under the action of $\bb{R}_{>0}$ by exponentiation.
In particular it makes sense to ask whether a function vanishes at a point.
Moreover, for any open set $U$ one can interpret $\mathcal O_{\NL(X,Z)}^\circ(U)$ as the ring of those functions on $U$ which are bounded by $1$.
\medskip
As follows from the definition, with any point $x$ of $\NL(X,Z)$ is associated a field extension $\mathscr H(x)$ of $k$, endowed with a rank $1$ valuation (but not with an absolute value) trivial on $k$ with respect to which $\mathscr H(x)$ is complete, and therefore with a rank $1$ valuation ring, which we still denote by $\mathscr H(x)^\circ$.
Conversely, every complete (rank 1) valuation ring on the residue field of a scheme-theoretic point of $X\setminus Z$ such that its center on $X$ is in $Z$ comes from a point of $\NL(X,Z)$.
Observe that a function $f\in\mathcal O_{\NL(X,Z)}(U)$ is bounded by $1$ on $U$ if and only if $f(x)\in \mathscr H(x)^\circ$ for every $x$ in $U$.
The valued field $\mathscr H(x)$ can be defined more intrinsically, in a way that only depends on the ringed space structure of $\NL(X,Z)$, as the completion of the residue field of the local ring $\mathcal O_{\NL(X,Z),x}$ with respect to the valuation induced by $\mathcal O_{\NL(X,Z),x}^\circ$.
The analytic structure of $\NL(X,0)$ contains abundant information about the pair $(X,0)$, as is clear from the following result of \cite{fantini:normspaces}, which we recall for the reader's convenience.
\begin{prop}[{\cite[Corollary 4.14]{fantini:normspaces}}]\label{proposition_propertiesNL_1}
Let $0$ be any closed point in an algebraic variety $X$ and assume that $X$ is normal at $0$.
Then the canonical morphism
\[
\widehat{\mathcal O_{X,0}}
\to
\mathcal O_{\NL(X,0)}^\circ \big( \NL(X,0) \big)
\]
is an isomorphism.
\end{prop}
\subsection{Non-archimedean links and $k((t))$-analytic spaces}
A crucial property of non-archimedean links is that they are locally isomorphic to analytic spaces over a field of Laurent series $k((t))$ with $t$-adic absolute value, in the following sense.
Choose $\varepsilon\in]0,1[$ and endow $k((t))$ with the $t$-adic absolute value such that $|t|=\varepsilon$.
As we have seen, any Berkovich analytic space $\mathfrak X$ over $k((t))$, for example an open or closed subspace of the analytification of an algebraic $k((t))$-variety, comes equipped both with a sheaf of $k((t))$-algebras $\mathcal O_{\mathfrak X}$ and with a sheaf of $k[[t]]$-algebras $\mathcal O_{\mathfrak X}^\circ$.
We can regard these two sheaves simply as sheaves of $k$-algebras, yielding a triple which we denote by $\For(\mathfrak X)=\big({\mathfrak X},\mathcal O_{\mathfrak X},\mathcal O^\circ_{\mathfrak X}\big)$.
It then makes sense to ask whether a non-archimedean link is isomorphic as such a triple to a triple of the form $\For(\mathfrak{X})$.
In general, this is true only locally, in the following sense.
If $X$ is affine, then $\NL(X,Z)$ can be covered by finitely many open subspaces which, as ringed spaces in $k$-algebras, are of the form $\For(\mathfrak X)$ for some $k((t))$-analytic space $\mathfrak X$.
Observe that this is also the case if $Z$ is a single point of $X$, since $X$ can then be replaced by an affine neighborhood of $Z$.
If $X$ is not affine, then $\NL(X,Z)$ is covered by the compact domains $\mathrm c_X^{-1}(U\cap Z)$, for $U$ ranging among an open affine cover of $X$, and each $\mathrm c_X^{-1}(U\cap Z)$ is locally isomorphic to a $k((t))$-analytic space in the sense above.
Note that the datum consisting of such a covering and of the $k((t))$-analytic structures is non-canonical.
A proof of this fact can be found in \cite[Corollary 4.10]{fantini:normspaces}, but to help the reader familiarize with the structure of non-archimedean links we illustrate here what happens in the case when $Z=\{0\}$ is a closed point of $X$.
Let $f$ be an element of the completed local ring $\widehat{\mathcal O_{X,0}}$ of $X$ at $0$.
This defines a $k$-analytic map from $\mathrm{c}_X^{-1}(0)$ into the open unit ball in ${\mathbb A}^{1,\mathrm{an}}_k$.
The latter is canonically homeomorphic to the interval $[0,1[$, and under
this homeomorphism this analytic map is given by the absolute value $\mathrm{c}_X^{-1}(0) \stackrel{|f|}{\longrightarrow} \left[0,1\right[$.
The fiber $|f|^{-1}(\varepsilon)$ of $|f|$ at $\varepsilon \in \left]0,1\right[$ is then an analytic space over $k((t))$, since the completed residue field of $\mathbb A^{1,\mathrm{an}}_k$ at the point corresponding to $\varepsilon$ is the field $k((t))$ with the $t$-adic absolute value such that $|t|=\varepsilon$.
Then the projection $\pi\colon \mathrm{c}_X^{-1}(0)\setminus\{0\}^\mathrm{an} \to \NL(X,0)$ defining $\NL(X,0)$ identifies $\For\big(|f|^{-1}(\varepsilon)\big)$ with its image in $\NL(X,0)$, which is the complement $\NL(X,0)\setminus V(f)$ of the zero locus $V(f)$ of $f$ in $\NL(X,0)$.
Finally, by letting $f$ range over a finite set of generators of the maximal ideal of $\widehat{\mathcal O_{X,0}}$, we obtain a cover of $\NL(X,0)$ by finitely many open subspaces, each of which is isomorphic to a $k((t))$-analytic space.
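For example, if $X=\bb{A}^2_k$ with coordinates $T$ and $t$, the two generators $T$ and $t$ of the maximal ideal yield the cover
\[
\NL(\bb{A}^2_k,0)=\big(\NL(\bb{A}^2_k,0)\setminus V(T)\big)\cup\big(\NL(\bb{A}^2_k,0)\setminus V(t)\big),
\]
since a point vanishing on both coordinates would have kernel containing the maximal ideal, hence would lie in $\{0\}^\mathrm{an}$; by Lemma~\ref{lemma_disc_annulus} below (applied symmetrically in the two coordinates), each of these two open subspaces is a disc.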
\begin{rmk}
Observe that the $k((t))$-analytic space $|f|^{-1}(\varepsilon)$ is the analytic Milnor fiber $\mathscr F_{f,0}$ of $f$ at $0$, an object defined and studied in \cite{nicaise-sebag:analyticmilnorfiber}.
The non-archimedean link $\NL(X,0)$ can be thought of as a global version of $\mathscr F_{f,0}$, dependent only on the germ of $X$ at $0$ and not on $f$.
\end{rmk}
\subsection{Discs and annuli}\label{sec:DSK}
Some specific subspaces of non-archimedean links of surfaces are particularly important and deserve to be studied in depth.
Let $T$ denote a coordinate function on the $k((t))$-analytic affine line $\bb{A}^{1,\mathrm{an}}_{k((t))}$.
We say that a subspace $U$ of a non-archimedean link $\NL(X,Z)$ is a \emph{disc} if, as a ringed space in $k$-algebras, $U$ is isomorphic to $\For(D)$, where $D=\big\{x\in \bb{A}^{1,\mathrm{an}}_{k((t))} \,\big|\, |T(x)|<1\big\}$ is an open unit $k((t))$-analytic disc.
We say that $U$ is an \emph{annulus} if, as a ringed space in $k$-algebras, $U$ is isomorphic to $\For(A)$, where $A=\big\{x\in \bb{A}^{1,\mathrm{an}}_{k((t))} \,\big|\, |t|<|T(x)|<1\big\}$ is an open $k((t))$-analytic annulus of modulus one.
We collect in the following statement some well known results about the topology of discs and annuli.
\begin{thm}[See {{\cite[\S4.1 and \S4.2]{berkovich:book}}}]
\label{thm:R-tree}
Let $D=\big\{x\in \bb{A}^{1,\mathrm{an}}_{k((t))} \,\big|\, |T(x)|<1\big\}$ and $A=\big\{x\in \bb{A}^{1,\mathrm{an}}_{k((t))} \,\big|\, |t|<|T(x)|<1\big\}$ be a $k((t))$-analytic disc and a $k((t))$-analytic annulus respectively.
For any $r\in (0,1)$ (respectively for $r\in (|t|,1)$), denote by $x_r$ the point of $D$ (respectively of $A$) defined by $|P(x_r) | = \sup_{|z|\le r} |P(z)|$ for any $P\in k((t))[T]$. \label{def of xr}
\begin{enumerate}
\item
$D$ and $A$ are uniquely arcwise connected: any two distinct points $x, y$ are contained in a unique closed subset $I$ which is homeomorphic to the closed interval $[0,1]$
and such that $I \setminus \{x,y\}$ is homeomorphic to the open interval $(0,1)$.
\item
$D$ has a single endpoint.
Given a continuous, proper, and injective map $\gamma \colon \bb{R}_+ \to D$, there exists a constant $C >0$ such that
$\gamma \big([C, +\infty)\big) = \{ x_r \textrm{, } 1 - \varepsilon \le r < 1\}$ for some $0<\varepsilon <1$.
\item
$A$ has two endpoints.
Given a continuous, proper, and injective map $\gamma\colon \bb{R}_+ \to A$, there exist constants $C>0$ and $|t|<\varepsilon <1$ such that $\gamma \big([C, +\infty)\big)$ is either equal to $\{ x_r \textrm{, } 1 - \varepsilon \le r < 1\}$, or to $\{ x_r\textrm{, } |t| < r\le |t| + \varepsilon\}$.
\end{enumerate}
\end{thm}
Now let $U$ be a subset of a non-archimedean link $\NL(X,Z)$ and assume that $U$ is an annulus.
Fix a $k((t))$-analytic annulus $A$ such that $U \cong \For(A)$, and denote by $\Sigma(U)$ the subset of $U$ corresponding to the subset $\{ x_r \textrm{ s.t. } |t| < r < 1\}$ of $A$.
It consists of points of type 2 or 3 of $A$.
The fact that the subset $\Sigma(U)$ of $U$ does not depend on the choice of a $k((t))$-analytic annulus $A$ such that $U \cong \For(A)$ is a consequence of the following proposition.
\begin{prop}\label{proposition skeleton annulus}
The subset $\Sigma(U)$ of $U$ coincides with the set of points of $U$ which have no neighborhood isomorphic to a disc.
\end{prop}
\begin{proof}
Any neighborhood of a point of $\Sigma(U)$ in $U$ has at least two endpoints, and thus cannot be a disc.
Conversely, let as above $A$ be a $k((t))$-analytic annulus such that $U \cong \For(A)$.
The complement of $\Sigma(U)$ in $A$ is the union of the open balls $D(z,|z|)$, for $z$ ranging among the rigid points of $A$.
If $D=D(z,|z|)$ is such a ball, then $z$ is defined over a finite extension $k((s))$ of $k((t))$, and the analytic function $s$ is globally defined on $D$.
Having a rational point as center and a rational radius, $D$ can be seen as an open $k((s))$-analytic disc, and therefore $\For(D)$ is a disc.
\end{proof}
The following lemma, which will be used in section \ref{section_proof_main}, shows how discs and annuli appear as open subspaces of $\NL(\bb{A}^2_k,0)$.
\begin{lem}\label{lemma_disc_annulus}
Let $T$ and $t$ denote two coordinates for $\mathbb A^2_k$ at $0$.
Then:
\begin{enumerate}
\item $\NL(\bb{A}^2_k,0) \setminus V(t)$ is a disc;
\item $\NL(\bb{A}^2_k,0) \setminus V(tT)$ is an annulus.
\end{enumerate}
\end{lem}
\begin{proof}
Note that
$\mathrm{c}_{\bb{A}^{2}_{k}}^{-1}(0) = \big\{x\in \bb{A}^{2,\mathrm{an}}_{k} \,\big|\, |T(x)|<1, |t(x)|<1 \big\}$.
Therefore we have
\begin{align*}
\NL(\bb{A}^{2}_{k},0) \setminus V(t)
& \cong \big\{ x \in \mathrm{c}_{\bb{A}^{2}_{k}}^{-1}(0) \,\big|\, |t(x)|=\varepsilon\big\} \\
& = \big\{x\in \bb{A}^{2,\mathrm{an}}_{k} \,\big|\, |T(x)|<1, |t(x)|=\varepsilon\big\} \\
& \cong \For\big(\big\{x\in \bb{A}^{1,\mathrm{an}}_{k((t))} \,\big|\, |T(x)|<1\big\}\big) = \For(D).
\end{align*}
The isomorphisms above respect the ringed space structure, proving $(i)$.
Set now $t'=tT$.
Then we have
\begin{align*}
\NL(\bb{A}^2_k,0)\setminus V(t')
& \cong \big\{ x\in \mathrm{c}_{\bb{A}^{2}_{k}}^{-1}(0) \,\big|\, |t'(x)|=\varepsilon\big\}
\\
& = \big\{x\in \bb{A}^{2,\mathrm{an}}_{k} \,\big|\, |T(x)|<1, |t(x)|<1, |t'(x)|=\varepsilon\big\}
\\
& = \big\{x\in \bb{A}^{2,\mathrm{an}}_{k} \,\big|\, |t'(x)|<|T(x)|<1, |t'(x)|=\varepsilon\big\}
\\
& \cong \For\big(\big\{x\in \bb{A}^{1,\mathrm{an}}_{k((t'))} \,\big|\, |t'|<|T(x)|<1\big\}\big)
= \For(A),
\end{align*}
which concludes the proof of $(ii)$.
\end{proof}
\begin{rmk}\label{remark_disc_minus_point}
Note that $V(t)$ and $V(T)$ are single points of $\NL(\bb{A}^{2}_{k},0)$.
Observe that the lemma also shows that a punctured open $k((t))$-analytic disc $D\setminus\{0\}$ and an open $k((t))$-analytic annulus of modulus one, which are not isomorphic as analytic spaces, have isomorphic underlying ringed spaces in $k$-algebras.
\end{rmk}
\subsection{Morphisms}
A morphism $\NL(X,Z) \to \NL(X',Z')$ between two non-archimedean links is a morphism of the underlying ringed spaces
\[
\Big(\NL(X,Z),\mathcal O_{\NL(X,Z)},\mathcal O^\circ_{\NL(X,Z)}\Big)
\longrightarrow
\Big(\NL(X',Z'),\mathcal O_{\NL(X',Z')},\mathcal O^\circ_{\NL(X',Z')}\Big)
\]
which can be locally lifted to a morphism of $k((t))$-analytic spaces.
Rather than giving the precise definition from \cite[6.1]{fantini:normspaces}, we content ourselves with giving a list of examples of morphisms of non-archimedean links, including all the morphisms which will be considered in this paper.
\begin{enumerate}[label=(\roman{enumi})]
\item
An \emph{isomorphism} of non-archimedean links $\NL(X,Z) \to \NL(X',Z')$ is an isomorphism of the underlying ringed spaces.
\item \label{item:bloqups_induce_isomorphism}
As noted in Proposition~\ref{proposition_invariance_modifications}, if $ f \colon (X,Z) \to (X',Z')$ is a modification then the induced morphism $f \colon \NL(X,Z) \to \NL(X',Z')$ is an isomorphism.
\item
A morphism of $k$-varieties $f \colon X \to X'$ such that $f^{-1}(Z') = Z$ induces a morphism of non-archimedean links $f \colon \NL(X,Z) \to \NL(X',Z')$.
\item
In particular, if $Y \subsetneq X$ is a subvariety such that $Y\cap Z$ is nowhere dense in $Y$,
then the induced morphism $\NL(Y, Y\cap Z) \to \NL(X,Z)$ is a closed immersion, that is a map which lifts to closed immersions of $k((t))$-analytic spaces in the sense of Berkovich.
Its image is the closed subspace of $\NL(X,Z)$ consisting of the elements coming from elements of $Y^\mathrm{an}$; note that each such point is a semi-norm with nontrivial kernel.
\item
If $(X,Z)$ is as above and $F$ is a closed subvariety of $Z$, then $\cent_X^{-1}(F)$, which is isomorphic to $\NL(X,F)\setminus Z^\mathrm{an}$, is an open subspace of $\NL(X,Z)$.
\end{enumerate}
\begin{rmk}
Observe that the converse to \ref{item:bloqups_induce_isomorphism} is partially true, in the following sense.
By \cite[Theorem~6.4]{fantini:normspaces}, if $\NL(X,Z)$ is isomorphic to $\NL(X',Z')$ then there exist modifications $(Y,D)\to(X,Z)$ and $(Y',D')\to(X',Z')$ such that the corresponding formal completions $\widehat{Y/D}$ and $\widehat{Y'/D'}$ are isomorphic as formal schemes.
\end{rmk}
\section{Non-archimedean links of surfaces}
\label{section_links_surfaces}
In this section we assume that $X$ is a \emph{normal algebraic surface} over an algebraically closed field $k$ and that $0$ is a closed point of $X$.
Our approach follows the one of \cite[Sections 7, 9, 10]{fantini:normspaces}, but for some results we will provide more concrete proofs.
\subsection{Topology of $\NL(X,0)$ and center maps}
In dimension two the topology of $\NL(X,0)$ can be described in terms of its center maps as follows.
\begin{prop}\label{prop:topo_NL}
Let $(X,0)$ be a normal surface singularity, and let $x$ be a point of $\NL(X,0)$.
Then the family of all sets of the form $\mathrm c_Y^{-1}\big(\overline{\mathrm c_Y(x)}\big)$, for $(Y,D)$ ranging over all the modifications of $(X,0)$, is a basis of neighborhoods of $x$ in $\NL(X,0)$.
\end{prop}
\begin{proof}
Let $\mathfrak{m}$ be the maximal ideal of $\mc{O}_{X,0}$, which consists of the functions vanishing at $0$, let $V$ be any affine neighborhood of $0$ in $X$, and denote by $\mathrm{L}(X,\mathfrak{m})$ the subset of $V^{\mathrm{an}}$ consisting of the multiplicative semi-norms $x$ on $\mathcal O_X(V)$ whose restriction to $k$ is trivial and such that
$\min_{f\in \mathfrak{m}} -\log|f(x)| = 1$.
It is a compact subset of $\mathrm{c}_X^{-1}(0)\setminus \{0\}^{\mathrm{an}}$, and the natural projection $\mathrm{L}(X,\mathfrak{m})\to \NL(X,0)$
is a continuous bijection, hence a homeomorphism, since $\mathrm{L}(X,\mathfrak{m})$ is compact and $\NL(X,0)$ is Hausdorff.
It is then sufficient to prove that the family of all subsets of $\mathrm L(X,\mathfrak{m})$ of the form $\mathrm c_Y^{-1}\big(\overline{\mathrm c_Y(x)}\big)$, for $(Y,D)$ ranging over all the modifications of $(X,0)$, is a basis of neighborhoods of any given point $x$ of $\mathrm L(X,\mathfrak{m})$.
Any set of the form $\mathrm c_Y^{-1}\big(\overline{\mathrm c_Y(x)}\big)$ contains $x$, and it is open because the center map is anti-continuous.
We have to prove that, given any open subset $U$ of $\mathrm L(X,\mathfrak{m})$ containing $x$, there exists a modification $(Y,D) \to (X,0)$ such that $\mathrm c_Y^{-1}\big(\overline{\mathrm c_Y(x)}\big) \subset U$.
Since $X^{\mathrm{an}}$ is endowed with the weakest topology making all evaluation maps $y \mapsto |f(y)|$ continuous, it is sufficient to prove it assuming that $U$ is a finite intersection of sets of the form
$U_<(f,p,q)= \{y, |f(y)| <e^{-p/q}\}$ or $U_>(f,p,q)= \{y, |f(y)| >e^{-p/q}\}$, where $f$ is an element of $\mathcal O_X(V)$ and $p,q$ are coprime integers.
Moreover, since $|f| =1$ on $\mathrm{L}(X,\mathfrak{m})$ whenever $f\notin \mathfrak{m}$, we can also assume that $f\in \mathfrak{m}$.
It is sufficient to find a modification $(Y,D) \to (X,0)$ such that $U_<(f,p,q)$ (resp. $U_>(f,p,q)$) is included in a set of the form $ c_Y^{-1} (\mathcal{E}_<)$ (resp. $ c_Y^{-1} (\mathcal{E}_>)$), for some closed subschemes $\mathcal{E}_<$ and $\mathcal{E}_>$ of $D$.
In order to do so we proceed as follows.
Let $g_1, \ldots, g_N$ be a finite set of generators of $\mathfrak{m}$, and choose a modification $\pi\colon (Y,D) \to (X,0)$
such that $g_i^p/f^q \circ \pi $ defines a regular map $Y \to \mathbb{P}^1_k$ for all $i=1, \ldots, N$.
Let $\mathcal{E}_>$ be the union of all (scheme-theoretic) points $\xi$ of $D$ such that
\begin{equation}\label{eq:>}
\ord_\xi( f^q\circ \pi)> \ord_\xi(\mathfrak{m}^p) := \min_{g\in\mathfrak{m}^p}\{\ord_\xi(g\circ\pi)\}
= \min_{g\in\mathfrak{m}}\{\ord_\xi(g^p\circ\pi)\}~.
\end{equation}
Observe that $z\in \mathcal{E}_>$ if and only if the value of the regular map $g_i^p/f^q \circ \pi$ at $z$ is equal to $\infty \in \mathbb P^1_k$ for at least one index $i$, which implies that $\mathcal{E}_>$ is a finite union of closed points in $D$ together with those irreducible components $E$ of $D$ whose generic point $\xi_E$ satisfies~\eqref{eq:>}.
In particular $\mathcal{E}_>$ is closed in $D$. One defines in the same way $\mathcal{E}_<$ as the set of points $\xi\in D$ satisfying
\[
\ord_\xi( f^q\circ \pi)< \ord_\xi(\mathfrak{m}^p),
\]
and similarly $\mathcal{E}_<$ is closed (it is in fact a finite union of irreducible components of $D$).
Since $g_i^p/f^q \circ \pi $ is regular, we have that $\mathcal{E}_> \cap \mathcal{E}_< =\emptyset$, and moreover
\begin{align*}
U_<(f,p,q) & = \{y, \, -\log |f^q(y)| > p\} = c_Y^{-1} (\mathcal{E}_<)
\\
U_>(f,p,q) & = \{y,\, -\log |f^q(y)| < p\} = c_Y^{-1} (\mathcal{E}_>)~,
\end{align*}
which concludes the proof.
\end{proof}
\subsection{Types of points}
Observe that a point $x$ of $\NL(X,0)$ corresponds to an equivalence class of semi-norms on $X$; being centered in $0$, those induce semi-norms on the completed local ring $\widehat {\mathcal O_{X,0}}$.
The points of $\NL(X,0)$ can then be divided into four different types by looking at the associated valued field ${\mathscr H(x)}$, its transcendence degree over $k$, and its valuative invariants.
We say that a point $x$ of $\NL(X,0)$ is a \emph{rigid point} (or of \emph{type 1}) if the transcendence degree $\mathrm{trdeg}_{k}{\mathscr H(x)}$ of $\mathscr H(x)$ over $k$ is equal to $1$.
Equivalently, $x$ is a rigid point if it corresponds to an equivalence class of semi-norms on $\widehat {\mathcal O_{X,0}}$ with nontrivial kernel.
Moreover, this kernel is generated by an irreducible element of $\widehat {\mathcal O_{X,0}}$, so that $x$ corresponds to an irreducible germ of a formal curve on $(X,0)$; then $x$ can be seen as the equivalence class of the order of vanishing along this germ.
When this is the case, the rational rank
\[
\mathrm{rank}_\bb{Q} \big( |\mathscr H(x)^\times|/|k^\times|\otimes_\bb{Z}\bb{Q} \big)
\]
of $\mathscr H(x)$ is equal to $1$ and its residue field $\widetilde{\mathscr H(x)}$ is equal to $k$.
In every other case we have $\mathrm{trdeg}_{k}{\mathscr H(x)}=2$.
We say that $x$ is a \emph{divisorial point} (or of \emph{type 2}) if the residue field $\widetilde{\mathscr H(x)}$ is a nontrivial extension of $k$, while we say that $x$ is of \emph{type 3} if the rational rank of $\mathscr H(x)$ is equal to $2$.
Finally, we say that $x$ is of \emph{type 4} if it satisfies $\mathrm{rank}_\bb{Q} \big( |\mathscr H(x)^\times|/|k^\times|\otimes_\bb{Z}\bb{Q} \big)=1$ and $\widetilde{\mathscr H(x)}=k$ but it is not of type 1 (that is, the corresponding semi-norms on $\widehat{\mc{O}_{X,0}}$ are norms).
Every point of $\NL(X,0)$ is of one (and only one) of the four types above, since by Abhyankar's inequality \cite{abhyankar:valuationscentered} we have
\[
\mathrm{rank}_\bb{Q} \big( |\mathscr H(x)^\times|/|k^\times|\otimes_\bb{Z}\bb{Q} \big)+\mathrm{trdeg}_{k}\widetilde{\mathscr H(x)}\leq \mathrm{trdeg}_{k}{\mathscr H(x)} \leq 2,
\]
and $\mathrm{rank}_\bb{Q} \big( |\mathscr H(x)^\times|/|k^\times|\otimes_\bb{Z}\bb{Q} \big)\geq1$ because the valuation associated with $x$ is nontrivial.
Observe that an isomorphism of non-archimedean links respects the complete residue fields, and therefore must send a point to a point of the same type.
\begin{rmk}
Recall that $\NL(X,0)$ is locally isomorphic to a $k((t))$-analytic curve. Under any such isomorphism the type of a
point as defined above coincides with its type as defined by Berkovich.
For a definition of types of points in Berkovich curves in terms of their valuative invariants see e.g. \cite[3.3.2]{ducros:structurecourbesanalytiques}.
\end{rmk}
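\begin{rmk}
To illustrate these definitions, take $X=\bb{A}^2_k$ with local coordinates $(x,y)$ at $0$, so that $\widehat{\mc{O}_{X,0}}=k[[x,y]]$; the following standard examples, which we sketch without detailed verification, realize each type.
The order of vanishing along the germ $\{y=0\}$ defines a semi-norm with kernel $(y)$, hence a point of type 1.
The monomial valuation $v_{1,1}\big(\sum a_{ij}x^iy^j\big)=\min\{i+j \mid a_{ij}\neq 0\}$ is the order of vanishing along the exceptional divisor $E$ of the blowup of the origin, a point of type 2: the residue field of the associated valued field is $k(E)$, a nontrivial extension of $k$.
The monomial valuation $v_{1,\sqrt 2}\big(\sum a_{ij}x^iy^j\big)=\min\{i+\sqrt{2}\,j \mid a_{ij}\neq 0\}$ has value group $\bb{Z}+\sqrt{2}\,\bb{Z}$ of rational rank 2, hence defines a point of type 3.
Finally, in characteristic zero, points of type 4 arise for instance as orders of vanishing along Puiseux-type series $y=\sum_i a_i x^{\beta_i}$ whose exponents $\beta_i\in\bb{Q}$ have unbounded denominators: such a series is transcendental over $k((x))$, so the corresponding semi-norm is a norm, with value group of rational rank 1 and residue field $k$.
\end{rmk}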
\begin{rmk}
Let $(Y,D)$ be a modification of $(X,0)$ and let $x$ be a point of $\NL(X,0)$.
The existence of a morphism $\overline\alpha\colon \Spec \big( \mathscr H(x)^\circ \big) \to Y$ associated with $x$ implies that the residue field $\widetilde{\mathscr H(x)}$ of $\mathscr H(x)$ is a field extension of the schematic residue field of $Y$ at $\mathrm{c}_Y(x)$.
It follows that $\cent_Y(x)$ is a closed point of $Y$ if $x$ is not divisorial.
Proposition~\ref{prop:topo_NL} implies then that such a point $x$ has a basis of neighborhoods of the form $\cent_Y^{-1}\big({\mathrm c_Y(x)}\big)$, for $(Y,D)$ ranging over all the modifications of $(X,0)$.
\end{rmk}
\begin{rmk}\label{rmk:divisorial}
Let $(Y,D)$ be a modification of $(X,0)$ and let $x$ be a point of $\NL(X,0)$.
If $x$ is divisorial, then $\mathrm{c}_Y(x)$ is either a closed point of $Y$ or the generic point of an irreducible curve in $Y$.
Moreover, it is always possible to find a modification $(Y',D')$ of $(X,0)$ that dominates $(Y,D)$ and such that $\mathrm{c}_{Y'}(x)$ is the generic point of an irreducible curve $E$ in $Y'$, which explains our terminology.
Furthermore, the residue field $\widetilde{\mathscr H(x)}$ of ${\mathscr H(x)}$ is then isomorphic to the function field $k(E)$ of $E$.
\end{rmk}
\subsection{Formal modifications and fibers of the center maps}
Let $X$ be a normal $k$-surface and let $0$ be a closed point of $X$.
We start by fixing some notation.
A modification $(Y,D)$ of $(X,0)$ is said to be a \emph{resolution} of $(X,0)$ if $Y$ is regular.
If moreover the irreducible components of $D$ are all non-singular, intersect transversally and no three of them meet at a point, then $(Y,D)$ is said to be a \emph{good resolution} of $(X,0)$.
Whenever $C$ is a germ of (formal) curve in $(X,0)$, then a good resolution $(Y,D)$ of $(X,0)$ is also said to be a good resolution of $C$ if the strict transform of $C$ in $Y$ meets $D$ transversally.
We mentioned that the non-archimedean link of a pair $(Y,D)$ only depends on an infinitesimal neighborhood of $D$ in $Y$.
The notions above can then be slightly generalized by working in a suitable category of formal $k$-schemes.
A formal $k$-scheme $\mathscr Y$ is called \emph{special} if it is covered by formal subschemes $\mathscr Y_i$ such that each $\mathscr Y_i$ can be written as the formal completion of a $k$-scheme of finite type $Y_i$ along a closed subscheme $D_i$.
We can then define the non-archimedean link $\NL(\mathscr Y)$ of $\mathscr Y$ by gluing the non-archimedean links $\NL(Y_i,D_i)$.
Observe that when $\mathscr Y=\widehat{Y/D}$ is the formal completion of $Y$ along $D$, then $\NL(\mathscr Y)=\NL(Y,D)\cong\NL(X,0)$.
We also obtain a center map $\cent_\mathscr Y\colon \NL(\mathscr Y) \to \mathscr Y_0$, where the reduction $\mathscr Y_0$ of $\mathscr Y$ is a scheme of finite type over $k$ covered by the reduced schemes associated to the $D_i$.
A special formal $k$-scheme $\mathscr Y$ is called a \emph{formal modification} of the pair $(X,0)$ if it is normal and it comes endowed with an adic morphism $f\colon\mathscr Y\to\mathscr X=\widehat{X/0}$ that induces an isomorphism of non-archimedean links $\NL(\mathscr Y)\stackrel{\sim}{\longrightarrow}\NL(X,0)$ and such that the fiber product $\mathscr Y\times_\mathscr X \{0\}$ is a Cartier divisor of $\mathscr Y$.
If $(Y,D)\to(X,0)$ is a modification, then the formal completion $\mathscr Y=\widehat{Y/D}\to\mathscr X$ of $Y$ along $D$ is a formal modification of $(X,0)$.
Such a formal modification $\mathscr Y$ of $(X,0)$ is said to be \emph{algebraizable}, and a modification $(Y,D)$ of $(X,0)$ such that $\mathscr Y\to\mathscr X$ is isomorphic to $\widehat{Y/D}\to\mathscr X$ is called an \emph{algebraization} of $\mathscr Y$.
If $\mathscr Y$ is a formal modification of $(X,0)$ such that $\mathscr Y$ is regular, then by \cite[Proposition~7.6]{fantini:normspaces} $\mathscr Y$ is algebraized by a resolution of $(X,0)$.
Observe that a formal modification $\mathscr Y$ of $(X,0)$ induces an isomorphism of non-archimedean links $\NL(\mathscr Y)\cong\NL(X,0)$, and therefore also a center map $\cent_\mathscr Y\colon \NL(X,0) \to \mathscr Y_0$.
If $\mathscr Y$ is a formal modification of $(X,0)$, we denote by $\Div(\mathscr Y)$ the finite nonempty subset of $\NL(X,0)$ consisting of the divisorial points associated with the components of $\mathscr Y_0$.
Whenever $\mathscr Y$ is algebraized by a modification $(Y,D)$, we will also denote $\Div(\mathscr Y)$ by $\Div(Y)$.
The following proposition is a simple consequence of Lemmas 7.14 and 9.3 of \cite{fantini:normspaces}.
\begin{prop}\label{proposition_propertiesNL}
Let $(X,0)$ be a normal surface singularity and let $\mathscr Y$ be a formal modification of $(X,0)$.
Then the following properties hold:
\begin{enumerate}[label=(\roman{enumi}),ref=\thethm.({\roman*})]
\item \label{proposition_propertiesNL_2}
The map $\cent_{\mathscr Y}^{-1}$ gives a bijection between the set of closed points of $\mathscr Y_0$ and the set of connected components of $\NL(X,0)\setminus \Div(\mathscr Y)$.
\item \label{proposition_propertiesNL_3}
Let $W$ be a connected component of $\NL(X,0)\setminus \Div(\mathscr Y)$, let $p$ be the corresponding closed point of $\mathscr Y_0$, let $\mathscr Y_p=\Spf \big(\widehat{\mc{O}_{\mathscr Y,p}} \big)$ be the formal completion of $\mathscr Y$ along $p$, let $\mc I_p$ be the ideal of $\widehat{\mc{O}_{\mathscr Y,p}}$ which defines $\mathscr Y_0$ locally around $p$, and denote by $\varphi\colon\NL(\mathscr Y_p)\to \NL(X,0)$ the map induced by the composition $\mathscr Y_p\to\mathscr Y\to\mathscr X$.
Then $\varphi$ maps the zero locus $V(\mc I_p)$ of $\mc I_p$ in $\NL(\mathscr Y_p)$ to a finite set of type 1 points of $\NL(X,0)$, and it induces an isomorphism $\NL(\mathscr Y_p)\setminus V(\mc I_p) \cong W \subset \NL(X,0)$.
\end{enumerate}
\end{prop}
\begin{rmk}\label{remark_closure_component}
Let $\mathscr Y$ and $W$ be as above.
It follows from the first part of the proposition that the closure $\overline W$ of $W$ in $\NL(X,0)$ is obtained by adding to $W$ a subset of $\Div(\mathscr Y)$.
Indeed, the union of all the connected components of $\NL(X,0)\setminus \Div(\mathscr Y)$ different from $W$, that is, the complement of $W\cup \Div(\mathscr Y)$ in $\NL(X,0)$, is an open subset of $\NL(X,0)$.
\end{rmk}
\subsection{Existence of formal modifications and resolutions}
\label{subsection_existencemodifications}
We will now explain how to produce formal modifications of $(X,0)$ with prescribed exceptional divisors and how to detect when such a modification is a good resolution of $(X,0)$.
\begin{thm}\label{theorem_existence_modifications}
Let $S$ be a finite subset of divisorial points of $\NL(X,0)$.
Then there exists a formal modification $\mathscr Y$ of $(X,0)$ such that $\Div(\mathscr Y)=S$.
\end{thm}
\begin{proof}
As readily follows from the observation made in Remark~\ref{rmk:divisorial}, there exists a resolution $(Y,D)$ of $(X,0)$ such that $S\subset \Div(Y)$.
The contractibility criterion of Grauert-Artin \cite{artin:contractibilityalgebraicspaces} guarantees that we can contract every component of $D$ which does not correspond to an element of $S$, yielding a normal algebraic space over $k$ with a proper morphism $f$ to $X$.
Indeed, the intersection matrix of the divisor that we want to contract is negative definite because the entire exceptional divisor of $Y$ can be contracted to the point $0$ in $X$.
By taking the formal completion of this algebraic space along $f^{-1}(0)$ we obtain the formal modification $\mathscr Y$ that we wanted.
\end{proof}
\begin{rmk}
If we are working over the field of complex numbers, we can apply Grauert's contractibility criterion \cite{grauert:ubermodifikationen} instead of Artin's and obtain $\mathscr Y$ as a complex analytic space.
Of course this is the same as analytifying the algebraic space given by Artin's criterion.
On the other hand, observe that if $k$ is the algebraic closure of $\mathbb F_p$ for some prime $p$ or if $(X,0)$ is a rational singularity, then the contractibility results \cite[2.3, 2.9]{artin:contractibilitycriterion} ensure that $\mathscr Y$ is an algebraic variety.
\end{rmk}
Recall that each type 1 point can be seen as the equivalence class of the order of vanishing along an irreducible germ of a formal curve on $(X,0)$.
In particular, to a finite set of type 1 points of $\NL(X,0)$ is associated the germ of a curve on $(X,0)$.
\begin{thm}\label{theorem_characterization_resolutions}
Let $\mathscr Y$ be a formal modification of $(X,0)$, let $T$ be a finite set of type 1 points of $\NL(X,0)$ and let $C$ be the germ of curve on $(X,0)$ associated with $T$.
Then $\mathscr Y$ can be algebraized by a good resolution of both $(X,0)$ and the germ $C$ if and only if each connected component $V$ of $\NL(X,0)\setminus \Div(\mathscr Y)$ has one of the following three forms:
\begin{enumerate}
\item $V$ is a disc and $V\cap T=\emptyset$;
\item $V$ is an annulus and $V\cap T=\emptyset$;
\item $V$ is a disc and $V\cap T$ consists of a single point, which can be taken to be the origin of $V$.
\end{enumerate}
\end{thm}
Observe that in the statement we can take $T$ to be empty, so that there is no curve $C$ and we simply obtain a good resolution of $(X,0)$.
The following proof follows the lines of the proof of Proposition 10.2 of \cite{fantini:normspaces}, where more details are given.
\begin{proof}
By Proposition~\ref{proposition_propertiesNL_2} the connected components of $\NL(X,0)\setminus \Div(\mathscr Y)$ are the inverse images through the center map of the closed points of $\mathscr Y_0$.
Let $W$ be such a component, let $p$ be the corresponding closed point of $\mathscr Y_0$, and let $\mathcal I_p$ be the image of the ideal defining $\mathscr Y_0$ in $\mathcal O_{\mathscr Y,p}$.
By Cohen's structure theorem, $\mathscr Y$ is regular at $p$ if and only if $\widehat{\mathcal O_{\mathscr Y,p}}\cong k[[x,y]]$, that is, if and only if $\NL(\mathscr Y_p)\cong \NL(\mathbb A^2_k,0)$.
Moreover, $p$ is a smooth point of $\mathscr Y_0$ if and only if we can take $\mathcal I_p=(x)$ in the isomorphism above, while $p$ is an ordinary double point of $\mathscr Y_0$ if and only if we can take $\mathcal I_p=(xy)$.
Since by Proposition~\ref{proposition_propertiesNL_3} we have a canonical isomorphism $\cent_\mathscr Y^{-1}(p)\cong\NL({\mathscr Y_p})\setminus V(\mathcal I_p)$, it follows from Lemma~\ref{lemma_disc_annulus} that $\mathscr Y$ is a (formal) good modification of $(X,0)$ if and only if every connected component of $\NL(X,0)\setminus \Div(\mathscr Y)$ is either a disc or an annulus.
Whenever this is the case, $\mathscr Y$ is algebraized by a good resolution $(Y,D)$ of $(X,0)$ by \cite[Proposition 7.6]{fantini:normspaces}.
Finally, by definition $(Y,D)$ is also a good resolution of the germ $C$ if and only if the strict transform of $C$ in $Y$ meets $D$ transversally.
This means that if $p$ is a point of $D$ contained in the strict transform of $C$ we can find an isomorphism $\widehat{\mathcal O_{Y,p}} \cong k[[x,y]]$ as above with $\mathcal I_p=(x)$, and the strict transform locally defined by $y$ at $p$.
This corresponds precisely to the conditions on $T$ in the statement.
\end{proof}
\subsection{Structure of $\NL(X,0)$}
The structure of the non-archimedean link $\NL(X,0)$ can be described using a resolution of the singularities of the pair $(X,0)$ and the results of the previous section.
\begin{cor}\label{corollary_global_topology}
Let $(Y,D)$ be a good resolution of $(X,0)$ and let $y$ be a closed point of $D$.
If $D$ is smooth (respectively singular) at $y$ then $c_Y^{-1}(y)$ is a disc (resp. an annulus), and the boundary
of $c_Y^{-1}(y)$ in $\NL(X,0)$ consists of one type 2 point (resp. two type 2 points).
In particular, $\NL(X,0) \setminus \Div(Y)$ is a disjoint union of discs and finitely many annuli.
\end{cor}
\begin{proof}
Since $Y$ is regular and $k$ is algebraically closed, $Y$ is smooth at $y$, so by Cohen's structure theorem we can find an isomorphism $\widehat{\mathcal O_{Y,y}} \cong k[[U,V]] \cong \widehat{\mathcal O_{\mathbb A^2_k,0}}$ such that a local equation for $D$ at $y$ is either $U$ (if $y$ is a smooth point of $D$) or $UV$ (if $y$ is a double point of $D$).
Since $\NL\big(\mathscr Y_{y}\big)\cong\NL\big(\mathbb A^2_k,0\big)$, Proposition~\ref{proposition_propertiesNL_3} and Lemma~\ref{lemma_disc_annulus} imply that $\cent_Y^{-1}\big(y \big)$ is isomorphic to an open $k((t))$-analytic disc in the first case and an annulus in the second case, where the $k((t))$-analytic structure is defined by sending $t$ to a local equation of $D$ at $y$.
The fact that the boundary of $\cent_Y^{-1}(y)$ consists of one or two points follows from Remark~\ref{remark_closure_component}.
The last statement is now a consequence of Proposition~\ref{proposition_propertiesNL_2}.
\end{proof}
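To illustrate the corollary in the simplest case (a sketch, with no claim of novelty): let $(X,0)=(\bb{A}^2_k,0)$ and let $(Y,D)$ be the blowup of the origin, so that $D=E\cong\bb{P}^1_k$ is smooth and $\Div(Y)$ consists of the single divisorial point associated with $E$. The corollary then asserts that
\[
\NL(\bb{A}^2_k,0)\setminus\Div(Y) \;=\; \bigsqcup_{y\in E(k)} \cent_Y^{-1}(y)
\]
is a disjoint union of discs, one for each closed point of $E$, each having the divisorial point of $E$ as its unique boundary point; no annuli occur since $D$ has no double points.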
We can now deduce some results about the type 1 points of $\NL(X,0)$.
\begin{cor}\label{corollary_topology}
Any type 1 point $x$ of $\NL(X,0)$ admits a basis of neighborhoods consisting of discs centered in $x$.
Moreover, the set of type 1 points is dense in $\NL(X,0)$.
\end{cor}
\begin{proof}
By Proposition~\ref{prop:topo_NL} a basis of neighborhoods of $x$ consists of the open subsets of $\NL(X,0)$ of the form $\cent_Y^{-1}\big(\overline{\cent_Y(x)}\big)$, for $(Y,D)$ ranging among the good resolutions of $(X,0)$, since the family of good resolutions is cofinal in the partially ordered set of modifications of $(X,0)$.
Since $x$ is not of type $2$, $y=\cent_Y(x)$ is a closed point of $D$; up to replacing $(Y,D)$ by a good resolution of the germ of formal curve associated with $x$, we can find an isomorphism $\widehat{\mathcal O_{Y,y}} \cong k[[U,V]] \cong \widehat{\mathcal O_{\mathbb A^2_k,0}}$ such that $U$ is a local equation for $D$ at $y$ and $V$ defines the germ of curve at $y$ associated with the type 1 point $x$.
This shows that $\cent_Y^{-1}\big(\overline{\cent_Y(x)}\big)=\cent_Y^{-1}\big(\cent_Y(x)\big)$ is an open $k((t))$-analytic disc centered in $x$.
It remains to prove the density of the set of type 1 points of $\NL(X,0)$.
As noted in Remark~\ref{remark_closure_component}, each of the divisorial points associated with a good resolution of $(X,0)$ is not isolated in $\NL(X,0)$.
The result then follows from Corollary~\ref{corollary_global_topology}, Lemma~\ref{lemma_disc_annulus}, and the fact that the set of type 1 points is dense in a $k((t))$-analytic annulus $A=\big\{x\in \bb{A}^{1,\mathrm{an}}_{k((t))} \,\big|\, |t|_{k((t))}<|T(x)|<1\big\}$.
The latter is a classical fact that can be proven directly by exhibiting suitable type 1 points; as an example, the semi-norm that sends an element $P$ of $k((t))[T]$ to $|P(x)| = |P(t^{1/2})|_{k((t))}$ is a type 1 point $x$ of $A$.
\end{proof}
\begin{rmk}
It is also true that each of the sets consisting of type 2, type 3 or type 4 points is dense in $\NL(X,0)$, but we will not need this fact.
\end{rmk}
Any bounded analytic function on the complement of finitely many type 1 points extends to $\NL(X,0)$.
This follows from the fact that $\NL(X,0)$ is locally a $k((t))$-analytic space combined with the more general result \cite[Proposition 3.3.14]{berkovich:book}.
However, we include a proof here since it is simpler to deduce the result in our very special case from Corollary~\ref{corollary_topology}.
\begin{lem}\label{lemma_extension_to_type1}
Let $U$ be an open subset of $\NL(X,0)$ and let $S$ be a finite subset of $U$ consisting of type $1$ points.
Then the inclusion $U\setminus S\hookrightarrow U$ induces an isomorphism
\[
\mathcal O_{\NL(X,0)}^\circ \big( U \big) \cong \mathcal O_{\NL(X,0)}^\circ \big( U\setminus S \big).
\]
\end{lem}
\begin{proof}
Since $\mathcal O^\circ_{\NL(X,0)}$ is a sheaf it is enough to prove the result when $S=\{x\}$ consists of a single type $1$ point and $U$ is some neighborhood of $x$ in $\NL(X,0)$.
Hence, by Corollary~\ref{corollary_topology} we can assume without loss of generality that $U$ is a disc centered in $x$.
Then any bounded analytic function on $U\setminus\{x\}$ admits a Laurent expansion $\sum_{i\in\bb{Z}} a_i T^i$, where $T$ is a coordinate function on $U$; its boundedness on the annuli $\{\rho<|T|<1\}$ as $\rho$ goes to $0$ forces $a_i=0$ for every $i<0$, so that all such functions extend to $x=V(T)$.
\end{proof}
The results of this section can also be used to deduce a characterization of the \emph{essential valuations} of a surface singularity, as appearing in the Nash problem (see \cite[Theorems 10.4 and 10.8]{fantini:normspaces}), and to give another proof of the existence of resolutions for surfaces (see \cite[Theorem 8.6]{FantiniTurchetti2017}).
\section{Self-similarity of sandwiched singularities}\label{section_preliminariessandwiched}
In this section we introduce sandwiched singularities and prove the implications \ref{condition_sandwiched} $\implies$ \ref{condition_strongly_selfsim} $\implies$ \ref{condition_valtree} $\implies$ \ref{condition_selfsim} of Theorem~\ref{mainthm}.
\subsection{Sandwiched singularities}\label{ssec:sandwich}
\begin{defi}\label{def-sandwich}
Let $\mathcal O$ be a normal 2-dimensional complete local ring with algebraically closed residue field $k$.
We say that $\mathcal O$ is \emph{sandwiched} if there exist two algebraic surfaces $X_0$ and $Y$ over $k$, with $X_0$ smooth over $k$ and $Y$ normal, a proper birational morphism $Y\to X_0$, and a point $y$ in $Y$ such that $\mathcal O\cong\widehat{\mathcal O_{Y,y}}$.
If $X$ is an algebraic surface over $k$ and $0$ is a normal point of $X$, we say that $(X,0)$ is a sandwiched singularity if the complete local ring $\widehat{\mathcal O_{X,0}}$ of $X$ in $0$ is sandwiched.
\end{defi}
Note that when working over $\bb{C}$ our definition is equivalent to \cite[Definition II.1.1]{spivakovsky:sandsingdesingsurfNashtransf} thanks to Artin's approximation theorem \cite[Corollary 1.6]{artin:solaneq}.
\medskip
Fix any smooth algebraic surface $X_0$ over $k$, such as for example $X_0 = \bb{A}^2_k$.
Let $p$ be a point of $X_0$, let $\varphi\colon Y\to X_0$ be a sequence of point blowups centered above $p$, and let $E$ be a connected divisor on $Y$ obtained by removing some irreducible components from $\varphi^{-1}(p)$.
Since the divisor $\varphi^{-1}(p)$ can be contracted to $(X_0,p)$ and $X_0$ is smooth, Artin's contractibility criterion \cite[Theorem 2.3]{artin:contractibilitycriterion} ensures that the divisor $E$ can be contracted to a point $0$ in a normal algebraic variety $X$.
The normal singularity $(X,0)$ is sandwiched, and any sandwiched surface singularity is isomorphic \'etale locally (or analytically, if $k=\bb{C}$) to such a singularity.
\begin{rmk}\label{rmk:sandwichoverC}
Suppose $(X,0)$ is sandwiched.
Then in the category of analytic spaces over $k$, or in the complex analytic category when working over $\mathbb C$, one can always find a morphism $X\to X_0$ to a smooth surface.
If $\mu\colon Y\to X$ is a resolution of $(X,0)$, we get back $(X,0)$ by contracting the divisor $\mu^{-1}(0)$ of $Y$.
This explains the terminology, as $X$ is sandwiched between the smooth surfaces $X_0$ and $Y$.
\end{rmk}
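A standard example, which we sketch without detailed verification: the $A_n$-singularity, with complete local ring $k[[x,y,z]]/(xy-z^{n+1})$, is sandwiched. Starting from a point $p$ of a smooth surface $X_0$ and blowing up $n+1$ times, each time at a point of the most recently created exceptional curve away from the older components, one obtains $\varphi\colon Y\to X_0$ whose exceptional divisor is a chain of $n$ curves of self-intersection $-2$ followed by a $(-1)$-curve; removing the $(-1)$-curve and contracting the resulting chain $E$ of $(-2)$-curves produces a normal surface $X$ with an $A_n$-singularity at the image point of $E$.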
We start by proving a stronger form of the implication \ref{condition_sandwiched} $\implies$ \ref{condition_valtree}.
\begin{lem}\label{lem:structure sandw}
Suppose $(X,p)$ is a sandwiched singularity and $y$ is a point of $\NL(\bb{A}^2_k,0)$.
Then there exists a finite set $T$ of type 1 points in $\NL(X,p)$ such that $\NL(X,p)\setminus T$ is isomorphic
to an open subset of $\NL(\bb{A}^2_k,0)$ containing $y$.
\end{lem}
\begin{proof}
We may assume that $\mc{O}_{X,p}$ is not a regular ring.
Recall that $\NL(X,p)$ only depends on the formal completion of the local ring $\mc{O}_{X,p}$. We may thus assume
that there exist a proper birational morphism $\varphi \colon Y \to \bb{A}^2_k$ from a smooth algebraic variety $Y$ to the affine plane, which is an isomorphism outside $\varphi^{-1}(0)$,
and a connected divisor $E \subset \varphi^{-1}(0)$ such that $X$ is the surface obtained from $Y$ by contracting $E$ to a point.
We denote by $\mu \colon Y \to X$ the contraction map
so that $p= \mu(E)$.
Note that $\varphi$ factors through $\mu$, so there exists a regular birational map
$\pi\colon X \to \bb{A}^2_k$ mapping $p$ to $0$.
It follows from \cite[Corollary 1.14]{spivakovsky:sandsingdesingsurfNashtransf} that we may further impose that any
irreducible component of $\varphi^{-1}(0)$ that is not included in $E$ has self-intersection $-1$.
Since $Y$ is regular, $\varphi$ is a sequence of point blow-ups, and hence factors through the blow-up of the origin $\varpi\colon Y_1 \to \bb{A}^2_k$.
The map $Y \to Y_1$ is not an isomorphism; therefore the strict transform $E_0$ of $\varpi^{-1}(0)$ in $Y$ has self-intersection at most $-2$, hence is included in $E$.
Let $T$ be the zero locus in $\NL(X,p)$ of the ideal defining $\pi^{-1}(0)$ around $p$ and let $\mathscr Y$ be the formal completion of $X$ along $\pi^{-1}(0)$.
By Proposition~\ref{proposition_propertiesNL_3}, $T$ is a finite set of type 1 points of $\NL(X,p)$, and $\pi$ induces an isomorphism between $\NL(X,p) \setminus T$ and the open subset $U$ of $\NL(\bb{A}^2_k,0)$ consisting
of the points whose center in $X$ is equal to $p$.
Note that, by the commutativity of the center maps, $U$ is also equal to the set of points of $\NL(\bb{A}^2_k,0)$ whose center on $Y$ is contained in $E=\mu^{-1}(p)$.
If $U$ contains $y$ there is nothing left to prove.
If this is not the case, observe that the linear group $\GL(2,k)$ acts naturally on $\NL(\bb{A}^2_k,0)$ in such a way that any element $g\in \GL(2,k)$ defines an isomorphism of non-archimedean links $g_\bullet \colon \NL(\bb{A}^2_k,0) \to \NL(\bb{A}^2_k,0) $.
The linear map $g$ also lifts to an automorphism $L_g \colon Y_1\to Y_1 $ whose action on the exceptional divisor $\varpi^{-1}(0)$ is given by the projectivization of $g$, and satisfies $\mathrm{c}_{Y_1} (g_\bullet (y)) = L_g( \mathrm{c}_{Y_1}(y))$.
We may thus find $g\in \GL(2,k)$ such that $ L_g( \mathrm{c}_{Y_1}(y))$, and hence $\mathrm{c}_{Y_1} (g_\bullet (y))$, does not belong to the indeterminacy locus of
the birational map $ \varphi^{-1} \circ \varpi \colon Y_1 \to Y$.
In particular, the single irreducible component of $\varphi^{-1}(0)$ containing the point $\mathrm{c}_{Y} (g_\bullet (y))$ is $E_0$, so that $\mathrm{c}_{Y}(g_\bullet (y))\in E$, and $g_\bullet (y)$ belongs to $U$.
It follows that the open subset $g_\bullet^{-1} (U)$ of $\NL(\bb{A}^2_k,0)$, which is isomorphic to $\NL(X,p) \setminus T$ because $U$ is and $g_\bullet$ is an isomorphism, contains $y$, as required.
This concludes the proof of the lemma.
\end{proof}
\subsection{The implication \ref{condition_sandwiched} $\implies$ \ref{condition_strongly_selfsim}}
Pick any point $x\in \NL(X,0)$ which is not of type 2 and an open set $U$ containing $x$.
By Proposition~\ref{prop:topo_NL} one may choose a good resolution $Y \to X$ and a point $p$ on its exceptional divisor
such that $x\in \mathrm{c}_Y^{-1}(p)\subset U$.
Proposition~\ref{proposition_propertiesNL} (ii) implies that the good resolution $Y \to X$ induces an isomorphism $\mathrm{c}_Y^{-1}(p) \cong \NL(\mathscr Y_p) \setminus V(\mathcal{J}_p)$. Observe that the latter non-archimedean link is isomorphic to $\NL(\bb{A}^2_k,0) \setminus S$ where $S$ is a set of type 1 points of cardinality $1$ or $2$ depending on whether $p$ is a smooth point of the exceptional divisor or not. Denote by $y$ the image of $x$ under this isomorphism.
By Lemma~\ref{lem:structure sandw} there exists a finite set $T$ of type 1 points and an isomorphism of non-archimedean links $\psi$ between $\NL(X,0)\setminus T$ and an open subset of $\NL(\bb{A}^2_k,0)$ containing $y$.
Adding $\psi^{-1}(S)$ to $T$ if necessary we may suppose that $\psi(\NL(X,0)\setminus T) \subset \NL(\bb{A}^2_k,0)\setminus S$.
We conclude by observing that composing $\psi$ with the inverse of the isomorphism above yields an isomorphism from $\NL(X,0)\setminus T$ onto an open subset of $\mathrm{c}_Y^{-1}(p)$ containing $x$. \hfill$\qed$
\subsection{The implication \ref{condition_strongly_selfsim} $\implies$ \ref{condition_valtree}}
By Corollary~\ref{corollary_topology}, one can find a disc $D$ in $\NL(X,0)$. By~\ref{condition_strongly_selfsim}, there exists a finite set $T$ of type 1 points and an open subset of $D$
which is isomorphic to $\NL(X,0)\setminus T$. By Lemma~\ref{lemma_disc_annulus}, $D$ can be realized as an open subset of $\NL(\bb{A}^2_k,0)$.
\hfill$\qed$
\subsection{The implication \ref{condition_valtree} $\implies$ \ref{condition_selfsim}}
Let $V$ be any open subset of $\NL(X,0)$.
By Corollary~\ref{corollary_topology}, $V$ contains a point of type 1, and therefore it contains a disc $D$ by the same corollary.
Now fix a finite set $T$ of type 1 points in $\NL(X,0)$ such that condition \ref{condition_valtree} holds.
Adding one more point to $T$ if necessary, one may assume that
$\NL(X,0)\setminus T$ is isomorphic to an open subset of $\NL(\bb{A}^2_k,0)\setminus V(t)$, which is a disc by Lemma~\ref{lemma_disc_annulus}.
We conclude that $\NL(X,0)\setminus T$ can be realized as an open subset of $D\subset V$.
\hfill$\qed$
\section{Self-similar graphs}\label{section_graphs}
In this section we introduce the notions of self-similar and sandwiched graphs
and prove that these notions are in fact equivalent.
\subsection{Modifications of graphs}
Let us introduce some terminology first. A \emph{graph} $\Gamma$ is the data of a finite set of vertices $V= V(\Gamma)$ and a subset $E= E(\Gamma)$ of pairs in $V$
(the set of edges). In particular, our graphs have no loops. Two vertices $v_1$ and $v_2$ are said to be connected (or joined) by an edge when $\edge{v_1}{v_2}\in E$.
In that case, we write $v_1 \sim v_2$.
A graph is connected if for any two vertices $v_0, v_1$ there exists a sequence of edges
$\edge{v_0}{w_1}, \edge{w_1}{w_2}, \ldots, \edge{w_n}{v_1}$ joining them. A \emph{weighted graph} is a graph together with a function
$V\to \bb{N}\times \bb{N}^*$. We shall write this function as $v\mapsto (g(v),e(v))$.
The \emph{link} of a vertex $v$ of a graph $\Gamma$ is by definition the set $L(v)$ of vertices $w$ such that $\edge{v}{w}$ is an edge.
The cardinality of $L(v)$ is the \emph{valency} of $v$.
\begin{rmk}
In order to motivate our further definitions, let us indicate in which situation graphs will arise in the sequel.
We shall consider the dual graph of a resolution of a normal surface singularity,
so that vertices are in bijection with the exceptional components of this resolution. The weight function is then given
by the genus and the opposite of the self-intersection of each component.
\end{rmk}
If $\Gamma$ is a graph, a \emph{simple modification of} $\Gamma$ \emph{centered at a vertex} $v$ is a graph $\Gamma'$ such that
its set of vertices is $V'= V \cup\{v'\}$,
its set of edges is $E' = E \cup \big\{ \edge{v}{v'} \big\}$,
and its weight function $(g',e')$ satisfies $g'(w) = g(w)$ and $e'(w) = e(w)$ for all $w\in V\setminus\{v\}$, $g'(v) = g(v)$, $g'(v') =0$, $e'(v) = e(v) +1$, and $e'(v')=1$.
A \emph{simple modification of} $\Gamma$ \emph{centered at an edge} $\edge{v_0}{v_1}\in E$ is a graph $\Gamma'$ such that
its set of vertices is $V'= V \cup\{v'\}$,
its set of edges is $ E' = \big(E \setminus \big\{ \edge{v_0}{v_1}\big\}\big) \cup \big\{\edge{v_0}{v'}, \edge{v_1}{v'}\big\}$,
and its weight function $(g',e')$ satisfies $g'(w) = g(w)$ for all $w\in V$, $g'(v') =0$, $e'(w) = e(w)$ for all $w\in V \setminus \{v_0,v_1\}$, $e'(v_0) = e(v_0) +1$, $e'(v_1) = e(v_1) +1$, and $e'(v')=1$.
\smallskip
A \emph{modification} of a graph $\Gamma$ is a graph $\Gamma'$ which is obtained from $\Gamma$ by a finite sequence of simple modifications (centered either at a vertex or at an edge).
A modification is nontrivial whenever it is not an isomorphism.
We shall write $\Gamma' \rightsquigarrow \Gamma$ to say that $\Gamma'$ is a modification of $\Gamma$.
Note the following transitivity property: if $\Gamma''\rightsquigarrow \Gamma'$ and $\Gamma' \rightsquigarrow \Gamma$ are modifications, then
$\Gamma'' \rightsquigarrow \Gamma$ is also a modification.
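To make this bookkeeping concrete, here is a small Python sketch (ours, purely illustrative and not part of the paper) of the two simple modifications, with weights stored as pairs $(g,e)$ and edges as unordered pairs of vertices:

```python
# Illustrative sketch of simple modifications on a weighted graph.
# Weights are pairs (g, e); edges are frozensets of two vertices.
# All function and variable names here are ours, chosen for this example.

def simple_modification_vertex(vertices, edges, v, new):
    """Simple modification centered at the vertex v: add a new vertex
    with weight (0, 1), join it to v, and increase e(v) by 1."""
    g, e = vertices[v]
    vertices = {**vertices, v: (g, e + 1), new: (0, 1)}
    edges = edges | {frozenset({v, new})}
    return vertices, edges

def simple_modification_edge(vertices, edges, v0, v1, new):
    """Simple modification centered at the edge {v0, v1}: replace it by
    {v0, new} and {v1, new}, give the new vertex weight (0, 1), and
    increase e(v0) and e(v1) by 1."""
    g0, e0 = vertices[v0]
    g1, e1 = vertices[v1]
    vertices = {**vertices, v0: (g0, e0 + 1), v1: (g1, e1 + 1), new: (0, 1)}
    edges = (edges - {frozenset({v0, v1})}) | {frozenset({v0, new}),
                                               frozenset({v1, new})}
    return vertices, edges

# A modification is any finite composition of these two operations.
# Starting from the one-vertex graph of weight (0, 1):
V, E = {"a": (0, 1)}, set()
V, E = simple_modification_vertex(V, E, "a", "b")
V, E = simple_modification_edge(V, E, "a", "b", "c")
print(V)  # -> {'a': (0, 3), 'b': (0, 2), 'c': (0, 1)}
```

Note how each operation increases $e$ by exactly one at every vertex gaining an edge to the new vertex; this is what makes the quantity $\sum_v e(v)$ monotone along modifications.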
Observe that a modification $\Gamma' \rightsquigarrow \Gamma$ yields a canonical inclusion $\imath\colon V(\Gamma) \to V(\Gamma')$ of the set of vertices of $\Gamma$ into $\Gamma'$.
The image of a vertex $v$ by $\imath$ is called its \emph{strict transform}. Note that in general two vertices in $V$ joined by an edge need not have strict transforms joined by an edge in $\Gamma'$.
We can also define the \emph{total transform} of a connected subgraph $\Delta$ of $\Gamma$ by a modification $\Gamma' \rightsquigarrow \Gamma$.
It is sufficient to explain the construction of the total transform in the case of a simple modification.
If the center of the modification is a vertex not belonging to $\Delta$ or an edge $\edge{v}{w}$ with $v,w\notin \Delta$, then the total transform of $\Delta$ is the copy of $\Delta$ in $\Gamma'$ whose vertices are the strict transform of the vertices of $\Delta$.
If the center is a vertex of $\Delta$, then its total transform is obtained from $\Delta$ by adding the new edge and the new vertex.
If the center is an edge $\edge{v}{w}$ of $\Delta$, then its total transform is the graph whose vertices are the vertices of $\Delta$ together with the new vertex,
and edges are edges of $\Delta$ different from $\edge{v}{w}$ together with the two new edges.
Finally, if the center is an edge $\edge{v}{w}$ with $v\in \Delta$ and $w\notin \Delta$, then the vertices of the total transform are the vertices of $\Delta$ together with the new vertex, and edges are edges of $\Delta$ together with the only new edge that contains $v$.
It is not difficult to see that the total transform of a connected subgraph of $\Gamma$ is a connected subgraph of $\Gamma'$.
\smallskip
We also introduce the notion of \emph{embedding} of a (weighted) graph $\Gamma$ into another one $\Gamma'$.
This is an injective map $\varphi\colon V(\Gamma) \to V(\Gamma')$ such that $v \sim w$ implies $\varphi(v) \sim \varphi(w)$, and such that $g'(\varphi(v))= g(v)$ and $e'(\varphi(v)) =e(v)$ for every vertex $v$.
We shall write $\varphi \colon \Gamma \hookrightarrow \Gamma'$, and denote by $\varphi(\Gamma)$ the subgraph of $\Gamma'$ whose vertices are $\varphi(v)$ with $v\in V(\Gamma)$ and edges $\edge{\varphi(v)}{\varphi(w)}$
with $\edge{v}{w} \in E(\Gamma)$. An \emph{isomorphism} is an embedding such that the induced maps on vertices and edges are bijective.
\smallskip
Suppose we are given two modifications $\Gamma' \rightsquigarrow \Gamma$ and $\Delta' \rightsquigarrow \Delta$
and two embeddings $\varphi \colon \Delta \hookrightarrow \Gamma$, $\varphi' \colon \Delta' \hookrightarrow \Gamma'$. Then we say
that the pair $(\Gamma' \rightsquigarrow \Gamma, \Delta' \rightsquigarrow \Delta)$ is $(\varphi', \varphi)$-\emph{compatible} whenever there exist simple modifications $\Gamma_{i+1} \rightsquigarrow \Gamma_i$
and embeddings $\varphi_i\colon \Delta_i \hookrightarrow \Gamma_i$ such that
$\Gamma_0 = \Gamma$, $\Gamma_l = \Gamma'$, $\Delta_0 = \Delta$, $\Delta_l = \Delta'$,
$\varphi_0 = \varphi$, $\varphi_l = \varphi'$, and the following holds.
\begin{enumerate}[label=(\roman*)]
\item\label{item:compatible_rules_vin}
When $\Gamma_{i+1} \rightsquigarrow \Gamma_i$ is centered at a vertex $\varphi_i(v)$ for some
$v\in \Delta_i$, then $\Delta_{i+1}$ is obtained from $\Delta_i$ by the simple modification centered at $v$.
\item
When $\Gamma_{i+1} \rightsquigarrow \Gamma_i$ is centered at a vertex $v\notin \varphi_i(\Delta_i)$, then $\Delta_{i+1} = \Delta_i$.
\item
When $\Gamma_{i+1} \rightsquigarrow \Gamma_i$ is centered at an edge $\edge{\varphi_i(v)}{\varphi_i(w)}$ for some
edge $\edge{v}{w}$ in $\Delta_i$, then $\Delta_{i+1}$ is obtained from $\Delta_i$ by the simple modification centered at $\edge{v}{w}$.
\item
When $\Gamma_{i+1} \rightsquigarrow \Gamma_i$ is centered at an edge $\edge{v}{\varphi_i(w)}$
for some $v \notin \varphi_i(\Delta_i)$ and $w\in\Delta_i$, then $\Delta_{i+1}$ is obtained from $\Delta_i$ by the simple modification centered at $w$.
\item \label{item:compatible_rules_eout}
When $\Gamma_{i+1} \rightsquigarrow \Gamma_i$ is centered at an edge $\edge{v}{w}$
for some $v,w \notin \varphi_i(\Delta_i)$, then $\Delta_{i+1} = \Delta_i$.
\end{enumerate}
\begin{prop}\label{prop-modif}
Suppose the pair $(\Gamma' \rightsquigarrow \Gamma, \Delta' \rightsquigarrow \Delta)$ is $(\varphi', \varphi)$-compatible, where
$\Gamma' \rightsquigarrow \Gamma$ and $\Delta' \rightsquigarrow \Delta$ are modifications, and $\varphi \colon \Delta \hookrightarrow \Gamma$, $\varphi' \colon \Delta' \hookrightarrow \Gamma'$ are embeddings.
\begin{enumerate}[label=(\roman*)]
\item\label{item:prop-modif-1}
The image by $\varphi'$ of the total transform in $\Delta'$ of any subgraph $G\subset \Delta$ is the total transform of $\varphi(G)$.
In particular, the total transform of $\varphi(\Delta)$ is $\varphi'(\Delta')$.
\item\label{item:prop-modif-2}
If $\imath_\Delta$ and $\imath_{\Gamma}$ denote the strict transform maps induced by the modifications, then
$ \varphi' \circ \imath_\Delta = \imath_{\Gamma} \circ \varphi $.
\end{enumerate}
\end{prop}
\begin{proof}
It is only necessary to check these properties when $\Gamma' \rightsquigarrow \Gamma$ is a simple modification in which case $\Delta' \rightsquigarrow \Delta$
is described by one of the rules \ref{item:compatible_rules_vin}-\ref{item:compatible_rules_eout} above. It is a routine argument to verify the proposition in each of these cases.
\end{proof}
\begin{prop}\label{prop-induction}
Suppose $\Delta' \rightsquigarrow \Delta$ is a modification, and let $\varphi \colon\Delta\hookrightarrow\Gamma$ be any embedding.
\begin{itemize}
\item
Then there exists a modification $\Gamma' \rightsquigarrow \Gamma$, called the \emph{induced modification}, and an embedding $\varphi' \colon\Delta'\hookrightarrow\Gamma'$
such that the pair $(\Gamma' \rightsquigarrow \Gamma, \Delta' \rightsquigarrow \Delta)$ is $(\varphi', \varphi)$-compatible.
\item
Suppose we are given another embedding $\psi\colon \Gamma \hookrightarrow \widetilde{\Gamma}$.
Then the two modifications of $ \widetilde{\Gamma}$ induced by $\psi$ or $\psi \circ \varphi$ are isomorphic.
We denote either of them by $\widetilde{\Gamma}'\rightsquigarrow\widetilde{\Gamma}$.
The embedding $\Delta'\hookrightarrow\widetilde{\Gamma}'$ is the composition of the embeddings $\Delta'\hookrightarrow\Gamma'$ and $\Gamma'\hookrightarrow\widetilde{\Gamma}'$.
\end{itemize}
\end{prop}
\begin{proof}
Decompose $\Delta' \rightsquigarrow \Delta$ into simple modifications $\Delta_{i+1} \to \Delta_i$, such that
$\Delta_0 = \Delta$, $\Delta_l = \Delta'$. To simplify notation we assume $\Delta_0 \subset \Gamma_0 := \Gamma$.
We define by induction a graph $\Gamma_{i+1}$ that contains $\Delta_{i+1}$ as follows.
If $\Delta_{i+1}$ is the modification centered at an edge $\edge{v}{w}$ with $v,w \in \Delta_i$, then we set $\Gamma_{i+1}$
to be the modification of $\Gamma_i$ centered at $\edge{v}{w}$. If $\Delta_{i+1}$ is the modification centered at a vertex $v\in \Delta_i$, then we set $\Gamma_{i+1}$
to be the modification of $\Gamma_i$ centered at $v$. It is clear from the definitions that we obtain compatible modifications in this way.
The second point is proven in the same way;
details are left to the reader.
\end{proof}
\begin{rmk}
A graph $\Gamma'$ satisfying the first point of Proposition~\ref{prop-induction} is in general not unique. When $\Delta_{i+1}$ is the modification centered at a vertex $v\in \Delta_i$ that is connected to a vertex
$w\notin \Delta_i$, we could also have defined $\Gamma_{i+1}$ to be the modification centered at the edge $\edge{v}{w}$.
Note however that the construction given in the proof of the previous proposition yields a graph $\Gamma'$ that is minimal in the sense that the sum of the weights at all its vertices is minimal (among all possible graphs).
\end{rmk}
\subsection{Characterization of self-similar graphs}
\begin{defi}\label{def:graph-ss}
A connected graph is said to be \emph{regular} if it is a modification of the graph with one vertex and weight $(0,1)$.
It is said to be
\emph{sandwiched} if it can be embedded into a regular graph.
A connected graph $\Gamma$ is said to be \emph{self-similar} if it admits a nontrivial modification $\Gamma'$
that contains a subgraph $\Gamma_0$ that is isomorphic (as a weighted graph) to $\Gamma$.
\end{defi}
\begin{thm}\label{thm:graph-sdw}
Suppose $\Gamma$ is a connected weighted graph.
If $\Gamma$ is self-similar then it is sandwiched.
\end{thm}
In fact the reverse implication is also true. We leave it as an exercise to the reader since we shall not need it in the sequel.
\begin{proof}
We fix a modification $\Gamma'$ of $\Gamma$, and an embedding $\varphi\colon \Gamma \hookrightarrow \Gamma'$. Recall that the strict transform yields an inclusion
$\imath\colon V(\Gamma) \to V(\Gamma')$.
\medskip
\noindent {\bf Step 1}. There exists a sequence of graphs $\Gamma_n$ with $\Gamma_0 := \Gamma$, and $\Gamma_1 := \Gamma'$, such that
\begin{enumerate}[label=(\roman*)]
\item $\Gamma_{n+1}$ is a modification of $\Gamma_n$;
\item there exist embeddings $\varphi_n\colon \Gamma_n \hookrightarrow \Gamma_{n+1}$;
\item \label{item:step1compatibility} the pair $(\Gamma_{n+1}\rightsquigarrow \Gamma_n,\Gamma_{n}\rightsquigarrow \Gamma_{n-1})$ is $(\varphi_n, \varphi_{n-1})$-compatible.
\end{enumerate}
Since $\Gamma'\rightsquigarrow \Gamma$ is a modification and $\Gamma$ is a subgraph of $\Gamma'$ via the embedding $\varphi$,
Proposition~\ref{prop-induction} gives a modification $\Gamma_2\rightsquigarrow \Gamma'$ and an embedding $\varphi_1\colon \Gamma' \hookrightarrow \Gamma_2$
with the right compatibility properties.
To build the sequence $\Gamma_n$ we proceed in the same way, using Proposition~\ref{prop-induction} repeatedly.
More precisely, suppose $\Gamma_{n}$ has been defined. For each $n$ there exists an embedding $\Gamma \hookrightarrow \Gamma_n$ given by the composition
$\Phi_n := \varphi_{n-1} \circ \ldots \circ \varphi_0$ (with $\varphi = \varphi_0$). Then we construct
$\Gamma_{n+1}$ from this embedding and the modification $\Gamma'\rightsquigarrow \Gamma$ using Proposition~\ref{prop-induction}.
Observe that $\Gamma_{n}$ is obtained by the modification $\Gamma'\rightsquigarrow \Gamma$ using $\Phi_{n-1}$ whereas
$\Gamma_{n+1}$ is obtained by the modification $\Gamma'\rightsquigarrow \Gamma$ using $\varphi_{n-1} \circ \Phi_{n-1}$.
This implies the existence of an embedding $\varphi_n \colon \Gamma_n \hookrightarrow \Gamma_{n+1}$ satisfying the compatibility property \ref{item:step1compatibility}
(see the second point of Proposition~\ref{prop-induction}).
\medskip
\noindent {\bf Step 2}. Suppose that there exists a vertex $v\in V(\Gamma)$ such that $\imath(v) = \varphi(v)$. We claim that the modification is trivial.
\smallskip
To see this define the edge distance on $V(\Gamma)$ by setting $d(x, y)$ to be the least integer $n \in \bb{N}$ such that there exists
$v_0, \ldots, v_n \in V(\Gamma)$ with the property $v_0 =x$, $v_n=y$ and $\edge{v_0}{v_1}, \ldots, \edge{v_{n-1}}{v_n} \in E(\Gamma)$.
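This edge distance is the standard graph metric and can be computed by breadth-first search; the following Python sketch (ours, purely illustrative) makes the definition explicit:

```python
from collections import deque

def edge_distance(edges, x, y):
    """Least n such that a chain of n edges joins x to y (the graph
    metric); returns None if no such chain exists."""
    # Build an adjacency map from the list of (unordered) edges.
    adjacency = {}
    for v, w in edges:
        adjacency.setdefault(v, set()).add(w)
        adjacency.setdefault(w, set()).add(v)
    # Breadth-first search from x, recording the distance to each vertex.
    seen, queue = {x}, deque([(x, 0)])
    while queue:
        v, d = queue.popleft()
        if v == y:
            return d
        for w in adjacency.get(v, ()):
            if w not in seen:
                seen.add(w)
                queue.append((w, d + 1))
    return None

# Chain a - b - c - d: the distance from a to d is 3.
print(edge_distance([("a", "b"), ("b", "c"), ("c", "d")], "a", "d"))  # 3
```

Here `edge_distance` returns `None` when no chain of edges joins the two vertices; on a connected graph the function always returns an integer.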
\smallskip
We first prove by induction on $n:= d(w, v)$ that there exists an integer $k\ge1$ such that
$I_k(w) = \Phi_k(w)$ for any $w\in V(\Gamma)$ at distance at most $n$ from $v$, where $I_k$ is the strict transform map induced by the modification
$\Gamma_k \rightsquigarrow \Gamma$ and $\Phi_k := \varphi_{k-1} \circ \ldots \circ \varphi_0$.
\smallskip
Our claim for $n=0$ is our standing assumption (with $k=1$). Assume the induction hypothesis for vertices at distance $n-1$ from $v$.
Pick $w$ at distance $n$ from $v$. By definition there exists (at least one) $w_0\in V$ at distance $n-1$ from $v$ and $1$ from $w$. Replacing $\Gamma'$ by $\Gamma_k$
we may suppose that $\imath(w_0) = \varphi(w_0)$. In particular we have $e(\imath(w_0)) = e(w_0)$.
Decompose the modification $\Gamma' \rightsquigarrow\Gamma$ into a sequence of simple modifications
$\Delta_{i+1} \to \Delta_i$ with $\Delta_0:=\Gamma$, and $\Gamma':= \Delta_l$.
The equality of weights $e(\imath(w_0)) = e(w_0)$ implies that the center of all simple modifications
$\Delta_{i+1} \to \Delta_i$ cannot be an edge that contains the strict transform of $w_0$ in $\Delta_i$ or a vertex that is joined to this strict transform
by an edge. It follows that the strict transform of any vertex at distance $1$ from $ w_0$ in $\Gamma'$ is again joined to $\imath(w_0)$ by an edge, and no new edges
starting from $\imath(w_0)$ are created. In particular, $\imath$ induces a bijection from the link $L(w_0)$ to $L(\imath(w_0))$, and
the valency of $w_0$ is equal to the valency of $\imath(w_0)$. Since $\varphi$ is an isomorphism from $\Gamma$ onto its image, $\varphi(L(w_0))$ is included in $L(\varphi(w_0))$. The two sets having the same cardinality, we conclude that $\varphi$ is a bijection (hence an isomorphism) from $L(w_0)$ to $L(\varphi(w_0))=\varphi(L(w_0))$. Denote by $f := \imath^{-1} \circ \varphi$ the bijection on $L(w_0)$.
\smallskip
Observe that the quantity $\sum_{v\in L(w_0)} e(v)$ is equal to $\sum_{v\in L(\varphi(w_0))} e(v)$.
Since this quantity can only increase along the sequence of simple modifications $\Delta_{i+1} \to \Delta_i$
it remains constant. This implies that the total transform of $L(w_0)$ is equal to $L(\varphi(w_0))$.
\smallskip
By Proposition~\ref{prop-modif} \ref{item:prop-modif-1} and Property \ref{item:step1compatibility} of Step 1, it follows that the total transform
of $L(\varphi(w_0)) = \varphi(L(w_0))$ by $\Gamma_2\rightsquigarrow \Gamma_1$ is also equal to $\varphi_1( \varphi(L(w_0)))$.
In other words, the total transform of $L(w_0)$ by the composite modification $\Gamma_2 \rightsquigarrow \Gamma$
is isomorphic to itself. In particular the injective map $I_2$ induced by this strict transform defines a bijection from
$L(w_0)$ to $\varphi_1( \varphi(L(w_0)))$, and we get as before a bijection $f_2\colon L(w_0) \to L(w_0)$.
Denote by $\imath_1$ the strict transform map induced by $\Gamma_2 \rightsquigarrow \Gamma$ so that
$f_2 = \imath^{-1} \circ \imath_1^{-1}\circ \varphi_1 \circ \varphi$.
By Proposition~\ref{prop-modif} \ref{item:prop-modif-2}, we have $\varphi_1 \circ \imath= \imath_1\circ \varphi$,
whence $f_2 = \imath^{-1} \circ \varphi\circ \imath^{-1} \circ \varphi = f \circ f$.
By repeating this argument, we see that the strict transform of $L(w_0)$ by the modification $\Gamma_k \rightsquigarrow \Gamma$ is equal to
$\Phi_k(L(w_0))$ and that $I_k^{-1} \circ \Phi_k = f^{\circ k}$.
Since $L(w_0)$ is finite, it follows that $f^{\circ k} = \id$ for some $k$.
Observe that we may choose $k$ to be the least common multiple of all integers less than the cardinality of $V(\Gamma)$, which
is then independent of $w_0$.
This proves our claim.
\smallskip
We now explain how this claim implies Step 2. Replacing $\Gamma'$ by some $\Gamma_k$ for $k$ sufficiently large,
we have $\imath(w) = \varphi(w)$ for all $w$. Since $\varphi$ preserves the weights, we get $e(\imath(w)) = e(w)$ for all $w$
which is only possible when the modification is trivial.
\medskip
\noindent {\bf Step 3}. If $\Gamma$ is self-similar,
then there exists an integer $k\ge 1$ such that for all $v\in V(\Gamma)$ one has $I_k(v) \notin \Phi_k(V(\Gamma))$.
\smallskip
We first prove that if $I_k(v) \notin \Phi_k(V(\Gamma))$ then $I_{k+l}(v) \notin \Phi_{k+l}(V(\Gamma))$ for all $l\ge 0$.
Indeed suppose $I_k(v) \notin \Phi_k(V(\Gamma))$, and recall from Step 1 that $\Gamma_{k+1}$ is built from the embedding $\Phi_k \colon \Gamma \hookrightarrow \Gamma_k$
and the modification $\Gamma' \rightsquigarrow \Gamma$. It follows that the strict transform $I_{k+1}(v)$ of $I_k(v)$ in $\Gamma_{k+1}$ is not contained
in the image of $\Gamma'$ in $\Gamma_{k+1}$ which contains the image of $\Gamma$ in $\Gamma_{k+1}$.
Therefore $I_{k+1}(v) \notin \Phi_{k+1}(V(\Gamma))$ which proves $I_{k+l}(v) \notin \Phi_{k+l}(V(\Gamma))$ for all $l\ge 0$ by induction as required.
\smallskip
It is thus sufficient to find for any vertex $v$ of $\Gamma$ an integer $k$ such that $I_k(v) \notin \Phi_k(V(\Gamma))$.
To prove this we proceed by contradiction, and pick a vertex $v$ of $\Gamma$ for which $I_k(v) \in \Phi_k(V(\Gamma))$ for all $k$.
Since $\Phi_k$ is an embedding, for each $k$ there exists a unique vertex $v_k$ of $\Gamma$ such that $I_k(v) = \Phi_k(v_k)$.
Choose $k, l >0$ such that $v_{k+l} = v_k$.
Observe that we have a modification $\Gamma_{k+l}\rightsquigarrow \Gamma_k$ and an embedding $\Phi \colon\Gamma_k \hookrightarrow\Gamma_{k+l} $
obtained as the composition $\varphi_{k+l-1} \circ \ldots \circ \varphi_k$. Denote by $I$ the strict transform map induced by $\Gamma_{k+l}\rightsquigarrow \Gamma_k$, and let $w := \Phi_k(v_k) = \Phi_k(v_{k+l}) = I_k(v)$.
Then we get
\[
I(w) = \imath_{k+l-1} \circ \ldots \circ \imath_k (I_k(v)) = I_{k+l}(v) = \Phi_{k+l}(v_{k+l}) = \Phi (\Phi_k(v_{k+l})) = \Phi(w)~.
\]
Step 2 then implies that $\Gamma_{k+l}\rightsquigarrow \Gamma_k$ is an isomorphism, which is only possible if $\Gamma'\rightsquigarrow \Gamma$
is also an isomorphism. This is a contradiction.
\medskip
\noindent {\bf Step 4}. Finally we prove that if $\Gamma$ is self-similar then it is sandwiched.
\smallskip
By Step 3, replacing $\Gamma'$ by $\Gamma_k$ for a sufficiently large $k$ we may assume that $\imath(v) \notin \varphi(V(\Gamma))$ for all vertices $v$ of $\Gamma$.
It remains to prove that this implies that $\Gamma$ is sandwiched.
This is a consequence of the following general fact. Suppose $\Gamma' \rightsquigarrow \Gamma$ is a modification.
Look at the (not necessarily connected) graph $\widehat{\Gamma'}$ obtained by removing the strict transforms of all vertices in $\Gamma$
and all edges connected to any of these.
\begin{lem}\label{lem:whaou}
The graph $\widehat{\Gamma'}$ is a disjoint union of regular graphs.
\end{lem}
Now $\varphi(\Gamma)$ is a subgraph that does not contain any of the strict transforms $\imath(v)$ for $v\in \Gamma$, hence
is a subgraph of $\widehat{\Gamma'}$. In particular, $\varphi(\Gamma)$, and hence $\Gamma$, is sandwiched.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:whaou}]
We argue by induction on the number $k$ of simple modifications needed to define $\Gamma' \rightsquigarrow \Gamma$.
When $k=1$, the graph $\widehat{\Gamma'}$ is reduced to the single vertex added to $\Gamma$, whose weight is $(0,1)$ by definition; hence it is regular.
Suppose the lemma is proved for $k-1$, and suppose $\Gamma' \rightsquigarrow \Gamma$ is decomposed into
$k$ simple modifications $\Delta_i \rightsquigarrow \Delta_{i-1}$.
Consider the modification $\Delta_{k-1} \rightsquigarrow \Gamma$. By the inductive assumption, the corresponding graph
$\widehat{\Delta}_{k-1}$ is a disjoint union of regular graphs. When $\Delta_k \rightsquigarrow \Delta_{k-1}$ is centered at a vertex that does not lie
in $\widehat{\Delta}_{k-1}$, then $\widehat{\Gamma'}$ is the disjoint union of $\widehat{\Delta}_{k-1}$ and a single vertex with weight $(0,1)$.
When it is centered at a vertex $v\in \widehat{\Delta}_{k-1}$, then $\widehat{\Gamma'}$ is obtained by a modification of $\widehat{\Delta}_{k-1}$ centered at $v$,
and is thus still a disjoint union of regular graphs. When it is centered at an edge $\edge{v}{w}$ with $v,w\notin \widehat{\Delta}_{k-1}$, then
$\widehat{\Gamma'}$ is the disjoint union of $\widehat{\Delta}_{k-1}$ and a single vertex with weight $(0,1)$.
When it is centered at an edge $\edge{v}{w}$ with $v\in \widehat{\Delta}_{k-1}$ and $w\notin \widehat{\Delta}_{k-1}$, then
$\widehat{\Gamma'}$ is obtained by a modification of $\widehat{\Delta}_{k-1}$ centered at $v$.
Finally, when it is centered at an edge $\edge{v}{w}$ with $v,w\in \widehat{\Delta}_{k-1}$, then
$\widehat{\Gamma'}$ is obtained by a modification of $\widehat{\Delta}_{k-1}$ centered at $\edge{v}{w}$.
In all cases, all connected components of $\widehat{\Gamma'}$ remain regular, which completes the proof.
\end{proof}
\section{Self-similar links have self-similar skeletons}\label{section_proof_main}
In this section we prove the implication \ref{condition_selfsim} $\implies$ \ref{condition_graph} of Theorem~\ref{mainthm}.
For technical reasons we introduce the following condition.
\begin{enumerate}[label=($\dagger$)]
\item \label{condition_prime} There exists a finite set $T$ of type 1 points of $\NL(X,0)$ such that $\NL(X,0)\setminus T$ is isomorphic to an open subset $U$ of $\NL(X,0)$ satisfying $\overline{U} \subsetneq \NL(X,0)$.
\end{enumerate}
\label{conditiondag}
Observe that \ref{condition_selfsim} implies \ref{condition_prime}, so that
we are reduced to proving \ref{condition_prime} $\implies$ \ref{condition_graph}. To do so,
we rely on the following result.
\subsection{Extending morphisms from the punctured disc}
\begin{prop}\label{lem:keyboundary}
Let $D$ be an open $k((t))$-analytic disc and let $0$ denote its origin.
Let $\varphi \colon D\setminus\{0\}\to\NL(X,0)$ be any map that induces an isomorphism of locally ringed spaces between $D\setminus\{0\}$ and its image.
Then the map $\varphi$ extends continuously to $0$, and $\varphi(0)$ is either a divisorial point or a rigid point of $\NL(X,0)$.
In the latter case $\varphi$ extends to an isomorphism of locally ringed spaces from $D$ to $\varphi(D)$.
\end{prop}
Let us first take for granted the following lemma.
\begin{lem}\label{lem:extension}
The map $\varphi \colon D\setminus\{0\}\to\NL(X,0)$ extends uniquely to a continuous map $D\to\NL(X,0)$.
\end{lem}
\begin{proof}[Proof of Proposition~\ref{lem:keyboundary}]
We continue to denote by $\varphi$ the continuous map on $D$ obtained via Lemma~\ref{lem:extension}.
We will prove by contradiction that $\varphi(0)$ is either a divisorial point or a rigid point of $\NL(X,0)$.
Suppose that $\varphi(0)$ is a point of type 3 of $\NL(X,0)$.
Then there exist an open subspace $U$ of $\varphi(D\setminus\{0\})$ with $\varphi(0)\in\overline U$ and two nonconstant functions $f$ and $g$ on $U$ such that
\[
\lim_{y\in U,y\to \varphi(0)}\frac{\log|f(y)|}{\log|g(y)|}
\]
exists and is an irrational number.
Therefore the ratio of the logarithms of the absolute values of the two nonconstant functions $\varphi^*f$ and $\varphi^*g$ on $\varphi^{-1}(U)$ admits the same irrational limit as $y$ tends to $0$.
This yields a contradiction, since $0$ is a rigid point of $D$, so that the associated valuation has rational rank $1$ over $k$.
Now suppose that the point $\varphi(0)$ is of type 4.
Then we can find $U$ as above, bounded functions $\{f_n\}_{n\in\bb{N}}$, and $t$ on $U$ such that the group
\[
\Gamma =
\bigg\langle
\lim_{y\in U,y\to \varphi(0)}\frac{\log|f_n(y)|}{\log|t(y)|}
\bigg\rangle_{n\in\bb{N}}
\]
is not a finitely generated subgroup of $\bb{Q}$.
Now consider the bounded functions $F_n=\varphi^*f_n$ and $T=\varphi^*t$ on $\varphi^{-1}(U)$.
Since $\varphi^{-1}(U)\cup\{0\}$ is an open neighborhood of $0$ in $D$, we can find a smaller disc $D'$ centered at $0$ such that all the $F_n$ and $T$ are defined on $D'\setminus\{0\}$.
The functions $F_n$ and $T$ extend to bounded functions on $D'$, therefore they can be expressed as power series in a variable $X$.
Observe now that we have
\[
\lim_{y\to \varphi(0),y\in U}\frac{\log|f_n(y)|}{\log|t(y)|}=
\lim_{y\to 0,y\in D}\frac{\log|F_n(y)|}{\log|T(y)|}= \frac{\mathrm{val}_X (F_n(X))}{\mathrm{val}_X (T(X))}.
\]
This implies that $\Gamma$ is a subgroup of $\frac{1}{\mathrm{val}_X (T(X))} \bb{Z}$, hence it is finitely generated, which is a contradiction.
It follows that $\varphi(0)$ can only be a rigid point or a divisorial point of $\NL(X,0)$.
If $\varphi(0)$ is a rigid point then it has a neighborhood $U$ isomorphic to a disc thanks to Corollary~\ref{corollary_topology}, and, up to replacing $D$ by a smaller disc if needed, we can assume that $\varphi(D)$ is contained in $U$.
We obtain an injective analytic map from a punctured disc to a disc,
and such a map necessarily extends to an isomorphism from $D$ onto its image.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:extension}]
By Corollary~\ref{corollary_global_topology}, the compact set $\NL(X,0)$ is a disjoint union of discs, finitely many annuli and finitely many (type 2) points.
Recall the definition of $x_r$ for $r\in (0,1)$ from Theorem~\ref{thm:R-tree} and define the injective continuous map $\gamma(r) = \varphi(x_r)$ from $(0,1)$ to $\NL(X,0)$.
Since $\gamma$ is injective, there exists a subset $A$ of $\NL(X,0)$ that is either a disc or an annulus and such that $\gamma(r) \in A$ for all $r$ sufficiently small.
Recall that $A$ is uniquely arcwise connected by Theorem~\ref{thm:R-tree} (i), and has either one or two endpoints by Theorem~\ref{thm:R-tree} (ii).
Moreover, the complement of $A$ in its closure $\overline{A}$ in $\NL(X,0)$ contains either one or two points by Corollary~\ref{corollary_global_topology}.
This implies that $\overline{A}$ is both uniquely arcwise connected and compact.
It follows that $\gamma$ extends continuously to a map from $[0,1)$ to $\overline{A}$ by setting $\gamma(0) = \lim_{r\to 0} \gamma(r)$.
We set $\varphi(0) = \gamma(0)$, and claim that the resulting map $\varphi\colon D \to \overline{A} \subset \NL(X,0)$ is continuous.
Since $\overline{A}$ and $D$ are uniquely arcwise connected, one may define the segment joining any two points $x, y$, and we denote it by $[x,y]$.
For any $0< r < 1$, consider the set $U_r \subset \overline{A}$ (resp. $V_r \subset D$) consisting of those points $x$ such that
$\varphi(x_r) \notin [x, \varphi(0)]$ (resp. $x_r \notin [x, 0]$). These sets form bases of neighborhoods of $\varphi(0)$ in $\overline{A}$ and $0$ in $D$ respectively.
Since $\varphi$ is continuous and injective on $D\setminus \{0 \}$, one has $\varphi(V_r) \subset U_r$, which implies the continuity of $\varphi$ at $0$.
\end{proof}
\subsection{The implication \ref{condition_prime} $\implies$ \ref{condition_graph}}
Let $T$ be a finite and nonempty set of rigid points of $\NL(X,0)$, let $U$ be an open subset of $\NL(X,0)$ whose closure $\overline U$ in $\NL(X,0)$ is strictly contained in $\NL(X,0)$, and let $\varphi\colon \NL(X,0) \setminus T \to U$ be an isomorphism.
We will show that $(X,0)$ has a self-similar dual graph.
Since every point $x$ of $T$ has a neighborhood in $\NL(X,0)$ that is a disc with origin $x$, by repeatedly applying Lemma~\ref{lem:keyboundary} we deduce that $\partial U=\overline U\setminus U$ is a finite subset of $\NL(X,0)$ consisting of rigid and divisorial points.
Moreover, we can extend $\varphi$ to each point of $T$ whose image is a rigid point and therefore assume without loss of generality that $\partial U$ only consists of divisorial points.
Observe also that $\partial U$ is
nonempty since $\overline U$ is strictly contained in $\NL(X,0)$.
Let $C$ be a germ of curve on $(X,0)$ whose components correspond to the points of $T$, let $\pi \colon X' \to X$ be the good resolution of $(X,0)$ that also resolves the germ $C$ and that is minimal with respect to this property (see e.g. \cite[Theorem 5.12]{laufer:normal2dimsing}), and let $Z$ be another good resolution
of $(X,0)$ resolving the germ $C$ and whose divisorial set $\Div(Z)$ contains $\partial U$.
Set $S=\big(\Div(Z) \setminus U \big) \cup \varphi (\Div(X'))$ and let $\mathscr Y'$ be the formal modification whose divisorial set is $S$, as given by Theorem~\ref{theorem_existence_modifications}.
We claim that $\mathscr Y'$ is algebraized by a good resolution of $(X,0)$ which resolves the germ $C$.
This boils down to verifying the conditions of Theorem~\ref{theorem_characterization_resolutions}.
In order to do so, let $V$ be any connected component of $\NL(X,0) \setminus S$.
Recall that $S$ contains $\Div(Z)$ hence $\partial{U}$,
so that $V$ is either disjoint from ${U}$, or contained in it.
In the first case, $V$ is a connected component of $\NL(X,0) \setminus \Div(Z)$.
Since $Z$ is a good resolution of $(X,0)$ and $C$, it follows that $V$ has the form we want by Theorem~\ref{theorem_characterization_resolutions}.
In the second case, $V$ is isomorphic through $\varphi$ to a connected component of $\NL(X,0) \setminus (\Div(X') \cup T)$.
As $X'$ is a resolution of $C$, such a component is a disc, an annulus, or a disc with the origin removed (which is itself isomorphic to an annulus, as observed in Remark~\ref{remark_disc_minus_point}).
This completes the proof of the fact that $\mathscr Y'$ is algebraized by a good resolution $Y'$ of $(X,0)$ and $C$.
\smallskip
Now let $\mathscr Y$ be the formal modification of $(X,0)$ whose divisorial set is $\Div(Z) \setminus U$ and denote by $y$ the point of the reduction $\mathscr Y_0$ of $\mathscr Y$ corresponding as in Proposition~\ref{proposition_propertiesNL_3} to the connected component $U$ of $\NL(X,0) \setminus \Div(\mathscr Y)$.
Let $\mc{J}_y$ be the ideal of $\widehat{\mc{O}_{\mathscr Y,y}}$ which defines $\mathscr Y_0$ locally around $y$.
Observe now that we have a sequence of isomorphisms of complete local rings
\begin{multline*}
\widehat{\mc{O}_{\mathscr Y,y}}
\mathop{\cong}\limits^{\mathrm{Prop.}\ref{proposition_propertiesNL_1}}
\mathcal O_{\NL(\mathscr Y,y)}^\circ \big( \NL(\mathscr Y,y) \big)
\mathop{\cong}\limits^{\mathrm{Lem.}\ref{lemma_extension_to_type1}}
\mathcal O_{\NL(\mathscr Y,y)}^\circ \big( \NL(\mathscr Y,y) \setminus V(\mc{J}_y)\big)\\
\mathop{\cong}\limits^{\mathrm{Prop.}\ref{proposition_propertiesNL}}
\mathcal O_{\NL(X,0)}^\circ \big( U \big)
\mathop{\cong}\limits^{\varphi}
\mathcal O_{\NL(X,0)}^\circ \big(\NL(X,0)\setminus T\big)
\mathop{\cong}\limits^{\mathrm{Lem.}\ref{lemma_extension_to_type1}}
\mathcal O_{\NL(X,0)}^\circ \big(\NL(X,0) \big)
\mathop{\cong}\limits^{\mathrm{Prop.}\ref{proposition_propertiesNL_1}}
\widehat{\mc{O}_{X,0}},
\end{multline*}
which tells us that $\varphi$ induces an isomorphism of formal schemes $\widehat{\mathscr Y/y}\cong \Spf\big(\widehat{\mc{O}_{X,0}}\big)$.
\smallskip
The inclusion $\Div(\mathscr Y) \subset \Div(\mathscr Y')$ yields a morphism of formal schemes $\psi\colon \mathscr Y' \to \mathscr Y$ (geometrically this morphism is the contraction of the exceptional components of $\mathscr Y'$ corresponding to divisorial points in $U$).
Then $\varphi$ induces an isomorphism of formal schemes between the formal completion $\mathscr Y''=\widehat{\mathscr Y'/\psi^{-1}(y)}$ of $\mathscr Y'$ along $\psi^{-1}(y)$ and the formal completion $\mathscr X'=\widehat{X'/\pi^{-1}(0)}$ of $X'$ along its exceptional divisor $\pi^{-1}(0)$.
Since $X'$ is the minimal good resolution of $(X,0)$ and $C$, the resolution $\mu\colon Y'\to X$ factors through a morphism $\rho \colon Y' \to X'$.
Now observe that since $\widehat{\mathscr Y'/\psi^{-1}(y)}$ and $\widehat{X'/\pi^{-1}(0)}$ are isomorphic as formal schemes, then $\Dual\big(\psi^{-1}(y)\big)$ and $\Dual\big(\pi^{-1}(0)\big)$ are isomorphic as weighted graphs (see section \ref{ssec:dualgraphs} for the definition of the weighted graph associated to a good resolution).
Indeed, let $x$ be a point of $\Div(X')$ corresponding to a component $E$ of $\pi^{-1}(0)$ and denote by $\varphi(E)$ the component of $\psi^{-1}(y)$ corresponding to the point $\varphi(x)\in \Div(Y')$.
Then we have the following sequence of field isomorphisms
\[
k(E)\cong\widetilde{\mathscr H(x)}\cong \widetilde{\mathscr H\big(\varphi(x)\big)} \cong k\big(\varphi(E)\big)
\]
since $\varphi$ is an isomorphism of locally ringed spaces, hence $E$ and $\varphi(E)$ have the same genus.
Moreover, we have $\widehat{Y'/\varphi(E)}=\widehat{\mathscr Y''/\varphi(E)} \cong \widehat{\mathscr X'/E}=\widehat{X'/E}$, which implies that $(E\cdot E)=(\varphi(E)\cdot\varphi(E))$ because the degree of the normal bundle of $E$ in $X'$ is the same as the degree of the formal normal bundle of $E$ in $\mathscr X'$,
and the same holds for the normal bundle of $\varphi(E)$ in $Y'$.
We now observe that the weighted dual graph $\Dual\big(\psi^{-1}(y)\big)$ is a subgraph of $\Dual\big((\pi\circ\rho)^{-1}(0)\big)$, and the latter is obtained by a sequence of simple modifications from $\Dual\big(\pi^{-1}(0)\big)$ since $\rho \colon Y' \to X'$ is a birational morphism between smooth surfaces.
This shows that $(X,0)$ has a self-similar dual graph, which is what we wanted to prove.
\hfill \qed
\bigskip
For the reader's convenience, the following diagram summarizes the constructions made in the course of the proof above.
\begin{displaymath}
\xymatrix@C=1pc@R=1.3pc@M=3pt@L=3pt{
& Y' \ar@{->}[d]^{\rho} \ar@{->}[dl]_{\psi} && \widehat{{Y'}/{(\pi\circ\rho)^{-1}(0)}} \ar@{->}[ll] \ar@{->}[d]^{\cong} && \widehat{Y'/\psi^{-1}(y)} \ar@{_{(}->}[ll] \ar@{->}[dll]^{\cong}_{\varphi}\\
Y \ar@{->}[dr] & X' \ar@{->}[d]^{\pi} && \widehat{X'/\pi^{-1}(0)} \ar@{->}[ll]\\
& X
}
\end{displaymath}
\section{Sandwiched singularities are determined by their dual graphs}\label{section_plumbing}
The goal of this section is two-fold.
We first prove that a normal surface singularity is sandwiched if
its (weighted) dual graph is sandwiched.
This is an extension of \cite[Proposition~1.11]{spivakovsky:sandsingdesingsurfNashtransf} to arbitrary characteristic.
We then explain how this allows us to prove the implication \ref{condition_graph} $\implies$ \ref{condition_sandwiched} of Theorem~\ref{mainthm}.
\subsection{Dual graphs of sandwiched singularities}\label{ssec:dualgraphs}
Let $(X,0)$ be any normal surface singularity, and let $(Y,D) \to (X,0)$ be a good resolution of the singularities of $(X,0)$.
We define the weighted dual graph $\Dual (D)$ as follows.
Its set of vertices is the set of the irreducible components of $D$, and there is an edge connecting two vertices if and only if the corresponding components intersect.
The weight of a vertex is the pair of nonnegative integers $\big(g(E),-E^2\big)$ consisting of the genus and of the opposite of the self-intersection of the corresponding component $E$ of $D$.
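For instance, for the minimal good resolution of the $A_n$ singularity $\{xy=z^{n+1}\}$ the exceptional divisor $D$ is a chain of $n$ rational curves of self-intersection $-2$, so that $\Dual(D)$ is the chain
\[
\underset{(0,2)}{\bullet}-\underset{(0,2)}{\bullet}-\cdots-\underset{(0,2)}{\bullet}
\]
with $n$ vertices, each of weight $(0,2)$.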
\begin{thm}\label{thm:extend-spiv}
Let $(X,0)$ be a normal surface singularity and assume there exists a good resolution of $(X,0)$ whose associated weighted dual graph is sandwiched.
Then $(X,0)$ is sandwiched.
\end{thm}
Over the complex numbers, this result is due to Spivakovsky.
His proof is complex analytic in nature, relying on plumbing constructions in an essential way.
We proceed in very much the same way, using an analogue of plumbing in formal geometry.
\begin{proof}
Suppose that
$(Y,D)$ is a good resolution of singularities of $(X,0)$, that the associated weighted dual graph $\Dual(D)$ is sandwiched, and choose an embedding of $\Dual(D)$ in a regular weighted graph $\Gamma$.
We can assume without loss of generality that each of the vertices of $\Gamma\setminus\Dual(D)$ has weight $(0,1)$ and valence 1, so that it is only adjacent to a vertex of $\Dual(D)$.
Indeed, each connected component of $\Gamma\setminus\Dual(D)$ is itself regular and can be contracted to a single vertex adjacent to $\Dual(D)$ and of weight $(0,1)$, see \cite[Proposition 1.13]{spivakovsky:sandsingdesingsurfNashtransf}.
We will now build a smooth formal $k$-scheme $\mathscr Y'$ whose reduction $\mathscr Y'_0$ is a curve with simple normal crossings such that $\Dual(\mathscr Y'_0)\cong\Gamma$, together with a closed immersion of formal schemes $\widehat{Y/D} \to \mathscr Y'$.
We will do so by means of a patching procedure, using as elementary building blocks the following smooth formal $k$-schemes of dimension 2.
For $n>0$, let $\mathbb{F}_n = \mathbb{P}_{\mathbb P^1_k} \big( \mc{O}_{\mathbb P^1_k} \oplus \mc{O}_{\mathbb P^1_k}(n) \big)$ be the Hirzebruch surface of order $n$ over $k$ and let $F$ denote the rational curve of self-intersection $-n$ in $\mathbb F_n$.
Pick a point $p$ in $\mathbb F_n$ that does not lie in $F$, call $F'$ the curve in the canonical rational fibration passing through $p$, and denote by $\widetilde{\mathbb F_n}$ the blowup of $\mathbb F_n$ at $p$.
Define $\mathscr X_{n}$ to be the formal completion of $\widetilde{\mathbb F_n}$ along the union of $F$ with the strict transform of $F'$ in $\widetilde{\mathbb F_n}$, so that the reduction of $\mathscr X_n$ consists of two smooth and rational curves that intersect transversally and have self-intersection $-n$ and $-1$ respectively.
Now take a vertex $v$ in $\Gamma\setminus \Dual(D)$ and let $E$ be the component of $D$ corresponding to the vertex of $\Dual(D)$ adjacent to $v$.
Since $\Gamma$ is regular, $E$ is rational and has negative self-intersection, say $E^2=-m<0$.
Consider the formal $k$-scheme $\mathscr X_m$ constructed above and denote by $F$ its rational curve of self-intersection $-m$ and by $q$ the point of $F$ that intersects the other component of $(\mathscr X_m)_0$.
The formal completions $\widehat{Y/E}$ and $\widehat{\mathscr X_m/F}$ are isomorphic by \cite[Theorem 2.11]{LeeNakayama2013} (in characteristic zero this result was obtained earlier in \cite[Satz 2.10]{Brieskorn1967}), and, after composing with an automorphism of $\widehat{\mathscr X_m/F}$ if necessary (for instance one induced by a suitable automorphism of the Hirzebruch surface), we can choose such an isomorphism $\widehat{\mathscr X_m/F}\cong\widehat{Y/E}$ sending $q$ to a point of $E$ that is smooth in $D$.
Therefore we can glue a copy of $\mathscr X_m$ to $\widehat{Y/D}$, obtaining a smooth formal $k$-scheme whose associated dual graph is the weighted graph spanned by $\Dual(D)$ and the vertex $v$.
By repeating this procedure for every vertex in $\Gamma\setminus \Dual(D)$ we obtain the formal scheme $\mathscr Y'$ as we wanted.
Observe that by construction the formal scheme $\widehat{Y/D}$ is isomorphic to the closed formal subscheme of $\mathscr Y'$ obtained by formally completing the latter along $D$.
Since $\Gamma\cong\Dual(\mathscr Y'_0)$ is regular, the Grauert-Artin contractibility criterion \cite{artin:contractibilityalgebraicspaces} gives a formal modification $\pi\colon \mathscr Y'\to \mathcal{Z}$ contracting $\mathscr Y'_0$ to a point $\{z\}=\mathcal Z_0$ in a smooth two-dimensional formal $k$-scheme $\mathcal Z$.
Since it has only one point, $\mathcal Z$ is affine, and can therefore be algebraized by a smooth surface $Z$ over $k$.
It follows that $\mathscr Y'$, being a formal good resolution of $(Z,z)$, is itself algebraized by a good resolution $\pi\colon Y'\to Z$ by \cite[Proposition 7.6]{fantini:normspaces}.
Using the other contractibility criterion by Artin, \cite[Theorem 2.3]{artin:contractibilitycriterion}, one can then also contract the divisor $D$ of $Y'$ to a (possibly singular) point $z'$ of a normal surface $Z'$ over $k$, which yields a modification $\varpi\colon Y'\to Z'$ such that $\pi$ factors as $Y' \stackrel{\varpi}{\longrightarrow} Z' \stackrel{\tau}{\longrightarrow} {Z}$.
Observe that the local ring $\widehat{\mc{O}_{Z',z'}}$ is sandwiched since $Z$ is smooth.
Finally, observe that we have
\begin{displaymath}
\widehat{\mc{O}_{X,0}}
\cong
\mc{O}^\circ\big(\NL(X,0)\big)
\cong
\mc{O}^\circ\big(\NL(Y,D)\big)
\cong
\mc{O}^\circ\big(\NL(Y',D)\big)
\cong
\mc{O}^\circ\big(\NL(Z',z')\big)
\cong
\widehat{\mc{O}_{Z',z'}},
\end{displaymath}
where the first and the last isomorphisms come from Proposition~\ref{proposition_propertiesNL_1} and the others follow from the invariance of non-archimedean links under modifications.
This proves that $\widehat{\mc{O}_{X,0}}$ is sandwiched, which is what we wanted to show.
\end{proof}
\begin{rmk}\label{rem-equivalence}
The converse to the theorem is also true: the dual graph $\Dual(D)$ is sandwiched if $(X,0)$ is sandwiched.
A direct proof of this fact will be given in Remark~\ref{rem:64}.
\end{rmk}
\subsection{The implication \ref{condition_graph} $\implies$ \ref{condition_sandwiched}}
Let $(X,0)$ be any normal surface singularity such that $\Dual(D)$ is self-similar for some
good resolution of singularities $(Y,D) \to (X,0)$.
By Theorem~\ref{thm:graph-sdw}, the weighted graph $\Dual(D)$ is sandwiched,
which implies that $(X,0)$ is sandwiched by Theorem~\ref{thm:extend-spiv}. \hfill$\qed$
\section{End of the proof of Theorem~\ref{mainthm}: Kato data}\label{sec-Kato data}
In this section, we prove the two implications \ref{condition_sandwiched} $\implies$ \ref{condition_kato} and \ref{condition_kato} $\implies$ \ref{condition_prime} of \S\ref{section_proof_main},
concluding the proof of our main theorem.
\subsection{The implication \ref{condition_sandwiched} $\implies$ \ref{condition_kato}}\label{sec:164}
A \textit{Kato datum} \label{def:kato datum} for a normal surface singularity $(X,0)$ is a modification $\pi \colon (X',D) \to (X,0)$, together with an isomorphism of complete local rings $\widehat{\mathcal O_{X',p}} \cong \widehat{\mathcal O_{X,0}}$ for some $p\in D$.
Note that in particular $\pi$ is not an isomorphism.
Suppose $(X,0)$ is a sandwiched singularity.
Proving \ref{condition_sandwiched} $\implies$ \ref{condition_kato} amounts to constructing a Kato datum for $(X,0)$.
To do so, recall that we may find a proper birational morphism $h \colon Z \to \bb{A}^2_k$ and a point $z \in h^{-1}(0)$ such that
$\widehat{\mathcal O_{Z,z}} \cong \widehat{\mathcal O_{X,0}}$.
Choose a good resolution of singularities $g \colon Y \to X$ of $(X,0)$, and pick an arbitrary point $q\in g^{-1}(0)$.
Since $Y$ is smooth at $q$ we can find an \'etale map $\varphi \colon Y \to \bb{A}^2_k$ mapping $q$ to the origin.
We consider the fibered product $X' = Y \times_{\bb{A}^2_k} Z$, so that the following diagram is commutative
\[
\xymatrix{
X' = Y \times_{\bb{A}^2_k} Z
\ar[r]^-\varpi \ar[d]^\psi & Y \ar[d]^\varphi \ar[r]^g & X\\
Z \ar[r]^h & \bb{A}^2_k &
}
\]
Observe that the projection map $\varpi\colon X' \to Y$ is birational since $h$ is, and $\psi \colon X' \to Z$ is \'etale since \'etale morphisms are preserved by base change.
The image of $X'$ in $Z$ contains $h^{-1}(0)$ since $\varphi(q) =0$.
We may thus find a point $p\in X'$ over $q$ such that $\psi(p) = z$.
Since $\psi$ is \'etale it induces an isomorphism of complete local rings $\widehat{\mathcal O_{X',p}} \cong \widehat{\mathcal O_{Z,z}}$, and the latter is isomorphic to $\widehat{\mathcal O_{X,0}}$.
It follows that the composition $g \circ \varpi \colon X' \to X$, which is a modification (that is, $(g \circ \varpi)^{-1}(0)$ is a Cartier divisor) because it factors through the resolution $Y$, defines a Kato datum for $(X,0)$. \hfill$\qed$
\subsection{The implication \ref{condition_kato} $\implies$ \ref{condition_prime}}\label{sec:165}
Suppose that $(X,0)$ is a normal surface singularity and that $\pi \colon (X',D) \to (X,0)$ is a Kato datum for $(X,0)$, with $p\in D$ a point such that $\widehat{\mathcal O_{X',p}} \cong \widehat{\mathcal O_{X,0}}$.
By Proposition~\ref{proposition_propertiesNL_3} there exists a finite set $T'$ of type 1 points of $\NL(X',p)$ such that the open subspace $U=\mathrm{c}_{X'}^{-1}(p)$ of $\NL(X,0)$ is isomorphic to $\NL(X',p)\setminus T'$.
Since $\widehat{\mathcal O_{X',p}} \cong \widehat{\mathcal O_{X,0}}$, it follows that $U\cong \NL(X,0) \setminus T$ for some finite set $T$ of type 1 points of $\NL(X,0)$.
Finally, the closure $\overline{U}$ of $U$ in $\NL(X,0)$ is strictly contained in the latter, since it is contained in $U\cup \Div(X')$ as observed in Remark~\ref{remark_closure_component}.
\hfill$\qed$
\begin{rmk}\label{rem:64}
Let us sketch a proof of the implication \ref{condition_kato} $\implies$ \ref{condition_graph}.
Although not necessary to complete the proof of Theorem~\ref{mainthm}, we obtain in this way a direct argument showing the implication \ref{condition_sandwiched} $\implies$ \ref{condition_graph}, which is equivalent to saying that every sandwiched singularity has a sandwiched dual graph, as mentioned in Remark~\ref{rem-equivalence}.
Suppose that $(X,0)$ is a normal surface singularity and that $\pi \colon (X',D) \to (X,0)$ is a Kato datum for $(X,0)$, with $p\in D$ a point such that $\widehat{\mathcal O_{X',p}} \cong \widehat{\mathcal O_{X,0}}$.
Without loss of generality we may assume that $D$ has simple normal crossings away from $p$, so that in particular $X'$ is smooth away from $p$.
Let $\mu \colon Y \to X$ and $\mu' \colon Y' \to X'$ be the minimal good resolutions of the singularities of $(X,0)$ and $(X',p)$ respectively.
Since $Y'$ is a good resolution of $(X',p)$ it is also a good resolution of $(X,0)$, so that there exists a birational morphism $\pi' \colon Y' \to Y$ satisfying $\mu \circ \pi' = \pi \circ \mu'$.
This morphism is not an isomorphism, for otherwise $\pi$ would be an isomorphism as well.
Moreover, since $Y'$ and $Y$ are smooth, $\pi'$ is a composition of point blow-ups.
The graph $\Dual\big((\pi\circ\mu')^{-1}(0)\big)$ is thus a nontrivial modification of $\Dual\big(\mu^{-1}(0)\big)$ and contains $\Dual\big((\mu')^{-1}(p)\big)$ which is isomorphic (as a weighted graph) to $\Dual\big(\mu^{-1}(0)\big)$.
This proves that the latter graph is sandwiched as required.
\end{rmk}
\section{Complex analytic sandwiched singularities}
\label{section_complexanalytic}
In this section we work with \emph{complex analytic varieties}.
In this context, a sandwiched singularity $(X,0)$ is a germ of normal complex analytic surface such that the completion of its analytic local ring is sandwiched in the sense of Definition~\ref{def-sandwich}.
Our aim is to characterize such singularities in terms of their complex link, proving Theorems~\ref{thm3} and~\ref{thm2}.
\subsection{Pseudoconvex $3$-folds} \label{sec:pdcvx3folds}
Let us recall some definitions and terminology from~\cite{kato:cpctcplxsurfwithGSPH}.
A \emph{real-analytic strongly pseudoconvex $3$-fold} $\Sigma$ is a smooth (real-analytic) hypersurface in a smooth complex surface $S$ such that for any $p\in \Sigma$ there exists an open subset $U$ of $S$ containing $p$ and a real-analytic strictly plurisubharmonic function $\rho \colon U \to \bb{R}$ such that $\Sigma \cap U = \rho^{-1}(0)$.
A real-analytic strongly pseudoconvex $3$-fold $\Sigma$ \emph{bounds a Stein domain} if $\Sigma$ is compact and there exists an embedding of a tubular neighborhood of $\Sigma$ in $S$ into
a normal (but possibly singular) complex surface $X$ such that one connected component of $X \setminus \Sigma$ is Stein.
We shall need one more piece of terminology.
A compact real $3$-fold $\Sigma$ in a compact complex surface $S$ is said to be \emph{global} when $S\setminus \Sigma$ is connected.
\smallskip
Archimedean links are the main example of real-analytic strongly pseudoconvex $3$-folds bounding a Stein domain.
Let $(X,0)$ be a normal surface singularity, and fix an embedding of $(X,0)$ inside the unit ball in some complex affine space $\bb{C}^n$ such that $0$ is sent to the origin.
The function $\rho(z) = \rho(z_1, \ldots , z_n) := \sum_{i=1}^n |z_i|^2$ is real-analytic and strictly plurisubharmonic.
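Indeed, for every nonzero vector $\xi\in\bb{C}^n$ the Levi form of $\rho$ satisfies
\[
\sum_{i,j=1}^n \frac{\partial^2\rho}{\partial z_i\partial\bar z_j}(z)\,\xi_i\bar\xi_j=\sum_{i=1}^n|\xi_i|^2>0,
\]
so $\rho$ is strictly plurisubharmonic at every point.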
There exists $\varepsilon_0$ small enough so that for any $0 < \varepsilon < \varepsilon_0$ the intersection of the sphere $\{\rho = \varepsilon\}$ with $X$ is transverse, so that $\LC{\varepsilon}(X,0) := X \cap \{ \rho = \varepsilon\}$ is a real-analytic strongly pseudoconvex $3$-fold, and
bounds the Stein domain $X_\varepsilon := X \cap \{\rho < \varepsilon\}$.
Such an $\varepsilon_0$ may be taken to be maximal with respect to the above property, and any $\varepsilon$ strictly smaller than this threshold is said to be \emph{admissible}.
Observe that the diffeomorphism type of $\LC{\varepsilon}(X,0)$ does not depend on the choice of an admissible $\varepsilon$, but its embedding as a real-analytic $3$-fold in $X$ does.
In fact, the diffeomorphism type of $\LC{\varepsilon}(X,0)$ is also independent of the choice of an embedding in $\bb{C}^n$, see~\cite[Proposition~2.5]{MR3112993}.
\subsection{Kato surfaces}\label{ssec:Katosurfaces}
A special case of real-analytic strongly pseudoconvex $3$-folds is given by \emph{spherical shells}, corresponding to the boundary of the unit ball $B$ in $\bb{C}^2$, i.e., to the link of a regular point $(X,0)$.
In \cite{kato:cptcplxmanifoldsGSS}, M. Kato considers the following construction to produce compact complex surfaces admitting a global spherical shell.
Let $Y$ be any connected open neighborhood of $\overline{B}$ in $\bb{C}^2$,
and let $\pi\colon Y' \to Y$ be a proper bimeromorphic map which is an isomorphism above $Y \setminus \{0\}$.
Let $y$ be a point in $\pi^{-1}(0)$, and pick a relatively compact neighborhood $U$ of $y$ in $\pi^{-1}(B)$ such that there exists a biholomorphism $\sigma\colon Y \to U$.
We call the pair $(\pi,\sigma)$ a \emph{regular geometric Kato datum}, to distinguish this definition from the one given in \S\ref{sec:164}.
One can then define a compact complex surface $S=S(\pi,\sigma)$, called a \emph{Kato surface}, obtained from $Y' \setminus \sigma(\overline{B})$ by gluing $Y'\setminus \pi^{-1}(\overline{B})$ and $\sigma(Y\setminus \overline{B})$ using $\sigma \circ \pi$.
Kato surfaces have been studied intensively in the literature, see for example the monograph of G. Dloussky~\cite{dloussky:phdthesis}. They are compact complex surfaces with negative Kodaira dimension, $b_1 =1$, $b_2 >0$, and they admit a global spherical shell.
In the Kodaira classification of compact complex surfaces, they belong to the VII$_0$ class, and it is believed that they are the only examples of surfaces in this class having $b_2>0$,
see~\cite{MR2726099}.
\subsection{Proof of Theorem~\ref{thm3}}\label{ssec:proofthm3}
In this section we fix a sandwiched singularity $(X,x_0)$. Our aim is to realize its archimedean link as a global real-analytic strongly pseudoconvex $3$-fold in a compact complex surface $S$ that has a global spherical shell.
Our first objective is to construct a complex analytic version of a Kato datum attached to $(X,x_0)$.
Let $Y$ be a connected neighborhood of the closed unit ball in $\bb{C}^2$.
By Artin's approximation theorem and \S \ref{ssec:sandwich}, we may find a proper bimeromorphism $\pi \colon Y' \to Y$ that is an isomorphism over $Y\setminus \{0 \}$, and a non-empty connected divisor $D \subset \pi^{-1}(0)$ such that the normal complex analytic germ obtained by contracting $D$ to a point is isomorphic to $(X,x_0)$.
We write $\mu \colon Y' \to X$ for the contraction morphism.
It is a proper bimeromorphism that is a local isomorphism at any point of $D$ and contracts $D$ to the point $x_0$.
Observe that we may (and shall) assume that the support of $D$
is the union of all rational curves in the exceptional divisor $\pi^{-1}(0)$ of self-intersection $\le -2$
by~\cite[Corollary 1.14]{spivakovsky:sandsingdesingsurfNashtransf}.
In that case the map $\mu$ is the minimal good resolution of $(X,x_0)$. Define the proper bimeromorphism $\eta \colon X \to Y$ by setting $\eta := \pi \circ \mu^{-1}$.
Consider any embedding of a local neighborhood of $x_0$ in $X$ into the unit ball in $\bb{C}^n$ that sends $x_0$ to the origin, so that we can talk about its complex link $\LC{\varepsilon}(X,x_0)$ for any admissible $\varepsilon \ll 1$.
Recall the definition of the Stein domain $X_\varepsilon$ from \S\ref{sec:pdcvx3folds}.
Now pick any point $y \in \mu^{-1}(x_0)$, and choose a neighborhood $U$ of $y$ such that $\mu(U)$ is relatively compact in $X_\varepsilon$ and there exists a biholomorphism $\sigma\colon Y \to U$ (we possibly have to shrink $Y$ in order to do that).
We construct a complex surface $X'$ by patching together $Y' \setminus \sigma(\overline{B})$ and $X$
using $\sigma \circ \eta \colon X \setminus \eta^{-1}(\overline{B}) \to \sigma(Y\setminus\overline{B})$.
The identification of $X$ with its copy inside $X'$ induces a holomorphic map $\tilde{\sigma} \colon X \to X'$ that is a biholomorphism onto its image, and such that $\tilde{\sigma}(X)$ is relatively compact inside $X'$.
We also have a proper bimeromorphic map $\tilde{\pi} \colon X' \to X$ defined by $\tilde{\pi} = \mu$ on $Y' \setminus \sigma(\overline{B})$ and by $\tilde{\pi}= \mu \circ \sigma \circ \eta$ on $X$.
We proceed to construct a compact complex surface following Kato's construction as in the previous section. Pick three admissible positive real numbers $ \varepsilon_- < \varepsilon< \varepsilon_+$, and set $X'_t = \tilde{\pi}^{-1}(X_t)$ for every $t$ in $\{\varepsilon_-, \varepsilon, \varepsilon_+\}$.
Define the surface $\tilde{S}$ by considering $X'_{\varepsilon_+} \setminus \tilde{\sigma}(\overline{X_{\varepsilon_-}})$ and gluing together
$X'_{\varepsilon_+} \setminus \overline{X'_{\varepsilon_-}}$ and $\tilde{\sigma}(X_{\varepsilon_+} \setminus \overline{X_{\varepsilon_-}})$ via the map $\tilde{\sigma} \circ \tilde{\pi}$.
Observe that the canonical map $\overline{X'_\varepsilon} \setminus \tilde{\sigma}(X_\varepsilon) \to \tilde{S}$ is surjective,
hence $\tilde{S}$ is compact.
It is a smooth surface, since the only singularity of $X'_{\varepsilon_+}$ lies inside $\tilde{\sigma}(\overline{X_{\varepsilon_-}})$.
By construction, $\tilde{S}$ contains a neighborhood of the archimedean link $\LC{\varepsilon}(X,x_0)$, which is hence realized as a real-analytic strongly pseudoconvex $3$-fold bounding a Stein domain in $\tilde{S}$.
To see that $\tilde{S} \setminus\LC{\varepsilon}(X,x_0)$ is connected, it is enough to see that $X'_{\varepsilon} \setminus \overline{\tilde{\sigma}(X_{\varepsilon})}$ is connected.
But $\tilde{\sigma}(\LC{\varepsilon}(X,x_0))$ is a compact and connected real $3$-fold in $X'_{\varepsilon}$, hence $X'_{\varepsilon} \setminus \tilde{\sigma}(\LC{\varepsilon})$ has at most two components, one of which is $\tilde{\sigma}(X_{\varepsilon})$.
This implies that $X'_{\varepsilon} \setminus \overline{\tilde{\sigma}(X_{\varepsilon})}$ is connected.
It remains to see that $\tilde{S}$ contains a global spherical shell.
This follows from~\cite[Proposition~2]{kato:cpctcplxsurfwithGSPH}, whose proof is given on pp.~541--546. The proof should be read by replacing $Z(\delta)$ (resp. $A$, $0^*$, and $g$) by $X'_\varepsilon$ (resp. by $\tilde{\pi}^{-1}(x_0)$, $\sigma(x_0)$ and $\tilde{\sigma} \circ \tilde{\pi}$).
One can also see it directly, by showing that $\tilde{S}$ is biholomorphic to the Kato surface $S=S(\pi,\sigma)$ associated with the regular Kato datum $(\pi,\sigma)$.
Define a map $\eta'\colon X'\to Y'$ by setting $\eta'=\id$ on $Y' \setminus \sigma (\overline{B})\subset X'$, and $\eta'=\sigma \circ \eta$ on $X\subset X'$.
Denote by $S^*$ the surface obtained from $X' \setminus \tilde{\sigma}\big(\eta^{-1} (\overline{B})\big)$ by gluing $X'\setminus \tilde{\pi}^{-1}\big(\eta^{-1} (\overline{B})\big)$
and $\tilde{\sigma}\big(Y\setminus \eta^{-1} (\overline{B})\big)$
using $\tilde{\sigma} \circ \tilde{\pi}$.
The map $\eta'$ then induces a biholomorphism between $S^*$ and $S$.
We show that $S^*$ and $\tilde{S}$ are biholomorphic.
To that end, consider the surface $S'$ obtained from $X' \setminus \tilde{\sigma}(\overline{X_{\varepsilon_-}})$ by gluing $X' \setminus \overline{X'_{\varepsilon_-}}$ to $\tilde{\sigma}(X\setminus \overline{X_{\varepsilon_-}})$ via the map $\tilde{\sigma} \circ \tilde{\pi}$.
The inclusions $\imath^*\colon X' \setminus \tilde{\sigma}(\eta^{-1} (\overline{B})) \hookrightarrow X' \setminus \tilde{\sigma}(\overline{X_{\varepsilon_-}})$ and $\tilde{\imath}\colon X'_{\varepsilon_+} \setminus \tilde{\sigma}(\overline{X_{\varepsilon_-}}) \hookrightarrow X' \setminus \tilde{\sigma}(\overline{X_{\varepsilon_-}})$ induce biholomorphisms $\Phi^*\colon S^* \to S'$ and $\tilde{\Phi}\colon\tilde{S} \to S'$.\hfill$\qed$
\subsection{Proof of Theorem~\ref{thm2}}
As above, we fix a closed embedding of $X$ in the open unit ball in $\bb{C}^n$, define $\rho (z) := \sum_{i=1}^n |z_i|^2$, and we write $\LC{\varepsilon}(X,0):= X \cap \{\rho = \varepsilon\}$ for any admissible $\varepsilon>0$, $X_\varepsilon:= X \cap \{ \rho < \varepsilon\}$, and $A_{\varepsilon_-,\varepsilon_+} := X_{\varepsilon_+} \setminus \overline{X_{\varepsilon_-}}$.
We suppose that for some admissible $\varepsilon$ the manifold $\LC{\varepsilon}(X,0)$ can be realized as a global strongly pseudoconvex $3$-fold in a smooth compact complex surface $S$.
By assumption, there exist $\varepsilon_- < \varepsilon < \varepsilon_+$ and a holomorphic embedding $\imath \colon A_{\varepsilon_-,\varepsilon_+} \to S$ such that $S\setminus \imath(A_{\varepsilon_-,\varepsilon_+})$ is connected.
Let $X'$ be the surface obtained by gluing $S\setminus \imath(\LC{\varepsilon}(X,0))$ and $X_{\varepsilon_+}$ together along the open subsets $\imath (A_{\varepsilon,\varepsilon_+})$ and $A_{\varepsilon,\varepsilon_+}$ via the map $\imath^{-1}$.
This surface has a single singularity at the point $0'$ corresponding to the point $0$ of $X_{\varepsilon_+}$, and $(X',0')$ is a normal singularity which is analytically isomorphic to $(X,0)$.
Denote by $\sigma$ the map induced by the identification of $X_{\varepsilon_+}$ inside $X'$.
It is a holomorphic embedding mapping the singular point $0$ to $0'$.
Observe that there is a biholomorphic copy $A$ of $A_{\varepsilon_-,\varepsilon}$ that is included in $X'$ and whose complement in $X'$ is compact.
Since $X_{\varepsilon}$ is Stein, the natural biholomorphism $A \to A_{\varepsilon_-,\varepsilon}$ extends to a proper bimeromorphic map $\pi \colon X' \to X_{\varepsilon}$ by a classical theorem of Hartogs.
Note that $\pi$ could be a local isomorphism at $0'$, and it does not necessarily send $0'$ to $0$.
\smallskip
If $\pi(0')\neq 0$, then $\pi$ is a proper bimeromorphic map from $(X',0')$ onto a smooth point in $X_{\varepsilon}$,
therefore $(X',0')$ (and hence $(X,0)$) is a sandwiched singularity as was to be shown.
We may thus assume that $\pi(0') = 0$ so that $0'$ is fixed by the map $f:=\sigma\circ \pi \colon X' \to X'$.
\smallskip
First suppose that $\pi$ does not induce a local biholomorphism from $(X',0')$ to $(X,0)$.
Then the maps $\pi \colon X' \to X_\varepsilon$ and $\sigma\colon X_\varepsilon \to X'$ define a Kato datum in the sense of page~\pageref{def:kato datum}.
We deduce that $(X,0)$ is sandwiched by the implication \ref{condition_kato}$\Rightarrow$\ref{condition_sandwiched} of Theorem~\ref{mainthm}.
The fact that $\tilde{S}$ contains a global spherical shell follows similarly as before from~\cite[Proposition~2]{kato:cpctcplxsurfwithGSPH}.
\smallskip
Now suppose that $\pi$ induces a local biholomorphism from $(X',0')$ to $(X,0)$.
Then the map $f$ induces a local biholomorphism from $(X',0')$ to itself and we have
$K:=\bigcap_{n\in\bb{N}} f^n(\sigma(X_\varepsilon)) = \{ 0'\}$.
Indeed, $K$ is a compact subset of $\sigma(X_\varepsilon)$, and, since the latter can be realized as a bounded set in $\bb{C}^n$, Montel's theorem applies and all limits of the sequence of iterates $\{f^m\}_{m\ge0}$ must be constant (this argument is due to M.~Kato; see the proof of \cite[Lemma 2]{kato:cptcplxmanifoldsGSS}). But $f$ fixes $0'$, hence $K = \{ 0'\}$.
In other words, $f \colon (X',0')\to (X',0')$ is a contracting automorphism in the terminology of~\cite{MR3270424}.
Let $S(f)$ be the space of orbits of $f$, that is, the quotient of $X'\setminus\{0'\}$ by the equivalence relation defined by $x \simeq x'$ if and only if $f^n(x) = f^m(x')$ for some positive $n,m\in\bb{N}$. This space is naturally a compact complex surface. Observe that one has a natural holomorphic map from $S\setminus \imath\big(\LC{\varepsilon}(X,0)\big) \subset X'$
to $S(f)$, and that this map actually descends to a map $S \to S(f)$.
It follows that $S$ is a modification of $S(f)$.
The singularity $(X,0)$ and the geometry of $S(f)$ are completely described in~\cite[Theorem 7.5]{MR3270424}.
We are thus in one of the following situations.
\begin{enumerate}[label=Case \arabic{enumi}.,leftmargin=0pt, itemindent=40pt]\setlength\itemsep{1mm}
\item The singularity $(X,0)$ is a cyclic quotient singularity. In this case \cite[Corollary~B]{MR3270424}
shows that $S(f)$ is a Hopf surface. More precisely, the universal cover of $S(f)$ is isomorphic to $\bb{C}^2\setminus\{0\}$ and its fundamental group is the subgroup of polynomial automorphisms generated by $\gamma$ and $\tilde{f}$ in the notations of \cite[Example~7.1]{MR3270424}.
In particular, this fundamental group is not cyclic, so that $S(f)$ is a secondary Hopf surface. We are in case \ref{item:thm2b} of the theorem.
\item \label{item:2} There exist a Riemann surface $C$, a line bundle $L\to C$ of negative degree, and a finite group $G$ of automorphisms of $C$ that acts linearly on $L$. Let $X'$ be the surface obtained by contracting the zero section in the total space of $L$, and let $0'$ be the image of the zero section in $X'$.
Then the germ $(X,0)$ is isomorphic to a neighborhood of the image of $0'$ in the quotient space $X'/G$.
Moreover, one can find a positive integer $N\ge 1$ and a complex number $\alpha$ of norm smaller than $1$ such that $f^N$ lifts to a linear map acting by multiplication by $\alpha$ on the fibers of $L$.
By~\cite[Lemma 8.1]{MR3270424}, the natural map $S(f^N)\to S(f)$ is a Galois cyclic (unramified) holomorphic cover.
\smallskip
\begin{enumerate}[label=Case 2\alph{enumii}.,leftmargin=0pt, itemindent=45pt]\setlength\itemsep{1mm}
\item Suppose first that the genus of $C$ is positive.
Then $(X',0')$ is not rational, hence $(X,0)$ is not rational either by~\cite[Claim 6.11]{MR1368632}.
In particular $(X,0)$ is not a quotient singularity. By~\cite[Theorem~A]{MR3270424}, it follows that $(X,0)$ is weighted homogeneous.
Finally the proof of \cite[Corollary~B]{MR3270424} shows that $S(f^N)$ is a principal elliptic fiber bundle of Kodaira dimension $0$ or $1$ so that we are in case \ref{item:thm2a} of our theorem.
\item Suppose that $C=\bb{P}^1$ is the Riemann sphere. Then $(X',0')$ is a cyclic singularity and
$(X,0)$ is a quotient singularity. Again the proof of \cite[Corollary~B]{MR3270424} shows that
$S(f)$ is a Hopf surface.
If the group $G$ acting on $(X',0')$ is trivial, then $(X,0)$ is a cyclic quotient singularity, and the surface $S(f)$ is either a primary or a secondary Hopf surface.
In the former case it contains a global spherical shell and we are in case \ref{item:thm2c};
in the latter case we are in case \ref{item:thm2b} of our theorem.
When $G$ is non-trivial, the fundamental group of $S(f)$ is not cyclic, hence $S(f)$ is a secondary Hopf surface and we are in case \ref{item:thm2b}.
\end{enumerate}
\end{enumerate}
This completes the proof of Theorem~\ref{thm2}. \hfill$\qed$
\begin{comment}
CF: I am trying to clarify Theorem B, especially the link with the CR structure on the link.
- The link of any sing. can be embedded in a projective variety (with its CR structure). of course
the $3$-fold is not global. this follows from Artin's theorem, and a stronger version was proved by Lempert in 1995 (algebraic approximations)
- The CR structure on a link determines the singularity: this follows from Hartogs-Bochner type extension theorem. It is attributed to Sherk "CR structures on the link of an isolated singular point".
- Embeddability and fillability of a strongly pseudoconvex CR structure are equivalent, see
"embeddability of some strongly pseudoconvex CR manifolds" by Marinescu and the discussion in the introduction.
\smallskip
8.
Other comments: several structure can be put on the archimedean link.
The real-analytic structure, the CR structure (up to diffeos), the contact structure.
Observe that the latter is defined up to isotopy by Varchenko, " Contact structures and isolated singularities" (unfortunately this paper is in russian).
The first two structures determines the singularity. The latter does not: Caubel and Popescu-Pampu proved that there exists a unique contact structure on the link of a normal surface singularity (viewed as an oriented $3$-fold). On the other hand the oriented diffeomorphism type of the link only determines the weighted graph of a minimal good resolution.
Remark: this is quite interesting that the classification we obtain with Matteo is the same as in the paper by Neumann [Geometry
of quasihomogeneous surfaces singularities] where a classification of surface singularites whose link carries a homogeneous geometric structure.
Then $(X,0)$ is quasi-homogeneous or sandwich up to a finite quotient.
\smallskip
obs: a finite quotient of a sandwiched singularity remains sandwich. Indeed $X$ is obtained by blowing up a primary ideal $I$ in $(\bb{C}^2,0)$.
Symetrize $I$ by the action of $G$.
\end{comment}
\bibliographystyle{alpha}
\section{Introduction}
Mastery learning sits at the foundation of intelligent tutoring
systems \cite{corbett2001cognitive,wenger2014artificial}.
The philosophy of mastery learning assumes a well-structured
curriculum, and posits that students progress within the curriculum as
they master its skills
\cite{bloom1968learning,kulik1990effectiveness}. The Cognitive Tutor
Algebra I (CTAI) system, developed by Carnegie Learning, Inc., is one of the best studied and best regarded examples of modern educational software. It is a blended learning system for teaching algebraic concepts and principles to middle and high school students, comprising both textbook materials and software. The software component of the curriculum allows students to progress at their own pace and receive individualized feedback on their performance. A large-scale randomized effectiveness trial conducted by the RAND Corporation showed that, in some circumstances, CTAI boosts students' scores on an Algebra I posttest by about one fifth of a standard deviation \cite{pane2014effectiveness}. CTAI's success in this experiment would seem to validate its pedagogy: mastery learning, and the Algebra I curriculum on which it is based.
However, the theory underlying CTAI does not always determine its
use. To be sure, the software has a standard set of algebra topics,
divided into units and further into sections; and a standard sequence
for presenting them. But this precise curriculum is not mandatory. At
the request of educators, it can be customized by altering what units
or sections are included (including, possibly, material from a
different standard curriculum such as geometry), as well as their
sequence, to conform to local or state standards or scope and sequence
guides. Further, teachers have the option of moving students within
the curriculum, regardless of the software's estimate of their skill
mastery.
This article examines teachers' and schools' adherence and
non-adherence to the standard, mastery-based CTAI curriculum, using
data from the RAND study, a seven-state randomized controlled trial of
CTAI in high schools and middle schools. That study found a significant
positive effect of CTAI in high schools, during their second year of
implementation but not the first. Students in the treatment group of
the study were enrolled in one or more of the standard or customized
curricula during their participation in the study, and the software
logged aspects of their usage, including time spent, sections
encountered, and whether the software judged the students to have
mastered the sections. Adopting standard procedures of an
effectiveness trial, both Carnegie Learning and the researchers
running the study restricted their support and oversight to what is
typically provided outside of an experimental context. Thus, the
software data from the study reflect typical usage. Secondary data analyses used principal stratification to show that students who attempted more sections experienced larger treatment effects, and students who had high or low assistance levels, as opposed to an average level, experienced smaller treatment effects \cite{sales2015exploring,sales2016student}.
\citeN{sales2017role} found that students more likely to master worked
sections of the CTAI software may experience \emph{smaller} effects
than those less likely to achieve mastery, casting doubt on the role
of mastery learning as a mechanism for the treatment effect.
Here we contextualize the previous findings to describe, in detail, the ways
in which schools, teachers, and students violate mastery learning.
We will begin with short discussions of the RAND effectiveness trial and the usage
data it produced.
Sections \ref{sec:curricula}--\ref{sec:order} will describe overall
patterns of usage.
First, we will discuss standard and customized CTAI curricula used in
the study (Section \ref{sec:curricula}).
Next, in Section \ref{sec:usage}, we will describe patterns in the amount of CTAI usage, showing that
it varied widely between states, between years, between schools, and between students.
In particular, the amount of usage decreased from years 1 to
2---though more in some states than others.
Then in Section \ref{sec:order} we will describe how the amount of usage
changed---which units of the CTAI curriculum were worked more and less
from years 1 to 2---and find that the order in which students worked
CTAI's units varied across years. We show that this change is due
mostly, but not completely, to the presence of customized curricula in
year 2.
In Section \ref{sec:mastery} we will describe patterns of mastery in general,
and in Section \ref{sec:cp} we will delve deeply into ``reassignment'': the process in which a
teacher moves a student out of a section he or she has not (yet)
mastered into a new section.
In particular, we will attempt to elucidate teachers' goals in
reassigning students. One hypothesis is the need for teachers to push
ahead students who were falling behind, i.e. to reassign them from
sections on which they were struggling, to allow them to catch up with
the rest of the class. Another hypothesis is that teachers sought to
push students past easier sections to begin working on more relevant
or challenging topics for them. A third hypothesis is that teachers needed to cover certain topics in preparation for an upcoming state exam, and might have reassigned groups of students all at the same time to cover topics that might otherwise not have been covered. This hypothesis may lead to an increase in reassignments as the exam approaches.
We find evidence of all three motivations for reassignment, varying
across classrooms, schools, states, and study years.
Finally, in Section \ref{sec:effects} we will give
quasi-experimental estimates of the effects of reassignment on
students' posttest scores.
We find that reassignment probably decreases student learning,
though the effects vary widely across classrooms.
Section \ref{sec:discussion} will conclude the article with a summary
of findings and discussion.
\section{The RAND Effectiveness Trial}\label{sec:RANDtrial}
The study to measure the effectiveness of CTAI included 7 states, 73
high schools, and 74 middle schools with nearly 18,700 high school
students and 6,800 middle school students participating. Schools were enrolled in a
total of 52 school districts that were distributed among urban,
suburban, and rural areas. Schools were matched on a set of
covariates, and then randomly assigned to the treatment or control
group. Schools in the control group continued with their current
algebra curriculum, and schools in the treatment group used Carnegie
Learning's curriculum, which includes CTAI textbook materials and software. Each school participated for two years, with a different cohort
of students participating the second year (with a small fraction of
students present in the study both years because they repeated
algebra). It should be noted that this study did not include statewide
implementations; the study results cannot be generalized to all
schools within the state. In some states, one large school district
participated, while in other states, a set of smaller school districts
participated. The states included Alabama (AL), Connecticut (CT),
Kentucky (KY), Louisiana (LA), Michigan (MI), New Jersey (NJ), and
Texas (TX). Each state participated in both the middle school and high
school arms of the study, except AL, which participated only in the
middle school arm. The current study focuses on high school students
only.
There are some limitations to the available data for this study.
Log data from some schools, and some students within schools, were
missing either because the log files were not retrievable, or because
of an imperfect ability to link log data to other study data files.
For this reason, this study uses only data from the
18 treatment schools for which at least
80\%
of students in both study years appear in the log data file.
This sample includes 4460 students, around
75\%
of the treated high-school sample.
Table \ref{tab:nByState} gives the number of students in the sample by
state and year.
The states in the table are ordered by the total number of students
they represent in the sample; they will appear in this order in
all of the forthcoming tables and figures.
Some figures will only show data from a subset of states; since so few
students were in New Jersey, it will be excluded from almost all
state-by-state comparisons (but included in analyses that pool across states).
\begin{table}[ht]
\centering
\begin{tabular}{rrrrrrr}
\hline
& TX & KY & MI & LA & CT & NJ \\
\hline
Year 1 & 947 & 890 & 325 & 164 & 112 & 17 \\
Year 2 & 719 & 646 & 285 & 180 & 139 & 36 \\
\hline
\end{tabular}
\caption{Numbers of students in the sample by state and study year}
\label{tab:nByState}
\end{table}
In this sample from 18 schools, 164
students who participated in the RAND study do not appear in the log data; they may not have used the CTAI
software at all, or may have been excluded from the log data for other
reasons.
Since we don't know which is true, we exclude these students from most
analyses.
It is likely that some usage data were missing, even for students who
appear in the usage dataset.
However, it is impossible to know in which cases these data were
missing or why; for the most part, we ignore this problem, but it
should be kept in mind nonetheless.
\section{Standard and Customized Curricula}\label{sec:curricula}
\begin{figure}
\centering
\includegraphics[width=\maxwidth]{curricula-1}
\caption{Percentage of worked problems coming from various courses
(denoted by color, with Algebra II and Geometry bundled as ``$>$Algebra I''), from
standard and customized variants, denoted by shading.}
\label{fig:curricula}
\end{figure}
Students' automatic progress through the Cognitive Tutor (CT) software
is normally governed by
the sequences of sections and units embedded in the software.
Without external meddling, the curriculum a student works on
determines the sequence, and thus which section he or she will be directed to next after
mastering (or exhausting the problems of) the previous section.
In the CTAI effectiveness trial, the most common curriculum was,
naturally, Algebra I.
This came with three closely related variants, due to new software releases.
Students requiring more remediation were able to work on a less
advanced curriculum, called ``Bridge to Algebra,'' and more advanced
students could work on Algebra II or Geometry.
In the second year of the study, some high schools, primarily in
Texas, Michigan, and Kentucky, requested customized variants of the
curricula.
This was typically due to state standards, testing schedules, or local
scope and sequence guidelines.
These ``customized curricula'' altered the order of some sections and
units, and were usually particular to schools.
Figure \ref{fig:curricula} shows the percentage of worked problems
from each curriculum, from standard and customized varieties, by state
and year.
First, note that the vast majority of worked problems were from the
Algebra I sequence.
A small but notable number of less advanced problems were worked in
Kentucky in year 2, and some more advanced problems were worked in
Michigan and Louisiana.
Secondly, note the rise in ``customized curricula'' in year 2 in
Texas, Kentucky, and Michigan, the three states with the most students
in our dataset.
In particular, Texas shifted almost entirely to customized curricula
from years 1 to 2.
Throughout the school year, teachers could have a class of students
working on multiple curricula either sequentially, where the students
changed curricula in lock step, or simultaneously, where students
worked on different curricula at the same time. As an example, two
teachers located in Kentucky had their students working on Algebra I
throughout most of the year and then reassigned them to Algebra II in
the last month of school. In contrast, a different teacher in Kentucky had students variously
enrolled in three different curricula throughout the entire year
(Bridge-to-Algebra, Algebra I, and a customized Geometry curriculum),
while a year 2 teacher in Michigan enrolled students in
three curricula sequentially throughout the year: Algebra I
until November, followed by a customized curriculum until February, and
ending with a different customized curriculum until June. While there are numerous instances of these
uses of multiple curricula in year 2, there are also many occurrences
of teachers who had their students enrolled in the standard Algebra I
throughout the entire year, including all Connecticut teachers.
There were also teachers, mostly in Texas, who used customized
curricula exclusively throughout the
second year.
\section{Student Usage Across States and Years}\label{sec:usage}
\begin{table}[ht]
\centering
\begin{tabular}{rrrrr}
\hline
& Hours & Problems & Sections & Units \\
\hline
Year 1 & 33.47 & 309 & 43.0 & 9 \\
Year 2 & 23.83 & 228 & 36.5 & 10 \\
\hline
\end{tabular}
\caption{Median numbers of hours, problems, sections, and units worked by each student in the dataset in the two years of the study. Students with no usage data were excluded.}
\label{tab:medUsage}
\end{table}
Table \ref{tab:medUsage} shows the median numbers of hours,
problems, sections, and units worked on by each student in the dataset
in the two years of the study.
Usage decreased markedly in the second year: the median
number of hours worked decreased by nearly
10, the median number of problems decreased by
81, and
the median number of sections decreased by
6.5 from
years 1 to 2.
Yet, as discussed below, the median number of units worked increased
by 1.
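For concreteness, per-student medians of this kind can be computed from a per-student usage table along the following lines (a minimal sketch with made-up numbers; the \texttt{usage} data frame and its column names are hypothetical, not the study's actual log format):

```python
import pandas as pd

# Hypothetical per-student usage records; the real log files differ in
# structure, and students with no usage data are excluded beforehand.
usage = pd.DataFrame({
    "student":  [1, 2, 3, 4],
    "year":     [1, 1, 2, 2],
    "hours":    [40.0, 27.0, 30.0, 18.0],
    "problems": [350, 270, 260, 200],
})

# Median usage per study year, one row per student.
medians = usage.groupby("year")[["hours", "problems"]].median()
print(medians)
```

The same pattern extends to sections and units by adding the corresponding columns.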
\begin{figure}
\centering
\includegraphics[width=\maxwidth]{usageTime-1}
  \caption{Boxplots of hours each student spent on Cognitive Tutor
    software by year and state. Students with no timestamp data
    ($n=193$), with anomalous negative time
    ($n=3$) or with more than 110 hours ($n=91$)
    were excluded.}
\label{fig:timeByStud}
\end{figure}
Figure \ref{fig:timeByStud} shows the number of hours students
spent working on the CT software in more detail, via
state-by-year boxplots.
Analogous figures for the numbers of problems and sections students
worked showed similar patterns.
Usage time varied substantially between students and across states and
years.
Students in Texas,
Connecticut, and New Jersey worked far fewer hours than students in
Kentucky, Louisiana, and Michigan.
Not every state reduced its usage from years 1 to 2---while students
in Texas, Kentucky and New Jersey used the software less in the second
year than in the first, students in Michigan, Louisiana, and
Connecticut increased their usage.
Overall, usage varied a bit more in year 2 than in year
1---the median absolute deviation of time spent was 15.9 hours in the first year, compared to 16.9 in the second year.
The increase in variation seems to be driven both by increasing
between-state variation, and a between-student increase in Louisiana.
One intriguing possibility is that the amount of CT usage may have been better tailored to
teachers and students in the second year than in the first. Perhaps
usage increased for
students who stood to gain more from the software and decreased for
students who stood to gain less.
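The dispersion measure used above, the median absolute deviation (MAD), is simply the median of each student's absolute distance from the overall median; a minimal sketch with made-up hours (not the study data):

```python
import statistics

def mad(xs):
    """Median absolute deviation: median of |x - median(xs)|."""
    m = statistics.median(xs)
    return statistics.median(abs(x - m) for x in xs)

# Toy example: the median is 25, so the absolute deviations
# are [15, 5, 5, 25], whose median is 10.
print(mad([10.0, 20.0, 30.0, 50.0]))
```

Unlike the standard deviation, the MAD is insensitive to the extreme usage values visible in the boxplots' tails.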
In contrast to the decreasing numbers of hours, problems, and sections students
worked in year 2, Table \ref{tab:medUsage} shows that the median number of units
students worked increased by
1
in year 2.
This suggests students in year 2 were exposed, on average, to a
slightly wider range of topics.
Figure \ref{fig:unitsByStud} shows boxplots of the numbers of units
worked by state and year.
The geographic variation in units worked mirrors the pattern in Figure
\ref{fig:timeByStud}, with more usage in Kentucky, Michigan, and
Louisiana but less in Texas, Connecticut, and New Jersey.
However, in every state the median year 2 student worked at least as
many different units as the median year 1 student.
Variation in the number of units worked also increased slightly
from years 1 to 2---the interquartile range (IQR) increased in every state
except for Kentucky, where a decrease in IQR was accompanied by an
increase in the number of outliers.
\begin{figure}
\centering
\includegraphics[width=\maxwidth]{unitsWorked-1}
  \caption{Boxplots of the number of units of Cognitive Tutor
    software each student worked, by year and state. Students
    working more than 55 units (7 of 4460) and students
    with no usage data (189) were excluded.}
\label{fig:unitsByStud}
\end{figure}
All in all, students used CT software less in year 2 than in year
1.
On the other hand, students in the second year tended to see a
slightly wider range of topics, and varied somewhat more in their usage.
\section{Working Units in Order---Or Not}\label{sec:order}
Overall, students used CT less in the second year than in the
first.
How was this difference distributed across CTAI units?
\begin{figure}
\centering
\includegraphics[width=\maxwidth]{whichUnits-1}
\caption{The percentages of students with usage data who worked at
least one problem from each unit in the Algebra I curriculum. The
units are arranged in order for the standard curriculum. In the 2008
version of the software, the ``Unit Conversions'' unit was broken up
into two smaller units; for the sake of between-year comparisons, we
re-combined them.}
\label{fig:unitsWorked}
\end{figure}
Figure \ref{fig:unitsWorked} shows the units of algebra along the
horizontal axis, according to their order in the standard CTAI
curriculum.
The vertical axis shows the percentage of students with usage data who
worked each unit.
In year 1, the curve is almost monotonically decreasing, as one would
expect if students adhered to the curriculum.
Students varied in the number of units they worked---with the variation due
to both student ability and the amount of time allocated to CTAI
within a classroom---but they mostly followed the standard curriculum.
Students who worked fewer units stopped earlier in the sequence, and those who worked more units progressed farther.
Hence, earlier units were worked by higher proportions of students
than later units.
In contrast, in year 2 students were much more likely to depart from the
standard unit order.
For instance, Figure \ref{fig:unitsWorked} suggests that some students
skipped ``Unit Conversions'' to work on ``1st
Quadrant Linear Graphs'' or skipped ``1 step Linear Equations'' to work on ``Independent Variables in Linear Models.''
In both these cases, the subsequent unit was worked on by a greater
proportion of students than the immediately prior unit.
Most strikingly, ``Linear Equations with Variables on Both Sides'' was worked by a
greater proportion of students in year 2 than in year 1, and by a
greater proportion of students than any of the previous six units.
Presumably teachers and administrators wanted students to focus on
that unit, perhaps because they found it to be particularly effective,
because students tend to struggle with its main topic, or because its
topic may figure prominently in an upcoming standardized test.
\begin{figure}
\centering
\includegraphics[width=\maxwidth]{unitsWorkedCust-1}
\caption{The percentages of year-2 students with usage data who worked at
least one problem from each unit in the Algebra I curriculum. The
units are arranged in order for the standard curriculum. Students
are divided between those attending schools using primarily a customized
curriculum and those using primarily the standard Algebra I curriculum.}
\label{fig:unitsWorkedCust}
\end{figure}
Most of the variation in unit order was driven by the rise, in year 2,
of customized curricula.
Figure \ref{fig:unitsWorkedCust} divides year-2 students into those
attending schools using primarily a customized curriculum, and those
attending schools using primarily a standardized
curriculum.\footnote{At least
90\% of
problems worked by students at ``Customized'' schools were from a customized curriculum,
and at most
1\% of
problems at ``Standard'' schools were from a customized curriculum.}
Students using a standardized curriculum followed the standard
sequence---more or less---while students using customized curricula
did not.
That said, there were some order violations in the standard group:
specifically, more students worked problems from units ``2-step Linear
Equations'' and ``Exponents'' than worked the preceding units;
this suggests that some teachers used the reassignment tool to
prioritize particular topics.
Of course, teacher reassignment may have occurred in schools with customized
curricula as well---a possibility we will discuss in the next
section.
\begin{figure}
\centering
\includegraphics[width=\maxwidth]{unitsBySchool-1}
\caption{The percentages of year-2 students with usage data in each
school who worked at least one problem from each unit in the Algebra I curriculum. The
units are arranged in order for the standard curriculum. Schools are
classified as either using primarily customized curricula (solid
line) or using primarily the standard Algebra I curriculum (dotted).}
\label{fig:unitsBySchool}
\end{figure}
Figure \ref{fig:unitsBySchool} further decomposes the year-2 results
by school and state, showing a large amount of variation between
states, as well as variation between schools within states.
In Texas, every school used customized curricula, most of which seem to
prioritize some of the same units, for instance, ``Linear Patterns,''
``Independent Variables in Linear Models,'' and
``Linear Equations with Variables on Both Sides.''
On the other hand, there was also variance between schools.
For instance, one school prioritized units ``2 Step Linear Equations''
and ``4-Quadrant Linear Graphs'' while nearly eliminating ``Linear Patterns.''
Between-school variation is evident in the other states, as well.
In four of the five Kentucky schools, nearly every student worked on
the first nine units; in the one Kentucky school that used a
customized curriculum, nearly every student worked on the first 13
units, omitted the 15th (``Lin. Mod. in General Form''), and worked on
the 16th and 17th (``Literal Equations'' and ``Linear Equations with Variables on Both Sides'').
In the remaining school, nearly every student worked on the first
section, but usage decreased rapidly from there.
In one Michigan school which used the standard curriculum,
no students seem to have worked on the ``Linear Models \& Ratios'' section.
If unit order and topic scaffolding are important to CT's mastery
learning mechanism, the wide variation in students' realized curricula
would seem to pose a problem.
The fact that the prescribed order was followed less in the second
year of the study, when CTAI was effective, than in the first year,
when it wasn't, suggests that the standard curriculum may play a
smaller role than one might otherwise imagine.
\section{Mastering the Material---Or Not}\label{sec:mastery}
The central idea behind mastery learning is that students progress
through the curriculum as they master skills.
In the context of CT, skills are clustered within sections, which are
in turn clustered within units.
Students progress from the current section to the next section after
mastering all of the current section's skills.
Ideally, students would master all of the skills in all of the
sections they work.
By default, the software operates by automatically moving students
from section to section based on the sequence of topics defined by the
curriculum in which they are enrolled. In this
software-controlled sequencing, students ideally spend the time
necessary to learn the material of a section, are judged by the
software to have mastered the material, and then ``graduate'' to the
next section.
However, the software will also ``promote'' a student to the next
section if the student exhausts a section's material without mastering
its skills.
Additionally, teachers are able to modify a student's path within the curriculum.
They can ``reassign'' students from their current sections to other
sections earlier or later in the intended sequence, including sections
they worked on previously.
Finally, if the semester ends, or a student stops using CT for some
other reason, while in the middle of working through a
section, the section is designated ``final.''
All in all, each CT section a student encounters ends in one of four possible ways:
mastery, promotion, reassignment, or as the student's final section.
\begin{figure}
\centering
\includegraphics[width=\maxwidth]{overallStatus-1}
\caption{The distributions of outcomes of worked sections, by state and across the
entire sample, in the two study years.}
\label{fig:overallStatus}
\end{figure}
\begin{table}
\centering
\begin{tabular}{rllllll}
&\multicolumn{6}{c}{Year 1}\\
&TX&KY&MI&LA&CT&Overall\\
Final&699&625&242&129&91&1796\\
Reassigned&600&140&220&5&92&1082\\
Promoted&1619&9628&4582&1459&261&17561\\
Mastered&15815&56199&31200&8648&1303&113374\\
Total&18733&66592&36244&10241&1747&133813\\
&\multicolumn{6}{c}{Year 2}\\
Final&1252&660&236&137&114&2412\\
Reassigned&1215&253&275&9&144&2042\\
Promoted&1000&6296&4717&1998&450&14477\\
Mastered&4929&36676&26509&10108&1513&79874\\
Total&8396&43885&31737&12252&2221&98805\\
\end{tabular}
\caption{Numbers of worked sections that ended in each of the
four possible outcomes, across states and study years.}
\label{tab:overallStatus}
\end{table}
Figure \ref{fig:overallStatus} and Table \ref{tab:overallStatus} show
the proportions of worked sections in each state and study year that
ended with mastery, promotion, or reassignment, or as the
student's final section.
In the first year, about
85\% of
worked sections are mastered, except in Connecticut.
Other than in Texas, about
13--15\%
of sections end in promotion.
About 3\% of sections in Texas
and 5\%
in Connecticut end in reassignment, which is even rarer in the other states.
With the exception of Texas, sections tended to be completed similarly
in both years.
In Texas, however, the percentage of sections ending in reassignment
increased by a factor of about
5,
to about 14\%.
The proportion of Texas sections labeled ``Final'' increased as
well---the expected result of decreasing the overall number of worked
sections and holding fixed the likelihood of ending
usage while in the middle of a section.
Across states, sections ended in reassignment at a rate of about
1\% in year 1 and
2\% in year 2.
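As a rough check, these percentages can be recomputed directly from the overall counts in Table \ref{tab:overallStatus}. A minimal Python sketch (counts copied from the table's Overall column):

```python
# Overall counts of worked-section outcomes, copied from the outcomes table.
year1 = {"Final": 1796, "Reassigned": 1082, "Promoted": 17561, "Mastered": 113374}
year2 = {"Final": 2412, "Reassigned": 2042, "Promoted": 14477, "Mastered": 79874}

def proportions(counts):
    """Each outcome's share of all worked sections."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

p1, p2 = proportions(year1), proportions(year2)
print(round(p1["Mastered"], 3))    # roughly 0.85: year-1 mastery rate
print(round(p1["Reassigned"], 3))  # about 0.01: year-1 reassignment rate
print(round(p2["Reassigned"], 3))  # about 0.02: year-2 reassignment rate
```
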
\subsection{Section Mastery and Curriculum}
\begin{figure}
\centering
\includegraphics[width=\maxwidth]{statusCur-1}
\caption{The distributions of outcomes of worked sections, by
curriculum, in the two study years. (There were no Bridge-to-Algebra
sections in customized curricula in our dataset.)}
\label{fig:statusCur}
\end{figure}
A well-designed curriculum can, in theory, play an important role in
students' attainment of mastery.
Students who work on appropriate problems that build on their current
set of skills should be more likely to master new skills than students
working on problems above their level.
What role did variations in the CT curriculum play in mastery during the effectiveness trial?
Figure \ref{fig:statusCur} shows the proportions of worked sections
that were mastered or ended in promotion, reassignment, or finality, in
standard and customized versions of each CT curriculum.
Mastery proportions do, indeed, depend on curriculum.
Specifically, students mastered sections from more advanced curricula
less frequently.
Sections from the most basic curriculum, Bridge to Algebra, were
mastered
92\%
of the time;
those from Algebra I were mastered
83\%
of the time, and those from more advanced curricula were mastered
at a rate of
75\%.
This is unsurprising, since more advanced curricula may be expected
to be more challenging.
However, it may suggest that some students studying advanced topics
would fare better in more standard curricula.
Algebra I sections from customized curricula tended to end in
reassignment more often than sections from the standard Algebra I
curriculum (3\% vs.
1.4\%, in year 2).
This may indicate an overall skepticism towards the Carnegie Learning standards
among certain schools and teachers, manifested both in the adoption of
alternative curricula and in reassignment.
\section{Digging Deeper into Section Reassignment}\label{sec:cp}
The proportion of worked sections in our dataset ending in
reassignment was small.
Nevertheless, since reassignment represents the only mechanism by
which individual teachers can affect their students' progress through
the Cognitive Tutor, exploring patterns of reassignment can provide
insight into how CT was used.
\subsection{How Do Reassignment Patterns Vary?}
\begin{figure}
\centering
\includegraphics[width=\maxwidth]{vcs-1}
\caption{Results from a set of eight multilevel logistic regressions
predicting section reassignment. For each year, in the entire sample (``Overall'')
and in the three states with the largest numbers of reassignments (Texas,
Kentucky, and Michigan), we regressed a binary variable indicating
whether a section ended in reassignment on random intercepts for
school, class, student, and unit, and in the overall case, for state
as well, and recorded their variances. The residual variance was set
to the variance of the standard logistic distribution,
$\pi^2/3$. These bar charts give the proportion of the total variance
attributable to each random effect.}
\label{fig:vc}
\end{figure}
Teachers alone control reassignment.
Nevertheless, the factors influencing student reassignment vary at a
number of levels.
For instance, state and district standards may prod teachers into
reassigning students to particular units.
Some principals may encourage teachers to adhere to the official
curriculum and avoid reassignment.
Some students may be more prone to reassignment than others.
Certain units in the CTAI curriculum may be harder than others,
causing students to tarry and teachers to reassign.
Finally, a host of other factors, at these levels and others, may spur reassignment.
To better understand the source of the variation in
reassignment---what drives some, but not other, sections worked by
students to end in
reassignment---we fit a set of multilevel models.
We fit separate models to data from each of the three states with the highest numbers of reassignments, Texas,
Kentucky, and Michigan, and in the sample as a whole, in each of the
two study years, yielding a total of eight models.
Each model was a logistic regression: a binary indicator for section
reassignment was regressed on a random intercept for unit, as well as
nested random intercepts for student, classroom, and school.
Models fit to data from all six states included an additional random
intercept for state.
Logistic regression can be represented in terms of an underlying
latent variable $Z^*$: student $i$ working section $sec$ is reassigned
when $Z_{sec,i}^*>0$.
The model for $Z^*$ is:
\begin{equation*}
Z^*_{sec,i}=\alpha_0+\beta_{u[sec]}+\gamma_i+\delta_{c[i]}+\epsilon_{s[i]}+e_{sec,i},
\end{equation*}
where $\alpha_0$ is an overall intercept,
and $\beta_{u[sec]}$, $\gamma_i$, $\delta_{c[i]}$, and $\epsilon_{s[i]}$
are random intercepts for the unit in which $sec$ appears, for student
$i$, for $i$'s classroom, and for $i$'s school, respectively.
Again, the model fit to all six states includes an additional random
intercept for state.
The random intercepts are modeled as independent and normally
distributed, each with its own variance.
The regression error $e_{sec,i}$ is given the standard logistic
distribution, with ``residual'' variance $\pi^2/3$.
It is convenient to represent variance in reassignment probabilities
in terms of the variance of $Z^*$.
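In this latent-variable parametrization, the share of variance attributed to each level is its intercept variance divided by the total variance of $Z^*$, with the logistic error contributing $\pi^2/3$. A sketch with purely hypothetical variance values (the function and numbers are ours, not the fitted estimates):

```python
import math

def variance_shares(var_components):
    """Share of total latent variance attributable to each random effect.

    The residual is the standard logistic error, with variance pi^2 / 3.
    """
    resid = math.pi ** 2 / 3
    total = sum(var_components.values()) + resid
    return {k: v / total for k, v in var_components.items()}

# Hypothetical random-intercept variances on the latent (logit) scale:
vc = {"state": 1.0, "school": 1.8, "class": 0.3, "student": 0.1, "unit": 0.8}
shares = variance_shares(vc)
print({k: round(v, 2) for k, v in shares.items()})
```
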
Figure \ref{fig:vc} gives the variance components estimated from these
logistic regressions: variances of the random intercept terms, as a
percentage of the total variance of $Z^*$.
Overall, in both years of the study, the largest determinant of
reassignment was school, accounting for
30\% of the variation in year 1,
and 43\% in year 2.
After school, state was the most important factor, accounting for
20\% and
21\% of the variation in the two years, followed by unit,
which accounted for 16\% and 14\%.
Surprisingly, classroom and student-level factors only accounted for
7\% and
3\% in year 1, respectively,
and 6\% and
1\% in year 2.
The pattern was similar in Texas---where school accounted for over half
the variation in reassignment in both years---and in Michigan to a
lesser extent.\footnote{Percentages in state-specific models, in which
there is no between-state variance, cannot be
directly compared to those from the overall model.}
In Kentucky, unit played the largest role
(52\%) in year 1, and classroom played
the largest role in year 2 (50\%).
Across states and years, student level factors never accounted for
more than 5\% of
the variation in reassignment.
Other than in Kentucky in year 2, classroom never accounted for more
than 12\% of
the variation.
\textbf{Summary.} Although teachers control reassignment, their decisions appear to be largely
determined by broader policies, occurring at the state or school level.
\subsection{When Are Students Reassigned?}
The timing of reassignments can also provide a window into what drives
teachers' decisions to reassign students.
Figure \ref{fig:byMonth} shows the proportion of worked sections in
each month that end in reassignment.
In both years, reassignments were much more common in the second half
of the school year than in the first.
This may be the result of teachers learning how to use the software as
the year progresses, or responding to the pressure of upcoming
standardized tests by accelerating students' progress and reassigning
students to relevant sections.
As we've seen, reassignment was more common in year 2 than in year 1.
In fact, reassignment increases fairly steadily over the entire length
of the study.
Through December of the first year, reassignment was rare. From
January through May of year 1, between one and two percent of sections
ended in reassignment.
Year 2 began where year 1 left off, with one to two percent of
sections reassigned.
Finally, from February through May of the second year, the rate of
reassignment increased again.
\begin{figure}
\centering
\includegraphics[width=\maxwidth]{byMonth-1}
\caption{The proportion of worked sections ending in reassignment, by
month and year.}
\label{fig:byMonth}
\end{figure}
\begin{table}
\centering
\begin{tabular}{rll|ll}
&\multicolumn{2}{c}{Year 1}&\multicolumn{2}{c}{Year 2}\\
&Aug--Dec&Jan--Jun&Aug--Dec&Jan--Jun\\
TX & 18 & 582 & 281 & 932 \\
KY & 9 & 131 & 52 & 201 \\
MI & 47 & 173 & 245 & 30 \\
LA & 0 & 5 & 9 & 0 \\
CT & 14 & 78 & 84 & 60 \\
NJ & 3 & 22 & 17 & 129 \\
\hline
Overall & 91 & 991 & 688 & 1352 \\
\hline
\end{tabular}
\caption{The number of reassignments in each state and overall in the
first and second halves of the year.}
\label{tab:byMonth}
\end{table}
Figure \ref{fig:byMonthState} and Table \ref{tab:byMonth} decompose
these trends by state. Figure \ref{fig:byMonthState} includes just
the three states with the highest number of reassignments, Texas,
Kentucky, and Michigan. Table \ref{tab:byMonth} shows data for each of
the six states and overall.
For the two study years, Figure \ref{fig:byMonthState} shows the proportion of all of each
state's reassignments that occurred in each month of the school year.
(Note that while both Figures \ref{fig:byMonth} and \ref{fig:byMonthState}
show proportions for each month, the denominators are not the same:
Figure \ref{fig:byMonth} gives the proportion of each month's worked
sections that ended in reassignment, and Figure \ref{fig:byMonthState}
gives the proportion of all of the state's reassignments over the
course of the year, that occurred in each month.)
Figure \ref{fig:byMonthState} reveals patterns that are not apparent
in Figure \ref{fig:byMonth}.
For example, the bulk of Kentucky's reassignments took place in May.
A fifth of Michigan's reassignments in year 1 occurred in October;
for the rest of the year, Michigan roughly followed the same pattern as Texas.
In year 2, the vast majority of Michigan's reassignments also occurred near
the beginning of the year, in September and October.
\begin{figure}
\centering
\includegraphics[width=\maxwidth]{byMonthState-1}
\caption{The proportion of a state's reassignments that occurred in
each month of the year.}
\label{fig:byMonthState}
\end{figure}
\textbf{Summary.} In Texas and overall, reassignments were more common
in the second half of the school year than in the first half. In
Kentucky and Michigan, they were clustered in a few specific months.
\subsection{Does Reassignment Depend on Classmates?}
Student individuality and independence might be the most important
motivating factors behind mastery learning---each student learns at
his or her own pace, and struggles on a unique set of skills.
Students are supposed to move through the CT curriculum independently
of each other.
However, reassignments give teachers the ability to override this
feature, and coordinate students' progress.
Teachers might identify students who are behind their classmates and
reassign them to later sections.
They may also move an entire class together to a particular section or
unit of interest.
To what extent did these and similar considerations drive reassignment
in the CTAI study?
\begin{figure}
\centering
\includegraphics[width=\maxwidth]{classmates-1}
\caption{Distributions of classmates' statuses at each reassignment in
Texas, Kentucky, and Michigan. Each reassignment that took place in
these three states is represented by a bar showing the proportions of
the reassigned student's classmates who had exited the section on
the same day or earlier via promotion or mastery, who had been
reassigned from that section on an earlier date, on the same date,
or on a later date, who mastered the section or were promoted on a
later date, or who never worked the section at all. The bars are
rank ordered according to those proportions, in the order listed.}
\label{fig:classmates}
\end{figure}
Figure \ref{fig:classmates} addresses this question by plotting a
student's classmates' statuses at the time he or she is reassigned.
For each reassignment in Texas, Kentucky, and Michigan, Figure \ref{fig:classmates}
plots a vertical bar colored to show the proportions of the reassigned
students' classmates (represented in the usage data) who had exited the section on
the same day or earlier via promotion or mastery, who had been
reassigned from that section on an earlier date, on the same date,
or on a later date, who mastered the section or were promoted on a
later date, or who never worked the section at all.
The bars are rank ordered according to those same proportions: first by the
proportion of classmates who were promoted or mastered the section on
the same day as the reassignment in question, or earlier, next by the
proportion who were reassigned from the same section on an earlier
date, next by the proportion who were reassigned from the same section
on the same date, and finally by the proportion who were reassigned
from the same section on a later date.
Do teachers reassign students in order to help them catch up with classmates?
According to Figure \ref{fig:classmates}, that might be part of the
story, but isn't all of it.
Across years and states, in about 40\% of
reassignments, at least 75\% of the rest of the class had exited the
section through graduation or promotion on the same date or earlier.
The figure also reveals that the proportions of students who had graduated or been
promoted from the same section on the same day or earlier was smaller in year 2
than in year 1, especially in Texas and Michigan.
In Texas, year 2 saw a dramatic increase in the proportions of
students reassigned from the same section on the same day, suggesting
that some teachers may have been moving the class together through the
curriculum.
In Michigan in year 2, teachers reassigned almost all students who
worked certain sections.
That is, in 65\% of the instances in which year-two Michigan
students were reassigned from a section, at least 75\% of their
classmates were, at some point, reassigned from the same section, or
never worked it.
Across all three states and both years, it is exceedingly rare for
students to master or be promoted from a section after someone in
their class has been reassigned from the same section.
\textbf{Summary.} Three different patterns emerge from Figure
\ref{fig:classmates}: teachers reassigning students who have fallen
behind their classmates, teachers reassigning an entire class from the
same section on the same day, and teachers reassigning almost all
students who begin to work on particular sections.
Each of these patterns mostly takes place in different states and
years.
\subsection{Where To?}
Teachers who reassign students may simply move them to the next
section within the same unit.
Say that a teacher believes that a particular student has already mastered
the skills in one of the CTAI sections or is wheel-spinning---working
problems without learning---or that the teacher dislikes one of the sections in a CT unit.
Still, the teacher wants the student to learn as much as possible from
the current unit.
Then moving the student to the next section within the same unit might
make sense.
Alternatively, teachers who believe that some of their students are
not progressing quickly enough may reassign them out of their current
units entirely, and into the next unit in the sequence---whatever that
may be.
Finally, a teacher who wanted his or her students to focus on a
particular topic might reassign them all to the relevant unit
when the time came.
Which of these patterns is most prevalent?
More generally, when teachers reassign their students, where in the
curriculum do they send them?
\begin{figure}
\centering
\begin{subfigure}{5in}
\includegraphics[width=\maxwidth]{transition1-1}
\end{subfigure}
\begin{subfigure}{1in}
\centering
\includegraphics[width=\maxwidth]{trans1legend-1}
\end{subfigure}
\caption{Reassignment transition plot for study year 1. The units on
the left of the plot are the Algebra I units that ended in reassignment at
least 30 times in year 1. The units on the right are those that were
the destination of at least 30 reassignments. Also included on the
left are the top two sending units for each of the units on the
right, and also included on the right are all the units on the left,
along with those units' top two receiving units. The units are numbered
according to their order in the standard Algebra I curriculum. There is an arrow from a
unit on the left to a unit on the right if a student was reassigned
from the unit on the left to the one on the right. The thickness of
the arrows is proportional to the percentage of all reassignments from
the sending unit that ended in that receiving unit. The darkness of
the arrows is proportional to the number of reassignments from the
sending unit to the receiving unit.}
\label{fig:trans1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{5in}
\includegraphics[width=\maxwidth]{transition2-1}
\end{subfigure}
\begin{subfigure}{1in}
\includegraphics[width=\maxwidth]{trans2legend-1}
\end{subfigure}
\caption{Reassignment transition plot for study year 2. See caption of
Figure \ref{fig:trans1} for details.}
\label{fig:trans2}
\end{figure}
To address these questions, we focus on the units in the standard
Algebra I sequence.
Figures \ref{fig:trans1} and \ref{fig:trans2} give transition plots
for reassignment in the two study years.
On the left of each figure, marked ``From,'' are the top ``sending''
units: the units that students were reassigned \emph{from} at least 30
times.
They are numbered according to the order they appear in the standard Algebra I curriculum.
On the right, under ``To,'' are the top ``receiving'' units: the units
that students were reassigned \emph{to} at least 30 times.
For completeness, each of the top two receivers for each sending unit,
and each of the top two senders for each receiving unit were also included.
Finally, all of the sending units listed on the left were also included in the
receiving column.
All in all, 75\% of the first-year reassignments and
75\% of the second-year reassignments are
captured in the figures.
The arrows in the plot represent reassignments.
An arrow from a sending unit to a receiving unit indicates
reassignment from the former to the latter.
The thickness of the arrows represents the proportion of reassignments
originating in the sending unit whose destination was the receiving unit.
The darkness of the arrows represents the number of such reassignments.
For instance, in Figure \ref{fig:trans1}, the arrow from
``4-Quadrant Linear Graphs'' on the left to ``4-Quadrant Linear
Graphs'' on the right is fairly thick, since
44\%
of the reassignments from ``4-Quadrant Linear Graphs'' end in the
same unit.
Yet it is also fairly faint, since it only represents
12
reassignments.
Inspecting the figures shows that the most common pattern is for
students to be reassigned to the next unit in the curriculum---this pattern comprises
52\% of the reassignments in year 1 and
24\% of those in
year 2.
On the other hand, it is relatively rare for students to be reassigned within the same unit
(these reassignments account for 15\% and 18\% in the two years).
It is also rare for students to be reassigned to units earlier in the curriculum (comprising
2\% and
8\%).
The transition plots also reveal some interesting cases worth highlighting.
In year 1 (Figure \ref{fig:trans1}), students reassigned from the first unit, ``Linear Patterns'', were primarily placed two units ahead, in ``1st Quadrant Linear Graphs,'' skipping ``Unit Conversions,'' perhaps suggesting a disinterest in ``Unit Conversions'' on the part of the reassigning teachers.
Similarly,
9
of the students
(27\%)
reassigned from the section ``Independent Variables in Linear Models'' were placed 13 units later, in ``Systems of Linear Equations,''
and all 62 of the students reassigned from ``Quadratic Models \& Area'' were placed in ``Exponents.''
This suggests that some teachers may have considered the units ``Systems of Linear Equations'' and ``Exponents'' to contain particularly important material.
Since the total number of reassignments was higher in year 2, the corresponding plot (Figure \ref{fig:trans2}) is larger and more complex.
It is also more common in year 2 for students to be reassigned to units other than the next unit in the sequences.
This may be partly due to the proliferation of customized curricula.
Two units, ``4-Quadrant Linear Graphs'' and ``Lin. Equations with
Variables on Both Sides,'' were common destinations from a wide
variety of earlier units, suggesting strong teacher interest in those units.
The majority (76\%)
of the students reassigned from ``Probability'' were placed in another section of the same unit, perhaps suggesting problems with some of that unit's sections; in fact, all of the students reassigned within the ``Probability'' unit were reassigned from one of its first three sections (of seven).
Finally, 44
(81\%) of the reassignments from ``Pythagorean Theorem'' ended in the earlier ``Product Rule for Exponents'' unit.
\textbf{Summary.} The most prevalent pattern was for students to be reassigned
to the following unit, suggesting that teachers may be mostly
interested in advancing students who are behind.
On the other hand, a number of examples of other patterns---students moving within the same unit, or to units out of sequence---appear as well, suggesting that some teachers may be finely manipulating their students' curricula.
\section{Effects of Reassignment}\label{sec:effects}
The goal of CTAI is to help students learn Algebra, so
the most important questions about reassignment are about its effect on learning.
Although the data from this study came from a randomized trial, it was
CTAI as a whole that was randomized, not individual behaviors within CTAI.
Specifically, student reassignment was not randomized.
Therefore, precise estimates of causal effects of reassignment on
learning require strong untestable assumptions that are unlikely to be
true.
As in all observational studies, this includes the assumption that all
confounding variables---variables that predict both reassignment and
learning---have been measured well and modeled correctly.
Further complicating matters, although reassignment itself is a
well-defined process, in practice it can take many forms, as we have
endeavored to show.
There is no reason to expect the effects of reassignment to be
the same regardless of whether the teacher used it to help lagging
students catch up, to allocate time to important topics, or for some
other reason.
All that said, observational estimates of reassignment's average
causal effects can be valuable, if interpreted cautiously, for
instance by assessing their sensitivity to unmeasured confounding, as
we do below.
In the absence of evidence from randomized trials, observational
studies can help guide intuition, future research, and even---when
combined with other relevant information and theory---practice.
\begin{figure}
\centering
\includegraphics[width=\maxwidth]{cpYyear-1}
\caption{Students' gain scores (posttest minus pretest) versus the number of times they were reassigned over the course of the year (jittered), with a loess smoother added.}
\label{fig:cpYyear}
\end{figure}
Figure \ref{fig:cpYyear} shows students' gain scores---the difference between their posttest and pretest scores---as a function of the number of times they were reassigned.
(The number of reassignments was jittered---random noise was added on
the horizontal dimension---to avoid overplotting.)
Overall, the relationship between the two variables is positive.
Nevertheless, some non-linearity seems to be present, especially in year 2.
Further, the distribution of the number of reassignments is right-skewed---again, especially in year 2.
Care in modeling the number of reassignments, then, is especially important---observations from students reassigned an unusually large number of times can exert undue influence on a regression model and generate misleading results, particularly in the presence of non-monotonic relationships.
We settled on three different strategies: first, the variable $R^{bin}$ dichotomizes reassignment---$R^{bin}=0$ for students who were never reassigned, and $R^{bin}=1$ for students who were.
Next, $R^{cat}$ defines a categorical variable taking the values $R^{cat}=0,1,2,3,4+$ for students who were reassigned 0, 1, 2, 3 or four or more times, respectively.
Finally, $R^{num}$ is the raw number of reassignments, which we include for completeness.
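The three codings can be derived mechanically from a student's raw reassignment count; a small sketch (the helper name is ours):

```python
def parametrize(n_reassign):
    """Return (R_bin, R_cat, R_num) for a student's reassignment count."""
    r_bin = 1 if n_reassign >= 1 else 0                   # ever reassigned?
    r_cat = "4+" if n_reassign >= 4 else str(n_reassign)  # 0, 1, 2, 3, or 4+
    return r_bin, r_cat, n_reassign                       # R_num: raw count

print(parametrize(0))  # (0, '0', 0)
print(parametrize(6))  # (1, '4+', 6)
```
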
We used linear models to estimate the effect of reassignment, regressing posttest scores on $R^{bin}$, $R^{cat}$, or $R^{num}$ along with fixed effects for classroom, essentially modeling reassignment as randomly assigned within classroom.
Since this is unlikely to be the case (even approximately), we ran a
second set of models including student level covariates as well:
pretest scores,\footnote{These are measured with error, and missing at a
relatively high rate (15\%). To account for
this, we used regression calibration based on the 20 ``multiple
imputations'' used in the original CTAI study,
\citeN{pane2014effectiveness}.} race, sex, grade, special education
and gifted status, English as a second language (ESL), and free and reduced-price lunch eligibility.
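The classroom fixed effects in these regressions can equivalently be absorbed by the within transformation: demeaning posttest and reassignment within each classroom before estimating the slope. A self-contained sketch on made-up data (classroom labels and scores are invented for illustration):

```python
from collections import defaultdict

def within_slope(classrooms, treatment, outcome):
    """OLS slope of outcome on treatment after demeaning both within classroom.

    Numerically identical to including classroom dummy variables
    (fixed effects) in the regression.
    """
    groups = defaultdict(list)
    for c, t, y in zip(classrooms, treatment, outcome):
        groups[c].append((t, y))
    num = den = 0.0
    for obs in groups.values():
        t_bar = sum(t for t, _ in obs) / len(obs)
        y_bar = sum(y for _, y in obs) / len(obs)
        for t, y in obs:
            num += (t - t_bar) * (y - y_bar)
            den += (t - t_bar) ** 2
    return num / den

# Invented data: two classrooms, binary reassignment indicator, posttests.
cls = ["a", "a", "a", "b", "b", "b"]
r_bin = [0, 1, 1, 0, 0, 1]
post = [0.5, 0.3, 0.4, -0.2, 0.0, -0.4]
print(round(within_slope(cls, r_bin, post), 3))
```
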
\begin{table}
\centering
\begin{tabular}{r|c|c|c|}
&\multicolumn{3}{c}{Parametrization}\\
&$R^{bin}$&$R^{cat}$&$R^{num}$\\
&\makecell[c]{Effect of\\ $\ge 1$ Reassignment}&\makecell[l]{Effect of\\ \# Reassignments:}& \makecell[c]{Effect per\\ Reassignment:}\\
\hline
\makecell[r]{No\\Covariates}&-0.14 $\pm$ 0.06 &\makecell[l]{1 : -0.15 $\pm$ 0.07 \\2 : -0.15 $\pm$ 0.1 \\3 : -0.07 $\pm$ 0.15 \\4 +: -0.12 $\pm$ 0.17 }&-0.04 $\pm$ 0.03 \\
\hline\makecell[r]{Covariate\\ Adjusted}&-0.15 $\pm$ 0.06 &\makecell[l]{1 : -0.15 $\pm$ 0.07 \\2 : -0.16 $\pm$ 0.1 \\3 : -0.08 $\pm$ 0.15 \\4 +: -0.14 $\pm$ 0.17 }&-0.04 $\pm$ 0.03 \\
\hline
\end{tabular}
\caption{Estimates of effects of reassignment on posttests, with 95\% margins of error.}
\label{effectResults}
\end{table}
The results are reported in Table \ref{effectResults}.
All the effect estimates are negative, indicating that reassigning students may hurt their algebra learning.
The estimated effects decrease in magnitude for three or four reassignments, but these estimates are very noisy---very few students were reassigned more than two times.
The magnitudes of the effects are rather large: \citeN{pane2014effectiveness} reported an effect, in year 2, of about 0.2; the estimated effect of at least one reassignment is 73\% of that.
But what of unmeasured confounding?
For instance, the negative effect may be due to baseline differences
in ability, beyond what is captured in pretest scores.
\citeN{hhh} suggest a method of estimating the sensitivity of a regression to an omitted confounder based on bench-marking from observed confounders.
In order to confound the causal relationship between reassignment and posttests, a confounder would have to predict both.
Roughly speaking, the idea is to widen the confidence interval from an
ostensibly causal linear model to account for the possibility of a hypothetical unmeasured confounder that predicts reassignment and posttests to the same extent as one of the observed covariates.
These ``sensitivity intervals'' account for uncertainty from two sources: random error, and systematic error due to the omission of a confounder.
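As rough intuition for how such intervals behave (this is a deliberate simplification, not the benchmarking computation actually used here), one can widen a conventional confidence interval by a bound on the bias a hypothetical confounder could induce; all numbers below are invented:

```python
def naive_sensitivity_interval(est, se, bias_bound, z=1.96):
    """Confidence interval widened by a worst-case confounding bias.

    A simplification for intuition only; the analysis in the text uses
    a benchmarking method, not this formula.
    """
    half = z * se + bias_bound
    return (est - half, est + half)

# Invented inputs: point estimate, standard error, and bias bound.
lo, hi = naive_sensitivity_interval(-0.15, 0.03, 0.08)
print(round(lo, 4), round(hi, 4))
```
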
As is typical, the pretest is our most important measured covariate, both in terms of its prediction of reassignment and of posttest scores.
The sensitivity interval for the effect of being reassigned at least
once on posttest scores, allowing for the possible omission of a
hypothetical confounder at most as important as pretest, is
-0.15 $\pm$
0.14.
This interval is quite wide, implying that such a covariate could
explain much of the observed relationship between reassignment and
posttest scores (or that the relationship may be much stronger).
On the other hand, the sensitivity interval allowing for the possible
omission of a less important hypothetical confounder---one that
predicts posttests as well as ESL status and reassignment as well as
Hispanic ethnicity, the next best observed
predictors---is
-0.15 $\pm$
0.07.
This interval is substantially tighter.
All in all, unmeasured confounding may play an important role here,
but there is good reason to believe that it does not explain all of
the observed relationship.
The wide variety in the use of reassignment that we have documented
here might suggest that reassignment's treatment effect varies as
well.
Figure \ref{fig:trtHet} shows estimated classroom-specific treatment
effects of $R^{bin}$, being reassigned at least once.
The estimates came from a multilevel model in which posttest scores
were regressed on $R^{bin}$ and student-level covariates, along with
random effects for classroom and random slopes for $R^{bin}$ varying
by classroom.
Unlike fixed effects models, multilevel models ``partially pool'' data
across classrooms to estimate classroom-specific effects more precisely \cite{gelmanHill}.
This is especially important given the small sample sizes within
classrooms.
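The shrinkage behind partial pooling can be sketched with a toy simulation. All numbers below are hypothetical, chosen only to mimic the scale of the estimates reported here; the actual analysis fit a full multilevel model to the study data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 20 classrooms with small, unequal sizes, and
# true classroom effects scattered around a grand mean of -0.15.
n_class = 20
sizes = rng.integers(3, 15, size=n_class)
true_effects = rng.normal(-0.15, 0.18, size=n_class)
sigma_y = 0.5  # within-classroom (residual) standard deviation

# Raw classroom-mean estimates: noisy, especially for small classrooms.
raw = np.array([rng.normal(mu, sigma_y, size=n).mean()
                for mu, n in zip(true_effects, sizes)])

# Partial pooling: shrink each raw mean toward the grand mean, with
# weights that favor the raw mean only when the classroom is large.
tau2 = 0.18 ** 2                             # between-classroom variance
weights = tau2 / (tau2 + sigma_y ** 2 / sizes)
pooled = weights * raw + (1 - weights) * raw.mean()
```

Small classrooms are shrunk hardest toward the overall mean, which is why the multilevel estimates are more stable than separate per-classroom regressions would be.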
Figure \ref{fig:trtHet} shows a wide variation in the effect of
reassignment across classrooms---the estimated standard deviation of
these effects was
0.18,
larger than the average effect itself.
While the effect was negative in most classrooms, it was positive in some.
This variation could be due to a number of factors, including
differences in the composition of classrooms, but supports the
hypothesis that differences in when and how reassignment is used
lead to differences in its effect.
\begin{figure}
\centering
\includegraphics[width=\maxwidth]{hetroPlot-1}
\caption{The effect of being reassigned at least once, in each
classroom, as estimated by a multilevel model. The classrooms are
sorted by these estimated effects. The dotted line indicates an
effect of zero.}
\label{fig:trtHet}
\end{figure}
\section{Summary and Discussion}\label{sec:discussion}
The effectiveness of the Cognitive Tutor software presumably depends
on how much, and how, it is used.
This paper exploited available log data from the high-school arm of the CTAI
effectiveness trial---which yielded an impressive result for the
software in year 2---to describe variation and patterns in the software's
usage, paying particular attention to issues of mastery learning.
We found that:
\begin{itemize}
\item The amount the software was used varied widely between states
and decreased, overall, from years 1 to 2.
\item Year 2 saw the proliferation of ``customized'' curricula in
three states, altering which units students worked, and in what
order.
\item Year 2 saw frequent departures from the standard CTAI unit
sequence, driven mainly, but not entirely, by customized
curricula.
\item Examination of section mastery found that:
\begin{itemize}
\item About 85\% of
worked sections in year 1 and
81\% in year 2 ended
with the student having mastered all of the included skills.
\item There were three ways students worked a section without
mastering its contents: exhausting its problems and being
promoted, being reassigned to a new section by the teacher, and
ending CT use altogether.
\item Reassignment was rare, though it was more prevalent in Texas
and Connecticut than in other states, and more common in year 2
than 1.
\item Mastery rates were lower for more advanced curricula.
\end{itemize}
\item Examination of reassignment patterns found that:
\begin{itemize}
\item Reassignment rates were determined more by factors varying by state and school
than by classroom.
\item Reassignment was more common in the second half of the year
than in the first---particularly in Texas.
\item Depending on state and year, reassignment typically took
place in one of three scenarios: teachers reassigning students
who had fallen behind, teachers reassigning (almost) the entire
class together, and teachers assigning (almost) all students
\emph{out} of a particular section.
\item Students were most commonly reassigned to the next unit in the
curriculum---suggesting that teachers may have been advancing lagging
students---but in some cases teachers may have been finely manipulating
their students' curricula.
\end{itemize}
\item Reassignment appears to lower students' performance on the
post-test relative to that of their peers; however, the effect
appears to vary widely between classrooms (and, presumably, how
reassignment is used).
\end{itemize}
In practice, mastery learning and topic scaffolding often unfolded
quite differently from the vision of CT's designers.
In some cases, the departures apparently stemmed from a student's inability
to achieve mastery quickly enough; in others, from educators' preferences that ran counter to CTAI's design.
Notably, both the amount of usage and fidelity to CTAI's design
decreased from years 1 to 2---just as the estimated \emph{effect} of
CTAI increased.
In year 2, when the effect was substantial, students spent less time,
and followed the CTAI curriculum and guidelines less closely than in
year 1, when the estimated effect was negative but statistically
insignificant.
This raises questions as to the roles of structure and mastery in
CTAI's effectiveness.
Does flexibility lead to higher effects? Is mastery learning an
important mechanism for CTAI?
Answers to these questions could prove crucial for optimizing the
realized effectiveness of CTAI and other intelligent tutors.
On the other hand, reassignment appears to hurt students' performance
on the posttest, relative to their classmates (though it drew on data
from a randomized experiment, this analysis was observational, so
confounding cannot be ruled out).
Does this, in contrast, suggest that achieving mastery is an important
component of effective intelligent tutoring?
As educational technology spreads, attention to the details of
implementation may yield important insights---or important
questions---about effectiveness, and the science of when
intelligent tutors work, and when they don't.
\section*{Acknowledgements}
This work was supported by NSF Award \#1420374.
\bibliographystyle{acmtrans}
\section{Introduction}
Matroids can be characterized
by various cryptomorphically equivalent axioms; see e.g.,~\cite{Aigner}.
Among them, a lattice-theoretic characterization by Birkhoff~\cite{Birkhoff}
is well-known:
The lattice of flats of any matroid
is a {\em geometric lattice} ($=$ atomistic semimodular lattice), and
any geometric lattice gives rise to a simple matroid.
The goal of the present article is to extend
this classical equivalence to {\em valuated matroids}
(Dress-Wenzel~\cite{DressWenzel_greedy,DressWenzel}).
A valuated matroid is a quantitative generalization
of a matroid that abstracts linear dependence structures of vector spaces
over a field with a non-Archimedean valuation.
A valuated matroid is defined as
a function on the base family of a matroid
satisfying a certain exchange axiom that originates from the Grassmann-Pl\"ucker identity.
Like matroids, valuated matroids enjoy nice optimization properties;
they can be optimized by a greedy algorithm,
and this property characterizes valuated matroids.
In the literature of combinatorial optimization,
the theory of valuated matroids has evolved into
{\em discrete convex analysis}~\cite{MurotaBook},
which is a framework of ``convex" functions on discrete structures
generalizing matroids and submodular functions.
In tropical geometry (see e.g., \cite{TropicalBook}),
a valuated matroid
is called a {\em tropical Pl\"ucker vector}.
The space of valuated matroids is understood as
a tropical analogue of the Grassmann variety in algebraic geometry;
see \cite{Speyer08,SpeyerSturmfels04}.
While Murota-Tamura~\cite{MurotaTamura01} established
cryptomorphically equivalent axioms for valuated matroids
in terms of (analogous notions of) {\em circuits}, {\em cocircuits}, {\em vectors}, and {\em covectors}, a lattice-theoretic axiom has never been given in the literature.
The goal of this paper is to establish a lattice-theoretic axiom for valuated matroids
by introducing a new class of semimodular lattices,
called {\em uniform semimodular lattices}.
This class of lattices can be
viewed as an affine counterpart of geometric lattices, and is defined by a fairly simple axiom:
They are semimodular lattices
with the property that the operator
$x \mapsto $ (the join of all elements covering $x$)
is an automorphism.
This operator was introduced in a companion paper~\cite{HH18a}
to characterize
Euclidean buildings in a lattice-theoretic way.
The main result of this paper is
a cryptomorphic equivalence
between uniform semimodular lattices and integer-valued valuated matroids.
The contents of this equivalence and its intriguing features
are summarized as follows:
\begin{itemize}
\item
A valuated matroid is constructed from
a uniform semimodular lattice ${\cal L}$ as follows.
We introduce the notions of a {\em ray} and an {\em end} in ${\cal L}$.
A ray is a chain of ${\cal L}$ with a certain straightness property, and
an end is an equivalence class of the parallel relation on the set of rays.
Ends will play the role of atoms in a geometric lattice.
%
We introduce a matroid ${\bf M}^{\infty}$ on the set $E$ of ends,
called the {\em matroid at infinity},
which will be the underlying matroid of our valuated matroid.
As expected from the name,
this construction is inspired by
the {\em spherical building at infinity} in a Euclidean building.
%
A ${\bf Z}^n$-sublattice ${\cal S}(B)$ ($\simeq {\bf Z}^n$) is naturally associated with each base $B$ in ${\bf M}^{\infty}$, and plays the role of
apartments in a Euclidean building.
%
Then a valuated matroid $\omega = \omega^{{\cal L},x}$ on $E$ is
defined from apartments and any fixed $x \in {\cal L}$;
the value $\omega(B)$
is the negative of a ``distance" between $x$ and ${\cal S}(B)$.
%
It should be emphasized that this construction is done purely in a coordinate-free lattice-theoretic manner.
\item The reverse construction of a
uniform semimodular lattice from a valuated matroid uses
the concept of the {\em tropical linear space}.
The tropical linear space ${\cal T}(\omega)$ is a polyhedral object in ${\bf R}^E$ associated with a valuated matroid $\omega$ on $E$.
This concept and the name were introduced by Speyer~\cite{Speyer08}
in the literature of tropical geometry.
Essentially equivalent concepts were earlier considered
by Dress-Terhalle~\cite{DressTerhalle93,DressTerhalle98}
as the {\em tight span}
and by Murota-Tamura~\cite{MurotaTamura01}
as the {\em space of covectors}.
In the case of a matroid (i.e., $\{0,-\infty\}$-valued valuated matroid), the tropical linear space reduces to the {\em Bergman fan} of the matroid,
which is viewed as a geometric realization of the order complex of
the geometric lattice of flats~\cite{ArdilaKlivans06}.
%
We show that the set ${\cal L}(\omega) := {\cal T}(\omega) \cap {\bf Z}^E$ of integer points
in ${\cal T}(\omega)$ forms a uniform semimodular lattice.
Then the original $\omega$
is recovered by the above construction (up to projective equivalence),
and ${\cal T}(\omega)$ is a geometric realization
of a special subcomplex of the order complex of ${\cal L}(\omega)$.
Thus our result establishes
a coordinate-free lattice-theoretic characterization
of tropical linear spaces.
\item The above constructions incorporate, in a natural way,
the {\em completion process of valuated matroids} by Dress-Terhalle~\cite{DressTerhalle93},
which is
a combinatorial generalization of the $p$-adic completion.
They introduced an ultrametric metrization of the underlying set $E$
induced by a valuated matroid $\omega$, and
defined a completeness concept for valuated matroids
via the completeness of this metrization of $E$.
%
They proved that any (simple) valuated matroid $(E, \omega)$ is (uniquely) extended to
a complete valuated matroid $(\bar E, \bar \omega)$, which is called
a {\em completion} of $(E, \omega)$.
%
We show that the space $E$ of ends in a uniform semimodular lattice ${\cal L}$
admits an ultrametric metrization $d$, and it is complete in this metric,
where $d$ coincides with the Dress-Terhalle metrization of
the constructed valuated matroid $\omega^{{\cal L},x}$.
Then the process $\omega \to {\cal L}(\omega) \to \omega^{{\cal L}(\omega),x}$ coincides with
the Dress-Terhalle completion.
\item Our result sums up, from a lattice-theory side,
connections between
valuated matroids and {\em Euclidean buildings}
(Bruhat and Tits~\cite{BruhatTits}), pointed out by~\cite{DressKahrstromMoulton11,DressTerhalle98,JoswigSturmfelsYu07}.
%
Let us recall the spherical situation, and in particular
the notion of a {\em modular matroid}: a matroid
whose lattice of flats is a modular lattice.
We can say that
a modular matroid is equivalent to a {\em spherical building of type A}~\cite{Tits}.
Indeed, a classical result of Birkhoff~\cite{Birkhoff} says that
a modular geometric lattice is precisely the direct product of subspace lattices of
projective geometries. Another classical result by Tits~\cite{Tits} says
that a spherical building of type A is the order complex of
the direct product of subspace lattices of projective geometries.
An analogous relation is naturally established for valuated matroids
by introducing the notion of
a {\em modular valuated matroid}, which is defined as a valuated matroid
such that the corresponding uniform semimodular lattice is a modular lattice.
The companion paper~\cite{HH18a} showed that
uniform modular lattices are cryptomorphically equivalent to Euclidean buildings of type A.
%
Thus a modular valuated matroid $\omega$
is equivalent to a Euclidean building of type A,
in which (the projection of) the tropical linear space ${\cal T}(\omega)$
is a geometric realization of the Euclidean building.
%
This generalizes a result by Dress and Terhalle~\cite{DressTerhalle98}
obtained for the Euclidean building of ${\rm SL}(F^n)$ for a valued field $F$.
\end{itemize}
The rest of this paper is organized as follows.
Sections~\ref{sec:pre} and \ref{sec:valuated} are preliminary sections
on lattice, (valuated) matroids, and tropical linear spaces.
Section~\ref{sec:uniform}
constitutes the main body of our results on uniform semimodular lattices.
Section~\ref{sec:example} discusses
three representative examples of valuated matroids
in terms of uniform semimodular lattices.
\section{Preliminaries}\label{sec:pre}
Let ${\bf R}$ denote the set of real numbers.
Let ${\bf Z}$ and ${\bf Z}_+$ denote the sets of integers and nonnegative integers, respectively.
For a set $E$ (not necessarily finite),
let ${\bf R}^E$, ${\bf Z}^E$, and ${\bf Z}_+^E$
denote the sets of all functions from $E$ to ${\bf R}$, ${\bf Z}$, and ${\bf Z}_+$, respectively.
A function $g: E \to {\bf Z}$ is said to be {\em upper-bounded}
if there is $M \in {\bf Z}$ such that $g(e) \leq M$ for all $e \in E$.
If there is $M \in {\bf Z}$ such that $|g(e)| \leq M$ for all $e \in E$, then $g$ is said to be {\em bounded}.
Let ${\bf 1}$ denote the all-one vector in ${\bf R}^E$, i.e., ${\bf 1}(e) = 1$ $(e \in E)$.
For a subset $F \subseteq E$, let ${\bf 1}_F$ denote the incidence vector
of $F$ in ${\bf R}^E$, i.e., ${\bf 1}_{F}(e) = 1$ if $e \in F$ and zero otherwise.
Let ${\bf 0}$ denote the zero vector.
For $x,y \in {\bf R}^E$, let $\min (x,y)$ and $\max (x,y)$
denote the vectors in ${\bf R}^E$ obtained from $x,y$
by taking componentwise minimum and maximum, respectively; namely
$\min(x,y)(e) = \min (x(e),y(e))$ and $\max(x,y)(e) = \max (x(e),y(e))$ for $e \in E$.
The vector order $\leq$ on ${\bf R}^E$ is defined by
$x \leq y$ if $x(e) \leq y(e)$ for $e \in E$.
For $e \in E$ and $B \subseteq E$,
we denote $B \cup \{e\}$ and $B \setminus \{e\}$
by $B + e$ and $B - e$, respectively.
\subsection{Lattice}\label{subsec:lattice}
We use the standard terminology on posets (partially ordered sets)
and lattices (see, e.g., \cite{Aigner,Birkhoff}), where $\preceq$ means a partial order relation, and
$x \prec y$ means $x \preceq y$ and $x \neq y$.
A lattice is a poset
${\cal L}$ such that every pair $x,y$ has
the greatest common lower bound $x \wedge y$ and
the least common upper bound $x \vee y$;
the former is called the {\em meet} of $x,y$, and
the latter is called the {\em join} of $x,y$.
For a subset $S \subseteq {\cal L}$,
the greatest lower bound of $S$ (the {\em meet} of $S$) is denoted by
$\bigwedge S$ (if it exists), and the least upper bound of $S$ (the {\em join} of $S$)
is denoted by $\bigvee S$ (if it exists).
For elements $x,y$ with $x \preceq y$, the {\em interval} $[x,y]$ of $x,y$
is the set of elements $z$ with $x \preceq z \preceq y$.
If $[x,y] = \{x,y\}$ and $x \neq y$, then we say that $y$ {\em covers} $x$
and write $x \prec_1 y$.
A {\em chain} is a totally ordered subset ${\cal C}$ of ${\cal L}$;
a chain will be written, say, as
$x_0 \prec x_1 \prec \cdots \prec x_m \prec \cdots$.
The {\em length} of chain ${\cal C}$ is defined as its cardinality $|{\cal C}|$ minus one.
In this paper, we deal with lattices satisfying the following finiteness assumption:
\begin{itemize}
\item[(F)] No interval $[x,y]$ has a chain of infinite length.
\end{itemize}
A bijection $\varphi:{\cal L} \to {\cal L}'$ such that both $\varphi$ and $\varphi^{-1}$ are order-preserving
is called an {\em isomorphism}.
If ${\cal L} = {\cal L'}$, then isomorphism $\varphi$
is called an {\em automorphism} on ${\cal L}$.
A {\em sublattice} of a lattice ${\cal L}$ is a subset ${\cal L'} \subseteq {\cal L}$
with the property that $x,y \in {\cal L'}$ imply $x \wedge y, x \vee y \in {\cal L}'$.
Intervals are sublattices.
An {\em atom} is an element that covers
the minimum ${\bar 0} = \bigwedge {\cal L}$.
The rank of ${\cal L}$ (having the minimum and maximum) is defined as
the maximum length of a maximal chain of ${\cal L}$.
A {\em height function} of a lattice ${\cal L}$ is
an integer-valued function $r:{\cal L} \to {\bf Z}$
such that
$r(y) = r(x) + 1$ for any $x,y \in {\cal L}$ with $x \prec_1 y$.
A lattice ${\cal L}$ is said to be {\em semimodular}
if
$x \wedge a \prec_1 a$ implies $x \prec_1 x \vee a$ for any $x,a \in {\cal L}$.
From the definition, we easily see that
a semimodular lattice satisfies the Jordan-Dedekind chain condition:
\begin{itemize}
\item[(JD)] For any interval $[x,y]$, all maximal chains in $[x,y]$ have the same length.
\end{itemize}
We denote this length by $r[x,y]$, which is finite by (F).
\begin{Lem}\label{lem:semimodular}
For a lattice ${\cal L}$, the following conditions are equivalent:
\begin{itemize}
\item[{\rm (1)}] ${\cal L}$ is semimodular.
\item[{\rm (2)}] For $a,b \in {\cal L}$, if $a,b$ cover $a \wedge b$, then $a \vee b$ covers $a,b$.
\item[{\rm (3)}] ${\cal L}$ admits a height function $r$ satisfying
\begin{equation}\label{eqn:semimo}
r(x) + r(y) \geq r(x \wedge y) + r(x \vee y) \quad (x,y \in {\cal L}).
\end{equation}
\end{itemize}
\end{Lem}
\begin{proof}[Sketch of proof]
We verify (1) $\Rightarrow$ (3); other directions are easy or obvious.
Fix $z \in {\cal L}$, and
define $r:{\cal L} \to {\bf Z}$ by $r(x) := r[z, x \vee z] - r[x, x \vee z]$.
Consider an element $y$ that covers $x$.
If $y \preceq x \vee z$, then $x \vee z = y \vee z$ and
$r[y,y \vee z] = r[x,x \vee z] - 1$.
If $y \not \preceq x \vee z$, then
by semimodularity, $y \vee z$ covers $x \vee z$,
and hence $r[y,y \vee z] = r[x,x \vee z]$ and $r[z,y \vee z] = r[z,x \vee z] +1$.
Thus $r$ is a height function.
We show (\ref{eqn:semimo}).
Consider a maximal chain
$x \wedge y = z_0 \prec_1 z_1 \prec_1 \cdots \prec_1 z_k = y$,
where $k = r[x\wedge y, y]$ by (JD).
Consider the chain $x = x \vee z_0 \preceq x \vee z_1 \preceq \cdots \preceq x\vee z_k = x \vee y$, which contains a maximal chain in $[x, x \vee y]$
by the semimodularity.
This implies $r(x \vee y) - r(x) = r[x, x \vee y] \leq k = r[x \wedge y, y] = r(y) - r(x \wedge y)$, and (\ref{eqn:semimo}).
\end{proof}
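The conditions of this lemma can be checked by brute force on a small example. The following sketch verifies conditions (2) and (3) for the Boolean lattice of subsets of a three-element set, which is modular and hence semimodular:

```python
from itertools import combinations

# The lattice of all subsets of {0,1,2}, ordered by inclusion; meet and
# join are intersection and union, and the height function is r(x) = |x|.
ground = (0, 1, 2)
L = [frozenset(s) for k in range(4) for s in combinations(ground, k)]

def covers(y, x):
    # y covers x in the subset lattice
    return x < y and len(y) == len(x) + 1

def condition2():
    # (2): if distinct a, b both cover a ^ b, then a v b covers both
    return all(covers(a | b, a) and covers(a | b, b)
               for a in L for b in L
               if a != b and covers(a, a & b) and covers(b, a & b))

def condition3():
    # (3): r(x) + r(y) >= r(x ^ y) + r(x v y); here it holds with equality
    return all(len(x) + len(y) >= len(x & y) + len(x | y)
               for x in L for y in L)
```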
A {\em modular pair} is a pair $x,y \in {\cal L}$ satisfying (\ref{eqn:semimo})
with equality.
A {\em geometric lattice} is a semimodular lattice
that has the minimum and the maximum and in which every element can be represented as
the join of atoms.
A {\em hyperplane} in a geometric lattice is
an element that is covered by the maximum element.
The following is well-known.
\begin{Lem}[{See, e.g., \cite[Section 2.3]{Aigner}}]\label{lem:hyperplane}
Let ${\cal L}$ be a geometric lattice.
\begin{itemize}
\item[{\rm (1)}] Every element in ${\cal L}$ is
written as the meet of hyperplanes.
\item[{\rm (2)}] Every interval in ${\cal L}$ is a geometric lattice.
\end{itemize}
\end{Lem}
A {\em modular lattice} is a lattice ${\cal L}$ such that
for every triple $x,y,z \in {\cal L}$ with $x \preceq y$
it holds $x \vee (y \wedge z) = y \wedge (x \vee z)$.
A modular lattice is precisely a semimodular lattice in which every pair is modular.
\subsection{Matroid}\label{subsec:matroid}
Here we introduce matroids on a possibly infinite ground set, where
our treatment follows \cite[Chapter VI]{Aigner}.
A {\em matroid} ${\bf M} = (E,{\cal I})$ is
a pair of a set $E$
and a family ${\cal I}$ of subsets of $E$
such that $\emptyset \in {\cal I}$,
$I' \subseteq I \in {\cal I}$ implies $I' \in {\cal I}$,
for $I,I' \in {\cal I}$ with $|I| < |I'|$
there is $e \in I' \setminus I$ such that $I + e \in {\cal I}$,
and $\max_{I \in {\cal I}} |I| < + \infty$.
%
A member of ${\cal I}$ is called an {\em independent set}.
A maximal independent set is called a {\em base}.
%
The set of all bases is denoted by ${\cal B}$.
A matroid can be defined by the base family, and also be written as
${\bf M} = (E, {\cal B})$.
%
Bases have the same cardinality $(< +\infty)$,
which is called the {\em rank} of ${\bf M}$.
%
A {\em loop} is an element $e \in E$ such that no base contains $e$.
Non-loop elements $e,f \in E$ are said to be {\em parallel}
if no base contains both $e$ and $f$.
The parallel relation gives rise to
an equivalence relation on the set of non-loop elements,
and an equivalence class
is called a {\em parallel class}.
If matroid ${\bf M}$ has no loop and no parallel pair, then ${\bf M}$ is called {\em simple}.
%
For a subset $E' \subseteq E$ obtained by
selecting one element from each parallel class,
we obtain a simple matroid ${\bf M}' = (E', {\cal I}')$ on $E'$,
where ${\cal I}' := \{ I \in {\cal I} \mid I \subseteq E' \}$.
This matroid ${\bf M}'$ is called a {\em simplification} of ${\bf M}$.
%
The {\em rank function} $\rho$ is defined by
$\rho(X) := \max\{ |I| \mid I \in {\cal I}: I \subseteq X\}$.
The {\em closure operator} ${\rm cl}$ is defined by
${\rm cl}(X) = \{e \in E \mid \rho(X+e) = \rho(X) \}$.
A {\em flat} is a subset $F \subseteq E$ with ${\rm cl} (F) = F$.
If ${\bf M}$ has no loop, a parallel class is exactly a flat $F$ with $\rho(F) = 1$.
%
The family of all flats becomes a lattice with respect to the inclusion order.
Let us review the relationship between matroids and geometric lattices.
Let ${\cal L}$ be a geometric lattice with height function $r$.
Assume $r(\bar 0) = 0$.
A subset $I$ of atoms of ${\cal L}$ is called {\em independent}
if $r( \bigvee I ) = |I|$.
\begin{Thm}[{\cite{Birkhoff}; see \cite[Chapter VI]{Aigner}}]\label{thm:Birkhoff}
\begin{itemize}
\item[{\rm (1)}] For a geometric lattice ${\cal L}$ with rank $n$,
the pair ${\bf M}_{\cal L}$ of the set of atoms and the family of independent subsets of atoms is a simple matroid with rank $n$.
\item[{\rm (2)}]
The family of flats of a matroid ${\bf M}$ with rank $n$
is a geometric lattice ${\cal L}$ with rank $n$,
where ${\bf M}_{\cal L}$
is a simplification of ${\bf M}$.
\end{itemize}
\end{Thm}
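As an illustration of Theorem~\ref{thm:Birkhoff}, the lattice of flats of a small matroid can be enumerated directly from the rank function and the closure operator. A minimal sketch for the uniform matroid of rank $2$ on three elements:

```python
from itertools import combinations

# The uniform matroid of rank 2 on E = {0,1,2}: every subset of
# size at most 2 is independent.
E = frozenset({0, 1, 2})
independent = [frozenset(s) for k in range(3) for s in combinations(E, k)]

def rank(X):
    return max(len(I) for I in independent if I <= X)

def closure(X):
    return frozenset(e for e in E if rank(X | {e}) == rank(X))

# The flats (closed sets) form the geometric lattice of Birkhoff's
# theorem: the empty set, the three singleton atoms, and E itself.
subsets = [frozenset(s) for k in range(4) for s in combinations(E, k)]
flats = {closure(X) for X in subsets}
```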
\section{Valuated matroid and tropical linear space}\label{sec:valuated}
Let ${\bf M} = (E,{\cal B})$ be a matroid with rank $n$.
A {\em valuated matroid} on ${\bf M}$
is a function $\omega: {\cal B} \to {\bf R}$ satisfying:
\begin{itemize}
\item[(EXC)] For $B, B' \in {\cal B}$ and $e \in B \setminus B'$ there is $e' \in B' \setminus B$ such that
\begin{equation}
\omega(B) + \omega(B') \leq \omega(B + e' - e ) + \omega(B' + e - e').
\end{equation}
\end{itemize}
A valuated matroid $\omega$ is viewed as a function on the set of
all $n$-element subsets of $E$ by defining $\omega(B) = - \infty$
for $B \not \in {\cal B}$.
A valuated matroid is also written as a pair $(E, \omega)$.
A valuated matroid is called {\em simple} if
the underlying matroid is a simple matroid.
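For a finite ground set, (EXC) can be verified by brute force. The following sketch checks it for a hypothetical rank-$2$ example on four elements; the base values below are chosen so that they form a tropical Pl\"ucker vector, i.e., the maximum of $\omega_{12}+\omega_{34}$, $\omega_{13}+\omega_{24}$, $\omega_{14}+\omega_{23}$ is attained at least twice:

```python
# Hypothetical rank-2 valuated matroid on E = {1,2,3,4}: all six pairs
# are bases, and (w12+w34, w13+w24, w14+w23) = (2, 2, 0), so the
# maximum of the three sums is attained twice.
omega = {frozenset({1, 2}): 1, frozenset({3, 4}): 1,
         frozenset({1, 3}): 1, frozenset({2, 4}): 1,
         frozenset({1, 4}): 0, frozenset({2, 3}): 0}

def satisfies_exc(omega):
    """Brute-force check of the exchange axiom (EXC)."""
    for B in omega:
        for Bp in omega:
            for e in B - Bp:
                if not any(B - {e} | {f} in omega
                           and Bp - {f} | {e} in omega
                           and omega[B] + omega[Bp]
                               <= omega[B - {e} | {f}] + omega[Bp - {f} | {e}]
                           for f in Bp - B):
                    return False
    return True
```

Perturbing a single base value so that the maximum above is attained only once makes the check fail, as expected.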
\begin{Lem}[{\cite{DressTerhalle93}}]\label{lem:parallel}
Let $(E, \omega)$ be a valuated matroid.
If $e,f \in E$ are parallel in the underlying matroid, then there is $\alpha \in {\bf R}$
such that $\omega(K+e) = \omega (K+f) + \alpha$
for every $(n-1)$-element subset $K \subseteq E \setminus \{e,f\}$.
\end{Lem}
Therefore no essential information is lost
when a valuated matroid $(E,\omega)$ is restricted to a simplification
of the underlying matroid.
The obtained simple valuated matroid $(\tilde E, \tilde \omega)$
is called a {\em simplification} of $(E, \omega)$.
For $\omega: {\cal B} \to {\bf R}$ and $x \in {\bf R}^E$,
define $\omega + x:{\cal B} \to {\bf R}$ by
\begin{equation*}
(\omega+ x)(B) := \omega(B) + \sum_{e \in B} x(e) \quad (B \in {\cal B}).
\end{equation*}
It is easy to see from (EXC) that
$\omega + x$ is a valuated matroid
if $\omega$ is a valuated matroid.
Two valuated matroids $\omega$ and $\omega'$ are said to be
{\em projectively equivalent} if $\omega' = \omega + x$ for some $x \in {\bf R}^E$.
For $\omega: {\cal B} \to {\bf R}$,
let ${\cal B}_{\omega}$ be
the set of all bases $B$ that
attain $\max_{B \in {\cal B}} \omega(B)$.
A direct consequence of (EXC) is as follows.
\begin{Lem}[{\cite{DressWenzel_greedy}; see \cite[Theorem 5.2.7]{MurotaMatrix}}]\label{lem:opt}
Let $\omega$ be a valuated matroid on $(E, {\cal B})$.
A base $B \in {\cal B}$ belongs to ${\cal B}_{\omega}$ if and only if
\begin{equation*}
\omega (B - e + f) \leq \omega(B)
\end{equation*}
for all $e \in B$ and $f \in E \setminus B$ with $B - e + f \in {\cal B}$.
\end{Lem}
One can also observe from (EXC)
that for a valuated matroid $\omega$, the maximizer family
${\cal B}_{\omega}$ is the base family of a matroid.
Murota~\cite{Murota97} proved that
this property characterizes valuated matroids when $E$ is finite.
\begin{Lem}[{\cite{Murota97}; see \cite[Theorem 5.2.26]{MurotaMatrix}}]\label{lem:murota}
Let ${\bf M} = (E,{\cal B})$ be a matroid.
An upper-bounded
integer-valued function $\omega:{\cal B} \to {\bf Z}$ is a valuated matroid
if and only if
$(E, {\cal B}_{\omega+x})$ is a matroid
for every bounded integer vector $x \in {\bf Z}^E$.
\end{Lem}
\begin{proof}[Proof: Reduction to finite case]
%
We reduce the proof of the if-part to the finite case.
Consider bases $B,B' \in {\cal B}$.
Let $E' := B \cup B'$, and let $(E',\omega')$ be the restriction of $(E,\omega)$.
By the upper-boundedness of $\omega$,
for any $x' \in {\bf Z}^{E'}$, by choosing a large positive integer $M$ and
by defining $x(e) := - M$ $(e \in E \setminus E')$,
we can extend $x'$ to a bounded vector $x \in {\bf Z}^E$ so that
${\cal B}_{\omega + x} ={\cal B}_{\omega'+x'} \subseteq 2^{E'}$.
Thus the exchange property (EXC) of
$\omega$ on $B$ and $B'$ follows from that of~$\omega'$.
\end{proof}
Next we introduce the {\em tropical linear space}~\cite{MurotaTamura01,Speyer08}
of a valuated matroid.
Let $\omega$ be an integer-valued valuated matroid on $(E, {\cal B})$.
To deal with a possible infiniteness of $E$,
we here employ the following definition.
The {\em tropical linear space} ${\cal T}(\omega)$ of $\omega$ is defined
as the set of all vectors $x \in {\bf R}^E$ such that matroid
${\bf M}_{\omega + x} = (E, {\cal B}_{\omega +x})$
has no loop, i.e.,
\begin{equation*}
{\cal T}(\omega) := \{ x \in {\bf R}^E \mid \mbox{${\bf M}_{\omega + x}$ has no loop}\}.
\end{equation*}
This definition tacitly imposes
that the maximum of $\omega+x$ for $x \in {\cal T}(\omega)$
is attained by some $B \in {\cal B}$.
According to the definition in \cite{MurotaTamura01,Speyer08},
the tropical linear space
is the set of all points $x \in {\bf R}^E$ satisfying:
\begin{itemize}
\item[(TW)] For any $(n+1)$-element subset $C \subseteq E$,
the maximum of $\omega(C - f ) - x(f)$ over all $f \in C$ with $C - f \in {\cal B}$ is attained at least twice.
\end{itemize}
(In the definition of \cite{MurotaTamura01}, the sign of $x$ is opposite.)
Speyer~\cite[Proposition 2.3]{Speyer08} proved that the two definitions are
equivalent when $E$ is finite.
Our infinite setting needs a little care;
we prove a slightly modified equivalence in Lemma~\ref{lem:trop} below.
In the literature,
the tropical linear space is referred to as its projection ${\cal T}(\omega)/ {\bf R}{\bf 1}$,
since $x \in {\cal T}(\omega)$ implies
$x+ {\bf R} {\bf 1} \subseteq {\cal T}(\omega)$.
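When $E$ is finite, membership in ${\cal T}(\omega)$ can be tested directly from the no-loop definition. A minimal sketch on a hypothetical rank-$2$ valuated matroid:

```python
from itertools import chain

# Hypothetical rank-2 valuated matroid on E = {1,2,3,4} (all pairs are bases).
E = {1, 2, 3, 4}
omega = {frozenset({1, 2}): 1, frozenset({3, 4}): 1,
         frozenset({1, 3}): 1, frozenset({2, 4}): 1,
         frozenset({1, 4}): 0, frozenset({2, 3}): 0}

def in_tropical_linear_space(x):
    """x is in T(omega) iff every element of E lies in some base
    maximizing omega + x, i.e. the matroid M_{omega+x} has no loop."""
    shifted = {B: w + sum(x[e] for e in B) for B, w in omega.items()}
    top = max(shifted.values())
    maximizers = [B for B, w in shifted.items() if w == top]
    return set(chain.from_iterable(maximizers)) == E
```

For instance, $x = {\bf 0}$ lies in ${\cal T}(\omega)$, whereas lowering the coordinate of a single element far enough turns it into a loop of ${\bf M}_{\omega+x}$ and pushes $x$ out of ${\cal T}(\omega)$; translating by ${\bf R}{\bf 1}$ never changes membership.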
Earlier than \cite{MurotaTamura01,Speyer08},
Dress and Terhalle~\cite{DressTerhalle93,DressTerhalle98} introduced
the concept of
the {\em tight span} ${\cal TS}(\omega)$ of $\omega$, which is defined by
\begin{equation*}
{\cal TS}(\omega) := \left\{ p \in {\bf R}^E \ \left|{\Large \strut}\right.
p(e) = \max_{B \in {\cal B}: e \in B} \{ \omega(B) - \sum_{f \in B \setminus \{e\}} p(f)\} \quad (e \in E) \right\}.
\end{equation*}
Observe that ${\cal TS}(\omega)$ is the set of representatives of the negative of ${\cal T}(\omega)/ {\bf R}{\bf 1}$. More precisely, it holds
\begin{equation}\label{eqn:TS}
{\cal TS}(\omega) = - \{x \in {\cal T}(\omega) \mid \max_{B \in {\cal B}} (\omega + x)(B) = 0 \}.
\end{equation}
Dress and Terhalle~\cite{DressTerhalle93,DressTerhalle98} introduced
an ultrametric metrization of
the ground set $E$ of valuated matroid $\omega$, which we explain below.
Let us recall the notion of ultrametric.
An {\em ultrametric} on a set $X$
is a metric $d: X \times X \to {\bf R}_+$ satisfying the ultrametric inequality
\begin{equation}\label{eqn:ultrametric}
d(x,y) \leq \max \{d(x,z), d(z,y)\} \quad (x,y,z \in X).
\end{equation}
For $p \in {\cal TS}(\omega)$,
define $D_p: E \times E \to {\bf R}$ by
\begin{equation*}
D_p(e,f) := \left\{
\begin{array}{ll}
\exp (\max \{ (\omega - p) (B) \mid B \in {\cal B}: \{e,f\} \subseteq B \}) & {\rm if}\ e \neq f,\\
0 & {\rm if}\ e = f
\end{array}\right.
\quad (e,f \in E).
\end{equation*}
\begin{Prop}[\cite{DressTerhalle93}]
Let $(E,\omega)$ be a simple valuated matroid.
For $p \in {\cal TS}(\omega)$, we have the following:
\begin{itemize}
\item[{\rm (1)}] $D_{p}$ is an ultrametric.
\item[{\rm (2)}] For $q \in {\cal TS}(\omega)$,
it holds $\alpha D_p \leq D_q \leq \beta D_p$ for some $\alpha,\beta > 0$.
\end{itemize}
\end{Prop}
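For a concrete instance, $D_p$ can be computed and the ultrametric inequality checked exhaustively. A sketch for a hypothetical rank-$2$ valuated matroid, with $p = \frac{1}{2}{\bf 1}$, which one can verify lies in ${\cal TS}(\omega)$:

```python
import math
from itertools import permutations

# Hypothetical rank-2 valuated matroid on E = {1,2,3,4} (all pairs are bases).
E = [1, 2, 3, 4]
omega = {frozenset({1, 2}): 1, frozenset({3, 4}): 1,
         frozenset({1, 3}): 1, frozenset({2, 4}): 1,
         frozenset({1, 4}): 0, frozenset({2, 3}): 0}

# p = (1/2)1 lies in the tight span: p(e) equals the defining maximum.
p = {e: 0.5 for e in E}
for e in E:
    assert p[e] == max(w - sum(p[f] for f in B - {e})
                       for B, w in omega.items() if e in B)

def D(e, f):
    """The Dress-Terhalle ultrametric D_p; in rank 2 the only base
    containing both e and f is the pair {e, f} itself."""
    if e == f:
        return 0.0
    return math.exp(max(w - sum(p[g] for g in B)
                        for B, w in omega.items() if {e, f} <= B))

# The ultrametric inequality holds for every triple of distinct elements.
ok = all(D(e, f) <= max(D(e, g), D(g, f)) + 1e-12
         for e, f, g in permutations(E, 3))
```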
A simple valuated matroid $(E,\omega)$ is called {\em complete}
if the metric space $(E, D_p)$ is a complete metric space.
By property (2),
the convergence of sequences in $E$ is independent of the choice of $p \in {\cal TS}(\omega)$.
A {\em completion} of a valuated matroid $(E, \omega)$
is a complete valuated matroid $(\bar E,\bar \omega)$ with the properties
that
$\bar E$ contains $E$ as a dense subset, and
$\omega$ is equal to the restriction of $\bar \omega$ to $n$-element subsets in $E$.
\begin{Thm}[\cite{DressTerhalle93}]
For a simple valuated matroid $(E, \omega)$,
there is a (unique) completion $(\bar E, \bar \omega)$ of $(E, \omega)$.
\end{Thm}
The construction of a completion of a valuated matroid $(E,\omega)$
is analogous to (and generalizes)
that of the $p$-adic numbers from the rational numbers:
Consider the set $\bar E$ of
all Cauchy sequences $(x_i)$, relative to $D_p$,
modulo the equivalence relation $\sim$ defined by
$(x_i) \sim (y_i) \Leftrightarrow \lim_{i \rightarrow \infty} D_p(x_i,y_i) = 0$.
Regard $E$ as a subset of $\bar E$ by associating $x \in E$ with
a Cauchy sequence converging to $x$, and
extend $D_p$ to $\bar E \times \bar E \to {\bf R}$ by
$D_p((x_i),(y_i)) := \lim _{i \rightarrow \infty} D_p(x_i,y_i)$.
Then $\bar E$ is a complete metric space containing $E$ as a dense subset.
Accordingly, $\omega$ is extended to $\bar \omega$ by
\begin{equation*}
\bar \omega(B) := \lim_{i \rightarrow \infty} \omega(B_i),
\end{equation*}
where $B_i \subseteq E$ consists of $n$ elements each converging to an element of $B \subseteq \bar E$.
By a completion of a nonsimple valuated matroid $(E, \omega)$
we mean a completion of a simplification of $(E, \omega)$.
In Section~\ref{subsec:v->u},
we give a natural interpretation of
this completion process via
a uniform semimodular lattice.
The rest of this section
gives some basic properties of the tropical linear space
${\cal T}(\omega)$. Let $(E, \omega)$ be
an integer-valued valuated matroid on
an underlying matroid ${\bf M} = (E,{\cal B})$ of rank $n$.
We regard ${\cal T}(\omega)$ as endowed with the vector order $\leq$.
\begin{Lem}\label{lem:projection}
Let $(\tilde E, \tilde \omega)$ be a simplification of $(E, \omega)$.
Then the projection $x \mapsto x|_{\tilde E}$ is
an order-preserving bijection from ${\cal T}(\omega)$ to ${\cal T}(\tilde \omega)$.
\end{Lem}
\begin{proof}
Let $e,f \in E$ be a parallel pair with $\omega (K+e) = \omega(K+f) + \alpha$
for every $(n-1)$-element subset $K \subseteq E \setminus \{e,f\}$; see Lemma~\ref{lem:parallel}.
Let $x \in {\cal T}(\omega)$. If
$B \in {\cal B}_{\omega+x}$ contains $e$,
then $B - e + f \in {\cal B}_{\omega+x}$.
From this, we see that ${\cal B}_{\tilde \omega + x|_{\tilde E}}$ is a subset of
${\cal B}_{\omega + x}$, and ${\bf M}_{\tilde \omega + x|_{\tilde E}}$ has no loop.
Thus the projection $x \mapsto x|_{\tilde E}$ is
an order-preserving map from ${\cal T}(\omega)$ to~${\cal T}(\tilde \omega)$.
Since $e,f$ are also parallel in ${\cal B}_{\omega+x} \subseteq {\cal B}$,
it must hold $(\omega+x)(B) = (\omega+x)(B - e + f)$ for
$B \in {\cal B}_{\omega +x}$ with $e \in B$.
By $\omega (B) = \omega(B-e + f) + \alpha$,
we obtain $x(e) + \alpha = x(f)$, where $\alpha$ is independent of $x$.
This means that the coordinate of $e$ in ${\cal T}(\omega)$ is
recovered from that of $f$.
From this, we see that the projection is a bijection.
\end{proof}
For $x \in {\bf R}^E$, let $\lfloor x \rfloor \in {\bf Z}^E$
denote the vector obtained from $x$
by rounding down each component, i.e.,
$\lfloor x \rfloor (e) := \lfloor x(e) \rfloor$ for $e \in E$.
\begin{Lem}\label{lem:chain_of_flats}
For $x \in {\cal T}(\omega)$, we have the following:
\begin{itemize}
\item[{\rm (1)}]
$\lfloor x \rfloor \in {\cal T}(\omega)$.
\item[{\rm (2)}] There exist a chain
$\emptyset \neq F_1 \subset F_2 \subset \cdots \subset F_n = E$ of flats in ${\bf M}_{\omega + \lfloor x \rfloor}$ and coefficients $\lambda_i \geq 0$ such that $\sum_{i=1}^n \lambda_i < 1$ and
$x = \lfloor x \rfloor + \sum_{i=1}^{n}
\lambda_i {\bf 1}_{F_i}$.
\end{itemize}
\end{Lem}
\begin{proof}
(1). Let $B \in {\cal B}_{\omega+x}$.
By Lemma~\ref{lem:opt},
we have $(\omega + x) (B + e - f) \leq (\omega + x)(B)$
for all $e \in E \setminus B$ and $f \in B$.
From this, we have
\begin{equation*}
(\omega + \lfloor x \rfloor)
(B + e - f) + \varDelta(e) - \varDelta(f) \leq (\omega + \lfloor x \rfloor)(B),
\end{equation*}
where $\varDelta(g) := x(g) - \lfloor x(g) \rfloor$ for $g \in E$.
Since $\varDelta(e) - \varDelta(f) > - 1$ and $\omega$ is integer-valued,
we have $(\omega + \lfloor x \rfloor)
(B + e - f) \leq (\omega + \lfloor x \rfloor)(B)$. By Lemma~\ref{lem:opt} again,
we have $B \in {\cal B}_{\omega+ \lfloor x \rfloor}$.
Hence ${\cal B}_{\omega + x} \subseteq {\cal B}_{\omega + \lfloor x \rfloor}$.
Since every element of $E$ belongs to some base in ${\cal B}_{\omega + x}$,
it belongs to some base in ${\cal B}_{\omega + \lfloor x \rfloor}$;
therefore ${\bf M}_{\omega + \lfloor x \rfloor}$ has no loop.
(2). It suffices to show:
If $x \in {\cal T}(\omega)$ and $\alpha \in [0,1)$, then
$F_{\alpha} := \{e \in E \mid x(e) - \lfloor x(e) \rfloor \geq \alpha\}$ is a flat of matroid ${\bf M}_{\omega + \lfloor x \rfloor}$.
By ${\cal B}_{\omega + x} \subseteq {\cal B}_{\omega + \lfloor x \rfloor}$ shown above, one can see that ${\cal B}_{\omega+ x}$
is the maximizer family of
linear objective function $B \mapsto (x - \lfloor x \rfloor)(B)$
over ${\cal B}_{\omega + \lfloor x \rfloor}$.
Suppose to the contrary that there exists $e \in {\rm cl} (F_{\alpha}) \setminus F_\alpha$,
where the closure is taken in ${\bf M}_{\omega + \lfloor x \rfloor}$.
Since ${\bf M}_{\omega + x}$ has no loop,
there is a base $B \in {\cal B}_{\omega+ x} (\subseteq {\cal B}_{\omega+\lfloor x \rfloor})$ containing $e$.
Then ${\rm cl} (B - e) \not \supseteq F_{\alpha}$,
since otherwise $e \not \in {\rm cl} (B - e) = {\rm cl} ({\rm cl} (B - e)) \supseteq {\rm cl} (F_{\alpha})\ni e$, a contradiction.
Thus we can choose $f \in F_{\alpha} \setminus {\rm cl}(B-e)$, for which $B + f - e \in {\cal B}_{\omega + \lfloor x \rfloor}$.
But then the linear objective function above strictly increases,
since $(x - \lfloor x \rfloor)(f) \geq \alpha > (x - \lfloor x \rfloor)(e)$,
contradicting the fact that $B$ is a maximizer.
\end{proof}
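To illustrate the decomposition in the preceding lemma, consider the following small instance; the surrounding matroid is hypothetical (e.g. a rank-$2$ matroid on $E = \{e_1,e_2,e_3\}$ in which $e_2$ and $e_3$ are parallel in ${\bf M}_{\omega + \lfloor x \rfloor}$, so that $\{e_2,e_3\}$ is a flat). For $x = (1.2,\, 2.7,\, 0.7)$ one has

```latex
\lfloor x \rfloor = (1,\,2,\,0), \qquad
x - \lfloor x \rfloor = (0.2,\,0.7,\,0.7),
\qquad F_1 := F_{0.7} = \{e_2,e_3\} \subset F_2 := F_{0.2} = E,
```

and indeed $x = \lfloor x \rfloor + 0.5\,{\bf 1}_{F_1} + 0.2\,{\bf 1}_{F_2}$ with $\lambda_1 + \lambda_2 = 0.7 < 1$, matching the chain built from the distinct fractional values of $x$.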
\begin{Lem}\label{lem:trop}
A vector $x \in {\bf R}^E$ belongs to ${\cal T}(\omega)$ if and only if
the maximum of $\omega+x$ over ${\cal B}$ is attained and $x$ satisfies {\rm (TW)}.
\end{Lem}
\begin{proof}
(If part). Consider $B \in {\cal B}_{\omega + x}$ and $e \in E \setminus B$.
By Lemma~\ref{lem:opt}, the maximum $\max \{\omega(B + e - f) - x(f) \mid f \in B+e,\ B + e - f \in {\cal B}\}$ is attained by $f = e$,
and by (TW) it is also attained by some $f \neq e$.
For such $f$, this means that $B + e- f \in {\cal B}_{\omega + x}$, so $e$ is not a loop.
Thus ${\bf M}_{\omega+x}$ is loop-free.
(Only if part). By Lemma~\ref{lem:chain_of_flats}~(2) and $|F_i \cap B| \in \{0,1,2,\ldots,n\}$,
it holds $\{ (\omega + x)(B) \mid B \in {\cal B}\} \subseteq {\bf Z} + U$
for a finite set $U = \{\sum_{i=1}^{n} \lambda_i k_i \mid 0 \leq k_i \leq n\}$.
Consequently the maximum of $\omega + x + \alpha {\bf 1}_{F}$ is attained
for all $\alpha \geq 0$ and $F \subseteq E$.
%
The rest is precisely the same as in the proof of \cite[Proposition 2.3]{Speyer08}.
Consider an arbitrary $(n+1)$-element subset $C$ of $E$.
As $\alpha \geq 0$ increases,
the maximizer family
${\cal B}_{\omega + x + \alpha {\bf 1}_{C}}$ changes finitely many times.
Also, for large $\alpha \geq 0$, ${\cal B}_{\omega + x + \alpha {\bf 1}_{C}}$
consists only of bases $B \in {\cal B}$ with $B \subseteq C$.
We show that no $e \in C$ is a loop in ${\bf M}_{\omega + x + \alpha {\bf 1}_{C}}$ for any $\alpha \geq 0$.
For small $\epsilon > 0$, any base $B$
in ${\cal B}_{\omega + x + \alpha {\bf 1}_{C}}$ with maximal $C \cap B$
is also a base in
${\cal B}_{\omega + x + (\alpha+ \epsilon) {\bf 1}_{C}}$; see Lemma~\ref{lem:integer-valued} below.
Since each $e \in C$ is not a loop in ${\bf M}_{\omega + x}$,
it is not a loop in ${\bf M}_{\omega + x + \alpha {\bf 1}_{C}}$ for any $\alpha \geq 0$.
Thus, for large $\alpha>0$, the maximum of $\omega + x + \alpha {\bf 1}_{C}$
must be attained by at least two bases in $C$, which implies (TW).
\end{proof}
In the last step of the proof, we use the following lemma:
\begin{Lem}[\cite{Speyer08}]\label{lem:integer-valued}
Let $x \in {\cal T}(\omega)$ and $F \subseteq E$.
Any base $B \in {\cal B}_{\omega+x}$
with maximal $B \cap F$
belongs to ${\cal B}_{\omega + x + \alpha {\bf 1}_F}$
for sufficiently small $\alpha > 0$.
If $\omega$ and $x$ are integer-valued,
then we can take $\alpha = 1$.
\end{Lem}
\begin{proof}
We only show the case where $\omega$ and $x$ are integer-valued;
the proof of the non-integral case is essentially the same.
We can assume that $x = {\bf 0}$.
Consider a base $B \in {\cal B}_{\omega}$
with maximal $B \cap F$.
By Lemma~\ref{lem:opt},
it suffices to show that $\omega(B) + |B \cap F| \geq
\omega (B - e + f) + |(B - e + f) \cap F|$ for $e \in B$
and $f \in E \setminus B$ with $B - e + f \in {\cal B}$.
If $f \not \in F$ or $e \in F$,
then this obviously holds.
Suppose $f \in F$ and $e \not \in F$.
Then $|(B - e + f) \cap F| = 1 + |B \cap F|$.
By the maximality, $B-e+f$ does not belong to
${\cal B}_{\omega}$, implying $\omega(B - e+f) \leq \omega(B) - 1$.
Thus $\omega(B) + |B \cap F| \geq
\omega (B - e + f) + |(B - e + f) \cap F|$.
\end{proof}
The tropical linear space enjoys
a tropical version of convexity
introduced by Develin-Sturmfels~\cite{DevelinSturmfels}.
A subset $Q \subseteq {\bf R}^E$ is said to
be {\em tropically convex}~\cite{DevelinSturmfels}
if $\min (x + \alpha {\bf 1}, y + \beta {\bf 1}) \in Q$ for all $x,y \in Q$ and $\alpha,\beta \in {\bf R}$. Tropical convexity is equivalent to the conjunction of
conditions (TC$_\wedge$) and (TC$_{+{\bf 1}}$) below:
\begin{itemize}
\item[(TC$_\wedge$)] $\min(x,y) \in Q$ for all $x,y \in Q$.
\item[(TC$_{+{\bf 1}}$)] $x+ \alpha {\bf 1} \in Q$ for all $x \in Q$, $\alpha \in {\bf R}$.
\end{itemize}
These two properties of ${\cal T}(\omega)$
were recognized
by Murota-Tamura~\cite{MurotaTamura01} (in the finite case).
\begin{Lem}[{\cite[Theorem 3.4]{MurotaTamura01};
see also \cite[Proposition 2.14]{Hampe15}}]\label{lem:trop_convexity}
The tropical linear space ${\cal T}(\omega)$ is tropically convex.
\end{Lem}
\begin{proof}
We show that ${\cal T}(\omega)$ satisfies (TC$_\wedge$), while
(TC$_{+{\bf 1}}$) is obvious.
Let $x,y \in {\cal T}(\omega)$. As in the proof of Lemma~\ref{lem:trop},
we see from Lemma~\ref{lem:chain_of_flats}~(2) that the image
$\{ (\omega + x \wedge y)(B) \mid B \in {\cal B} \}$ of $\omega + x \wedge y$
is discrete in ${\bf R}$.
Hence the maximum of $\omega + x \wedge y$ is attained by some base.
Let $C$ be an $(n+1)$-element subset of $E$.
We may assume that $\max_f \omega(C - f ) - x(f) \geq \max_f \omega(C - f ) - y(f)$.
Then necessarily $\max_f \omega(C - f ) - (x \wedge y) (f) = \max_f \omega(C - f ) - x(f)$.
By (TW) and Lemma~\ref{lem:trop}, we can choose distinct $e,e' \in C$ that attain
$\max_f \omega(C - f ) - x(f)$. Necessarily $x(e) = (x \wedge y)(e)$ and
$x(e') = (x \wedge y)(e')$,
since otherwise $\omega(C - e ) - (x \wedge y)(e)$ or $\omega(C - e' ) - (x \wedge y)(e')$ would exceed this maximum.
Thus $e,e'$ attain $\max_f \omega(C - f ) - (x \wedge y) (f)$. By Lemma~\ref{lem:trop}, we have $x \wedge y \in {\cal T}(\omega)$.
\end{proof}
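The membership criterion of Lemma~\ref{lem:trop} can be checked mechanically on small finite instances (the attainment of the maximum of $\omega + x$ is automatic for finite $E$). The following Python sketch is an illustration only; the function name and the dictionary encoding of $\omega$ are ours. It tests condition (TW) and can be used to verify closure under the componentwise minimum:

```python
from itertools import combinations

def satisfies_TW(omega, E, n, x, tol=1e-9):
    """Condition (TW): for every (n+1)-element subset C of E,
    the maximum of omega(C - f) - x(f), taken over those f in C for
    which C - {f} is a base, is attained at least twice.
    omega: dict mapping frozenset bases to their (finite) valuation."""
    for C in combinations(E, n + 1):
        vals = [omega[frozenset(C) - {f}] - x[f]
                for f in C if frozenset(C) - {f} in omega]
        if not vals:
            continue  # C contains no base of the underlying matroid
        m = max(vals)
        if sum(1 for v in vals if v > m - tol) < 2:
            return False
    return True

# omega == 0 on the uniform matroid U_{2,3}: here T(omega) consists of
# the points whose minimum coordinate is attained at least twice.
E, n = [0, 1, 2], 2
omega = {frozenset(B): 0 for B in combinations(E, n)}
x = {0: 0.0, 1: 0.0, 2: 1.0}
y = {0: 0.0, 1: 1.0, 2: 0.0}
meet = {e: min(x[e], y[e]) for e in E}
```

For this instance, `x`, `y`, and `meet` all pass the (TW) test, in accordance with (TC$_\wedge$), while a point such as $(0,1,2)$, whose minimum is attained only once, fails it.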
By this property, ${\cal T}(\omega) \cap {\bf Z}^E$
becomes a lattice with respect to vector order $\leq$.
In the next section,
we characterize this lattice ${\cal T}(\omega) \cap {\bf Z}^E$.
\section{Uniform semimodular lattice}\label{sec:uniform}
The {\em ascending operator} of a lattice ${\cal L}$ is a map $(\cdot)^+:{\cal L} \to {\cal L}$ defined by
\begin{equation*}
(x)^+ := \bigvee \{y \in {\cal L} \mid \mbox{$y$ covers $x$}\}.
\end{equation*}
A {\em uniform semimodular lattice} ${\cal L}$ is a semimodular lattice such that the ascending operator $(\cdot)^+$ is defined,
and is an automorphism on ${\cal L}$.
If, in addition, ${\cal L}$ is a modular lattice, then ${\cal L}$
is called a {\em uniform modular lattice}.
The condition for $(\cdot)^+$ is viewed as
a lattice-theoretic analogue of condition (TC$_{+{\bf 1}}$).
The simplest but important example of
a uniform (semi)modular lattice is ${\bf Z}^m$:
\begin{Ex}
View ${\bf Z}^m$ as a poset with vector order $\leq$.
Then ${\bf Z}^m$ is a lattice, where the join $x \vee y$ and meet $x \wedge y$
are $\max(x,y)$ and $\min(x,y)$, respectively.
The component sum $x \mapsto \sum_{i=1}^m x_i$ is a height function
satisfying the semimodular inequality (\ref{eqn:semimo}) with equality.
Therefore ${\bf Z}^m$ is a (semi)modular lattice.
Observe that the ascending operator is equal to $x \mapsto x + {\bf 1}$, which is obviously an automorphism.
Thus ${\bf Z}^m$ is a uniform (semi)modular lattice.
\end{Ex}
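The operations in this example are immediate to write down. The following Python sketch (illustrative only, with our own function names) realizes the join, meet, height function, and ascending operator of ${\bf Z}^m$ on integer tuples, so that the modular equality of heights can be checked directly:

```python
def join(x, y):
    """Join in Z^m: the componentwise maximum."""
    return tuple(max(a, b) for a, b in zip(x, y))

def meet(x, y):
    """Meet in Z^m: the componentwise minimum."""
    return tuple(min(a, b) for a, b in zip(x, y))

def height(x):
    """The component sum, a height function on Z^m."""
    return sum(x)

def ascend(x):
    """The ascending operator: the join of all covers of x is x + 1."""
    return tuple(a + 1 for a in x)
```

For instance, $x = (0,2,1)$ and $y = (1,0,3)$ satisfy $r(x) + r(y) = r(x \vee y) + r(x \wedge y)$, the semimodular inequality holding with equality as stated in the Example.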
\subsection{Basic concepts and properties}
In this section, we introduce basic concepts on uniform semimodular lattices
and prove some of their basic properties,
which will form the basis of our cryptomorphic equivalence with valuated matroids.
Some of them were introduced and proved in \cite{HH18a} for uniform modular lattices.
Let ${\cal L}$ be a uniform semimodular lattice.
\begin{Lem}\label{lem:u-rank}
For $x,y \in {\cal L}$,
the intervals $[x, (x)^+]$ and $[y, (y)^+]$ are
geometric lattices of the same rank.
\end{Lem}
\begin{proof}
By definition, $(x)^+$ is the join of all atoms of $[x, (x)^+]$.
Hence $[x, (x)^+]$ is a geometric lattice.
We show that $[x, (x)^+]$ and $[y, (y)^+]$ have the same rank.
%
It suffices to consider the case where $y$ covers $x$.
Since $(\cdot)^+$ is an automorphism,
$(y)^+$ covers $(x)^+$.
Therefore we have $1 + r[y,(y)^+] = r[x, (y)^+] = r[x,(x)^+] + 1$ (by (JD)), which implies $r[x,(x)^+] = r[y, (y)^+]$.
\end{proof}
The {\em uniform-rank} of ${\cal L}$ is defined as the rank $r[x, (x)^+]$
of interval $[x,(x)^+]$ for $x \in {\cal L}$.
We next study the inverse $(\cdot)^-$ of the ascending operator $(\cdot)^+$.
\begin{Lem}\label{lem:inverse}
The inverse $(\cdot)^-$ of $(\cdot)^+$ is given by
\begin{equation}\label{eqn:x^-}
(x)^- = \bigwedge \{ y \in {\cal L} \mid \mbox{$y$ is covered by $x$}\} \quad (x \in {\cal L}).
\end{equation}
\end{Lem}
\begin{proof}
%
Suppose that $y \in {\cal L}$ is covered by $(x)^+$.
Since $(\cdot)^+$ is an automorphism,
there is $y' \in {\cal L}$ such that $(y')^+ = y$.
Also $x$ covers $y'$,
which implies $x \preceq (y')^+= y$ by the definition of $(\cdot)^+$.
Namely $y$ belongs to $[x, (x)^+]$.
%
Now $x$ is also the meet of all hyperplanes
of the geometric lattice $[x, (x)^+]$ (Lemma~\ref{lem:hyperplane}~(1)).
By the above argument, these are exactly the elements covered by $(x)^+$ in ${\cal L}$.
This means that the right hand side of (\ref{eqn:x^-}) exists,
and equals $(x)^-$.
\end{proof}
For $x \in {\cal L}$ and $k \in {\bf Z}$, define $(x)^{+k}$ by
\begin{equation*}
(x)^{+k} := \left\{
\begin{array}{ll}
x & {\rm if}\ k= 0, \\
((x)^{+(k-1)})^{+} & {\rm if}\ k > 0, \\
((x)^{+(k+1)})^{-} & {\rm if}\ k < 0.
\end{array}\right.
\end{equation*}
For $k > 0$, we denote $(x)^{+(-k)}$ by $(x)^{-k}$.
\begin{Lem}\label{lem:preceq}
For $x,y \in {\cal L}$,
there is $k \geq 0$
such that $x \preceq (y)^{+k}$.
\end{Lem}
\begin{proof}
We may assume that $x \not \preceq y$.
Hence $x \succ x \wedge y$.
Choose an atom $a$ in $[x \wedge y, x]$.
Then $a \wedge y = x \wedge y$. By semimodularity,
$a \vee y$ is an atom in $[y, (y)^+]$, and
$x \wedge y \prec a \preceq x \wedge (y)^+$.
Thus, repeating this argument, it holds $x \wedge (y)^{+k} = x$ for $k \geq r[x \wedge y,x]$,
implying $x \preceq (y)^{+k}$.
\end{proof}
\subsubsection{Segment and ray}\label{subsec:segments}
A {\em segment} is a chain $e^0 \prec e^1 \prec \cdots \prec e^s$
such that $e^\ell$ covers $e^{\ell-1}$ for $\ell=1,2,\ldots,s$,
and $e^{\ell+1} \not \in [e^{\ell-1}, (e^{\ell-1})^+] (\ni e^\ell)$ for $\ell=1,2,\ldots,s-1$.
A {\em ray} is an infinite chain $e^0 \prec e^1 \prec \cdots$
such that $e^0 \prec e^1 \prec \cdots \prec e^\ell$ is a segment for all $\ell$.
The following characterization of segments was suggested by K. Hayashi.
\begin{Lem}\label{lem:straight}
A chain $x = e^0 \prec e^1 \prec \cdots \prec e^s = y$ is a segment if and only
if $[x,y] = \{ e^0, e^1, \ldots, e^s \}$.
\end{Lem}
\begin{proof}
(If part).
Suppose to the contrary that $e^{\ell+1} \in [e^{\ell-1},(e^{\ell-1})^+]$.
Then there is an atom $a$ in $[e^{\ell-1},(e^{\ell-1})^+]$
such that $e^{\ell+1} = a \vee e^{\ell}$ (by Lemma~\ref{lem:hyperplane}~(2)).
This implies that $a \in [e^{\ell-1},e^{\ell+1}]$,
which contradicts $[e^{\ell-1},e^{\ell+1}] = \{e^{\ell-1},e^\ell,e^{\ell+1}\}$.
(Only if part). We use induction on the length $s$;
the case of $s=1$ is obvious.
Suppose that $[x,e^{s-1}] = \{ e^0, e^1, \ldots, e^{s-1}\}$, and suppose to the contrary that $[x,y]$ properly contains $\{ e^0, e^1, \ldots, e^{s}\}$.
Then (by induction) there
is an atom $a$ of $[x,y]$ not belonging to $\{ e^0, e^1, \ldots, e^{s}\}$.
In particular, $a \not \preceq e^{s-1}$.
By semimodularity, $a \vee e^{s-1}$ covers $e^{s-1}$, and is equal to $e^s$.
Consider $e^{s-2} \vee a$, which covers $e^{s-2}$ and
is not equal to $e^{s-1}$ (by $a \not \preceq e^{s-1}$).
The join $(e^{s-2} \vee a) \vee e^{s-1}$ is equal to $e^{s}$.
However this contradicts $e^s \not \in [e^{s-2}, (e^{s-2})^+]$.
\end{proof}
A ray (or segment) $e^0 \prec e^1 \prec \cdots$ with $x = e^0$
is called an {\em $x$-ray} (or {\em $x$-segment}).
\begin{Lem}\label{lem:segment}
Let $x = e^0 \prec e^1 \prec \cdots \prec e^s$ be an $x$-segment.
For $p \succeq x$ with $p \wedge e^1 = x$,
chain $p = p \vee e^0 \prec p \vee e^1 \prec \cdots \prec p \vee e^{s}$
is a $p$-segment.
\end{Lem}
\begin{proof}
It suffices to consider the case where $p$ covers $x$.
Since $p \neq e^1$ and $[e^0,e^s] = \{e^0,e^1,\ldots,e^s\}$ (Lemma~\ref{lem:straight}),
it holds $p \not \preceq e^s$.
Then, by semimodularity, $(p,e^s)$ is a modular pair.
Consequently, $p \vee e^{\ell+1}$ covers $p \vee e^{\ell}$ and $e^{\ell+1}$.
Let $f^{\ell} := p \vee e^{\ell}$.
We show $(f^{\ell})^+ = (f^{\ell-1})^+ \vee f^{\ell+1}$,
which implies $f^{\ell+1} \not \in [f^{\ell-1}, (f^{\ell-1})^+]$.
%
By $f^{\ell-1} \vee e^\ell = f^\ell$, we have $(f^{\ell-1})^+ \vee (e^\ell)^+ = (f^\ell)^+$.
By $(e^\ell)^+ = (e^{\ell-1})^+ \vee e^{\ell+1}$,
we have $(f^\ell)^+ = (f^{\ell-1})^+ \vee (e^{\ell-1})^+ \vee e^{\ell+1} = (f^{\ell-1})^+ \vee e^{\ell+1} = (f^{\ell-1})^+ \vee f^\ell \vee e^{\ell+1} = (f^{\ell-1})^+ \vee f^{\ell+1}$, as required.
\end{proof}
For $x \in {\cal L}$, let $r_x$ be a height function (on $[x,(x)^+]$)
defined by $r_x(y) = r(y) - r(x)$.
A set of $x$-rays $(e_i^\ell)$ $(i=1,2,\ldots,k)$
is said to be {\em independent} if
$r_x(e_1^1 \vee e_2^1 \vee \cdots \vee e_k^1) = k$,
or equivalently if $e_i^1 \wedge (\bigvee_{j\neq i} e_j^1) = x$ for each $i$.
\begin{Prop}\label{prop:generated}
The sublattice generated by an independent set of $k$ $x$-rays
is isomorphic to ${\bf Z}^k_+$, where the isomorphism is given by
\begin{equation*}
{\bf Z}^k_+ \ni (z_1,z_2,\ldots,z_k) \mapsto e_1^{z_1} \vee e_2^{z_2} \vee \cdots \vee e_k^{z_k}.
\end{equation*}
\end{Prop}
\begin{proof} Suppose that
$x$-rays $(e_i^\ell)$ $(i=1,2,\ldots,k)$ are independent.
We first show:
\begin{Clm}
For $z \in {\bf Z}^k_+$, we have the following.
\begin{itemize}
\item[(1)] $r_x(e_1^{z_1} \vee e_2^{z_2} \vee \cdots \vee e_k^{z_k}) = \sum_{i=1}^k z_i$.
\item[(2)] $e_j^{z_j} \wedge (\bigvee_{i:i\neq j} e_i^{z_i}) = x$ for $j \in \{1,2,\ldots,k\}$.
\end{itemize}
\end{Clm}
\begin{proof}
(1).
We prove the claim by induction on $k$; the case of $k=1$ is obvious.
From Lemma~\ref{lem:straight} and the independence of $(e_i^\ell)$,
we have $e_j^1 \wedge e_k^{z_k} = x$ for $j=1,2,\ldots,k-1$.
By Lemma~\ref{lem:segment}, $(e_j^\ell \vee e_k^{z_k})$ $(j=1,2,\ldots,k-1)$
are $e_k^{z_k}$-segments.
We next show that they are independent.
Indeed, $e_k^2$ covers $e_k^1$
and $e_k^2 \not \preceq e_1^1 \vee e_2^1 \vee \cdots \vee e_k^1$
(otherwise $e_k^2 \in [e_k^0,(e_k^0)^+]$).
Thus, by semimodularity,
$r_{e_k^2} (e_1^1 \vee e_2^1 \vee \cdots \vee e_{k-1}^1 \vee e_k^2) = r_{e_k^1}(e_1^1 \vee e_2^1 \vee \cdots \vee e_{k}^1) = k-1$, and
$e_j^1 \vee e_k^1$ $(j=1,2,\ldots,k-1)$ are independent in $[e_k^1,(e_k^1)^+]$.
Repeating this, we see that $e_j^1 \vee e_k^{z_k}$ $(j=1,2,\ldots,k-1)$
are independent in $[e_k^{z_k},(e_k^{z_k})^+]$.
By induction, we have
$r_x(e_1^{z_1} \vee e_2^{z_2} \vee \cdots \vee e_k^{z_k}) = r_{e_k^{z_k}}(e_1^{z_1} \vee e_2^{z_2} \vee \cdots \vee e_k^{z_k}) + z_k = \sum_{i=1}^k z_i$, as required.
(2). From (1) and semimodularity (\ref{eqn:semimo}),
we have $\sum_{i: i \neq j} z_i + z_j = r_x(\bigvee_{i:i\neq j} e_i^{z_i}) + r_x(e_j^{z_j}) \geq r_x(e_1^{z_1} \vee e_2^{z_2} \vee \cdots \vee e_k^{z_k}) + r_x(e_j^{z_j} \wedge (\bigvee_{i:i\neq j} e_i^{z_i})) \geq \sum_{i}z_i$.
Thus $r_x(e_j^{z_j} \wedge (\bigvee_{i:i\neq j} e_i^{z_i})) = 0$ must hold, implying $x = e_j^{z_j} \wedge (\bigvee_{i:i\neq j} e_i^{z_i})$.
\end{proof}
By (2) of the claim, any element $y$ in the sublattice generated by $e_i^\ell$ $(i=1,2,\ldots,k,\ell=0,1,2,\ldots)$ can be written as
\begin{equation}\label{eqn:expression}
y = e_1^{z_1} \vee e_2^{z_2} \vee \cdots \vee e_k^{z_k}
\end{equation}
for $z = (z_1,z_2,\ldots,z_k) \in {\bf Z}^k_+$.
It suffices to show that the expression (\ref{eqn:expression})
is unique.
For $i=1,2,\ldots,k$, let $z_i' := \max \{ \ell \in {\bf Z}_+ \mid e^{\ell}_i \preceq y \}$.
Then $z_i \leq z_i'$ (since $e^{z_i}_i \preceq y$).
Consider $y' := e_1^{z'_1} \vee e_2^{z'_2} \vee \cdots \vee e_k^{z'_k}$.
Then $y' \preceq y$, which implies $r_x(y') \leq r_x(y)$.
On the other hand, $r_x(y) = z_1+ z_2 + \cdots + z_k \leq z_1' + z_2' + \cdots + z_k' = r_x(y')$. Thus it must hold $z_i = z_i'$ for $i=1,2,\ldots,k$, and $y = y'$.
\end{proof}
\subsubsection{Parallelism and end}
Here we introduce a parallel relation for rays,
and introduce the concept of an end
as an equivalence class of this relation.
\begin{Lem}\label{lem:ray}
Let $x = e^0 \prec e^1 \prec \cdots$ be an $x$-ray.
For $y \succeq x$, there is an index $\ell$
such that $e^{\ell} \preceq y$ and $e^{\ell+1} \not \preceq y$.
In particular, $y=e^{\ell} \vee y \prec e^{\ell+1} \vee y \prec \cdots$ is a $y$-ray.
\end{Lem}
\begin{proof}
By (F),
there is no infinite chain in any interval.
Therefore $e^\ell \preceq y$ for all $\ell$ is impossible.
The latter statement follows from Lemma~\ref{lem:segment}.
\end{proof}
For an $x$-ray
$(e^\ell) = (x = e^0 \prec e^1 \prec \cdots)$ and $y \succeq x$,
the $y$-ray in the above lemma is denoted by $(e^\ell) \vee y$.
An $x$-ray $(e^{\ell})$ and $y$-ray $(f^{\ell})$ are said to be {\em parallel}
if $(e^{\ell}) \vee (x \vee y) = (f^{\ell}) \vee (x \vee y)$.
We write $(e^{\ell}) \approx (f^{\ell})$ if they are parallel.
\begin{Lem}
The parallel relation $\approx$ is an equivalence relation on the set of all rays.
\end{Lem}
\begin{proof}
We first show the following claim:
\begin{Clm}
Let $(e^{\ell})$ and $(f^\ell)$ be $x$-rays, and let $y \succeq x$.
Then $(e^{\ell}) = (f^\ell)$ if and only if $(e^\ell) \vee y = (f^\ell) \vee y$.
\end{Clm}
\begin{proof}
The only if part is obvious. We prove the if part.
Suppose that $(e^{\ell}) \neq (f^\ell)$. We show that $(e^\ell) \vee y \neq (f^\ell) \vee y$.
We may assume that $y$ covers $x$.
The above claim is clearly true when $y = e^{1} = f^{1}$.
Suppose that $y = e^1 \neq f^1$.
By Proposition~\ref{prop:generated} applied to
independent $x$-rays $(e^{\ell})$,$(f^\ell)$,
we have $y \vee e^2 = e^2 \neq y \vee f^1$,
and $(e^\ell) \vee y \neq (f^\ell) \vee y$.
Suppose that $y \neq e^1$ and $y \neq f^1$.
%
For some $k \geq 0$,
we have $e^\ell = f^\ell$ for $\ell \leq k$ and $e^{k+1} \neq f^{k+1}$.
It suffices to show that $(y \vee e^k)$-rays
$(y \vee e^k \prec y \vee e^{k+1} \prec \cdots)$
and $(y \vee f^k \prec y \vee f^{k+1} \prec \cdots)$
are different.
So we may consider the case $k = 0$.
By the above argument, we can assume that $y$, $e^1$, and $f^1$ are different.
If $y$, $e^1$, and $f^1$ are independent in $[x,x^+]$,
then $y \vee e^1$ and $y \vee f^1$ are different,
and $(e^\ell) \vee y \neq (f^\ell) \vee y$, as required.
Suppose that they are dependent; namely
$y \vee e^1 \vee f^1 = y \vee e^1 = y \vee f^1 = e^1 \vee f^1 =:z$.
Then $e^2 \neq z$ and $f^2 \neq z$ (since $e^0 \prec e^1 \prec e^2$ is a segment).
We show that $y \vee e^2$ and $y \vee f^2$ are different.
By Lemma~\ref{lem:segment},
$e^1 \prec z = e^1 \vee f^1 \prec e^1 \vee f^2 = y \vee f^2$ is a segment.
If $y \vee e^2 = y \vee f^2$, then $y \vee e^2 = z \vee e^2$ implies that
$y \vee f^2$ is the join of $z$ and $e^2$, both covering $e^1$;
this contradicts the fact that $e^1 \prec z \prec y \vee f^2$ is a segment.
Thus $(e^\ell) \vee y \neq (f^\ell) \vee y$.
\end{proof}
%
It suffices to show that $(e^{\ell}) \approx (f^{\ell})$ and $(f^{\ell}) \approx (g^\ell)$ imply
$(e^{\ell}) \approx (g^{\ell})$.
%
Suppose that $(e^{\ell})$, $(f^{\ell})$, and $(g^\ell)$
are $x$-, $y$-, and $z$-rays, respectively.
Then $(e^{\ell}) \vee (x \vee y) = (f^{\ell}) \vee (x \vee y)$
and $(f^{\ell})\vee (y \vee z) = (g^\ell) \vee (y \vee z)$.
This implies that
$(e^{\ell}) \vee (x \vee y \vee z) = (f^{\ell}) \vee (x \vee y \vee z) = (g^{\ell}) \vee (x \vee y \vee z)$.
By the above claim, it must hold $(e^{\ell}) \vee (x \vee z) = (g^{\ell}) \vee (x \vee z)$.
\end{proof}
An equivalence class is called an {\em end}.
Let $E = E^{\cal L}$ denote the set of all ends.
\begin{Lem}
For an $x$-ray $(e^\ell)$ and $y \in {\cal L}$,
there (uniquely) exists a $y$-ray that is parallel to~$(e^{\ell})$.
\end{Lem}
\begin{proof}
Consider $y' := (y)^{+k} \succeq x$ for some $k \geq 0$ (Lemma~\ref{lem:preceq}).
Then $((e^\ell) \vee y')^{-k}$ is a $y$-ray, and
$((e^\ell) \vee y')^{-k} \approx (e^\ell) \vee y' \approx (e^\ell)$.
\end{proof}
Let $E_x$ denote the set of all $x$-rays.
By the above lemma, for each end $e \in E$,
there is an $x$-ray $e_x \in E_x$ that is a representative of $e$.
In particular $E_x$ and $E$ are in one-to-one correspondence.
For $e \in E$, the representative of $e$ in $E_x$
is denoted by $e_x = (x = e_x^0 \prec e_x^1 \prec e_x^2 \prec \cdots )$.
In particular, $E_x = \{e_x \mid e \in E\}$.
\subsubsection{Ultrametric on the space of ends}\label{subsub:ultra}
Let $x \in {\cal L}$.
Define $\delta_x: E \times E \to {\bf Z}_+ \cup \{\infty\}$ by
\begin{equation*}
\delta_x(e,f) := \sup \{i \mid e_x^i = f_x^i \} \quad (e,f \in E),
\end{equation*}
and define $d_x: E \times E \to {\bf R}_+$ by
\begin{equation*}
d_x(e,f) := \exp (- \delta_x(e,f)) \quad (e,f \in E).
\end{equation*}
Observe from Proposition~\ref{prop:generated} that
two different $x$-rays $(e_x^\ell)$,$(f_x^\ell)$
never meet again once they are separated,
i.e., if $e_x^i \neq f_x^i$ then $e_x^j \neq f_x^j$ for $j > i$.
In particular, all elements in $x$-rays in $E_x$
induce a rooted tree with root $x$ in the Hasse diagram of ${\cal L}$.
From this view, $\delta_x(e,f)$ is the distance between the root $x$
and the lowest common ancestor (lca) of $e$ and $f$.
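This rooted-tree picture can be made concrete. In the following Python sketch (an illustrative encoding of our own: each $x$-ray is represented by a finite list of its initial elements, starting at the root $x$), $\delta_x$ is computed as the depth of the lowest common ancestor, and the ultrametric inequality for $d_x$ can be checked directly:

```python
import math

def delta(e_ray, f_ray):
    """sup{ i : e^i = f^i } for two x-rays sharing the root
    e_ray[0] == f_ray[0] == x; this equals the depth of the
    lowest common ancestor in the rooted tree of x-rays."""
    i = 0
    while i < min(len(e_ray), len(f_ray)) and e_ray[i] == f_ray[i]:
        i += 1
    return i - 1

def d(e_ray, f_ray):
    """The ultrametric d_x = exp(-delta_x)."""
    return math.exp(-delta(e_ray, f_ray))
```

Since two $x$-rays never meet again once separated, the common prefix determines $\delta_x$, and $\delta_x(e,f) \geq \min(\delta_x(e,g), \delta_x(g,f))$ translates into $d_x(e,f) \leq \max(d_x(e,g), d_x(g,f))$.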
\begin{Prop}\label{prop:ultrametric}
For $x \in {\cal L}$, we have the following:
\begin{itemize}
\item[{\rm (1)}] $d_x$ is an ultrametric on $E$.
\item[{\rm (2)}] The metric space $(E,d_x)$ is complete.
\item[{\rm (3)}] For $y \in {\cal L}$,
it holds $\alpha d_y \leq d_x \leq \beta d_y$ for positive constants
$\alpha := \exp ( - r[x, x \vee y])$ and $\beta := \exp ( r[y, x \vee y])$.
\end{itemize}
\end{Prop}
\begin{proof}
(1). From the view of rooted tree,
one can easily see that $\delta_x$ satisfies the anti-ultrametric inequality:
\begin{equation*}
\delta_x(e,f) \geq \min (\delta_x(e,g), \delta_x(g,f)) \quad (e,f,g \in E).
\end{equation*}
Hence $d_x$ satisfies ultrametric inequality~(\ref{eqn:ultrametric}).
If $e \neq f$ then $\delta_x(e,f)$ is finite, and $d_x(e,f)$ is nonzero.
This means that $d_x$ is an ultrametric.
(2).
Consider a Cauchy sequence $(e_i)_{i=1,2,\ldots}$
in $E$ relative to $d_x$.
We construct $e \in E$ such that $\lim_{i \rightarrow \infty} d_x(e, e_i) = 0$.
Let $a^0 := x$.
For $\ell \in {\bf Z}_+$, there is
$n_\ell \in {\bf Z}_+$ such that $\delta_x(e_i, e_{i'}) \geq \ell$ for $i,i' \geq n_\ell$.
Let $a^{\ell} := f_{x}^{\ell}$ for $f := e_{n_\ell}$.
Then all $(e_i)_x$ for $i \geq n_\ell$ contain~$a^{\ell}$.
Hence $(a^\ell)$ is an $x$-ray, and $(e_{i})$ converges to
the end $e$ of the $x$-ray $(a^\ell)$.
(3). For $z \succeq x$,
it holds $\delta_z(e,f) \leq \delta_x(e,f) \leq \delta_z(e,f) + r[x,z]$.
Therefore $\delta_x(e,f) \leq \delta_{x \vee y}(e,f) + r[x, x \vee y] \leq \delta_y(e,f) + r[x, x \vee y]$, which implies $d_x(e,f) \geq \exp (- r[x, x \vee y]) d_y(e,f)$. Similarly $d_y(e,f) \leq \exp (r[y, x \vee y]) d_x(e,f)$.
\end{proof}
Thus $E^{\cal L}$ is endowed with the topology induced by ultrametric $d_x$,
which is independent of the choice of $x \in {\cal L}$ by (3).
We will see in Section~\ref{subsec:v->u} that
$E^{\cal L}$ coincides with
the Dress-Terhalle completion
when ${\cal L}$ comes from a valuated matroid $(E, \omega)$.
\subsubsection{Realization in ${\bf Z}^E$}
%
Here we show that ${\cal L}$ can be realized as a subset of ${\bf Z}^E$,
which will be the set of integer points of a tropical linear space.
Let $x \in {\cal L}$.
For $y \succeq x$, the {\em $x$-coordinate} of $y$
is an integer vector $y_x \in {\bf Z}^E_+$ defined by
\begin{equation*}
y_x(e) := \max \{ \ell \in {\bf Z}_+ \mid e_x^{\ell} \preceq y \} \quad (e \in E).
\end{equation*}
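For orientation, consider the model case ${\cal L} = {\bf Z}^m$ of the Example above; this identification is a direct check and is meant only as an illustration. There every $x$-ray increments a single coordinate, so ends may be identified with the coordinates, $e_x^{\ell} = x + \ell\,{\bf 1}_e$, and the $x$-coordinate reduces to a difference:

```latex
y_x(e) = \max \{ \ell \in {\bf Z}_+ \mid x + \ell\, {\bf 1}_e \leq y \}
       = y(e) - x(e) \qquad (x \leq y),
```

so $y_x = y - x$, and the identities in the next lemma reduce to telescoping of differences, e.g. $z - x = (z - y) + (y - x)$.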
\begin{Lem}\label{lem:y_x}
For $x \preceq y \preceq z$, we have the following:
\begin{itemize}
\item[{\rm (1)}] $z_x = z_y + y_x$.
\item[{\rm (2)}] $(y)^+_x = y_x + {\bf 1} = y_{(x)^-}$.
\item[{\rm (3)}] $\displaystyle y = \bigvee_{e \in E} e_x^{y_x(e)}$.
\end{itemize}
\end{Lem}
\begin{proof}
(1).
It suffices to consider the case where $z$ covers $y$.
Consider $e \in E$. By semimodularity, $y \vee e_x^{y_x(e)+1}$ covers $y$.
If $z = e_x^{y_x(e)+1}$, then $z = e_y^1$ and $z_y(e) = 1$, and
$z_x(e) = y_x(e) + 1$, where $z_x(e) > y_x(e) + 1$ is impossible by Lemma~\ref{lem:segment}.
If $z \neq e_x^{y_x(e)+1}$, then $z_y(e) = 0$ and $z_x(e) = y_x(e)$
(since $z \vee e_x^{y_x(e)+1} = z \vee (y \vee e_x^{y_x(e)+1})$ covers $z$).
(2). It is easy to see $(x)^+_x = {\bf 1}$.
By (1), we obtain (2).
(3). Observe from $y \succeq e_x^{y_x(e)}$ that
$(\succeq)$ holds;
in particular, the right hand side of (3) actually exists.
%
We show the equality ($=$).
Let $u (\preceq y)$ denote the right hand side of~(3).
Then $u_x = y_x$. From $y_x = y_{u} + u_x$ by (1), we have $y_u = {\bf 0}$.
Here $y \succ u$ is impossible, since otherwise $y \succeq e_u^1$ for some end $e$ and hence $y_u \neq {\bf 0}$.
\end{proof}
For general $x,y \in {\cal L}$,
the $x$-coordinate $y_x$ of $y$ is defined by
\begin{equation*}
y_x := (y)^{+k}_x - k {\bf 1}
\end{equation*}
for an integer $k$ with $(y)^{+k} \succeq x$.
This is well-defined by Lemma~\ref{lem:y_x}~(2).
Then it is easy to see that Lemma~\ref{lem:y_x}~(1) and (2) also hold for general $x,y,z$.
By ${\bf 0} = x_x = x_y + y_x$, we have:
\begin{Lem}\label{lem:y_x=-x_y}
For $x,y \in {\cal L}$, it holds $y_x = - x_y$.
\end{Lem}
For $x \in {\cal L}$,
define ${\cal Z}({\cal L},x) \subseteq {\bf Z}^{E}$ by
\begin{equation}
{\cal Z}({\cal L},x) := \{y_x \mid y \in {\cal L}\}.
\end{equation}
The partial order on ${\cal Z}({\cal L},x)$ is induced by the vector order $\leq$ in ${\bf Z}^{E}$.
\begin{Prop}\label{prop:Z(L,x)}
Let $x \in {\cal L}$. Then
${\cal L}$ is isomorphic to ${\cal Z}({\cal L},x)$ by $y \mapsto y_x$.
\end{Prop}
\begin{proof}
By Lemma~\ref{lem:y_x}~(3),
the map $y \mapsto y_x$ is injective on $\{ y \in {\cal L} \mid y \succeq x\}$.
Consequently it is bijective.
We show that the order is preserved.
Suppose that $y \preceq z$.
For some $k$, we have $x \preceq y^{+k} \preceq z^{+k}$.
By Lemma~\ref{lem:y_x},
we have $(z)^{+k}_x = (z)^{+k}_{(y)^{+k}} + (y)^{+k}_x$, and $z_x = z_y + y_x$.
By $z_y \geq {\bf 0}$, we have $z_x \geq y_x$.
\end{proof}
\subsubsection{Matroid at infinity}
Here we introduce matroid structures on the set $E$ of ends.
Suppose that ${\cal L}$ has uniform-rank $n$.
For $x \in {\cal L}$,
a subset $I \subseteq E$ of ends is called {\em independent at $x$}
or {\em $x$-independent}
if $\{e_x^1 \mid e \in I\}$ is independent in $[x,(x)^+]$.
Let ${\cal I}^x = {\cal I}^{{\cal L},x}$ denote the family of all $x$-independent subsets in $E$.
\begin{Lem}
$(E, {\cal I}^{x})$ is a matroid with rank $n$.
\end{Lem}
Indeed, ${\bf M}^{x} = {\bf M}^{{\cal L},x} := (E, {\cal I}^{x})$
is obtained by adding parallel elements
to the simple matroid corresponding to the
geometric lattice $[x, (x)^+]$, whose rank is equal to the uniform-rank $n$ of ${\cal L}$.
We call ${\bf M}^x$ the {\em matroid at $x$}.
Its base family is denoted by ${\cal B}^x$.
Let ${\cal I}^{\infty} := \bigcup_{x \in {\cal L}} {\cal I}^x$
be the union of all $x$-independent subsets over all $x \in {\cal L}$.
The goal here is to show the following.
\begin{Prop}\label{prop:matroid_infty}
$(E, {\cal I}^{\infty})$ is a simple matroid with rank $n$.
\end{Prop}
We call ${\bf M}^\infty := (E, {\cal I}^{\infty})$
the {\em matroid at infinity}.
The base family ${\cal B}^{\infty}$ of ${\bf M}^{\infty}$ is given by
${\cal B}^{\infty} = \bigcup_{x \in {\cal L}} {\cal B}^x$.
We see in Section~\ref{subsec:u->v}
that ${\cal B}^{\infty}$ is the domain of the valuated matroid corresponding to ${\cal L}$.
We are going to prove Proposition~\ref{prop:matroid_infty}.
\begin{Lem}\label{lem:dependent}
For $K \subseteq E$ and $x \in {\cal L}$, we have the following:
\begin{itemize}
\item[{\rm (1)}]
For any $z \in {\cal L}$ with $z \succeq x$ and
$z \not \succeq e_x^1$ $(e \in K)$, if $K \in {\cal I}^z$, then
$K \in {\cal I}^x$.
\item[{\rm (2)}] For any $z \in [x,(x)^+]$ with $z \succeq \bigvee_{e \in K} e_x^1$,
it holds $r[z, \bigvee_{e \in K} e_z^1] \geq r[x, \bigvee_{e \in K} e_x^1]$; in particular,
if $K \in {\cal I}^x$, then $K \in {\cal I}^z$.
\item[{\rm (3)}] For $I \subseteq K$, let $y := \bigvee_{e \in I} e_x^{1}$.
If $I \in {\cal I}^x$, $K \in {\cal B}^y$, and $e_x^1 \not \preceq y$ for $e \in K \setminus I$,
then $K \in {\cal B}^x$.
\end{itemize}
\end{Lem}
\begin{proof}
(1). We show the contrapositive;
suppose $|K| > r_x(\bigvee_{e \in K} e_x^1)$.
By $z \not \succeq e_x^1$,
it holds $e_z^1 = z \vee e_x^1$ for $e \in K$.
Then $r_x(z) + |K| > r_x(z) + r_x(\bigvee_{e \in K} e_x^1) \geq r_x(\bigvee_{e \in K} e_z^1) + r_x(z \wedge \bigvee_{e \in K} e_x^1) \geq r_z(\bigvee_{e \in K} e_z^1) + r_x(z)$.
Thus $|K| > r_z(\bigvee_{e \in K} e_z^1)$, and $K \not \in {\cal I}^z$.
(2). Let $y:= \bigvee_{e \in K} e_x^1$.
We can choose an $x$-independent subset $K' \subseteq K$ such that
$y = \bigvee_{e \in K'} e_x^1$.
Also we can choose an $x$-independent subset $J \subseteq E \setminus K$
such that $y \vee (\bigvee_{e \in J} e_x^1) = z$.
Then $K' \cup J$ is $x$-independent.
Now $z$ belongs to the sublattice generated
by independent $x$-rays $e_x$ $(e \in K' \cup J)$.
From Proposition~\ref{prop:generated}, we conclude that
$K'$ is independent at $z$.
Hence $r[x, \bigvee_{e \in K}e_x^1] = |K'|
= r[z,\bigvee_{e \in K'}e_z^1] \leq r[z,\bigvee_{e \in K}e_z^1]$.
(3). By $\bigvee_{e \in K}e_x^1 = y \vee \bigvee_{e \in K \setminus I} e_x^1
= \bigvee_{e \in K\setminus I} e_y^1$,
we have $r[y, \bigvee_{e \in K} e_x^1] = r[y, \bigvee_{e \in K \setminus I} e_y^1]
= |K| - |I| = n - |I|$.
Thus $r[x, \bigvee_{e \in K} e_x^1] = r[x,y] + r[y, \bigvee_{e \in K} e_x^1] = n$.
This implies that $K$ is a base at $x$.
\end{proof}
\begin{Lem}\label{lem:independent}
For $I \subseteq E$ and $x \in {\cal L}$,
define $x = x^0,x^1,\ldots$ by
\begin{equation}\label{eqn:x^k}
x^{k} := \bigvee_{e \in I} e_{x}^k \quad (k=0,1,2,\ldots).
\end{equation}
If $I \in {\cal I}^{\infty}$,
then there is $m \geq 0$ such that $I$ is independent at $x^m$.
\end{Lem}
\begin{proof}
By the definition of ${\cal I}^{\infty}$, there is $y \in {\cal L}$
such that $I$ is independent at $y$.
We can assume that $y \succeq x$ (Lemma~\ref{lem:preceq}).
Consider the $x$-coordinate $y_x \in {\bf Z}^E$ of $y$, and
let $z := \bigvee_{e \in I} e_x^{y_x(e)} (\preceq y)$.
By Lemma~\ref{lem:y_x}~(1),
it holds $y_z(e) = 0$ for all $e \in I$.
This means that $e^1_z \not \preceq y$ for all $e \in I$.
Therefore, by Lemma~\ref{lem:dependent}~(1) and $I \in {\cal I}^y$,
$I$ is independent at $z$.
By $z \not \succeq e_x^{y_x(e) + 1}$ and Lemma~\ref{lem:ray},
it holds $e_z^{l} = z \vee e_x^{y_x(e) + l}$ for $e \in I$ and $l \geq 0$.
Let $m := \max_{e \in I} y_x(e)$.
Then $x^m = \bigvee_{e \in I} e_x^{y_x(e)} \vee e_x^{m}
= \bigvee_{e \in I} z \vee e_x^{m} = \bigvee_{e \in I} e_{z}^{m - y_x(e)}$.
Thus $x^m$ belongs to the sublattice generated
by independent $z$-rays, which implies that $I$ is independent at $x^m$.
\end{proof}
\begin{Lem}\label{lem:x^k'}
For $I \subseteq E$ and $x \in {\cal L}$, define $x = x^0,x^1,\ldots$ by $(\ref{eqn:x^k})$.
Then we have
\begin{equation}\label{eqn:x^k'}
x^k = \bigvee_{e \in I} e_{x^{k-1}}^1 \quad (k=1,2,\ldots).
\end{equation}
\end{Lem}
\begin{proof}
We show by induction on $k$ that $e_x^{k} \not \preceq x^{k-1}$ for $e \in I$.
This implies $e_{x^{k-1}}^{1} = x^{k-1} \vee e_x^{k}$
by Lemma~\ref{lem:ray}, and implies
(\ref{eqn:x^k'}):
$x^k := \bigvee_{e \in I} e_x^k
= \bigvee_{e \in I} e_x^{k-1} \vee e_x^k
= \bigvee_{e \in I} x^{k-1} \vee e_x^k
= \bigvee_{e \in I} e_{x^{k-1}}^1$.
For $e \in I$, by induction, $e_x^{k-1} \not \preceq x^{k-2}$.
Then $e_x^{k} \vee x^{k-2} = e_{x^{k-2}}^{2}$ (by Lemma~\ref{lem:ray}).
If $e^k_x \preceq x^{k-1}$, then
$e_{x^{k-2}}^2 = e_x^k \vee x^{k-2} \preceq x^{k-1}$, and
$x^{k-2} = e_{x^{k-2}}^0 \prec e_{x^{k-2}}^1 \prec e_{x^{k-2}}^2
\preceq x^{k-1} = \bigvee_{e \in I} e_{x^{k-2}}^1 \preceq (x^{k-2})^+$, contradicting $e_{x^{k-2}}^2 \not \in [x^{k-2}, (x^{k-2})^+]$.
Thus $e_x^{k} \not \preceq x^{k-1}$, as required.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:matroid_infty}]
We verify the axiom of independent sets.
Choose $I,J \in {\cal I}^{\infty}$ with $|I| < |J|$.
By the definition of ${\cal I}^{\infty}$,
there is $x \in {\cal L}$ with $I \in {\cal I}^x$.
Consider $x^1 := \bigvee_{e \in I} e_x^1$ and $y^1 := \bigvee_{e \in J} e_x^1$.
If $y^1 \not \preceq x^1$,
then we can choose $e^* \in J \setminus I$
with $(e^*)^1_{x} \not \preceq x^1$, and $I + e^*$ is independent at $x$;
$I + e^* \in {\cal I}^x \subseteq {\cal I}^{\infty}$, as required.
So suppose $y^1 \preceq x^1$. For $k=1,2,\ldots$,
let $x^k := \bigvee_{e \in I} e_x^k$, and let $y^k := \bigvee_{e \in J} e_x^k$.
By Lemma~\ref{lem:dependent}~(2) and Lemma~\ref{lem:x^k'},
$I$ is independent at all $x^k$.
By Lemma~\ref{lem:independent} and $J \in {\cal I}^{\infty}$,
there is $\ell$ such that $J$ is independent at all $y^k$ for $k \geq \ell$.
With Lemma~\ref{lem:x^k'}, it holds
$r[x^{k},x^{k+1}] = r[x^k, \bigvee_{e \in I} e_{x^k}^1]= |I| < |J|
= r[y^k, \bigvee_{e \in J} e_{y^k}^1] = r[y^k,y^{k+1}]$ for $k \geq \ell$.
For large $k$, the height of $y^k$ thus increases faster than that of $x^k$.
Therefore there is $k^*$ such that $y^{k^*} \preceq x^{k^*}$
and $y^{k^*+1} \not \preceq x^{k^*+1}$.
This implies that $x^{k^*+1} \not \succeq x^{k^*} \vee y^{k^*+1}
= x^{k^*} \vee \bigvee_{e \in J} e_{y^{k^*}}^{1} \preceq \bigvee_{e \in J} e_{x^{k^*}}^1$.
Thus $\bigvee_{e \in J} e_{x^{k^*}}^1 \not \preceq \bigvee_{e \in I}e_{x^{k^*}}^1 (= x^{k^*+1})$, and there is $e^* \in J \setminus I$ with $I + e^* \in {\cal I}^{x^{k^*}}$, as above.
For distinct $e,f \in E$ and $x \in {\cal L}$,
let $y := e_x^{\delta_x(e,f)} = f_x^{\delta_x(e,f)}$.
Then $e_{y}^1 \neq f_{y}^1$; see Section~\ref{subsub:ultra}.
This means that $\{e,f\}$ is independent in ${\bf M}^{\infty}$.
Thus ${\bf M}^{\infty}$ is a simple matroid.
\end{proof}
\begin{Lem}\label{lem:key}
Let $x \in {\cal L}$.
For a bounded vector $c \in {\bf Z}^E_+$, let $y := \bigvee_{e \in E} e_x^{c(e)}$.
Then there is $B \in {\cal B}^y$ such that $y_x(e) = c(e)$ for $e \in B$, and
\begin{equation}
y = \bigvee_{e \in B} e_x^{c(e)}.
\end{equation}
\end{Lem}
Notice that $\bigvee_{e \in E} e_x^{c(e)}$
exists by $\bigvee_{e \in E} e_x^{c(e)} \preceq (x)^{+\max_{e \in E}c(e)}$.
\begin{proof}
%
We use induction on $\max_{e \in E} c(e)$.
Define $c' \in {\bf Z}_{+}^{E}$ by $c'(e) := \max \{ c(e) - 1,0\}$.
Let $z := \bigvee_{e \in E} e_x^{c'(e)}$.
By induction, there is $B' \in {\cal B}^z$
such that $z_x(e) = c'(e)$ for $e \in B'$ and $z = \bigvee_{e \in B'} e_x^{c'(e)}$.
Define $I := \{ e \in B' \mid c(e) > 0 \}$.
Then $I \in {\cal I}^z$ (by $B' \in {\cal B}^z$).
Let $Z := \{ e \in E \mid e_x^{c(e)} \not \preceq z \}$.
Then $I \subseteq Z$ (by $c(e) - 1 = z_x(e)$ for $e \in I$).
Now $y = \bigvee_{e \in E} z \vee e_{x}^{c(e)} = \bigvee_{e \in Z} z \vee e_x^{c'(e)+1} =
\bigvee_{e \in Z} e_z^1$.
There is $J \in {\cal I}^z$ such that $I \subseteq J \subseteq Z$
and $y = \bigvee_{e \in J} e_z^1 = \bigvee_{e \in J} e_x^{c(e)}$.
If $J \in {\cal B}^z$,
then $J \in {\cal B}^y$ (by Proposition~\ref{prop:generated}),
and $J$ is a desired subset.
Suppose not.
By the independence axiom applied to $B',J \in {\cal I}^z$ with $|B'| > |J|$,
we can choose a subset $K \subseteq B' \setminus J$
with $J \cup K \in {\cal B}^z$.
Then $B := J \cup K$ is a desired base in ${\cal B}^y$,
since $c(e) = c'(e) = z_x(e) = 0$ and $y_x(e) = y_z(e) + z_x(e) = 0$ for $e \in K$.
\end{proof}
\subsubsection{${\bf Z}^n$-skeleton}
Let $x \in {\cal L}$, and $B \in {\cal B}^x$.
By Proposition~\ref{prop:generated},
the sublattice ${\cal S}^x(B)$ generated by elements in the $x$-rays $e_x$ $(e \in B)$ is
isomorphic to ${\bf Z}_+^n$, where $n$ is the uniform rank of ${\cal L}$.
This sublattice is closed under the ascending operation.
Define the sublattice ${\cal S}(B)$ by
\begin{equation*}
{\cal S}(B) := \bigcup_{k \in {\bf Z}} ({\cal S}^x(B))^k.
\end{equation*}
Then ${\cal S}(B)$ is isomorphic to ${\bf Z}^n$
with $(y)^+ = y + {\bf 1}$ for $y \in {\cal S}(B)$ (identified with ${\bf Z}^n$).
We call ${\cal S}(B)$ the {\em ${\bf Z}^n$-skeleton} generated by $B$.
The next lemma shows that ${\cal S}(B)$
is independent of the choice of $x$,
and is well-defined for $B \in {\cal B}^{\infty}$.
\begin{Lem}\label{lem:S(B)}
For $B \in {\cal B}^x$, it holds
${\cal S}(B) = \{ y \in {\cal L} \mid B \in {\cal B}^y \}$.
\end{Lem}
\begin{proof}
From Proposition~\ref{prop:generated},
the inclusion $(\subseteq)$ is obvious. We show the converse.
Let $y \in {\cal L}$ with $B \in {\cal B}^y$.
We may assume that $y \succeq x$
by considering $(y)^{+k}$ and by $({\cal S}(B))^{+k} = {\cal S}(B)$.
Let $y' := \bigvee_{e \in B} e_x^{y_x(e)}$.
Then $y' \preceq y$.
We show $y' = y$.
Suppose not: $y' \prec y$.
There is an atom $a$ of $[y',(y')^+]$ with $a \preceq y$; necessarily $a \neq e_{y'}^1$ for $e \in B$.
By Lemma~\ref{lem:dependent}~(1), $B$ is also a maximal independent set at $y'$.
Hence $r_{y'}(a \vee \bigvee_{e \in B} e_{y'}^1 ) = r_{y'}((y')^+)= n$
and $n - 1 = r_{a} (\bigvee_{e \in B} (a \vee e_{y'}^1)) = r_a(\bigvee_{e \in B} e_{a}^1)$.
Namely $B$ is dependent at $a$ with $a \preceq y \not \succeq e_a^1$ for $e \in B$.
By Lemma~\ref{lem:dependent}~(1),
$B$ is dependent at $y$, contradicting $B \in {\cal B}^y$.
\end{proof}
\subsection{Valuated matroid from uniform semimodular lattice}\label{subsec:u->v}
Let ${\cal L}$ be a uniform semimodular lattice with uniform-rank $n$.
For $x \in {\cal L}$ and $B \in {\cal B}^{\infty}$,
define $x_B \in {\cal L}$ as the maximum element $y \in {\cal S}(B)$ with $y \preceq x$:
\begin{equation*}
x_B := \bigvee \{y \in {\cal S}(B) \mid y \preceq x\}.
\end{equation*}
The maximum element $x_B$ indeed exists by (F) and the fact that
${\cal S}(B)$ is a sublattice.
Now define $\omega = \omega^{{\cal L},x}: {\cal B}^{\infty} \to {\bf Z}$ by
\begin{equation*}
\omega(B) := - r[x_B,x] \quad (B \in {\cal B}^{\infty}).
\end{equation*}
This quantity $\omega(B)$ is the negative of a ``distance"
between $x$ and ${\cal S}(B)$; see Figure~\ref{fig} for intuition.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.7]{fig.pdf}
\caption{A distance between $x$ and ${\cal S}(B)$}
\label{fig}
\end{center}
\end{figure}\noindent
One of the main theorems is as follows:
\begin{Thm}\label{thm:main1}
Let ${\cal L}$ be a uniform semimodular lattice with uniform-rank $n$,
and let $x \in {\cal L}$.
Then $\omega = \omega^{{\cal L}, x}$ is
a complete valuated matroid with rank $n$,
where
\begin{itemize}
\item[{\rm (1)}] ${\cal T}(\omega) \cap {\bf Z}^E$ is isomorphic to ${\cal L}$, and
\item[{\rm (2)}] ${\cal T}(\omega)$ is a geometric realization of the simplicial complex ${\cal C}({\cal L})$ consisting of all chains
$x^0 \prec x^1 \prec \cdots \prec x^m$ with $x^m \preceq (x^0)^+$.
\end{itemize}
\end{Thm}
To prove this theorem, we show several properties of $x_B$.
\begin{Lem}\label{lem:x_B1}
Let $x,y \in {\cal L}$ with $y \preceq x$, and $B \in {\cal B}^{\infty}$.
\begin{itemize}
\item[{\rm (1)}] $y=x_B$ if and only if $B \in {\cal B}^y$ and $x_y(e) = 0$ for all $e \in B$.
\item[{\rm (2)}] $x_B \preceq y$ if and only if
$x_y(e) = 0$ for all $e \in B$.
\end{itemize}
\end{Lem}
\begin{proof}
(1). If $y \in {\cal S}(B)$, $y \preceq x$, and $x_y(e) > 0$ for some $e \in B$,
then $\bigvee_{e \in B} e_y^{x_y(e)} $ belongs to ${\cal S}(B)$, is greater than $y$, and is not greater than $x$.
The claim follows from this fact. In particular, $x_{x_B}(e) = 0$ for $e \in B$.
(2). The only-if part follows from $x_{x_B} = x_{y} + y_{x_B}$ and
$x_{x_B}(e) = 0$ for all $e \in B$.
We show the if part.
Suppose that $B$ is dependent at $y$ (otherwise $y = x_B$ by (1)).
Define the sequence $y = y^0, y^1, y^2,\ldots$ by
\begin{equation}\label{eqn:y^k}
y^k := (\bigvee_{e \in B} e_y^{k})^{-k} \quad (k=0,1,2,\ldots).
\end{equation}
Then it holds that
\begin{equation}\label{eqn:y^k'}
y^k = (\bigvee_{e \in B} e^1_{y^{k-1}})^{-1} \quad (k=1,2,\ldots).
\end{equation}
Indeed, let $z^{k} := \bigvee_{e \in B} e_y^{k}$.
Then $y^k = (z^{k})^{-k} = (\bigvee e_{z^{k-1}}^1)^{-k}
= (\bigvee (e_{z^{k-1}}^1)^{-k+1})^{-1}
= (\bigvee e_{(z^{k-1})^{-k+1}}^1)^{-1} = (\bigvee e_{y^{k-1}}^1)^{-1}$,
where the second equality follows from Lemma~\ref{lem:x^k'}
and the fourth one follows from
the observation $(e_u^1)^{-1} = e_{u^{-1}}^1$.
In particular, $x \succeq y \succeq y^1 \succeq y^2 \succeq \cdots$ holds
(by Lemmas~\ref{lem:hyperplane} and \ref{lem:inverse}).
Also $x_{y^k}(e) = 0$ for all $e \in B$ (by Lemma~\ref{lem:y_x}~(1),(2)).
By Lemma~\ref{lem:independent}, there is $\ell$
such that $B \in {\cal B}^{y^\ell}$.
By (1), we have $y^\ell = x_B$, and $x_B \preceq y$, as required.
\end{proof}
\begin{Lem}\label{lem:x_B2}
For $x,y \in {\cal L}$ with $y \preceq x$, we have the following:
\begin{equation*}
r(x_B) + \sum_{e \in B} y_x(e)
\left\{
\begin{array}{ll}
= r(y) & {\rm if}\ y \in {\cal S}(B) ( \Leftrightarrow B \in {\cal B}^y), \\
< r(y) & {\rm otherwise}.
\end{array}
\right. \quad (B \in {\cal B}^{\infty}).
\end{equation*}
\end{Lem}
\begin{proof}
Suppose that $y \in {\cal S}(B)$.
By Lemmas~\ref{lem:y_x}~(1) and~\ref{lem:x_B1}~(1),
$\bigvee_{e \in B} e_y^{x_y(e)}$ is equal to $x_B$.
By Proposition~\ref{prop:generated},
$r[y,x_B] = \sum_{e \in B} x_y(e)$.
Therefore $r(y) + \sum_{e \in B} x_y(e) = r(x_B)$ holds,
which implies $r(y) = r(x_B) + \sum_{e \in B} y_x(e)$
by $y_x = - x_y$; see Lemma~\ref{lem:y_x=-x_y}.
Suppose that $y \not \in {\cal S}(B)$.
Let $y' := \bigvee_{e \in B} e_y^{x_y(e)}$.
By Lemma~\ref{lem:x_B1}~(2),
we have $x_{B} \preceq y' \preceq x$, and
\begin{eqnarray*}
&& r[y,y'] \leq \sum_{e \in B} x_y(e), \\
&& r(x_B) \leq r(y').
\end{eqnarray*}
It suffices to show that one of the inequalities is strict.
If $y' \succ x_B$, then $(<)$ holds in the second inequality.
Suppose that $y' = x_B$, and suppose to the contrary that
equality holds in the first inequality.
Let $I:= \{e \in B \mid x_y(e) > 0 \} (\neq \emptyset)$, and
let $y'' := \bigvee_{e \in I} e_y^{x_y(e) - 1}$.
Then $y' = x_B = \bigvee_{e \in I}e_{y''}^{1}$.
By the equality in the first inequality and Lemma~\ref{lem:y_x}(1),
$I$ must be independent at $y''$,
and $e_{y''}^1 \not \preceq y'$ for $e \in B \setminus I$
(otherwise $x_y(e) > 0$ for $e \in B \setminus I$).
By Lemma~\ref{lem:dependent}~(3), $B$ is independent at~$y''$.
Also $r[y,y''] = \sum_{e \in B} \max \{ x_y(e) - 1,0\}$ holds.
By repeating this argument (applied to $y''$), we eventually reach the contradiction that
$B$ is independent at~$y \not \in {\cal S}(B)$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:main1}]
Observe that $\omega$ is upper-bounded.
By Lemma~\ref{lem:murota},
it suffices to show that for any bounded vector $c \in {\bf Z}^E$
the maximizer family ${\cal B}_{\omega + c}$
is a matroid base family.
Suppose that $c = y_x$ for some $y \preceq x$.
By Lemma~\ref{lem:x_B2},
the maximizer family ${\cal B}_{\omega + c}$
is nothing but ${\cal B}^{y}$.
Suppose that $c$ is general.
From ${\cal B}_{\omega + c} = {\cal B}_{\omega + c + k {\bf 1}}$,
we can assume that $c \geq 0$.
Let $y := \bigvee_{e \in E} e_x^{c(e)}$.
By Lemma~\ref{lem:key}, there is $B \in {\cal B}^y$
such that $y = \bigvee_{e \in B} e_x^{c(e)}$.
Let $\tilde c := y_x$.
Then $\tilde c \geq c$. Thus
$- r[x_{B'},x] + \sum_{e \in B'} c(e) \leq - r[x_{B'},x] + \sum_{e \in {B'}} \tilde c(e)$
for arbitrary $B' \in {\cal B}^{\infty}$, and
the equality holds for $B$ by $c(e) = y_x(e) = \tilde c(e)$ $(e \in B)$.
Since $B \in {\cal B}^y = {\cal B}_{\omega + \tilde c}$ (by above),
the maximum of $\omega + c$
is the same as that of $\omega + \tilde c$.
This implies that ${\cal B}_{\omega +c} \subseteq {\cal B}_{\omega + \tilde c}$.
Now ${\cal B}_{\omega +c}$ is viewed as
the maximizer family of a linear function
$B \mapsto \sum_{e \in B} (c - \tilde c)(e)$
over the matroid base family ${\cal B}_{\omega + \tilde c}$, and
is a matroid base family, as required.
(1) follows from Proposition~\ref{prop:Z(L,x)} and the next claim.
\begin{Clm}
${\cal T}(\omega) \cap {\bf Z}^E = {\cal Z}({\cal L}, x)$.
\end{Clm}
\begin{proof}
For $c = y_x \in {\cal Z}({\cal L}, x)$, the maximizer family
${\cal B}_{\omega + c}$ is equal to ${\cal B}^y$, which is loop-free.
Hence $(\supseteq)$.
Let $c \in {\bf Z}^E_{+}$ with $c \not \in {\cal Z}({\cal L}, x)$.
Consider $\tilde c$ as above. Then $\tilde c \geq c$, and $\tilde c \neq c$.
As seen above,
$\max_{B} -r[x_{B},x] + \sum_{e \in B} c(e) = \max_{B} - r[x_{B},x] + \sum_{e \in B} \tilde c(e)$.
This means that an element $e \in E$ with $\tilde c(e) > c(e)$
cannot belong to any maximizer in ${\cal B}_{\omega +c}$.
Namely $e$ is a loop in ${\cal B}_{\omega + c}$.
Thus $c \not \in {\cal T}(\omega) \cap {\bf Z}^E$,
implying $(\subseteq)$.
\end{proof}
(2) is a corollary of this claim and Lemma~\ref{lem:chain_of_flats}~(2).
By Proposition~\ref{prop:matroid_infty}, $(E,\omega)$ is a simple valuated matroid.
Lemma~\ref{lem:d=D} in the next section shows that
topologies on $E$ induced by $d_x$ and by $D_p$ from $\omega$ coincide.
By Proposition~\ref{prop:ultrametric}~(2),
$\omega$ is complete.
\end{proof}
\subsection{Uniform semimodular lattice from valuated matroid}\label{subsec:v->u}
The main statement for the uniform
semimodular lattice of a valuated matroid
is as follows.
\begin{Thm}\label{thm:main2}
Let $(E,\omega)$ be
an integer-valued valuated matroid with rank $n$.
Then ${\cal L}(\omega):= {\cal T}(\omega) \cap {\bf Z}^E$ is a uniform semimodular lattice with
uniform-rank $n$, in which the following hold:
\begin{itemize}
\item[{\rm (1)}] The ascending operator is equal to $x \mapsto x + {\bf 1}$.
\item[{\rm (2)}] A height function $r$ is given by
\[
x \mapsto \max_{B \in {\cal B}} (\omega + x)(B).
\]
\item[{\rm (3)}] The meet $\wedge$ and the join $\vee$ are given by
\begin{eqnarray*}
x \wedge y &= & \min (x,y), \\
x \vee y &= & \bigwedge \{ z \in {\cal L}(\omega) \mid z \geq x,\ z \geq y \} \quad (x,y \in {\cal L}(\omega)).
\end{eqnarray*}
\item[{\rm (4)}] For $x \in {\cal L}(\omega)$, the valuated matroid
$(E^{{\cal L}(\omega)}, \omega^{{\cal L}(\omega),x})$
is a completion of a valuated matroid projectively equivalent to $(E, \omega)$.
\end{itemize}
\end{Thm}
The rest of this section is devoted to the proof.
Let ${\bf M} = (E, {\cal B})$ be the underlying matroid of $\omega$.
By (TC$_{+{\bf 1}}$),
if $x\in {\cal L}(\omega)$ then $x+{\bf 1} \in {\cal L}(\omega)$.
We first show that the interval $[x,x+{\bf 1}]$ in ${\cal L}(\omega)$
is a geometric lattice
corresponding to ${\bf M}_{\omega + x}$.
\begin{Lem}\label{lem:[x,x+1]} Let $x \in {\cal L}(\omega)$.
\begin{itemize}
\item[{\rm (1)}] $[x,x+{\bf 1}]$
is isomorphic to the lattice of flats of ${\bf M}_{\omega + x}$,
where the isomorphism is given by the map $x + {\bf 1}_{F} \mapsto F$.
\item[{\rm (2)}] $y \in {\cal L}(\omega)$ covers $x$ if and only if $y = x + {\bf 1}_F$
for a parallel class $F$ in ${\bf M}_{\omega + x}$.
\end{itemize}
\end{Lem}
\begin{proof}
(1). By replacing $\omega$ by $\omega+x$,
we can assume $x = {\bf 0}$.
By Lemma~\ref{lem:integer-valued},
for a flat $F$ of ${\bf M}_{\omega}$, and any $e \in F$ and $f \not \in F$,
we can choose $B \in {\cal B}_{\omega} \cap {\cal B}_{\omega + {\bf 1}_{F}}$
containing $e,f$. This implies $x + {\bf 1}_{F} \in {\cal L}(\omega)$.
Suppose that $F$ is not a flat of ${\bf M}_{\omega}$.
Consider $e \in {\rm cl}(F) \setminus F$.
Then $\max \{ |B \cap (F + e)| \mid B \in {\cal B}_{\omega}\}
= \max \{ |B \cap F| \mid B \in {\cal B}_{\omega}\}$.
This implies that $\max_B (\omega + {\bf 1}_{F})(B)
= \max_B (\omega + {\bf 1}_{F + e})(B)$.
Thus no base in ${\cal B}_{\omega+ {\bf 1}_F}$ contains $e$, implying $x + {\bf 1}_{F} \not \in {\cal L}(\omega)$.
(2). By (1), it suffices to show the only-if part.
%
We first show that for $F \subseteq E$ and $e \in E \setminus F$,
if $e$ is a loop in ${\bf M}_{\omega}$, then $e$ is also a loop in ${\bf M}_{\omega + {\bf 1}_F}$.
%
Choose $B \in {\cal B}_{\omega}$ with maximal $B \cap F$.
By Lemma~\ref{lem:integer-valued}
it holds $B \in {\cal B}_{\omega+ {\bf 1}_F}$.
If there is a base in ${\cal B}_{\omega + {\bf 1}_F}$
containing $e$, then by exchange axiom there is $f \in B$ such that
$B + e - f \in {\cal B}_{\omega + {\bf 1}_F}$.
Then $B + e - f \not \in {\cal B}_{\omega}$, and
$\omega(B+e-f) \leq \omega(B) - 1$.
By $e \not \in F$,
it holds $|(B + e- f) \cap F| \leq |B \cap F|$.
Therefore $(\omega+{\bf 1}_{F})(B+e-f) < (\omega+{\bf 1}_{F})(B)$,
contradicting $B + e - f \in {\cal B}_{\omega + {\bf 1}_F}$.
Thus no base in ${\cal B}_{\omega + {\bf 1}_F}$ contains $e$.
Let $y = x + \sum_{i} {\bf 1}_{F_i}$ for $F_1 \supseteq F_2 \supseteq \cdots \supseteq F_m$.
By repeated use of the above property, one can see that
$F_1$ must be a flat in ${\bf M}_{\omega + x}$;
otherwise $e \in {\rm cl}(F_1) \setminus F_1$
is a loop in ${\bf M}_{\omega + y}$, contradicting $y \in {\cal L}(\omega)$.
Consider the parallel class $F$ of $e \in F_1$ in ${\bf M}_{\omega+x}$.
By (1),
$x+ {\bf 1}_F$ belongs to ${\cal L}(\omega)$.
Therefore $x \leq x+ {\bf 1}_F \leq y$, implying $y = x+ {\bf 1}_F$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:main2} (1-3)]
First we show (2) that
a height function $r$ of ${\cal L}(\omega)$ is given by $x \mapsto \max_{B \in {\cal B}} (\omega + x)(B)$.
Consider $x,y \in {\cal L}(\omega)$ such that $y$ covers $x$.
By Lemma~\ref{lem:[x,x+1]}~(2), $y= x+ {\bf 1}_F$
for a parallel class $F$.
Then ${\cal B}_{\omega+y} \supseteq \{ B \in {\cal B}_{\omega + x} \mid |B \cap F| = 1\} (\neq \emptyset)$ by Lemma~\ref{lem:integer-valued}.
Therefore $r(y) = r(x)+1$.
Next we show that ${\cal L}(\omega)$ is a lattice with property (3).
Let $x,y \in {\cal L}(\omega)$, and let $z := \min (x,y)$.
By the tropical convexity (Lemma~\ref{lem:trop_convexity}),
$z$ belongs to ${\cal L}(\omega)$, and necessarily
$x \wedge y = z$.
By Lemma~\ref{lem:[x,x+1]}~(2) and (2) shown above,
$x - z$ and $y - z$ are upper-bounded.
This implies that $\max(x,y) - x$ and $\max(x,y) - y$ are upper-bounded.
Thus $\{z \in {\cal L}(\omega) \mid z \geq \max(x,y)\}$ is nonempty;
for example, consider $x+ \alpha {\bf 1}$ for large $\alpha$.
By this fact and the existence of a height function,
$\bigwedge \{z \in {\cal L}(\omega) \mid z \geq \max(x,y)\}$ exists, and is the join of $x,y$.
By Lemma~\ref{lem:[x,x+1]},
if $a,b$ cover $a \wedge b$, then $a \vee b$ covers $a,b$.
Hence ${\cal L}(\omega)$ is semimodular (Lemma~\ref{lem:semimodular}).
The property (1) is also an immediate corollary of the same lemma.
The map $x \mapsto x + {\bf 1}$ is obviously an automorphism.
Thus ${\cal L}(\omega)$
is a uniform semimodular lattice.
The uniform-rank is equal to the rank of
$[x, x+{\bf 1}]$, which is equal to the rank of~${\bf M}$.
\end{proof}
To show the property (4),
we have to study the relationship between $E$
and the space $E^{{\cal L}(\omega)}$ of ends
in ${\cal L}(\omega)$.
\begin{Lem}\label{lem:normal_ray}
Let $(a^\ell)$ be a ray in ${\cal L}(\omega)$.
\begin{itemize}
\item[{\rm (1)}] There is a decreasing sequence $F_0 \supseteq F_1 \supseteq \cdots$
of nonempty subsets in $E$ such that
\begin{equation*}
a^{\ell+1} = a^{\ell} + {\bf 1}_{F_{\ell}} \quad (\ell = 0,1,\ldots),
\end{equation*}
where $F_{\ell}$ is a parallel class of ${\bf M}_{\omega + a^\ell}$.
\item[{\rm (2)}] If $\bigcap_{\ell} F_{\ell}$ is nonempty,
then $\bigcap_{\ell} F_{\ell}$ is a parallel class of ${\bf M}$.
\end{itemize}
\end{Lem}
\begin{proof}
(1). By Lemma~\ref{lem:[x,x+1]} (2),
$F_\ell$ is a parallel class of ${\bf M}_{\omega + a^\ell}$.
It suffices to show $F_0 \supseteq F_1$.
Here $F_0 \cap F_1 = \emptyset$ is impossible, since
otherwise $a^2 \in [a^0, a^0+{\bf 1}]$, contradicting the fact that $(a^\ell)$ is a ray.
Suppose $F_1 \setminus F_0 \neq \emptyset$.
Choose $e \in F_0 \cap F_1$ and $f \in F_1 \setminus F_0$.
Then there is a base $B \in {\cal B}_{\omega + a^0}$ containing $e,f$.
By Lemma~\ref{lem:integer-valued}, $B$ is also a base in ${\cal B}_{\omega + a^1}$.
Namely $e,f$ are independent in ${\bf M}_{\omega + a^1}$.
However this is a contradiction to the fact that $F_1$ is a parallel class
of ${\bf M}_{\omega + a^1}$.
(2).
Suppose that there are distinct non-parallel elements
$e,f \in \bigcap_{\ell} F_{\ell}$.
There is $B \in {\cal B}$ containing $e,f$.
Then $(\omega + a^{\ell+1})(B)
- (\omega + a^{\ell})(B) \geq 2$.
On the other hand, $r(a^{\ell+1}) - r(a^{\ell}) = 1$.
Therefore $B$ must be in ${\cal B}_{\omega + a^{\ell'}}$ for some $\ell'$;
this is a contradiction to the fact that
$e,f$ are parallel in ${\bf M}_{\omega + a^{\ell}}$ for all $\ell$.
\end{proof}
In the case of (2), the ray $(a^\ell)$ is said to be {\em normal}
and to {\em have $\infty$-direction $F = \bigcap_{\ell} F_{\ell}$}.
\begin{Lem}\label{lem:normal_ray2}
\begin{itemize}
\item[{\rm (1)}] Two normal rays are parallel if and only if they have the same $\infty$-direction.
\item[{\rm (2)}] For $x \in {\cal L}(\omega)$ and a parallel class $F$ of ${\bf M}$,
there is a normal $x$-ray having $\infty$-direction $F$.
\end{itemize}
\end{Lem}
\begin{proof}
(1). Let $(a^\ell)$ be a normal ray having $\infty$-direction $F$,
and let $y \in {\cal L}(\omega)$ with $a^{\ell'+1} \not \succeq y \succeq a^{\ell'}$.
We show that ray $(a^\ell) \vee y =
(y = a^{\ell'} \vee y \prec a^{\ell'+1} \vee y \prec \cdots)$
is a normal ray having $\infty$-direction $F$.
By Lemma~\ref{lem:normal_ray}~(1),
we can suppose that $y \vee a^{\ell'+k+1} = y \vee a^{\ell' + k} + {\bf 1}_{G_k}$ for $G_k \subseteq E$.
By $\min (y,a^{\ell'+1}) = y \wedge a^{\ell'+1} = a^{\ell'}$ and
$a^{\ell'+1} = a^{\ell'} + {\bf 1}_{F_{\ell'}}$,
it holds $y \vee a^{\ell'+1} - y \geq \max (y, a^{\ell'+1}) - y = {\bf 1}_{F_{\ell'}}$.
Necessarily $G_0 \supseteq F_{\ell' + 1}$.
Consequently $G_k \supseteq F_{\ell'+k+1}$ for all $k$.
Therefore $\bigcap_k G_k$ contains $F$, and must be equal to $F$,
since $\bigcap_k G_k$ is also a parallel class of ${\bf M}$ (Lemma~\ref{lem:normal_ray}~(2)).
Thus $(a^\ell) \vee y$ has $\infty$-direction~$F$.
The only-if part is immediate from this property.
The if-part also follows from this property and the observation that
if two normal rays at the same starting point
have the same $\infty$-direction, then the two rays must be equal.
(2). Note that
$F$ is a rank-$1$ subset in ${\bf M}_{\omega + y}$ for every $y \in {\cal L}(\omega)$.
Let $a^0 := x$. For $\ell=0,1,2,\ldots$,
define $F^{\ell}$ as ${\rm cl}\, (F)$ in ${\bf M}_{\omega + a^{\ell}}$,
and $a^{\ell+1} := a^\ell + {\bf 1}_{F^{\ell}}$.
Then $(a^{\ell})$ is a ray,
since $a^{\ell + 1} \geq a^{\ell-1} + 2 {\bf 1}_F$, and hence $a^{\ell + 1} \not \in [a^{\ell-1}, a^{\ell-1}+{\bf 1}]$.
Also $(a^{\ell})$ is normal with $\infty$-direction $F$
(since the parallel class $\bigcap_{\ell} F^{\ell}$ contains $F$, and hence equals $F$).
\end{proof}
In the case where $\omega$ is simple,
by associating $e \in E$ with the end having $\infty$-direction $\{e\}$,
we can regard $E$ as a subset of $E^{{\cal L}(\omega)}$.
Then each local matroid ${\bf M}_{\omega + x}$
is the restriction of ${\bf M}^{{\cal L},x}$ to $E$:
\begin{Lem}\label{lem:restriction}
For $x \in {\cal L}(\omega)$, it holds
${\cal B}_{\omega+x} = \{ B \in {\cal B}^{{\cal L},x} \mid B \subseteq E \}$.
\end{Lem}
\begin{proof}
By Lemma~\ref{lem:[x,x+1]},
$B \in {\cal B}_{\omega + x}$ if and only if $x+ {\bf 1}_{F_e}$ $(e \in B)$
are independent atoms in geometric lattice $[x,x+{\bf 1}]$,
where $F_e$ is the parallel class of $e$ in ${\bf M}_{\omega + x}$.
If $e \in E$ is regarded as a normal ray, then $e_x^1 = x + {\bf 1}_{F_e}$.
From this, we see that the equality holds.
\end{proof}
We verify that $d_x$ and $D_p$ induce the same topology on the set $E$ of normal rays.
\begin{Lem}\label{lem:d=D}
Suppose that $\omega$ is simple.
For $x \in {\cal L}(\omega)$, if $r(x) = 0$,
then $-x \in {\cal TS}(\omega)$, and $D_{-x}(e,f) = d_x(e,f)$ for $e,f \in E$.
\end{Lem}
\begin{proof}
The fact $-x \in {\cal TS}(\omega)$ follows from (\ref{eqn:TS})
and $r(x) = \max_{B} (\omega + x)(B)$.
%
It suffices to show that
for two normal rays $e,f \in E$, it holds
\begin{equation}\label{eqn:h(x)-max}
\delta_x(e,f) = - \max \{ (\omega+x)(B) \mid B \in {\cal B}: \{e,f\} \subseteq B \}\ (\geq 0).
\end{equation}
Consider
the sequence $x = x^0,x^1,\ldots$ defined by
$x^{i+1} := e_{x^i}^1 \vee f_{x^i}^1 = e_{x}^{i+1} \vee f_x^{i+1}$; recall Lemmas~\ref{lem:independent} and \ref{lem:x^k'}.
Then $\delta_x(e,f)$ is the minimum index $i^*$
such that $r(x^{i^*}) = r(x^{i^*-1}) + 2$
or equivalently that there is $B \in {\cal B}_{\omega + x^{i^*}}$ with $e,f \in B$.
Now $r(x^i) = r(x^{i-1}) + 1$, and $(\omega+x^i)(B) = (\omega+x^{i-1})(B) + 2$
for base $B \in {\cal B}$ with $e,f \in B$.
Therefore the index $i^*$ must be the right hand side of (\ref{eqn:h(x)-max}).
\end{proof}
\begin{Lem}\label{lem:dense}
The set $E$ of normal rays is dense in $E^{{\cal L}(\omega)}$.
\end{Lem}
\begin{proof}
Consider a ray $e \in E^{{\cal L}(\omega)}$, and let $x \in {\cal L}(\omega)$.
Then $x$-ray
$e_x$ is represented as in Lemma~\ref{lem:normal_ray} for
some decreasing sequence $F_1 \supseteq F_2 \supseteq \cdots$
of nonempty subsets in $E$.
For each $i$, choose $e_i \in F_i$.
Then the sequence $(e_i)$ of normal rays
satisfies $\lim_{i \rightarrow \infty} d_x(e,e_i) = 0$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:main2}(4)]
We can assume that $\omega$ is simple.
Let $x \in {\cal L}(\omega)$.
By Lemmas~\ref{lem:d=D} and \ref{lem:dense},
$E^{{\cal L}(\omega)}$ coincides with the Dress-Terhalle completion of $E$.
Finally we verify the projective equivalence between $\omega$ and $\omega^{{\cal L},x}$ (restricted to $E$).
\begin{Clm}
For $B \in {\cal B}$, it holds
$(\omega+x)(B) = r(x_B) = \omega^{{\cal L},x}(B) + r(x)$.
\end{Clm}
\begin{proof}
Consider the sequence $x = x^0 \succeq x^1 \succeq \cdots$ defined by
$x^{i} := (\bigvee_{e \in B} e_{x^{i-1}}^1)^{-1} = \bigvee_{e \in B} e_{x^{i-1}}^1 - {\bf 1}$.
As seen in the proof of Lemma~\ref{lem:x_B1}
(see~(\ref{eqn:y^k}) and (\ref{eqn:y^k'})), for some $k$ it holds $x^k = x_B$.
We prove the statement by induction on $k$.
In the case of $k = 0$, $x = x_B$,
$B \in {\cal B}^{{\cal L},x}$, and
$B \in {\cal B}_{\omega +x}$ by Lemma~\ref{lem:restriction}.
Then $r(x_B) = r(x) = (\omega+x)(B)$,
implying the base case.
Suppose $k > 0$. Notice $(x^1)_B = x_B$.
By induction, $(\omega+x^1)(B) = r(x_B)$.
By the definition of $x^1$, it holds $x^1(e) = x(e)$ for $e \in B$.
Therefore, $(\omega+x)(B) = (\omega + x^1)(B) + (x - x^1)(B) = r(x_B)$, as required.
\end{proof}
Note that the constant term $r(x)$ is represented as
the linear term $(r(x)/n){\bf 1}$.
Thus $\omega$ is projectively equivalent to the restriction of $\omega^{{\cal L}(\omega),x}$ to $E$.
This completes the proof of Theorem~\ref{thm:main2}.
\end{proof}
\section{Example}\label{sec:example}
\paragraph{Tree metric.}
{\em Tree metrics} may be viewed as valuated matroids of rank $2$; see e.g.,~\cite{DressTerhalle98}.
We here study
tree metrics from our framework of uniform semimodular lattices.
Let $T = (V,E)$ be a tree, and let $X$ be a subset of vertices of $T$.
Let ${\cal B} := \{ \{u,v\} \subseteq X \mid u \neq v \}$.
Then ${\bf M} = (X, {\cal B})$ is a uniform matroid of rank $2$.
Define $d: {\cal B} \to {\bf Z}$ by
\begin{equation*}
d(u,v) := \mbox{the number of edges in the unique path in $T$ connecting $u$ and $v$},
\end{equation*}
where $d(\{u,v\})$ is written as $d(u,v)$.
Then the classical four-point condition of tree-metrics says
\begin{equation*}
d(u,v) + d (u',v') \leq
\max \{ d(u,u') + d (v,v'), d(u,v') + d (u',v) \}
\end{equation*}
for distinct $u,v,u',v' \in X$.
This is nothing but the exchange axiom (EXC).
Thus $d$ is a valuated matroid on ${\bf M}$.
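The four-point condition is easy to check mechanically. The following sketch verifies it over all ordered quadruples of leaves; the concrete tree, its vertex names, and the BFS-based distance routine are illustrative assumptions of ours, not taken from the text.

```python
from itertools import permutations

# A small illustrative tree as an adjacency list; the leaves {a, b, c, d}
# play the role of X.  (This concrete tree is our own toy example.)
tree = {
    'a': ['p'], 'b': ['p'], 'p': ['a', 'b', 'q'],
    'q': ['p', 'c', 'r'], 'c': ['q'],
    'r': ['q', 'd'], 'd': ['r'],
}

def dist(s, t):
    """Number of edges on the unique s-t path (level-by-level BFS)."""
    seen, frontier, k = {s}, [s], 0
    while frontier:
        if t in frontier:
            return k
        frontier = [w for v in frontier for w in tree[v] if w not in seen]
        seen.update(frontier)
        k += 1
    raise ValueError("vertices lie in different components")

X = ['a', 'b', 'c', 'd']
# Four-point condition:
#   d(u,v) + d(u',v') <= max(d(u,u') + d(v,v'), d(u,v') + d(u',v)).
for u, v, up, vp in permutations(X, 4):
    lhs = dist(u, v) + dist(up, vp)
    rhs = max(dist(u, up) + dist(v, vp), dist(u, vp) + dist(up, v))
    assert lhs <= rhs
```

On this tree the three pairings of $\{a,b,c,d\}$ sum to $5$, $7$, $7$: the maximum is attained twice, which is the usual phrasing of the four-point condition.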
Let us construct the corresponding uniform semimodular lattice in a combinatorial way.
First delete all redundant vertices not belonging to
the (shortest) path between any pair of vertices in $X$.
Fix a vertex $z \in V$ (as a root).
Next, for each $u \in X$,
consider an infinite path $P_u$ (with $V(P_u) \cap V(T) = \emptyset$) having a vertex $u'$ of degree one.
Glue $T$ and $P_u$ by identifying $u$ and $u'$.
Let ${\cal L}$ denote the union of
$V \times 2 {\bf Z}$ and $E \times (2 {\bf Z}+1)$.
For each $(uv,k) \in E \times (2 {\bf Z}+1)$,
consider binary relations (directed edges)
$(uv,k) \leftarrow (u,k+1)$, $(uv,k) \leftarrow (v,k+1)$,
$(u,k-1) \leftarrow (uv,k)$, and $(v,k-1) \leftarrow (uv,k)$.
The partial order $\preceq$ on ${\cal L}$ is induced by
the transitive closure of $\leftarrow$.
Then ${\cal L}$ is a uniform (semi)modular lattice of uniform-rank $2$, where
the ascending operator is given by
$(x,k) \mapsto (x,k+2)$; see~\cite[Example~3.2]{HH18a}.
Ends are naturally identified with $P_u$ $(u \in X)$.
In particular ${\cal B} = {\cal B}^{\infty}$.
For two ends $P_u, P_v$, there is a simple path $P$ of $T$
containing $P_u,P_v$.
The ${\bf Z}^2$-skeleton ${\cal S}(\{u,v\})$
is the sublattice of ${\cal L}$ induced by the union of $V(P) \times 2{\bf Z}$ and $E(P) \times (2{\bf Z}+1)$.
Let $x := (z,0)$.
For the lowest common ancestor $z_{u,v}$ of $u, v$ in $T$,
$x_{\{u,v\}}$ is given by $(z_{u,v}, - 2 d(z,z_{u,v}))$.
Thus the valuated matroid $\omega = \omega^{{\cal L},x}$ is given by
\begin{equation*}
\omega (u,v) = - 2 d(z, z_{u,v}) \quad (\{u,v\} \in {\cal B}).
\end{equation*}
From the relation $- 2 d(z, z_{u,v}) = d(u,v) - d(z,u) - d(z,v)$,
we see the projective equivalence between $\omega$ and $d$.
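As a sanity check on this last relation, one can compute $z_{u,v}$ via root paths and compare $-2d(z,z_{u,v})$ with $d(u,v) - d(z,u) - d(z,v)$. The tree and root below are a toy illustration of ours, not an example from the text.

```python
from collections import deque

# Toy tree (our own illustration); root plays the role of z.
tree = {
    'a': ['p'], 'b': ['p'], 'p': ['a', 'b', 'q'],
    'q': ['p', 'c', 'r'], 'c': ['q'],
    'r': ['q', 'd'], 'd': ['r'],
}
root = 'q'

# BFS parents, so that every vertex knows its predecessor toward the root.
parent, queue = {root: None}, deque([root])
while queue:
    v = queue.popleft()
    for w in tree[v]:
        if w not in parent:
            parent[w] = v
            queue.append(w)

def root_path(u):
    path = []
    while u is not None:
        path.append(u)
        u = parent[u]
    return path[::-1]            # root, ..., u

def common_prefix(u, v):
    pu, pv = root_path(u), root_path(v)
    k = 0
    while k < min(len(pu), len(pv)) and pu[k] == pv[k]:
        k += 1
    return pu, pv, k             # pu[k-1] is the lowest common ancestor z_{u,v}

def dist(u, v):
    pu, pv, k = common_prefix(u, v)
    return (len(pu) - k) + (len(pv) - k)

X = ['a', 'b', 'c', 'd']
for i, u in enumerate(X):
    for v in X[i + 1:]:
        _, _, k = common_prefix(u, v)
        omega_uv = -2 * (k - 1)  # -2 * d(z, z_{u,v})
        assert omega_uv == dist(u, v) - dist(root, u) - dist(root, v)
```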
\paragraph{Representable valuated matroid.}
Let $K$ be a field, and $K(t)$ the field of rational functions with indeterminate $t$.
The {\em degree} $\deg (p/q)$ of $p/q \in K(t)$
with polynomials $p,q$ is defined by $\deg (p) - \deg (q)$.
Consider the vector space $K(t)^n$ over $K(t)$.
Let $E$ be a subset of $K(t)^n$, and let ${\cal B}$ be
the family of $K(t)$-bases $B \subseteq E$ of $K(t)^n$.
Then ${\bf M} = (E, {\cal B})$ is a matroid.
Define $\omega = \omega^E:{\cal B} \to {\bf Z}$ by
\begin{equation*}
\omega^E(B) := \deg \det (B) \quad (B \in {\cal B}),
\end{equation*}
where $B \in {\cal B}$ is regarded as
a nonsingular $n \times n$ matrix consisting of vectors in $B$.
Then $\omega^E$ is a valuated matroid.
Such a valuated matroid is called {\em representable} (over $K(t)$).
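For concreteness, $\omega^E(B) = \deg \det(B)$ can be computed symbolically. Below is a small Python/SymPy sketch over $K = \mathbb{Q}$ with $n = 2$ and three illustrative vectors (our choice, not from the text):

```python
import sympy as sp
from itertools import combinations

t = sp.symbols('t')

# Illustrative vectors in K(t)^2.
E = {'e1': sp.Matrix([1, 0]), 'e2': sp.Matrix([0, 1]), 'e3': sp.Matrix([t, 1])}

def deg(f):
    # deg(p/q) := deg(p) - deg(q); by convention deg(0) = -oo.
    f = sp.cancel(f)
    if f == 0:
        return -sp.oo
    num, den = sp.fraction(f)
    return sp.Poly(num, t).degree() - sp.Poly(den, t).degree()

omega = {}
for B in combinations(E, 2):
    det = sp.Matrix.hstack(*(E[b] for b in B)).det()
    if det != 0:                      # B is a K(t)-basis
        omega[B] = deg(det)
# e.g. omega[('e2','e3')] == 1, since det([[0, t], [1, 1]]) = -t
```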
A tropical interpretation \cite{MurotaTamura01,Speyer08,SpeyerSturmfels04} of
${\cal L}(\omega) = {\cal T}(\omega) \cap {\bf Z}^E$
is the set of degree vectors $(\deg (q^{\top} e): e \in E)$ for all $q \in K(t)^n$,
where we need to add $-\infty$ to ${\bf Z}$ for $\deg (0) := - \infty$.
We here provide a different algebraic interpretation,
which may be viewed as an analogue of the following fact:
the lattice of flats of the matroid represented by a matrix $M$
is the lattice of vector spaces spanned by subsets of columns of $M$.
Let $K(t)^-$ denote the ring of elements $p/q$ in $K(t)$ with $\deg (p/q) \leq 0$.
Then $K(t)^n$ is also viewed as a $K(t)^-$-module.
For a subset $F \subseteq K(t)^n$,
let $\langle F \rangle$ denote the $K(t)^-$-module generated by $F$,
i.e., $\langle F \rangle = \{ \sum_{u \in F'} \lambda_u u \mid \lambda_u \in K(t)^-,\ F' \subseteq F,\ |F'| < \infty \}$.
Also, for $z \in {\bf Z}^F$, let $F^z := \{t^{z(u)} u \mid u \in F\}$.
Suppose that $E \subseteq K(t)^n$ contains a $K(t)$-basis of $K(t)^n$.
Let ${\cal B} \subseteq 2^{E}$ be the family of $K(t)$-bases,
so that $(E, {\cal B})$ is the underlying matroid of $(E,\omega)$.
Define the family ${\cal L}(E)$ of $K(t)^-$-submodules of $K(t)^n$:
\begin{equation*}
{\cal L}(E) := \{ \langle E^z \rangle \mid z \in {\bf Z}^E \}.
\end{equation*}
The partial order on ${\cal L}(E)$ is defined as the inclusion relation.
For $L \in {\cal L}(E)$, define $z^L \in {\bf Z}^E$ by
\begin{equation*}
z^L(p) :=\max \{\alpha \in {\bf Z} \mid t^\alpha p \in L \} \quad (p \in E).
\end{equation*}
\begin{Prop}\label{prop:L(E)}
${\cal L}(E)$ is a uniform semimodular lattice that is isomorphic to ${\cal L}(\omega^E)$
by the maps $L \mapsto z^L$ and $z \mapsto \langle E^z \rangle$,
where the following hold:
\begin{itemize}
\item[{\rm (1)}] The ascending operator is given by $L \mapsto t L$.
\item[{\rm (2)}]
The ${\bf Z}^n$-skeleton ${\cal S}(B)$ of $B \in {\cal B}$ is
equal to ${\cal L}(B)$.
\item[{\rm (3)}] A height function~$r$ of ${\cal L}(E)$ is given by
\begin{equation*}
r(L) = \deg \det (Q) \quad (L \in {\cal L}(E)),
\end{equation*}
where $Q$ is a $K(t)^-$-basis of $L$.
\item[{\rm (4)}] For $x \in {\cal L}(\omega)$,
it holds
\begin{eqnarray*}
\langle E^x \rangle_B & = & \langle B^x \rangle, \\
\omega^{{\cal L}(\omega),x}(B) & = & (\omega^E + x)(B) - r(\langle E^x \rangle)
\quad (B \in {\cal B}).
\end{eqnarray*}
\end{itemize}
\end{Prop}
Here, for $F \subseteq E$ and $x \in {\bf Z}^E$,
we denote $F^{x|_F}$ by $F^x$.
The proof uses the following basic lemma.
\begin{Lem}\label{lem:basis}
$\langle E \rangle$ is a free $K(t)^-$-module having any
$B \in {\cal B}_{\omega}$ as a basis.
\end{Lem}
\begin{proof}
Choose any $B \in {\cal B}_{\omega}$.
Since $B$ is a $K(t)$-basis of ${K(t)}^n$,
every element $u \in E$ is represented as
$u = B \lambda$ for $\lambda \in K(t)^n$, where $B$ is regarded as a matrix.
By Cramer's rule, the $i$-th component $\lambda_i$ of $\lambda$
is equal to $\det (B^{i}) / \det (B)$,
where $B^i$ is obtained from $B$ by replacing the $i$-th column with $u$.
Then $\deg (\lambda_i)
= \deg \det (B^{i}) - \deg \det (B) = \omega(B^i) - \omega(B) \leq 0$
by $B \in {\cal B}_{\omega}$.
This means that each $\lambda_i$ belongs to $K(t)^-$.
Consequently $\langle E \rangle$ is a free $K(t)^-$-module with basis $B$.
\end{proof}
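The Cramer's-rule computation in this proof can be replayed symbolically: for a basis $B$ maximizing $\omega$, the coordinates of any vector of $E$ in $B$ have degree at most $0$. A SymPy sketch with illustrative data (our choice of $B$ and $u$):

```python
import sympy as sp

t = sp.symbols('t')

def deg(f):
    # deg(p/q) := deg(p) - deg(q); deg(0) = -oo.
    f = sp.cancel(f)
    if f == 0:
        return -sp.oo
    num, den = sp.fraction(f)
    return sp.Poly(num, t).degree() - sp.Poly(den, t).degree()

# Illustrative omega-maximizing basis B (columns (0,1) and (t,1)),
# with deg det(B) = 1, and a further vector u = (1,0).
B = sp.Matrix([[0, t], [1, 1]])
u = sp.Matrix([1, 0])

lam = B.solve(u)   # lambda_i = det(B^i)/det(B) by Cramer's rule
# Both coordinates have degree <= 0, i.e., lie in K(t)^-:
assert all(deg(c) <= 0 for c in lam)
```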
\begin{proof}[Proof of Proposition~\ref{prop:L(E)}]
Obviously we have $L = \langle E^{z^L} \rangle$.
We show that $z^L \in {\cal L}(\omega)$.
Suppose, to the contrary, that $p \in E$ is a loop with respect to ${\cal B}_{\omega + z^L}$.
By the above lemma, for any
$B \in {\cal B}_{\omega+ z^L}$, $B^{z^L}$ is a basis of $L$.
Consider the equation $B^{z^L} \lambda = t^{z^L(p)} p$.
By using Cramer's rule as above, we have
$\deg \lambda_i = (\omega+z^L)(B - e_i + p) - (\omega+z^L)(B) \leq - 1$ for each $i=1,2,\ldots,n$,
where $e_i$ is the $i$-th column of $B$.
The inequality follows from the fact
that $p$ is a loop in ${\cal B}_{\omega + z^L}$.
This means that $t^{z^L(p)+1} p$ also belongs to $\langle B^{z^L} \rangle = L$.
This is a contradiction to the definition of $z^L$.
Thus $z^L \in {\cal L}(\omega)$.
Also $L \mapsto z^L$ is the inverse of $z \mapsto \langle E^z \rangle$.
Indeed, $z^{\langle E^z \rangle} \geq z$. If $z^{\langle E^z \rangle}(e) > z(e)$ for some $e$,
then one can see as above that $e$ does not belong to any base of ${\cal B}_{\omega + z}$,
contradicting $z \in {\cal L}(\omega)$.
(1). This follows from $z^{tL} = z^L + {\bf 1}$.
(2). Observe that the sublattice
${\cal L}(B) = \{\langle B^z \rangle \mid z \in {\bf Z}^B\}$
of ${\cal L}(E)$ is isomorphic to~${\bf Z}^n$.
By Lemma~\ref{lem:basis},
we have $B \in {\cal B}_{\omega + z^L}$ for $L = \langle B^z \rangle \in {\cal L}(B)$.
By Lemma~\ref{lem:S(B)}, we have $L \in {\cal S}(B)$.
Thus ${\cal L}(B) \subseteq {\cal S}(B)$.
Both ${\cal L}(B)$ and ${\cal S}(B)$ are isomorphic to ${\bf Z}^n$
with the same ascending operator.
Consequently, it must hold ${\cal L}(B) = {\cal S}(B)$.
(3). Suppose that $L'$ covers $L$.
We can choose $B \in {\cal B}$ with $L,L' \in {\cal S}(B)$.
Necessarily $L' = \langle B^{z'} \rangle$ and $L = \langle B^z \rangle$
for $z' - z = {\bf 1}_{e}$ for some $e \in B$, where ${\bf 1}_e := {\bf 1}_{\{e\}}$.
Then $\deg \det B^{z'}= \deg \det B^z + 1$.
(4). It obviously holds that $\langle B^x \rangle \in {\cal L}(B) = {\cal S}(B)$, and
$\langle B^x \rangle \subseteq \langle E^x \rangle_B$.
Suppose, to the contrary, that the inclusion is strict.
Then, for some $e \in B$, it holds $\langle B^{x+{\bf 1}_e} \rangle \subseteq \langle E^x \rangle_B \subseteq \langle E^x \rangle$.
This means that $\langle E^{x+{\bf 1}_e} \rangle = \langle E^{x} \rangle$.
However this is a contradiction to $x = z^{\langle E^{x} \rangle}$.
From the definition, we have
$\omega^{{\cal L}(\omega),x}(B)
= - r[\langle B^x \rangle, \langle E^{x} \rangle]
= \deg \det (B^x) - r(\langle E^{x} \rangle)
= \omega^{E^x}(B) - r(\langle E^{x} \rangle) = (\omega^{E}+x)(B) - r(\langle E^{x} \rangle)$.
\end{proof}
\paragraph{Modular valuated matroid and Euclidean building.}
Analogously to a modular matroid
(a matroid whose lattice of flats is a modular lattice),
a {\em modular valuated matroid} is defined as
an integer-valued valuated matroid $(E,\omega)$
such that the corresponding ${\cal L}(\omega)$ is a uniform modular lattice.
The companion work~\cite{HH18a} showed that uniform modular lattices
and Euclidean buildings of type A are cryptomorphically equivalent in the following sense.
For a uniform modular lattice ${\cal L}$,
define an equivalence relation $\simeq$ on ${\cal L}$ by $x \simeq y$ if $x = (y)^{+k}$ for some $k$. Then the simplicial complex ${\cal C}({\cal L})$ modulo $\simeq$
is a Euclidean building of type~A;
recall Theorem~\ref{thm:main1} for simplicial complex ${\cal C}({\cal L})$.
Conversely, every Euclidean building of type A is obtained in this way.
Thus we have the following:
%
\begin{Thm}\label{thm:building}
For a modular valuated matroid $(E,\omega)$,
the tropical linear space ${\cal T}(\omega)/ {\bf R}{\bf 1}$
is a geometric realization of the Euclidean building
associated with uniform modular lattice ${\cal L}(\omega)$.
\end{Thm}
Dress and Terhalle~\cite{DressTerhalle98}
claimed this result for the Euclidean building of ${\rm SL} (F^n)$,
where $F$ is a field with a discrete valuation.
In the previous example, take the whole set $K(t)^n$ as $E$.
In this case, ${\cal L}(E)$ is
the lattice of all full-rank free $K(t)^-$-submodules of $K(t)^n$,
and is a uniform modular lattice of uniform-rank $n$;
see \cite[Example 3.3]{HH18a}.
In particular,
valuated matroid $(E, \omega^E)$ is a modular valuated matroid.
The simplicial complex ${\cal C}({\cal L}(E))$
is nothing but the Euclidean building for ${\rm SL}(K(t)^n)$; see \cite[Section 19]{Garrett}.
\section*{Acknowledgments}
The author thanks Kazuo Murota, Yuni Iwamasa, and Koyo Hayashi
for careful reading and helpful comments.
This work was partially supported by JSPS KAKENHI Grant Numbers
JP25280004, JP26330023, JP26280004, JP17K00029.
\section{Introduction}
The Hyper Suprime-Cam Subaru Strategic Program
\citep[HSC-SSP;][]{HSC1styr,HSC1styrOverview,HSCPhotoz18,Bosch18} is
an ongoing wide-field imaging survey that uses the HSC
\citep{Miyazaki12,Miyazaki15,Miyazaki18,Komiyama18,Kawanomoto18,Furusawa18}
mounted on the prime focus of the Subaru Telescope. The HSC-SSP
survey has three different layers, Wide, Deep, and Ultra-deep. The
Wide layer obtains five-band ($grizy$), deep ($r\simlt26$~AB~mag)
imaging over $1400$~deg$^2$. To date, the survey covers $456$~deg$^2$
at non-full depth and $178$~deg$^2$ at the full depth with
full color \citep{HSC1styr}.
The deep and multi-band HSC-SSP imaging gives us a unique opportunity
to conduct a systematic search of optical galaxy clusters. In fact,
\cite{Oguri18} discovered 2000 galaxy clusters with richness
$\hat{N}_{\rm mem}>15$ in $\sim232$~deg$^2$, by applying the CAMIRA
algorithm developed by \cite{Oguri14b}. The galaxy clusters are
discovered as concentrations of red-sequence galaxies by applying a
compensated spatial filter to the three-dimensional richness map. The
accuracy of photometric redshifts of the CAMIRA clusters is $\Delta
z/(1+z)\sim0.01$.
The CAMIRA catalog features a wide redshift coverage and a low mass
limit, and therefore provides us with an unprecedented cluster
sample including high-redshift objects. Because the limiting
magnitudes of the HSC-SSP survey are much deeper than those of the
Sloan Digital Sky Survey (SDSS) and Dark Energy Survey (DES), the
galaxy clusters can be securely identified up to $z\sim 1.1$, in
contrast with the SDSS \citep[$z\sim0.4$;][]{Oguri14b,Rykoff14} and
the DES \citep[$z\sim0.8$;][]{Rykoff16}. The redshift range is
comparable to those covered by Sunyaev-Zel'dovich effect (SZE)
surveys conducted with the South Pole Telescope \citep[][]{SPTSZ15} and
the Atacama Cosmology Telescope \citep{Hilton17}. The richness
$\hat{N}_{\rm mem}\sim 15$ roughly corresponds to $M_{200m}\sim
10^{14}h^{-1}M_\odot$ \citep{Oguri18} and is equivalent to
$M_{500}\sim7\times 10^{13}\mathrel{h_{70}^{-1}M_\odot}$ if we assume a median halo
concentration of $c_{200\rm m}=6$ \citep{Diemer15}. The detection
limit of the cluster mass for the CAMIRA clusters is then much lower
than those of the SZE clusters \citep[$M_{500}\sim3.5\times
10^{14}\mathrel{h_{70}^{-1}M_\odot}$;][]{SPTSZ15}.
To understand the gas physics and establish scaling relations between
cluster mass and X-ray observables in preparation for future
cosmological research, it is important to systematically study the
X-ray properties of the optically-selected clusters and compare them
with other multi-wavelength surveys. To date, a number of systematic
cluster observations \citep[see,
e.g.,][]{Vikhlinin06,Zhang08,Sun09,Martino14,Mahdavi13,
Donahue14,vonderLinden14,Okabe14b,Hoekstra15,Smith16,Mantz16} have
been conducted by referring to cluster catalogs constructed from the
ROSAT All Sky Survey \citep[RASS; e.g.,][]{Bohringer01}. More
recently, statistical studies have used cutting-edge X-ray surveys
\citep[e.g.,][]{XXL16}, SZE surveys \citep[e.g.,][]{Sanders17}, or optical
techniques \citep[e.g.,][]{Hicks08,Hicks13,Takey13}. Since different
survey techniques have their own selection functions, some systematic
differences may appear in their observed cluster properties and
scaling relations. If this happens, a selection bias issue arises,
which eventually leads to a difficulty in constraining the
cosmological models using the cluster mass function \citep[see,
e.g.,][]{Allen11,Giodini13}. This will have an impact on
interpretation of the upcoming eROSITA \citep{Merloni12} and other
ongoing/future large-scale cluster surveys.
A useful measure of the cluster dynamical state is given by the offset
between the location of the brightest cluster galaxy (BCG) and the
X-ray centroid or X-ray peak \citep[e.g.,][]{Katayama03}. The X-ray
centroid (or peak) offset is sometimes used to classify the clusters
into relaxed and disturbed clusters
\citep{Mann12,Mahdavi13,Rossetti16}. \cite{Rossetti16} showed that the
fraction of relaxed clusters is smaller in the Planck sample than that
in the X-ray samples, indicating that SZE and X-ray surveys of galaxy
clusters are affected by different selection effects. In this
way, the X-ray centroid shift is useful not only to characterize the
cluster dynamical state but also to study the selection effect. While
the offsets between optical and X-ray centers have been used to study
the misidentification of central galaxies in optical cluster finding
algorithms \citep[e.g.,][]{Rozo14,Rykoff16,Oguri18}, dynamical states
of optically selected clusters based on offset distributions have not
yet been fully explored. To address this situation, this paper
presents a systematic measurement of the centroid shift in the optical
sample.
We thus carried out a systematic X-ray analysis of the CAMIRA clusters
with high optical richness using the XMM-Newton archival data.
Section~\ref{sec:sample} presents the sample selection and
section~\ref{sec:analysis} describes the data analyses regarding
centroid determination and spectral analysis.
Section~\ref{sec:results} derives the centroid shift and the
luminosity-temperature relation, and section~\ref{sec:discussion}
discusses the implication of the results. Finally
section~\ref{sec:summary} summarizes the results and briefly discusses
the future prospects of this X-ray follow-up project.
The cosmological parameters are $\Omega_{m0}=0.28$,
$\Omega_\Lambda=0.72$ and $h=0.7$ throughout this paper, and we use
the proto-solar abundance table from \cite{Lodders09}. Unless
otherwise noted, the quoted errors represent the $1\sigma$ statistical
uncertainties.
\section{Sample}\label{sec:sample}
The CAMIRA catalog comprises 2086 clusters at $0.1<z<1.1$ in the S16A
Wide and Deep fields \citep{Oguri18}, whose redshift distributions are
shown in Figure~\ref{fig:sample}. We cross-correlated the CAMIRA
catalog with the 3XMM-DR7 catalog \citep{Rosen16} to find that there
are $>300$ X-ray sources within $60\arcsec$ of the optical centers.
We then excluded a $\sim25\,{\rm deg}^2$ XXL survey region overlapped
with that of the HSC-SSP survey from the above search result; an X-ray
study in the XXL field will be carried out through the HSC-XXL external
collaboration. To perform a systematic X-ray analysis of high-richness
clusters, we constructed the sample by selecting objects with the
richness $\hat{N}_{\rm mem}>20$ and good-quality XMM-Newton archival
data. For the latter, we require typically more than 1000
cluster-photon counts so as to enable X-ray spectroscopic measurements
of the gas temperature and luminosity. Therefore, as listed in
Table~\ref{tbl:sample}, the present sample consists of 17 clusters at
$0.14 < z < 0.75$, whose distribution is overlaid on that of the
entire CAMIRA catalog (Figure~\ref{fig:sample}). Except for
HSC~J141508-002936 at $z=0.14$ (also known as Abell~1882), the X-ray
emission from these clusters is serendipitously detected inside the
XMM-Newton fields of view. The average (median) redshift is 0.40
(0.33). Examples of HSC images of the CAMIRA clusters are shown in
Figure~\ref{fig:image}.
Table~\ref{tbl:sample} lists the location of BCGs identified by the
CAMIRA algorithm \citep{Oguri18}. Note that for 3 out of 17 clusters,
the BCGs are clearly misidentified by the CAMIRA algorithm. Since we
are interested in physical offsets between BCGs and X-ray peaks rather
than miscentering of optical cluster finding algorithms, we correct
the BCG coordinates for these three (HSC~J140309-001833,
HSC~J021427-062720, HSC~J100049+013820) by visual inspection of their
HSC images.
\begin{table*}[hbt]
\tbl{Sample list.}{%
\begin{tabular}{llllllllll}\hline\hline
Cluster & $z$ & $\hat{N}_{\rm mem}$$^{\mathrm{a}}$ & $R_{500}$& BCG position & X-ray centroid & $D_{\rm XC}$$^{\mathrm{b}}$ & $D_{\rm XP}$$^{\mathrm{c}}$ & OBSID$^{\mathrm{d}}$ & Exposure$^{\mathrm{e}}$ \\
& & & (Mpc/\arcsec) & RA, Dec (deg) & RA, Dec (deg) & (kpc) & (kpc) & & M1, M2, PN \\ \hline
HSC~J142624-012657 & 0.460 & 69.7 & 0.835 / 142 & 216.6011, -1.4492 & 216.6005, -1.4491 & 12 & 23 & 0674480701 & 12.3, 12.6, 9.1 \\
HSC~J021115-034319 & 0.745 & 52.3 & 0.653 / 88 & 32.8135 , -3.7219 & 32.8142 , -3.7232 & 38 & 60 & 0655343861 & 7.7 , 13.0, 3.4 \\
HSC~J095939+023044 & 0.730 & 51.7 & 0.657 / 90 & 149.9132, 2.5122 & 149.9185, 2.5201 & 250 & 196 & 0203361701 & 30.1, 30.2, 24.3 \\
HSC~J161136+541635 & 0.332 & 48.3 & 0.807 / 168 & 242.8998, 54.2763 & 242.8967, 54.2775 & 37 & 38 & 0059752301 & 4.9 , 4.7 , 2.9 \\
HSC~J090914-001220 & 0.303 & 46.5 & 0.811 / 180 & 137.3075, -0.2056 & 137.3089, -0.2057 & 24 & 23 & 0725310142 & 2.5 , 2.7 , 2.4 \\
HSC~J141508-002936 & 0.144 & 43.0 & 0.860 / 340 & 213.7850, -0.4932 & 213.7729, -0.4884 & 119 & 50 & 0145480101 & 11.0, 11.8, 6.9 \\
HSC~J140309-001833 & 0.449 & 39.7 & 0.715 / 124 & 210.7876, -0.3091 & 210.7939, -0.3069 & 76 & 36 & 0606430501 & 20.4, 21.1, 13.3 \\
HSC~J095737+023426 & 0.372 & 37.4 & 0.734 / 142 & 149.4043, 2.5738 & 149.4052, 2.5750 & 27 & 14 & 0203362201 & 28.9, 29.0, 12.3 \\
HSC~J022135-062618 & 0.300 & 35.7 & 0.754 / 169 & 35.3947 , -6.4384 & 35.4069 , -6.4457 & 228 & 10 & 0655343837 & 2.6 , 2.6 , 2.2 \\
HSC~J232924-004855 & 0.310 & 35.2 & 0.746 / 163 & 352.3487, -0.8154 & 352.3495, -0.8147 & 17 & 44 & 0673002346 & 3.5 , 3.8 , 1.8 \\
HSC~J022512-062259 & 0.202 & 33.0 & 0.775 / 232 & 36.3012 , -6.3831 & 36.2933 , -6.3865 & 102 & 246 & 0655343836 & 2.6 , 2.5 , 2.2 \\
HSC~J021427-062720 & 0.246 & 31.3 & 0.746 / 192 & 33.6071 , -6.4607 & 33.6173 , -6.4566 & 151 & 15 & 0655343859 & 2.5 , 2.7 , 2.0 \\
HSC~J161039+540554 & 0.330 & 29.5 & 0.702 / 147 & 242.6626, 54.0983 & 242.6706, 54.1014 & 97 & 144 & 0059752301 & 4.9 , 4.8 , 2.9 \\
HSC~J095903+025545 & 0.332 & 26.4 & 0.679 / 142 & 149.7614, 2.9291 & 149.7611, 2.9219 & 123 & 8 & 0203361601 & 19.1, 0.0 , 8.6 \\
HSC~J100049+013820 & 0.228 & 23.2 & 0.692 / 189 & 150.1898, 1.6573 & 150.1986, 1.6574 & 116 & 94 & 0302351001 & 37.6, 38.9, 28.0 \\
HSC~J090743+013330 & 0.172 & 23.1 & 0.711 / 242 & 136.9295, 1.5583 & 136.9445, 1.5577 & 159 & 14 & 0725310156 & 2.6 , 2.7 , 2.4 \\
HSC~J095824+024916 & 0.341 & 20.1 & 0.625 / 128 & 149.6001, 2.8212 & 149.5998, 2.8215 & 8 & 9 & 0203362101 & 59.4, 59.5, 51.1 \\
\hline
\end{tabular}}\label{tbl:sample}
\begin{tabnote}
$^{\mathrm{a}}$ Richness. $^{\mathrm{b}}$Centroid shift (see
section \ref{subsec:centroid} for definition). $^{\mathrm{c}}$Peak shift (see
section \ref{subsec:centroid} for definition). $^{\mathrm{d}}$The
XMM-Newton observation id. $^{\mathrm{e}}$ The XMM-Newton
EPIC-MOS1(M1), MOS2(M2), and PN exposure time after data filtering
(ksec).
\end{tabnote}
\end{table*}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=7cm,angle=90]{fig1.eps}
\end{center}
\caption{Redshift distributions of the present sample (black), CAMIRA
HSC S16A Wide (blue) and Deep (magenta) cluster catalogs
\citep{Oguri18}. The binsize is $\Delta z=0.1$ and each histogram
is normalized such that the integral over the range is unity. The
vertical dashed line indicates the median redshift of the present
sample, $\tilde{z}=0.33$.}\label{fig:sample}
\end{figure}
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=6cm]{fig2a.eps}
\includegraphics[width=6cm]{fig2b.eps}
\includegraphics[width=6cm]{fig2c.eps}
\includegraphics[width=6cm]{fig2d.eps}
\end{center}
\caption{Examples of HSC member galaxy density maps smoothed with
FWHM$=200$ kpc (upper panels) and I-band images (lower panels) of
the CAMIRA clusters, HSC~J161136+541635 at $z=0.332$ (left panels)
and HSC~J161039+540554 at $z=0.330$ (right panels). In each panel,
the X-ray centroid and BCG positions are marked with a magenta
``$\times$'' and white ``+'', respectively. The white contours are
linearly spaced by half of the average height of galaxy density maps over
all CAMIRA clusters at the same redshift. The red contours for
X-ray emission are ten levels logarithmically spaced from
$[10-1000]\,{\rm cts\,s^{-1}\,deg^{-2}}$. }\label{fig:image}
\end{figure*}
\section{Analysis}\label{sec:analysis}
\subsection{Data reduction}\label{subsec:reduction}
Observation data files were retrieved from the XMM-Newton Science
Archive\footnote{http://nxsa.esac.esa.int} and reprocessed with the
XMM-Newton Science Analysis System v15.0.0 and the Current Calibration
Files. The data reduction, including flare screening, point source
detection, and estimation of the quiescent particle background, was
done in the standard manner by using the XMM Extended Source Analysis
Software [ESAS; \cite{Snowden08}; see also \cite{Miyaoka18}].
\subsection{Centroid determination}\label{subsec:centroid}
The X-ray centroid of each cluster was determined from the mean of the
photon distribution in an aperture circle of radius $R_{500}$. This
analysis used the 0.4--2.3~keV EPIC composite image (one image pixel
is 5\arcsec). Here, $R_{500}$ was calculated by substituting
$\hat{N}_{\rm mem}$ (Table~\ref{tbl:sample}) into the $R-\hat{N}_{\rm
mem}$ relation, which was deduced from the $R-T$ relation
\citep{Arnaud05} and the $T-\hat{N}_{\rm mem}$ relation
\citep{Oguri18}. Starting with the optical center, we iterated the
centroid search until its position converged within 5\arcsec. If
contaminating point sources remained in the circle, we excluded the
region centered at the sources and the region symmetric to them so as
not to affect the above calculation. The result is listed in
Table~\ref{tbl:sample}. The offset between X-ray centroid and BCG
position is presented in section~\ref{subsec:centroidshift}.
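The iteration just described can be sketched in Python/NumPy. This is only an illustration with assumed names, not the actual pipeline; it uses the photon-weighted mean within the aperture and a one-pixel (i.e., 5 arcsec) convergence tolerance, and omits the point-source masking step mentioned above.

```python
import numpy as np

def iterate_centroid(img, x0, y0, r_pix, tol=1.0, max_iter=50):
    """Iteratively recompute the photon-weighted mean position within a
    circle of radius r_pix (e.g. R500 in pixels), starting from (x0, y0),
    until the centroid moves by less than tol pixels."""
    yy, xx = np.indices(img.shape)
    cx, cy = float(x0), float(y0)
    for _ in range(max_iter):
        mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= r_pix ** 2
        w = img * mask
        nx = (w * xx).sum() / w.sum()
        ny = (w * yy).sum() / w.sum()
        if np.hypot(nx - cx, ny - cy) < tol:
            return nx, ny
        cx, cy = nx, ny
    return cx, cy
```

In practice one would also exclude contaminating point sources (and, as in the text, the regions symmetric to them) before taking the mean.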
\subsection{Spectral analysis}\label{subsec:spec}
To evaluate the gas temperature and bolometric luminosity, we derive
the X-ray spectra by extracting the EPIC data from a circular region
within a radius of $R_{500}$ centered on the X-ray centroid. The
spectra were rebinned so that each spectral bin contains over 25
counts. After subtracting the quiescent particle background, the
observed spectra of the EPIC MOS/PN cameras in the 0.3--10/0.4--10~keV
band were simultaneously fit by using XSPEC 12.9.1 \citep{Arnaud96}.
The spectral model consists of (i) cluster thermal emission and (ii)
background components. For (i), we used the APEC thin-thermal plasma
model with AtomDB version 3.0.8 \citep{Smith01,Foster12}. The cluster
redshift and metal abundance were fixed at the optical value
[Table~\ref{tbl:sample}; \cite{Oguri18}] and at 0.3~solar,
respectively. The Galactic hydrogen column density $N_{\rm H}$ was
fixed at a value taken from the Leiden/Argentine/Bonn survey
\citep{Kalberla05}. For (ii), the Galactic emission and the cosmic
X-ray background were evaluated by jointly fitting the RASS spectra
\citep{Snowden97} taken from the $0\degree.5-1\degree$ ring region
around the cluster. The other components due to possible solar wind
charge exchange, soft proton events, and instrumental fluorescent
lines were added to the model. An example of the spectral fitting is
shown in Figure~\ref{fig:spec}. The resultant APEC model parameters
are summarized in Table~\ref{tbl:spec}. The bolometric luminosity was
estimated from the best-fit model flux in the source-frame energy
range of 0.01 -- 30~keV.
The XMM + RASS joint fitting gives a reasonable result for most of
the clusters; however, the background subtraction is not perfect at high
energies, particularly for the three clusters, HSC~J021115-034319,
HSC~J021427-062720, and HSC~J161039+540554. This is likely to be due
to the residual soft proton flares, as indicated by the count-rate
ratio between in-FOV and out-FOV \citep{DeLuca04}. Thus, to check the
background uncertainty, we subtract the local background extracted
from an $r=(2-3)R_{500}$ annulus centered on the X-ray centroid and
fit the APEC model to the observed spectra. Since the resultant
parameters are consistent with those obtained from the XMM + RASS
joint analysis within the statistical uncertainties for the other 14 clusters, we quote the
values obtained from the analysis by using the local background for
the three clusters mentioned above (see Table~\ref{tbl:spec}).
\begin{figure}[htb]
\begin{center}
\includegraphics[width=7cm]{fig3.eps}
\end{center}
\caption{Example of spectral fit. The upper panel shows the MOS1
(black), MOS2 (red), and PN (green) spectra of HSC~J161136+541635
at $z=0.332$ and the RASS background spectrum (blue). The solid
and dotted lines represent the total model consisting of cluster
emission and backgrounds (see section~\ref{subsec:spec}) and the
residual soft-proton background component, respectively. The
lower panel shows the residual of the fit.}\label{fig:spec}
\end{figure}
\begin{table*}[hbt]
\tbl{Results of spectral analysis under APEC thermal plasma model.}{%
\begin{tabular}{lllll}\hline\hline
Cluster & $N_{\rm H}$ & $kT$ & $L_X^{\mathrm{a}}$ & $\chi^2$/d.o.f. \\
& ($10^{20}~{\rm cm^{-2}}$) & (keV) & ($10^{44}{\rm erg\,s^{-1}}$) & \\ \hline
HSC~J142624-012657 & 3.21 & $ 5.19 _{ -0.96 }^{+ 1.48 } $ & $ 7.05 _{ -1.09 }^{+ 0.78 } $ & 60.4 / 60 \\
HSC~J021115-034319 & 1.95 & $ 6.91 _{ -2.28 }^{+ 5.78 } $ & $ 5.48 _{ -1.19 }^{+ 1.94 } $ & 92.1 / 81 \\
HSC~J095939+023044 & 1.71 & $ 4.03 _{ -0.59 }^{+ 0.73 } $ & $ 2.41 _{ -0.22 }^{+ 0.57 } $ & 113.0 / 107 \\
HSC~J161136+541635 & 0.95 & $ 3.30 _{ -0.71 }^{+ 1.94 } $ & $ 1.78 _{ -0.35 }^{+ 0.64 } $ & 73.5 / 69 \\
HSC~J090914-001220 & 2.73 & $ 4.02 _{ -0.93 }^{+ 1.52 } $ & $ 1.90 _{ -0.47 }^{+ 0.39 } $ & 48.6 / 33 \\
HSC~J141508-002936 & 3.27 & $ 2.16 _{ -0.24 }^{+ 0.21 } $ & $ 0.64 _{ -0.08 }^{+ 0.05 } $ & 249.0 / 212 \\
HSC~J140309-001833 & 3.66 & $ 3.03 _{ -0.54 }^{+ 0.84 } $ & $ 1.69 _{ -0.28 }^{+ 0.35 } $ & 115.7 / 122 \\
HSC~J095737+023426 & 1.85 & $ 3.36 _{ -0.37 }^{+ 0.56 } $ & $ 2.99 _{ -0.36 }^{+ 0.34 } $ & 149.6 / 154 \\
HSC~J022135-062618 & 2.73 & $ 4.98 _{ -3.58 }^{+ 29.44 } $ & $ 0.45 _{ -0.32 }^{+ 0.57 } $ & 25.1 / 13 \\
HSC~J232924-004855 & 4.33 & $ 1.72 _{ -1.25 }^{+ 4.98 } $ & $ 0.17 _{ -0.15 }^{+ 1.09 } $ & 56.4 / 32 \\
HSC~J022512-062259 & 2.95 & $ 1.93 _{ -0.24 }^{+ 0.36 } $ & $ 0.78 _{ -0.19 }^{+ 0.20 } $ & 76.1 / 71 \\
HSC~J021427-062720 & 2.13 & $ 4.93 _{ -0.87 }^{+ 1.26 } $ & $ 1.41 _{ -0.19 }^{+ 0.20 } $ & 58.0 / 61 \\
HSC~J161039+540554 & 0.94 & $ 1.92 _{ -0.81 }^{+ 1.70 } $ & $ 0.48 _{ -0.12 }^{+ 0.11 } $ & 51.4 / 43 \\
HSC~J095903+025545 & 1.79 & $ 1.71 _{ -0.34 }^{+ 2.61 } $ & $ 0.84 _{ -0.36 }^{+ 0.36 } $ & 108.6 / 112 \\
HSC~J100049+013820 & 1.80 & $ 3.30 _{ -0.83 }^{+ 0.72 } $ & $ 0.89 _{ -0.27 }^{+ 0.06 } $ & 55.7 / 51 \\
HSC~J090743+013330 & 3.20 & $ 1.01 _{ -0.16 }^{+ 0.22 } $ & $ 0.25 _{ -0.10 }^{+ 0.15 } $ & 23.8 / 23 \\
HSC~J095824+024916 & 1.84 & $ 1.86 _{ -0.33 }^{+ 0.60 } $ & $ 0.16 _{ -0.04 }^{+ 0.04 } $ & 328.8 / 283 \\ \hline
\end{tabular}}\label{tbl:spec}
\begin{tabnote}
$^{\mathrm{a}}$ The bolometric luminosity within the scale radius $R_{500}$
\end{tabnote}
\end{table*}
\section{Results}\label{sec:results}
\subsection{Centroid shift and peak shift}\label{subsec:centroidshift}
We define the centroid shift $D_{\rm XC}$ as a projected distance
between the BCG coordinates and the X-ray centroid measured within
$R_{500}$. The measured centroid shift is given in
Table~\ref{tbl:sample}. The histograms of the centroid shift in kpc
and fractions of $R_{500}$ are shown in the upper panels of
Figure~\ref{fig:centroid}. The median of the centroid shift is
$\tilde{D}_{\rm XC}=85$~kpc or $0.12R_{500}$.
Next, we measured the X-ray peak position within $R_{500}$ by using
the XMM composite image smoothed with a $\sigma=3$~(pixels) Gaussian
function. We define the peak shift $D_{\rm XP}$ as a projected
distance relative to the BCG coordinates. The resultant peak shift is
shown in Table~\ref{tbl:sample}. As discussed in \cite{Mann12}, the
accuracy of the X-ray peak position depends on the statistical quality
of the X-ray observations as well as the surface brightness
distribution, which varies significantly between clusters. We assessed
the standard error of the peak shift $\delta D_{\rm XP}$ by comparing
X-ray images of each cluster with different smoothing scales
($\sigma=2,3,4$~pixels). For the 17 clusters, $\delta D_{\rm XP}$ ranges
from 4\% to 160\% (the mean is 25\%). The lower panels of
Figure~\ref{fig:centroid} show the histograms of the measured peak
shift in units of kpc and $R_{500}$. The median is $\tilde{D}_{\rm
XP}=36$~kpc or $0.047R_{500}$.
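For illustration, the peak measurement can be mimicked with plain NumPy (a hedged sketch with assumed names, not the actual analysis code; a separable Gaussian convolution stands in for the image smoothing, and the 5 arcsec pixel scale follows the centroid analysis):

```python
import numpy as np

def gaussian_smooth(img, sigma):
    # Separable Gaussian convolution (kernel truncated at 3 sigma).
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    sm = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, sm)

def peak_shift(img, bcg_x, bcg_y, sigma=3, pix_scale=5.0):
    """X-ray peak on the sigma-smoothed image and its projected offset
    from the BCG pixel position, in arcsec (pix_scale = arcsec/pixel)."""
    sm = gaussian_smooth(img, sigma)
    py, px = np.unravel_index(np.argmax(sm), sm.shape)
    return (px, py), np.hypot(px - bcg_x, py - bcg_y) * pix_scale
```

Repeating the measurement with $\sigma = 2, 3, 4$ pixels, as in the text, gives a simple handle on the smoothing-scale systematic.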
We divide the sample into two classes, ``relaxed'' clusters with a
small peak shift ($D_{\rm XP}<0.02R_{500}$) and ``disturbed'' clusters
with a large shift ($D_{\rm XP}>0.02R_{500}$) following the criteria
used in \cite{Sanders09}. As a result, there are 5 (11) relaxed
(disturbed) clusters and the fraction of relaxed objects is
$30\pm13$\%. Here the error indicates the systematic uncertainty in
the measurement and was estimated by comparing the X-ray images with
different smoothing scales. Section~\ref{subsec:discussion_centroid}
compares the fraction of relaxed clusters for the present optical
sample with nearby X-ray and SZE cluster samples.
\begin{figure*}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=50mm,angle=90]{fig4a.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=50mm,angle=90]{fig4b.eps}
\end{center}
\end{minipage}
\vspace{8mm}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=50mm,angle=90]{fig4c.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=50mm,angle=90]{fig4d.eps}
\end{center}
\end{minipage}
\vspace{4mm}
\caption{Histograms of centroid shift (upper panels) and peak shift
(lower panels) in units of kpc and $R_{500}$. In each panel, the
vertical dashed line indicates the distribution
median. }\label{fig:centroid}
\end{figure*}
\subsection{Luminosity-temperature relation}\label{subsec:lt}
In the self-similar model, the redshift evolution of the cluster
scaling relations is described by the factor $E(z) = (\Omega_M(1+z)^3
+ \Omega_{\Lambda})^{1/2}$ and the luminosity of the cluster gas in
the hydrostatic state follows $E(z)^{-1}L\propto T^2$. Within this
framework, the normalization of the luminosity-temperature relation
evolves as $E(z)^{\gamma}$ \citep{Giles16}. Despite a number of
observational studies, however, no clear consensus has been reached on
the evolution of the scaling relations \citep[for a review,
see][]{Giodini13}. In the present paper, we correct the redshift
evolution by applying the self-similar model and plot $E(z)^{-1}L$
against gas temperature in the left panel of Figure~\ref{fig:lt}.
We fit the observed $L_X-T$ relation to the power-law model
(equation~\ref{eq:model_lt}). To account for measurement errors in
both variables, we use the Bayesian regression method \citep{Kelly07}
because it has been demonstrated to outperform other common
estimators and to constrain the parameters even when the measurement
errors are large. The quantities $a$, $b$, and the intrinsic scatter $\sigma_{L|T}$
are treated as free parameters.
\begin{equation}
\log{\left( \frac{E(z)^{-1}L_X}{10^{42}{\rm erg\,s^{-1}}}\right ) } = a + b \log{\left(\frac{T}{\rm keV}\right)} \label{eq:model_lt}
\end{equation}
The best-fit parameters are $a=1.08\pm 0.24$, $b=2.00\pm 0.51$, and
$\sigma_{L|T}=0.23\pm0.09$.
For comparison, if we apply the BCES code \citep{Akritas96} to the
present optical sample, the fitting yields the best-fit $L_X-T$ slope
steeper than 2.0 but with a fairly large uncertainty; namely,
$b=2.59\pm3.31$. \cite{Kelly07} noted that the BCES estimate of the
slope tends to suffer some bias and becomes considerably unstable when
the measurement errors are large and/or the sample size is
small. Therefore, in section~\ref{subsec:discussion_scaling} we quote
the above results based on the Bayesian regression method.
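As an illustrative sketch (not the analysis code used here), the model of equation~(\ref{eq:model_lt}) can be evaluated and fitted in log-log space. The paper uses the Bayesian method of \cite{Kelly07}; the sketch below substitutes plain least squares purely to show the model's structure, and the flat-$\Lambda$CDM parameters are assumed values.

```python
# Illustrative sketch: evaluate E(z) and fit the L_X-T power law
# log10(E(z)^-1 L_X / 1e42 erg/s) = a + b log10(T/keV)
# by ordinary least squares (the paper itself uses the Kelly 2007
# Bayesian regression; this is only a structural illustration).
import numpy as np

OMEGA_M, OMEGA_L = 0.3, 0.7  # assumed flat-LCDM parameters

def E(z):
    """Dimensionless Hubble parameter E(z) = (Omega_M(1+z)^3 + Omega_L)^0.5."""
    return np.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def fit_lt(z, L_x, T):
    """Fit log10(E^-1 L_X / 1e42) = a + b log10(T); return (a, b).

    L_x in erg/s, T in keV.
    """
    y = np.log10(L_x / E(z) / 1e42)
    x = np.log10(T)
    b, a = np.polyfit(x, y, 1)  # polyfit returns highest degree first
    return a, b

# synthetic scatter-free data drawn from a = 1.0, b = 2.0
z = np.array([0.2, 0.3, 0.4, 0.5])
T = np.array([2.0, 3.0, 4.0, 5.0])
L = 1e42 * 10.0 ** (1.0 + 2.0 * np.log10(T)) * E(z)
a, b = fit_lt(z, L, T)  # recovers a = 1.0, b = 2.0
```

A full treatment would additionally model the measurement errors on both axes and the intrinsic scatter, as the Bayesian method does.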
\begin{figure*}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=50mm,angle=90]{fig5a.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=50mm,angle=90]{fig5b.eps}
\end{center}
\end{minipage}
\vspace{4mm}
\caption{(left) Luminosity-temperature relation of the high-richness
clusters. (right) Gas temperature - richness relation. In each
panel, the dashed line shows the best-fit power-law model. In the
right panel, the dotted line shows the best-fit $T-\hat{N}_{\rm
mem}$ relation derived for the XXL and XXL-LSS sample
\citep{Oguri18}. }\label{fig:lt}
\end{figure*}
\section{Discussion}\label{sec:discussion}
\subsection{Centroid shift and the cluster dynamical state}\label{subsec:discussion_centroid}
In section~\ref{subsec:centroidshift} we quantified the centroid shift
and peak shift from the XMM image analysis to find that half of the
sample has the centroid (peak) shift larger than $0.12R_{500}$
($0.05R_{500}$) or 85~kpc (36~kpc). Following the criteria used in
\cite{Sanders09}, \cite{Rossetti16} estimated the fraction of relaxed
clusters in the Planck SZE sample to be $52\pm4$\%. They also
calculated the fraction to be $\sim 74$\% in X-ray selected cluster
samples constructed from the HIFLUGCS, MACS, and REXCESS surveys,
whereas we obtain only $30\pm13$\% from our optical sample. This
suggests that the optically-selected sample contains a larger fraction
of merging clusters with disturbed morphology particularly in
comparison with the X-ray selected cluster samples.
X-ray observations preferentially detect relaxed clusters having cool
cores at the center as opposed to more disturbed, non-cool-core
clusters found in SZE surveys \citep{Eckert11,
Rossetti17,Andrade-Santos17}. Furthermore, \cite{Chon17} claim that
the cool-core bias in previous X-ray surveys is due to the
survey-selection method, such as flux-limited selection, and is not
due to the inherent nature of X-ray selection. Therefore, considering
the nature of the HSC cluster survey, we suggest that the observed
small fraction of relaxed clusters in the present optical sample is
due to the fact that the CAMIRA algorithm is immune to the dynamical
state of X-ray-emitting gas and is likely to detect clusters with a
wider range of cluster morphology.
Given a higher merger rate in the distant universe, the redshift
evolution of X-ray morphology is likely to affect the measurement of
the fraction of relaxed clusters with respect to disturbed clusters.
\cite{Mann12} reported based on the Chandra observations that the
fraction of merging clusters increases at $z>0.4$ for the X-ray
luminous clusters. The redshift evolution is, however, only
marginally seen in our optically-selected cluster sample; the
fractions of relaxed clusters estimated from the X-ray peak shifts are
$38\pm23$\% at $z<0.4$ and $<25$\% at $z>0.4$.
\subsection{Scaling relations}\label{subsec:discussion_scaling}
The slope of $2.0\pm0.5$ of the $L_X-T$ relation derived for the
present optically-selected clusters (section~\ref{subsec:lt}) is
consistent with the slope of 2.0 predicted from the self-similar
model, whereas a steeper slope of $\sim 3$ has been reported by many
X-ray observations in the past \citep[for review,
see][]{Giodini13}. Even so, the data points lie within the observed
large scatter of X-ray clusters on the $L_X-T$ plane \citep{Takey11}.
The fitted slope agrees with that of the Red-sequence Cluster Survey
at high redshifts [the slope parameter is $2.1\pm0.3$; \cite{Hicks08}]
and that of the total RCS sample; namely, 18 clusters at $0.16<z<1.0$
[$2.7\pm0.5$; \cite{Hicks13}] within the errors. In comparison with
X-ray selected samples that contain a large number of clusters
($>100$) at a wide redshift range [$2.53\pm0.15$; \cite{Reichert11},
$2.80\pm0.12$; \cite{Takey11}, $2.72\pm0.18$; \cite{Maughan12}], the
present sample shows a marginally shallower slope. To further confirm
the result, however, we need to increase the number of clusters and
improve the accuracy with which the $L_X-T$ relation is measured.
The right panel of Figure~\ref{fig:lt} shows the relationship between
gas temperature and optical richness. Although the scatter is large,
a positive correlation is seen, with a correlation coefficient of
0.63. Assuming the power-law model,
\begin{equation}
\log{\left(\frac{T}{\rm keV}\right)} = a_T \log{\left(\frac{\hat{N}_{\rm mem}}{30}\right)} + b_T, \label{eq:model_tn}
\end{equation}
the fit to the data yields $a_T = 0.92\pm0.38$ and $b_T=0.38\pm0.06$.
This is marginally steeper than the best-fit power-law relation
derived for 50 bright X-ray clusters in the XXL and XXL-LSS fields
[$a_T=0.50\pm0.12$, $b_T=0.48\pm0.02$; \cite{Oguri18}]. Because the
gas temperature of XXL and XXL-LSS clusters was measured in the
central $r<300$~kpc region \citep{Pierre04,Pierre16}, direct
comparison is not easy. On the other hand, the self-similar model predicts
$T\propto \hat{N}_{\rm mem}^{2/3}$ given that the cluster mass is
related to richness and temperature through $M\propto \hat{N}_{\rm
mem}$ and $M\propto T^{3/2}$, respectively. Thus our fitting result
is consistent with the self-similar model, although the statistical
uncertainty is large.
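For completeness, the self-similar expectation quoted above follows directly from the two assumed mass scalings:
\begin{equation}
M \propto \hat{N}_{\rm mem}, \qquad M \propto T^{3/2}
\quad\Longrightarrow\quad
T \propto M^{2/3} \propto \hat{N}_{\rm mem}^{2/3},
\end{equation}
i.e., a logarithmic slope of $a_T = 2/3 \approx 0.67$, which lies within $1\sigma$ of the fitted $a_T = 0.92\pm0.38$.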
\section{Summary and future prospects}\label{sec:summary}
Using the XMM-Newton archive data, we apply an X-ray analysis to 17
rich, optically-selected clusters of galaxies at $0.14<z<0.75$ in the
HSC-SSP field. Most of the clusters were serendipitously detected in
the XMM-Newton fields of view. The major findings are as follows:
\begin{enumerate}
\item We systematically analyzed the X-ray centroid or peak shift as
compared with the BCG position. The fraction of relaxed clusters in
the optically-selected cluster sample, which is defined based on the
offset between the BCG and X-ray peak, is $30\pm13$\%. This is less
than that of the X-ray samples. Because the optical sample is immune
to the cool-core bias, it is likely to contain more irregular
clusters and thus cover a larger range of the cluster morphology.
\item The slope of the luminosity-temperature relation is marginally
less than that of X-ray samples and is consistent with the
self-similar model prediction of 2.0. The slope of the
temperature-richness relation is also consistent with the prediction
of the self-similar model although the former has a large
statistical uncertainty.
\end{enumerate}
Our results provide important information about the X-ray properties
of the optically-selected clusters, which are marginally different
from those observed in the X-ray samples. To obtain more conclusive
results, we need to improve the measurement accuracy. We thus plan to
extend the analysis by (1) incorporating fainter objects in the
3XMM-DR7 catalog and (2) conducting X-ray observations of the massive,
high-redshift ($0.8<z<1.2$) clusters newly discovered by the HSC-SSP
survey. For the latter, the XMM-Newton follow-up project is now
ongoing and is to be the subject of an upcoming presentation.
Furthermore, by the time of completion of the HSC-SSP survey, the
CAMIRA cluster catalog will be about six times larger than at
present. These works should allow us to derive the mass-observable
scaling by using a larger number of clusters and study the redshift
evolution of the X-ray properties of the optical clusters. Detailed
comparisons of optical, weak lensing, SZE, and X-ray selected clusters
will improve our knowledge of cluster-mass calibration and cluster
evolution.
\begin{ack}
The Hyper Suprime-Cam (HSC) collaboration includes the astronomical
communities of Japan and Taiwan, and Princeton University. The HSC
instrumentation and software were developed by the National
Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the
Physics and Mathematics of the Universe (Kavli IPMU), the University
of Tokyo, the High Energy Accelerator Research Organization (KEK), the
Academia Sinica Institute for Astronomy and Astrophysics in Taiwan
(ASIAA), and Princeton University. Funding was contributed by the
FIRST program from Japanese Cabinet Office, the Ministry of Education,
Culture, Sports, Science and Technology (MEXT), the Japan Society for
the Promotion of Science (JSPS), Japan Science and Technology Agency
(JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and
Princeton University.
This paper makes use of software developed for the Large Synoptic
Survey Telescope. We thank the LSST Project for making their code
available as free software at http://dm.lsst.org.
The Pan-STARRS1 Surveys (PS1) have been made possible through
contributions of the Institute for Astronomy, the University of
Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its
participating institutes, the Max Planck Institute for Astronomy,
Heidelberg and the Max Planck Institute for Extraterrestrial Physics,
Garching, The Johns Hopkins University, Durham University, the
University of Edinburgh, Queen's University Belfast, the
Harvard-Smithsonian Center for Astrophysics, the Las Cumbres
Observatory Global Telescope Network Incorporated, the National
Central University of Taiwan, the Space Telescope Science Institute,
the National Aeronautics and Space Administration under Grant
No. NNX08AR22G issued through the Planetary Science Division of the
NASA Science Mission Directorate, the National Science Foundation
under Grant No. AST-1238877, the University of Maryland, and Eotvos
Lorand University (ELTE) and the Los Alamos National Laboratory.
We are grateful to Chien-Hsiu Lee for useful comments. This work was
supported in part by JSPS KAKENHI grants 16K05295 (NO) and JP15K17610
(SU). YI is financially supported by a Grant-in-Aid for JSPS Fellows
(16J02333).
\end{ack}
\bibliographystyle{apj}
\section{Introduction}
Realizing the physics programs of the planned and/or upgraded high-energy physics (HEP) experiments over the next 10 years will require the HEP community to address a number of challenges in the area of software and computing. For this reason, the HEP software community has engaged in a planning process over the past two years, with the objective of identifying and prioritizing the research and development required to enable the next generation of HEP detectors to fulfill their full physics potential. The aim is to produce a Community White Paper (CWP)~\cite{HSF2017} which will describe the community strategy and a roadmap for software and computing research and development in HEP for the 2020s. This activity is organized under the umbrella of the HEP Software Foundation (HSF).
The CWP process was carried out by working groups centered on specific topics. The topics of event reconstruction and software triggers are summarized together in this document and in more detail elsewhere~\cite{RECOCWP}. The reconstruction of raw detector data and simulated data and its processing in real time represent a major component of today's computing requirements in HEP. A recent projection~\cite{Campana2016} of the ATLAS 2016 computing model results in $>$85\% of the HL-LHC CPU resources being spent on the reconstruction of data or simulated events. We have evaluated the most important components of next-generation algorithms, data structures, and code development and management paradigms needed to cope with the highly complex environments expected in HEP detector operations in the next decade. New approaches to data processing were also considered, including the use of novel (or at least novel to HEP) algorithms, and the movement of data analysis into real-time environments.
This document will discuss software algorithms essential to the interpretation of raw detector data into analysis-level objects. Specifically, these algorithms can be broadly grouped:
\begin{enumerate}[itemsep=0ex]
\item
{\bf Online}: Algorithms, or sequences of algorithms, executed on events read out from the detector in near-real-time as part of the software trigger, typically on a computing facility located close to the detector itself.
\item
{\bf Offline}: As distinguished from online, any algorithm or sequence of algorithms executed on the subset of events preselected by the trigger system, or generated by a Monte Carlo simulation application, typically in a distributed computing system.
\item
{\bf Reconstruction}: The transformation of raw detector information into higher level objects used in physics analysis. A defining characteristic of “reconstruction” which separates it from “analysis” is that the quality criteria used in the reconstruction to, for example, minimize the number of fake tracks, should be general enough to be used in the full range of physics studies required by the experimental physics program. This usually implies that reconstruction algorithms use the entirety of the detector information to attempt to create a full picture of each interaction in the detector. Reconstruction algorithms are also typically run as part of the processing carried out by centralized computing facilities.
\item
{\bf Trigger}: the part of the online system responsible for classification of events which reduces either the number of events which are kept for further “offline” analysis, the size of such events, or both. In this working group we were only concerned with software triggers, whose defining characteristic is that they process data without a fixed latency. Software triggers are part of the real-time processing path and must make decisions quickly enough to keep up with the incoming data, possibly using substantial disk buffers.
\item
{\bf Real-time analysis}: Data processing that goes beyond object reconstruction, and is performed online within the trigger system. The typical goal of real-time analysis is to combine the products of the reconstruction algorithms (tracks, clusters, jets...) into complex objects (hadrons, gauge bosons, new physics candidates...) which can then be used directly in analysis without an intermediate reconstruction step.
\end{enumerate}
\section{Challenges}
Software trigger and event reconstruction techniques in HEP face a number of new challenges in the next decade. These are broadly categorized as arising 1) from new and upgraded accelerator facilities, 2) from detector upgrades and new detector technologies, 3) from increases in anticipated event rates to be processed by algorithms (both online and offline), and 4) from evolutions in software development practices.
Advancements in facilities and future experiments bring a dramatic increase in physics reach, as well as increased event complexity and rates. At the HL-LHC, the central challenge for object reconstruction is thus to maintain excellent efficiency and resolution in the face of high pileup values, especially at low transverse momenta. Detector upgrades such as increases in channel density, high precision timing and improved detector geometric layouts are essential to overcome these problems. For software, particularly for triggering and event reconstruction algorithms, there is a critical need not to dramatically increase the processing time per event.
A number of new detector concepts are proposed on the 5-10 year timescale in order to help in overcoming the challenges identified above. In many cases, these new technologies bring novel requirements to software trigger and event reconstruction algorithms or require new algorithms to be developed. Those of particular importance at the HL-LHC include high-granularity calorimetry, precision timing detectors, and hardware triggers based on tracking information which may seed later software trigger and reconstruction algorithms. Longer term projects with sufficiently mature software infrastructure can include cost implications of the simulation and reconstruction algorithms into the detector design considerations. This is especially important when the computing cost is expected to be a substantial part of the total construction and operation cost for an experiment.
Trigger systems for next-generation experiments are evolving to be more capable, both in their ability to select a wider range of events of interest for the physics program of their experiment, and their ability to stream a larger rate of events for further processing. ATLAS and CMS both target systems where the output of the hardware trigger system is increased by 10x over the current capability, up to 1 MHz~\cite{ATLAS2015,CMS2015}. In other cases, such as LHCb~\cite{LHCb2014} and ALICE~\cite{ALICE2015}, the full collision rate (between 30 to 40 MHz for typical LHC operations) will be streamed to real-time or quasi-realtime software trigger systems. The increase in event complexity also brings a “problem” of overabundance of signal to the experiments, and specifically the software trigger algorithms. The evolution towards a genuine real-time analysis of data has been driven by the need to analyze more signal than can be written out for traditional processing, and technological developments which make it possible to do this without reducing the analysis sensitivity or introducing biases.
Evolutions in computing technologies are both opportunities to move beyond commodity x86 technologies, which HEP has used very effectively over the past 20 years, and significant challenges to derive sufficient event processing throughput per cost to reasonably enable our physics programs~\cite{Bird2014}. Specific items identified included 1) the increase of SIMD capabilities (processors capable of running a single instruction set simultaneously over multiple data), 2) the evolution towards multi- or many-core architectures, 3) the slow increase in memory bandwidth relative to CPU capabilities, 4) the rise of heterogeneous hardware, and 5) the possible evolution in facilities available to HEP production systems.
The move towards open source software development and continuous integration systems brings opportunities to assist developers of software trigger and event reconstruction algorithms. Continuous integration systems have already allowed automated code quality and performance checks, both for algorithm developers and code integration teams. Scaling these up to allow for sufficiently high statistics checks is among the still outstanding challenges. As the timescale for experimental data taking and analysis increases, the issues of legacy code support increase. Code quality demands increase as traditional offline analysis components migrate into trigger systems, or more generically into algorithms that can only be run once.
\section{Current approaches}
Substantial computing facilities are in use for both online and offline event processing across all experiments surveyed. Online facilities are dedicated to the operation of the software trigger, while offline facilities are shared for operational needs including event reconstruction, simulation (often the dominant component) and analysis. CPU in use by experiments is typically at the scale of tens or hundreds of thousands of x86 processing cores. Projections to future needs, such as for the HL-LHC, show the need for a substantial increase in scale of facilities without significant changes in approach or algorithms.
The CPU needed for event reconstruction tends to be dominated by charged particle reconstruction (tracking), especially when the need for efficiently reconstructing low transverse momentum particles is considered. Calorimetric reconstruction, particle-flow reconstruction, and particle-identification algorithms also make up significant parts of the CPU budget in some experiments. The CPU required for event reconstruction and triggering is thus an area with challenges and substantial potential risk to the computing cost of experiments. In this respect, software for future experiments will continue to evolve, to improve both the physics and technical performance characteristics of algorithms, and the uncertainty and evolution due to detector performance and operating conditions will continue throughout the experimental program.
Disk storage is typically 10s to 100s of PB per experiment. It is dominantly used to make the output of the event reconstruction, both for real data and simulation, available for analysis. Current generation experiments have moved towards smaller, but still flexible, data tiers for analysis. These tiers are typically based on the ROOT~\cite{Brun1996} file format and constructed to facilitate both skimming of interesting events and the selection of interesting pieces of events by individual analysis groups or through centralized analysis processing systems. Initial implementations of real-time analysis systems are in use within several experiments. These approaches remove the detector data that typically makes up the raw data tier kept for offline reconstruction, and keep only final analysis objects~\cite{Aaij2016,ATLAS2017,CMS2016}.
Detector calibration and alignment requirements were surveyed. Generally, a high level of automation is in place across experiments, both for very frequently updated measurements and for more rarely updated ones. Often automated procedures are integrated as part of the data taking and data reconstruction processing chain. Some longer-term measurements, requiring significant data samples to be analyzed together, remain critical pieces of calibration and alignment work. These techniques are often most critical for a subset of precision measurements rather than for the entire physics program of an experiment.
\section{Research and development Roadmap and Goals}
We identify seven broad areas to be critical for software trigger and event reconstruction work over the next decade. These are:
\begin{enumerate}[itemsep=-1ex]
\item
Enhanced vectorization programming techniques
\item
Algorithms and data structures to efficiently exploit many-core architectures
\item
Algorithms and data structures for non-x86 architectures (e.g., GPUs, FPGAs)
\item
Enhanced quality assurance (QA) and quality control (QC) for reconstruction techniques
\item
Real-time analysis
\item
Precision physics-object reconstruction, identification and measurement techniques
\item
Fast software trigger and reconstruction algorithms for high-density environments
\end{enumerate}
Not all roadmap areas are directly applicable to the event reconstruction and triggering approach taken by all experiments. However, we expect that each area of proposed research and development will be broadly applicable to future high-energy physics experimental programs.
\subsection*{Roadmap area 1: Enhanced vectorization programming techniques}
HEP developed toolkits and algorithms typically make poor use of vector units on commodity computing systems. Improving this will bring speedups to applications running on both current computing systems and most future architectures. The goal for work in this area is to evolve current toolkit and algorithm implementations, and best programming techniques to better use SIMD capabilities of current and future computing architectures.
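As a language-agnostic illustration of the underlying idea (not HEP production code, and not taken from this document), the pattern that SIMD vectorization exploits — one operation applied across contiguous values in a structure-of-arrays layout — can be sketched in NumPy, where an array-wide expression replaces an explicit per-element loop. The quantities and formula below ($E = p_T\cosh\eta$ for massless tracks) are chosen only as a plausible example.

```python
# Illustrative sketch of the data-parallel pattern behind SIMD
# vectorization, expressed with NumPy. Separate contiguous pt/eta
# arrays (structure-of-arrays) let one array-wide operation act on
# many values at once, instead of a per-element loop.
import numpy as np

def scalar_et(pt, eta):
    """Per-element loop over tracks (slow, non-vectorizable form)."""
    out = []
    for p, e in zip(pt, eta):
        out.append(p * np.cosh(e))  # E = pT * cosh(eta), massless track
    return np.array(out)

def vectorized_et(pt, eta):
    """Same computation as a single array-wide (SIMD-friendly) expression."""
    return pt * np.cosh(eta)

pt = np.random.default_rng(0).uniform(1.0, 50.0, 10_000)
eta = np.random.default_rng(1).uniform(-2.5, 2.5, 10_000)
assert np.allclose(scalar_et(pt, eta), vectorized_et(pt, eta))
```

In compiled HEP code the analogous transformation is achieved with auto-vectorization, intrinsics, or SIMD libraries, but the data-layout consideration is the same.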
\subsection*{Roadmap area 2: Algorithms and data structures to efficiently exploit many-core architectures}
Computing platforms are generally evolving towards having more cores in order to increase processing capability. This evolution has resulted in multi-threaded frameworks in use, or in development, across HEP. Algorithm developers can improve throughput by being thread safe and enabling the use of fine-grained parallelism. The goal is to evolve current event models, toolkits and algorithm implementations, and best programming techniques to improve the throughput of multithreaded software trigger and event reconstruction applications.
\subsection*{Roadmap area 3: Algorithms and data structures for non-x86 computing architectures (e.g., GPUs, FPGAs)}
Computing architectures using technologies beyond CPUs offer an interesting alternative for increasing throughput of the most time consuming trigger or reconstruction algorithms. Such architectures (e.g., GPUs, FPGAs) could be easily integrated into dedicated trigger or specialized reconstruction processing facilities (e.g., online computing farms). The goal is to demonstrate how the throughput of toolkits or algorithms can be improved through the use of new computing architectures in a production environment.
\subsection*{Roadmap area 4: Enhanced QA/QC for reconstruction techniques}
HEP experiments have extensive continuous integration systems, including varying code regression checks that have enhanced the quality assurance and quality control procedures for software development in recent years. These are typically maintained by individual experiments and have not yet reached the scale where statistical regression, technical, and physics performance checks can be performed for each proposed software change. The goal is to enable the development, automation, and deployment of extended QA and QC tools and facilities for software trigger and event reconstruction algorithms.
\subsection*{Roadmap area 5: Real-time analysis }
Real-time analysis techniques are being adopted to enable a wider range of physics signals to be saved by the trigger for final analysis. As rates increase, these techniques can become more important and widespread by enabling only the parts of an event associated with the signal candidates to be saved, reducing the required disk space. The goal is to evaluate and demonstrate the tools needed to facilitate real-time analysis techniques. Research topics include compression and custom data formats; toolkits for real-time detector calibration and validation which will enable full offline analysis chains to be ported into real-time; and frameworks which will enable non-expert offline analysts to design and deploy real-time analyses without compromising data taking quality.
\subsection*{Roadmap area 6: Precision physics-object reconstruction, identification and measurement techniques}
The central challenge for object reconstruction at HL-LHC is to maintain excellent efficiency and resolution in the face of high pileup values, especially at low transverse momenta. Both trigger and reconstruction approaches need to exploit new techniques and higher granularity detectors to maintain or even improve physics measurements in the future. Reconstruction in very high pileup environments, such as the HL-LHC or FCC-hh, may also greatly benefit from adding timing information to our detectors, in order to exploit the finite beam crossing time during which interactions are produced. The goal is to develop and demonstrate efficient techniques for physics object reconstruction and identification in complex environments.
\subsection*{Roadmap area 7: Fast software trigger and reconstruction algorithms for high-density environments }
Future experimental facilities will bring a large increase in event complexity. The scaling of current-generation algorithms with this complexity must be improved to avoid a large increase in resource needs. In addition, it may be desirable or indeed necessary to deploy new algorithms, including advanced machine learning techniques developed in other fields, in order to solve these problems. The goal is to evolve or rewrite existing toolkits and algorithms focused on their physics and technical performance at high event complexity (e.g. high pileup at HL-LHC). Most important targets are those which limit expected throughput performance at future facilities (e.g., charged-particle tracking). A number of such efforts are already in progress across the community.
\section{Conclusions}
The next decade will see the volume and complexity of data being processed by HEP experiments increase by at least one order of magnitude. While much of this increase is driven by planned upgrades to the four major LHC detectors, new experiments such as DUNE will also make significant demands on the HEP data processing infrastructure. It is essential that software triggers and event reconstruction algorithms continue to evolve so that they are able to efficiently exploit future computing architectures and deal with this increase in data rates without loss of physics capability.
We have identified seven key areas where R\&D is necessary to enable the community to exploit the full power of the enormous datasets which we will be collecting. Three of these areas concern the increasingly parallel and heterogeneous computing architectures which we will have to write our code for. In addition to a general effort to vectorize our codebases, we must understand what kinds of algorithms are best suited to what kinds of hardware architectures, develop benchmarks that allow us to compare the physics-per-dollar-per-watt performance of different algorithms across a range of potential architectures, and find ways to optimally utilise heterogeneous processing centres. The consequent increase in the complexity and diversity of our codebase will necessitate both a determined push to educate tomorrow’s physicists in modern coding practices, and a development of more sophisticated and automated quality assurance and control for our codebases. The increasing granularity of our detectors, and the addition of timing information to help cope with the extreme pileup conditions at the HL-LHC, will require us to both develop new kinds of reconstruction algorithms and to make them fast enough for use in real-time. Finally, the increased signal rates will mandate a push towards real-time analysis in many areas of HEP, in particular those with low transverse momentum signatures.
The success of this R\&D program will be intimately linked to challenges confronted in other areas of HEP computing, most notably the development of software frameworks which are able to support heterogeneous parallel architectures, including the associated data structures and I/O, the development of lightweight detector models which maintain physics precision with minimal timing and memory consequences for the reconstruction, enabling the use of offline analysis toolkits and methods within real-time analysis, and the ability to integrate machine learning reconstruction algorithms being developed outside HEP into our workflows and apply them to our problems. For this reason, perhaps the most important task ahead of us is to maintain the community which has coalesced together in this CWP process, so that the work done in these sometimes disparate areas of HEP fuses coherently together into a solution to the problems facing us over the next decade.
\section{Introduction}
Iron-gallium alloys (Fe$_{100-x}$Ga$_x$) have become an important material for magnetostrictive applications because of their large tetragonal magnetostriction $\lambda_{100}$ at low field for alloys around 18.4 \% Ga (Galfenol composition)\cite{Clark2000,Clark2001,Clark2003}, whereas, at the same time, they provide good corrosion resistance and mechanical hardness \cite{Atulasimha2011}. Interest in this alloy has been further enhanced by the report of non-Joulian magnetostrictive behavior \cite{NatureFeGa}, which calls for further experimental and theoretical examination. Large magnetoelastic (ME) coupling is a rewarding property in thin films and patterned elements, as the ME anisotropy can control the orientation of the magnetization \textbf{M}. Epitaxial Fe$_{100-x}$Ga$_x$ thin films have shown a remarkable potential for microwave \cite{Parkes2013} and energy conversion \cite{Onuta2011} applications, by using a piezoelectric layer to control the magnetic anisotropy through the modification of the strain in the magnetic layer by means of an applied voltage. This method, which has the advantages of low power consumption and efficiency in the \textbf{M} switching, has been proposed to control \textbf{M} in the magnetostrictive layer of MRAM devices \cite{Hu2011}.
Improved magnetic properties are obtained by controlling the crystalline phase, as pointed out by studies on rare-earth iron Laves-phase alloys showing that the magnetostriction can be enhanced at the phase boundary separating two ferromagnetic phases of different crystallographic symmetries \cite{PhysRevLett.104.197201, PhysRevLett.111.017203}. For Fe-Ga alloys, the synergistic use of cubic phases with \textit{bcc} and \textit{fcc} symmetries leads to composites with stable magnetization and magnetostriction at high temperature \cite{Ma2017}.
Bulk samples are obtained by processing the melted alloy through different routes that include or combine slow cooling, quenching and annealing; for a review see \cite{Handbook} and references therein. Regarding the stability of the \textit{bcc} and \textit{fcc} crystal phases (see the diagram in Fig. \ref{fig:FiguraDiagramas}a \cite{inCollFeGa}), it is established that the formation of the \textit{fcc} L1$_2$ phase requires a very well controlled procedure that includes long annealing times \cite{Srisukhumbowornchai2002}. Thus, the D0$_3$ phase can be present at low temperature instead of a mix of \textit{bcc} and \textit{fcc} ordered phases, as observed in the metastable phase diagram of Fig. \ref{fig:FiguraDiagramas}b \cite{IKEDA2002198}.
Thin film technology offers a different route to obtain materials and adds a new variable: strain due to a substrate with a lattice parameter different from that of the film can induce the nucleation of a phase at lower temperature, as observed for the canonical Fe/Cu(100) system \cite{Liu1}.
Another factor is the local diffusion of the species forming the alloy and their capacity to generate an ordered phase, or to remain at the sticking point with a chemically or structurally disordered structure, during the adsorption of atoms at the film surface. \textit{bcc} Fe-Ga films can grow epitaxially on MgO(001) \cite{McClure2009}, a material used in important systems such as tunnel junctions, on ZnSe(001), where a metastable next-neighbor pairing between Ga-Ga atoms has been reported \cite{Eddrief2011}, and on GaAs(001) \cite{Parkes2013,Beardsley2017}. Here, we present a study of Fe$_{100-x}$Ga$_x$ films grown on MgO(001) crystals with x $<$ 30 and substrate temperature $T_s$ between 150 $^o$C and 700 $^o$C for a growth rate of about 0.7 nm/min. Several characterization techniques are used to probe the long- and short-range structure of these epitaxial films: X-ray diffraction, RHEED, TEM and EXAFS. The main result is the observation of the nucleation of an \textit{fcc} L1$_2$ phase for \textit{x} $\approx$ 25 at $T_{s}\gtrsim$ 400 $^o$C, and the formation of metastable \textit{bcc} phases at $T_{s} \approx$ 150 $^o$C over the whole composition range studied, with an anticlustering mechanism of the Ga atoms.
\begin{figure}
\includegraphics[width=0.9\textwidth]{diagramas}
\caption{(a) Equilibrium and (b) metastable phase diagrams for Fe-Ga alloys adapted, respectively, from references \cite{inCollFeGa} and \cite{IKEDA2002198}. The crystal structures of the A2, B2, D0$_3$ and L1$_2$ phases are also shown, with gray spheres indicating indistinctly Fe or Ga atoms, while red and yellow colors specify, respectively, the positions of iron and gallium atoms (lines are guides for the eye).}
\label{fig:FiguraDiagramas}
\end{figure}
\section{Experimental methods}
Fe$_{100-x}$Ga$_x$(001) epitaxial films were grown on MgO(001) by Molecular Beam Epitaxy. A tetragonal deformation is expected to occur due to the lattice mismatch with the MgO substrate. Bulk Fe$_{80}$Ga$_{20}$ and MgO have lattice parameters a = 2.90 \AA\enspace and a = 4.21 \AA\enspace respectively. When FeGa is grown onto MgO(001), its lattice is rotated by 45$^o$ in the growth plane in such a way that the FeGa [110] and MgO [100] directions are aligned. This provides a matching condition with the substrate that gives rise to a tensile in-plane mismatch strain equal to 2.6\%. Prior to growth, the MgO(001) crystal was held at 800 $^o$C to obtain a clean surface, as shown by the Reflection High-Energy Electron Diffraction (RHEED) patterns displayed in Fig. \ref{fig:FiguraRHEED}a-b. These images display sharp spots and Kikuchi lines along the [100], Fig. \ref{fig:FiguraRHEED}a, and [110], Fig. \ref{fig:FiguraRHEED}b, azimuthal directions. Fe and Ga were deposited using an e-beam gun and a high-temperature cell, respectively, with $T_s$ ranging between 150 $^o$C and 700 $^o$C. The growth rate was about 0.7 nm/min. The film composition was obtained by means of energy-dispersive X-ray spectroscopy (EDX), and the thickness was determined by X-ray reflectivity. A 2 nm thick Mo capping layer was deposited onto the Fe-Ga layers to prevent oxidation. The lattice parameters were determined by means of XRD measurements using a Bruker D8 Advance high-resolution diffractometer and a Rigaku rotating-anode D/max 2500 diffractometer working in a Bragg-Brentano configuration. Transmission electron microscopy images were obtained using a Tecnai F30 microscope on a sample thinned down by a Helios 600 dual-beam system.
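The 2.6\% figure quoted above can be checked with a one-line calculation: the 45$^o$ rotation matches the FeGa cell edge to $a_{\rm MgO}/\sqrt{2}$. A minimal numerical sketch, using the bulk lattice parameters given above:

```python
import math

a_FeGa = 2.90  # bulk Fe80Ga20 lattice parameter (angstrom)
a_MgO = 4.21   # MgO lattice parameter (angstrom)

# FeGa[110] || MgO[100]: the FeGa cell edge matches a_MgO / sqrt(2)
template = a_MgO / math.sqrt(2)

# Tensile in-plane mismatch strain imposed on the film
mismatch = (template - a_FeGa) / a_FeGa
print(f"{100 * mismatch:.2f} %")  # 2.65 %, consistent with the ~2.6 % quoted above
```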
The EXAFS spectra were recorded at the Fe K-edge (7112 eV), and at the Ga K-edge (10367 eV) at room temperature, in fluorescence mode.
The experiments were performed on beamline BM30B at the European Synchrotron Radiation Facility (ESRF), Grenoble, France and on beamline XAFS at the Italian Synchrotron Radiation Facility (Elettra), Trieste, Italy. Measurements were performed with
the sample surface oriented nearly parallel to the incident beam, $\vec{\varepsilon}_{\|}$ (incidence angle of about $5^{\circ}$), and perpendicular to it, $\vec{\varepsilon}_{\bot}$ (incidence angle of about $85^{\circ}$).
A pure \textit{bcc} Fe film grown epitaxially on MgO was also measured as reference.
The films studied are listed in Table \ref{tab_estruc}. The first column is a code that, hereafter, is used to refer to the samples in the following sections. The samples are divided into two sets, S1 and S2. S1 corresponds to films grown at $T_{s}$ = 150 $^o$C with \textit{x} varying between 0 and 28, a value that is used in the label. For the second set of films, S2, the substrate temperature varies from 300 $^o$C to 700 $^o$C, and is used as the label; the flux of the Fe and Ga beams is fixed, except for the sample with \textit{x} = 13 and $T_{s}$ = 600 $^o$C, which was grown to look for the presence of the \textit{fcc} phase at low gallium content.
\begin{table}
\begin{tabular}{cc|cccccc}
& & & & & & &\\
\textit{Sample} &x & SL peak & [100] & [110] & a$_{\bot}$ & a$_{\|}$ & th \\
& (at \% Ga) & & & & (\AA)& (\AA) & (nm) \\
\hline
& & & & & & &\\
S1-0 & 0 & N & N & N & 2.886 & &44 \\
S1-12 & 12 & N & N & N & 2.866 & &25 \\
S1-13 & 13 & N & N & N & 2.871 & & \\
S1-18 & 18 & N & N & N & 2.879 & 2.94 &14.5 \\
S1-21 & 21 & Y & N & N & 2.890 & 2.930 &16 \\
S1-22 & 22 & Y & N & Y & 2.884 & 2.947 &17 \\
S1-24a & 24 & Y & Y & Y & 2.888 & 2.938 &20 \\
S1-24b & 24 & Y & Y & Y & 2.903 & &24.3 \\
S1-28a & 28 & Y & N & Y & 2.894 & &20.5 \\
S1-28b & 28 & Y & Y & Y & 2.904 & 2.918 &27.6 \\
& & & & & & &\\
\hline\hline
& & & & & & &\\
\textit{Sample} & $T_{s}$& x & str.& SL peak & a$_{\bot}$ & a$_{\|}$ & th \\
& ($^o$C)& (at \% Ga)& & & (\AA) & (\AA) & (nm) \\
\hline
&& & & & & & \\
S2-300 &300& 24$^+$ &bcc &Y & 2.935 & &24 \\
S2-400 &400& 24 &fcc &N & 3.614 & &27.1 \\
S2-500 &500& 24 &bcc &Y & 2.909 & 2.898 &26.5 \\
S2-600 &600& 24&fcc &Y & 3.687 & 3.677 &- \\
S2-600b &600& 13&bcc &N & 2.900 & & \\
S2-700 &700& 24&bcc &Y & 2.907 & & \\
\hline
\end{tabular}
\caption {List of the films studied in this work. Films with the S1 label were grown at $T_{s}$ = 150 $^o$C and have a \textit{bcc} structure. From left to right: sample code, gallium content (EDX), presence of the (001) SL peak, reconstruction along the FeGa [100] and [110] azimuthal directions (RHEED), out-of-plane and in-plane lattice parameters (XRD) and film thickness th (XRR). Films labeled S2, except S2-600b, were prepared under the same experimental conditions for the evaporation beams, and the alloy composition is assigned the value obtained by EDX for film S2-300 ($^+$). Sample S2-600b was prepared with a lower Ga/Fe crucible-flux ratio to obtain a lower Ga content. From left to right: sample code, substrate temperature, gallium content (EDX), crystal structure, presence of the (001) SL peak, out-of-plane and in-plane lattice parameters (XRD) and film thickness th (XRR). Errors on the EDX data are about $\pm$ 1\%.}
\label{tab_estruc}
\end{table}
\section{Experimental Results}
\subsection{Reflected high energy electron diffraction}
The \textit{in situ} RHEED technique provides an initial evaluation of the growth mechanism and of the layers' crystal structure. The RHEED patterns for representative films with \textit{bcc} and \textit{fcc} structure, taken with the incident e-beam along the MgO [100] and [110] directions (see Fig. \ref{fig:FiguraRHEED}), show a $\pi/2$ periodicity indicating that the Fe-Ga films have a fourfold symmetry.
The images obtained for film S1-13, see Fig. \ref{fig:FiguraRHEED}c-d, are identical to those obtained for reference Fe films (not shown), suggesting a \textit{bcc} structure. The distance ratio between the main reflections of these films along the MgO [100] and [110] directions is about 1.4. These lines, which correspond to atomic-plane distances, are aligned with the reflections coming from the MgO [100] and [110] directions. RHEED images taken on films S1-24 and S1-28a show extra lines indicating surface reconstructions, which will be discussed in the next section.
In contrast, for film S2-600, Fig. \ref{fig:FiguraRHEED}i-j, the distance ratio between the main reflections along the MgO [100] and [110] directions is 0.7 ($\approx 1/1.4$), and the stronger streaks in Fig. \ref{fig:FiguraRHEED}i-j are no longer aligned with the substrate lines. These significant differences in the RHEED patterns suggest that the crystal structure of the S2-600 film is markedly different from that of the other Fe-Ga and pure Fe films, although it retains a fourfold symmetry plane on the MgO(001) surface. The latter structure corresponds to an \textit{fcc} phase, as determined by the XRD experiments reported in the following section. The epitaxial relationships are MgO[100]$\|$FeGa[110] for \textit{bcc} films and MgO[100]$\|$FeGa[100] for \textit{fcc} films.
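The 1.4 and 0.7 ratios follow from simple geometry, assuming only that the streak separation scales with the inverse in-plane row spacing of a cubic (001) surface ($a$ along $\langle100\rangle$, $a/\sqrt{2}$ along $\langle110\rangle$), and that the \textit{fcc} epitaxy is rotated by 45$^o$ with respect to the \textit{bcc} one:

```python
import math

# Streak separations scale as the inverse of the in-plane row spacing,
# so probing a cubic (001) surface along the two azimuths gives a sqrt(2) ratio.
ratio_bcc = math.sqrt(2)       # MgO[100] || FeGa[110]: ratio ~1.4
ratio_fcc = 1 / math.sqrt(2)   # MgO[100] || FeGa[100]: epitaxy rotated 45 deg, ratio inverted

print(round(ratio_bcc, 1), round(ratio_fcc, 1))  # 1.4 0.7
```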
\subsubsection{RHEED Superstructures}
The RHEED images taken on the \textit{bcc} films grown at $T_s$ = 150 $^o$C show that, upon increasing the Ga concentration, a complex superstructure appears along the two main in-plane directions. Films grown at higher temperature can display other reconstructions that are not discussed in this work because of the complexity of the subject. For instance, the c(2 $\times$ 2) reconstruction, which for the Fe-Ga alloy could be interpreted as ordering of the Ga and Fe atoms, is also observed in reference pure-iron films grown at T$_{s}>$ 220 $^o$C \cite{Oka2001}.
The RHEED images for film S1-13 do not show the presence of surface reconstructions. For film S1-24, however, two weak lines can be observed between the main reflections along the FeGa $<$110$>$ azimuth [Fig. \ref{fig:FiguraRHEED}e], whereas in the image taken along the FeGa $<$100$>$ azimuth [Fig. \ref{fig:FiguraRHEED}f] a line can be observed halfway between the stronger streaks. For film S1-28a, the reconstruction lines are absent in the image taken along the FeGa $<$100$>$ azimuth [Fig. \ref{fig:FiguraRHEED}h] but not along the FeGa $<$110$>$ azimuth [Fig. \ref{fig:FiguraRHEED}g]. This fact is highlighted in the profiles presented in Fig. \ref{fig:FiguraRHEED}k and l, showing the intensity \textit{vs.} pixel position perpendicular to the streaks of the images in Fig. \ref{fig:FiguraRHEED}c-h, taken on films with and without reconstruction along both azimuthal directions.
The intensity of the SL lines is stronger for images taken along the [110] directions than for images obtained along the [100] directions, which implies that for some films the latter remain undetected or have a small intensity. This suggests that the presence of lines along the [100] and [110] directions may have different origins.
The super-order along the [110] direction can be interpreted as due to a set of domains with basis vectors (0,3) and (1,-1) with respect to the square unit cell. This diffraction pattern has similarities with those observed in shape-memory alloys: in NiMnGa, the cubic lattice undergoes a tetragonal transformation and the structural domains of the martensite phase share the [110] direction of the austenite phase, resulting in superlattice peaks along the austenite [110] direction \cite{Luo2011}.
The weak lines observed along the [100] directions, without a spot at the middle distance of the main [110] reflections, can indicate a (2 $\times$ 1) reconstruction with domains doubling the lattice parameter \textit{a} along both in-plane [100] and [010] directions. These reflections are very weak and wide, suggesting a dilute presence of these objects with a small lateral size. We speculate that a residual small amount of Ga-Ga nearest neighbors could generate a 2\textit{a} periodicity in real space (see Fig. \ref{fig:FiguraRHEED}m), giving rise to the diffraction spots observed in Fig. \ref{fig:FiguraRHEED}. The fact that these spots are not observed for films with low Ga content may indicate that the number of Ga dimers is low and/or that they are disordered in the Fe matrix; on the other hand, films with a large Ga content (S1-28a) can also show an absence of the (2 $\times$ 1) reconstruction, suggesting that increasing the Ga concentration is detrimental to the formation of Ga-Ga nearest neighbors, as happens for the D0$_3$ structure.
\begin{figure}
\includegraphics[width=0.9\textwidth]{ImagenRheed2}
\caption{RHEED patterns of the clean MgO(001) substrate along the MgO $<$100$>$ (a) and $<$110$>$ (b) azimuths, and the corresponding images for the \textit{bcc} films (c-d) S1-13, (e-f) S1-24, (g-h) S1-28a, and (i-j) the \textit{fcc} film S2-600 (x=24). Intensity \textit{vs.} pixel position for the \textit{bcc} images along the MgO[100] (k) and MgO[110] (l) directions for films S1-13 (red line), S1-28a (black line) and S1-24 (blue line). The intensity of the lines has been shifted vertically for clarity. (m) The large round objects represent a sketch of a two-dimensional area that could explain a (2 $\times$ 1) reconstruction: alternate rows of Ga and Fe atoms. The color code is the same as in Fig. 1, and the smaller balls are added to complete a \textit{bcc} structure.}
\label{fig:FiguraRHEED}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{FiguraRayosX}
\caption{(a) XRD $\theta/2\theta$ scans for films S1-13 (black line), S1-24a (red line) and S2-700 (blue line). Reciprocal space map for films (b) S2-500 and (c) S1-24 (d) S1-28b taken around the($\overline{22}4$) reflection and (e) film S2-600 at the ($\overline{1}\overline{1}3$) one. Units in coordinate axes of the maps are multiples of 2$\pi$/a$_{MgO}$, with a$_{MgO}$ the MgO lattice parameter. The number of counts (arbitrary units) is associated with a color code located at the right side.}
\label{fig:FiguraStruc_1}
\end{figure}
\subsection{X-ray diffraction}
The samples have been studied by performing X-ray diffraction with the scattering vector normal to the film surface. Representative X-ray diagrams are presented in Fig. \ref{fig:FiguraStruc_1}a, showing the main reflection along the growth direction and in some cases superlattice peaks due to chemical order (labeled as (001) in Fig. \ref{fig:FiguraStruc_1}a).
The \textit{bcc} structure is assigned to films with the main reflection located at 2$\theta\approx$ 65$^o$, which corresponds to the \textit{bcc} (002) peak. The \textit{fcc} structure is assigned to films with a main reflection at 2$\theta\approx$ 49$^o$, corresponding to the \textit{fcc} (002) peak. For the \textit{bcc} films the superlattice peak is located at 2$\theta\approx$ 31$^o$, while for the \textit{fcc} films that peak appears at 2$\theta\approx$ 24.5$^o$. No evidence of other reflections is found in these scans. The lattice parameters obtained for both structures (see Table \ref{tab_estruc}) are within the range of values observed in the bulk alloys: 2.90 \AA\ and 3.68 \AA\ for \textit{bcc} and \textit{fcc} crystals, respectively \cite{Kawamiya}.
The A2 structure, with Fe and Ga atoms randomly distributed over the lattice positions, does not present the (001) superlattice peak.
Two \textit{bcc} phases, the B2 structure and the D0$_3$ one (including a modified D0$_3$ \cite{He2016177}), in which the Ga atoms are ordered with respect to the Fe matrix, could explain the presence of the (001) reflection. XRD scans were performed to look for superlattice peaks characteristic of the D0$_3$ \textit{bcc} symmetry, such as the (1/2 1/2 1/2) reflection, but no signal was detected. Nevertheless, the presence of the (001) peak clearly indicates chemical order. For the \textit{fcc} films, the equilibrium phase diagram, see Fig. \ref{fig:FiguraDiagramas}a, shows that an L1$_2$ structure exists in a certain range of temperature and composition. The presence of this ordered phase implies that the (001) \textit{fcc} reflection is no longer extinguished, as observed for sample S2-600, which displays a weak peak at 2$\theta\approx$ 24.5$^o$.
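The 2$\theta$ positions quoted above are consistent with Bragg's law and the bulk lattice parameters; the sketch below assumes a Cu K$\alpha$ source ($\lambda = 1.5406$~\AA), which is not stated explicitly in the text:

```python
import math

WAVELENGTH = 1.5406  # angstrom; Cu K-alpha, assumed (the source is not specified above)

def two_theta(a, h, k, l):
    """Bragg angle 2*theta in degrees for a cubic lattice of parameter a (angstrom)."""
    d = a / math.sqrt(h * h + k * k + l * l)
    return 2 * math.degrees(math.asin(WAVELENGTH / (2 * d)))

a_bcc, a_fcc = 2.90, 3.68  # bulk lattice parameters quoted in the text

for label, a, hkl in [("bcc (002)", a_bcc, (0, 0, 2)), ("bcc (001) SL", a_bcc, (0, 0, 1)),
                      ("fcc (002)", a_fcc, (0, 0, 2)), ("fcc (001) SL", a_fcc, (0, 0, 1))]:
    print(f"{label}: 2theta = {two_theta(a, *hkl):.1f} deg")
```

The computed values ($\approx$ 64, 31, 49 and 24 degrees) agree with the observed peaks to within the residual epitaxial strain of the films.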
For the S1 samples ($T_s$ = 150 $^o$C), the presence of the (001) SL peak and of the surface reconstruction seem to be associated with the increase of the Ga content, see Table \ref{tab_estruc}. Sample S1-21 is an exception to this behavior, which can be due to the fact that the intensity of its (001) peak is the lowest among the samples, so that the RHEED reconstruction signal could have been overlooked because it was below the sensitivity achieved during that \textit{in situ} experiment.
Asymmetrical scans, see Fig. \ref{fig:FiguraStruc_1}b-e, confirm the epitaxial relations observed by RHEED and allow determining the average strain in the film plane. Fig. \ref{fig:FiguraStruc_1} shows $<$224$>$ reflections for the \textit{bcc} films and the $<$113$>$ reflection of the \textit{fcc} film. These data indicate that the \textit{bcc} films undergo an in-plane extension and an out-of-plane compression to accommodate their lattice parameter to the MgO substrate. We observe that the built-in strain can be relaxed both by increasing $T_s$ and the film thickness.
\subsection{Crystal phase diagram}
Diffraction experiments reveal in some samples the presence of weak (001) superlattice reflections that indicate a chemical ordering of Ga in the Fe lattice. Combining XRD and RHEED measurements, it can be concluded that films with patterns like those shown in Fig. \ref{fig:FiguraRHEED}c-h have a \textit{bcc} structure, while films presenting patterns like those shown in Fig. \ref{fig:FiguraRHEED}i-j are cubic with an \textit{fcc} structure.
Different symbols are also used for \textit{bcc} films if the (001) superlattice peak is observed by the XRD measurement.
Considering these results, a phase diagram is presented in Fig. \ref{fig:FiguraDia}, where the symbols indicate the phase observed at a given film composition and substrate temperature $T_s$. It is also noteworthy that films grown with the same nominal composition ($x=24$) show either the \textit{fcc} or the \textit{bcc} structure, suggesting that minute fluctuations in the composition can induce the nucleation of a single cubic phase in the whole film.
\begin{figure}
\includegraphics[width=0.7\textwidth]{FiguraDiagramaFilms2}
\caption{Phase diagram obtained from the analysis of the structural measurements. Triangles stand for \textit{fcc} phases, empty squares for A2 \textit{bcc} phase and solid squares for \textit{bcc} with superlattice peaks. The shaded area shows the error in composition ($\Delta$x$\cong$ 1).}
\label{fig:FiguraDia}
\end{figure}
\subsection{Transmission Electron Microscopy}
A transmission electron microscopy image of the S1-24 film is shown in Fig. \ref{fig:FiguraStruc_2}a, with the MgO(100) and FeGa(110) planes normal to the electron beam. The image displays rows of atoms that demonstrate the crystalline character of the film and the epitaxial relationship with the substrate.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{FiguraTEM}
\caption{(a) Transmission electron microscopy image of the S1-24
film showing the MgO and FeGa layer perpendicular to the [110] direction. (b) FFT of the area marked in (a), corresponding to the FeGa film and (c) FFT of the MgO.}
\label{fig:FiguraStruc_2}
\end{figure}
The fast Fourier transform of the marked Fe-Ga area of the TEM image displayed in Fig. \ref{fig:FiguraStruc_2}a is presented in Fig. \ref{fig:FiguraStruc_2}b and corresponds to the [110] zone. The spots with the (002) and (110) labels mark the growth and the in-plane directions, respectively. At the position corresponding to the (001) reflection a double peak is clearly present. Regarding the (1/2 1/2 n/2) reflections, the signal is absent for any value of \textit{n}, either odd or even. This analysis has been performed on different areas of the film, and most of them present the same structure, although the splitting of the (001) spot varies slightly from point to point. The splitting of diffraction spots in alloys has been explained as a result of the presence of anti-phase boundaries \cite{DoublePeak},
which split the reflections due to a superlattice structure but not those due to the basic lattice. The lack of the (1/2 1/2 n/2) reflections suggests that the ordered FeGa phase does not correspond to the D0$_3$ or the modified D0$_3$ structures \cite{He2016177}, corroborating the X-ray diffraction data.
\subsection{Extended X-ray Absorption Fine Structure}
Extended X-ray Absorption Fine Structure (EXAFS) spectroscopy has been used to determine the local atomic environment of Ga and Fe by quantitative analysis of the oscillatory contribution to the X-ray absorption spectrum appearing above the Ga and Fe absorption edges.
The interested reader can find a recent review on the fundamentals of this technique in reference \cite{Rehr} and on its application to Materials Science in reference \cite{Boscherini-Chap7-Lamberti-Agostini-2012} and references therein. EXAFS can provide interatomic distances, thermal/static disorder factors (Debye-Waller) and the chemical nature of Nearest Neighbors (NN) and Next-Nearest Neighbors (NNN), giving the sample composition at the local scale. This allows one to discern whether the Ga atoms are randomly distributed in the lattice or some atomic ordering mechanism occurs. In addition, we performed EXAFS experiments with
the beam polarization vector, $\vec{\varepsilon}$, directed along the $[100]$ and $[001]$ crystallographic directions, corresponding to $\vec{\varepsilon}_{\|}$ and $\vec{\varepsilon}_{\bot}$ respectively to exploit the strong anisotropy of the EXAFS spectrum \cite{brouder_ang-dep_1990} and determine the in-plane and out-of-plane lattice parameters, the distribution of Ga-Ga pairs and possible anisotropies in the local Ga concentration.
\subsubsection{EXAFS analysis}
We fitted the experimental EXAFS spectra with theoretical EXAFS signals calculated using \textit{ab initio} phases and amplitudes.
To this end, we generated four clusters composed of 113 and 78 atoms for the \textit{bcc} and \textit{fcc} symmetries, respectively, with a radius of $6.1$~\AA, using the TKATOMS code \cite{Ravel2}. The lattice parameters were set equal to those of the pure \textit{bcc} and \textit{fcc} Fe structures, i.e. 2.870 \AA\ and 2.531 \AA\ respectively. The absorber central atom was either Fe or Ga, with exclusively Fe or Ga as scatterer atoms.
Theoretical amplitudes and phase shifts were calculated \textit{ab-initio} by \texttt{FEFF8} code \cite{ankudinov_real-space_1998} for the model clusters taking into account $\vec{\varepsilon}$.
The theoretical EXAFS signals for Fe-Ga (Fe K-edge) or Ga-Fe (Ga K-edge) systems were obtained by combining theoretical phases and amplitudes of the pure Fe and pure Ga clusters with a population factor \textit{x}.
Atomic background subtraction in the EXAFS region was performed by the \texttt{AUTOBK} code implemented in the \texttt{ATHENA} graphical interface \cite{ravel_athena_2005}. The fit of the theoretical signal to the EXAFS data was performed using the \texttt{IFEFFIT} code \cite{newville_ifeffit_2001} implemented in the \texttt{ARTEMIS} interface \cite{ravel_athena_2005}.
The Fourier-transformed (FT) EXAFS spectra are shown in Fig. \ref{EXAFS_Ga} (Ga K-edge) and Fig. \ref{EXAFS_Fe} (Fe K-edge) for some of the samples of Table \ref{tab_estruc}. The EXAFS spectrum of a pure \textit{bcc} Fe film grown epitaxially on MgO is also reported for comparison in Fig. \ref{EXAFS_Fe}.
In this study we restrict the fit procedure to the first peak of the FT spectrum, appearing in the \textit{R} region from $1$~\AA\ to $3$~\AA. For the \textit{bcc} structure, it corresponds to the contributions of the I and II coordination shells, i.e. to the atoms at the center and at the corners of the \textit{bcc} cube, respectively; due both to the small difference in their distances from the central atom and to the limited \textit{R}-space resolution associated with the limited spectrum \textit{k}-range, these two shells cannot be resolved. For the \textit{fcc} structure, the first peak corresponds to a single shell of 12 NN situated at the centers of the \textit{fcc} cube faces.
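As a consistency check on the shell assignment above, the I and II shell distances follow from the cubic geometry; the sketch below uses the bulk lattice parameters 2.90~\AA\ and 3.68~\AA\ quoted in the XRD section:

```python
import math

a_bcc, a_fcc = 2.90, 3.68  # bulk lattice parameters (angstrom)

# bcc: 8 NN at the body-centre positions (shell I), 6 NNN along the cube edges (shell II)
d_I = a_bcc * math.sqrt(3) / 2   # ~2.51 A
d_II = a_bcc                     # 2.90 A

# fcc: a single shell of 12 NN at the face centres
d_fcc = a_fcc / math.sqrt(2)     # ~2.60 A

print(f"bcc shells: {d_I:.2f} and {d_II:.2f} A, separation {d_II - d_I:.2f} A")
print(f"fcc NN shell: {d_fcc:.2f} A")
```

Both structures place the first FT peak well inside the 1--3~\AA\ fit window, and the small $\sim$0.4~\AA\ separation of the two \textit{bcc} shells is why they merge into a single peak.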
The FTs have been calculated in the range $2-12.5$~\AA$^{-1}$ and provide, at a glance, qualitative information on the Fe and Ga short-range environment. For most samples the FT spectra are similar to that of pure \textit{bcc} Fe, showing that the \textit{bcc} symmetry is maintained.
Nevertheless, when the substrate temperature reaches 400 $^o$C, the \textit{fcc} symmetry becomes possible and a phase change can take place, as also observed by diffraction and reported in the previous section: samples S2-400 and S2-600 have an \textit{fcc} phase.
\begin{figure}
\includegraphics[width=0.9\textwidth]{FT_Ga_bcc_fcc_label}
\caption{(Color online) FT amplitude of the $k^2\chi(k)$ EXAFS signals (circles) with out-of-plane ([001])
X-ray beam polarization, at the Ga $K$ edge, for several Fe-Ga alloys, together with the corresponding best-fit curves (solid lines) performed in \textit{q} space. Curves are vertically shifted for clarity.}
\label{EXAFS_Ga}
\end{figure}
\subsubsection{\textit{bcc} samples}
The fitting was performed on the Fourier-filtered $\chi(q)$ in the range $2-12.5$~\AA$^{-1}$, refining
the following parameters: the origin of the photoelectron energy, $E_{0}$; the interatomic distances
$d_{I(Fe-Fe)}$, $d_{I(Fe-Ga)}$ (Fe K-edge), $d_{I(Ga-Ga)}$, $d_{I(Ga-Fe)}$ (Ga K-edge), $d_{II\bot(Fe-Fe)}$, $d_{II\|(Fe-Fe)}$, $d_{II\bot(Fe-Ga)}$, $d_{II\|(Fe-Ga)}$ (Fe K-edge), $d_{II\bot(Ga-Ga)}$, $d_{II\|(Ga-Ga)}$, $d_{II\bot(Ga-Fe)}$, $d_{II\|(Ga-Fe)}$ (Ga K-edge); the Debye-Waller factors for the I and II shells, $\sigma_{I}^2$ and $\sigma_{II}^2$; and the Ga population factors in the I and II coordination shells, \textit{y} and \textit{z}. The $E_{0}$ best-fit values for the different samples were very close to each other ($E_{0}\cong 1$~eV), showing differences lower than 0.5 eV. The $S_{0}^{2}$ amplitude reduction factor was fixed to 0.7 for all the samples; the coordination numbers were kept fixed to their crystallographic values for the \textit{bcc} system, \textit{i.e.} $N_{I} = 8$ and $N_{II} = 6$. Note that the interatomic distances $d_{II\|(\bot)}$ correspond to the lattice parameters $a_{\|(\bot)}$ reported by XRD but, owing to the chemical selectivity of EXAFS, they are specific to Fe-Ga and Ga-Ga pairs rather than averages, as in diffraction.
The Ga concentration factor \textit{x} was split into \textit{y} and \textit{z} for the I and II shells, respectively. If \textit{y} and \textit{z} were found to be equal to \textit{x}, \textit{i.e.} the nominal Fe$_{100-x}$Ga$_{x}$ composition, the Ga distribution would be random; other values of \textit{y} and \textit{z} indicate ordering or clustering phenomena.
The possibility of a non-random or ordered Ga distribution has to be taken into account according to previous EXAFS \cite{Pascarelli2008} and XRD \cite{Du2010} results. We underline that in previous papers the Fe-Ga samples studied were bulk samples obtained by different methods. In our case, all the samples are thin epitaxial films, with growth dynamics very different from those of bulk samples and whose phase diagram has never been studied.
\paragraph{Ga K-edge}
The fit results are reported in Table \ref{tab_Ga}.
The concentration of Ga atoms in the I and II coordination shells around the Ga absorber, \textit{y} and \textit{z}, is found to be equal to 0 within the fit absolute error bar ($\Delta{y} = \Delta{z} =$ 10) for all the samples except S1-24a, in which both approach 10. No difference was observed for \textit{y} and \textit{z} regardless of the beam polarization. These values have to be compared with those found by EDX, reported in Table \ref{tab_estruc}, for which \textit{x} ranges from 12 to 28. This result shows a clear tendency of Ga to undergo an anticlustering/ordering mechanism, since the Ga concentration at the local scale is remarkably lower than the average value obtained by EDX. Imposing the presence of one Ga atom in the I or II shell, as reported in ref. \cite{Pascarelli2008}, always produced in our case an increase of the \textit{R}-factor.
This difference between the number of Ga NN atoms expected for a random distribution of Ga in the Fe lattice and the experimental value found by EXAFS can be associated with the presence of ordered B2 or D0$_3$ phases at the local scale (B2-like and D0$_3$-like), for which the identities of the NN and NNN correspond to the values expected for these phases.
The values expected for D0$_3$- and B2-ordered clusters are \textit{y} = \textit{z} = 0 and \textit{y} = 0, \textit{z} = 100, respectively. This can be easily understood by looking at Fig. \ref{fig:FiguraDiagramas}, which shows the D0$_3$ crystal structure typical of Fe$_3$Al (space group Fm3m) together with the B2 structure typical of AuCd (space group Pm3m).
In the D0$_3$ structure, A2 (pure \textit{bcc} Fe) and B2 cells are stacked alternately in each direction, forming an \textit{fcc} structure with a doubled cell parameter. Full B2 ordering could take place with an overall ratio of 50 at.\% Ga, and a D0$_3$ structure could occur with 25 at.\% Ga, which is close to the Ga content of our samples.
We can then state that, for all the samples studied, the short-range Ga distribution tends to a D0$_3$-like structure.
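The anticlustering argument can be made quantitative: for a random (A2-like) alloy, the expected number of Ga neighbors around a Ga absorber is simply the coordination number times the Ga fraction. A sketch for the nominal composition of sample S1-24a:

```python
x = 0.24          # nominal Ga fraction (EDX, sample S1-24a)
N_I, N_II = 8, 6  # bcc coordination numbers used in the fits

# Random alloy: expected Ga neighbours in each shell around a Ga absorber
n_I, n_II = N_I * x, N_II * x
print(f"random: {n_I:.1f} Ga NN, {n_II:.1f} Ga NNN")  # random: 1.9 Ga NN, 1.4 Ga NNN

# Ordered references quoted above (Ga site):
# D03: y = z = 0      -> no Ga-Ga pairs in shells I and II
# B2:  y = 0, z = 100 -> all shell-II neighbours are Ga
```

The measured \textit{y} $\approx$ \textit{z} $\approx$ 0 is thus far below the $\sim$2 Ga nearest neighbors a random alloy would give, and matches the D0$_3$-like expectation.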
Concerning the interatomic distances, the $d_{I(Ga-Fe)}$ values range from 2.50 \AA\ to 2.53~\AA, close to the values found for bulk samples of about the same concentration. $d_{II\bot(Ga-Fe)}$ and $d_{II\|(Ga-Fe)}$ range from 2.84 \AA\ to 2.88 \AA\ and from 2.88 \AA\ to 2.94 \AA, respectively, showing that $d_{II\bot(Ga-Fe)} < d_{II\|(Ga-Fe)}$ in the samples grown at $T_{s}$ = 150 $^{o}$C. This indicates the presence of a residual tensile in-plane strain due to the mismatch with the MgO substrate, which produces an out-of-plane compression and tends to relax at higher $T_s$.
\begin{table}
\begin{tabular}{c|cccccc}
\textit{Sample} & $d_{I(Ga-Fe)}$ & $d_{II\bot(Ga-Fe)}$ & $d_{II\|(Ga-Fe)}$& \textit{y}=\textit{z} &$\sigma^2_{I}$&$\sigma^2_{II}$ \\
& (\AA) & (\AA) & (\AA) & (at \% Ga) & (\AA${^2}$) & (\AA${^2}$) \\
\hline
S1-21 &2.521 &2.88 &2.92 &0 &0.008 & 0.02\\
S1-18 &2.525 &2.85 &2.91 &0 & 0.008&0.017 \\
S1-13 &2.53 &2.85 &2.90 &0 &0.007 & 0.014\\
S1-24a &2.512 & 2.87 & 2.94 & 10 & 0.008&0.025 \\
S2-500 &2.53 &2.88 &2.88 &0 &0.006 &0.014\\
S2-600 &2.53 &2.88 &2.88 &0 &0.006 &0.011\\
\hline
\end{tabular}
\caption {Best-fit results at the Ga K-edge. The perpendicular- and parallel-polarization spectra were fitted simultaneously. The interatomic distances refer to Ga-Fe pairs, since no Ga-Ga pairs were observed for any sample except S1-24a, in which the low number of Ga-Ga pairs does not allow one to determine the Ga-Ga distances, which are set equal to the Ga-Fe ones. Statistical errors, calculated from the fit covariance matrix, are equal to 0.01 \AA~ for $d_I$, 0.02 \AA~ for $d_{II\bot/\|}$ and 0.005 for $\sigma^2$.}
\label{tab_Ga}
\end{table}
\begin{table}
\begin{tabular}{c|ccccc}
\textit{Sample} & $d_{I(Fe-Fe)}$ & $d_{II\bot(Fe-Fe)}$ & $d_{II\|(Fe-Fe)}$ &$\sigma^2_{I}$ &$\sigma^2_{II}$ \\
& (\AA) & (\AA) & (\AA)& (\AA$^2$) & (\AA${^2}$) \\
\hline
S1-0 &2.467(4) &2.854(7) &2.854(8) & 0.0045(5) & 0.05(1)\\
S1-13 &2.46 &2.84 &2.88 & 0.005 & 0.014\\
S1-18 &2.47 &2.85 &2.91 & 0.005&0.014 \\
S1-21 &2.48 &2.87 &- & 0.004 & 0.018\\
S1-24a &2.49 & 2.87 & 2.92 & 0.008&0.019 \\
S2-500 &2.47 &2.93 &2.91 & 0.006 &0.019\\
S2-600 &2.471 &2.88 &2.88 &0.007 &0.019\\
\hline
\end{tabular}
\caption{Best fit results at the Fe K-edge. The perpendicular and parallel polarization spectra were fitted simultaneously. The interatomic distances $d_{II\bot/\|}$ refer to Fe-Fe pairs. Statistical errors, calculated from the fit covariance matrix, are 0.01~\AA\ for $d_I$, 0.02~\AA\ for $d_{II\bot/\|}$, 10 for \textit{z} and 0.005~\AA$^2$ for $\sigma^2$. For sample S1-21, $d_{II\|(Fe-Fe)}$ is not reported because the spectrum with $\vec{\varepsilon}_{\|}$ could not be recorded.}
\label{tab_Fe}
\end{table}
\paragraph{Fe K-edge}
The analysis of the Fe K-edge should confirm the findings at the Ga K-edge, suggesting atomic ordering of Ga. In a D0$_3$ ordered structure, the iron atoms occupy two different Wyckoff positions in the lattice, 8(c) and 4(b). An Fe atom at the 8(c) site has 4 Ga atoms out of 8 in the I shell and 0 Ga atoms in the II shell, while an atom at the 4(b) site has 0 NN Ga atoms and 6 out of 6 Ga as NNN. Therefore, the average values of \textit{y} and \textit{z} would be equal to 33.
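This site-weighted average can be checked with a few lines (an illustrative sketch, not part of the analysis code; the Fe multiplicities of 8 and 4 per D0$_3$ cell are the standard Wyckoff ones):

```python
# Site-weighted average Ga fractions seen by Fe in D0_3 Fe3Ga:
# 8 Fe atoms on 8(c) (4 of 8 NN are Ga, 0 of 6 NNN) and
# 4 Fe atoms on 4(b) (0 of 8 NN are Ga, 6 of 6 NNN).
sites = {
    # site: (Fe multiplicity, Ga fraction in shell I, Ga fraction in shell II)
    "8(c)": (8, 4 / 8, 0 / 6),
    "4(b)": (4, 0 / 8, 6 / 6),
}
n_fe = sum(m for m, _, _ in sites.values())
y = 100 * sum(m * f1 for m, f1, _ in sites.values()) / n_fe  # avg at.% Ga, shell I
z = 100 * sum(m * f2 for m, _, f2 in sites.values()) / n_fe  # avg at.% Ga, shell II
print(round(y), round(z))  # 33 33
```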
On the other hand, the fit sensitivity to \textit{y} and \textit{z} at the Fe K-edge is low, as also observed by other authors \cite{Pascarelli2008},
and it is not possible to give reliable values for these parameters.
As for the Ga K-edge, no difference was observed depending on polarization.
The fit results are reported in Table \ref{tab_Fe}. The Fe-Fe interatomic distances have been refined keeping the correspondent Fe-Ga distances fixed to the values found at the Ga K-edge ($d_{I(Ga-Fe)}$, $d_{II\bot(Ga-Fe)}$ and $d_{II\|(Ga-Fe)}$, see Table \ref{tab_Ga}).
The \textit{y} and \textit{z} values are, for all the samples, less than or equal to 30 $\pm$ 20. Therefore, for the samples with $x=24$, the best-fit results at the Fe K-edge are compatible with the \textit{y} and \textit{z} values expected for the D0$_3$ structure.
\begin{figure}
\includegraphics[width=0.8\textwidth]{FT_Fe_bcc_fcc_label}
\caption{ (Color online) FT amplitude of $k^2\chi(k)$ EXAFS signals (circles) with out-of-plane ([001])
X-ray beam polarizations, at the Fe $K$ edge, for several Fe-Ga alloys, together with the corresponding best-fit curves (solid lines) in \textit{q} space. Curves are vertically shifted for clarity.}
\label{EXAFS_Fe}
\end{figure}
\begin{table}
\begin{tabular}{cc|ccc|ccc}
& &Ga K-edge & & &Fe K-edge & & \\
\hline \hline
\textit{Sample} & $T_{s}$ & $d_{I(Ga-Fe/Ga)}$ & \textit{y} & $\sigma^2_{I}$ & $d_{I(Fe-Fe/Ga)}$ & \textit{y} & $\sigma^2_{I}$ \\
 & ($^o$C) & (\AA) & (at \% Ga) & (\AA$^2$) & (\AA) & (at \% Ga) & (\AA$^2$) \\ \hline
S2-400 & 400 &2.60 &0 &0.008 &2.56/2.59 &40 & 0.009 \\
S2-600 & 600 &2.60 & 0 &0.007 & 2.57/2.60 &40 & 0.008 \\
\hline
\end{tabular}
\caption{Best fit results at the Ga and Fe K-edges for the \textit{fcc} Fe-Ga samples. Statistical errors, calculated from the fit covariance matrix, are 0.01~\AA\ for $d_I$, 13 for \textit{y} and 0.005~\AA$^2$ for $\sigma^2$.}
\label{tab_fcc}
\end{table}
\subsubsection{fcc samples}
The fit approach was the same as for the \textit{bcc} samples, but in this case the first FT peak is due to the contribution of a single first coordination shell of 12 NN atoms, to which the fit is restricted. The lattice parameter now corresponds to the interatomic distance of the II coordination shell, whose contribution is the low-intensity FT peak at about 3.5~\AA. Polarized EXAFS spectra parallel and perpendicular to the surface were practically identical, showing that neither tetragonal deformation nor local composition anisotropy is present.
The fitting was performed on the Fourier filtered $\chi(q)$ in the range $2-12.5 $~\AA$^{-1}$ refining
the following parameters: the origin of photoelectron energy $E_{0}$; the interatomic distances
$d_{I(Fe-Fe)}$, $d_{I(Fe-Ga)}$ (Fe K-edge), $d_{I(Ga-Ga)}$, $d_{I(Ga-Fe)}$ (Ga K-edge), the Debye-Waller factors for the I shell, and the I-shell Ga population factor \textit{y}. The $E_{0}$ best fit values for the different samples were very close to each other ($E_{0}\cong$ 3 eV), showing differences lower than 0.5 eV. The $S_{0}^{2}$ amplitude reduction factor was fixed to 0.7 for all the samples; the coordination number was kept fixed to its crystallographic value $N_{I} = 12$.
The results at the Ga and Fe K-edge are reported in Table \ref{tab_fcc}.
Regarding the Ga distribution, we observe at the Ga K-edge a Ga anticlustering mechanism analogous to that observed for the \textit{bcc} samples: no Ga atoms are found around the Ga absorbers within the fit error bar on \textit{y} ($\Delta y$ = 10). At the Fe K-edge, the Ga population is found to be $\approx$ 40, which is higher than the nominal Ga concentration \textit{x} = 24. This is not far from the value expected for the \textit{fcc} L1$_2$ ordered structure compatible with the composition of these samples, in which 4 Ga atoms out of 12 are expected in the Fe I coordination shell (\textit{y} = 33) and no Ga atoms are foreseen in the Ga I shell.
Regarding interatomic distances, we find $d_{I(Fe-Fe)}$ = 2.56--2.57~\AA\ and $d_{I(Fe-Ga)}$ = 2.59--2.60~\AA\ = $d_{I(Ga-Fe)}$.
The Fe-Fe and Fe-Ga interatomic distances thus differ by about 0.03~\AA, giving an average \textit{fcc} \textit{a} parameter of 3.64~\AA, in fair agreement with the values found by XRD.
\subsection{Molecular dynamics results}
Molecular dynamics (MD) calculations were performed using the VASP code \cite{KRESSE199615} to obtain the equilibrium distribution of Fe and Ga atoms in the \textit{bcc} lattice and to help interpret the EXAFS results. A (4 $\times$ 4 $\times$ 3)\textit{a} supercell containing 19 Ga atoms and 77 Fe atoms, all of them initially randomly distributed on the \textit{bcc} structure, was used in the calculation. VASP performs DFT-based first-principles calculations with the generalized gradient approximation (GGA) for exchange and correlation \cite{PhysRevB.23.5048}, and the plane-wave basis set is built on projector augmented wave (PAW) pseudopotentials describing the core electrons \cite{PhysRevB.50.17953}. The valence states of Fe and Ga are [Ar]-3p$^{6}$3d$^{7}$4s$^{1}$ and [Ar]-3p$^{6}$3d$^{10}$4s$^{2}$4p$^{1}$, respectively, and the calculation was done at the $\Gamma$ point only, due to computational limitations. The MD simulation was done in the canonical ensemble using the Nos\'e-Hoover algorithm, which controls the frequency of the temperature oscillations during the simulation \cite{doi:10.1063/1.447334,PhysRevA.31.1695}. During the annealing process, up to 2000 K, the molten state was obtained at around 0.5 ps, with a stable energy of $\approx$ -645 eV/cell. From this molten state, the system was cooled down to 250 K at a rate of 1 K/fs. Finally, the structure was optimized at 0 K using a RMM-DIIS quasi-Newton method \cite{PULAY1980393}, with a residual force of $\approx$ 1-2 mRy/a.u. per atom and $\approx$ 0.1 meV for the energy convergence. The final optimized cluster was analyzed to compare the Ga-Fe distribution with the results obtained by EXAFS.
MD calculations were carried out for a Ga concentration x $\approx$ 20. In order to compare the MD calculations with the EXAFS results, we analyze the number of Ga-Ga pairs in the first and second shells, N$_{PI}$ and N$_{PII}$ respectively, in the optimized MD cluster. If the
Ga atoms were randomly distributed in the Fe lattice, one would expect N$_{PI}$ = (8 $\times$ 0.2) $\times$ 19 $\approx$ 30 and
N$_{PII}$ = (6 $\times$ 0.2) $\times$ 19 = 22.8.
If we count the Ga-Ga pairs in the cluster obtained by the MD calculation, we obtain N$_{PI}$ = 16 and N$_{PII}$ = 10, \textit{i.e.}, values much lower than expected for a random Ga distribution, indicating a tendency to anticlustering of the Ga atoms.
To compare with the \textit{y} and \textit{z} values reported in Table \ref{tab_Ga}, we normalize N$_{PI}$ and N$_{PII}$ to the total number of pairs that the Ga atoms have in the I and II shells, 8 $\times$ 19 = 152 and 6 $\times$ 19 = 114 respectively, and multiply by 100.
The Ga concentrations (in Ga relative units) in the I and II shells are therefore n$_{PI}$ = (16/152) $\times$ 100 = 10.5 and n$_{PII}$ = (10/114) $\times$ 100 = 8.8 for the MD cluster, and n$_{PI}$ = (30/152) $\times$ 100 = 19.7 and n$_{PII}$ = (22.8/114) $\times$ 100 = 20, as expected, for a random distribution of Ga atoms. The EXAFS values agree, within the error bars, with the concentrations obtained by the MD calculations, which confirm the existence of a local Ga ordering in the epitaxial Fe-Ga films.
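The pair-count arithmetic above can be reproduced with a short script (an illustrative sketch only; the pair counts and the x $\approx$ 0.2 concentration are the values quoted in the text):

```python
# Ga-Ga pair statistics: 19 Ga atoms, concentration x ~ 0.2, with
# 8 NN sites in shell I and 6 NNN sites in shell II of the bcc lattice.
N_GA, X = 19, 0.2
SHELL_SITES = {"I": 8, "II": 6}

def random_pairs(shell):
    """Expected Ga-Ga pairs for a random Ga distribution."""
    return SHELL_SITES[shell] * X * N_GA

def concentration(n_pairs, shell):
    """Pair count normalized to all Ga-centred bonds in the shell, in %."""
    return 100.0 * n_pairs / (SHELL_SITES[shell] * N_GA)

# Counts found in the optimized MD cluster vs the random reference
n_md = {"I": 16, "II": 10}
for shell in ("I", "II"):
    print(shell,
          round(concentration(n_md[shell], shell), 1),
          round(concentration(random_pairs(shell), shell), 1))
# I 10.5 20.0
# II 8.8 20.0
```

A random distribution always normalizes to 100x = 20, so the MD values of roughly 10.5 and 8.8 directly expose the anticlustering.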
We note that for the composition studied here by MD, x $\approx$ 20, the equilibrium phase diagram (see Fig. \ref{fig:FiguraDiagramas}a) indicates a coexistence of the A2 and L1$_2$ phases. The calculation, however, is performed for the \textit{bcc} structure; this is justified by the metastable equilibrium observed in the films prepared at low temperature reported here, since the volume of the film is constrained by the substrate.
\section{Discussion}
\subsection{bcc films structure}
The EXAFS results, supported by MD calculations, show that Ga has a clear tendency to anticlustering, \textit{i.e.}, the Ga atoms tend to stay as far as possible from each other in the FeGa lattice, leading to a local D0$_3$ ordering, which is the atomic arrangement that minimizes the number of Ga-Ga pairs.
Nevertheless, EXAFS gives a picture of the short range order that, in our case, is limited to a distance from the absorber of a single lattice parameter, since a quantitative analysis can be performed only on the first FT peak.
The evidence of D0$_3$ long range order should be provided by XRD or TEM, but this is not the case, as reported in the previous section, since no (1/2 1/2 1/2) reflections have been observed. On the other hand, some samples (see Table \ref{tab_estruc}) present the (001) reflection, which is a signature of chemical ordering in a \textit{bcc} structure. The B2 structure requires a 50\% Ga content, far away from the values obtained by EDX (see Table \ref{tab_estruc}), while the D0$_3$ structure would give rise to diffuse (1/2 1/2 n/2) peaks, not found in the TEM and XRD experiments.
These results suggest that the films adopt a D0$_3$-like structure, meaning that only Fe atoms are present in the Ga I and II shells, but without that long range order.
If we compare our results with previous literature on bulk slow-cooled or quenched FeGa samples \cite{Du2010}, in which a clear D0$_3$ ordered phase was observed, we can state that epitaxial growth at low temperature appears to reduce the occurrence of long-range D0$_3$ ordering. This gives epitaxial growth a further advantage over other techniques, since the formation of the D0$_3$ phase is known to be detrimental to the magnetostriction strength \cite{Srisukhumbowornchai2002}.
Also, Pascarelli et al. \cite{Pascarelli2008} reported EXAFS results on a melt-spun FeGa sample in which a similar anticlustering mechanism was observed at the Ga K-edge, whereas a random distribution of Ga was found at the Fe K-edge,
suggesting a less clear atomic arrangement of the Ga atoms in contrast with the samples of Ref. \cite{Du2010}.
These authors found no Ga atoms in the I coordination shell, as in our case, and 1 Ga atom out of 6 in the II one. This result was consistent with the formation of Ga-Ga pairs able to enhance the MS strength. We must note that in our case all the samples are thin epitaxial layers, with growth dynamics different from those of bulk samples. Our results show that for most of the samples the probability of Ga-Ga pair formation is less than 10\% (\textit{i.e.}, the fit error bar on \textit{y}), and for one of them (S1-24a) it is of the same order as in the cited paper.
A systematic study comparing structural and MS properties in thin Fe-Ga films should be carried out to prove the role of Ga-Ga pairs often invoked but not actually verified so far.
This tendency to local ordering is observed also for the \textit{fcc} samples, in agreement with the appearance of the (001) reflection in the complementary XRD experiments reported in the previous section.
It is consistent with the short-range-ordering mechanism observed in the \textit{bcc} samples, following the anticlustering tendency of the Ga atoms in the FeGa lattice.
Our results also show that the driving force of the ordering mechanism is not the residual mismatch strain, since samples with different strain content show the same local Ga ordering.
\subsection{fcc films growth}
The experimental data reported here show that \textit{fcc} L1$_2$ films with the (001) growing plane can be obtained directly onto the MgO(001) surface, without the need of time-consuming heat treatments, despite the larger misfit between MgO and the \textit{fcc} structure. In other systems with a large mismatch between the lattice parameters of over-layer and substrate, epitaxial films have also been obtained, for instance Ni(100) \textit{fcc} films onto MgO(100) (misfit $\sim$ 15\%) if the Ni is deposited with the substrate heated from 100 $^o$C to 200 $^o$C \cite{Ni_MgO}, or Nb on Al$_2$O$_3$ \cite{0305-4608-12-6-001}.
Rather than being due to the matching condition with the substrate, this heteroepitaxy is partially explained as the result of an adjustment between supercells involving a different number of unit cells for film and substrate. Here, for the S2-600 film with \textit{a} = 3.697~\AA, a cell with 8 atomic distances (1.848~\AA\ $\times$ 8 = 14.79~\AA) facing an MgO cell with seven atomic distances (2.107~\AA\ $\times$ 7 = 14.75~\AA) gives a small effective mismatch of $\sim$ 0.3\%.
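The coincidence-supercell estimate can be reproduced as follows (an illustrative sketch; the spacings are those quoted above, with the film spacing taken as half the \textit{fcc} parameter, $a/2 = 3.697/2$~\AA):

```python
# Effective mismatch of an 8-spacing fcc Fe-Ga supercell on a 7-spacing MgO cell.
d_film = 3.697 / 2   # Angstrom, fcc Fe-Ga atomic distance in the (001) plane
d_mgo = 2.107        # Angstrom, MgO atomic distance

def effective_mismatch(n_film, n_substrate):
    """Relative mismatch of n_film film spacings vs n_substrate MgO spacings."""
    film = n_film * d_film
    substrate = n_substrate * d_mgo
    return (film - substrate) / substrate

m = effective_mismatch(8, 7)
print(f"{100 * m:.2f} %")  # ~0.26 %, i.e. the ~0.3 % quoted above
```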
\section{Conclusions}
We have shown that the Fe-Ga films grown at low temperature (150 $^o$C) present a \textit{bcc} structure with anticlustering ordering of the Ga atoms and chemical superorder for compositions above \textit{x} $\approx$ 17, without the presence of long-range D0$_3$ order. The equilibrium L1$_2$ crystal phase is obtained for a composition around \textit{x} = 25 for $T_s$ larger than 400 $^o$C. This information is summarized in a metastable composition \textit{vs.} growth temperature phase diagram.
\section{Acknowledgments}
This work has been supported by Spanish MICINN (Grants No. MAT2012-31309 and MAT2015-66726-R), MECD (Programa Campus de Excelencia Internacional Iberus), Gobierno de Arag\'{o}n (Grant E10-17D) and Fondo Social Europeo. The authors would like to acknowledge the use of Servicio General de Apoyo a la Investigaci\'{o}n-SAI, Universidad de Zaragoza. We acknowledge the use of the microscopy infrastructure available in the Laboratorio de Microscop\'{\i}as Avanzadas (LMA) at Instituto de Nanociencia de Arag\'{o}n (University of Zaragoza, Spain). We also thank Dr. Antonio Bad\'{\i}a at Universidad de Zaragoza for the use of the SIE cluster in the MD calculations. The XANES experiments were performed on beamline BM30B at the European Synchrotron Radiation Facility (ESRF), Grenoble, France and on beamline XAFS at Elettra, Trieste, Italy. We are grateful to Isabelle Kieffer at the ESRF (beamline BM30B) and Giuliana Aquilanti and Clara Guglieri at Elettra (beamline XAFS) for providing assistance in using the synchrotron facilities. AB thanks MINECO for the Ph.D. grant BES-2016-076482.
\bibliographystyle{unsrt}
\section{Bibliography}
\section{INTRODUCTION}\label{sec:intro}
Stochastic gradient descent (SGD) is the most popular optimization method to train analytics models, e.g., the back-propagation algorithm for deep neural networks~\cite{deep-nets-sgd}, in a wide variety of application domains ranging from image~\cite{cvpr-sgd} and speech~\cite{speech-sgd} recognition to finance~\cite{finance-sgd}. SGD is implemented in one form or another by every modern analytics system, including Google's Brain~\cite{google-brain}, Microsoft's Project Adam~\cite{project-adam} and Vowpal Wabbit~\cite{vowpal-wabbit}, IBM's SystemML~\cite{systemml}, Pivotal's MADlib~\cite{madlib}, and Spark's MLlib~\cite{mllib}. Since these billion-dollar enterprises depend on processes that rely on SGD, it is important to understand its optimal behavior and limitations on modern computing architectures.
\textbf{Motivation.}
Over the past decade, CPU design has been moving towards highly-parallel architectures with tens of cores on a die. The culmination of this trend is best exemplified by current Graphics Processing Units (GPUs), which have thousands of cores\footnote{\url{https://en.wikipedia.org/wiki/Nvidia_Tesla}}. GPUs are assumed to be the ideal platform for analytics model training due to the compute-intensive nature of the task. This is reflected in the extensive GPU support across many analytics frameworks, e.g., Caffe\footnote{\url{http://caffe.berkeleyvision.org/}}, TensorFlow\footnote{\url{https://www.tensorflow.org/}}, MXNet\footnote{\url{https://mxnet.incubator.apache.org/}}, BIDMach\footnote{\url{https://github.com/BIDData/BIDMach}}, SINGA~\cite{singa}, Theano\footnote{\url{https://github.com/Theano/Theano}}, and Torch\footnote{\url{http://torch.ch/}}. Published results that compare CPU and GPU implementations, however, do not support the assumption that the GPU is always superior~\cite{Intel-GPU-CPU-comp,google-brain,tensorflow}. Quite the opposite: it is often the case that the CPU optimizer outperforms the GPU implementation, even though the degree of parallelism is much lower. A possible reason is the choice of the SGD algorithm. The asynchronous Hogwild family of algorithms~\cite{hogwild,bismarck,dimm-witted,hogbatch,buckwild,hogwild-disk} are the preferred SGD implementation on multi-core CPUs due to their simplicity -- the parallel code is identical to the serial one, without any synchronization primitives -- and near-linear scaling across a variety of analytics tasks~\cite{RRTB12,LWR+14,DJM13}. The SGD solutions on GPU resort to a synchronous implementation in which only the highly-optimized linear algebra kernels are offloaded to the GPU. The reasons behind this strategy are the original role GPUs had as accelerators for certain classes of computations and the intricate data access pattern incurred by asynchronous execution.
The algorithmic difference in model update strategy -- which is an open debate both in theoretical circles~\cite{sync-vs-asynch-sgd,asynch-sgd-delay-comp} and practice~\cite{tensorflow} -- makes a direct comparison between SGD on CPU and GPU challenging. As far as we know, there is no work that performs an in-depth comparison across architectures and SGD algorithms. Given its central role in analytics training, we believe it is imperative to identify which SGD algorithm performs better on which architecture and what type of data.
\textbf{Problem.}
We briefly present the setup for model training using SGD. The input data is a matrix in $\mathbb{R}^{N\times d}$ containing $N$ $d$-dimensional training examples. The goal is to find a $d$-dimensional vector that minimizes the (convex) loss function over the examples specific to each model. SGD makes several complete passes over the input data and updates the model one or several times in each pass. SGD performance is measured by the time it takes to reach the loss function minimum. This depends both on the number of passes over the data and the time per pass. In parallel SGD, the input data is partitioned across threads which share a single common model. While this reduces the time per pass, it has the potential to increase the number of passes. The overall effect depends on several factors---including parallelization strategy, model update, data characteristics, and loss function type.
\textbf{Contributions.}
In this paper, \textit{we perform the first comprehensive study of parallel SGD for training generalized linear models that investigates the combined impact of three axes -- computing architecture, model update strategy, and data sparsity -- on three measures---hardware efficiency, statistical efficiency, and overall time to convergence}. We allocate a significant part of the study to the design of a novel asynchronous SGD algorithm on GPU -- a missing topic in the existing literature -- that explores exhaustively possible data organization, access, and replication alternatives. To this end, \textit{we introduce an optimized asynchronous SGD algorithm for GPU that leverages architectural characteristics such as warp shuffling and cache coalescing for data and model access}.
\begin{figure*}[htbp]
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.7\textwidth]{study-axes}
\caption{Exploratory axes.}
\label{fig:study-axes}
\end{minipage}
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.7\textwidth]{perf-axes}
\caption{Performance axes.}
\label{fig:perf-axes}
\end{minipage}
\end{figure*}
\textbf{Exploratory axes.}
Figure~\ref{fig:study-axes} depicts the space of the three axes studied in this work. On the \textit{computing architecture} axis, we consider multi-core CPUs with Non-Uniform Memory Access (NUMA) and many-core GPUs with wide Single Instruction Multiple Data (SIMD) processing units. The specific representatives of the two architectures we use in our work are a dual-socket machine with two 14-core 28-thread Intel Xeon E5-2660 v4 CPUs (56 threads overall, 256 GB memory) and an NVIDIA Tesla K80 GPU with 2496 cores, a 32-wide SIMD unit, and 24 GB memory. Both of them are top-of-the-line representatives in their architectural class. The \textit{model update} strategies we consider are synchronous and asynchronous. Synchronous updates follow a transactional semantics and allow a single thread to update the model. While this strategy limits the range of parallel execution inside the SGD algorithm, it is suitable for batch-oriented high-throughput GPU processing. In the asynchronous strategy, multiple threads update the model concurrently. Our focus is on the Hogwild algorithm which ignores any synchronization to the shared model. \textit{Data sparsity} represents the third axis. At one extreme, we have dense data in which there is a non-zero entry for each feature in every training example. This allows for a complete dense 2-D matrix representation. When the model is large, it is often the case that the examples have only a few non-zero features. A sparse matrix format, e.g., Compressed Sparse Row (CSR), is the only alternative that fits in memory in this case.
\noindent
Out of the eight possible combinations, a limited set is implemented in practice---the full circles in Figure~\ref{fig:study-axes}. The majority of the GPU solutions implement synchronous model updates over dense data, while the CPU implementations use asynchronous Hogwild which is suited for sparse data---the darker circles in the figure. In this paper, we explore the complete space and map the remaining combinations. \textit{We design an efficient Hogwild algorithm for GPU that is carefully tuned to the underlying hardware architecture, the SIMD execution model, and the deep GPU memory hierarchy.} The considerably larger number of threads available on the GPU and their complex memory access pattern pose significant data management challenges in achieving an efficient implementation with optimal convergence. The \textit{Hogwild GPU kernel} considers data access path and replication strategies for data and the model to identify the optimal configuration for a given task-dataset input. Moreover, we introduce specific optimizations to enhance the coalesced memory access for SIMD processing. \textit{Synchronous SGD on CPU} turns out to be a simplification of the GPU solution. Instead of executing the linear algebra kernels on the GPU, the same functions are invoked on the CPU. This is easily achieved by using computational libraries with dual CPU and GPU support, e.g., ViennaCL\footnote{\url{http://viennacl.sourceforge.net/}}. Since the CPU functions do not require data transfer, they have the potential to outperform a sequence of GPU kernels that are not cross-optimized. The massive number of powerful threads in our testbed CPU provides an additional boost in performance.
\textbf{Performance axes.}
Figure~\ref{fig:perf-axes} depicts the three axes across which we measure the performance of the SGD algorithms. The \textit{hardware efficiency} measures the average time to do a complete pass -- or iteration -- over the training examples. Ideally, the larger the number of physical threads, the shorter an iteration takes since each thread has less data to work on---thus, higher hardware efficiency. In practice, though, this holds only when there is no interaction between threads---even then, the size and location of data can be limiting factors. In asynchronous SGD, however, the model is shared by all (or a group of) the threads. This poses a difficult challenge both in the CPU and GPU case. For CPU, the implicit cache coherency mechanism across cores can decrease the hardware efficiency dramatically. Non-coalesced memory accesses inside a SIMD unit have the same effect on GPU. The \textit{statistical efficiency} measures the number of passes over the data until a certain value of the loss function is achieved, e.g., within $1\%$ of the minimum. This number is architecture-independent for synchronous model updates, which are executed at the end of each pass. In the case of asynchronous model updates during a data pass, however, the number and order of updates may have a negative impact on the statistical efficiency. The third performance axis is represented by the \textit{time to convergence}. This is, essentially, the product of the hardware and statistical efficiency. The reason we include it as an independent axis is that there are situations when two algorithms have reversed hardware and statistical efficiency -- algorithm A has better hardware efficiency than algorithm B but worse statistical efficiency -- and only the time to convergence allows for a full comparison. Such a case is common for synchronous and asynchronous updates and for CPU and GPU execution, respectively.
\textbf{Summary of results.}
We organize the results based on the components of the exploratory axes. In the following, we present only the main findings, while we discuss the details in the experimental evaluation (Section~\ref{sec:experiments}):
\begin{compactitem}
\item For synchronous SGD, GPU is always faster than parallel CPU in time per iteration and, thus, in time to convergence. The gap between GPU and parallel CPU is more than 5X on sparse data, while super-linear speedup of more than 400X is achieved over sequential CPU on cached data.
\item The optimized asynchronous Hogwild algorithm we design for GPU uses different data access path + model replication + data replication configurations for different tasks and datasets---identifying the optimal configuration is highly-dependent on all these properties. However, even with these optimizations, asynchronous GPU outperforms (parallel) CPU in time to convergence only in a limited number of situations despite better hardware efficiency---the reason is poor statistical efficiency due to heavy model update conflicts.
\item While GPU is the optimal architecture for synchronous SGD and CPU is optimal for asynchronous SGD, choosing the better of synchronous GPU and asynchronous CPU is task- and dataset-dependent.
\item Our SGD implementations always outperform the synchronous solutions from TensorFlow and BIDMach in time per iteration, number of iterations to convergence, and time to convergence.
\end{compactitem}
\textbf{Outline.}
In Section~\ref{sec:grad-desc}, we introduce SGD and classify it according to the exploratory axes. The parallel architecture of multi-core CPUs and modern GPUs is presented in Section~\ref{sec:architecture}. The SGD implementation and its optimizations are discussed in Section~\ref{sec:synch-sgd} -- synchronous -- and Section~\ref{sec:asynch-sgd}---asynchronous. Section~\ref{sec:experiments} presents the experimental results. Related work is discussed in Section~\ref{sec:rel-work}, while Section~\ref{sec:conclusions} concludes the paper.
\section{STOCHASTIC GRADIENT DESCENT}\label{sec:grad-desc}
Consider the following model training problem with a linearly separable objective function:
\begin{equation}\label{eq:optim-form}
\min_{\vec{w} \in \mathbb{R}^{d}} \Lambda(\vec{w}), \qquad \Lambda(\vec{w}) = \sum_{i=1}^{N} f\left(\vec{w}; \vec{x_{i}}, y_{i}\right)
\end{equation}
in which a $d$-dimensional vector $\vec{w}$, $d \geq 1$, i.e., the model, has to be found such that the objective function is minimized. The constants $\vec{x_{i}}$ and $y_{i}$, $1 \leq i \leq N$, correspond to the feature vector of the $\text{i}^{\text{th}}$ data example and its scalar label, while $f$ is the loss. For example, the loss corresponding to binary classification with logistic regression (LR) and support vector machines (SVM) is $f_{\textit{LR}}(\vec{w}) = \log\left(1+e^{-y_{i} \vec{x}_{i} \cdot \vec{w}}\right)$ and $f_{\textit{SVM}}(\vec{w}) = \textit{max}\left(0, 1 - y_{i} \vec{x}_{i} \cdot \vec{w}\right)$, respectively.
SGD is an iterative optimization algorithm for solving this class of model training problems. It starts from an arbitrary model which is updated iteratively based on a batch of $B$ random training examples $\vec{\vec{X}}_{k}$ and their corresponding scalar labels $\vec{Y}_{k}$. The updated model is computed by moving along the opposite direction of the loss function gradient $\overrightarrow{\nabla f}$. The gradient is a $d$-dimensional vector consisting of entries given by the partial derivative with respect to each dimension, i.e., $\overrightarrow{\nabla f}(\vec{w}) = \left[\frac{\partial f(\vec{w})}{\partial{w_{1}}},\dots,\frac{\partial f(\vec{w})}{\partial{w_{d}}}\right]$. For example, the gradients for LR and SVM are defined as:
\begin{equation*}
\frac{\partial f_{\textit{LR}}(\vec{w})}{\partial{w_{j}}} = x_{ij} \left( -y_{i} \frac {e^{-y_{i} \vec{x}_{i} \cdot \vec{w}}} {1+e^{-y_{i} \vec{x}_{i} \cdot \vec{w}}} \right)
\hspace*{2cm}
\frac{\partial f_{\textit{SVM}}(\vec{w})}{\partial{w_{j}}} =
\left\{ \begin{array}{rl}
-y_{i} x_{ij}, & \mbox{if} \hspace*{0.25cm} y_{i} \vec{x}_{i} \cdot \vec{w} < 1 \\
0, & \mbox{otherwise}
\end{array} \right.
\end{equation*}
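For concreteness, the losses and gradients above can be written as a short sketch (illustrative only, assuming labels $y \in \{-1,+1\}$ as in the formulas; not the optimized implementations evaluated later):

```python
import numpy as np

def lr_loss_grad(w, x, y):
    """Logistic regression: loss log(1 + exp(-y x.w)) and its gradient."""
    m = y * np.dot(x, w)                  # margin
    s = np.exp(-m) / (1.0 + np.exp(-m))   # weight of the example in the gradient
    return np.log1p(np.exp(-m)), x * (-y * s)

def svm_loss_grad(w, x, y):
    """SVM hinge loss max(0, 1 - y x.w) and its subgradient."""
    m = y * np.dot(x, w)
    if m < 1:
        return 1.0 - m, -y * x
    return 0.0, np.zeros_like(x)
```

Summing these per-example gradients over a batch yields the gradient estimate $\vec{g}$ used in the SGD update below.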
\subsection{Batch and Incremental SGD}\label{ssec:sgd:b-i}
\begin{figure*}[h]
\begin{minipage}{.42\textwidth}
The step size $\alpha$, the batch size $B$, and the number of iterations or epochs $t$ are parameters specific to SGD. They are known as hyper-parameters of the model training problem, while the entries of $\vec{w}$ are typically called the parameters of the model. Depending on the value of $B$, SGD can be classified into incremental ($B=1$), mini-batch ($1 < B < N$), and batch ($B=N$). While mini-batch is standard SGD, incremental and batch correspond to extreme cases of $B$ which allow for an essential data access optimization: random access to the training examples is replaced by sequential access, which is orders of magnitude more efficient---especially for massive data that do not fit in the primary storage. Moreover, they discard the batch size hyper-parameter, thus simplifying the optimization algorithm.
\end{minipage}
\hfill
\begin{minipage}{.54\textwidth}
\underline{\textbf{Algorithm 1} Stochastic Gradient Descent (SGD)}
\algsetup{linenodelimiter=.}
\begin{algorithmic}[1]\label{alg:sgd}
\REQUIRE ~~\\
Training examples $\vec{\vec{X}} \in \mathbb{R}^{N\times d}$ and their labels $\vec{Y} \in \mathbb{R}^{N}$\\
Loss function $f$ and its gradient $\overrightarrow{\nabla f}$\\
Initial model $\vec{w} \in \mathbb{R}^{d}$ and step size $\alpha \in \mathbb{R}$\\
Number of epochs $t$ and batch size $B$
\FOR {$k=1$ \textbf{to} $t$}
\item[\hspace*{.9cm}\textbf{\underline{OPTIMIZATION EPOCH}}]
\STATE Select a random subset $\vec{\vec{X}}_{k} = \{ \vec{x}_{i_{1}},\dots,\vec{x}_{i_{B}} \}$ of $B$\\
examples and their labels $\vec{Y}_{k} = \{ y_{i_{1}},\dots,y_{i_{B}} \}$
\STATE Compute gradient estimate: $\vec{g} \leftarrow \sum_{\vec{\vec{X}}_{k}, \vec{Y}_{k}} {\overrightarrow{\nabla f} \left(\vec{w}\right)}$
\STATE Update model: $\vec{w} \leftarrow \vec{w} - \alpha \vec{g}$\label{alg:line:model-depend}
\ENDFOR
\RETURN $\vec{w}$
\end{algorithmic}
\hfill
\end{minipage}
\end{figure*}
\noindent
The optimization epoch in the SGD algorithm becomes a linear scan over the training dataset in which the gradient is incrementally computed (batch SGD) or the model incrementally updated (incremental SGD). The differences between the two algorithms lie in how the gradient is computed (estimated) and how many times the model is updated. In batch SGD, the gradient is computed exactly using all the $N$ examples in the training dataset and the model is updated only once per epoch. Incremental SGD is at the other extreme. The gradient is approximated using a single example and the model is updated for every example---$N$ times per epoch. While nearly identical from a computational perspective, the two algorithms are significantly different in terms of convergence. It is a well-known fact that incremental SGD has a convergence rate as much as $N$ times faster than batch SGD for large $N$, when far from the minimum~\cite{bertsekas:igd}. However, when close to the minimum, incremental SGD requires diminishing step sizes in order to converge. This translates into an additional hyper-parameter, i.e., the learning rate decay.
\begin{figure*}[h]
\begin{minipage}{.42\textwidth}
\underline{\textbf{Algorithm 2} Batch SGD Optimization Epoch}
\algsetup{linenodelimiter=.}
\begin{algorithmic}[1]
\STATE Compute gradient:\\
\textbf{for} $i=1$ \textbf{to} $N$ \textbf{do}
$\vec{g} \leftarrow \vec{g} + {\overrightarrow{\nabla f} \left(\vec{w}; \vec{x_{i}}, y_{i} \right)}$
\STATE Update model: $\vec{w} \leftarrow \vec{w} - \alpha \vec{g}$
\end{algorithmic}
\end{minipage}
\hfill
\begin{minipage}{.52\textwidth}
\underline{\textbf{Algorithm 3} Incremental SGD Optimization Epoch}
\algsetup{linenodelimiter=.}
\begin{algorithmic}[1]
\STATE \textbf{for} $i=1$ \textbf{to} $N$ \textbf{do}
\STATE \hspace*{.25cm} Compute gradient estimate: $\vec{g} \leftarrow {\overrightarrow{\nabla f} \left(\vec{w}; \vec{x_{i}}, y_{i} \right)}$
\STATE \hspace*{.25cm} Update model: $\vec{w} \leftarrow \vec{w} - \alpha \vec{g}$
\STATE \textbf{end for}
\end{algorithmic}
\end{minipage}
\end{figure*}
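The two optimization epochs can be sketched in Python as follows (an illustrative sketch; \texttt{grad} stands for any hypothetical per-example gradient function, such as those of LR or SVM):

```python
import numpy as np

def batch_epoch(w, X, Y, grad, alpha):
    # Batch SGD: exact gradient over all N examples, one model update per epoch.
    g = np.zeros_like(w)
    for x, y in zip(X, Y):
        g += grad(w, x, y)
    return w - alpha * g

def incremental_epoch(w, X, Y, grad, alpha):
    # Incremental SGD: one-example gradient estimate and N model updates per epoch.
    for x, y in zip(X, Y):
        w = w - alpha * grad(w, x, y)
    return w
```

Note that in the incremental variant every gradient is evaluated against the most recent model, which is the source of both its faster early convergence and its chain dependency.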
\subsection{Synchronous Parallel SGD}\label{ssec:sgd:par-synch}
Parallelizing SGD appears infeasible because of the chain dependency on model updates across epochs (batch) and even inside an epoch (incremental), where the current gradient relies on the previous model. As a result, in order to preserve the theoretical soundness of the algorithm, concurrency is limited to within each stage -- gradient computation and model update, respectively -- while synchronization has to be strictly enforced between them. Of the two stages, gradient computation entails significantly more work and is, thus, the main candidate for parallelization. This is achieved by expressing the gradient as a linear algebra formula over the training data and the model. As a concrete example, we give the linear algebra expression for the LR gradient:
\begin{equation}\label{eq:LR-grad:la}
\vec{g} = \vec{\vec{X}}_{k}^{T} \odot \left( -\vec{Y}_{k} \cdot \frac {e^{-\vec{Y}_{k} \cdot \vec{\vec{X}}_{k} \odot \vec{w}}} {1 + e^{-\vec{Y}_{k} \cdot \vec{\vec{X}}_{k} \odot \vec{w}}} \right)
\end{equation}
The algebraic operators involving vectors and matrices are element-wise when written in standard notation and have their linear algebra meaning otherwise. For example, $\cdot$ corresponds to element-wise vector multiplication, while $\odot$ stands for vector dot-product or matrix-vector multiplication. This expression replaces the loop corresponding to gradient computation in \texttt{Batch SGD Optimization Epoch}. Efficient parallel implementations from highly-optimized linear algebra libraries are then used to speed up the computation.
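Assuming NumPy-style notation, the vectorized LR gradient can be sketched as below (illustrative only; a library would execute each operation with a parallel, architecture-optimized implementation):

```python
import numpy as np

def lr_gradient_batch(w, Xk, Yk):
    # Vectorized LR gradient over a batch: one matrix-vector product for the
    # margins, element-wise sigmoid-like factors, and a transposed product
    # to accumulate the gradient over all examples in the batch.
    s = np.exp(-Yk * (Xk @ w))          # element-wise over the batch
    return Xk.T @ (-Yk * s / (1.0 + s))
```

By construction, the result equals the sum of the per-example LR gradients over the batch.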
\subsection{Asynchronous Parallel SGD}\label{ssec:sgd:par-asynch}
In asynchronous parallel SGD, multiple model updates are executed concurrently---without synchronization primitives, e.g., mutexes or locks. Moreover, access to the model in gradient computation can also interfere with the update---or a stale model is used. Hogwild~\cite{hogwild,bismarck,dimm-witted,hogbatch,cyclades,hogwild-disk} is the most representative algorithm in this category. It is the exact \texttt{Incremental SGD Optimization Epoch} with the loop executed in parallel:
\begin{algorithmic}
\STATE \textbf{for} $i=1$ \textbf{to} $N$ \textbf{do \textit{in parallel}}
\end{algorithmic}
This makes the parallel implementation of Hogwild very simple---a single directive has to be added in OpenMP\footnote{\url{http://www.openmp.org/}}. Although Hogwild does not satisfy the specification of SGD, it has been proven theoretically that the resulting non-determinism enhances randomness and guarantees convergence for sparse models~\cite{hogwild}. Moreover, due to parallelism, the overall time to convergence can be orders of magnitude shorter than for the sequential solution. The approach taken in synchronous SGD is more conservative---preserve the convergence of batch SGD exactly. ``Which of the two alternatives is better?'' is an open question that extends the debate between batch and incremental SGD---synchronous corresponds to batch, while asynchronous to incremental.
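The Hogwild pattern can be sketched with Python threads (a structural sketch only: in CPython the GIL serializes most of the numeric work, so this illustrates the unsynchronized-update structure rather than real parallel speedup; all names are ours):

```python
import numpy as np
import threading

def hogwild_epochs(w, X, Y, grad, alpha, n_threads=2, epochs=1):
    # Each worker scans its round-robin share of the examples and updates the
    # shared model w in place, with no locks or synchronization (Hogwild style).
    def worker(tid):
        for i in range(tid, len(X), n_threads):
            np.subtract(w, alpha * grad(w, X[i], Y[i]), out=w)  # unsynchronized
    for _ in range(epochs):
        threads = [threading.Thread(target=worker, args=(t,))
                   for t in range(n_threads)]
        for t in threads: t.start()
        for t in threads: t.join()
    return w
```

With `n_threads=1` this degenerates to the sequential incremental epoch; with more threads, updates may interleave or be lost, yet convergence is still typically reached on well-conditioned problems.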
\section{COMPUTING ARCHITECTURES}\label{sec:architecture}
We present the two computing architectures considered in this work---multi-core NUMA CPU and GPU.
\textbf{NUMA CPU.}
The architecture of a NUMA machine is depicted in Figure~\ref{fig:cpu-arch}. It consists of several nodes which contain multiple cores and processor caches. The L1 and L2 caches are associated with each core, while the L3 cache is shared across all the cores in a node. Each node is directly connected to a region of the DRAM memory. NUMA nodes are connected to each other by high-bandwidth interconnects on the main board. To access DRAM regions of other nodes, data is transferred over these interconnects. However, this is slower than accessing the locally-associated memory. \textit{Cache-coherency is implicit} on NUMA machines and is implemented in hardware. In the worst case, the coherency protocol requires transfer across nodes which can generate congestion on the interconnect and, thus, significantly reduce the speedup of parallel solutions.
\begin{figure*}[htbp]
\begin{minipage}{.48\textwidth}
\vspace*{0cm}
\centering
\includegraphics[width=\textwidth]{cpu-architecture}
\caption{NUMA CPU architecture.}
\label{fig:cpu-arch}
\end{minipage}
\begin{minipage}{.48\textwidth}
\centering
\includegraphics[width=\textwidth]{gpu-architecture}
\caption{GPU architecture.}
\label{fig:gpu-arch}
\end{minipage}
\end{figure*}
\textbf{GPU.}
As illustrated in Figure~\ref{fig:gpu-arch}, a GPU contains multiple streaming multiprocessors (MP). Each MP consists of a large number of specialized cores targeted at a limited subset of instructions. In the \textit{CUDA programming model}, work is issued to the GPU in the form of a function, referred to as the \textit{kernel}. A logical instance of the kernel is called a thread. The kernel code is parametrized by a logical thread identifier that allows each thread to operate on a different partition of the input data. Since thousands of threads can be executed concurrently across MPs, global thread synchronization is not available. Nonetheless, synchronization can be enforced at thread block level. A \textit{thread block (or block)} is a logical group of the threads executed for a kernel and imposes an upper limit on the number of threads, e.g., at most 1024 threads can be part of a block. The programmer has to specify both the number of blocks and the number of threads in a block when launching a kernel. Physically, all the threads in a block must reside on the same MP, as shown in Figure~\ref{fig:gpu-arch}. To fully utilize the GPU parallelism, the number of blocks has to be at least equal to the number of MPs. To manage thousands of concurrent threads running on different parts of the data, the MP employs SIMT (single-instruction, multiple-thread) or SIMD (single-instruction, multiple-data) parallelism by grouping consecutive threads of a block into a \textit{warp}. The MP issues instructions at warp level, in vector-like fashion, for all the threads in the warp at a time. The number of threads in a block should be a multiple of the warp size in order to achieve full warp utilization. Moreover, in order to fully utilize all the cores, there should be a sufficiently large number of warps that contain sufficiently long sequences of independent instructions.
Threads can access the various units of the deep memory hierarchy in Figure~\ref{fig:gpu-arch} explicitly -- in the code -- during execution. This is quite different from CPU memory management, which is completely hidden from the programmer. While more flexible, it also makes GPU programming harder. Global memory (or device RAM memory) is persistent over multiple kernel invocations and can be accessed from all the threads across MPs. While the largest in size, global memory has the highest latency and lowest bandwidth. Shared memory is a low-latency, high-bandwidth memory available to all the threads within a thread block. It is the only mechanism that allows synchronization between threads---only within a thread block, though. The constant and texture memory (or scratchpad memory) is a read-only cache populated from global memory. It is accessible by all the threads on an MP. Placement on the shared and scratchpad memory has to be implemented explicitly by the programmer. The two levels of cache, L1 and L2, are used to improve the latency to global memory. L1 cache handles only local thread memory and does not cache global memory loads. As a result, there is \textit{no cache coherency} implemented across MPs. When the threads of a warp request aligned successive global memory addresses, these are combined into a single memory transaction---a process called \textit{memory coalescing}. To move data efficiently from global memory, the threads in a warp have to access consecutive global memory addresses. If the requested addresses of the warp are sparse or unaligned, several memory transactions are required to support the warp computations. Until all the requested data are cached in L2, the warp cannot be scheduled for computation.
\begin{table*}[htbp]
\begin{minipage}{.52\textwidth}
Table~\ref{tbl:hardware-spec} gives the hardware specifications of the NUMA machine and the NVIDIA Tesla K80 GPU used in this paper. While the number of cores and threads is much larger for the GPU, the numbers for the NUMA machine are quite high compared to previous CPU generations, e.g., 56 independent threads can run concurrently on a single machine. Although the amount of memory available on the CPU is 20X larger than on the GPU, the L2 cache on the GPU is 6X larger. This reflects the throughput emphasis of the GPU memory hierarchy as opposed to the latency optimization for CPU.
\end{minipage}
\hfill
\begin{minipage}{.4\textwidth}
\begin{center}
\begin{tabular}{l|rr}
& \textbf{NUMA} & \textbf{GPU} \\
\hline
CPU/MP & 2 & 13 \\
cores & 14 per CPU & 192 per MP \\
blocks & - & 16 per MP \\
threads & 28 per CPU & 2048 per MP \\
L1 cache & 32+32 KB & 48 KB \\
L2 cache & 256 KB & 1.5 MB \\
L3/shared & 35 MB & 48 KB \\
RAM/global & 256 GB & 12 GB \\
\hline
\end{tabular}
\caption{Hardware specification.}\label{tbl:hardware-spec}
\end{center}
\end{minipage}
\hfill
\end{table*}
\section{SYNCHRONOUS SGD IMPLEMENTATION}\label{sec:synch-sgd}
The implementation of synchronous SGD consists of a sequence of primitive linear algebra function invocations for gradient computation and model update. In the case of the LR gradient (Eq.\eqref{eq:LR-grad:la}), the function sequence is:
\begin{sql}
vector a = matrix-vector-product(data, model)
a = vector-vector-element-product(-label, a)
a = vector-element-exponent(a)
vector b = vector-element-sum(1, a)
a = vector-vector-element-division(a, b)
a = vector-vector-element-product(a, -label)
gradient = matrix-vector-product(transpose(data), a)
\end{sql}
Each of these functions is blocking, i.e., the next function in the sequence is invoked only after the previous one finishes execution. This introduces a clear boundary between gradient computation and model update, essentially synchronizing access to the model. Parallelism is confined exclusively to intra-function processing. This allows for a variety of implementations, as long as the function API is preserved. ML frameworks, e.g., TensorFlow and BIDMach, capitalize on this abstraction and implement the linear algebra primitives with a unified API for both CPUs and GPUs. For example, function \texttt{matrix-vector-product} has one implementation for multi-thread CPU and one as a GPU kernel. Moreover, separate implementations are provided for dense and sparse data because of the different sets of optimizations they require---this is not the case for TensorFlow and other deep learning frameworks which support only dense matrix representations and thus cannot handle high-dimensional models. The benefit of this approach is that switching between architectures does not require any code modification.
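The function sequence can be mimicked in NumPy as follows (a sketch; the primitive names are hypothetical Python counterparts of the listed functions and, per the gradient formula, the multiplication with the negated label is applied before the exponent):

```python
import numpy as np

# Hypothetical primitive API mirroring the function sequence in the text;
# a real library would provide CPU and GPU backends behind these signatures.
def matrix_vector_product(A, v):          return A @ v
def vector_vector_element_product(u, v):  return u * v
def vector_element_exponent(v):           return np.exp(v)
def vector_element_sum(c, v):             return c + v
def vector_vector_element_division(u, v): return u / v

def lr_gradient_sequence(data, label, model):
    # Blocking sequence: each primitive runs only after the previous finishes.
    a = matrix_vector_product(data, model)
    a = vector_vector_element_product(-label, a)   # exponent of -y * (x . w)
    a = vector_element_exponent(a)
    b = vector_element_sum(1.0, a)
    a = vector_vector_element_division(a, b)
    a = vector_vector_element_product(a, -label)
    return matrix_vector_product(data.T, a)
```

Swapping, say, `matrix_vector_product` for a GPU-backed version would change the execution target without touching the calling code, which is exactly the portability argument made above.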
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{forest-synch}
\caption{Synchronous SGD on dense data (\texttt{covtype}).}
\label{fig:forest-synch}
\end{figure*}
Our synchronous SGD implementation follows the common API approach. We use the ViennaCL library which implements the linear algebra primitives used in LR and SVM---we extend the library with several functions. ViennaCL has support for multi-thread CPU and GPU, and for dense and sparse data. Since the ViennaCL implementations use all the specific architectural optimizations and are among the fastest available, we do not have to apply further intra-primitive optimizations. For a specific configuration, e.g., CPU or GPU, all the primitives are executed on that device---no cross-device execution. We do not apply cross-primitive optimizations such as pipelining, fusion, and on-GPU intermediate result caching~\cite{systemml} because this requires a holistic view of the program.
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{news-synch}
\caption{Synchronous SGD on sparse data (\texttt{news}).}
\label{fig:news-synch}
\end{figure*}
The benefit of highly-optimized linear algebra primitives is reflected in the \textbf{hardware efficiency} of parallel synchronous SGD. Figures~\ref{fig:forest-synch} and~\ref{fig:news-synch} show that up to two orders of magnitude reduction over sequential SGD for LR is achieved by both the multi-thread CPU and the GPU solutions. These results also confirm that choosing the optimal kernel is highly dependent on the data characteristics and the model. For dense data (Figure~\ref{fig:forest-synch}), the GPU kernel and the parallel CPU are indistinguishable, while for sparse data (Figure~\ref{fig:news-synch}), the GPU is faster by an order of magnitude. The complete details of the datasets and the models are given in Section~\ref{sec:experiments}. Since the exact semantics of sequential SGD is preserved, the \textbf{statistical efficiency} is the same for multi-thread CPU and GPU (Figures~\ref{fig:forest-synch} and~\ref{fig:news-synch}). This property is important because the convergence analysis for sequential SGD can be immediately extended to parallel synchronous SGD. Moreover, the \textbf{time to convergence} is determined entirely by the hardware efficiency. Figures~\ref{fig:forest-synch} and~\ref{fig:news-synch} confirm this linear dependency---the time to convergence is the number of epochs multiplied by the time per epoch. As a result, GPU has faster convergence for sparse data, while on dense data, CPU and GPU are almost the same. Choosing the architecture on which to execute parallel synchronous SGD is, however, not straightforward, even though these results suggest that GPU is better. The convergence time is conditioned by the size and sparsity of the data, and by the model. There is no simple rule on how to undoubtedly choose between CPU and GPU in ViennaCL---the same conclusion is supported by experiments with TensorFlow and BIDMach.
\section{ASYNCHRONOUS SGD IMPLEMENTATION}\label{sec:asynch-sgd}
Compared to the non-intrusive synchronous approach which composes a series of architecture-optimized linear algebra functions, the asynchronous solution consists of a single function that implements the \texttt{Incremental SGD Optimization Epoch}. For each training example, this function first computes the gradient and immediately applies it as a model update. Parallelism is achieved by executing several instances of the function, i.e., threads, concurrently over partitions of the examples. Asynchronous execution is the result of unprotected access to the shared state, i.e., the model, across concurrent threads. This approach to incremental SGD -- the Hogwild algorithm~\cite{hogwild} -- prioritizes hardware over statistical efficiency and often obtains a better time to convergence. However, a naive implementation is not sufficient~\cite{hogbatch}. Rather, the architectural optimizations of the hardware have to be carefully considered.
\subsection{Hogwild on NUMA CPU}\label{ssec:async-sgd:cpu}
In DimmWitted~\cite{dimm-witted}, Zhang and Re give a Hogwild implementation optimized for NUMA CPU architectures. They investigate the impact of three factors -- access method, model replication, and data replication -- on the efficiency of Hogwild. For each factor, multiple alternatives are independently evaluated in terms of statistical and hardware efficiency and the optimal configuration is selected for every dataset/model combination---although the goal of DimmWitted is to make this selection automatically with a cost-based optimizer, this is, unfortunately, not possible. Given the exhaustive nature of the DimmWitted study, our asynchronous SGD NUMA CPU implementation follows their solution. According to Figure 14 in~\cite{dimm-witted}, fully-replicated row-wise data access to a model replicated at NUMA node granularity is the optimal configuration for LR and SVM tasks. In our experimental setting, this corresponds to having two copies of the data and the model---one for each NUMA node. Essentially, we execute two independent Hogwild instances---one for each NUMA node. Due to the non-deterministic assignment of examples to threads and the model update order, these two instances are not identical copies---they provide non-redundant statistical information which improves the statistical efficiency. A second-layer asynchronous Hogwild is executed between the NUMA node models in order to reduce their divergence. Periodically, a separate thread reads the two models, merges them, and updates each replica. Since all the threads scheduled on a NUMA node access only the corresponding model replica, no cache coherence overhead is incurred. This is reflected in higher hardware efficiency. For the remainder of the paper, all references to asynchronous SGD on NUMA CPU correspond to this implementation.
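The periodic merge step can be sketched as follows (an illustrative sketch; the exact merge operator may differ, and averaging is assumed here as one common choice):

```python
import numpy as np

def merge_replicas(replicas):
    # Hypothetical merge step: average the per-node model replicas and write
    # the result back into each replica in place, reducing their divergence.
    avg = np.mean(replicas, axis=0)
    for r in replicas:
        r[:] = avg
    return avg
```

A separate thread would invoke this periodically while the per-node Hogwild instances keep updating their own replicas.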
\subsection{Hogwild on GPU}\label{ssec:async-sgd:gpu}
In this paper, we provide the first in-depth study of Hogwild on GPU. While several GPU extensions to asynchronous SGD have been proposed in the literature~\cite{SVM-CF-SGD-GPU,MF-SGD-GPU,CuMF-SGD}, they either target a single application, e.g., low-rank matrix factorization, or restrict themselves to a specific data/model configuration, e.g., sparse round-robin partitioned data with a single shared model. Moreover, none of these solutions is completely asynchronous in a Hogwild sense. We take an exhaustive approach in which we organize the Hogwild design space into three dimensions and consider several strategies for each dimension. Given a training dataset, a model specification, and a GPU configuration of threads and blocks, our goal is to determine an execution plan for the asynchronous Hogwild. The execution plan has to specify the assignment of data and model to GPU threads and the access path to the assigned data and model---inside the thread. Table~\ref{tbl:asynch-sgd:design-space} summarizes the components of the execution plan -- the dimensions of the design space -- and their corresponding strategies.
\begin{table*}[htbp]
\begin{minipage}{.4\textwidth}
Although derived from the NUMA CPU factors proposed in DimmWitted~\cite{dimm-witted}, the GPU optimizations are quite different because of the layered parallelism consisting of blocks and threads, the SIMD execution within a warp, and the distinct memory hierarchy optimized for throughput rather than latency. While our initial goal has been to build an analytical model that identifies the optimal execution plan for any data/model configuration automatically, we have found experimentally that this choice is highly-dependent on data and model characteristics. Thus, we limit ourselves to providing practical rules of thumb to guide the optimal choice.
\end{minipage}
\hfill
\begin{minipage}{.56\textwidth}
\begin{center}
\begin{tabular}{l|l}
\textbf{Dimension} & \textbf{Strategies} \\
\hline
\multirow{4}{*}{Data access path} & row-major round-robin (row-rr) \\
& row-major chunking (row-ch) \\
& column-major round-robin (col-rr) \\
& column-major chunking (col-ch) \\
\hline
\multirow{4}{*}{Model replication} & kernel \\
& block \\
& thread \\
& example \\
\hline
\multirow{2}{*}{Data replication} & no replication (no-rep) \\
& k-wise replication (rep-2, rep-5, rep-10)
\end{tabular}
\end{center}
\caption{Design space for Hogwild on GPU.}\label{tbl:asynch-sgd:design-space}
\end{minipage}
\end{table*}
\subsubsection{Data Access Path}\label{ssec:asynch:data-access}
The training examples $\vec{x}_{i}$ form a 2-D matrix which can be organized in memory in row-major (row) -- by example -- or column-major (col) -- by feature -- format. These are extended to sparse data by the corresponding Compressed Sparse Row (CSR) and Compressed Sparse Column (CSC) formats, which store only the non-zero entries using three 1-D arrays. There are two arrays for the non-zero values and their column (or row) index -- each of size equal to the number of non-zero entries -- and a third array, of size equal to the number of rows (or columns), that stores the index in the first two arrays where each row (column) starts. The assignment of examples to threads -- data partitioning -- is a second factor that has to be carefully considered when accessing the training data. The two standard approaches that do not require preprocessing and dynamic scheduling are round-robin (rr) and chunking (ch)---horizontal partitioning. The combination of data format and partitioning results in four configurations---row-rr, row-ch, col-rr, and col-ch. Existing NUMA CPU implementations consider only row-ch since asynchronous SGD accesses a complete example to compute the gradient and chunking allows for exclusive access to the local memory. SIMD execution coupled with memory access coalescing inside a thread block/warp poses new challenges for GPU. If the number of examples assigned to the threads of a block is not identical, the threads having fewer examples are stalled until the others finish. For dense data, all the threads in a block access the same model feature simultaneously, which results in non-coalesced accesses. This also happens for sparse data; however, the reason is the random access pattern to the model features. In both cases, the number of memory transactions necessary to execute a SIMD instruction is a bottleneck for the NUMA CPU row-ch configuration. Thus, investigating the other alternatives is essential.
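For concreteness, the three CSR arrays can be constructed as follows (a minimal sketch; note that the row-start array is commonly given one extra trailing entry so that the extent of the last row can be read off directly):

```python
import numpy as np

def to_csr(dense):
    # Build the three CSR arrays described above: the non-zero values, their
    # column indexes, and the offset in the first two arrays where each row
    # starts (plus one trailing entry marking the end of the last row).
    values, col_idx, row_start = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_start.append(len(values))
    return np.array(values), np.array(col_idx), np.array(row_start)
```

The CSC construction is symmetric, scanning by column and recording row indexes instead.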
Figure~\ref{fig:data-access-config} depicts a graphical representation of the four data access path configurations considered for the execution on GPU: row-rr, row-ch, col-rr, and col-ch. The dashed lines link the features accessed from the dataset and updated in the model. In the dense dataset, the hatched cells indicate the optimization method we apply to reduce the data conflicts on the model updates. The hatched cells in the sparse dataset are the padded zero features used to convert the sparse dataset into a dense representation.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=.98\textwidth]{data-access-config}
\caption{Data access path configurations on GPU.}\label{fig:data-access-config}
\end{center}
\end{figure}
\textit{row-rr.}
The training examples are stored in succession and they are accessed by consecutive threads in a block/warp. For dense data, the threads of a warp read the same feature from their example when computing the gradient and modify exactly the same feature of the shared model for every update. Therefore, only a single thread successfully updates the shared model which impacts statistical efficiency negatively. Moreover, the access pattern to examples and the model is not coalesced---consecutive threads do not access consecutive memory addresses. To reduce conflicts to the shared model, each thread starts the model update process from a different index, as shown by the hatched cells in Figure~\ref{fig:data-access-config}. The starting index is assigned circularly based on the thread id. As a result, when the model size is at least half of the warp size, no update conflicts exist. This optimization also provides coalesced access to the model, thus reducing the number of memory transactions. For sparse data, the number of features across examples varies and the feature indexes are discontinuous. When threads request the features at the same position, their index is likely different, leading to independent updates.
In addition to the model, every thread has a local gradient which is stored by default in the local memory. Since local memory is mapped to L1 cache, each feature occupies a 128-byte cache line. A thread has to fetch aligned data -- including nearby unrequested features -- which wastes most of the space in a cache line for loading a single feature. The warp also jumps to non-coalesced addresses multiple times to combine the separate requested features. With half-a-warp, i.e., 16 threads, every feature requests 16 cache lines which are 2048 bytes. We consider the alternative of storing the gradients in the global memory. In this case, a feature occupies a 32-byte segment in L2 cache; then 16 features request 16 cache segments which take only 512 bytes. While each feature still caches nearby unrequested indexes, the size of memory transactions to global memory is 4 times lower than to local memory. This reduction improves hardware efficiency.
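The circular starting-index optimization for model updates can be sketched as follows (an illustrative sketch; on a GPU this would run inside the kernel, with `tid` being the thread identifier):

```python
import numpy as np

def circular_update(w, g, alpha, tid, warp_size=32):
    # Each thread applies the dense model update starting from a different
    # feature index, assigned circularly by thread id, so that the threads of
    # a warp write to different (and consecutive, hence coalesced) addresses.
    d = len(w)
    start = (tid % warp_size) % d
    for k in range(d):
        j = (start + k) % d
        w[j] -= alpha * g[j]
    return w
```

Every thread still applies the full update to all $d$ features; only the order of the writes is staggered, so the final model is unchanged.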
\textit{row-ch.}
The training examples are stored consecutively and a thread also processes consecutive examples (Figure~\ref{fig:data-access-config}). Examples that are chunk size apart are processed concurrently by threads in a warp. Since these are minor differences compared to row-rr, the same optimizations apply. Nonetheless, depending on the chunk size and the number of threads, the performance can be quite different (Figure~\ref{fig:news-access}).
\textit{col-rr.}
To implement the column-major representation, we transpose the training examples. This coalesces features across examples in global memory (Figure~\ref{fig:data-access-config}) and the data accessed by a warp covers full segments. On average, each feature takes only a quarter of a memory transaction to be cached. In order to avoid update conflicts, threads access the model in circular order---similar to row-rr. The CSR sparse format is, however, not adequate for column-major access because we would have to traverse the 1-D array which records the row indexes of non-zero features to locate all the features of an example. Therefore, we map sparse data into a dense padded format that stores all the examples with the same width---equal to the maximum number of non-zero features. When the threads of a warp request features from the same position, accesses are coalesced. Moreover, the shared model is updated without conflict.
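The conversion of per-example sparse features into the dense padded format can be sketched as follows (a minimal sketch; the function and parameter names are ours):

```python
import numpy as np

def pad_sparse(values_per_row, index_per_row, pad_value=0.0, pad_index=0):
    # Convert per-example (value, feature-index) lists into a dense padded
    # layout whose width equals the maximum number of non-zero features; the
    # padded entries carry zero values, so they do not change the gradient.
    width = max(len(v) for v in values_per_row)
    n = len(values_per_row)
    vals = np.full((n, width), pad_value)
    idxs = np.full((n, width), pad_index, dtype=int)
    for i, (v, ix) in enumerate(zip(values_per_row, index_per_row)):
        vals[i, :len(v)] = v
        idxs[i, :len(ix)] = ix
    return vals, idxs
```

Transposing `vals` and `idxs` then yields the column-major layout in which the threads of a warp read features from the same padded position.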
\textit{col-ch.}
As in the case of row-major, the main difference between col-ch and col-rr is the stride between examples processed concurrently in a warp. This impacts the number of memory transactions directly. When the chunk size is small -- due to a large number of threads -- the average number of memory transactions per feature can be much smaller than 1 since features share a cache segment. For sparse data, the number of features is the other crucial factor impacting memory transactions. If the number of features differs greatly across the threads in a warp, the average number of memory transactions per feature is closer to the maximum of 1.
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{forest-access}
\caption{Access path selection for dense data (\texttt{covtype}).}
\label{fig:forest-access}
\end{figure*}
\textbf{Hardware efficiency.}
Figures~\ref{fig:forest-access} and~\ref{fig:news-access} depict the performance of the data access strategies for LR. For dense data, col-rr exhibits the highest hardware efficiency, taking the minimum average time per iteration. Since the gradient is stored in global memory, the number of memory transactions is mostly determined by the number of L2 cache segments accessed. col-rr takes full advantage of coalesced cache access. For sparse data, row-rr is slightly better than col-rr due to the extra accesses to padded zero features. Moreover, the non-zero features at the same position in the CSR format are more likely to be in the same cache segment. Although round-robin access tends to perform better than chunking, this cannot be generalized since row-ch has better hardware efficiency than row-rr on dense data.
\textbf{Statistical efficiency.}
The strategy with the highest hardware efficiency has the lowest statistical efficiency in Figures~\ref{fig:forest-access} and~\ref{fig:news-access}. We find that memory access coalescing aggravates model update conflicts. When data requests are coalesced, the warps are not frequently stalled waiting for memory transactions. We can have at most 52 concurrent warps -- 1664 threads in total -- scheduled by the warp schedulers for computation. Such a large number of threads updating the model concurrently increases the conflicts on the single shared model. As a result, fewer updates survive, which reduces statistical efficiency and increases the number of iterations to converge.
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{news-access}
\caption{Access path selection for sparse data (\texttt{news}).}
\label{fig:news-access}
\end{figure*}
\textbf{Time to convergence.}
Unlike synchronous SGD, both hardware and statistical efficiency have to be considered when analyzing time to convergence. As shown in the figures, these are inversely correlated, which makes reasoning more difficult. On dense data, the reduction in time per iteration obtained by col-rr is sufficient to compensate for the larger number of iterations and results in a faster time to convergence. On sparse data, however, the excessive number of model update conflicts introduced by higher hardware efficiency prohibits optimal convergence. The round-robin strategies -- although converging faster to 2\% error -- cannot converge to 1\% error. Overall, the \textit{rule of thumb} is that \textit{col-rr data access with circular model updates is the optimal strategy}---with the caveat that optimal convergence on sparse data is harder to achieve.
\subsubsection{Model Replication}\label{ssec:asynch:model-replica}
Since the shared model is accessed by multiple threads simultaneously, statistical efficiency can degrade. On the GPU, this problem is triggered by the SIMD execution inside a warp -- the threads in a warp read/write the model at exactly the same time -- rather than by a cache coherency mechanism. Among all the threads in a warp that update the model simultaneously, only a single value is arbitrarily picked for each feature, while the other updates are discarded. Moreover, a warp may execute with a stale model -- due to stalling and rescheduling caused by cache misses -- since other warps keep modifying the model in the shared L2 cache. Given the inefficiencies incurred by a single model shared per \textit{kernel}, we consider several alternatives that reduce sharing by exploiting the deep GPU memory hierarchy.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=.88\textwidth]{model-replication}
\caption{Model replication configurations on GPU.}\label{fig:model-replica-config}
\end{center}
\end{figure}
Figure~\ref{fig:model-replica-config} shows the memory layout of the model replication configurations on GPU: kernel, block, thread, and example. In our design, the examples of the dataset are transferred into the GPU global memory, while the model replicas are stored at different levels of the memory hierarchy. We allocate space in DRAM for the kernel, thread, and example strategies and compare the performance of allocations from global versus local memory. As in the gradient storage analysis in Section~\ref{ssec:asynch:data-access}, although data in local memory are cached in on-chip L1 with 128-byte lines, most of the non-coalesced data are never requested for computation. The kernel strategy has a single model shared by all the threads on the multiprocessors. In the thread strategy, the sizes of the model replicas differ since we materialize only the features accessed by each thread. While the example strategy maintains a replica with the same features as each example, the sizes of these replicas also differ. The model replicas of the block strategy are stored in shared memory; if the model is larger than the shared memory available per thread-block, the replicas spill to global memory.
\textit{block.}
There is a model replica for each block, initialized before each iteration and merged into the global model at the end of the iteration---CUDA supports block-level synchronization. If the model is small enough, it is allocated in shared memory---sliced into a number of 64-bit wide banks. Bank conflicts are avoided as long as each feature is accessed independently by a thread; otherwise, threads incurring a bank conflict have to execute serially. Moreover, if a warp accesses the same feature while updating and copying the model replica, conflicts can be avoided altogether because features in the same bank are broadcast. Large models are replicated in global memory---one replica per block. Initializing and merging model replicas incurs additional overhead.
\textit{thread.}
Each thread has a model replica that resides in local memory. For small models, the replica is mapped into registers, which provides fast access; otherwise, it is mapped into global memory. There are no model update conflicts since each thread has its own replica. Nonetheless, merging the replicas incurs significant overhead due to the large number of models.
\textit{example.}
To avoid merging sparse models in CSR format, we introduce example replication, in which every example has its own model replica in global memory. An example replica contains the same features as the example. After a thread finishes processing its assigned examples, it copies the model replicas to the shared model. example replication avoids conflicts between threads almost entirely, at the expense of memory usage.
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{forest-model}
\caption{Model replication effect on dense data (\texttt{covtype}).}
\label{fig:forest-model}
\end{figure*}
\textbf{Hardware efficiency.}
Figures~\ref{fig:forest-model} and~\ref{fig:news-model} plot the results with the col-rr access path---example replication applies only to sparse data. On dense data, block replication has the highest hardware efficiency because access to shared memory is faster than to global memory. thread has higher hardware efficiency than kernel, even though it incurs overhead to merge the local models. The reason is that the local models are cached in L1, which resides on the multiprocessor, while in kernel the model is cached in L2, which resides off-chip. On sparse data, all the strategies use global memory to maintain the local and global models. kernel replication has the highest hardware efficiency, while block has the lowest---by a factor of 10. The synchronization required in block replication delays the threads with fewer non-zero features until the largest example is processed. example replication has no synchronization and caches models in L1, rather than L2---the case for thread and kernel.
\textbf{Statistical efficiency.}
kernel replication displays the highest statistical efficiency for both dense and sparse data. thread replication is two orders of magnitude lower. Intuitively, the more replicas, the lower the statistical efficiency. Model replicas -- whether stored on-chip or off-chip -- record only partial information. When update conflicts occur, features in the model replicas are discarded, so the shared model receives only limited information. In addition, the dot-product computed with the local replica generates an unreliable gradient---especially for sparse data.
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{news-model}
\caption{Model replication effect on sparse data (\texttt{news}).}
\label{fig:news-model}
\end{figure*}
\textbf{Time to convergence.}
On dense data, kernel and block have similar time to convergence, while thread is completely outperformed because of its low statistical efficiency. On sparse data, kernel converges the fastest---by two orders of magnitude. block fails to converge to 1\% error in an acceptable amount of time. Although example is second in this case, under different access path configurations example outperforms kernel in time to convergence. Overall, \textit{the rule of thumb is to choose kernel model replication}. However, if the model fits in the shared memory, block replication may be a better choice.
\subsubsection{Data Replication}\label{ssec:asynch:data-replica}
Standard data partitioning does not replicate examples across threads because generating the correct query result would require adequate post-processing, which increases execution time. We adopt this \textit{no replication (no-rep)} strategy in the data access path methods. Model training, however, is approximate, and using more data has the potential to improve statistical efficiency. In order to explore this tradeoff between hardware and statistical efficiency, we investigate \textit{k-wise replication}, in which the example partitions assigned to threads overlap at the boundaries. For example, \textit{2-wise replication (rep-2)} assigns two additional examples to every thread. The reason for replicating at the boundary is to preserve coalesced memory access. k-wise replication interacts minimally with model replication; the only difference is that the number of examples assigned to a thread is larger by k.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=\textwidth]{data-replication}
\caption{Data replication configurations on GPU.}\label{fig:data-replica-config}
\end{center}
\end{figure}
Figure~\ref{fig:data-replica-config} plots the data replication configurations with two horizontal partitioning strategies: no replication (no-rep) and k-wise replication (rep-2). Without replication, the examples are sliced by the round-robin access path, where threads take turns processing the examples one by one. With the chunking access path, fixed-size chunks are assigned to the threads sequentially. In the k-wise replication strategy, each thread fetches several examples beyond those assigned to it under no-rep, as Figure~\ref{fig:data-replica-config} shows. Therefore, boundary examples are accessed multiple times by different threads.
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{forest-data}
\caption{Data replication effect on dense data (\texttt{covtype}).}
\label{fig:forest-data}
\end{figure*}
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{news-data}
\caption{Data replication effect on sparse data (\texttt{news}).}
\label{fig:news-data}
\end{figure*}
Figures~\ref{fig:forest-data} and~\ref{fig:news-data} depict the impact of k-wise replication on hardware and statistical efficiency for k equal to 2, 5, and 10. The trend is identical on dense and sparse data. As k increases, \textbf{hardware efficiency} drops almost linearly. This is somewhat unexpected since we expect coalesced memory access to be resilient when the access range increases only slightly. The configurations used in the figures are \textit{col-rr+kernel} (dense) and \textit{col-rr+example} (sparse). We observe similar trends for the other combinations. \textbf{Statistical efficiency} increases linearly with k since more information is extracted from the data for gradient computation and model update. When these opposing measures are combined into \textbf{time to convergence}, the reduction in the number of iterations dominates the increase in time per iteration and we obtain faster convergence at higher degrees of replication. The improvement diminishes beyond \textit{rep-10}. Overall, \textit{the rule of thumb is to adopt a limited degree of data replication} because it enhances statistical efficiency linearly while also decreasing time to convergence by a significant margin.
\section{EXPERIMENTAL EVALUATION}\label{sec:experiments}
We perform an extended empirical study across the exploratory axes defined in Figure~\ref{fig:study-axes} with respect to the performance axes introduced in Figure~\ref{fig:perf-axes}. The goal is to fully characterize the relationship between the considered configurations and understand the relevance of each performance measure. We validate our results by comparing against two representative analytics frameworks that have support both for CPU and GPU---TensorFlow and BIDMach. We emphasize that the main objective of the comparison is to add other reference points on the performance axes beyond our implementation, while the direct comparison between frameworks is secondary. Specifically, our experiments target the following questions:
\begin{compactitem}
\item What is the role of the computing architecture, i.e., CPU/GPU, in the performance of synchronous SGD?
\item What is the role of the computing architecture, i.e., CPU/GPU, in the performance of asynchronous SGD?
\item What is the optimal configuration for asynchronous SGD on GPU?
\item How do synchronous and asynchronous SGD compare against each other on CPU and GPU separately, and across computing platforms?
\item Are our implementations efficient with respect to TensorFlow and BIDMach?
\item How do the proposed algorithms scale with the number of training examples and the dimensionality of the feature vector, respectively?
\end{compactitem}
\subsection{Setup}\label{sec:experiments:setup}
\textbf{Implementation.}
We implement all the 8 configurations in Figure~\ref{fig:study-axes} following the best practices for Intel multi-core CPUs and for NVIDIA GPUs, respectively. We use OpenMP for multi-thread programming on the CPU and CUDA 8.0 on the GPU. Synchronous SGD is implemented using the ViennaCL (1.7.1) linear algebra library which provides optimized primitives with the same API for CPU and GPU. This allows us to have identical implementations---only compiled with different flags. We have separate implementations for dense and sparse data that use optimized data structures. All the code is written in \texttt{C++}. For the implementations in TensorFlow (0.12.0) and BIDMach (2.0.1), we write only the driver programs which define the objective function corresponding to the analytic model. We then invoke the synchronous SGD optimizer which calls the linear algebra kernels necessary in the gradient computation. While the driver is written in \texttt{python} for TensorFlow and \texttt{scala} for BIDMach, the linear algebra kernels are coded in \texttt{C++/CUDA} and are highly-optimized.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{l|rrrrr}
\textbf{dataset} & \textbf{\#examples} & \textbf{\#features} & \textbf{\#nnz/example (avg)} & \textbf{sparse size} & \textbf{dense size} \\
\hline
covtype & 581,012 & 54 & 54 (54) & 485 MB & 485 MB\\
w8a & 64,700 & 300 & 1 to 114 (11.65) & 4.4 MB & 155 MB \\
real-sim & 72,309 & 20,958 & 1 to 3,484 (51.30) & 87 MB & 12.1 GB \\
rcv1 & 677,399 & 47,236 & 4 to 1,224 (73.16) & 1.2 GB & 256 GB \\
news & 19,996 & 1,355,191 & 1 to 16,423 (454.99) & 134 MB & 217 GB
\end{tabular}
\end{center}
\caption{Experimental datasets. nnz stands for number of non-zeros.}\label{tbl:datasets}
\end{table}
\textbf{System.}
The properties of the computing architectures used in the experiments are presented in Table~\ref{tbl:hardware-spec}. They are mounted in the same physical machine running Ubuntu 16.04 SMP with Linux kernel 4.4.0-77 and CUDA 8.0. Out of the two cards inside the Tesla K80 GPU, only one is used in the experiments. The two cards are seen as two independent GPUs by the operating system and have to be programmed independently. This is the default setting in both TensorFlow and BIDMach, which require the programmer to specify the GPU on which the SGD optimizer executes. We plan to perform an evaluation with multiple -- possibly distributed -- GPUs in the future.
\textbf{Methodology.}
We perform all the experiments at least 3 times and report the average value as the result. Each task is run for at least 10 iterations and the hardware efficiency is measured as the average execution time over the total number of iterations. The time to evaluate the loss is not included in the iteration time. All configurations/systems are initialized with the same model, which gives the same initial loss. The SGD step size is chosen by a grid over its range in powers of 10, e.g., $\{10^{-6},10^{-5},\dots,10^{2}\}$, selecting the value that yields the fastest time to convergence. The optimal step size varies across configurations. For end-to-end performance, we measure the wall-clock time it takes for each configuration to converge to a loss that is within 10\%, 5\%, 2\%, and 1\% of the optimal loss. Following prior work~\cite{dimm-witted}, we obtain the optimal loss by running all configurations for one hour and choosing the lowest. The time to load the data and output the result is not included. Moreover, in the case of GPU, the time to transfer the data and the model to/from the GPU global memory is also not included---we measure only the kernel execution time. For Hogwild on GPU, we report only the result corresponding to the configuration that achieves 1\% convergence error the earliest. As discussed in the results, this configuration differs across datasets.
\textbf{Datasets and tasks.}
Rather than considering only two datasets -- dense and sparse -- we include five real datasets (Table~\ref{tbl:datasets}) that exhibit large variety in size, dimensionality, and sparsity. The number of dimensions varies from tens to more than 1 million, while the number of non-zero entries per example is as small as 1 for most of the datasets. In terms of physical size, the sparse representation is as small as 4.4 MB for \texttt{w8a} -- it can be cached (almost) completely both on CPU and GPU -- and as large as 1.2 GB for \texttt{rcv1}. In dense format, only \texttt{covtype} and \texttt{w8a} fit on the GPU, while \texttt{rcv1} and \texttt{news} cannot be processed even on the CPU. \texttt{covtype} is the representative dense dataset throughout the paper since it is fully dense, while \texttt{news} has the highest sparsity ratio of $3\times 10^{-4}$. These datasets have been used previously to evaluate the performance of parallel SGD on NUMA CPU~\cite{dimm-witted} and GPU~\cite{hogbatch}---more details can be found therein. We use a dense format for \texttt{w8a} in order to allow TensorFlow to execute. As a result, synchronous SGD becomes batch gradient descent since it uses the complete dataset to compute the exact gradient. Even if we reduce the batch size and have several model updates per data pass, the time to convergence does not improve. With five datasets and four points on the exploratory axes, we obtain 20 configurations per model. Since we consider two tasks -- LR and SVM -- we have 40 configurations overall. We do not include any regularization in the objective function in order to measure only the time spent in the actual computation.
\subsection{Results}\label{sec:experiments:results}
We project the results on a subset of dimensions in the exploratory axes to facilitate a direct comparison between configurations. For synchronous and asynchronous updates taken separately, we compare the CPU and GPU implementations. Then we perform a direct comparison between the best synchronous and asynchronous configurations. For each computing architecture, we compare synchronous and asynchronous SGD, and the synchronous solutions in TensorFlow and BIDMach, respectively. Lastly, we study the effect of the number of examples and features on hardware efficiency for CPU and GPU separately.
\begin{table*}[htb]
\begin{center}
\begin{tabular}{l|l||rrr|rrr|r}
\multirow{2}{*}{\textbf{task}} & \multirow{2}{*}{\textbf{dataset}} & \multicolumn{3}{c|}{\textbf{time to convergence (sec)}} & \multicolumn{3}{c|}{\textbf{time per iteration (msec)}} & \multirow{2}{*}{\textbf{\# iterations}} \\
& & \textbf{gpu} & \textbf{cpu-seq} & \textbf{cpu-par} & \textbf{gpu} & \textbf{cpu-seq} & \textbf{cpu-par} & \\
\hline
\multirow{5}{*}{LR} & covtype & \underline{1.05} & 145.11 & 1.29 & \underline{15} & 2,073 & 18.42 & 70 \\
& w8a & \underline{0.37} & 148.88 & 0.46 & \underline{4.87} & 1,959 & 6.05 & 76 \\
& real-sim & \underline{3.10} & 1,537.90 & 7.67 & \underline{4.43} & 2,197 & 10.96 & 700 \\
& rcv1 & \underline{31.69} & 2,227.05 & 48.06 & \underline{44.82} & 3,150 & 67.98 & 707 \\
& news & \underline{0.65} & 240.21 & 3.68 & \underline{6.37} & 2,355 & 36.08 & 102 \\
\hline
\multirow{5}{*}{SVM} & covtype & \underline{10.22} & 1,344.65 & 13.50 & \underline{14.27} & 1,878 & 18.85 & 716 \\
& w8a & \underline{0.78} & 342.85 & 0.80 & \underline{4.13} & 1,814 & 4.23 & 189 \\
& real-sim & \underline{0.23} & 75.59 & 0.46 & \underline{6.22} & 2,043 & 12.43 & 37 \\
& rcv1 & \underline{1.13} & 111.61 & 2.61 & \underline{29.74} & 2,937 & 68.69 & 38 \\
& news & \underline{0.30} & 98.42 & 1.69 & \underline{6.67} & 2,187 & 37.56 & 45 \\
\end{tabular}
\end{center}
\caption{Synchronous SGD performance to 1\% convergence error. The best values for each dataset are underlined.}\label{tbl:synch-bgd}
\end{table*}
\subsubsection{Synchronous SGD}\label{sec:experiments:results:synch-sgd}
Table~\ref{tbl:synch-bgd} contains the time to convergence, and hardware and statistical efficiency results for synchronous SGD implemented with the ViennaCL library. In order to show the improvement over a sequential solution, we also include results for a single-thread CPU implementation which achieves convergence in less than 5 minutes only for 6 out of the 10 dataset/task pairs. Since the parallel implementations always achieve convergence in less than 1 minute -- sometimes even in less than a second -- it is clear that parallelism helps. When comparing the multi-core CPU and GPU solutions, there is a clear trend---\textit{GPU is always faster than parallel CPU in time to convergence}. Since the statistical efficiency is identical in synchronous SGD independent of the computing platform, this also translates into faster GPU time per iteration, i.e., better hardware efficiency. Given that ViennaCL defines its internal representation for dense and sparse data and implements optimized kernels for CPU and GPU independently, the difference is due exclusively to the computational power of the two architectures---the GPU has more FLOPS than the CPU. The gap between the two architectures -- reflected by the speedup (Table~\ref{tbl:synch-sgd-speedup}) -- increases with the sparsity of the data. The GPU is faster by a small margin -- 32\% or less -- on the dense datasets \texttt{covtype} and \texttt{w8a}. This confirms that having powerful CPUs with tens of cores provides extensive parallelism that has the potential to close the gap on the GPU. In fact, we find that computing the transpose of a dense matrix on the GPU is a serious bottleneck in ViennaCL\footnote{\url{https://devblogs.nvidia.com/efficient-matrix-transpose-cuda-cc/}}. When the transpose is computed in-place, the GPU performance becomes worse than the CPU---in these results, the transpose is materialized and passed as a second matrix to the kernels. 
The gap between GPU and CPU increases on sparse data to more than 5X on \texttt{news}. Parallelizing linear algebra operations on sparse data is known to be a difficult task because of the irregular memory access~\cite{spmv-multicore}. This turns out to be more acute for the CPU memory hierarchy. On closer inspection, we find that ViennaCL takes advantage of the programmability of the GPU memory and exploits it to optimize coalesced access to sparse data. Moreover, the linear algebra kernels invoked depend on data sparsity---a different kernel is called on \texttt{news} compared to \texttt{real-sim} and \texttt{rcv1}. The high variance in statistical efficiency across dataset/task combinations is due to different hyper-parameters. It confirms that convergence rate is a property of both the task and the dataset and is independent of sparsity. When comparing the time per iteration across LR and SVM, however, we observe very similar results on GPU and CPU although their gradients are quite different. The reason for this is matrix batch processing and vectorization which hide the higher latency incurred by individual element-wise operations---both LR and SVM have exactly the same number of matrix-matrix and matrix-vector operations.
\begin{table}[htbp]
\begin{minipage}{.47\textwidth}
Table~\ref{tbl:synch-sgd-speedup} contains the speedup in time per iteration generated by parallel CPU over sequential CPU and GPU over parallel CPU, respectively. For synchronous SGD, this also represents the speedup in time to convergence. Given that parallel CPU uses 56 threads, we expect a speedup of around 56X over sequential CPU. This is the case for \texttt{news}. The speedup for \texttt{rcv1} is below 56X---due to the large size of the dataset, which does not allow for efficient caching even in L3. We obtain super-linear speedup on \texttt{covtype}, \texttt{w8a}, and \texttt{real-sim}---on \texttt{w8a} the speedup exceeds 400X for SVM. The reason for this is the improved cache behavior when all the cores are in use. \texttt{w8a} can be entirely cached in L1 due to its small size, while \texttt{real-sim} and \texttt{covtype} are cached in L2 and L3, respectively. None of these datasets can be cached on a single core when executing the sequential code. GPU improves further over parallel CPU by a factor of up to 5.7X.
\end{minipage}
\hfill
\begin{minipage}{.5\textwidth}
\begin{center}
\begin{tabular}{l|l||rr}
{\textbf{task}} & {\textbf{dataset}} & {\textbf{cpu-seq/cpu-par}} & {\textbf{cpu-par/gpu}} \\
\hline
\multirow{5}{*}{LR} & covtype & 112.54 & 1.23 \\
& w8a & 323.80 & 1.24 \\
& real-sim & 200.46 & 2.47 \\
& rcv1 & 46.34 & 1.52 \\
& news & 65.27 & 5.66 \\
\hline
\multirow{5}{*}{SVM} & covtype & 99.63 & 1.32 \\
& w8a & 428.84 & 1.03 \\
& real-sim & 164.36 & 2.00 \\
& rcv1 & 42.76 & 2.31 \\
& news & 58.23 & 5.63
\end{tabular}
\end{center}
\caption{Speedup of synchronous SGD time per iteration. cpu-seq/cpu-par measures the speedup of the parallel CPU implementation over the sequential CPU. cpu-par/gpu corresponds to the speedup of GPU over parallel CPU.}\label{tbl:synch-sgd-speedup}
\end{minipage}
\hfill
\end{table}
\subsubsection{Asynchronous SGD}\label{sec:experiments:results:asynch-sgd}
Based on the three dimensions of the design space and related strategies for Hogwild on GPU, we list the optimal configuration for all the datasets used in the experiments (Table~\ref{tbl:optimal-config}). The optimal configurations are determined by the fastest time to convergence for every dataset/task pair. Although \texttt{w8a} is a sparse dataset, we can afford to use a dense representation because of its relatively small dimensionality. In fact, we perform experiments with both representations and choose the better one---the sparse representation turns out to be better for both LR and SVM.
As expected, \texttt{covtype} benefits from coalesced col-rr data access with circular model updates on the model replicas in shared memory---\texttt{w8a} in dense format has the same configuration. Moreover, replication is not beneficial in this case. For the sparse datasets, the row-major format is better since it does not include zero-padded features. Compared to the column-major format, the row-major format trades off memory transactions for thread stalls, improving hardware efficiency considerably. Whether examples are assigned to threads at round-robin- or chunk-level is a property of the dataset and the computation, determined by the length of the memory jumps incurred and the number of accesses per example. Due to the large model size and the limited size of shared memory, kernel model replication is the optimal choice for sparse datasets---thread model replication degenerates into kernel with additional overhead. k-wise data replication is almost always beneficial, often at larger replication factors. However, the difference in time to convergence compared to no replication is small. Overall, these results confirm the importance of our study in characterizing the design space and prove that applying the same configuration in all scenarios is suboptimal.
Table~\ref{tbl:asynch-sgd} depicts the time to convergence to 1\% error, and hardware and statistical efficiency for asynchronous SGD. The parallel implementation on NUMA CPU is based on DimmWitted~\cite{dimm-witted} (Section~\ref{ssec:async-sgd:cpu}). The GPU results are based on the configurations in Table~\ref{tbl:optimal-config} corresponding to each dataset/task pair.
\begin{table}[htbp]
\begin{minipage}{.44\textwidth}
When comparing the CPU and GPU solutions, there is a clear trend---\textit{(parallel) CPU is faster than GPU in time to convergence}. Specifically, on dense and low-dimensional data, the sequential CPU solution is the fastest, while on sparse data, parallel CPU dominates. The reason behind this Hogwild behavior on CPU is well-known~\cite{dimm-witted,hogbatch}---concurrent updates to the same features of the model generate cache-coherency conflicts that slow down execution and convergence. Essentially, parallelism is beneficial only on sparse data and models. Since there is no cache coherency mechanism on the GPU, one may expect the GPU solution to be considerably faster due to the higher degree of parallelism.
\end{minipage}
\hfill
\begin{minipage}{.56\textwidth}
\begin{center}
\begin{tabular}{l|l||l}
\textbf{task} & \textbf{dataset} & \textbf{optimal configuration} \\
\hline
\multirow{5}{*}{LR} & covtype & col-rr + block + no-rep \\
& w8a & row-rr + kernel + rep-10 \\
& real-sim & row-ch + kernel + rep-10 \\
& rcv1 & row-ch + kernel + no-rep \\
& news & row-rr + kernel + rep-10 \\
\hline
\multirow{5}{*}{SVM} & covtype & col-rr + block + no-rep \\
& w8a & row-ch + kernel + rep-10 \\
& real-sim & row-rr + kernel + rep-10 \\
& rcv1 & row-rr + kernel + rep-10 \\
& news & row-rr + kernel + rep-10
\end{tabular}
\end{center}
\caption{Optimal configurations for Hogwild on GPU.}\label{tbl:optimal-config}
\end{minipage}
\hfill
\end{table}
\noindent
However, the GPU bottleneck turns out to be the vectorized execution inside a warp, which generates a significant number of model update conflicts. While the warp shuffling optimization reduces the number of conflicts inside a warp, the number of concurrent warps imposes a lower bound that cannot be overcome. In the case of sparse data, however, update conflicts are not an issue. The problem is the irregular access to the model across the examples inside a warp. First, there is a high variance in the number of non-zero entries---several orders of magnitude. This forces threads to stall while longer examples finish. Second, all accessed model indexes have to be cached before a vectorized instruction can be executed. This incurs a large number of slow memory transactions per instruction. In order to address these issues, data and model partitioning techniques similar to the ones proposed for out-of-core processing~\cite{dot-product-join-ssdbm,hogwild-disk} have to be devised for the GPU. We plan to look into this topic in future work.
\begin{table*}[htb]
\begin{center}
\begin{tabular}{l|l||rrr||rrr||rrr}
\multirow{2}{*}{\textbf{task}} & \multirow{2}{*}{\textbf{dataset}} & \multicolumn{3}{c||}{\textbf{time to convergence (sec)}} & \multicolumn{3}{c||}{\textbf{time per iteration (msec)}} & \multicolumn{3}{c}{\textbf{\# iterations}} \\
& & {\textbf{gpu}} & {\textbf{cpu-seq}} & {\textbf{cpu-par}} & \textbf{gpu} & \textbf{cpu-seq} & \textbf{cpu-par} & {\textbf{gpu}} & {\textbf{cpu-seq}} & {\textbf{cpu-par}} \\
\hline
\multirow{5}{*}{LR} & covtype & 1.97 & \underline{0.60} & 1.51 & \underline{15} & 150 & 251 & 135 & \underline{4} & 6 \\
& w8a & 0.20 & 0.27 & \underline{0.18} & 21 \underline{(2.8)} & 15 & 5.9 & \underline{8} (80) & 18 & 27 \\
& real-sim & 2.46 & 1.35 & \underline{0.52} & 271 (27) & 25 & \underline{8.1} & \underline{9} (92) & 54 & 61 \\
& rcv1 & 18.29 & 20.37 & \underline{4.64} & 226 & 345 & \underline{71} & 81 & \underline{59} & 65 \\
& news & 7.35 & \underline{5.47} & $\infty$ & 615 (65) & 53 & \underline{8.7} & \underline{12} ($\infty$) & 103 & $\infty$ \\
\hline
\multirow{5}{*}{SVM} & covtype & 0.96 & \underline{0.16} & 0.35 & \underline{15} & 53 & 77 & 63 & \underline{3} & 4 \\
& w8a & 6.29 & \underline{0.54} & 1.89 & 25 (2.6) & \underline{2.2} & 5.6 & 247 ($\infty$) & \underline{239} & 333 \\
& real-sim & \underline{1.04} & 1.82 & 1.28 & 136 (14) & 11 & \underline{7.6} & \underline{7} (247) & 164 & 166 \\
& rcv1 & 8.56 & 22.71 & \underline{7.57} & 955 (94) & 216 & \underline{68} & \underline{9} (109) & 105 & 111 \\
& news & 8.75 & 20.01 & \underline{1.79} & 454 (50) & 47 & \underline{8.4} & \underline{19} ($\infty$) & 425 & 211
\end{tabular}
\end{center}
\caption{Asynchronous SGD performance to 1\% convergence error. For the GPU configurations that use replication, we also include the values without replication, e.g., 21 (2.8) time per iteration for LR on \texttt{w8a} corresponds to 21 msec with rep-10 and 2.8 msec with no-rep, respectively. The best values for each dataset are underlined. $\infty$ stands for lack of convergence in 300 seconds and an unknown number of iterations, e.g., cpu-par for LR on \texttt{news} does not converge to 1\% error within 300 seconds, thus the number of iterations to convergence is unknown.}\label{tbl:asynch-sgd}
\end{table*}
Several entries in Table~\ref{tbl:asynch-sgd} require discussion.
Parallel CPU does not converge for LR on \texttt{news}---also the case for GPU without replication. This demonstrates the benefit of accessing an example multiple times during an iteration, despite the reduction in hardware efficiency. Nonetheless, the decision is particular to each dataset/task pair---parallel CPU converges without replication in all the other configurations, while GPU does not for SVM on \texttt{w8a} and \texttt{news}.
Sequential CPU is the fastest for SVM on \texttt{w8a} for several reasons. In sparse representation, \texttt{w8a} is small enough to be fully cached in L3, thus memory latency is minimal. As discussed previously, parallelism triggers the cache coherency mechanism, which incurs delays. Since \texttt{w8a} has only 300 features, the rate of update conflicts is high. Although this behavior is also present for LR, the results are different. This has to do with the much simpler form of the SVM gradient, which can be evaluated with a simple bit flip operation---the label is +1 or -1. The LR gradient requires exponentiation, which is considerably more expensive.
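The asymmetry between the two gradients can be illustrated with a minimal sketch (illustrative Python, not the actual ViennaCL code; the function names and the dict-based sparse example format are our own):

```python
import math

def svm_step(w, x, y, step):
    """Hinge-loss SGD step: when the margin is violated, the update
    direction is just y*x -- no transcendental functions involved."""
    margin = y * sum(w[i] * v for i, v in x.items())
    if margin < 1:  # subgradient is nonzero only inside the margin
        for i, v in x.items():
            w[i] += step * y * v

def lr_step(w, x, y, step):
    """Logistic-loss SGD step: requires an exponentiation per example,
    which is considerably more expensive than the hinge-loss update."""
    z = y * sum(w[i] * v for i, v in x.items())
    scale = y / (1.0 + math.exp(z))
    for i, v in x.items():
        w[i] += step * scale * v
```

With only a few hundred features, every such update touches a large fraction of the model, which is why the conflict rate on \texttt{w8a} is high.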
GPU is the fastest for SVM on \texttt{real-sim}---the only case for asynchronous SGD across all our experiments. The reason is the very small number of iterations to convergence which are executed relatively fast. This is the perfect example for data replication. Nonetheless, the gap to CPU execution is minimal.
The number of iterations sequential CPU requires for convergence for SVM on \texttt{news} is much larger than for parallel CPU---425 compared to 211. This is contrary to all the other results and unexpected. We found that sequential CPU gets stuck on a plateau for 220 iterations before the loss starts to decrease again. This is due to a step size that is too large and does not decrease fast enough. All our attempts to find a better step size failed---this is the best time to convergence we managed to achieve.
We report the time per iteration, i.e., hardware efficiency, and the number of iterations to convergence, i.e., statistical efficiency, for GPU with and without replication. This clearly shows the impact of replication---higher time per iteration and smaller number of iterations to convergence. While the ratio of the values follows closely the replication factor, the time to convergence is entirely determined by the actual value. Nonetheless, the times to convergence with and without replication are very close.
\begin{table}[htbp]
\begin{minipage}{.47\textwidth}
In general, the best time per iteration follows closely the optimal time to convergence. Parallel CPU is the fastest on sparse data for the same reasons mentioned previously. With the exception of SVM on \texttt{w8a}, GPU has the fastest time per iteration on dense data. This proves that -- with the optimal data layout -- the superior FLOPS on the GPU lead to better performance. Identifying the optimal layout, however, requires the in-depth analysis performed in this work. The higher computational complexity of LR is reflected in the higher time per iteration across all the configurations. Table~\ref{tbl:asynch-sgd-speedup} summarizes the speedup in time per iteration corresponding to these values. In the best case, parallel CPU achieves a speedup of 6X over sequential CPU on \texttt{news} which is consistent with results published in the literature~\cite{hogbatch,cyclades}. The best speedup of GPU over parallel CPU is at most 5X on \texttt{covtype}.
\end{minipage}
\hfill
\begin{minipage}{.5\textwidth}
\begin{center}
\begin{tabular}{l|l||rr}
{\textbf{task}} & {\textbf{dataset}} & {\textbf{cpu-seq/cpu-par}} & {\textbf{cpu-par/gpu}} \\
\hline
\multirow{5}{*}{LR} & covtype & 0.60 & 5.80 \\
& w8a & 2.54 & 2.11 \\
& real-sim & 3.09 & 0.30 \\
& rcv1 & 4.86 & 0.31 \\
& news & 6.09 & 0.13 \\
\hline
\multirow{5}{*}{SVM} & covtype & 0.69 & 5.13 \\
& w8a & 0.39 & 2.15 \\
& real-sim & 1.45 & 0.54 \\
& rcv1 & 3.18 & 0.72 \\
& news & 5.60 & 0.17
\end{tabular}
\end{center}
\caption{Speedup of asynchronous SGD time per iteration. cpu-seq/cpu-par measures the speedup of the parallel CPU implementation over the sequential CPU. cpu-par/gpu corresponds to the speedup of GPU over parallel CPU.}\label{tbl:asynch-sgd-speedup}
\end{minipage}
\hfill
\end{table}
\noindent
The reason it does not translate into better time to convergence is the much larger number of iterations---a factor of 15 or more. As expected, sequential CPU achieves convergence in the smallest number of iterations without data replication because it avoids update conflicts altogether. Intuitively, k-wise replication is equivalent to executing k passes over the data in a single iteration, i.e., k iterations. Asynchronous parallel processing always introduces additional iterations---correlated with the degree of parallelism.
\subsubsection{CPU vs. GPU}\label{sec:experiments:results:arch}
We group all the results -- hardware efficiency, statistical efficiency, and time to convergence -- across all the datasets in Figure~\ref{fig:lr-avgtime}--\ref{fig:lr-time} for LR and Figure~\ref{fig:svm-avgtime}--\ref{fig:svm-time} for SVM, respectively. We also include in the figures the synchronous SGD implemented in TensorFlow (tf) and BIDMach (bid)---on parallel CPU and GPU. Since TensorFlow supports only dense data, we have results only for \texttt{covtype} and \texttt{w8a}. The stacked bars for asynchronous SGD on GPU correspond to the configurations with and without replication---hardware efficiency is smaller without replication, while statistical efficiency is smaller with replication. The side-by-side depiction of the results allows for immediate comparison between CPU and GPU across four different implementations. It also allows us to compare our ViennaCL synchronous SGD with the ones implemented in TensorFlow and BIDMach.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=\textwidth]{lr-avgtime}
\caption{LR hardware efficiency comparison.}\label{fig:lr-avgtime}
\includegraphics[width=\textwidth]{lr-iter}
\caption{LR statistical efficiency comparison.}\label{fig:lr-iter}
\includegraphics[width=\textwidth]{lr-time}
\caption{LR time to convergence comparison.}\label{fig:lr-time}
\end{center}
\end{figure*}
For our parallel implementations -- synch and asynch -- the hardware efficiency results in Figure~\ref{fig:lr-avgtime} and~\ref{fig:svm-avgtime} are another representation of the time per iteration results from Table~\ref{tbl:synch-bgd} and~\ref{tbl:asynch-sgd}. As discussed previously, our synchronous GPU always outperforms synchronous CPU because of the higher degree of parallelism. We observe the same pattern for synchronous SGD in BIDMach---possibly with a smaller gap on sparse data. The TensorFlow results, while limited to dense data, are somewhat different. Parallel CPU is almost identical to GPU---and even better on \texttt{covtype}. We found matrix transpose computation to be the bottleneck on the GPU---the same problem as in ViennaCL. The hardware efficiency of asynchronous SGD is more sensitive to the task and dataset. Nonetheless, the trend in these figures suggests that GPU is better on dense data, while parallel CPU is better on sparse data. The reasons are discussed in Section~\ref{sec:experiments:results:asynch-sgd}. In terms of statistical efficiency, all the synchronous SGD implementations -- including TensorFlow and BIDMach -- require exactly the same number of iterations to converge to a given loss accuracy both on CPU and GPU. While the results confirm this, they also show that statistical efficiency is sensitive to the task and dataset. The statistical efficiency of asynchronous SGD on CPU is always better than on GPU -- without replication -- because of the reduced number of model update conflicts generated by a smaller number of threads. The gap between the two is especially high on dense data and small models.
The time to convergence for synchronous SGD is a scaled-up image of the time per iteration since the number of iterations is identical on CPU and GPU. Thus, with the exception of TensorFlow, synchronous GPU always converges faster than CPU. In addition to the matrix transpose inefficiency, the GPU kernels in TensorFlow are optimized for dense matrices appearing in deep nets convolutions. These operations are more compute-intensive than the matrix-vector multiplications in LR and SVM. Since BIDMach is not exclusively targeted at deep learning, it optimizes these kernels better, while our implementation is focused on generalized linear models. The time to convergence for asynchronous SGD is a direct rendering of the results in Table~\ref{tbl:asynch-sgd}. For the reasons discussed in that context, we observe that the CPU implementation always outperforms GPU, even though GPU has better hardware efficiency on dense data. However, this is not sufficient to compensate for the much higher number of iterations to converge.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=\textwidth]{svm-avgtime}
\caption{SVM hardware efficiency comparison.}\label{fig:svm-avgtime}
\includegraphics[width=\textwidth]{svm-iter}
\caption{SVM statistical efficiency comparison.}\label{fig:svm-iter}
\includegraphics[width=\textwidth]{svm-time}\\
\caption{SVM time to convergence comparison.}\label{fig:svm-time}
\end{center}
\end{figure*}
\subsubsection{Synchronous vs. asynchronous}\label{sec:experiments:results:sync-async}
Figure~\ref{fig:lr-avgtime}--\ref{fig:lr-time} and Figure~\ref{fig:svm-avgtime}--\ref{fig:svm-time} also provide a comparison between synchronous and asynchronous SGD for CPU and GPU, respectively. Since these are different algorithms initialized with different hyper-parameters, the only comparison that makes sense is in terms of hardware efficiency---the statistical efficiency and, thus, the time to convergence are properties of the task and dataset. On the GPU, the hardware efficiency of synchronous SGD is generally better than that of asynchronous SGD. On the CPU, the hardware efficiency of synchronous SGD is better only for low-dimensional models, while for sparse high-dimensional data asynchronous is slightly better. To understand these results, we first have to discuss the two implementations. Synchronous SGD consists of a series of simple kernels -- one for each linear algebra primitive -- that process the complete input data. The result of each kernel is fully materialized in memory. Essentially, synchronous SGD has a materialization execution strategy. Asynchronous SGD -- on the other hand -- consists of a single kernel that fuses all the gradient operations. Moreover, it also updates the model for each example, thus executing more operations. In view of this, materialization is preferred on the GPU -- as long as it does not incur memory overflow -- because of simpler memory access patterns. The complete elimination of per-example model updates -- equal in number to the number of examples -- is also an important factor. On the CPU, materialization is bounded by the size of the caches, not the complete memory---the smaller the intermediate results, the better. Since none of the datasets generates small-enough intermediates that can be cached in the upper layers of the hierarchy, a large number of cache misses is incurred, degrading performance.
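The contrast between the materialization and fusion strategies can be sketched as follows (a minimal NumPy sketch for LR; the function names are ours, and this is not the actual ViennaCL kernel code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def synchronous_step(X, y, w, step):
    """Materialization strategy: a pipeline of linear algebra primitives,
    each intermediate (scores, residuals, gradient) fully materialized."""
    scores = X @ w                   # one value per example
    residuals = sigmoid(scores) - y  # another full-size intermediate
    grad = X.T @ residuals / len(y)
    return w - step * grad           # single model update per iteration

def asynchronous_step(X, y, w, step):
    """Fusion strategy: one kernel fuses all gradient operations and
    updates the model after every example -- no large intermediates."""
    for i in range(len(y)):
        r = sigmoid(X[i] @ w) - y[i]
        w = w - step * r * X[i]
    return w
```

The materialized variant maps naturally onto GPU primitives with regular memory access, while the fused variant keeps the working set small at the cost of a model update per example.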
While operator fusion -- or compilation -- improves cache access, the cache coherency mechanism triggered by concurrent model updates continues to be an important deterrent for performance. On low-dimensional models, fusion cannot overcome the large number of model update conflicts per example. On sparse high-dimensional models with large intermediates, however, the number of conflicts is minor. Thus, asynchronous SGD outperforms synchronous SGD in hardware efficiency.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=\textwidth]{lr-loss}
\caption{Comparison in time to convergence between synchronous GPU and asynchronous CPU on LR.}\label{fig:lr-loss}
\includegraphics[width=\textwidth]{svm-loss}
\caption{Comparison in time to convergence between synchronous GPU and asynchronous CPU on SVM.}\label{fig:svm-loss}
\end{center}
\end{figure*}
We perform a direct comparison in time to convergence only between synchronous GPU and asynchronous CPU---the optimal configurations identified for each model update strategy. We measure the loss as a function of time for exactly the same hyper-parameters and the same initialization conditions. This allows us to isolate the effect of the update strategy while using the optimal computing architecture. The results are depicted in Figure~\ref{fig:lr-loss} for LR and Figure~\ref{fig:svm-loss} for SVM, respectively. Synchronous GPU achieves better convergence for certain dataset/task pairs, while asynchronous CPU is better for others. Specifically, synchronous GPU dominates on LR, while asynchronous CPU dominates on SVM. Given that this is essentially a comparison between batch gradient descent -- which corresponds to synchronous GPU -- and stochastic gradient descent -- which corresponds to asynchronous CPU -- we do not expect a single winner all the time. As shown previously in the literature~\cite{gd-optimization}, the best optimization strategy is particular to the task and the dataset. Our results confirm this finding for parallel optimizers with different model update strategies---a new scenario that has not been studied before. To summarize, \textit{while GPU is the optimal architecture for synchronous SGD and CPU is optimal for asynchronous SGD, choosing the better of synchronous GPU and asynchronous CPU is task- and dataset-dependent}.
\subsubsection{Comparison with TensorFlow and BIDMach}\label{sec:experiments:results:tf-bid}
We compare our synchronous SGD implementation in ViennaCL with the solutions in TensorFlow and BIDMach based on the results in Figure~\ref{fig:lr-avgtime}--\ref{fig:lr-time} and Figure~\ref{fig:svm-avgtime}--\ref{fig:svm-time}. The main point of this comparison is only to verify that our implementation is efficient. We observe that our synchronous SGD outperforms both TensorFlow and BIDMach in time per iteration and time to convergence for all the datasets and all the tasks---both on CPU and GPU. We emphasize that we measure only the time spent in critical processing across all the solutions. The performance of TensorFlow and BIDMach on dense data is comparable. TensorFlow is slightly better on the CPU because of the Java/Scala overhead incurred by BIDMach---BIDMach, in turn, is better on GPU. As previously discussed, TensorFlow has an inefficient matrix transpose on GPU. In terms of statistical efficiency, there are cases where our synchronous SGD requires more iterations to converge. Nonetheless, they are rare and, when they occur, the statistical efficiency of our asynchronous SGD is better than that of TensorFlow and BIDMach. The differences between the synchronous implementations, despite their being algorithmically identical, stem from different linear algebra kernels and different model update protocols. In summary, \textit{our SGD implementations always outperform TensorFlow and BIDMach in time per iteration, number of iterations to convergence, and time to convergence}.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=\textwidth]{scal-lr-svm}
\caption{Hardware efficiency as a function of the number of examples or size of the training dataset.}\label{fig:scal-lr-svm}
\end{center}
\end{figure*}
\subsubsection{Scalability with the number of examples}\label{sec:experiments:results:scal-examples}
In order to study the effect of the number of examples in the training dataset -- or the size of the dataset -- on the hardware efficiency of our algorithms, we generate three different instances of the \texttt{covtype} and \texttt{news} datasets with increasing number of examples. The size of these datasets is 100 MB, 500 MB, and 1 GB, respectively. We execute all the algorithms on these datasets and measure the time per iteration. The results are depicted in Figure~\ref{fig:scal-lr-svm}. As expected, the time per iteration increases almost linearly with the increase of the dataset size. More importantly, the relative ordering of the algorithms is almost always preserved. Inversions happen on the sparse \texttt{news} dataset and they are minor. Specifically, asynchronous GPU becomes slightly faster than synchronous CPU, while asynchronous CPU becomes slightly faster than synchronous GPU. To put it differently, the synchronous solutions on sparse data are impacted negatively by larger training datasets. The reason is the increase in size of the intermediate results which deteriorate cache efficiency on CPU and memory usage on GPU, respectively. The rate of model update conflicts on dense data is too high for this behavior to be observed. We point out that we cannot study the effect of dataset size on convergence because the value of the loss function changes with the number of examples.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=\textwidth]{dim-lr-svm}
\caption{Hardware efficiency as a function of the number of features or dimensions in dense and sparse format.}\label{fig:dim-lr-svm}
\end{center}
\end{figure*}
\subsubsection{Scalability with the number of features}\label{sec:experiments:results:scal-features}
We take the three lowest-dimensional datasets -- \texttt{covtype}, \texttt{w8a}, and \texttt{real-sim} -- and materialize them in dense format while keeping the number of examples identical. The largest generated dataset, i.e., \texttt{real-sim}, is 2.15 GB in size. In order to generate data in sparse format, we extract the same number of examples from the three highest-dimensional datasets---\texttt{real-sim}, \texttt{rcv1}, and \texttt{news}. With this number of examples, \texttt{news} has the largest size---approximately 2 GB. We execute all the proposed algorithms on all the datasets for LR and SVM. Figure~\ref{fig:dim-lr-svm} depicts hardware efficiency as a function of the number of features. On data in dense format -- as expected -- the time per iteration increases with the increase in dimensionality. Unlike the increase with the number of examples, in this case we have a slightly sub-linear increase that impacts all the configurations equally---the relative order of the algorithms is preserved with the increase in dimensionality. For synchronous algorithms, the increase is due entirely to handling larger matrices. In the case of asynchronous algorithms, the examples become fully populated through ``densification'' and thus access the entire model---the semantics of ``zero'' is unknown during execution. As a result, the time per iteration is proportional to the number of features. The results on sparse data are more intriguing. First, we observe that going from \texttt{real-sim} to \texttt{rcv1} -- doubling the number of features -- reduces the time per iteration for the synchronous algorithms. The increase in dimensionality, however, comes with a relatively small increase in the average number of non-zero dimensions per example. Moreover, the examples in \texttt{rcv1} display less variance in the number of non-zeros compared to \texttt{real-sim}---\texttt{rcv1} is more homogeneous than \texttt{real-sim}.
We observe that ViennaCL handles the more uniform \texttt{rcv1} matrix more efficiently with respect to cache and memory access, thus, the reduced time per iteration. The other phenomenon we observe on sparse data is that the relative performance of asynchronous CPU and synchronous GPU switches with the increase in dimensionality. This is similar to the behavior in Figure~\ref{fig:scal-lr-svm} and the reasons are the same---\textit{asynchronous CPU outperforms synchronous GPU on highly-dimensional models trained over a sufficiently large number of examples}.
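The densification effect discussed above can be sketched as follows (illustrative Python; function names are ours): in sparse format the per-example update touches only the non-zero coordinates, while a densified example forces a read and write of every model coordinate.

```python
def update_sparse(w, x_nnz, r, step):
    """Sparse update: work proportional to the number of non-zeros.
    x_nnz is a list of (index, value) pairs."""
    for i, v in x_nnz:
        w[i] -= step * r * v

def update_dense(w, x, r, step):
    """Densified update: 'zero' has no special meaning at execution
    time, so work is proportional to the number of features."""
    for i in range(len(w)):
        w[i] -= step * r * x[i]
```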
\subsection{Summary}\label{sec:experiments:summary}
Based on the extensive experiments we perform, we are in the position to provide answers to the questions identified at the beginning of the section. We repeat the questions -- this time with the corresponding answers -- in the following:
\begin{compactitem}
\item What is the role of the computing architecture, i.e., CPU/GPU, on the performance of synchronous SGD? \textit{GPU always outperforms parallel CPU in hardware efficiency and, consequently, in time to convergence. The difference is minimal for small low-dimensional datasets and increases with dimensionality and sparsity---for a maximum speedup of 5.66X.}
\item What is the role of the computing architecture, i.e., CPU/GPU, on the performance of asynchronous SGD? \textit{Although parallel CPU outperforms GPU in general, it is more difficult to identify the optimal computing architecture for asynchronous SGD. The main reason is the complex interaction between hardware and statistical efficiency.}
\item What is the optimal configuration for asynchronous SGD on GPU? \textit{The optimal data access path + model replication + data replication configuration depends on the task and the training dataset. Limiting any implementation to a single configuration results in sub-optimal -- and sometimes no -- convergence.}
\item How do synchronous and asynchronous SGD compare against each other on CPU and GPU separately, and across computing platforms? \textit{Synchronous SGD is the optimal choice on GPU and asynchronous SGD is the safe choice on CPU. The better choice between these two depends on the task and the training dataset since they mirror the comparison between BGD and SGD.}
\item Are our implementations efficient with respect to TensorFlow and BIDMach? \textit{Our synchronous SGD -- which is the equivalent of the TensorFlow and BIDMach implementations -- is always faster in time to convergence both on CPU and GPU.}
\item How do the proposed algorithms scale with the number of training examples and the dimensionality of the feature vector, respectively? \textit{On dense data, the increase in time per iteration is proportional to the increase in data size for all the algorithms. On sparse data, the distribution of the non-zero entries impacts scalability to a similar degree as model dimensionality---in certain cases even more.}
\end{compactitem}
\section{RELATED WORK}\label{sec:rel-work}
\textbf{SGD on CPU.}
SGD is the most popular optimization method to train analytics models. Bismarck~\cite{bismarck} and GLADE~\cite{igd-glade} present methods to implement SGD inside a database engine. DimmWitted~\cite{dimm-witted} provides a study on how to implement parallel SGD on NUMA architectures. While similar exploratory axes and measure terminology are introduced, the focus on GPU is what distinguishes our paper from DimmWitted. Hogwild~\cite{hogwild} performs model updates concurrently and asynchronously without locks. Due to this simplicity -- and the near-linear speedup -- Hogwild is widely used in many analytics tasks~\cite{RRTB12,bismarck,LWR+14,DJM13,google-brain,project-adam}. Hogbatch~\cite{hogbatch} is an extension to Hogwild that is more scalable to cache-coherent architectures, while Cyclades~\cite{cyclades} reduces model update conflicts using graph partitioning. Hogwild extensions to big models based on model partitioning are introduced in~\cite{dot-product-join-ssdbm,hogwild-disk}. Buckwild~\cite{buckwild} is a low-precision variant of Hogwild that represents the data and model with fewer bits. Model averaging~\cite{parallel-igd} is an alternative method to parallelize SGD that is adequate in distributed settings. A detailed experimental comparison of Hogwild and averaging is provided in~\cite{igd-glade-ola}. The integration of relational join with gradient computation has been studied in~\cite{sgd-over-join,sgd-over-join-2,bgd-over-factorized-join}. These solutions work only for batch gradient descent (BGD), not SGD. A cost-based optimizer that selects between sequential BGD and SGD is proposed in~\cite{gd-optimization}.
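The Hogwild idea -- threads updating a shared model concurrently, without locks -- can be sketched as follows (illustrative Python with a hinge loss; the implementations cited above are optimized native code, and all names here are ours):

```python
import random
import threading

def hogwild_train(examples, dim, n_threads=2, step=0.1, iters=100):
    """Lock-free asynchronous SGD: all threads read and write the
    shared model w concurrently; sparse updates rarely conflict."""
    w = [0.0] * dim  # shared model, deliberately unprotected

    def worker(seed):
        rng = random.Random(seed)
        for _ in range(iters):
            x, y = examples[rng.randrange(len(examples))]
            # x is a sparse list of (index, value) pairs, y is +1 or -1
            margin = y * sum(w[i] * v for i, v in x)
            if margin < 1:  # hinge-loss subgradient step
                for i, v in x:
                    w[i] += step * y * v  # racy read-modify-write

    threads = [threading.Thread(target=worker, args=(s,))
               for s in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return w
```

Lost updates from races merely perturb the trajectory; on sparse data the conflict probability is low, which is what makes the lock-free scheme converge in practice.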
\textbf{SGD on GPU.}
SGD is supported by all the major deep learning frameworks, including Caffe, TensorFlow, MXNet, BIDMach, SINGA, Theano, and Torch. These frameworks implement optimized kernels for GPU processing. As far as we can tell, all these kernels are for synchronous SGD---there is no Hogwild GPU kernel. As pointed out in~\cite{Caffe-con-Troll}, since convolutions are the most expensive operation in deep learning, they are the main candidate for offloading on GPU. GeePS~\cite{geeps} implements a distributed parameter server for training across multiple GPUs. Omnivore~\cite{Omnivore} is an optimizer for deep learning on CPU and GPU that achieves better SGD performance because of careful data partitioning and placement. The asynchronous SGD supported in Omnivore is cross-device, not within the GPU---the case in our work. GPUs are effectively used for querying deep neural networks in NoScope~\cite{CuMF-SGD}. The work outside deep learning is targeting low-rank matrix factorization for recommender systems. In~\cite{MF-SGD-GPU}, dynamic scheduling strategies for low-rank matrix factorization on GPU are explored. The problem is modeled as a graph and scheduling is executed for independent subgraphs which do not have update conflicts. cuMF\_SGD~\cite{CuMF-SGD} extends dynamic scheduling with optimized SGD kernels that leverage the GPU cache, warp-shuffle instructions, and low-precision arithmetic. This is the only Hogwild GPU kernel we found in the literature. However, the design space is not explored at all.
\section{CONCLUSIONS AND FUTURE WORK}\label{sec:conclusions}
In this paper, we perform a comprehensive study of parallel SGD for generalized linear models over NUMA CPU and GPU architectures. We measure hardware efficiency, statistical efficiency, and time to convergence as a function of the objective function, model updates, and data sparsity. Overall, our study shows that the optimal SGD solution for a given architecture is highly-dependent on all of these factors. Thus, the main value of this work is to map the overall solution space and provide a useful guide for applying parallel SGD in practice. In the process, we also design several optimizations for asynchronous SGD on GPU which have their stand-alone value. We draw several interesting insights from our extensive experimental study on five real datasets. For synchronous SGD, GPU always outperforms parallel CPU---they both outperform a sequential CPU solution by more than 400X. For asynchronous SGD, parallel CPU is the safest choice while GPU with data replication is better in certain situations. The choice between synchronous GPU and asynchronous CPU depends on the task and the characteristics of the data. While LR and SVM are wide-spread ML tasks, it is intriguing to see how the results extend to other types of models, such as low-rank matrix factorization and deep neural nets. This is a topic we will pursue in future work. We point out that in low-rank matrix factorization the structure of the problem imposes limitations that are not SGD-specific, but rather method-specific. Similarly, in convolution deep nets, computing the convolution kernels is the most time-consuming operation. By focusing on simpler tasks such as LR and SVM, we are able to quantify the exclusive performance of the SGD algorithms. In the future, we also plan to consider low-precision formats in data representation and study heterogeneous solutions that integrate concurrent processing across the CPU and GPU. 
We also intend to explore optimization strategies that select between synchronous GPU and asynchronous CPU dynamically.
\paragraph*{Acknowledgments}
This work is supported by a U.S. Department of Energy Early Career Award (DOE Career).
\bibliographystyle{abbrv}
\section{Introduction}
The susceptible-infectious-susceptible (SIS) model is the canonical model of infectious diseases that leave people re-susceptible to the disease upon recovery. Like other compartmental models of infectious diseases~\cite{hethcote,andersonmay}, it consists of two main components. First, a local description of how the disease spreads between pairs of people and how it dies out. A susceptible individual in contact with an infectious individual becomes infectious at a rate $\beta$; infectious persons become re-susceptible at a rate $\nu$. Second, every epidemic model also describes how people come in contact with each other. Traditionally, one has assumed a fully-connected, or well-mixed, scenario---that anyone can meet anyone else with the same chance at all times. Lately, it has become popular to assume that the population is connected into a network and that everyone connected by an edge has an equal probability of meeting one another, while pairs with no edge never meet.
Research on the SIS model typically focuses on one of three questions. First, in a finite population, how long does it take for the outbreak to die out~\cite{o2,nasell,schwartz,doering,fagani}? Second, in an infinite population, there will be a threshold value of $\beta$ (given $\nu$) below which the outbreak inevitably dies out and above which it can live forever. This line of research investigates how the network structure---the probability distribution of degree (the number of neighbors), the number of triangles, etc.---affects the threshold~\cite{ps_rmp}. In almost all cases (Ref.~\cite{shadows} being an exception), authors have explored the large-size limit by stochastic simulations or approximate calculations. Other questions include the ranking of important vertices with respect to the outbreak~\cite{qu_etal} and the chains of events that are most likely to lead to extinction~\cite{hindes}.
In this work, we will investigate a mix of the questions above: namely, how the network structure and the position of infectious vertices affect extinction in small connected graphs. To our knowledge, this is the first study to investigate the time to extinction from different configurations (of who is susceptible and who is infectious). Scanning small graphs has, however, occasionally been used in network science~\cite{masuda,holme3f,heetae}. Rather than addressing these questions with stochastic simulations, we calculate the exact expression for the expected time to extinction as a function of $\beta$ (we set $\nu=1$ without loss of generality). This approach is computationally expensive, so we restrict ourselves to connected graphs of eight vertices or fewer. On the other hand, we are not restricted to graph models but can go through every distinct (non-isomorphic) such graph; 12,110 in total.
Our non-stochastic computational approach makes it possible to discover exact relations among small graphs, such as: what is the smallest graph for which the ranking of configurations' extinction times is independent of $\beta$? We can also discover scaling relations which, on one hand, are only verified for small graphs but, on the other hand, cover the large-$\beta$ regime that is inaccessible to stochastic simulations. In general, the study of small graphs can be seen as a complement to the (more common) large-scale studies. Of course, the large-scale limit is a limiting case, just as small networks are. One can argue that some types of networks are so small that they lack the self-averaging of large networks, making the large-scale approach simply wrong for them. Animal trade networks~\cite{Bajardirsif20120289}, for example, can be represented as graphs where the nodes are farms (technically speaking, metapopulations). These are often small by design, to restrict outbreaks.
In the rest of the paper, we will describe our approach, in parallel present one example and introduce the general theory. Then we will go through the numerical findings, first in the limit of large $\beta$ and finally study how the ranking of configurations of susceptible and infectious nodes depend on $\beta$.
\section{Preliminaries}
\subsection{The SIS model}
Assume a graph $G=(V,E)$ with $N$ vertices labeled from $0$ to $N-1$. Let $\phi_i$ be a binary state variable ($\phi_i\in\{0,1\}$). We will interpret $\phi_i=0$ as vertex $i$ being susceptible, while $\phi_i=1$ means that $i$ is infectious. In the common formulation of the SIS model~\cite{daleygani}, the probability of a susceptible vertex $i$ being infected by an infectious neighbor $j$ is $\beta$ per time unit, independent of when $j$ became infectious. Likewise, the recovery of $j$ is time independent, leading to an exponential distribution of the duration of infections. Without loss of generality, we can set the recovery rate to unity. This lack of memory (i.e.\ the Markov property) means that we can encode the current situation of the outbreak into a number
\begin{equation}\label{eq:s_def}
s=\sum_{i=0}^{N-1}\phi_i 2^i.
\end{equation}
This gives $s\in[0,2^N-1]$; it amounts to reading the string of S's and I's as 0's and 1's and interpreting it as a binary number. We will refer to $s$ as a \textit{configuration} of vertex states.
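As a concrete illustration of this encoding, a minimal Python sketch (the function names are ours):

```python
def encode(phi):
    """Map a list of binary vertex states (phi_i) to the configuration number s."""
    return sum(bit << i for i, bit in enumerate(phi))

def decode(s, N):
    """Recover the vertex states from a configuration number s."""
    return [(s >> i) & 1 for i in range(N)]
```

For the three-vertex example used below, the configuration with vertices 0 and 1 infectious is $s=2^0+2^1=3$.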
Next we will proceed to set up the equations for the expected time to extinction from a certain configuration. The derivation closely follows the derivation of the master equations (or Kolmogorov equations) giving the probability of the system being in a certain configuration~\cite{daleygani,kiss}. We thus effectively treat the SIS dynamics as a random walk in the space of configurations $s$, where $s=0$ is an absorbing configuration~\cite{masuda:rev}.
Now let $I(s)$ be the set of configurations reachable from $s$ by an infection event and $S(s)$ the set of configurations reachable from $s$ by a recovery. Let $\omega_s=|S(s)|$ be the number of infectious vertices, a.k.a.\ the \textit{prevalence}. Let $m_{st}$ be the number of edges between an infectious vertex in configuration $s$ and the vertex that is susceptible in $s$ and infectious in $t$ (in our encoding of configurations, this vertex is $\log_2 (t-s)$). Because of the exponential distribution of the durations in the susceptible and infectious states, the rates of events are additive. The total event rate $z_s(\beta)$ is
\begin{equation}
z_s(\beta) = \beta\sum_{t\in I(s)}m_{st}+\omega_s,
\end{equation}
where, as above, $m_{st}$ is the number of infection events that would turn $s$ into $t$. This gives the expected duration of configuration $s$ as $1/z_s(\beta)$. The probability that the next configuration becomes $t$ via an infection event is $\beta \, m_{st}/z_s(\beta)$, while the probability that the next configuration is a particular $t$ reachable through a recovery event is $1/z_s(\beta)$, see Fig.~\ref{fig:ex}.
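These rates and probabilities are straightforward to compute; a Python sketch (the graph is given as adjacency lists, and the function name is ours):

```python
def transitions(adj, s, beta):
    """Return the total event rate z_s and a dict mapping each reachable
    configuration t to its transition probability."""
    N = len(adj)
    infectious = [i for i in range(N) if (s >> i) & 1]
    # m[j]: number of infectious neighbors of susceptible vertex j (= m_st)
    m = {}
    for i in infectious:
        for j in adj[i]:
            if not (s >> j) & 1:
                m[j] = m.get(j, 0) + 1
    omega = len(infectious)              # prevalence = total recovery rate
    z = beta * sum(m.values()) + omega   # total event rate z_s(beta)
    probs = {s | (1 << j): beta * mj / z for j, mj in m.items()}  # infections
    for i in infectious:
        probs[s & ~(1 << i)] = 1 / z     # recoveries
    return z, probs
```

For example, for the path graph $0$--$1$--$2$ with only the middle vertex infectious ($s=2$) and $\beta=2$, the total rate is $2\beta+1=5$, the two infection events have probability $2/5$ each, and the recovery has probability $1/5$.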
\subsection{Expected time to extinction}
\label{sec:exptime}
Consider a graph $G$. Let $x_s$ denote the expected time to extinction from configuration $s$. We can write down self-consistency equations for $x_s$ by noting that it equals the expected lifetime of configuration $s$, $T_s=1/z_s(\beta)$, plus the expected extinction times of the configurations reachable from $s$, weighted by their transition probabilities. Symbolically:
\begin{equation}
x_s = T_s + \sum_t x_t \times \mathrm{Prob} (s\rightarrow t), \quad s\in[1, 2^N-1] .
\end{equation}
By the elementary laws of probability and the probabilities given in the previous section, this equation becomes
\begin{subequations}\label{eq:self}
\begin{align}
z_s(\beta) x_s &= 1 + \beta\sum_{t\in I(s)} m_{st} x_t + \sum_{t\in S(s)} x_t, ~ s>0\\
x_0&=0 .
\end{align}
\end{subequations}
The above equations can be rewritten in the matrix form
\begin{equation}\label{eq:self2}
\mathbf{U}(\beta){\bf x} + {\bf 1} = 0
\end{equation}
where ${\bf 1}=(1,\dots,1)^T$, ${\bf x} =(x_0,\dots, x_{2^N-1})^T$, and $\mathbf{U}(\beta)$ is a \emph{polynomial matrix}~\cite{Gantmacher} (since some of its elements depend on the parameter $\beta$) defined by:
\begin{equation}\label{eq:y_mat}
U_{st}(\beta) = \left\{\begin{array}{ll} 1 & \mbox{if $s-t=2^i$, $i\in V$}\\
\beta m_{st} & \mbox{if $s\neq 0$ and $t-s=2^i$, $i\in V$}\\
-z_s(\beta) & \mbox{if $s=t$}\\
0 & \mbox{otherwise}
\end{array} \right.
\end{equation}
where the condition $s-t=2^i$, $i\in V$, should be read as saying that the only difference between $s$ and $t$ is that the vertex with number $i$ is infectious in $s$ and susceptible in $t$ (for the single-bit changes effected by infection and recovery events, the two statements are equivalent).
Extending Eq.~\eqref{eq:self} to all configurations $s$ generates a linear system of equations with as many equations as unknowns. We can thus solve it (we use Gaussian elimination rather than more elaborate methods~\cite{mcclellan}) to get the expected extinction time from any initial configuration $s$.
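A direct, unoptimized version of this computation for a fixed rational $\beta$ (rather than the symbolic polynomials used in the paper) might look as follows in Python, with exact arithmetic via the standard \texttt{fractions} module standing in for polynomial algebra; the function name is ours:

```python
from fractions import Fraction

def extinction_times(adj, beta):
    """Expected extinction times x_1 ... x_{2^N-1} for SIS on the graph
    given by adjacency lists adj, at a fixed infection rate beta."""
    N = len(adj)
    n = 2 ** N
    beta = Fraction(beta)
    A = [[Fraction(0)] * (n - 1) for _ in range(n - 1)]
    b = [Fraction(-1)] * (n - 1)          # U x = -1 (cf. Eq. self2); x_0 = 0
    for s in range(1, n):
        row = A[s - 1]
        z = Fraction(0)
        for i in range(N):
            if (s >> i) & 1:              # vertex i infectious: recovery
                t = s ^ (1 << i)
                z += 1
                if t > 0:                 # x_0 = 0 drops out of the system
                    row[t - 1] += 1
            else:                         # vertex i susceptible: infection
                m = sum(1 for j in adj[i] if (s >> j) & 1)   # m_st
                if m:
                    z += beta * m
                    row[(s | (1 << i)) - 1] += beta * m
        row[s - 1] -= z
    # Gauss-Jordan elimination with partial pivoting, exact arithmetic.
    for c in range(n - 1):
        p = max(range(c, n - 1), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(n - 1):
            if r != c and A[r][c]:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * ac for a, ac in zip(A[r], A[c])]
                b[r] -= f * b[c]
    return [b[r] / A[r][r] for r in range(n - 1)]
```

For the path graph of Fig.~\ref{fig:ex} at $\beta=1$ this reproduces, e.g., $x_1=x_4=101/56$.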
For the example in Fig.~\ref{fig:ex}, Eq.~\eqref{eq:self} becomes:
\begin{subequations}\label{eq_subsyst}
\begin{align}
(\beta+1)x_1 &= 1 + \beta x_3 \\
(2\beta+1)x_2 &= 1 + \beta x_3 + \beta x_6\\
(\beta+2)x_3 &= 1 + x_1 + x_2 + \beta x_7\\
(\beta+1)x_4 &= 1 + \beta x_6 \\
(2\beta+2)x_5 &= 1 + x_1 + x_4 + 2\beta x_7 \\
(\beta+2)x_6 &= 1 + x_2 + x_4 + \beta x_7\\
3x_7 &= 1 + x_3 + x_6 + x_5 ,
\end{align}
\end{subequations}
where we have omitted the trivial $x_0=0$.
One can reduce this equation system further by grouping automorphically equivalent configurations (i.e.\ configurations that can be mapped to one another by a relabeling of the vertices)~\cite{Simon2011}. In the example of Fig.~\ref{fig:ex}, configurations 1 and 4, and 3 and 6, form two automorphic equivalence classes. This reduces the equation system to:
\begin{subequations}\label{eq:redsys}
\begin{align}
(\beta+1)x_{1,4} &= 1 + \beta x_{3,6} \\
(2\beta+1)x_2 &= 1 + 2\beta x_{3,6}\\
(\beta+2)x_{3,6} &= 1 + x_{1,4} + x_2 + \beta x_7\\
(2\beta+2)x_5 &= 1 + 2x_{1,4} + 2\beta x_7 \\
3x_7 &= 1 + 2x_{3,6} + x_5 ,
\end{align}
\end{subequations}
which, furthermore, gives a reduced version of ${\bf U}$ that we call ${\bf Y}$
\begin{equation}
\label{eq:ymat}
{\bf Y}(\beta) =
\begin{bmatrix}
-\beta-1 & 0 & \beta & 0 & 0 \\
0 & -2\beta-1 & 2\beta & 0 & 0 \\
1 & 1 & -\beta - 2 & 0 & \beta \\
2 & 0 & 0 & -2\beta-2 & 2\beta\\
0 & 0 & 2 & 1 & -3
\end{bmatrix} .
\end{equation}
Eq.~\eqref{eq:self2} holds with ${\bf U}$ replaced by ${\bf Y}$. Some properties of the matrix ${\bf Y}$ that hold for any network include:
\begin{enumerate}
\item \label{item:below} Below the diagonal, all elements are $\beta$ independent.
\item \label{item:above} Above the diagonal, the elements are integers times $\beta$.
\item \label{item:row} In each row, except the last (corresponding to the all-infectious configuration), there are two $\beta$-dependent elements, and their coefficients of $\beta$ sum to zero.
\item \label{item:diag} The diagonal is such that every row sums to zero, except the rows representing states that can reach $s=0$ by one recovery event, for which the row sum is $-1$.
\end{enumerate}
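These properties can be checked mechanically on the example matrix of Eq.~\eqref{eq:ymat}. In the Python sketch below (our own representation, not part of the computation pipeline), each entry $c_0+c_1\beta$ is stored as a pair $(c_0,c_1)$:

```python
# Each entry c0 + c1*beta of the example matrix Y is stored as (c0, c1).
Y = [
    [(-1, -1), (0, 0), (0, 1), (0, 0), (0, 0)],
    [(0, 0), (-1, -2), (0, 2), (0, 0), (0, 0)],
    [(1, 0), (1, 0), (-2, -1), (0, 0), (0, 1)],
    [(2, 0), (0, 0), (0, 0), (-2, -2), (0, 2)],
    [(0, 0), (0, 0), (2, 0), (1, 0), (-3, 0)],
]
n = len(Y)
# 1. Below the diagonal: beta independent (no beta coefficient).
assert all(Y[r][c][1] == 0 for r in range(n) for c in range(r))
# 2. Above the diagonal: integer multiples of beta (no constant part).
assert all(Y[r][c][0] == 0 for r in range(n) for c in range(r + 1, n))
# 3. In every row but the last, the beta coefficients sum to zero.
assert all(sum(e[1] for e in Y[r]) == 0 for r in range(n - 1))
# 4. Constant parts of each row sum to 0, except -1 for the rows whose
#    states reach s = 0 by one recovery (the two prevalence-1 classes).
assert [sum(e[0] for e in Y[r]) for r in range(n)] == [-1, -1, 0, 0, 0]
```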
Moreover, we note that the number of automorphic equivalence classes $n$ (excluding the extinct configuration) gives the order of the reduced matrix.
Our example system in Eq.~\eqref{eq:redsys} has the solution:
\begin{subequations}\label{eq:ex_res}
\begin{align}
x_{1,4} &= \frac{4\beta^4+16\beta^3+35\beta^2+34\beta+12}{16\beta^2+28\beta+12} \\
x_2 &= \frac{4\beta^4+18\beta^3+42\beta^2+40\beta+12}{16\beta^2+28\beta+12} \\
x_{3,6} &= \frac{4\beta^4+20\beta^3+51\beta^2+53\beta+18}{16\beta^2+28\beta+12}\\
x_5 &= \frac{4\beta^4+20\beta^3+53\beta^2+52\beta+18}{16\beta^2+28\beta+12} \\
x_7 &= \frac{4\beta^4+20\beta^3+57\beta^2+62\beta+22}{16\beta^2+28\beta+12} .
\end{align}
\end{subequations}
The expressions for $x_2$ and $x_{3,6}$ can be further simplified, but for comparison, we keep the same denominator.
\subsection{Algebraic calculations}
Solving Eq.~\eqref{eq:self2} is computationally complex. The major bottleneck is the polynomial algebra (to be precise---calculating the greatest common divisor needed to reduce the fractions of polynomials to their canonical form). The code was implemented in C with the FLINT library~\cite{flint} for polynomial algebra. To group automorphically equivalent configurations, it also relies on the subgraph-isomorphism algorithm VF2~\cite{vf2} as implemented in the igraph C library~\cite{igraph}. Finding subgraph isomorphisms---although a classical, computationally hard problem---is in practice relatively quick and this enables us to discover and exploit all symmetries rather than \textit{a priori} focusing on symmetrical graphs (cf.\ Ref.~\cite{kiss}).
Our code is available at \url{github.com/pholme/sis_exact/}.
\subsection{Small distinct graphs}
We systematically evaluate small distinct (non-isomorphic) connected graphs of sizes up to $8$ vertices: $3\leq N\leq 8$. There are two such graphs with $N=3$, six with $N=4$, 20 with $N=5$, 112 with $N=6$, 853 with $N=7$ and 11,117 with $N=8$, in total, $12,110$ graphs for $3\leq N\leq 8$ vertices. To generate these, we use the program Geng~\cite{McKay201494}. They can also be downloaded and viewed at \url{http://www.graphclasses.org/smallgraphs.html}.
\begin{figure}
\includegraphics[width=\columnwidth]{ex.pdf}
\caption{(Color online) Panel (a) shows the four equivalence classes of configurations of the SIS model on the unique graph with three vertices and two edges. The values on the arrows give the transition probabilities. Arrows and probabilities for equivalent configurations (1 and 4, and 3 and 6) are only shown for one of the configurations. Configuration 0 is absorbing---no arrows lead out from it. Panel (b) shows the expected extinction times $x$ derived from (a) as a function of the infection rate $\beta$. The vertical line at $\beta=1/2$ shows where configuration 5 starts having a longer expected extinction time than configurations 3 and 6.}
\label{fig:ex}
\end{figure}
\subsection{Kendall's $\tau$}
We compare several types of correlations (e.g.\ between structural measures and times to extinction) in this work. To do that, we will use Kendall's $\tau$, a rank-type correlation coefficient. It is defined as the fraction of pairs of data points connected by a line with a positive slope, minus the fraction of pairs connected by a line with a negative slope~\cite{knight}. If its value is $+1$, there is a perfect correlation between the ranks of all data points; if the value is $-1$, there is a perfect anti-correlation; $\tau=0$ represents no correlation. We use this coefficient rather than other popular ones for three reasons. First, the output data is typically not Gaussian, so the premises for Pearson's correlation coefficient are violated. Second, to reduce disk-space usage we do not store the explicit expressions of $\mathbf{x}$, but rather their order in the large- and small-$\beta$ limits (the actual values are not needed to calculate $\tau$, only the ranks). Third, the number of data points is small enough to use Kendall's $\tau$ rather than the faster, but less principled, Spearman rank correlation coefficient.
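In the form used here, Kendall's $\tau$ can be computed directly from its definition (a simple $O(n^2)$ Python sketch, sufficient for the sample sizes of this study; the function name is ours):

```python
def kendall_tau(xs, ys):
    """Kendall's tau: (concordant - discordant) pairs over all pairs."""
    n = len(xs)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            prod = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if prod > 0:
                concordant += 1    # pair ordered the same way in both lists
            elif prod < 0:
                discordant += 1    # pair ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)
```

Identical rankings give $+1$, perfectly reversed rankings give $-1$.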
\section{Results}
\subsection{An example}
We start the discussion of our results by examining the example of Section~\ref{sec:exptime} and Fig.~\ref{fig:ex}. Many properties of the solution, Eqs.~\eqref{eq:ex_res}, hold also for other $N$.
First, in the small-$\beta$ limit, the solutions are the harmonic numbers of the prevalences $\omega_s$. This follows immediately from the dynamics defined above---in this limit, all events are recovery events; with $\omega$ infectious vertices the expected time to the next event is $1/\omega$, and $\omega$ decreases by one at every event, which leads to the harmonic number $H_{\omega_s}=\sum_{k=1}^{\omega_s} 1/k$.
Second, for large $\beta$, the extinction time approaches the asymptote $u\beta^{N-1}$. For all graphs we study, the prefactor $u$ is the same for all configurations $s$ but depends on the graph structure $G$. Below, we study $u$ for all our graphs.
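Both limits are easy to check against Eqs.~\eqref{eq:ex_res} with exact rational arithmetic (a Python sketch; for this graph, the prevalences of the five classes are $1,1,2,2,3$, and the leading coefficients give $u=4/16=1/4$):

```python
from fractions import Fraction

def x_example(beta):
    """The five extinction times of Eq. (ex_res), exactly, at a given beta."""
    b = Fraction(beta)
    den = 16 * b**2 + 28 * b + 12
    return [(4 * b**4 + 16 * b**3 + 35 * b**2 + 34 * b + 12) / den,  # x_{1,4}
            (4 * b**4 + 18 * b**3 + 42 * b**2 + 40 * b + 12) / den,  # x_2
            (4 * b**4 + 20 * b**3 + 51 * b**2 + 53 * b + 18) / den,  # x_{3,6}
            (4 * b**4 + 20 * b**3 + 53 * b**2 + 52 * b + 18) / den,  # x_5
            (4 * b**4 + 20 * b**3 + 57 * b**2 + 62 * b + 22) / den]  # x_7

# beta -> 0: the harmonic numbers H_1, H_1, H_2, H_2, H_3.
assert x_example(0) == [1, 1, Fraction(3, 2), Fraction(3, 2), Fraction(11, 6)]

# beta -> infinity: x_s / beta^(N-1) approaches the same u = 1/4 for all s.
b = Fraction(10**6)
assert all(abs(x / b**2 - Fraction(1, 4)) < Fraction(1, 10**5)
           for x in x_example(b))
```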
\subsection{Solving our example with Cramer's rule}
In this section, we introduce Cramer's rule as a way to solve for the extinction times. This is a computationally inefficient method, but a way to get some analytic insight into the asymptotic behavior $u\beta^{N-1}$ (as we will do in subsequent sections). Let us derive this exponent for Eqs.~\eqref{eq:ex_res}. To do this, we apply Cramer's rule to the polynomial matrix $\mathbf{Y}(\beta)$, denoted $\mathbf{Y}$ below (we will drop the $\beta$ argument for most of the derivation). Cramer's rule states that the $s$'th element of the vector $\mathbf{x}$ from Eq.~\eqref{eq:self2} is
\begin{equation}\label{eq:solx}
x_s = \frac{\det \mathbf{Y}^s}{\det \mathbf{Y}},
\end{equation}
where $\mathbf{Y}^s$ is a matrix obtained from $\mathbf{Y}$ by replacing the $s$'th column by the vector $-\mathbf{1}$ (i.e.\ all elements being minus one).
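For a numeric $\beta$, this prescription takes only a few lines (a Python sketch with exact rationals; cofactor-expansion determinants are adequate at these matrix sizes, and the function names are ours):

```python
from fractions import Fraction

def det(A):
    """Determinant by cofactor expansion along the first column."""
    if len(A) == 1:
        return A[0][0]
    d = Fraction(0)
    for r, row in enumerate(A):
        if row[0]:
            minor = [rr[1:] for i, rr in enumerate(A) if i != r]
            d += (-1) ** r * row[0] * det(minor)
    return d

def cramer(Y, s):
    """x_s = det(Y^s)/det(Y), where Y^s has column s replaced by -1's."""
    Ys = [row[:s] + [Fraction(-1)] + row[s + 1:] for row in Y]
    return det(Ys) / det(Y)
```

Applied to the matrix of Eq.~\eqref{eq:ymat} at $\beta=1$, it reproduces the values of Eqs.~\eqref{eq:ex_res}.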
Let us consider $x_{1,4}$ for our example above ($x_1$ and $x_{4}$ are identical since $1$ and $4$ are automorphic). We will use the row and column indices of ${\bf Y}$ in this section.
In order to calculate the polynomial degree of the determinant of the matrix $\mathbf{Y}^s$, we first make a cofactor expansion along the first column of the matrix $\mathbf{Y}$. This gives the expressions:
\begin{subequations}\label{eq:yy1}
\begin{align}
\det {\bf Y} &= (-\beta-1) M_{11} + M_{31} - 2M_{41}\\
\det {\bf Y}^1 &= - M_{11} + M_{21} - M_{31}+ M_{41}- M_{51}
\end{align}
\end{subequations}
where $M_{st}$ is the determinant of the matrix ${\bf Y}$ with its $s$'th row and $t$'th column removed (i.e.\ the $st$\textit{-minor} of ${\bf Y}$). We find that
\begin{subequations}
\begin{align}
M_{11} &=12 \beta^2 + 22 \beta + 12 \\
M_{21} &= - 4 \beta^2-6\beta \\
M_{31} &= 8 \beta^3 + 16 \beta^2 +6\beta \\
M_{41} &=- 2 \beta^3- \beta^2 \\
M_{51} &= 4 \beta^4 + 6 \beta^3 + 2 \beta^2 ,
\end{align}
\end{subequations}
giving (via Eq.~\eqref{eq:yy1}):
\begin{subequations}
\begin{align}
\det {\bf Y} &= - 16 \beta^2 - 28 \beta - 12 \\
\det {\bf Y}^1 &= -4\beta^4-16\beta^3- 35 \beta^2 - 34 \beta-12 ,\label{eq:subf}
\end{align}
\end{subequations}
which, via Eq.~\eqref{eq:solx}, is in agreement with the solution in Eqs.~\eqref{eq:ex_res}.
\subsection{Asymptotic scaling: exact relations}
We will prove that the leading term of $x_s$ is $u\beta^c$ for an integer $c$. For all our 12,110 graphs and all 2,963,056 configurations, we have $c=N-1$. We believe this holds in general, but we have to leave a proof of that for the future.
From Eq.~\eqref{eq:solx}, we see that our assertion will be true if we can show that the leading term of $\det {\bf Y}^s$ is independent of $s$. In the Appendix, we show that the determinants of the $ns$-minors of ${\bf Y}$ (cf.\ Eq.~\eqref{eq:subf}) have leading terms of polynomial degree $n-1$ with the same prefactor, independent of $s$. Such a large polynomial degree is impossible to attain for $st$-minors with $s<n$: since these minors have rows not containing any $\beta$, some of the $n-1$ factors in each term of the Leibniz expansion of the determinant must have polynomial degree zero in $\beta$. Thus the leading behavior of $\det {\bf Y}^s$ comes from $M_{ns}$ and is independent of $s$. Since $\det {\bf Y}$ is trivially independent of $s$, the leading behavior of $x_s$ is also $s$-independent. If $c=N-1$, we can conclude that $\deg(\det{\bf Y})=n-N$.
For our example graph (and some other simple graphs of $N\leq 4$ we check) it holds that
\begin{equation} \label{eq:hypo}
\deg(M_{st}) = n - N + \omega_s ,
\end{equation}
for all states $t$. If this is true in general then, curiously, the degree of $\det {\bf Y}$ is determined by the minors corresponding to the configurations with the lowest prevalence, and that of $\det {\bf Y}^s$ by the ones with the highest prevalence. This is reminiscent of current-flow networks, where the determinant of the $st$-minor of the adjacency matrix is proportional to the potential drop between $s$ and $t$ \cite{detcurrflow}.
\begin{figure}
\includegraphics[width=\columnwidth]{leading.pdf}
\caption{(Color online) The size dependence of the asymptotic prefactor $u$ as a function of the number of edges $M$. In panel (a), we see a power-law dependence $u=u_0 M^\alpha$ on the number of edges, given the number of vertices. Panels (b) and (c) show the $N$ dependence of the parameters $u_0$ and $\alpha$.}
\label{fig:leading_vs_m}
\end{figure}
\begin{figure}
\includegraphics[width=0.7\columnwidth]{diff.pdf}
\caption{The only three graphs in our study where all vertices are in equivalent positions but the ranking of configurations (in order of extinction time) depends on $\beta$. Black represents infectious; white represents susceptible. $\beta^\ast$ gives the $\beta$ value where the two configurations have the same expected extinction time.
}
\label{fig:diff}
\end{figure}
\begin{figure*}
\includegraphics[width=\textwidth]{vssd.pdf}
\caption{(Color online) The asymptotic coefficient $u$ as a function of the standard deviation of the degrees of the vertices. Different panels represent different numbers of vertices ($N\geq 6$); different curves represent different numbers of edges. The curves with the smallest $u$-values, given $N$, are highlighted ($M=11,16,22$) as a reference. Note that the axes are logarithmic.}
\label{fig:vssd}
\end{figure*}
\subsection{Numerical results for the large $\beta$ asymptotics}
\label{sec_larg_b}
As mentioned above, the $\beta\rightarrow\infty$ behavior is the same for all configurations $s$, namely $x_s= u\beta^{N-1}+\mathcal{O}(\beta^{N-2})$. In this section, we investigate how the sizes of the graphs control the prefactor $u$.
As we can see in Fig.~\ref{fig:leading_vs_m}(a), for a given $N$, $u$ is a power-law of the number of edges $M$ (keeping in mind that $u$ depends on the graph structure):
\begin{equation}
u(M)=u_0 M^\alpha~.
\end{equation}
For the graphs we study, the coefficients $\alpha$ and $\ln u_0$ have a close to linear dependence on $N$ (Fig.~\ref{fig:leading_vs_m}(b) and (c)):
\begin{subequations}\label{eq:u0_alpha}
\begin{align}
u_0 &= 126(1) \times 0.0268(2)^{N-1} \\
\alpha &= -1.081(2) + 1.168(1) N
\end{align}
\end{subequations}
where the number in parentheses represents the standard error in the last digit. Of course, the error estimates are based on the data we have and are subject to small-size effects. In other words, it is conceivable that $\alpha$ could be taken as $N-1$, giving a large-$\beta$ approximation $\hat{x}$ of the extinction time $x$:
\begin{equation}
\hat{x}(\beta,N,M) = a(b\beta M)^{N-1}
\end{equation}
with constants $a\approx 126$ and $b\approx 0.0268$. Note also that there is a weak but consistent bend (negative second derivative) of $\ln u_0$ as a function of $N$ (i.e.\ $a$ and $b$ seem to be slowly varying functions of $N$).
\begin{table}
\caption{Correlation between measures characterizing the structure of graphs (beyond the number of vertices and edges) and the large-$\beta$ asymptote $u$.\label{tab:corr}}
\begin{ruledtabular}
\begin{tabular}{ll}
Measure & \text{Kendall's~} $\tau$ \\ \hline
Clustering coefficient & $-0.667$ \\
Degree assortativity & $0.191$ \\
Average distance & $-0.309$ \\
S.d.\ of degrees & $-0.751$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
As seen in Fig.~\ref{fig:leading_vs_m}, $u(G)$ is not completely determined by Eq.~\eqref{eq:u0_alpha}---there is also some spread of the points for a given $N$ and $M$. To understand what causes two graphs of the same $N$ and $M$ to differ, we try several structural predictors: the clustering coefficient (a.k.a.\ transitivity---the fraction of triangles among all connected subsets of three vertices), the degree assortativity (the Pearson correlation of degrees at either side of an edge), the average distance ($d(i,j)$---the fewest number of edges of any path between $i$ and $j$), and the standard deviation of the degrees. See Refs.~\cite{mejn:book,barabasi:book} for detailed descriptions of our measures. For all pairs of $N$ and $M$, we calculate the correlation between $u$ and these measures; then we average these values over all graphs. The results, shown in Table~\ref{tab:corr}, show that all correlations except the one with degree assortativity are negative, and that the strongest correlation is with the standard deviation of the degrees, $\sigma_k$. This means that epidemics in graphs with more homogeneous degree sequences tend to last longer in the large-$\beta$ limit. The relationship between $u$ and $\sigma_k$ is shown explicitly in Fig.~\ref{fig:vssd}. Indeed, for every combination of $N$ and $M$, the $u$ vs.\ $\sigma_k$ curves are almost always decreasing. We highlight the curves with the largest range in $\ln u$, and note that these occur for close to maximally dense graphs.
\begin{figure}
\includegraphics[width=0.8\columnwidth]{dev_diff.pdf}
\caption{(Color online) The average correlation (Kendall's) $\tau$ of the ranking of configurations in the high and low limits of $\beta$ as a function of the relative prevalence $\omega_s/N$. Panel (a) shows values averaged over all connected graphs of a certain number of vertices; panel (b) shows the same quantity for $N=8$ with curves representing averages over graphs with a certain number of edges. The gray curves represent the $M$ values not discussed in the text.}
\label{fig:dev}
\end{figure}
\begin{figure}
\includegraphics[width=0.8\columnwidth]{ycorr_diff.pdf}
\caption{(Color online) The average correlation (Kendall's $\tau$) between the rank of the configurations at large (a) and small (b) $\beta$ and various measures of the position of the infectious vertices, as a function of the number $\omega$ of infectious vertices. These curves are averaged over all connected graphs with $N=8$ and $M=14$ (i.e.\ with a connectance---the fraction of vertex pairs joined by an edge---of $1/2$).}
\label{fig:ycorr}
\end{figure}
\subsection{Pervasiveness of $\beta$-dependent rankings of configurations}
Already from Fig.~\ref{fig:ex}, we know that the ranking of configurations with the same number of infectious vertices can depend on $\beta$. As it turns out, for all but 20 of the 12,110 graphs we study, there is at least one pair of configurations where one has a longer expected time to extinction for small $\beta$ and the other for large $\beta$. The exceptions are all graphs where all vertices are automorphically equivalent; there are 23 such graphs among the ones we study. The three exceptions to the exceptions---the only configurations among such symmetric graphs with a $\beta$-dependent ranking---are shown in Fig.~\ref{fig:diff}. There are some interesting symmetries between these graphs evident from the figure. For example, even though the $M=8$ and $M=20$ graphs are complements of each other---a pair of vertices in one graph has an edge if and only if it does not in the other---the ranking of the configurations (which one is dominant for large vs.\ small $\beta$) is the same. However, we do not have any explanation for this observation. The actual times to extinction are extremely similar between the two configurations. In the $M=8$ case, for example, the numerator of $x$ for the left configuration starts as:
\begin{eqnarray}
97844723712\times\beta^{28}&+
& 2019406381056\times\beta^{27}+\nonumber \\
20485144313856\times\beta^{26}&+
& 136322491613184\times\beta^{25}+\nonumber\\
6704\mathbf{61968908288}\times\beta^{24}&+&\dots
\end{eqnarray}
while the right configuration has the numerator:
\begin{eqnarray}
97844723712\times\beta^{28}&+
& 2019406381056\times\beta^{27}+\nonumber \\
20485144313856\times\beta^{26}&+
& 136322491613184\times\beta^{25}+\nonumber\\
6704\mathbf{55853613056}\times\beta^{24}&+&\dots
\end{eqnarray}
(with the differences highlighted in boldface). Needless to say (since they also share the small-$\beta$ asymptotics), plotting them in the same graph does not show any visible difference. This is an example of a result that would be almost impossible to detect by stochastic simulations.
As $N$ increases, there are more opportunities for symmetry breaking between configurations with the same $\omega_s$ (below, where it is clear from the context, we write simply $\omega$). One scenario is that, for even larger $N$, the only graphs with $\beta$-independent rankings are the fully connected graphs (because for them, all configurations of the same $\omega$ are automorphically equivalent, which simplifies the system of equations from Section~\ref{sec:exptime}). We note that fully connected graphs are the most common interaction structures studied in the literature, and perhaps an unfortunately atypical case.
\subsection{Correlation of asymptotic behaviors}
In this section, we continue the investigation of the $\beta$-dependence of the rankings of expected extinction times. After calculating $\mathbf{x}$, we rank the configurations in the limits of large and small $\beta$. Let $r_L(s,G)$ be the normalized rank of configuration $s$ among all configurations of the same prevalence $\omega_s$ in the large-$\beta$ limit, and $r_S(s,G)$ the corresponding quantity as $\beta\rightarrow 0$. Then we use Kendall's $\tau$ coefficient to measure the correlation between $r_L$ and $r_S$.
In practice, we first put all $x_s$ over the minimal common denominator and compare the numerators (which are polynomials with integer coefficients). To rank polynomials in the small-$\beta$ limit, we first compare the constant coefficients (which are the same for all configurations of the same prevalence), then use the coefficients of increasing polynomial degree as tie-breakers. To rank polynomials as $\beta\rightarrow\infty$, we go through the coefficients in the opposite direction---one polynomial is considered larger than the other if the highest-degree coefficient where they differ is larger. As the full solutions $x_s(\beta)$ take much disk space, we only save the rankings.
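The two comparison orders can be sketched as follows (Python; coefficient lists are in increasing degree order, and the function names are ours):

```python
def smaller_as_beta_to_0(p, q):
    """True if polynomial p < q as beta -> 0: compare coefficients from
    the constant term upward; the first differing one decides."""
    n = max(len(p), len(q))
    p = list(p) + [0] * (n - len(p))
    q = list(q) + [0] * (n - len(q))
    for cp, cq in zip(p, q):
        if cp != cq:
            return cp < cq
    return False

def smaller_as_beta_to_inf(p, q):
    """True if p < q as beta -> infinity: compare coefficients from the
    highest degree downward instead."""
    n = max(len(p), len(q))
    p = list(p) + [0] * (n - len(p))
    q = list(q) + [0] * (n - len(q))
    for cp, cq in zip(reversed(p), reversed(q)):
        if cp != cq:
            return cp < cq
    return False
```

For the numerators of $x_{1,4}$ and $x_2$ in Eqs.~\eqref{eq:ex_res}, both orders agree that $x_{1,4}<x_2$, consistent with Fig.~\ref{fig:ex}(b).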
With the ranking of the solutions at hand, we proceed to calculate $\tau$ (using the NumPy library of Python). In Fig.~\ref{fig:dev}(a), we see $\tau$ as a function of the prevalence $\omega$, averaged over all connected graphs of a given number of vertices $N=3,\dots,8$. $\tau$ is strictly decreasing from $+1$ to $-1$. The decrease (in particular for larger $N$) is faster in the beginning and end than in the middle. Since the curves get consistently less steep with $N$ for intermediate prevalence values, it seems possible that the curve would flatten out with growing $N$. In other words, for configurations with few infectious vertices, the ranking is rather independent of $\beta$; while for high-prevalence configurations, all curves of expected time to extinction will cross as $\beta$ increases.
In Fig.~\ref{fig:dev}(b), we take a closer look at the $N=8$ graphs and split the average into different curves depending on the number of edges. $\tau$ has an intermediate maximum for $M=12$ while the sparsest graph ($M=7$) has smaller $\tau$ than the densest ($M=27$). For graphs close to the maximum number of edges, the region of slower decrease for small $\omega$ is almost gone.
\subsection{Structural determinants of the asymptotic behavior}
\label{sec_struc_det}
From the analysis of Fig.~\ref{fig:dev}, we know that the number of vertices and edges affect the ranking of configurations. Now we will look at more detailed explanations based on graph structure---what determines the ranking for graphs of the same $N$ and $M$? Fig.~\ref{fig:ycorr} presents a case study for $N=8$ and $M=14$ (when exactly half of the vertex pairs are connected by an edge).
To measure the correlation, we once again use Kendall's $\tau$. We pick four structural measures to characterize a configuration. Then we correlate each one with either $r_S$ or $r_L$. We present these measures briefly below. For a thorough account, we refer to Refs.~\cite{barabasi:book,mejn:book}.
\begin{enumerate}
\item Average \textit{degree}---number of neighbors---of the infectious vertices. Degree is the simplest notion of centrality, but also a local one (in the sense that a vertex's degree depends only on its neighborhood).
\item Average \textit{eigenvector centrality} over the infectious vertices. The eigenvector centrality is given by the eigenvector corresponding to the leading eigenvalue of the adjacency matrix. It is perhaps the most straightforward generalization of degree to account for the idea that being close to central vertices makes a vertex central.
\item Average \textit{vitality} of the infectious vertices. Vitality is a general class of measures based on the response of some graph descriptor to the deletion of a vertex~\cite{koschutzki2005centrality}. Following Ref.~\cite{holme3f}, we define the vitality (technically, \textit{component-size vitality}) of vertex $i$ to be $v(i) = [S(G)-1]/S(G\setminus \{i\})$, where $S(G)$ denotes the number of vertices of the largest connected component of graph $G$. This measure will be very close to one for larger graphs and would thus be unsuitable if one were to scale this study up. It is, on the other hand, interesting as it directly measures the contribution the presence of a vertex makes in a worst-case, $\beta\rightarrow\infty$, scenario.
\item Average \textit{distance} $d(i,j)$ between infectious vertex pairs $i\neq j$. This is the only measure we use that does not involve averaging a centrality measure.
\end{enumerate}
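Of these measures, component-size vitality is perhaps the least standard; a self-contained Python sketch (the function names are ours):

```python
from collections import Counter

def largest_component(vertices, edges):
    """Number of vertices in the largest connected component (union-find)."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for a, b in edges:
        parent[find(a)] = find(b)
    return max(Counter(find(v) for v in vertices).values())

def vitality(vertices, edges, i):
    """Component-size vitality v(i) = [S(G) - 1] / S(G with i deleted)."""
    s_full = largest_component(vertices, edges)
    rest = [v for v in vertices if v != i]
    rest_edges = [(a, b) for a, b in edges if i not in (a, b)]
    return (s_full - 1) / largest_component(rest, rest_edges)
```

On the path graph $0$--$1$--$2$, deleting the middle vertex disconnects the graph, so $v(1)=(3-1)/1=2$, while $v(0)=v(2)=1$.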
In addition to these, we also try the standard measures \textit{betweenness} and \textit{closeness} centrality, but these do not contribute much to the understanding: partly because they are strongly correlated with vitality and eigenvector centrality, respectively, for these small graphs; partly because their rationales involve flows along shortest paths between random pairs---a type of process not present in disease spreading.
In Fig.~\ref{fig:ycorr}, we plot the correlation coefficients between the above measures of position and the ranks of the configurations for all connected graphs with $N=8$ and $M=14$. Fig.~\ref{fig:ycorr}(a) shows the case of large $\beta$. For configurations with low prevalence, we see that a large degree is strongly correlated with the extinction-time ranking. This is natural, since for low $\omega$ secondary effects are negligible---the number of ways to increase $\omega$, i.e.\ the degree, counts more than other factors. Eigenvector centrality behaves almost like degree. Although not identical, these two quantities are strongly correlated for the small graphs we study, so it is natural that the values are close. Somewhat interestingly, which one of them gives the largest $\tau$ varies with $\omega$ in an irregular way. For sparser graphs (than the one in Fig.~\ref{fig:ycorr}), which are more prone to disintegrating upon vertex deletion, vitality shows a correlation on par with degree and eigenvector centrality. Finally, the average distance shows an increasing correlation with $\omega$. That the infectious vertices are far apart means that the surface toward the susceptible vertices is larger, and thus that the next event is more likely to be an infection event.
The structural correlations for the small-$\beta$ case (Fig.~\ref{fig:ycorr}(b)) are no surprise in the light of Fig.~\ref{fig:dev} and Fig.~\ref{fig:ycorr}(a). Since there is a correlation between $r_S$ and $r_L$ at small $\omega$ and a corresponding anti-correlation at large $\omega$, we expect the correlations with structural measures to be similar between the small- and large-$\beta$ cases for small $\omega$ and different for large $\omega$. This rather accurately describes what happens. In this case, the degree and eigenvector centrality are strongly correlated with $r_S$ for all $\omega$, while the distance becomes strongly anti-correlated for large $\omega$.
\section{Discussion}
We have studied the extinction of SIS epidemics on small graphs. We did so by calculating the exact expressions for the expected time to extinction for all connected graphs between three and eight vertices.
We find that, for a given graph, the limit behavior as $\beta\rightarrow\infty$ is independent of the configuration of susceptible and infectious vertices, while for $\beta\rightarrow 0$ it (trivially) depends only on the number of infectious vertices. Of course, neither of these limits is of immediate practical interest, but to understand the in-between reality we need to understand the extremes.
The large-$\beta$ asymptotics of the extinction time depend on the size and structure of the graph---for large $\beta$ we find $x \simeq u\beta^{N-1}$ with $u = u_0M^\alpha$, where both $\ln u_0$ and $\alpha$ are close to linear functions of $N$ for the graphs we investigate. Our final formula for the large-$\beta$ behavior of $x$ is $a(b\beta M)^{N-1}$, where $a\approx 126$ and $b\approx 0.0268$. This super-exponential $N$-dependence is in line with earlier observations~\cite{nasell,doering,o2,shadows,fagani,schwartz}. Simply speaking, even though SIS epidemics on finite graphs go extinct with probability one, for $\beta$ only a little more than one the chance of extinction within any reasonable time is so small that it can be ignored for all practical purposes for all but the smallest graphs. For graphs of the same number of vertices and edges, the strongest determinant of the asymptotic behavior of the time to extinction is the variability (measured by the standard deviation) of the degrees. Finally, we find that outbreaks tend to last longer in graphs with homogeneous degree distributions.
Furthermore, given an interaction graph, we investigated when the ranking of configurations with respect to extinction time changes. For configurations with few infectious vertices, the rankings are typically the same for large and small $\beta$; for configurations with many infectious vertices, there is an anti-correlation between the extinction times in the large and small $\beta$ limits. The main structural predictors for the rank of configurations with the same number of infectious vertices (for the same graph) are degree and eigenvector centrality, while the correlations with vitality and inter-vertex distance are weaker.
The main contribution of this paper is to give a view of the relation between graph structure and epidemic behavior from another perspective than the usual one. Rather than studying the $N\rightarrow\infty$ limit by stochastic simulations, we study exact expectation values on small graphs. This enables us to discover hypotheses that could be tested in standard stochastic simulations. It also makes it possible to discover the smallest graphs and configurations with some specific properties. For example, the graph of Fig.~\ref{fig:ex}(a)---where the configurations 3 and 6 have longer extinction times than configuration 5 in the interval $0<\beta < 1/2$ and vice versa for $\beta > 1/2$---is the smallest graph where configurations change the order of expected extinction time with $\beta$. The answer to the reversed question---what is the smallest graph where all same-prevalence configurations are ranked equally for large and small $\beta$---is the triangle $E=\{(0,1),(1,2),(2,0)\}$. In fact, all graphs where at least two vertices are in different positions (not automorphically equivalent) lack the equal-ranking property, but 20 of the 23 such graphs we study where all vertices are equivalent do have it. On one hand, these observations do not generalize, and thus belong to a more mathematical mode of scientific exploration. On the other hand, they are the basis of hypotheses that could be tested by future theory. For example, for every graph where not all nodes are equivalent, will the ranking of configurations always depend on $\beta$?
We anticipate more computational epidemiology studies without random numbers in the future, as well as simulation studies testing whether the findings in this work hold for larger graphs. It would also be interesting to go beyond expected times and derive the probability distribution of extinction times, which would need a different computational approach. One could, for example, consider mapping the problem to that of mean first-passage times on networks~\cite{iannelli,fpt,masuda:rev}, or to combinatorial stochastic processes~\cite{raaz}.
\begin{acknowledgments}
We thank Naoki Masuda and Petteri Kaski for helpful comments. PH was supported by JSPS KAKENHI Grant Number JP 18H01655. LT acknowledges the support under Grant No.\ ANR--13--JSV5--0006--01 of the French National Research Agency.
\end{acknowledgments}
\section*{Results}
\textbf{General Model.}
\begin{figure} [t]
\centerline{\includegraphics[scale=0.5]{2-channel-new.pdf}}
\caption{(Color online) (a) Schematic view of the studied heterostructure. There are four channels in total, comprising two input and two output channels corresponding to the right and left circular polarizations (CPs), which are described by the eight scattering amplitudes $a_{\pm}$, $b_{\pm}$, $c_{\pm}$, $d_{\pm}$. The middle region labeled by $M$ consists of gain and loss components,
satisfying $\varepsilon(z)=\varepsilon^{\ast}(-z)$. Different types of PCCs (labeled as $L_i$ and $R_i$) can be added at the end-faces of region $M$ to extend the variety of achievable scenarios. (b) Three types of PCCs are considered: the $L_0$ component changes the polarization of both the transmitted and reflected waves (perfect polarization conversion from right/left CP to left/right CP is assumed), whereas $L_1$ and $L_2$ change only the polarization of the transmitted and only the polarization of the reflected waves, respectively. For right-side incidence, the components $R_0$, $R_1$ and $R_2$ show the same properties as their left-side counterparts, i.e., $L_0$, $L_1$, and $L_2$. }
\label{htr}
\end{figure}
The studied photonic heterostructures have, in the general case, the following three components: a loss/gain bilayer and two PCCs, one at each of its end-faces. The capability of polarization conversion is directly connected with the creation of the additional channels required to obtain a four-channel configuration.
Different configurations can be accessed by adding or removing
one or two PCCs.
The general schematic of the structure is shown in Fig. \ref{htr}(a). The region $M$ is the loss/gain medium that
satisfies the symmetry condition $\varepsilon(z)=\varepsilon^{\ast}(-z)$.
It is noteworthy that this condition is necessary but not sufficient for ${\cal PT}$-symmetry.
For the purposes of our study, we assume that PCCs are infinitesimally thin and
capable of converting right/left circular polarization (CP) to left/right CP perfectly.
Here, we consider three different types of PCCs. The first-type PCCs denoted by $L_0$ and $R_0$ change the polarization state for both reflected and transmitted waves.
The second-type PCCs (denoted by $L_1$ and $R_1$) only change the transmitted waves' polarization, whereas the third-type PCCs (denoted $L_2$ and $R_2$) only change the reflected waves' polarization. The properties of PCCs
of the three types are illustrated in Fig. \ref{htr}(b).
The polarization state cannot be changed by the bilayer alone. All of the studied structures are free-standing and adjoin vacuum half-spaces.
\par
We build our formalism for a general 1D photonic heterostructure that has four channels, two input and two output, one allowing right circularly polarized light and the other allowing left
circularly polarized light to pass through. We start by casting the most general expression for the electric field along the $z$-direction, which consists of right $(+)$ and left $(-)$ polarized waves. For the left-side half-space, $z<-L/2$, and normal incidence, it is given by
\begin{ceqn}
\begin{align}
{\bf E(z)} = {\bf E}_+(a_+e^{ikn_0z}+b_+e^{-ikn_0z})+{\bf E}_-(a_-e^{ikn_0z}+b_-e^{-ikn_0z}),
\label{eq:Eq1}
\end{align}
\end{ceqn}
where $a_{\pm}$, $b_{\pm}$ are the scattering amplitudes, ${\bf E}_+=E_0{\bf \hat{e}_+}$ and ${\bf E}_-=E_0{\bf \hat{e}_-}$, with ${\bf \hat{e}_+} = \frac{1}{\sqrt{2}} (1,i)$ and ${\bf \hat{e}_-} = \frac{1}{\sqrt{2}} (1, -i)$; $k=\omega/c$, where $\omega$ is the angular frequency and $c$ is the velocity of the electromagnetic wave, and $n_0$ is the refractive index.
To obtain ${\bf E}$ for the right-side half-space, i.e., at $z>L/2$, we replace $a \rightarrow c$ and $b \rightarrow d$ in Eq. (\ref{eq:Eq1}).
Similar expressions can be used for the bilayer region.
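As a quick numerical sanity check (an illustration, not part of the original analysis), the circular basis vectors ${\bf \hat{e}_\pm}$ are orthonormal under the Hermitian inner product, so the $(+)$ and $(-)$ amplitudes in Eq. (\ref{eq:Eq1}) indeed represent independent channels:

```python
import numpy as np

# Circular polarization unit vectors, e_pm = (1, +/- i) / sqrt(2)
e_plus = np.array([1.0, 1.0j]) / np.sqrt(2.0)
e_minus = np.array([1.0, -1.0j]) / np.sqrt(2.0)

# Hermitian inner products: <a, b> = sum(conj(a) * b)
norm_plus = np.vdot(e_plus, e_plus)      # -> 1
norm_minus = np.vdot(e_minus, e_minus)   # -> 1
overlap = np.vdot(e_plus, e_minus)       # -> 0 (orthogonal channels)
```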
By exploiting the symmetry properties of the structure, we can cast the $generalized$ $conservation$ $relation$ for the four-channel case. It turns out to be identical to the well-known relation for the two-channel case, which is valid everywhere except at the coherent-perfect-absorption laser points,
see Ref. \citen{GCS}. It is connected with another key relation known as the generalized unitarity relation.
Starting from the condition $\varepsilon(z)=\varepsilon^{\ast}(-z)$, which also implies ${\bf E}(z)={\bf E}^{\ast}(-z)$, and using the standard $\mathbb{S}$-matrix formalism, which is commonly used in quantum mechanics and electromagnetic theory, we obtain the conservation relation in the studied four-channel case as follows:
\begin{ceqn}
\begin{align}
\vert T - 1 \vert = \sqrt{R_L R_R},
\end{align}
\end{ceqn}
where $T \equiv \vert t \vert^2$ is the transmittance, and $R_L \equiv \vert r_L \vert^2$ and $R_R \equiv \vert r_R \vert^2$ are the reflectances
for left-side and right-side illumination, with no restrictions imposed on them; $t$ is the transmission coefficient, and $r_L$ and $r_R$ are the reflection coefficients for left-side and right-side illumination. In line with the $\mathbb{S}$-matrix formalism, these coefficients are fully determined by the scattering amplitudes.
To achieve the purposes of this study, the conventional $\mathbb{S}$-matrix approach can be used for arbitrary values
of $t$, $r_L$, and $r_R$.
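For the bare ${\cal PT}$-symmetric bilayer, the generalized conservation relation can be verified numerically with a standard transfer-matrix calculation. The following sketch (an independent illustration under the stated conventions, not the Fortran code used for the figures) computes $t$, $r_L$, $r_R$ for a two-layer slab with permittivities $\varepsilon$ and $\varepsilon^{\ast}$ in vacuum and checks $\vert T-1\vert = \sqrt{R_L R_R}$:

```python
import numpy as np

def layer_matrix(eps, d, k):
    """Transfer matrix of a homogeneous layer acting on (E, E'/(ik)),
    normal incidence; k = omega / c, d = layer thickness."""
    n = np.sqrt(eps + 0j)
    delta = k * n * d
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def t_and_r(eps_list, d_list, k):
    """t and r for left-side incidence on a layer stack in vacuum."""
    M = np.eye(2, dtype=complex)
    for eps, d in zip(eps_list, d_list):   # layers ordered left to right
        M = layer_matrix(eps, d, k) @ M
    D = M[0, 0] - M[0, 1] - M[1, 0] + M[1, 1]
    return 2.0 / D, (M[1, 0] + M[1, 1] - M[0, 0] - M[0, 1]) / D

eps, L = 2.0 + 0.2j, 1.0                   # loss layer; conj(eps) is the gain layer
for wLc in (1.0, 3.0, 5.0, 12.0):          # normalized frequency omega L / c
    k = wLc / L
    t, r_left = t_and_r([eps, np.conj(eps)], [L / 2, L / 2], k)
    _, r_right = t_and_r([np.conj(eps), eps], [L / 2, L / 2], k)  # mirrored stack
    T = abs(t) ** 2
    assert np.isclose(abs(T - 1.0),
                      np.sqrt(abs(r_left) ** 2 * abs(r_right) ** 2), atol=1e-8)
```

The right-side reflection coefficient is obtained by mirroring the stack, which is equivalent for reciprocal media; the relation holds in both the symmetric and the broken phase, failing only at the CPA-laser points.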
\\
\textbf{Various Configurations.}
There are two general cases that are distinguished in terms of the physics they offer. In the first case, ${\cal PT}$-symmetric and ${\cal PT}$-broken eigenvalues cannot exist simultaneously, and one phase is transformed into the other at the critical frequency (Case 1). In the second case, the
co-existence of the symmetric and symmetry-broken eigenvalues is possible, i.e., a $mixed$ $phase$ may occur
(Case 2). This case is the focus of our study. It will be shown that there are various configurations,
differing in the end-faces of the region $M$, which can be utilized for accessing Case 1 and Case 2.
We start from the simplest configuration, and then consider
more complex configurations by placing one or both of the components $L_i$ and $R_i$ at the end-faces.
\par
{\bf Case 1:} The simplest configuration to access this case
is a bilayer enabling ${\cal PT}$-symmetry (like the region $M$), which has been studied in detail in Refs. \citen{GCS_PRL,GCS}.
For the configuration $M$, the $\mathbb{S}$-matrix has two eigenvalues. Let us generalize it by formally adding the polarization-related degree of freedom to the system to obtain the $4\times4$ $\mathbb{S}$-matrix, i.e., present it in the same form as used throughout the paper for the more complex configurations. Since there are no PCCs in this case,
\begin{ceqn}
\begin{align}
\mathbb{S}(\omega) = \begin{pmatrix} r_R & t & 0 & 0 \cr t & r_L & 0 & 0 \cr 0 & 0 & r_R & t \cr 0 & 0 & t & r_L \end{pmatrix}
\label{eq:Eq3}
\end{align}
\end{ceqn}
yields the same set of two eigenvalues twice; the eigenvalues in a set are given by
\begin{ceqn}
\begin{align}
\lambda_{1,2} = \frac{1}{2} \Big \{ (r_R+r_L) \pm \sqrt{(r_R-r_L)^2 +4t^2} \Big \}.
\end{align}
\end{ceqn}
\par
Next, we add PCCs to the left and right end-faces, which are assumed to be capable of changing the polarization of both reflected and transmitted waves ($L_0$ and $R_0$). The configuration that we now have is $L_0MR_0$, meaning that a transmitted wave retains its initial polarization, because it passes through two PCCs, whereas for a reflected wave
right/left CP is changed to left/right CP. Now, the two diagonal blocks of the $\mathbb{S}$-matrix in Eq. (\ref{eq:Eq3}) are coupled due to the added PCCs. After some algebra, we obtain
\begin{ceqn}
\begin{align}
\mathbb{S}(\omega) = \begin{pmatrix} 0 & t & r_R & 0 \cr t & 0 & 0 & r_L \cr r_R & 0 & 0 & t \cr 0 & r_L & t & 0 \end{pmatrix}.
\end{align}
\end{ceqn}
The four eigenvalues for this configuration are given as
\begin{ceqn}
\begin{align}
\lambda_{1-4} = \frac{1}{2} \Big \{ \pm (r_R+r_L) \pm \sqrt{(r_R-r_L)^2 +4t^2} \Big \}.
\label{eq:Eq6}
\end{align}
\end{ceqn}
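The block structure behind Eq. (\ref{eq:Eq6}) is easy to check numerically. The sketch below uses arbitrary placeholder values of $t$, $r_L$, $r_R$ (not computed for a specific bilayer) and verifies that the four analytic eigenvalues coincide with those of the $L_0MR_0$ scattering matrix:

```python
import numpy as np

# Arbitrary placeholder scattering coefficients (for illustration only)
t, rL, rR = 0.8 + 0.3j, 0.2 - 0.1j, -0.4 + 0.5j

# S-matrix of the L0MR0 configuration
S = np.array([[0, t, rR, 0],
              [t, 0, 0, rL],
              [rR, 0, 0, t],
              [0, rL, t, 0]])

# Analytic eigenvalues: (1/2) * ( +/-(rR + rL) +/- sqrt((rR - rL)**2 + 4 t**2) )
root = np.sqrt((rR - rL) ** 2 + 4 * t ** 2)
analytic = [0.5 * (s1 * (rR + rL) + s2 * root)
            for s1 in (1, -1) for s2 in (1, -1)]

numeric = np.linalg.eigvals(S)
for lam in analytic:
    assert np.min(np.abs(numeric - lam)) < 1e-10
```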
\begin{figure} [t]
\centerline{\includegraphics[scale=0.38]{fig2AC-pptx.pdf}}
\caption{(Color online) Eigenvalues (moduli) in log10 scale vs $\omega{L/c}$ at $\varepsilon = 2 + 0.2i$ in Case 1. For all eigenvalues, the symmetry is spontaneously broken
at the critical frequency, $\omega_c{L/c}\approx{15}$, and unimodularity is no longer preserved.}
\label{2}
\end{figure}
Thus, configurations $M$ and $L_0MR_0$
show the same moduli of the eigenvalues and the same basic physics. It is well known that a $unitary$ $transformation$ preserves the eigenvalues. Hence, all other configurations whose $\mathbb{S}$-matrices are connected by a unitary transformation to the $\mathbb{S}$-matrix of one of the two
configurations discussed above will have the same eigenvalues. Therefore, these configurations also belong to Case 1, in which the mixed phase for the eigenvalues is not possible. Indeed, they are either ${\cal PT}$-symmetric or ${\cal PT}$-broken at any fixed frequency, except at the critical point.
We can access such configurations by including different combinations of PCCs at one or both end-faces, so that
${\cal PT}$-symmetry is spontaneously broken at the critical point.
We showed in our study that they include the configurations $L_1MR_1$, $L_2MR_2$, $L_0MR_2$, $L_2MR_0$, $L_1M$, and $MR_1$.
Here, we do not discuss each of these cases separately, because eigenvalues in all the cases are the same as for
the configurations $M$ and $L_{0}MR_{0}$, i.e., they are given by Eq. (\ref{eq:Eq6}).
It is interesting that the configurations $L_1M$ and $MR_1$ do not yield the mixed phase despite the fact that polarization conversion is possible at one of the end-faces.
Since the components $L_1$ and $R_1$
only change the polarization of transmitted waves, the
ability to convert the polarization of the reflected wave is expected to be a $necessary$ (but not sufficient) condition for the existence of the mixed phase.
It is noticeable that the same $\mathbb{S}$-matrices and the same eigenvalues can be obtained
in Case 1 for structures with PCCs at two end-faces, with a PCC at one end-face, and without PCCs.
\begin{figure} [t]
\centerline{\includegraphics[scale=0.85]{eps0-1.pdf}}
\caption{(Color online) The map displaying the magnitudes of the four eigenvalues in log10 scale
at $Re(\varepsilon)=[0,1]$ and $\omega{L}/c=[0,60]$; $Im(\varepsilon)=0.2$. The upper panels display
$\lambda_1$ (a) and $\lambda_2$ (b) that are symmetric (unimodular) at $\omega<\omega_c$;
the lower ones
display eigenvalues $\lambda_3$ (c) and $\lambda_4$ (d), which are symmetry-broken at any $\omega$. Deviation from the unimodular case
(here, blue color corresponds to 0 on the scale bar) indicates the extent to which the symmetry is broken. Until the value of $\omega_c$ is reached, there is a wide range of $\omega$ where symmetric (unimodular) and symmetry-broken sets of eigenvalues co-exist. After hitting $\omega_c$, the symmetric set also experiences a spontaneous symmetry breaking, and the two eigenvalue sets with broken symmetry start to overlap. \\
}
\label{0-1}
\end{figure}
An example
is presented in Fig. \ref{2}.
At $\omega<\omega_c$, all four eigenvalues are $unimodular$ ($\log_{10}\vert\lambda_m\vert=0$, $m=1,2,3,4$), being in the ${\cal PT}$-symmetric phase and, hence, flux conserving. At $\omega>\omega_c$, the eigenvalues become pairwise reciprocal in two sets, which are in the symmetry-broken phase and, hence, satisfy the generalized conservation relation. These properties are identical to those well known for one-dimensional structures with two channels. One can see that it is not possible to simultaneously obtain the symmetric and symmetry-broken phases at any fixed value of $\omega{L}/c$.
\par
{\bf Case 2:} The simplest intuition for accessing this case suggests that a PCC should be added to only one of the end-faces of the component $M$.
However, this alone cannot ensure phase mixing, as follows from the study of Case 1.
Thus, let us first add a PCC, specifically $L_0$, to the left end-face, i.e., we now have the configuration $L_{0}M$. For light incident from the left side, the polarization of both the reflected and the transmitted wave is changed by this PCC. If it is incident from the right side, the transmitted wave's
polarization is still changed, while the reflected wave retains its polarization. These properties are described by the following $\mathbb{S}$-matrix:
\begin{ceqn}
\begin{align}
\mathbb{S}(\omega) = \begin{pmatrix} r_R & 0 & 0 & t \cr 0 & 0 & t & r_L \cr 0 & t & r_R & 0 \cr t & r_L & 0 & 0 \end{pmatrix}.
\label{eq:Eq7}
\end{align}
\end{ceqn}
This configuration yields the mixed phase for the eigenvalues of the $\mathbb{S}$-matrix,
so we can obtain symmetric and symmetry-broken sets of eigenvalues at fixed $\omega$.
The four eigenvalues corresponding to
Eq. (\ref{eq:Eq7}) are given by
\begin{subequations}
\begin{ceqn}
\begin{align}
\lambda_{1,2} = \frac{1}{2} \Big \{ (r_R+r_L) \pm \sqrt{(r_R-r_L)^2 +4t^2} \Big \}
\label{eq:Eq8a}, \\
\lambda_{3,4} = \frac{1}{2} \Big \{ (r_R-r_L) \pm \sqrt{(r_R+r_L)^2 +4t^2} \Big \}.
\label{eq:Eq8b}
\end{align}
\end{ceqn}
\end{subequations}
While one set of eigenvalues ($\lambda_{1,2}$) preserves the symmetry and unimodularity
at $\omega<\omega_c$,
the other set ($\lambda_{3,4}$) is in the symmetry-broken phase over a wide $\omega$-range, even near $\omega{L}/c=0$.
Once $\omega=\omega_c$ is reached, this symmetry-broken set of eigenvalues starts to overlap with the
first set, whose eigenvalues are then also in the symmetry-broken phase. Thus, all of the eigenvalues are in the broken phase at $\omega>\omega_c$,
and $\omega=\omega_c$ is the boundary between the mixed-phase and all-broken-phase regimes.
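Eqs. (\ref{eq:Eq8a}) and (\ref{eq:Eq8b}) can be checked against the matrix of Eq. (\ref{eq:Eq7}) in the same way (a sketch, again with arbitrary placeholder coefficients):

```python
import numpy as np

t, rL, rR = 0.8 + 0.3j, 0.2 - 0.1j, -0.4 + 0.5j   # placeholders

# S-matrix of the L0M configuration
S = np.array([[rR, 0, 0, t],
              [0, 0, t, rL],
              [0, t, rR, 0],
              [t, rL, 0, 0]])

root12 = np.sqrt((rR - rL) ** 2 + 4 * t ** 2)      # first eigenvalue pair
root34 = np.sqrt((rR + rL) ** 2 + 4 * t ** 2)      # second eigenvalue pair
analytic = [0.5 * ((rR + rL) + root12), 0.5 * ((rR + rL) - root12),
            0.5 * ((rR - rL) + root34), 0.5 * ((rR - rL) - root34)]

numeric = np.linalg.eigvals(S)
for lam in analytic:
    assert np.min(np.abs(numeric - lam)) < 1e-10
```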
\begin{figure} [t!]
\centerline{\includegraphics[scale=0.85]{eps1-15.pdf}}
\caption{(Color online) Same as Fig. \ref{0-1} but for $Re(\varepsilon)=[1,15]$; $\lambda_1$ (a), $\lambda_2$ (b),
$\lambda_3$ (c), and $\lambda_4$ (d). As $Re(\varepsilon)$ increases, the critical frequency $\omega_c$ is
blueshifted, which results in a larger region of co-existing symmetric (unimodular) and symmetry-broken sets of eigenvalues.}
\label{1-15}
\end{figure}
Fig. \ref{0-1} and Fig. \ref{1-15} show the maps of the magnitudes of eigenvalues $\lambda_m$, $m=1,2,3,4$, in log10 scale, which are obtained for Case 2
from Eqs. (\ref{eq:Eq8a}) and (\ref{eq:Eq8b}), in wide ranges of variation in
$Re(\varepsilon)$ and $\omega$. It is clearly seen that the symmetry of eigenvalues $\lambda_{3,4}$ is broken even
at small frequencies. Thus, there is a large region in $(\omega{L}/c, Re(\varepsilon))$-plane,
where ${\cal PT}$-symmetric and ${\cal PT}$-broken sets of eigenvalues may co-exist.
Starting from $\omega=\omega_c$, the
set of eigenvalues $\lambda_{1,2}$ also experiences spontaneous symmetry breaking, so the mixing does not exist anymore.
As shown in Fig. \ref{0-1}(a,b),
the value of $\omega=\omega_c$ and the extent to which the symmetry is broken strongly depend
on $Re(\varepsilon)$. For instance, we obtain $\mbox{min}(\omega_c{L}/c)=5$ in the vicinity of $Re(\varepsilon) = 0.5$.
For the
set of eigenvalues $\lambda_{3,4}$,
a strong deviation from the unimodular case is observed in Fig. \ref{0-1}(c,d) even at very small values of $Re(\varepsilon)$.
In particular, a strong anomaly of $\lambda_{3,4}$ occurs nearly at $0<Re(\varepsilon)<0.5$ and $2<\omega{L}/c<4$.
Hence, ${\cal PT}$-symmetric and ${\cal PT}$-broken eigenvalues may co-exist also in the epsilon-near-zero (ENZ) regime, i.e., in the close vicinity of
$Re(\varepsilon)=0$.
It is noteworthy that the behaviours of eigenvalues in Case 1 and Case 2 at $0<Re(\varepsilon)<1$ are very different.
In Fig. \ref{1-15}, the boundary between the regions with the mixed phase for eigenvalues (at smaller $\omega{L}/c$) and with
all symmetry-broken
eigenvalues (at larger $\omega{L}/c$) is clearly seen. Its location can be controlled by variations in
$Re(\varepsilon)$, while $Im(\varepsilon)$ is fixed.
A detailed investigation of these scenarios will be a subject of our future research.
Fig. \ref{3a} and Fig. \ref{3a-2} display the magnitudes for eigenvalues $\lambda_1$, $\lambda_2$, $\lambda_3$ and $\lambda_4$ in log10 scale at varying $\omega{L}/c$ for $Re(\varepsilon)=1$ and $Re(\varepsilon)=2$, respectively, for two different values of $Im(\varepsilon)$.
The location of the critical frequency for $\lambda_1$ and $\lambda_2$,
$\omega=\omega_c$, and, hence, the width of the $\omega$-range in which the mixed phase is achieved are strongly affected by $Im(\varepsilon)$.
As the value of $Im(\varepsilon)$ increases, the critical frequency is
redshifted, whereas an increase of $Re(\varepsilon)$ results in
blueshift of $\omega_c$.
At the same time, $Re(\varepsilon)$ weakly affects the width of the mixing range. On the other hand, $Re(\varepsilon)$ can strongly affect the magnitudes of
$\lambda_{3}$ and $\lambda_{4}$.
${\cal PT}$-symmetric and ${\cal PT}$-broken sets of eigenvalues can co-exist even at $Re(\varepsilon) = 1$, although the difference between them is rather weak in this case.
Since symmetry breaking at $\omega=\omega_c$ is a general property of $\lambda_{1}$ and $\lambda_{2}$,
it also occurs in the ENZ regime, e.g., at $Re(\varepsilon) = 0.02$ (not shown).
However, in this regime, the difference between $\lambda_1$ and $\lambda_2$ at $\omega>\omega_c$ is weak, whereas $\lambda_3$ and $\lambda_4$ may significantly deviate from the unimodular case in a wide range of $Im(\varepsilon)$ variation.
\begin{figure}[t!]
\centerline{\includegraphics[scale=1.2]{re1.pdf}}
\caption{(Color online) The magnitudes of the four eigenvalues (in log10 scale) are plotted against normalized frequency, $\omega{L}/c$, for $Re(\varepsilon)=1$ and $Im(\varepsilon)=0.2$ (top), $Im(\varepsilon)=0.5$ (bottom).
}
\label{3a}
\end{figure}
\begin{figure}[t!]
\centerline{\includegraphics[scale=1.2]{re2.pdf}}
\caption{(Color online) The magnitudes of the four eigenvalues (in log10 scale) are plotted against normalized frequency, $\omega{L}/c$, for $Re(\varepsilon)=2$ and $Im(\varepsilon)=0.2$ (top), $Im(\varepsilon)=0.5$ (bottom).
}
\label{3a-2}
\end{figure}
Next, let us add the component $R_0$ to the right end-face of the bilayer, so that we now have the configuration $MR_0$. We then obtain exactly the same physical scenario as for the configuration $L_{0}M$,
but with the roles of right- and left-side illumination interchanged. The $\mathbb{S}$-matrix is now given as:
\begin{ceqn}
\begin{align}
\mathbb{S}(\omega) = \begin{pmatrix} 0 & 0 & r_R & t \cr 0 & r_L & t & 0 \cr r_R & t & 0 & 0 \cr t & 0 & 0 & r_L \end{pmatrix}.
\end{align}
\end{ceqn}
It has four eigenvalues, which may yield the mixed phase:
\begin{subequations}
\begin{ceqn}
\begin{align}
\lambda_{1,2} = \frac{1}{2} \Big \{ (r_R+r_L) \pm \sqrt{(r_R-r_L)^2 +4t^2} \Big \},
\label{eq:Eq10a} \\
\lambda_{3,4} = \frac{1}{2} \Big \{ (r_L-r_R) \pm \sqrt{(r_R+r_L)^2 +4t^2} \Big \}.
\label{eq:Eq10b}
\end{align}
\end{ceqn}
\end{subequations}
\begin{table}[h!]
\caption{Number of eigenvalues and possibility of mixing of the eigenvalue phases for configurations differing in the number and properties of the individual components.}
\begin{center}
\begin{tabular}{ l | c | c | c }
\hline \hline
& eigenvalues & components & mixed phase \\ \hline
${\bf M}$ & 2 & 1 & \xmark \\ \hline
{$\bf L_0$}{$\bf M$}{$\bf R_0$} & 4 & 3 & \xmark \\ \hline
{$\bf L_0$}{$\bf M$} or {$\bf M$}{$\bf R_0$} & 4 & 2 & \cmark \\ \hline
{$\bf L_1$}{$\bf M$}{$\bf R_1$} & 4 & 3 & \xmark \\ \hline
{$\bf L_2$}{$\bf M$}{$\bf R_2$} & 4 & 3 & \xmark \\ \hline
{$\bf L_1$}{$\bf M$} or {$\bf M$}{$\bf R_1$} & 4 & 2 & \xmark \\ \hline
{$\bf L_2$}{$\bf M$} or {$\bf M$}{$\bf R_2$} & 4 & 2 & \cmark \\ \hline
{$\bf L_0$}{$\bf M$}{$\bf R_1$} or {$\bf L_1$}{$\bf M$}{$\bf R_0$} & 4 & 3 & \cmark \\ \hline
{$\bf L_0$}{$\bf M$}{$\bf R_2$} or {$\bf L_2$}{$\bf M$}{$\bf R_0$} & 4 & 3 & \xmark \\ \hline
{$\bf L_1$}{$\bf M$}{$\bf R_2$} or {$\bf L_2$}{$\bf M$}{$\bf R_1$} & 4 & 3 & \cmark \\ \hline \hline
\end{tabular}
\end{center}
\label{prop}
\end{table}
Similarly to Case 1, we can obtain the sets of eigenvalues given by Eqs. (\ref{eq:Eq8a}), (\ref{eq:Eq8b}) and by Eqs. (\ref{eq:Eq10a}), (\ref{eq:Eq10b})
in different configurations. The $\mathbb{S}$-matrices of these configurations should be equivalent to the $\mathbb{S}$-matrices
of the $L_0M$ and $MR_0$ cases, respectively, from which they must be obtainable via unitary transformations to
yield the same eigenvalues. From the obtained results,
the first set of eigenvalues [Eqs. (\ref{eq:Eq8a}), (\ref{eq:Eq8b})]
can be accessed in the configurations $L_2M$, $L_0MR_1$, and $L_1MR_2$, whereas the second set of eigenvalues [Eqs. (\ref{eq:Eq10a}), (\ref{eq:Eq10b})] can be accessed in
$MR_2$, $L_1MR_0$, and $L_2MR_1$.
The simplest configurations for Case 2 are $L_0M$, $MR_0$, $L_2M$, and $MR_2$.
They only need two components, i.e., a loss/gain bilayer and one PCC.
The properties of the studied configurations are summarized in Table \ref{prop}. One can see therein which features are required for obtaining the mixed phase for the eigenvalues, and which are required for keeping only ${\cal PT}$-symmetric eigenvalues within a certain frequency range.
Structures with two components, i.e., one loss/gain bilayer and one PCC with suitable properties, can be sufficient for obtaining the mixed phase, whereas those with three components do not always lead to it.
Whether phase mixing is achieved depends on the polarization conversion scenario at the end-faces, so the role of the PCC(s) is evident. If it is achieved, it occurs in a rather wide frequency range.
It is noticeable that despite having different end-faces $all$ of the studied configurations still enable the sets of symmetric eigenvalues.
On the other hand, the effect exerted by one PCC can be compensated by that of the other, so phase mixing is not achieved in some three-component
configurations. This case can be useful for the separation of two processes in one structure and, therefore, is promising for multifunctional operation.
\section*{Discussion}
The main goal of this study was to show that broadband mixing is possible, and, moreover, that it may be achieved in photonic heterostructures with a one-dimensional gain/loss bilayer.
We demonstrated a way to broadband mixing of ${\cal PT}$-symmetric and ${\cal PT}$-broken phases for eigenvalues in the structures with four channels and with a one-dimensional loss/gain component.
As far as symmetric and broken phases show different properties related to flux conservation, amplification, and attenuation, it is expected that the broadband mixing of these phases may open new routes to efficient selective manipulation by electromagnetic radiation, including advanced regimes of directional selectivity, enhancement, and absorption.
Earlier, such mixing was expected to occur only for physical dimensions higher than one. The obtained results show that neither a
two-dimensional loss/gain medium nor anisotropy is required for the mixing.
We have analytically derived the eigenvalues for different configurations of photonic heterostructures consisting of a bilayer of one-dimensional loss/gain
medium and PCC(s), and proved the possibility of obtaining
the mixed phase for the eigenvalues in a wide but limited frequency range, whose width depends on both the real and the imaginary part
of the permittivity of the loss/gain component. Therefore, wideband phase mixing is a very general effect whose existence does not need
any special adjustment of the parameters, although its appearance can be strongly sensitive to the parameter choice.
While the principal possibility of obtaining four channels by replacing a two-dimensional loss/gain medium with a one-dimensional medium combined with a PCC could be expected, the obtained results indicate that this case can be achieved in different permittivity ranges for multiple configurations, which are distinguished due to the presence and properties of the PCC(s).
The utilized model based on $\mathbb{S}$-matrix formalism
properly describes a wide variety of photonic heterostructures with a 1D loss/gain component and their ${\cal PT}$-related properties.
It is noteworthy that the ${\cal PT}$-symmetric eigenvalues set may exist for all of the considered configurations, regardless of whether the loss/gain component
is end-faced with one or two PCCs, or not end-faced at all.
According to the goals of this study, we clarified, based on the obtained results, which properties of PCCs are necessary and which are sufficient in different transmission/reflection scenarios. We showed that a two-component heterostructure can be sufficient to obtain wideband phase mixing, provided that the PCC(s) show the suitable properties. On the other hand, we detected such three-component heterostructures that keep immunity against phase mixing, in spite of containing a PCC that may lead to the mixing in other, even simpler configurations.
Design of PCCs enabling the desired polarization manipulation will be one of the next steps.
A further study of the peculiarities of different ranges of material parameters is planned.
The difference between the cases with and without phase mixing for eigenvalues is especially intriguing for ultralow-permittivity regime, in which the mixing is possible even in close vicinity of $Re(\varepsilon)=0$. Thus, the consideration of the studied physical features in connection with recent advances in theory and applications of ENZ materials and specific coupling and transmission regimes realized with their use is a promising topic for future research.
\section*{Methods}
The standard $\mathbb{S}$-matrix formalism, which is commonly used in quantum mechanics, has been used to mathematically express transmission and reflection properties of different configurations. The eigenvalues of the $\mathbb{S}$-matrix were analyzed analytically.
The values of $r_L$, $r_R$ and $t$ were calculated by applying the continuity conditions for the tangential components of the electromagnetic field at the boundaries of the structural components, so that the unknown coefficients in the general formulas for the field components are determined unambiguously.
The eigenvalues were calculated for the various configurations with a simple custom-made Fortran code, and the results were plotted with GNUPLOT; both procedures were performed on a standard laptop running Ubuntu.
\section*{Acknowledgements}
This work is supported by the projects DPT-HAMIT and NATO-SET-193. One of the
authors (E.O.) also acknowledges partial support from the Turkish Academy of
Sciences. A.E.S. thanks the National Science Centre of Poland for support under the
project MetaSel DEC-2015/17/B/ST3/00118. Work at Ames Laboratory was partially supported by the U.S. Department of Energy, Office of Basic Energy Science,
Division of Materials Sciences and Engineering (Ames Laboratory is operated for the U.S. Department of Energy by Iowa State University under
Contract No. DE-AC02-07CH11358). The European Research Council under ERC Advanced Grant No. 320081 (PHOTOMETA) supported work at FORTH. This work was originally published in Scientific Reports {\bf [Sci. Rep. 2017; 7: 15504]}.
\section{Introduction \label{sec:intro}}
Recent progress of laser technologies has enabled the observation of ultrafast phenomena with attosecond resolution and offered novel opportunities to directly explore real-time electron dynamics in
matter \cite{hentschel2001attosecond,RevModPhys.81.163,krausz2014attosecond}.
Broadly speaking, one can assign the available attosecond time-resolved measurement techniques to two major groups: one is based on all-optical measurements, such as attosecond transient absorption spectroscopy \cite{goulielmakis2010real,PhysRevLett.105.143002,PhysRevLett.106.123601}.
The other is based on photoelectron spectroscopy such as the reconstruction of attosecond beating by interference of two-photon transitions (RABBITT) \cite{Paul1689,muller2002reconstruction} and
the attosecond streaking camera \cite{PhysRevLett.88.173903,kienberger2004atomic}.
In the past decade, attosecond transient absorption spectroscopy has been applied to atomic and molecular systems, and ultrafast electron dynamics in these relatively small systems has been intensively
investigated both experimentally \cite{goulielmakis2010real,beck2014attosecond,warrick2016probing,reduzzi2016observation}
and theoretically \cite{PhysRevA.83.033405,CPHC:CPHC201201007}.
Recently, this technique has been extended to solid-state materials where
nonequilibrium electron dynamics has been investigated towards future applications
such as petahertz devices \cite{Schultze1348,Lucchini916,zurch2017direct}.
Although attosecond spectroscopy of solid-state materials provides in principle a wealth of information on novel
aspects of ultrafast dynamics, experimental results are often hard to interpret directly,
because of the strong nonlinearity of light-matter interactions combined with the complex electronic
structure of solids.
To extract microscopic insight from attosecond transient absorption spectra of solids,
first-principles simulations based on the density functional theory (DFT) \cite{PhysRev.136.B864,PhysRev.140.A1133} and
the time-dependent density functional theory (TDDFT)
\cite{PhysRevLett.52.997} have played a significant role
\cite{Schultze1348,Lucchini916,Schlaepfer2018natphys}.
Likewise, attosecond photoelectron spectroscopy has been applied to atomic systems \cite{Schultze1658,PhysRevLett.106.143002,PhysRevA.85.053424,Isinger893} as well as recently to
solid-state materials \cite{cavalieri2007attosecond,Locher:15,neppl2015direct}.
However, in spite of the intensive development of several \textit{ab-initio} approaches for atomic and molecular systems \cite{PhysRevA.84.061404,PhysRevA.86.061402,PhysRevA.89.033417},
similar approaches for solids and surfaces have not yet been established.
Therefore, in order to understand experiments on
such complex systems, further development of first-principles approaches is required.
In this regard one promising candidate is represented by real-time electron dynamics simulations based on TDDFT.
Real-time TDDFT simulations of solids have been already applied to ultrafast as well as strong-field-induced phenomena such as attosecond transient absorption spectroscopy \cite{Lucchini916}, high harmonic generation \cite{doi:10.1063/1.4716192,PhysRevLett.118.087403,PhysRevA.97.011401}, laser-induced damage \cite{PhysRevB.92.205413}, and laser-induced magnetism \cite{noejung2018}.
In a recent work \cite{deGiovannini:2016bb} the authors have introduced a method to compute the angle-resolved photoelectron spectrum of solid-state systems.
This method is based on the calculation of the photoelectron flux through a closed surface, which can be simulated with TDDFT in a real-space, real-time implementation.
Here we illustrate how this approach can be employed to calculate attosecond photoelectron spectra of finite systems.
This constitutes a fundamental benchmark towards the application to solid-state materials.
For this purpose, we perform the attosecond photoelectron spectroscopy simulation of an Argon atom and compare the theoretical results with recent experiments.
In particular, here we focus on RABBITT experiments, since the alternative approach, attosecond streaking, provides equivalent information~\cite{RevModPhys.87.765,Cattaneo:16}; we emphasize that TDDFT attosecond photoelectron spectroscopy can be straightforwardly applied to the attosecond streaking technique as well.
RABBITT was originally introduced for the temporal characterization of attosecond pulses
\cite{Paul1689}, and has since been employed to investigate photoionization delays in atoms and molecules~\cite{PhysRevLett.106.143002,RevModPhys.87.765,Isinger893}
as well as, more recently, at solid-state surfaces~\cite{Locher:15,Kasmi:17}.
The RABBITT technique employs two laser pulses in a pump-probe fashion: an attosecond extreme ultraviolet (EUV) pulse train is used as a pump to ionize the system while
a femtosecond infrared (IR) pulse is used as a probe.
This configuration is designed for experiments where the pulse train is obtained from a high-harmonic generation stage seeded by the IR pulse itself.
Since the attosecond pulse train consists of a frequency comb of odd multiples of the IR frequency, it produces a corresponding comb of peaks in the photoelectron spectrum.
Probing the system with the delayed IR pulse couples two adjacent photoelectron peaks and forms an interference pattern that oscillates as a function of the delay.
This pattern encodes information on the emission delay with attosecond resolution.
In this paper we will demonstrate how the entire process can be efficiently simulated with TDDFT.
The structure of this paper is as follows: In Sec. \ref{sec:method},
we describe the theoretical and numerical methods to compute electron dynamics and photoelectron spectra
based on the TDDFT. In Sec. \ref{sec:result}, we demonstrate the first-principles simulation for attosecond photoelectron spectroscopy
and compare the theoretical results with recent experiments. We further discuss the role of many-body effects in the photoemission process. Finally, we summarize our findings
and provide some perspective for future work in Sec. \ref{sec:summary}.
Hartree atomic units ($\hbar=e=m_e=4\pi \epsilon_0=1$) are employed throughout the paper unless otherwise specified.
\section{Method \label{sec:method}}
The fundamental concept of TDDFT is that all physical properties of a time-dependent system can be determined
through their functional dependence on the time-dependent interacting many-body density
$n({\bf r},t)$ \cite{PhysRevLett.52.997} and on the initial many-body state; the latter dependence can be disregarded if the propagation starts from
the ground state.
The idea of both DFT and TDDFT is to obtain this many-body density by mapping it to the density of
a fictitious auxiliary system of non-interacting electrons: the \emph{Kohn-Sham} (KS) system.
The dynamics of the KS system can be obtained by propagating the one-particle
equations for the orbitals $\varphi_i({\bf r},t)$ of a single Slater determinant, according to the
time-dependent KS (TDKS) equations
\begin{equation}
\label{eq:tdks}
i\frac{\partial}{\partial t} \varphi_i({\bf r},t) = H_{\rm KS}({\bf r},t)\varphi_i({\bf r},t) ,\;\;\; i=1,\dots,N/2\,.
\end{equation}
To simplify notation we here only consider systems with an even number
of electrons $N$, so that each spatial orbital $\varphi_i$ is doubly occupied with two electrons of opposite spin.
The KS Hamiltonian governing the dynamics of the orbitals in~(\ref{eq:tdks}) is defined as:
\begin{eqnarray}
H_{\rm KS}({\bf r},t)&=& \frac{1}{2} \left(-i \nabla + \frac{\mathbf{A}(t)}{c}\right)^2 + v_{\rm KS}[n]({\bf r},t)\,, \label{eq:hks} \\
v_{\rm KS}[n]({\bf r},t) &=& v_{\rm ion}({\bf r},t) +v_{\rm H}[n]({\bf r},t) + v_{\rm xc}[n]({\bf r},t)\,\label{eq:vks}
\end{eqnarray}
where, due to the action of the KS potential $v_{\rm KS}$, the time-dependent density $n({\bf r},t) = 2\sum_{i=1}^{N/2}\vert\varphi_i({\bf r},t)\vert^2$ is the same in the real and in the KS system.
The KS potential is composed of three terms.
The first term is the electron-ion potential provided by the nuclei, while
the second term is the electrostatic potential generated by the electronic charge density $v_{\rm H}[n]({\bf r},t) = \int\!\!{\rm d}{\bf r}'\;n({\bf r}',t)/\vert {\bf r}-{\bf r}'\vert$.
The last term $v_{\rm xc}[n]({\bf r},t)$ is the so-called exchange and correlation (xc) potential that accounts for the many-body effects deriving from the electron-electron interaction; it is a functional of the density at all times $n({\bf r},t)$ and, since its explicit form is unknown, it must be approximated.
In this work we employ the adiabatic local-density approximation (ALDA) \cite{PhysRevLett.45.566,PhysRevB.23.5048} which is based on
the xc potential of a homogeneous electron gas evaluated with the instantaneous density in time at every
point in space.
In order to compensate the self-interaction error \cite{PhysRevB.23.5048} of the local approximation and obtain a correct ionization
potential we employ the simplest scheme based on the averaged-density self-interaction correction (ADSIC)~\cite{Legrand:2002jf}.
Given the energy range of the lasers employed in the simulations, it is well justified to invoke the dipole approximation for the light-matter interaction.
Under this condition, the coupling with the laser field can be expressed in the velocity gauge which amounts to modifying the kinetic operator by adding the spatially homogeneous time-dependent vector potential $\mathbf{A}(t)$ of the classical laser field, as in Eq.~(\ref{eq:hks})\footnotemark.
The time profile of $\mathbf{A}(t)$ can accommodate any linear combination of laser fields and is therefore
naturally suited to describe any kind of pump-probe configuration, including the one employed for RABBITT
experiments.
\footnotetext{Note that Eq. (\ref{eq:hks}) and Eq. (\ref{eq:vks}) correctly describe the coupling with external electric fields, but neglect contributions from the magnetic fields. To correctly account for magnetic fields one would need to resort to a current-density functional theory formulation of the problem where exchange and correlation are expressed via a vector potential ${\bf A}_{xc}[{\bf j}]$.
However, the effect of the magnetic component of a laser electromagnetic radiation is much smaller than that of the electric component and can be safely neglected in the presently discussed context.}
To obtain the photoelectron spectrum from the time-dependent KS orbitals, we use the fact that it can be expressed as a flux integral of the ionization current through a closed surface.
This approach is based on the t-SURFF method, first introduced by Scrinzi \cite{Tao:2012ev} for one-electron
systems and later extended to many electrons with TDDFT \cite{Wopperer:2017bm}.
According to this formulation the momentum-resolved photoelectron probability $\mathcal{P}({\bf p})$, i.e. the probability to measure an electron with a given momentum $\mathbf{p}$, can be expressed as
\begin{equation}\label{eq:tsurfPp}
\mathcal{P}({\bf p}) =
\frac{2}{N}\sum_{i=1}^{N/2} \left| \int_0^\infty {\rm d }\tau \oint_S {\rm d } {\bf s}\cdot \langle \chi_{\bf p}(\tau) | \hat{\bf j} |\varphi_i(\tau) \rangle \right|^2
\end{equation}
where $\hat{\bf j}$ is the single-particle gauge-invariant current density operator and $\chi_{\bf p}({\bf r},t)= (2\pi)^{-\frac{3}{2}} e^{i ({\bf p}+{\bf A}(t)/c)\cdot{\bf r}} e^{i \Phi({\bf p},t)}$. The phases $\Phi({\bf p},t)=\int_0^t {\rm d}\tau\, \frac{1}{2} \left( {\bf p} + \frac{{\bf A}(\tau)}{c} \right)^2$
describe Volkov waves of
momentum $\mathbf{p}$ that are the analytical solutions of the time-dependent Schr\"odinger equation
for free particles in a field.
The bracket notation in the equation is thus used as shorthand to indicate the
evaluation of the current-density operator matrix element between KS orbitals and Volkov waves.
The energy-resolved photoelectron spectrum, $\mathcal{P}(E)$, employed to build the RABBITT traces can be obtained by integrating the angular dependence of $\mathcal{P}({\bf p})$ as follows
\begin{equation}
\mathcal{P}(E)=\int_0^{4\pi}{\rm d}\Omega\, \mathcal{P}(E=\frac{{\bf p}^2}{2},\Omega) \,.
\label{eq:integrated-pes}
\end{equation}
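As a rough illustration of this angular integration, the following Python sketch integrates a toy momentum-resolved probability over the solid angle on a spherical grid; the Gaussian distribution is a stand-in for the t-SURFF output and is purely illustrative.

```python
# Illustrative sketch (not from the paper): angular integration of a
# momentum-resolved photoelectron probability P(p) into the
# energy-resolved spectrum P(E), as in Eq. (4). The Gaussian below is a
# toy stand-in for the t-SURFF output.
import numpy as np

def P_momentum(p, theta, phi):
    # toy isotropic distribution peaked at |p| = 1 a.u.
    return np.exp(-((p - 1.0) ** 2) / 0.01)

p_grid = np.linspace(0.0, 3.0, 301)
theta = np.linspace(0.0, np.pi, 200)
phi = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
dth, dph = theta[1] - theta[0], phi[1] - phi[0]
TH, PH = np.meshgrid(theta, phi, indexing="ij")

P_E = np.empty_like(p_grid)
for k, p in enumerate(p_grid):
    # dOmega = sin(theta) dtheta dphi
    integrand = P_momentum(p, TH, PH) * np.sin(TH)
    P_E[k] = integrand.sum() * dth * dph
E_grid = p_grid ** 2 / 2.0        # E = p^2/2 in atomic units
```

For this isotropic toy distribution the solid-angle integral reduces to $4\pi$ times the radial profile, and the resulting spectrum peaks at $E = 0.5$ a.u.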
This approach to photoemission is particularly suited to numerical implementations where the TDKS
equations~(\ref{eq:tdks}) are solved in real-space and propagated in real-time.
In our implementation the spatial coordinates are discretized on a cartesian grid with spacing
$\Delta =0.3$~a.u. and the equations are solved on a spherical box of radius $R=30$~a.u..
The TDKS equations are propagated under the influence of a time dependent field with a time step $\Delta
t=0.04$~a.u. starting from the ground state configuration.
The photoelectron probability is calculated with~(\ref{eq:tsurfPp}) by collecting the flux integral calculated on
a spherical surface of radius $R_{\rm S}=20$~a.u. while the KS orbitals are propagated over time.
To prevent spurious reflections from the boundaries of the simulation box we employ a complex absorbing potential
(CAP) acting on the region outside the surface $S$ with parameters tuned in such a way to be maximally efficient
in the energy region where we expect photoelectrons to be mostly emitted~\cite{DeGiovannini:2015jt}.
The geometry employed in the simulations is summarized in Fig.~\ref{fig:tsurff}.
\begin{figure}[htbp]
\includegraphics[width=0.84\columnwidth]{tsurff_l.pdf}
\caption{\label{fig:tsurff} Scheme illustrating the geometry employed to calculate the photoelectron
spectrum with t-SURFF and TDDFT.}
\end{figure}
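The real-space, real-time propagation underlying this setup can be illustrated with a minimal one-dimensional sketch. The Crank-Nicolson step and the soft-core potential below are illustrative stand-ins (the actual calculations use the propagators and pseudopotentials implemented in Octopus); the point is that the discretized time step preserves the norm of the orbital.

```python
# Minimal 1-D sketch of real-space, real-time orbital propagation in the
# spirit of Eq. (1). A Crank-Nicolson (Cayley) step and a soft-core
# potential are illustrative stand-ins for the actual Octopus machinery.
import numpy as np

N, L = 200, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
dt = 0.04                                   # a.u., as quoted in the text

# -1/2 d^2/dx^2 by second-order finite differences
T = (np.eye(N) - 0.5 * (np.diag(np.ones(N - 1), 1)
                        + np.diag(np.ones(N - 1), -1))) / dx**2
V = np.diag(-1.0 / np.sqrt(x**2 + 2.0))     # soft-core "ionic" potential
H = T + V                                   # stand-in for H_KS

# Cayley propagator (1 + i dt H/2)^-1 (1 - i dt H/2): exactly unitary
U = np.linalg.solve(np.eye(N) + 0.5j * dt * H,
                    np.eye(N) - 0.5j * dt * H)

phi = np.exp(-x**2).astype(complex)         # arbitrary initial orbital
phi /= np.sqrt(np.sum(np.abs(phi)**2) * dx)
for _ in range(100):                        # propagate 4 a.u. of time
    phi = U @ phi
norm = np.sum(np.abs(phi)**2) * dx          # remains equal to 1
```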
Finally, since the inner-shell electrons of Argon are not expected to play a significant role in the ionization
dynamics, we use the Hartwigsen-Goedecker-Hutter (HGH) pseudopotential~\cite{PhysRevB.58.3641}, which effectively accounts for the core electrons, and consider
explicitly only the $n=3$ electrons.
All the simulations presented are carried out with the \emph{Octopus} code~\cite{Strubbe:2015iz}.
\section{Result \label{sec:result}}
In this section, we examine the performance of the TDDFT simulation for attosecond
photoelectron spectroscopy. For this purpose, we simulate the RABBITT measurement process for an Argon atom.
We first explain how to simulate the entire RABBITT measurement. Then, we compare the theoretical results with recent experimental data \cite{PhysRevLett.106.143002,PhysRevA.85.053424}.
Finally, we investigate the role of many-body effects in the RABBITT photoemission delay.
\subsection{RABBITT spectroscopy \label{subsec:RABBITT-tddft}}
Here, we revisit the RABBITT pump-probe technique from a computational point of view.
The RABBITT measurement is a pump-probe experiment that employs
an attosecond EUV pulse train as a pump and an IR femtosecond pulse as a probe. Importantly, the attosecond pulse train
is generated by high-order harmonic generation of the same IR femtosecond pulse.
Therefore, the attosecond EUV pulse train consists of odd harmonics of the IR field.
In this work, we employ the following form for the femtosecond IR pulse,
\be
A_{IR}(t)= -\frac{c E_{IR}}{\omega_{0}} \left[\cos\left ( \frac{\pi t}{T_{IR}}\right)\right]^2\sin \left(\omega_{0}t\right),
\label{eq:IR-pulse}
\end{eqnarray}
in the domain $-T_{IR}/2<t<T_{IR}/2$ and zero outside.
Here $\omega_0$ is the mean frequency of the IR pulse, and $T_{IR}$ is the full duration of the pulse.
The maximum field amplitude $E_{IR}$ is related to the peak laser intensity as $I_{IR}=cE^2_{IR}/8\pi$.
We set $\omega_0$ to $1.55$ eV, and $T_{IR}$ to $30$ fs. The peak intensity $I_{IR}$ is set to
$10^{11}$ W/cm$^2$.
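As a minimal numerical sketch of this probe pulse (in atomic units, with an arbitrary overall amplitude since only the pulse shape matters here), the vector potential of Eq.~(\ref{eq:IR-pulse}) with the quoted parameters reads:

```python
# Sketch of the IR probe vector potential of Eq. (5); the overall
# amplitude is set to 1 (arbitrary units) since only the shape matters.
import numpy as np

EV = 1.0 / 27.211386      # eV  -> Hartree
FS = 41.341374            # fs  -> atomic units of time

omega0 = 1.55 * EV        # mean IR frequency
T_IR = 30.0 * FS          # full pulse duration

def A_IR(t, amp=1.0):
    t = np.asarray(t, dtype=float)
    # cos^2 envelope, identically zero outside |t| < T_IR/2
    env = np.where(np.abs(t) < T_IR / 2.0,
                   np.cos(np.pi * t / T_IR) ** 2, 0.0)
    return -amp * env * np.sin(omega0 * t)
```

Because the envelope varies slowly on the scale of the IR period, the maximum of $|A_{IR}|$ is very close to the nominal amplitude.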
As a corresponding attosecond pulse train, we employ the following form:
\be
A_{EUV}(t)=-\frac{c E_{EUV}}{\omega_{EUV}} \cos^4 \left(\frac{\pi t}{T_{train}}\right)
\cos^6 \left( \omega_0 t\right) \sin\left(\omega_{EUV} t \right), \nonumber \\
\label{eq:EUV-pulse}
\end{eqnarray}
in the domain $-T_{train}/2<t<T_{train}/2$ and zero outside. Here $\omega_{EUV}$ is the central frequency,
and $T_{train}$ is the full duration of the pulse train.
We set $\omega_{EUV}$ to $25\omega_0$, and $T_{train}$ to $10$ fs. We also set the peak laser intensity $I_{EUV}=cE^2_{EUV}/8\pi$
to $5\times 10^{10}$ W/cm$^2$.
Figure \ref{fig:Etw_ATPs} shows the attosecond pulse train of Eq. (\ref{eq:EUV-pulse})
in the time-domain (a) and the frequency-domain (b). As seen from Fig. \ref{fig:Etw_ATPs} (a),
the train consists of several equally spaced attosecond pulses. Each pulse has
a full width at half maximum of about $120$ attoseconds.
This train forms a comb in the frequency domain, as seen in Fig. \ref{fig:Etw_ATPs} (b).
We note that the comb consists of the odd-order harmonics of the IR pulse.
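This odd-harmonic structure can be verified directly from Eq.~(\ref{eq:EUV-pulse}): a discrete Fourier transform of the sampled train (a sketch with the quoted parameters, in atomic units and with unit amplitude) shows strong peaks at odd multiples of $\omega_0$ and essentially no weight at the even multiples.

```python
# Sketch: spectrum of the attosecond pulse train of Eq. (6) with the
# parameters quoted in the text (amplitude set to 1, atomic units).
import numpy as np

EV, FS = 1.0 / 27.211386, 41.341374
omega0 = 1.55 * EV
omega_euv = 25.0 * omega0
T_train = 10.0 * FS

dt = 0.05
t = np.arange(-T_train / 2, T_train / 2, dt)
A = (-np.cos(np.pi * t / T_train) ** 4
     * np.cos(omega0 * t) ** 6
     * np.sin(omega_euv * t))

spec = np.abs(np.fft.rfft(A)) ** 2
freq = 2.0 * np.pi * np.fft.rfftfreq(t.size, dt)   # angular frequency

def power_at(n):
    # spectral power at the bin closest to the n-th harmonic of omega0
    return spec[np.argmin(np.abs(freq - n * omega0))]
```

The reason is simple: $\cos^6(\omega_0 t)$ contains only the frequencies $0, 2\omega_0, 4\omega_0, 6\omega_0$, so mixing with the carrier at $25\omega_0$ yields lines only at the odd harmonics from $19\omega_0$ to $31\omega_0$.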
\begin{figure}[htbp]
\includegraphics[width=0.84\columnwidth]{Et_ATPs.pdf}
\includegraphics[width=0.8\columnwidth]{Ew2_ATPs.pdf}
\caption{\label{fig:Etw_ATPs} Profile of the attosecond pulse train in time-domain (a)
and the frequency domain (b).}
\end{figure}
We then compute the photoelectron spectrum induced by the attosecond pulse train.
Figure \ref{fig:PES_ATPs} shows the photoelectron spectrum as a function of kinetic energy
of emitted electrons.
Since the photoelectron spectrum is computed based on the time-dependent Kohn-Sham orbitals,
one can naturally decompose the signal into each orbital contribution.
In Fig. \ref{fig:PES_ATPs}, the contribution from the Ar 3$s$ shell is shown as a red-solid line,
while that from the Ar 3$p$ shell is shown as a green-dashed line.
One sees that the two contributions are energetically well separated because of the large
difference between the ionization potentials of the 3$s$ and 3$p$ shells.
Each contribution shows the comb structure, reflecting the frequency comb feature of
the attosecond pulse train in Fig. \ref{fig:Etw_ATPs} (b).
\begin{figure}[htbp]
\includegraphics[width=0.8\columnwidth]{PES_ATPs.pdf}
\caption{\label{fig:PES_ATPs}
Photoelectron spectrum induced by the attosecond pulse train of Figure \ref{fig:Etw_ATPs}.
The contribution from the Ar 3$s$ shell
is shown as a red-solid line, while that from the Ar 3$p$ shell is shown as a green-dashed line.
The contribution from the Ar 3$s$ shell is scaled by a factor of $5$.
}
\end{figure}
In a RABBITT experiment, the photoelectron spectrum under both the attosecond pulse train
and the femtosecond IR pulse is measured. In perfect analogy, we can compute in the theoretical simulation
the photoelectron spectrum under both the attosecond pulse train
and the IR femtosecond laser pulse. Figure \ref{fig:PES_ATPs_IR} shows
the photoelectron spectrum from the Ar 3$p$ shell. The red solid line shows the photoelectron spectrum
created by both the attosecond pulse train and the IR femtosecond pulse, while
the blue dashed line shows the signal due solely to the pulse train.
One sees that the IR pulse results in additional peaks between those peaks that were created only by
the pulse train. These additional peaks originate from a two-photon absorption
process: one photon from the attosecond pulse train and one from the IR pulse.
The absorbed photon energy can be obtained by adding the ionization potential of the Ar 3$p$ shell
to the photoelectron kinetic energy; it
is shown on the secondary $x$-axis of Fig. \ref{fig:PES_ATPs_IR}.
Each additional peak due to the IR field results from two excitation paths:
one corresponds to the EUV photon energy plus the IR photon energy,
while the other corresponds to the EUV minus the IR photon energy.
For example, as seen in the schematic picture of Fig. \ref{fig:PES_ATPs_IR}, the additional peak at the energy of $24 \omega_0$ is created by the following two
excitation paths: one is the 23rd harmonic plus one IR photon, while the other is
the 25th harmonic minus one IR photon. As discussed above, this interference between the two excitation paths is the central effect exploited in RABBITT spectroscopy.
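The two-path interference just described can be captured by a minimal model (with arbitrary, assumed path phases): adding the "harmonic 23 plus IR photon" and "harmonic 25 minus IR photon" amplitudes yields a side-band signal that oscillates at $2\omega_0$ in the pump-probe delay, with a phase offset that encodes the path-phase difference.

```python
# Toy two-path model of a RABBITT side band. The phases phi23 and phi25
# are arbitrary placeholders for the harmonic plus atomic phases of the
# two quantum paths; they are not values from the paper.
import numpy as np

omega0 = 1.55 / 27.211386       # IR frequency (a.u.)
phi23, phi25 = 0.3, 0.7         # assumed path phases

def sideband(tau):
    a_up = np.exp(1j * (phi23 + omega0 * tau))     # H23 + one IR photon
    a_down = np.exp(1j * (phi25 - omega0 * tau))   # H25 - one IR photon
    return np.abs(a_up + a_down) ** 2

# |a_up + a_down|^2 = 2 + 2 cos(2*omega0*tau - (phi25 - phi23)):
# the delay of the 2*omega0 oscillation encodes the phase difference.
```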
\begin{figure}[htbp]
\includegraphics[width=0.8\columnwidth]{PES_ATPs_IR_mod.pdf}
\caption{\label{fig:PES_ATPs_IR}
Photoelectron spectra from the Ar 3$p$ shell. The red solid line shows the result with both
the attosecond pulse train and the IR femtosecond pulse, while the blue dashed line
shows the result with the pump pulse train only.
}
\end{figure}
We next perform the RABBITT pump-probe simulations by changing the time delay
between the attosecond pulse train and the IR pulse.
Figure \ref{fig:RABBITT_Ar_3p} shows the calculated photoelectron spectrum
as a function of the time delay.
One sees that the even-order side bands oscillate as a function of the time delay,
reflecting the interference of the two different two-photon absorption paths
described in the schematic picture of Fig. \ref{fig:PES_ATPs_IR}.
The frequency of the oscillation is twice the IR frequency $\omega_0$.
Generally, each side band has its own time delay with respect to the IR field.
Because the difference between the delays of these side bands reflects the difference in
the photoionization delay of each excitation channel,
the time delay in the RABBITT trace has been used to investigate photoionization processes \cite{PhysRevLett.106.143002,PhysRevA.85.053424,Isinger893}.
Since TDDFT can directly simulate the whole RABBITT experimental process and
provide the resulting RABBITT trace as seen in Fig. \ref{fig:RABBITT_Ar_3p}, it enables us to
directly compare calculated results with experimental results.
In the following subsection, we demonstrate the comparison of theory with experiment.
\begin{figure}[htbp]
\includegraphics[width=0.8\columnwidth]{rabbit_Ar_3p.pdf}
\caption{\label{fig:RABBITT_Ar_3p}
Calculated RABBITT trace from Eq. (\ref{eq:integrated-pes}) for the Ar 3$p$ shell
using the laser pulses of Fig. \ref{fig:Etw_ATPs}.}
\end{figure}
\subsection{Comparison with experimental results}
Here, we compare the computed time delays from TDDFT simulations with the experimental results \cite{PhysRevLett.106.143002,PhysRevA.85.053424}.
For this purpose, we first numerically extract the delay from the RABBITT trace
in Fig. \ref{fig:RABBITT_Ar_3p}.
To extract the delay for each side band, we average the RABBITT trace around the central frequency
of the side band within a width of $\omega_0/2$. For example, to extract the 26th side band
in Fig. \ref{fig:RABBITT_Ar_3p}, we average the signal between $26 \omega_0 \pm \omega_0/4$.
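In code, this windowed average amounts to a slice over the energy axis; the sketch below uses random stand-in data in place of the computed trace.

```python
# Sketch of the side-band extraction: average the RABBITT trace over the
# window n*omega0 +/- omega0/4. The random array is a stand-in for the
# computed trace P(E, delay); E is the absorbed photon energy in a.u.
import numpy as np

omega0 = 1.55 / 27.211386
E = np.linspace(0.5, 2.5, 500)               # energy axis (a.u.)
delays = np.linspace(-3.0, 3.0, 61)          # pump-probe delays (fs)
rng = np.random.default_rng(0)
trace = rng.random((delays.size, E.size))    # stand-in for P(E, delay)

def sideband_signal(trace, n):
    # total window width omega0/2, centred on side band n
    window = np.abs(E - n * omega0) < omega0 / 4.0
    return trace[:, window].mean(axis=1)

s26 = sideband_signal(trace, 26)             # one value per time delay
```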
Figure \ref{fig:Ar_3p_26th} shows the extracted signal for the 26th side band
in Fig. \ref{fig:RABBITT_Ar_3p}. Each red point shows the result of
a single TDDFT simulation at the corresponding time delay.
In order to extract the time delay, we further fit the numerical signal by an analytic function of the following form:
\be
S(t)=A\cdot \cos^4 \left[ \frac{\pi}{\sigma} (t-t_0)\right]
\cos^2 \left[\omega_0 (t-\tau_{delay}) \right ] +C,
\label{eq:fitting}
\end{eqnarray}
where $A$, $C$, $\sigma$, $t_0$, and $\tau_{delay}$ are fitting parameters. Here, $\tau_{delay}$ is
the time delay, which we aim to extract.
In Fig. \ref{fig:Ar_3p_26th}, the fitting function is shown as a blue line.
One sees that the fitting function represents the signal very well over the entire time delay range.
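The robustness of this fit can be checked on synthetic data: generating a noiseless trace from Eq.~(\ref{eq:fitting}) with a known delay and refitting it recovers $\tau_{delay}$. All parameter values in the sketch below are illustrative, not those of the actual analysis.

```python
# Sketch of the delay extraction of Eq. (7): fit a synthetic side-band
# trace with a known delay. Parameter values are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

FS = 41.341374                  # fs -> atomic units of time
omega0 = 1.55 / 27.211386       # IR frequency (a.u.)

def model(t, A, C, sigma, t0, tau):
    return (A * np.cos(np.pi * (t - t0) / sigma) ** 4
            * np.cos(omega0 * (t - tau)) ** 2 + C)

tau_true = 0.25 * FS            # assumed delay of 250 as
t = np.linspace(-10 * FS, 10 * FS, 400)
signal = model(t, 1.0, 0.1, 40 * FS, 0.0, tau_true)

p0 = [0.9, 0.05, 38 * FS, 5.0, 0.1 * FS]   # rough initial guess
popt, _ = curve_fit(model, t, signal, p0=p0)
tau_fit = popt[4]               # recovered delay (a.u.)
```

Note that $\tau_{delay}$ is only defined modulo half an IR period by the $\cos^2$ term, so the initial guess must lie on the correct branch.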
\begin{figure}[htbp]
\includegraphics[width=0.8\columnwidth]{Ar_3p_26th.pdf}
\caption{\label{fig:Ar_3p_26th}
Extracted RABBITT trace of the Ar 3$p$ shell for the $26$th side band. Red points show the TDDFT result
for each delay. The blue line shows the fitting function of Eq. (\ref{eq:fitting}).
}
\end{figure}
Even though the absolute time delay with respect to the IR field can be readily extracted from these theoretical simulations,
it is in fact hard to extract this absolute delay from experimental results, since the absolute time-zero
cannot be determined in the experiments.
Therefore, experimental results have so far provided only the relative time delay between
two different excitation channels, such as ionization from different atomic shells.
Figure \ref{fig:delay_Ar_3s_3p} shows the relative time delay for ionization of Ar from the 3$s$ and 3$p$
shells. Red circles show the TDDFT results, while the up- and down-pointing triangles show
recent experimental results \cite{PhysRevLett.106.143002,PhysRevA.85.053424}.
One sees that the TDDFT results are in excellent agreement with the experimental results.
We note that while our work, based on TDDFT with ALDA and ADSIC, shows very good agreement with the experiment,
recent results by Magrakvelidze \textit{et al.} \cite{PhysRevA.91.063415}, also based on TDDFT in the local density approximation,
appear to disagree with the same experiment.
Here, we discuss a possible origin of this apparent inconsistency and suggest that it emerges from the separation of the photoemission delay into
two consecutive steps -- an approach shared by many RABBITT models.
In many works that deal with modelling RABBITT \cite{PhysRevLett.106.143002,RevModPhys.87.765}, the time delay is first decomposed
into two components:
\be
\tau_{delay} = \tau_{W} + \tau_{cc},
\end{eqnarray}
where $\tau_{W}$ is the so-called Wigner delay due to the EUV single-photon
absorption process \cite{PhysRev.98.145,PhysRev.118.349},
and $\tau_{cc}$ is the so-called continuum-continuum delay due to the additional contribution
from the IR field \cite{PhysRevLett.106.143002,DAHLSTROM201353}. These two delays are often treated and computed separately.
In contrast, in the present work we do not rely on such a decomposition of the RABBITT delay,
but directly compute the total delay by simulating the whole measurement process: starting from the two external laser fields
and the system in its ground state, we perform a time propagation all the way to the detection of the emitted photoelectron.
As a result, our method treats the excitation, emission, and detection processes on the same footing and, as shown in the present work, accurately reproduces the experiment.
This indicates that a fully consistent treatment of all the delay components
is essential for a correct interpretation of the experimental results, and thus
a direct simulation of the entire measurement process is required.
Furthermore, a separate treatment of the delay components is highly nontrivial
for complex systems such as large molecules or solid-state surfaces.
In the latter case, an additional delay related to the electron transport towards the surface has to be taken into account~\cite{Locher:15}.
\begin{figure}[htbp]
\includegraphics[width=0.8\columnwidth]{delay_Ar_3s_3p.pdf}
\caption{\label{fig:delay_Ar_3s_3p}
Comparison of the delay differences for ionization of Ar from the 3s and 3p shells:
the theoretical results by the TDDFT with ALDA + ADSIC (red-circles),
the experimental results by Kl\"under \textit{et al.}
\cite{PhysRevLett.106.143002} (green up-pointing triangles), and the experimental results by Gu\'enot \textit{et al.} \cite{PhysRevA.85.053424}
(blue down-pointing triangles) are shown.
}
\end{figure}
\subsection{Many-body effects}
One of the strong points of the present TDDFT approach to photoelectron spectroscopy is the capability to
investigate the impact of many-body effects directly in the experimental observable,
which offers novel opportunities to explore their role
in the photoelectron emission process.
To demonstrate this capability, we here investigate the role of dynamical electron-electron interaction effects in Argon RABBITT spectroscopy. In our calculation we employ the local density approximation (LDA) for all exchange and correlation effects beyond the time-dependent Hartree approximation. While the LDA is known to describe most exchange effects poorly and to capture only weak correlation, the dynamical Hartree potential used in our calculation can be expected to have a large impact on the dynamics. Since the photoemission process requires a self-interaction-corrected functional, it is not possible to completely separate the effects of the xc functional and of the Hartree potential, but we posit that our results can be considered roughly equivalent to at least a random-phase-approximation level of theory.
To demonstrate the influence of some of the many-body effects captured by the current approximation, we additionally perform TDDFT RABBITT simulations in which we neglect the time dependence of the Hartree and exchange-correlation potentials. That is, throughout the propagation of the KS equations, the Kohn-Sham potential is kept ``frozen'' at its ground-state form.
This treatment corresponds to the independent-particle (IP) approximation, since
all electrons move independently in a common, fixed mean-field potential.
Figure \ref{fig:delay_Ar_comp_tddft_ip} (a) shows the relative delays $\tau_{3s}-\tau_{3p}$ computed
by the TDDFT and the IP calculations.
One sees that, while the two calculations provide similar relative delays
in the lower photon-energy region, they show a discrepancy in the higher-energy region.
In this high energy range around 42 eV, the Ar 3$s$ photoionization cross section becomes very small
due to many-electron effects \cite{PhysRevA.47.3888}. This region is the so-called Cooper minimum,
and the influence of Cooper minima on the photoionization delay has been intensively discussed \cite{RevModPhys.87.765}.
Previous TDDFT calculations with ADSIC reported
a photoionization cross section in good agreement with
experiment~\cite{PhysRevA.90.033412}.
To obtain further insight into the impact of many-body effects on the photoionization delay in atoms,
we investigate the RABBITT delay for individual Ar 3$s$ and 3$p$ shells.
Figures \ref{fig:delay_Ar_comp_tddft_ip} (b) and (c) show the RABBITT delays for the
Ar 3$s$ and 3$p$ shells, respectively.
As seen from Fig. \ref{fig:delay_Ar_comp_tddft_ip} (b), many-body effects
play different roles for photoionization from the Ar 3$s$ shell in the lower and higher energy ranges:
while the many-body interaction induces a positive delay in the lower energy range, it induces
a negative delay in the higher energy range.
This indicates a correlation among many-body effects, Cooper minima, and the photoionization delay.
In contrast, as seen from Fig. \ref{fig:delay_Ar_comp_tddft_ip} (c), the many-body effects
uniformly increase the delay of the Ar 3$p$ shell over the entire investigated photon-energy range.
Importantly, one sees that they induce a similar amount of positive delay for both the
Ar 3$s$ and 3$p$ shells in the low photon-energy range. Therefore, the influence of
many-body effects on the relative 3$s$-3$p$ delay
cancels out in this range.
\begin{figure}[htbp]
\includegraphics[width=0.8\columnwidth]{delay_Ar_comp_tddft_ip.pdf}
\caption{\label{fig:delay_Ar_comp_tddft_ip}
Comparison of the RABBITT delay with and without the many-body effects as captured by TDDFT with LDA.
Panel (a) shows the relative Ar 3$s$-3$p$ delay. The individual delays of the Ar 3$s$ and
3$p$ shells are shown in panels (b) and (c), respectively.
}
\end{figure}
\section{Summary \label{sec:summary}}
In this work, we developed an efficient first-principles attosecond photoelectron spectroscopy technique
based on time-dependent density functional theory (TDDFT),
focusing on the reconstruction of attosecond beating by interference of two-photon transitions (RABBITT).
We applied the TDDFT RABBITT simulation to investigate the photoemission from the 3$s$ and 3$p$ shells of Argon.
We demonstrated that the TDDFT results nicely reproduce recent experimental results \cite{PhysRevLett.106.143002,PhysRevA.85.053424}.
The good agreement of our TDDFT simulation with the experimental results stands in apparent contradiction with
previous work that also employs TDDFT \cite{PhysRevA.91.063415}; Magrakvelidze \textit{et al.} reported
that results computed by TDDFT with the local density approximation disagree with
the measured relative Ar 3$s$-3$p$ time delay.
While the previous work computed only the Wigner delay with TDDFT but employed another
theory to treat the continuum-continuum delay, the present work treats all the components of
the delay at the same level within the TDDFT propagation. Therefore, the apparent inconsistency between the present and previous
works may originate from the inconsistent treatment of the individual delay contributions
in the previous work. This indicates the significance of
a consistent treatment of each delay contribution and of a direct simulation of
the whole measurement process.
Furthermore, once target systems become complex, such as large molecules and solid-state
surfaces, this kind of step-wise approach to the complete delay becomes nontrivial or unfeasible.
Fully consistent simulations of the whole measurement process therefore naturally emerge as
a significant tool to attain microscopic insight into such attosecond experiments.
Furthermore, the presented TDDFT approach offers novel opportunities to investigate the role of
microscopic many-body effects in the photoemission process. In this work we have shown how, by freezing
the time-dependent Hartree and exchange-correlation potentials, the role of many-body interactions
can be systematically investigated.
As a result, it turned out that many-body effects substantially affect the RABBITT
photoionization delay.
In particular, we found that the induced delay in Ar 3$s$ photoionization changes its sign around the Cooper minimum.
At present, the accurate description of the exchange-correlation potential, as well as of
electron-ion coupling, is limited, and thus many-body effects are not fully captured by our
TDDFT simulation. However, once better descriptions of the electron-electron and electron-ion
interactions are developed, the TDDFT RABBITT simulation could be employed to investigate
the role of decoherence due to electron-electron, electron-ion and electron-phonon
scattering in both the photoemission and the transport processes.
While in this work we presented results for a simple system, gas-phase argon,
the current technique can readily be applied to more complex targets.
In particular, the current approach, as well as our implementation, can already be used to investigate attosecond photoelectron dynamics at solid surfaces.
It therefore represents a very powerful and timely technique to guide state-of-the-art experiments, and indeed,
work along these lines with experiments is already underway.
\section*{Acknowledgements}
We thank L. Gallmann for carefully reading the manuscript and providing valuable comments. We thank U. Keller for helpful discussions and insight into this problem.
We acknowledge financial support from the European Research Council (ERC-2015-AdG-694097), Grupos Consolidados (IT578-13) and the European Union’s Horizon 2020 Research and Innovation program under Grant Agreement no. 676580 (NOMAD). S. A. S. acknowledges support from the
Alexander von Humboldt Foundation.
\section{Introduction}
Accumulation of damage causes organismal aging \cite{Kirkwood:2005}. Even in model organisms, with controlled environment and genotype, there are large individual variations in lifespan and in the phenotypes of aging \cite{Herndon:2002, *Kirkwood:2002}. While many mechanisms cause specific cellular damage \cite{Lopez-Otin:2013}, no single factor fully controls the process of aging. This suggests that the aging process is stochastic and results from a variety of damage mechanisms.
The variability of individual damage accumulation results in differing trajectories of individual health and in differing individual life-spans, and is a fundamental aspect of individual aging. A simple method of quantifying this individual damage is the Frailty Index (FI) \cite{Mitnitski:2001, Searle:2008}. The FI is the proportion of age-related health issues (``deficits'') that a person has out of a collection of health attributes. The FI is used as a quantitative tool in understanding the health of individuals as they age. There have been hundreds of papers using an FI based on self-report or clinical data, both for humans \cite{Rockwood:2017} and for animals \cite{Kane:2017}. Individuals typically accumulate deficits as they age, and so the FI increases with age across a population. The FI captures the heterogeneity in individual health and is predictive of both mortality and other health outcomes \cite{Kulminski:2007, *Evans:2014, *Mitnitski:2016, *Kojima:2018}.
In previous work we developed a stochastic network model of aging with damage accumulation \cite{Taneja:2016, Farrell:2016}. Each individual is modeled as a network of interacting nodes that represent health attributes. Both the nodes and their connections are idealized and do not specify particular health aspects or mechanisms. Connections (links) between neighboring nodes in the network can be interpreted as influence between separate physiological systems. In our model, damage facilitates subsequent damage of connected nodes. We do not specify the biological mechanisms that cause damage, only that damage rates depend on the proportion of damaged neighbors. Damage promotes more damage and lack of damage facilitates repair. Rather than model the specific biological mechanisms of aging, we model how damage to components of generic physiological systems can accumulate and propagate throughout an organism --- ending with death.
Even though our model includes no explicit age-dependence in damage rates or mortality, it captures Gompertz's law of mortality \cite{Gompertz:1825, *Kirkwood:2015}, the average rate of FI accumulation \cite{Mitnitski:2001, Mitnitski:2013}, and the broadening of FI distributions with age \cite{Rockwood:2004, Gu:2009}. By including a false-negative attribution error (i.e. a finite sensitivity) \cite{Farrell:2016}, we can also explain an empirical maximum of observed FI values -- typically between $0.6 - 0.8$ \cite{Searle:2008, Gu:2009, Rockwood:2004, Mitnitski:2013, Hubbard:2015, Bennett:2013}. This shows that age-dependent ``programming'' of either mortality or damage rates are not necessary to explain these features \cite{Kirkwood:2005}.
We had chosen the Barab\'asi-Albert (BA) preferential attachment algorithm \cite{Barabasi:1999} to generate our scale-free network, both due to the simplicity of the BA algorithm and due to the numerous examples of these scale-free networks in biological systems \cite{Barabasi:2009}. While we had constrained the scale-free network parameters with the available phenomenology, we did not examine whether other common network structures could also recover the same phenomenology. More specifically, we did not identify which observable behavior sensitively depends on the network structure.
Ideally, we could directly reconstruct the network from available data. However, the direct assessment of node connectivity from observational data is a challenging and generally unsolved problem. Nevertheless, we show here that we can reliably reconstruct the relative connectivity (i.e. the rank-order) of high degree nodes in both model and in large-cohort observational data by measuring mutual dependence between pairs of nodes. This reconstruction allows us to qualitatively confirm the relationship between the connectivity of nodes and how informative they are about mortality \cite{Farrell:2016}. Specifically, we demonstrate that a network with a wide range of node connectivities (such as a scale-free network) is needed to describe the observational data.
Recently, the FI approach has been extended to laboratory \cite{Howlett:2014} and biomarker data \cite{Mitnitski:2015} and used in clinical \cite{Klausen:2017, *King:2017} and population settings \cite{Blodgett:2017}. Two different FIs have been constructed to measure different types of damage, $F_{\mathrm{clin}}$, with clinically evaluated or self-reported data, and $F_{\mathrm{lab}}$, with lab or biomarker data. Clinical deficits are typically based on disabilities, loss of function, or diagnosis of disease, and they measure clinically observable damage that typically occurs late in life. Lab deficits or biomarkers use the results of lab tests (e.g. blood tests or vital signs) that are binarized using standard reference ranges \cite{McPherson:2016}. Since frailty indices based on laboratory tests measure pre-clinical damage, they are distinct from those based on clinical and/or self-report data \cite{Howlett:2014, Blodgett:2017}.
Even though they measure very different types of damage, both FIs are similarly associated with mortality \cite{Blodgett:2016, Howlett:2014}. Earlier observational studies have found (average) $\langle F_{\mathrm{lab}} \rangle$ larger than $\langle F_{\mathrm{clin}} \rangle$ \cite{Blodgett:2016, Howlett:2014, Mitnitski:2015}. However, a study of older long-term care patients has found $\langle F_{\mathrm{lab}} \rangle$ less than $\langle F_{\mathrm{clin}} \rangle$ \cite{Rockwood:2015}. While differences between studies could be attributed to classification differences, a large single study including ages from 20-85 from the National Health and Nutrition Examination Survey (NHANES) \cite{Blodgett:2017} also found that $\langle F_{\mathrm{lab}} \rangle$ was higher than $\langle F_{\mathrm{clin}} \rangle$ at earlier ages, but below at later ages.
The observed age-dependent relationship (or ``age-structure'') between $F_{\mathrm{lab}}$ and $F_{\mathrm{clin}}$ challenges us to examine whether network properties can determine similar age-structure in model data. We aim to determine what qualitative network features are necessary to explain age-structure. Our working hypothesis is that low-degree nodes should correspond to $F_{\mathrm{lab}}$, just as high-degree nodes correspond to $F_{\mathrm{clin}}$ \cite{Taneja:2016, Farrell:2016}.
Complex networks have structural features beyond the degree distribution. For example, nearest-neighbor degree correlations describe how connections are made between specific nodes of different degree \cite{Barabasi:2016}. Accordingly, we consider networks with three types of degree correlations: assortative, disassortative, and neutral \cite{Barabasi:2016, Newman:2002}. Networks with assortative correlations tend to connect like-degree nodes, those with disassortative correlations tend to connect unlike-degrees, and those with neutral correlations are random. We probe and understand the internal structure of these networks by examining $F_{\mathrm{high}}$ and $F_{\mathrm{low}}$, i.e. damage to high degree nodes and damage to low degree nodes.
Since networks have many properties other than degree distribution and nearest-neighbor degree correlations, we have also constructed a mean-field theory that {\em only} has these properties. With it we can better connect specific network properties with qualitatively observed phenomenon, within the context of our network model.
We show how network properties of degree distribution and degree correlations are essential for our model to recover results from observational data. Doing so, we can explain how damage propagates through our network and what makes nodes informative of mortality. This allows us to understand the differences between $F_{\mathrm{low}}$ and $F_{\mathrm{high}}$, or between pre-clinical and clinical damage in observational health data.
\section{Methods}
\subsection{Stochastic model}
Our model was previously presented \cite{Farrell:2016}. Individuals are represented as a network consisting of $N$ nodes, where each node $i \in \{1, 2, \ldots, N \}$ can take on binary values $d_i = 0,1$ for healthy or damaged, respectively. Connections are undirected and all nodes are undamaged at time $t = 0$.
A stochastic process transitions between healthy and damaged ($d_i = 0,1$) states. Healthy nodes damage with rate $\Gamma_+ = \Gamma_0 \exp{(f_i \gamma_+)}$ and damaged nodes repair with rate $\Gamma_- = (\Gamma_0/R) \exp{(-f_i \gamma_-)}$. These rates depend on the local frailty $f_i = \sum_{j\in \mathcal{N}(i)} d_j/k_i$, which is the proportion of damaged neighbors of node $i$. This $f_i$ quantifies local damage within the network. Transitions between the damaged and healthy states of nodes are implemented exactly using a stochastic simulation algorithm \cite{Gillespie:1977, Gibson:2000}. For each step, the algorithm chooses a transition to perform from all of the possible transitions. The probability of choosing a particular transition is determined by its transition rate, and after the transition is performed time is incremented by sampling a time increment from an exponential waiting-time distribution with mean rate given by the transition rate. Individual mortality occurs when the two highest degree nodes are both damaged.
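For concreteness, a single step of such a stochastic simulation algorithm can be sketched as follows. This is a minimal illustration only, not our implementation; the parameter values are taken from the parameterization given below.

```python
import math
import random

def gillespie_step(d, nbrs, Gamma0=0.00183, R=3.0, gp=7.5, gm=6.5, rng=random):
    """One exact stochastic-simulation step for the binary damage model.

    d    : list of 0/1 node states (0 = healthy, 1 = damaged)
    nbrs : adjacency list; nbrs[i] is the list of neighbours of node i
    Returns (i, dt): the node that flipped and the sampled waiting time.
    """
    rates = []
    for i, di in enumerate(d):
        k = len(nbrs[i])
        f = sum(d[j] for j in nbrs[i]) / k           # local frailty f_i
        if di == 0:                                  # damage rate Gamma_+
            rates.append(Gamma0 * math.exp(gp * f))
        else:                                        # repair rate Gamma_-
            rates.append((Gamma0 / R) * math.exp(-gm * f))
    total = sum(rates)
    dt = -math.log(rng.random()) / total             # exponential waiting time
    r, acc = rng.random() * total, 0.0
    for i, rate in enumerate(rates):                 # pick transition prop. to its rate
        acc += rate
        if acc >= r:
            break
    d[i] = 1 - d[i]                                  # flip the chosen node
    return i, dt
```

Repeating such steps until the two mortality nodes are simultaneously damaged generates one individual's damage trajectory and death age.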
We generate our default network ``topology'' using a linearly-shifted preferential attachment algorithm \cite{Krapivsky:2001,Fotouhi:2013}, which is a generalization of the original Barab\'asi-Albert algorithm \cite{Barabasi:1999}. This generates a scale-free network $P(k) \sim k^{-\alpha}$, where the exponent $\alpha$ and average degree $\langle k \rangle$ can be tuned. (The minimum degree varies as $k_{\mathrm{min}} = \langle k \rangle/2$.) This network is highly heterogeneous in both degree $k_i$ and nearest-neighbor degree (nn-degree) $k_{i,\mathrm{nn}} = \sum_{j\in \mathcal{N}(i)} k_j/k_i$.
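A linearly-shifted preferential-attachment generator can be sketched as below (an illustrative implementation, not the code used for our results). For the attachment kernel $k + A$ the degree exponent in the large-$N$ limit is $\alpha = 3 + A/m$, so a negative shift $A$ yields $\alpha < 3$; $A = 0$ recovers the original Barab\'asi-Albert algorithm.

```python
import random

def shifted_pa_network(N, m=2, A=0.0, rng=random):
    """Linearly-shifted preferential attachment: each new node attaches to m
    distinct existing nodes with probability proportional to (k_i + A).
    Requires A > -m so that all attachment weights stay positive.
    Returns an adjacency dict of sets (undirected, no multi-edges)."""
    nbrs = {i: set() for i in range(N)}
    for i in range(m + 1):                       # seed: complete graph on m+1 nodes
        for j in range(i + 1, m + 1):
            nbrs[i].add(j)
            nbrs[j].add(i)
    for new in range(m + 1, N):
        existing = list(range(new))
        weights = [len(nbrs[i]) + A for i in existing]
        targets = set()
        while len(targets) < m:                  # resample to forbid multi-edges
            targets.add(rng.choices(existing, weights=weights)[0])
        for t in targets:
            nbrs[new].add(t)
            nbrs[t].add(new)
    return nbrs
```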
Since we are concerned with the properties of individual nodes and groups of nodes, we use the same randomly generated network for all individuals. As a result, connections between any two nodes are the same for every individual. To ensure that our randomly generated network is generic, we then redo all of our analysis for $10$ different randomly generated networks. All of these networks behave qualitatively the same, and so we present results averaged over them. Previously \cite{Farrell:2016}, we generated a distinct network realization for each individual.
We have used observational data for mortality rate and FI vs age to fine-tune the network parameters \cite{Taneja:2016, Farrell:2016}. A systematic exploration of parameters was done in previous work \cite{Taneja:2016, Farrell:2016}. Most of our parameterization ($N = 10000$, $\alpha=2.27$, $\langle k \rangle=4$, $\gamma_-=6.5$) is the same as reported previously \cite{Farrell:2016}. However, three parameters ($\Gamma_0 = 0.00183/\mathrm{yr}$, $\gamma_+=7.5$, $R = 3$) have been adjusted because we now disallow multiple connections between pairs of nodes during our network generation. This simplifies analysis and adjustment of the network topology, but would also affect mortality rates (see e.g. Fig.~\ref{Mortality Rates} below) without the parameter adjustment. Other network topologies, see Sect.~\ref{network structure}, also use this ``default'' parameterization unless otherwise noted.
Typically, binary deficits have a finite sensitivity \cite{Clegg:2016}, while our model gives us exact knowledge of when a node damages. We have modeled this finite sensitivity by applying non-zero false-negative attribution errors to our raw model FI \cite{Farrell:2016}. This has no effect on the dynamics or on mortality, but does affect the FI. For any raw FI $f_0= \sum_i d_i/n$ from $n$ nodes, there are $n_0 = f_0 n$ damaged nodes. With a false-negative rate of $q$, $n_q$ of these are overturned, where $n_q$ is individually-sampled from a binomial distribution $p(n_q; n_0, 1 - q) = \binom{n_0}{n_q}(1-q)^{n_q} q^{n_0-n_q}$. We use $f=n_q/n$ as the corrected individual FI. Since our model $f_0$ tends to reach the arithmetic maximum of $1$ at old ages, this effectively gives a maximum observed FI of $\langle f_{\mathrm{max}} \rangle = 1 - q$ \cite{Farrell:2016}. We use $q=0.4$ throughout.
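The false-negative correction amounts to binomial thinning of the damaged-node count; a minimal sketch:

```python
import numpy as np

def corrected_fi(d, q=0.4, rng=None):
    """Apply a false-negative rate q to a vector of binary deficits d.

    Each truly damaged node is *observed* as damaged with probability 1 - q,
    so n_q ~ Binomial(n_0, 1 - q) of the n_0 damaged nodes are counted.
    Returns the corrected FI f = n_q / n.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(d)
    n0 = int(np.sum(d))                  # truly damaged nodes
    nq = rng.binomial(n0, 1.0 - q)       # observed damaged nodes
    return nq / n
```

Since the raw FI saturates at $1$, the corrected FI saturates on average at $1 - q$, reproducing the empirical maximum.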
\subsection{Observational Data analysis}
Observational data is typically ``censored'', meaning that the study ended or an individual dropped out before their death occurred, leaving no specific death age. To avoid this problem, we use a binary mortality outcome e.g. $M = 0$ if an individual is alive within $5$ years of follow-up, or $M = 1$ otherwise. We use $5$ year outcomes throughout for observational data unless otherwise specified. We adapt this approach in our analysis of mutual information \cite{Cover, Shannon:1948}. Our entropy calculations will use binary entropy, $S(M|t) = -p(0|t) \log{p(0|t)} - p(1|t)\log{p(1|t)}$, which we use to calculate information $I(M;D_i|t) = S(M|t) - S(M|D_i,t)$. See also Blokh and Stambler \cite{Blokh:2017}, for other varieties of information analysis for observational data.
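A minimal sketch of the plug-in estimators for the binary entropy and the mutual information $I(M;D) = S(M) - S(M|D)$ used here (empirical frequencies; illustrative only):

```python
import math

def binary_entropy(p):
    """Binary entropy in bits, with S(0) = S(1) = 0 by convention."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mutual_information(M, D):
    """I(M;D) = S(M) - S(M|D) for two equal-length 0/1 sequences."""
    n = len(M)
    S_M = binary_entropy(sum(M) / n)
    S_cond = 0.0
    for dval in (0, 1):
        sub = [m for m, d in zip(M, D) if d == dval]
        if sub:                                    # weight by p(D = dval)
            S_cond += (len(sub) / n) * binary_entropy(sum(sub) / len(sub))
    return S_M - S_cond
```

A perfectly predictive deficit carries the full outcome entropy $S(M)$, while an independent one carries (up to sampling noise) zero information.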
We compare our information theory results to a more standard survival analysis with hazard ratios \cite{Spruance:2004}. The hazard ratio is the ratio of instantaneous event rates for two values of an explanatory variable --- e.g. with/without a deficit. A larger hazard ratio means a lower likelihood of surviving with the deficit than without. Hazard ratios are ``semi-parametric'', since they extract the effects of variables on mortality rate from a phenomenological mortality model. We use the Cox proportional hazards model \cite{Cox:1972}, which assumes exponential dependence of mortality rates. We show below that these survival analysis techniques are consistent with our non-parametric mutual information measures.
\subsection{High-$k$ network reconstruction} \label{Reconstruction}
To reconstruct network connections from observed states of nodes, we use the state of each deficit (node) at a given age $t$ (or narrow range of ages in observational data) for each individual in the sample, and calculate the mutual information between individual deficits, $I(D_i;D_j|t)$ \cite{Butte:2000, Margolin:2006}. Connections in the model create correlations between nodes, so a large $I(D_i;D_j|t)$ could indicate a connection. We use data where individuals are the same age (or $\pm$ 5 years in observational data), so that time is not a confounding variable. Nevertheless, determining whether a given connection exists or not requires a threshold on $I(D_i;D_j|t)$. If we took this route, we would only assign a connection between nodes if the mutual information is above this threshold. However, we have no practical way of determining such a threshold, though attempts have been made in the past \cite{Mitnitski:2002}.
In preliminary tests with our model we have found that matching the reconstructed average degree with the exact average degree is a reliable way of determining a threshold (data not shown), but we still have no way of determining the average degree from observational data. Instead, we use a simple parameter-free method adapted from work on gene co-expression networks \cite{Zhang:2005}. We construct weighted networks, with the mutual information between pairs of nodes as the strength or weight of the connections. We then calculate a ``reconstructed'' degree by adding the information for each possible connection to the node in the network, $\hat{k}_i \equiv \sum_{j\ne i} I(D_i;D_j|t)$ ~\cite{Barrat:2004}. For nodes that aren't connected, $I(D_i;D_j|t) \approx 0$, while $I(D_i;D_j|t)$ is expected to be large for connected nodes. While we cannot reconstruct the actual network, we can reconstruct the rank-order degree of high-$k$ nodes -- since we find that $\hat{k}$ is roughly monotonic with the actual degree $k$ for high-$k$ nodes.
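The reconstruction thus reduces to computing the pairwise mutual-information matrix and summing its rows. An illustrative sketch (plug-in estimates from a binary individuals-by-deficits matrix):

```python
import numpy as np

def pairwise_mi(D):
    """Mutual information I(D_i;D_j) in bits for each pair of columns of a
    binary (individuals x deficits) matrix D; diagonal is left at zero."""
    n, m = D.shape
    I = np.zeros((m, m))
    for i in range(m):
        for j in range(i + 1, m):
            mi = 0.0
            for a in (0, 1):
                for b in (0, 1):
                    pab = np.mean((D[:, i] == a) & (D[:, j] == b))
                    pa = np.mean(D[:, i] == a)
                    pb = np.mean(D[:, j] == b)
                    if pab > 0:
                        mi += pab * np.log2(pab / (pa * pb))
            I[i, j] = I[j, i] = mi
    return I

def reconstructed_degree(D):
    """k_hat_i = sum_{j != i} I(D_i;D_j)."""
    return pairwise_mi(D).sum(axis=1)
</imports-placeholder>```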
\subsection{Mean-field theory of network dynamics}
\label{MFT}
Here, we present a mean-field theory of our network model to understand the mechanisms underlying our model results. Our mean-field theory (MFT) is based on work on epidemic processes in complex networks by Pastor-Satorras \emph{et al}. \cite{Pastor-Satorras:2015} together with ideas from Gleeson \cite{Gleeson:2011} that we use to include mortality dynamics.
By MFT we mean a set of deterministic dynamical equations for damage probabilities of network nodes, including mortality nodes. Here, we retain the full degree distribution $P(k)$ and degree correlations $P(k'|k)$ of our stochastic network model. This allows us to identify what model behavior is controlled by the degree distribution and degree correlations. (A simpler MFT, with all nodes having the same degree, has been published \cite{Farrell:2016}.) With a degree distribution we then solve (see below) thousands of coupled ordinary differential equations (ODEs) with standard numerical integrators.
We average the damaged probabilities $p(d_i=1,t)$ and the undamaged probabilities $p(d_i=0,t)$, conditioned on the damage of the mortality nodes, over all nodes of the same degree $k$:
\begin{gather*}
p_{k,d_{m_1},d_{m_2}}(t) \equiv \sum_{\mathrm{deg}(i) = k} p(d_i = 1, d_{m_1},d_{m_2},t)/(NP(k)),\\
q_{k,d_{m_1},d_{m_2}}(t) \equiv \sum_{\mathrm{deg}(i) = k} p(d_i = 0, d_{m_1},d_{m_2},t)/(NP(k)),
\end{gather*}
where the mortality states are indicated by $d_{m_1},d_{m_2} \in \{0,1\}$, $N$ is the number of nodes, and $P(k)$ is the degree distribution. The resulting joint probabilities are $p_{k,d_{m_1},d_{m_2}}$ and $q_{k,d_{m_1},d_{m_2}}$, for damaged and undamaged nodes respectively. These joint probabilities satisfy
\begin{eqnarray}
\sum_{d_{m_1},d_{m_2}} && (p_{k,d_{m_1},d_{m_2}} + q_{k,d_{m_1},d_{m_2}}) = 1, \\
p_{d_{m_1},d_{m_2}} &&= p_{k,d_{m_1},d_{m_2}} + q_{k,d_{m_1},d_{m_2}},\ \textrm{and}\\
p_{k|d_{m_1},d_{m_2}} &&= p_{k,d_{m_1},d_{m_2}}/p_{d_{m_1},d_{m_2}},
\end{eqnarray}
where the first equation is a normalization condition, the second completeness, and the third Bayes' theorem for conditional probabilities. From our mortality rule of $d_{m_1},d_{m_2} = 1$, the probability of mortality is $p_{\mathrm{dead}} = p_{k,1,1} + q_{k,1,1}$, for any $k$.
The probability of a neighbor of a node of degree $k$ being damaged (which is its local frailty $f$) given a particular mortality state is
\begin{equation}
f_{k|d_{m_1},d_{m_2}}(t) = \sum_{k'} P(k'|k) p_{k'|d_{m_1},d_{m_2}},
\label{local damage}
\end{equation}
where $P(k'|k)$ is the conditional degree distribution, or ``nearest-neighbor'' degree distribution. $P(k'|k)$ describes the structure of connections in the network, and can be varied independently of the degree distribution $P(k)$.
Writing exact master equations for $N$ nodes is impractical since there would be $2^N$ distinct states to track, with even more distinct transition rates. As an enormous simplification, we use averaged damage and repair rates of nodes of a given connectivity $k$. This is our key mean-field simplification. To do this we approximate $\langle d_id_j\rangle = \langle d_i\rangle\langle d_j\rangle$ for all nodes, and approximate the number of damaged neighbors by a binomial distribution $n_d \sim B(n_d;f_{k| d_{m_1},d_{m_2}}, k) = \binom{k}{n_d}f_{k| d_{m_1},d_{m_2}}^{n_d} (1 - f_{k| d_{m_1},d_{m_2}})^{k - n_d}$ where the average proportion of damaged neighbors will be $f_{k| d_{m_1},d_{m_2}}$. Using Eq.~\ref{local damage}, we can then calculate our MFT damage and repair rates,
\begin{gather} \nonumber
\langle \Gamma_{\pm}(f_{k|d_{m_1},d_{m_2}})\rangle = \Gamma_{0,\pm} \Big\langle \exp{\big(\pm\gamma_{\pm}n_d/k\big)}\Big\rangle\\
=\Gamma_{0,\pm} \Big(f_{k|d_{m_1},d_{m_2}}e^{\pm\gamma_{\pm}/k} + 1 - f_{k|d_{m_1},d_{m_2}}\Big)^k. \label{average rates}
\end{gather}
The node degree is explicit in Eq.~\ref{average rates}, while the degree correlation is included through the average local damage in Eq.~\ref{local damage}.
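Eq.~\ref{average rates} is the binomial moment generating function evaluated at $\pm\gamma_\pm/k$. A small numerical check, using arbitrary illustrative parameter values:

```python
import math

def avg_rate_sum(Gamma0, gamma, f, k, sign=+1):
    """Direct average of Gamma0*exp(sign*gamma*n_d/k) over n_d ~ Binomial(k, f)."""
    return Gamma0 * sum(
        math.comb(k, n) * f**n * (1 - f)**(k - n) * math.exp(sign * gamma * n / k)
        for n in range(k + 1)
    )

def avg_rate_closed(Gamma0, gamma, f, k, sign=+1):
    """Closed form of the averaged rate: Gamma0*(f*e^{sign*gamma/k} + 1 - f)^k."""
    return Gamma0 * (f * math.exp(sign * gamma / k) + 1 - f) ** k
```

The two expressions agree to machine precision for either sign, confirming the identity.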
Using these averaged damage/repair rates as transition probabilities, we can write a master equation for nodes with connectivity $k = k_{\mathrm{min}},...,k_{m_2} - 1$ and given the global state of the mortality nodes:
\begin{widetext}
\begin{eqnarray}
\dot{p}_{k,0,0}(t) &=& q_{k,0,0}\langle\Gamma_+(f_k)\rangle - p_{k,0,0}\Big[\langle\Gamma_+(f_{m_1})\rangle + \langle\Gamma_+(f_{m_2})\rangle \Big]
- p_{k,0,0}\langle\Gamma_-(f_k)\rangle + p_{k,1,0}\langle\Gamma_-(f_{m_1})\rangle + p_{k,0,1}\langle\Gamma_-(f_{m_2})\rangle \nonumber \\
\dot{q}_{k,0,0}(t) &=& -q_{k,0,0}\langle \Gamma_+(f_k)\rangle - q_{k,0,0} \Big[\langle\Gamma_+(f_{m_1})\rangle + \langle\Gamma_+(f_{m_2})\rangle \Big]
+ p_{k,0,0}\langle\Gamma_-(f_k)\rangle + q_{k,1,0}\langle\Gamma_-(f_{m_1})\rangle + q_{k,0,1}\langle\Gamma_-(f_{m_2})\rangle \nonumber \\
\dot{p}_{k,1,0}(t) &=& q_{k,1,0}\langle\Gamma_+(f_k)\rangle - p_{k,1,0}\langle\Gamma_+(f_{m_2})\rangle + p_{k,0,0}\langle \Gamma_+(f_{m_1})\rangle
- p_{k,1,0}\langle\Gamma_-(f_k)\rangle - p_{k,1,0}\langle\Gamma_-(f_{m_1})\rangle \nonumber \\
\dot{q}_{k,1,0}(t) &=& -q_{k,1,0}\langle\Gamma_+(f_k)\rangle - q_{k,1,0}\langle\Gamma_+(f_{m_2})\rangle+ q_{k,0,0}\langle \Gamma_+(f_{m_1})\rangle
+p_{k,1,0}\langle\Gamma_-(f_k)\rangle - q_{k,1,0}\langle\Gamma_-(f_{m_1})\rangle \nonumber \\
\dot{p}_{k,0,1}(t) &=& q_{k,0,1}\langle\Gamma_+(f_k)\rangle - p_{k,0,1}\langle\Gamma_+(f_{m_1})\rangle + p_{k,0,0}\langle \Gamma_+(f_{m_2})\rangle
- p_{k,0,1}\langle\Gamma_-(f_k)\rangle - p_{k,0,1}\langle\Gamma_-(f_{m_2})\rangle \nonumber \\
\dot{q}_{k,0,1}(t) &=& -q_{k,0,1}\langle\Gamma_+(f_k)\rangle - q_{k,0,1}\langle\Gamma_+(f_{m_1})\rangle+ q_{k,0,0}\langle \Gamma_+(f_{m_2})\rangle
+p_{k,0,1}\langle\Gamma_-(f_k)\rangle - q_{k,0,1}\langle\Gamma_-(f_{m_2})\rangle \nonumber \\
\dot{p}_{k,1,1}(t) &=& p_{k,1,0}\langle\Gamma_+(f_{m_2})\rangle+ p_{k,0,1}\langle \Gamma_+(f_{m_1})\rangle \nonumber \\
\dot{q}_{k,1,1}(t) &=& q_{k,1,0}\langle\Gamma_+(f_{m_2})\rangle+ q_{k,0,1}\langle \Gamma_+(f_{m_1})\rangle.
\label{MFTequations}
\end{eqnarray}
\end{widetext}
In these equations we have not shown the mortality state indices of $f_k$ for readability, but they are the same as the associated $p$ or $q$ factors. We have also defined $f_{m_1}$ and $f_{m_2}$ as the local frailties of the first and second mortality node, respectively. We have $8$ equations for each distinct degree $k$. The last two equations determine the mortality rate, $ \dot{p}_{k,1,1}+\dot{q}_{k,1,1}$.
The mean-field model couples the dynamics of the lowest degree ($k=2$) with all degrees up to the two highest (mortality nodes). Solving the equations requires us to explicitly determine the two mortality node degrees. While approximate calculations of the maximum degree of scale-free networks are available \cite{Dorogovtsev:2002}, we need the two highest degrees. We use $k_{m_1} = 885$ and $k_{m_2} = 768$, based on the averages from simulations of the network. Similarly, we use $k_{m_1} = 14$ and $k_{m_2} = 13$ for ER random networks and $k_{m_1} = 7$ and $k_{m_2} = 6$ for WS small-world networks. Our qualitative MFT results do not depend on these mortality node degrees, as long as they are sufficiently large. The minimum degree $k_{\mathrm{min}}$ is determined by the network topology.
Our default model uses a linearly-shifted preferential-attachment model, which has explicit functional forms for the degree distribution $P(k)$ and the nearest-neighbor degree distribution $P(k'|k)$ as $N \to \infty$ \cite{Fotouhi:2013}.
We numerically solve Eq.~\ref{MFTequations} for the probabilities $p_{k,d_{m_1},d_{m_2}}(t)$ and $q_{k,d_{m_1},d_{m_2}}(t)$. These then allow us to calculate the average FI,
\begin{eqnarray} \label{MFT FI}
\langle F(t)\rangle &=&
\frac{\sum\limits_{k = k_{\mathrm{low}}}^{k_{\mathrm{high}}} P(k) p_{k|\mathrm{alive}}}
{\sum\limits_{k = k_{\mathrm{low}}}^{k_{\mathrm{high}}} P(k)},\\
p_{k|\mathrm{alive}} &\equiv&
\frac{p_{k,0,0} + p_{k,0,1} + p_{k,1,0}}
{p_{k,0,0} + p_{k,0,1} + p_{k,1,0} + q_{k,0,0} + q_{k,0,1} + q_{k,1,0}},
\nonumber
\end{eqnarray}
so that the average is over the surviving individuals. Our averaged damage-rates overestimate the true values, so for the same parameterization mortality occurs on a shorter timescale in the MFT. This is because rapidly damaging nodes drop out of the full model once they are damaged, but continue to contribute to the average damage rates in the mean-field model through Eq.~\ref{average rates}. Because of this, when plotting MFT results we scale time by $t_{\mathrm{scale}}$, the time at which every node is damaged ($p_{k} = 1$).
\section{Results}
We will focus on measures that can be compared between model and observational data, or that provide insight into the network structure of organismal aging. We start with observational data, to expand the observed aging phenomenology. Then we explore how our network model behaves, with a focus on how network properties determine the qualitative behavior of the model.
\subsection{Observational Data}
Dauntingly, we have three challenges for assessing network properties from observational data: human studies are small (typically with $\lesssim 10^4$ individuals) so that results will be noisy, different studies will have quantitative differences due to cohort differences and choices of measured health attributes, and we have no robust way of reconstructing networks from observed deficits so that the absolute connectivity of health-attributes is unknown. We face these challenges by focusing on qualitatively robust behavior from larger observational studies; this will also help us to confront our results with the behavior of our generic network model.
From the American National Health and Nutrition Examination Survey (NHANES, see \cite{NHANES:2014}), the 2003-2004 and 2005-2006 cohorts were combined, with up to $5$ years of mortality reporting (one measurement of age and FI with either age of death or last age known to be still alive). Laboratory data were available for $9052$ individuals and clinical data for $10004$ individuals, aged $20+$ years. Thresholds used to binarize lab deficits are found in \cite{Blodgett:2017}. From the Canadian Study of Health and Aging (CSHA, see \cite{CSHA:1994}), $5$ year mortality reporting was obtained from 1996/1997. Laboratory data were available for $1013$ individuals and clinical data for $8547$, aged $65+$ years. Thresholds used to binarize lab deficits are found in \cite{Howlett:2014}. By analyzing both the NHANES and CSHA studies with the same methods, we can identify qualitatively robust features of both.
\begin{figure}
\begin{minipage}[htb]{0.45\textwidth}
\includegraphics[trim=3mm 5mm 5mm 3mm, clip,width=\textwidth]{NHANES_FI_VsAge_CSHA_Inset.pdf}
\caption{Average FI vs age $t$ with $\langle F_{\mathrm{lab}} \rangle$ (red squares) and $\langle F_{\mathrm{clin}} \rangle$ (blue circles) from the NHANES dataset (main figure). The inset shows the same plot for the CSHA dataset. Error bars show the standard error of the mean. All individuals used in this plot have both $F_{\mathrm{clin}}$ and $F_{\mathrm{lab}}$ measured.}
\label{Observational Time Structure}
\end{minipage}
\end{figure}
Fig.~\ref{Observational Time Structure} shows the average FI vs age for $F_{\mathrm{lab}}$ in red and $F_{\mathrm{clin}}$ in blue for the NHANES in the main plot and CSHA in the inset. In both studies lab deficits accumulate earlier than clinical deficits. A crossover appears in the NHANES data around age $55$ after which clinical deficits are more damaged than lab deficits. A similar crossover does not appear to happen in the CSHA data.
\begin{figure}
\begin{minipage}[htb]{0.45\textwidth}
\includegraphics[trim=6mm 10mm 5mm 3mm, clip,width=\textwidth]{NHANES_Information_Fingerprint_FULL_LAB.pdf}
\caption{Rank-ordered deficits in terms of information $I(M;D_i|t\in[75,85])$ for the NHANES dataset. Red points are lab deficits, blue points are clinical deficits. Error bars are standard errors found from bootstrap re-sampling. Small numbers next to the points indicate the number of individuals that were available in the data for the corresponding deficit. Insets show the corresponding hazard ratios for the deficits found from a Cox proportional hazards model regression, with the deficit and age used as covariates. The error bars show standard errors, and the line shows a linear regression through these points with the standard error in slope and intercept shown in a lighter color.}
\label{NHANES Information Fingerprint}
\end{minipage}
\end{figure}
\begin{figure}
\begin{minipage}[htb]{0.45\textwidth}
\includegraphics[trim=10mm 10mm 5mm 3mm, clip,width=\textwidth]{CSHA_Information_Fingerprint.pdf}
\caption{Rank-ordered deficits in terms of information $I(M;D_i|t\in[75,85])$ for the CSHA dataset. Red points are lab deficits, blue points are clinical deficits. Error bars are standard errors found from bootstrap re-sampling. Small numbers next to the points indicate the number of individuals that were available in the data for the corresponding deficit. Insets show the corresponding hazard ratios for the deficits found from a Cox proportional hazards model regression, with the deficit and age used as covariates. The error bars show standard errors, and the line shows a linear regression through these points with the standard error in slope and intercept shown in a lighter color.}
\label{CSHA Information Fingerprint}
\end{minipage}
\end{figure}
Figs.~\ref{NHANES Information Fingerprint} and \ref{CSHA Information Fingerprint} show deficits rank-ordered in information $I(M;D_i|t)$ for the NHANES and CSHA studies, respectively. These are information ``fingerprints''. Red points correspond to lab deficits and blue to clinical deficits, as indicated. Both types of deficits have similar magnitudes of information, although clinical deficits are typically more informative. The comparable magnitudes of mutual information for the majority of individual deficits between lab and clinical FIs are consistent with earlier analysis that found similar associations of lab and clinical FIs with mortality using survival analysis \cite{Howlett:2014, Blodgett:2017, Blodgett:2016}.
Insets in Figs.~\ref{NHANES Information Fingerprint} and \ref{CSHA Information Fingerprint} show the corresponding hazard ratio (HR) for each deficit, found from a Cox proportional hazards model regression with the deficit value and age used as covariates. This semi-parametric analysis is often done with medical data \cite{Jones:2005}. The HR tends to increase as the rank-ordered information increases, indicating that our mutual-information approach captures similar effects. Nevertheless, we prefer mutual information because it is non-parametric (not model-dependent) and so relies on fewer assumptions.
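The mutual-information estimates above can be illustrated with a simple plug-in (empirical frequency) estimator for binary deficits and binary mortality outcomes. This sketch is not the analysis code used for the figures, and the sample data are hypothetical:

```python
from collections import Counter
from math import log2

def mutual_information(deficits, outcomes):
    """Plug-in estimate of I(M;D) in bits from paired binary samples."""
    n = len(deficits)
    joint = Counter(zip(deficits, outcomes))
    marg_d = Counter(deficits)
    marg_m = Counter(outcomes)
    info = 0.0
    for (d, m), c in joint.items():
        p_dm = c / n
        # p(d,m) * log2[ p(d,m) / (p(d) p(m)) ]
        info += p_dm * log2(p_dm * n * n / (marg_d[d] * marg_m[m]))
    return info

# A deficit perfectly predictive of mortality carries 1 bit:
print(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]))  # -> 1.0
# An independent deficit carries no information:
print(mutual_information([0, 1, 0, 1], [0, 0, 1, 1]))  # -> 0.0
```

Conditioning on an age window, as in $I(M;D_i|t\in[75,85])$, amounts to restricting the paired samples to individuals within that window before applying the estimator.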
Our deficit-level analysis highlights the great variability of mutual information (and HRs) between individual deficits. We have shown that lab and clinical deficits have a range of mutual information. We further note that the top 5--7 most informative clinical deficits in both the NHANES and CSHA datasets measure functional disabilities or dysfunction \cite{Rockwood:2017b}. We find that these high-level deficits are the most informative of mortality, and more informative than any of the lab deficits. From this, we hypothesize that highly informative clinical deficits will also be highly connected.
\begin{figure}
\begin{minipage}[thb]{0.45\textwidth}
\includegraphics[trim=5mm 6mm 3mm 3mm, clip,width=\textwidth]{Reconstruction_Validation_Degree_32.pdf}
\caption{Information $I(A;D_i|t=80)$ vs rank-ordered deficits using reconstructed degree $\hat{k}_i$, for our computational model. The top $32$ most connected nodes are reconstructed with $10000$ individuals. The smaller (blue circles) points show individual nodes, the larger points show a binned average, and error bars are the standard error of the mean within each bin. The inset (black squares) shows the exact degree $k$ vs the reconstructed degree $\hat{k}$.}
\label{Reconstruction Validation}
\end{minipage}
\end{figure}
\begin{figure}
\begin{minipage}[thb]{0.45\textwidth}
\includegraphics[trim=10mm 10mm 5mm 5mm, clip,width=\textwidth]{Reconstructed_Fingerprint.pdf}
\caption{Rank-ordered clinical deficits in terms of reconstructed degree $\hat{k}$ vs information with respect to mortality $I(M;D_i|t\in[75,85])$ for the NHANES and CSHA datasets. The reconstruction algorithm is detailed in Sec. \ref{Reconstruction}. Error bars are standard errors found from bootstrap re-sampling. }
\label{Reconstructed Clinical Fingerprint}
\end{minipage}
\end{figure}
We have been able to partially reconstruct the network structure of clinical measures, as detailed in Sec. \ref{Reconstruction}. In Fig.~\ref{Reconstruction Validation}, we have validated this approach with the top $32$ most-connected model nodes. We use $10000$ individuals for our validation, approximately the same number of people available in the observational studies. We know that our model information tends to increase with degree for the high-degree nodes (see \cite{Farrell:2016}, and also Fig.~\ref{Model Spectra} below). Fig.~\ref{Reconstruction Validation} shows that information also increases with the reconstructed degree $\hat{k}$, as expected for a good reconstruction. The inset of $k$ vs $\hat{k}$ confirms that the reconstructed degree is approximately monotonic in the exact degree --- especially at higher $k$.
This means the reconstructed degree should provide a reasonable rank-order in connectivity for observational data. Nevertheless, low-degree nodes are not reliably rank-ordered. Accordingly we only attempt to reconstruct clinical $\hat{k}$ with this approach.
In Fig.~\ref{Reconstructed Clinical Fingerprint}, we plot information with respect to mortality $I(M;D_i|t\in[75,85])$ for each deficit, where deficits are rank-ordered in terms of reconstructed degree $\hat{k}$. Information increases with reconstructed degree for both the NHANES and CSHA clinical data. This shows that high-information deficits correspond to high connectivity in the observational data. Furthermore, nearly all of the functional disabilities intuitively hypothesized to have high connectivity are indeed found to have a large reconstructed degree.
\subsection{Model Age-structure}
We saw, in Fig.~\ref{Observational Time Structure}, that pre-clinical (lab) damage accumulates before clinical damage in observational data. This is a qualitatively robust observation, seen in both NHANES and CSHA observational data. We also observed, in Fig.~\ref{Reconstructed Clinical Fingerprint}, that (in terms of rank order) highly connected clinical deficits were more informative than less connected deficits. We expect that health attributes assessed by laboratory tests are less connected than the high-level functional attributes assessed clinically. We hypothesize that $F_{\mathrm{lab}}$ and $F_{\mathrm{clin}}$ should behave \textit{qualitatively} like collections of low- or high-degree nodes, respectively, within our network model of aging.
We construct two distinct FIs to capture the difference between well-connected hub nodes and poorly connected peripheral nodes. We measure low-degree damage by constructing $F_{\mathrm{low}}=\sum_i d_i/n$ from a random selection of $n=32$ nodes all with $k = k_{\mathrm{min}} = 2$. Similarly, we measure high-degree damage with $F_{\mathrm{high}}$ from the top $32$ most connected nodes (excluding the two most connected nodes, which are the mortality nodes).
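The construction of these two FIs can be sketched as follows. The degrees and binary damage states below are hypothetical, only $4$ nodes per FI are used instead of $32$ for brevity, and in the model the two mortality nodes are excluded from the high-degree selection as noted above:

```python
def frailty_index(damage, nodes):
    """FI = fraction of damaged deficits among the selected nodes."""
    return sum(damage[i] for i in nodes) / len(nodes)

# Hypothetical node degrees k_i and binary damage states d_i:
degree = {0: 2, 1: 2, 2: 2, 3: 2, 4: 9, 5: 8, 6: 7, 7: 6}
damage = {0: 1, 1: 1, 2: 0, 3: 0, 4: 0, 5: 1, 6: 0, 7: 0}

by_degree = sorted(degree, key=degree.get)
f_low  = frailty_index(damage, by_degree[:4])   # least connected nodes
f_high = frailty_index(damage, by_degree[-4:])  # most connected nodes
print(f_low, f_high)  # -> 0.5 0.25
```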
\begin{figure}
\begin{minipage}[thb]{0.45\textwidth}
\includegraphics[trim=5mm 5mm 2mm 2mm, clip,width=\textwidth]{Model_TimeStructure.pdf}
\caption{Average degree of damaged model deficits $\langle k_{\mathrm{dam}}(t) \rangle$, scaled by the average degree of the network $\langle k \rangle$, vs time $t$. Error bars, barely visible at low $t$, represent the standard deviation between randomly generated networks. As indicated, at earlier times low-connectivity nodes are preferentially damaged while at later times higher connectivity nodes are preferentially damaged. The inset shows the average damage of low-connectivity nodes $\langle F_{\mathrm{low}}\rangle$ (red squares) and of high-connectivity nodes $\langle F_{\mathrm{high}}\rangle$ (blue circles) vs age.}
\label{Model Time Structure}
\end{minipage}
\end{figure}
Fig.~\ref{Model Time Structure} shows the cumulative average degree of damaged nodes $\langle k_{\mathrm{dam}} \rangle = \langle \sum_{i = 0}^N k_i d_i / \sum_{i = 0}^N d_i \rangle$ vs age $t$. Error bars represent the standard deviation between 10 different randomly generated networks. They are each comparable to or smaller than the point size, indicating that the age-structure represents the network topology rather than a single network realization.
For a uniform network, or for damage rates independent of the degree of a node, we would expect $\langle k_{\mathrm{dam}} \rangle = \langle k \rangle$ for all ages $t$. However, we see that the average degree of damaged deficits starts at $\langle k \rangle$, decreases until around age $25$, and then increases back towards $\langle k \rangle$ --- implying that damage does not propagate uniformly through the network.
Initially damage is purely random, so $\langle k_{\mathrm{dam}}(0) \rangle = \langle k \rangle$. Nodes with degree $k_i < \langle k \rangle$ are being damaged when $\langle k_{\mathrm{dam}} \rangle/\langle k \rangle$ decreases from $1$, and nodes of degree $k_i > \langle k \rangle$ are being damaged when $\langle k_{\mathrm{dam}} \rangle/\langle k \rangle$ increases towards $1$.
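The quantity $\langle k_{\mathrm{dam}} \rangle$ can be computed directly from the binary damage vector; the degree sequence and damage states below are hypothetical:

```python
def mean_damaged_degree(degrees, damaged):
    """<k_dam> = sum_i k_i d_i / sum_i d_i, with d_i in {0, 1}."""
    total = sum(damaged)
    return sum(k * d for k, d in zip(degrees, damaged)) / total

degrees = [2, 2, 3, 5, 9]  # <k> = 4.2
# Early on, only a low-degree node is damaged, so <k_dam> < <k>:
print(mean_damaged_degree(degrees, [1, 0, 0, 0, 0]))  # -> 2.0
# Later, the hubs are damaged too, pulling <k_dam> back up:
print(mean_damaged_degree(degrees, [1, 0, 0, 1, 1]))  # -> 5.33...
```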
The inset of Fig.~\ref{Model Time Structure} shows the average FI vs age for $F_{\mathrm{low}}$ and $F_{\mathrm{high}}$. We see that $\langle F_{\mathrm{low}}\rangle$ is initially larger than $\langle F_{\mathrm{high}}\rangle$. Eventually with age, $\langle F_{\mathrm{high}}\rangle$ increases to match $\langle F_{\mathrm{low}}\rangle$, and even slightly exceeds it at very old ages. Thus, low-$k$ nodes behave similarly to lab deficits, and high-$k$ nodes behave similarly to clinical deficits in observational health data: low-$k$ nodes and lab measures both damage early, while high-$k$ nodes and clinical measures both damage late.
We have not tuned our model parameterization to obtain this age-structure of damage in the network model. Indeed, for other parameter choices we see qualitatively similar behavior (data not shown) for the scale-free networks that we have been using. To better understand this age-structure we consider the effects of network connectivity within our mean-field theory.
\begin{figure}
\begin{minipage}[th]{0.45\textwidth}
\includegraphics[trim=5mm 5mm 3mm 3mm, clip, width=\textwidth]{DamageRates_Separate.pdf}
\caption{Average mean-field damage rates $\langle \Gamma_+\rangle/\Gamma_0$ for nodes of a given degree $k$ (as indicated) vs the local frailty of these nodes $f$, as given by Eq.~\ref{average rates}. Low-connectivity nodes exhibit significantly higher damage rates at intermediate values of $f$.}
\label{Damage Rates Figure}
\end{minipage}
\end{figure}
In our mean-field theory, the averaged damage rates depend explicitly on $k$ in Eq.~\ref{average rates}. As shown in Fig.~\ref{Damage Rates Figure}, these mean-field damage rates increase with decreasing $k$ at a given $f$. This results from Jensen's inequality: the damage rate is convex in the local frailty $f$, and lower-degree nodes have a broader distribution of local frailty for the same global frailty. This implies that low-$k$ nodes should damage more frequently until they are exhausted and $F_{\mathrm{low}}$ saturates.
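The Jensen's-inequality argument can be checked numerically with any convex rate function. The exponential form and parameter below are stand-ins rather than the model's actual rates from Eq.~\ref{average rates}, and the frailty samples are hypothetical:

```python
from math import exp

def mean_rate(f_samples, gamma=7.5):
    """Average of a convex damage rate Gamma(f) ~ exp(gamma * f)."""
    return sum(exp(gamma * f) for f in f_samples) / len(f_samples)

# Same mean local frailty <f> = 0.5, different spreads:
narrow = [0.4, 0.5, 0.6]  # high-k node: many neighbors, f self-averages
broad  = [0.0, 0.5, 1.0]  # k = 2 node: f can only be 0, 1/2, or 1
print(mean_rate(narrow) < mean_rate(broad))  # -> True, by convexity
```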
We can confirm this with the full MFT results. We can determine the FI from Eq.~\ref{MFT FI} and calculate both $F_{\mathrm{high}}$ and $F_{\mathrm{low}}$ by choosing which degrees to include. The $k_{\mathrm{low}}$ and $k_{\mathrm{high}}$ determine the nodes included in the FI. For $F_{\mathrm{high}}$, we choose $k_{\mathrm{high}} = k_{m_2} - 1$ and $k_{\mathrm{low}}$ so that $N\sum_{k=k_{\mathrm{low}}}^{k_{\mathrm{high}}}P(k) = n \simeq 32$ for the smallest possible $k_{\mathrm{low}}$ (32 is the number of FI nodes typically used in our model and observational studies). For $F_{\mathrm{low}}$, we choose $k_{\mathrm{low}}=k_{\mathrm{min}}$ and choose the smallest $k_{\mathrm{high}}$ so that $n \simeq 32$.
We also calculate
\begin{equation}
\langle k_{\mathrm{dam}}(t) \rangle =\frac{\sum\limits_{k = k_{\mathrm{low}}}^{k_{\mathrm{high}}} kP(k) p_{k|\mathrm{alive}}}{\sum\limits_{k = k_{\mathrm{low}}}^{k_{\mathrm{high}}} P(k)p_{k|\mathrm{alive}}},
\end{equation}
which is the cumulative average degree of damaged nodes as was done for our computational results.
\begin{figure}
\begin{minipage}[th]{0.45\textwidth}
\includegraphics[trim=5mm 5mm 3mm 3mm, clip, width=\textwidth]{MFM_AverageKdam_Disassortative_Inset.pdf}
\caption{From our mean-field calculation in Sec. ~\ref{MFT}, we show the average degree of damaged deficits $\langle k_{\mathrm{dam}} \rangle$ scaled by the average network degree $\langle k \rangle$ vs time scaled by the time when the network becomes fully damaged, $t/t_{\mathrm{scale}}$. The inset shows the average damage of high connectivity nodes $\langle F_{\mathrm{high}}\rangle$ in blue and low connectivity nodes $\langle F_{\mathrm{low}}\rangle$ vs the scaled time.}
\label{Mean Field Time Structure}
\end{minipage}
\end{figure}
In Fig.~\ref{Mean Field Time Structure} the age-structure from the mean-field calculation shows the same early damage of low-$k$ nodes seen in Fig.~\ref{Model Time Structure} and (in the inset) the more-rapid growth of $F_{\mathrm{low}}$ compared to $F_{\mathrm{high}}$ at earlier times. Our mean-field calculation also shows a more-rapid growth of $F_{\mathrm{high}}$ compared to $F_{\mathrm{low}}$ at later times, as shown in the inset of Fig.~\ref{Mean Field Time Structure}. This is largely explained by the saturation of $F_{\mathrm{low}}$.
We conclude that the age-structure seen observationally and in our network model can be explained by the degree distribution and neighbor-degree correlations of our MFT. This motivates us to investigate how node degree and neighbor-degree affect mortality within the context of our network model.
\begin{figure}
\begin{minipage}[th]{0.45\textwidth}
\includegraphics[trim=14mm 16mm 6mm 6mm, clip,width=\textwidth]{I_Bot_I_Top_Fingerprint.pdf}
\caption{Mutual-information of selected model deficits $I(A;D_i| t = 80)$ at age $80$ years, averaged over $10$ randomly generated networks with $10^7$ individuals each and rank-ordered. Red points are low-$k$ deficits, blue points are high-$k$ deficits.}
\label{Model Information Fingerprint}
\end{minipage}
\end{figure}
\subsection{Model Node Information}
Fig.~\ref{Model Information Fingerprint} shows the mutual information between death age and individual nodes $I(A;D_i)$ for our model. Red points are a random selection of $100$ low-connectivity nodes, all with $k = k_{\mathrm{min}} = 2$; the blue points are the top $100$ most connected nodes (excluding the $2$ mortality nodes). For each selection, we have rank-ordered the nodes in terms of mutual information. The mutual information for both high- and low-connectivity nodes is comparable. This is surprising, since previous work showed a monotonic increase of the average information with connectivity \cite{Farrell:2016}. However, that work used a different network for each individual, so that network properties other than the average degree were lost by pooling nodes of the same degree.
Without parameter tuning, we obtain striking qualitative agreement of the magnitude of the mutual-information with mortality for both model and observational data (see Figs.~\ref{NHANES Information Fingerprint} and \ref{CSHA Information Fingerprint}). We also obtain an overlap of magnitudes of the mutual-information of low-degree and high-degree nodes that is similar to that seen between pre-clinical and clinical deficits. Since we know the model network connectivity, we can now examine what network properties cause this behavior for our model.
\begin{figure}
\begin{minipage}[htb]{0.45\textwidth}
\includegraphics[trim=17mm 16mm 6mm 6mm, clip,width=\textwidth]{I_Bot_I_Top_Spectrums.pdf}
\caption{Model information spectra $I(A;D_i|t=80)$ vs degree $k_i$ for the top $100$ most connected nodes in blue, or vs $k_{i,\mathrm{nn}}$ for a random selection of $100$ peripheral nodes all with $k=k_{\mathrm{min}} =2$. Points show a sample of a single network, line shows an average over $10$ randomly generated networks and the random choice of $100$ nodes with $k=2$, the shaded error region shows the standard deviation over the random networks.}
\label{Model Spectra}
\end{minipage}
\end{figure}
In Fig.~\ref{Model Spectra}, we show the ``spectrum'' of mutual information between death age and individual nodes $I(A;D_i|t=80)$. We use individuals at age $t=80$ years, where the mutual information is close to maximal \cite{Farrell:2016}. We use the same network for every individual, so that we do not lose the properties of the network between individuals. For the most connected nodes, in blue, we plot mutual information vs the connectivity of the nodes. Here we see the monotonic trend of mutual information vs connectivity, though there is significant variation for individual nodes. For the least connected nodes, in red, all of the nodes have $k=2$. Instead of connectivity, we consider the nearest-neighbor degree $k_{i,\mathrm{nn}} = \sum_{j\in \mathcal{N}(i)} k_j/k_i$ --- i.e.\ the connectivity of the neighbors of a node. With respect to $k_{\mathrm{nn}}$, we see a similar monotonic increase of the mutual information for $k=2$ nodes.
Neighbor-connectivity $k_{\mathrm{nn}}$ is predictive of mortality for minimally connected nodes. We hypothesize that this is because the neighbor-connectivity affects when peripheral ($k=2$) nodes are damaged, i.e. that peripheral nodes with low-$k_{\mathrm{nn}}$ are damaged earlier than those with large $k_{\mathrm{nn}}$.
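The nearest-neighbor degree used here is straightforward to compute from an adjacency list; the toy graph below is hypothetical:

```python
def nearest_neighbor_degree(adj, i):
    """k_nn,i = (1/k_i) * sum of neighbor degrees k_j."""
    return sum(len(adj[j]) for j in adj[i]) / len(adj[i])

# Node 0 is a peripheral k = 2 node attached to two better-connected nodes:
adj = {0: [1, 2], 1: [0, 2, 3, 4], 2: [0, 1, 3], 3: [1, 2], 4: [1]}
print(nearest_neighbor_degree(adj, 0))  # -> (4 + 3)/2 = 3.5
```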
\begin{figure}
\begin{minipage}[htb]{0.45\textwidth}
\includegraphics[trim=6mm 6mm 3mm 3mm, clip,width=\textwidth]{AverageTimeDamaged.pdf}
\caption{Average time of damage $\langle t_{\mathrm{dam}}\rangle$ vs degree $k$ for all non-mortality nodes in the network. Inset shows $\langle t_{\mathrm{dam}}\rangle$ for $k=2$ nodes vs nn-degree $k_{\mathrm{nn}}$. Nodes are binned based on $k$. The solid colored bars represent the entire range of average damage times observed for individual nodes within a bin, while the horizontal black lines indicate the average over the bin. All results are averaged for $10$ randomly generated networks.}
\label{Model knn damage}
\end{minipage}
\end{figure}
In the inset of Fig.~\ref{Model knn damage} we confirm that high-$k_{\mathrm{nn}}$, $k=2$ nodes damage later. This allows high-$k_{\mathrm{nn}}$ nodes to be informative of mortality because they are diagnostic of a more highly damaged network. From Fig.~\ref{Model knn damage} we see that there is a large range of average damage times for lower-$k$ nodes. Nevertheless, on average the high-$k_{\mathrm{nn}}$ nodes at $k=2$ damage before high-$k$ nodes, even though (see Fig.~\ref{Model Spectra}) they can be similarly informative.
\subsection{Model Network Structure}
\label{network structure}
We have seen that our network model of aging is able to capture detailed behavior of lab and clinical FIs, such as the larger damage rates for low-$k$ nodes together with the surprising informativeness of some low-$k$ nodes. The network is an important aspect of our model, and so far we have assumed that it is a preferential attachment scale-free network \cite{Krapivsky:2001, Fotouhi:2013, Barabasi:1999}. In this section, we explore the qualitative behavior of different network topologies.
\begin{figure}
\begin{minipage}[htb]{0.45\textwidth}
\includegraphics[trim=5mm 7mm 5mm 3mm, clip,width=\textwidth]{NetworkPlot_knn.pdf}
\caption{Average nn-degree $\langle k_{\mathrm{nn}}(k)\rangle$ vs degree $k$ for a disassortative network (default network) (purple circles), an assortative network created by reshuffling the links (green squares) \cite{Brunet:2004}, an Erd\H os--R\'enyi (ER) random network (yellow triangles), and a Watts-Strogatz (WS) small-world network (blue triangles). Note that $\langle k_{\mathrm{nn}}(k)\rangle$ is grouped into bins of powers of $2$ and averaged within the bins for the scale-free networks. A bin for each degree is used for the ER random and WS small-world networks.}
\label{knn}
\end{minipage}
\end{figure}
Our network model has predominantly disassortative correlations (due to the scale-free exponent $\alpha<3$ \cite{Barrat:2005}) --- meaning that low-$k$ nodes tend to connect to high-$k$ nodes, and that the average nn-degree decreases with degree \cite{Newman:2002}. We see this in Fig.~\ref{knn}, where we plot the average nn-degree $\langle k_{\mathrm{nn}}(k)\rangle$ as a function of degree for our network. The purple points indicate our preferential attachment model network, and we see that the average nn-degree decreases with increasing degree.
The green curve shows a rewired assortative network \cite{Newman:2002} made by preserving the degrees of the original network but swapping links. To do this we use the method of Brunet \textit{et al.}, using $N^2$ rewiring iterations with a parameter $p=0.99$ \cite{Brunet:2004}. By modifying the nn-degrees of low-degree nodes, we can investigate whether $k_{\mathrm{nn}}$ causes, or is merely correlated with, informative low-$k$ nodes. Note that we use only the largest connected component of the rewired network, with $\langle N \rangle=9989$ nodes over 10 network realizations.
The yellow triangles in Fig.~\ref{knn} show an Erd\H os--R\'enyi (ER) random network. A random network is created by starting with $N$ nodes and randomly connecting each pair of nodes with probability $p_{\mathrm{attach}} = \langle k \rangle/(N - 1)$ \cite{Barabasi:2016}. This results in a (peaked) binomial degree distribution and completely uncorrelated connections, where $k_\mathrm{nn} = \langle k^2\rangle/\langle k \rangle$ is independent of individual node degree. As before, we only use the largest connected component, with $\langle N \rangle =9805$ nodes over 10 network realizations. The ER network also allows us to explore whether the heavy tail of the scale-free degree distribution is required to recover our observational results.
The light blue triangles in Fig.~\ref{knn} show a Watts-Strogatz (WS) small-world network \cite{Watts:1998}. This network starts with a uniform ring network with $k_i = \langle k \rangle$ for all nodes, and randomly rewires each link with probability $p_{\mathrm{rewire}}$ to another randomly selected node. We use $p_{\mathrm{rewire}} = 0.05$ to obtain both high clustering (i.e.\ links between neighbors of nodes) and short average path-lengths between arbitrary pairs of nodes \cite{Barabasi:2016}. This network has a narrowly peaked degree distribution with a rapidly decaying exponential tail. ER and WS networks are similar, as both have short average path lengths between arbitrary nodes and non-heavy-tailed degree distributions, but the WS small-world network also has high clustering for small $p_{\mathrm{rewire}}$.
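The ER construction described above can be sketched directly, with $p_{\mathrm{attach}} = \langle k \rangle/(N-1)$. The network size and seed below are illustrative, and (as in our analysis) one would additionally keep only the largest connected component:

```python
import random

def erdos_renyi_edges(n, k_mean, seed=1):
    """G(N, p) edge list, with p chosen to give mean degree k_mean."""
    p = k_mean / (n - 1)
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

edges = erdos_renyi_edges(2000, 4.0)
mean_degree = 2 * len(edges) / 2000
print(mean_degree)  # close to 4, with a peaked binomial degree distribution
```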
To examine network effects on our network aging model, we have kept the same model parameters for the (default) preferential attachment disassortative network, the assortative network, the ER random network, and the WS small-world network. (The scale-free exponent $\alpha$ is only used in the disassortative and assortative networks.) We examine $10$ random realizations of each network. We have also varied model parameters independently for each of these networks (data not shown) and obtain the same qualitative results.
\begin{figure}
\begin{minipage}[htb]{0.45\textwidth}
\includegraphics[trim=15mm 13mm 5mm 5mm, clip,width=\textwidth]{I_Bot_I_Top_Fingerprint_Networks_4.pdf}
\caption{Rank ordered information $I(A;D_i|t=80)$ for the different networks, as indicated. The top $100$ most connected nodes in the network are in blue circles, and $100$ randomly selected nodes of the lowest degrees are in red squares. Results for each different network topology are averaged over $10$ randomly generated network realizations.}
\label{Networks Fingerprint}
\end{minipage}
\end{figure}
In Fig.~\ref{Networks Fingerprint} we show rank-ordered information fingerprints for individual deficits $I(A;D_i|t)$, for the different network topologies as indicated. We observe striking differences in the scale and range of the mutual information with respect to mortality, and in the differences between the most and least connected nodes. The random and small-world networks both have a significantly smaller scale of mutual information, together with a much smaller range of variation.
The scale-free disassortative (default) and assortative networks both have a significantly higher scale of information for the most connected nodes, as well as considerable variation (approximately 10-fold) among them. However, while the disassortative network exhibits similar scales of information between the most and least connected nodes, the assortative network does not. Furthermore, the assortative network shows only minimal variation of information among its least connected nodes.
Only the disassortative (default) network exhibits the fingerprint of mutual information of the NHANES and CSHA observational studies, in Figs.~\ref{NHANES Information Fingerprint} and \ref{CSHA Information Fingerprint} respectively: with considerable variation of mutual information between deficits, overlapping ranges between lab (low) and clinical (high) connectivity deficits, and mutual information on the order of $10^{-2}$ for individual deficits.
\begin{figure}
\begin{minipage}[thb]{0.45\textwidth}
\includegraphics[trim=10mm 10mm 3mm 5mm, clip,width=\textwidth]{FI_TopVsBot_pA_4.pdf}
\caption{Average low-$k$ $\langle F_{\mathrm{low}}(t)\rangle$ vs average high-$k$ $\langle F_{\mathrm{high}}(t)\rangle$ plotted for $t = 0$ to $t = 110$ for our default network parameters (purple), the shuffled assortative network (green), the Erd\H os--R\'enyi random network (yellow), and the Watts-Strogatz small world network (light blue). The dashed black line shows the line $\langle F_{\mathrm{low}}(t)\rangle = \langle F_{\mathrm{high}}(t)\rangle$. Results are averaged over $10$ randomly generated networks and the standard deviations are smaller than the line width.}
\label{Shuffled F vs F}
\end{minipage}
\end{figure}
In Fig.~\ref{Shuffled F vs F}, we investigate the age-structure of the FIs generated by the low and high connectivity nodes. We plot $\langle F_{\mathrm{low}}(t) \rangle$ vs $\langle F_{\mathrm{high}}(t) \rangle$ for the different network topologies. We see that the assortative network shows a rapid increase in $F_{\mathrm{low}}$, followed by growth of $F_{\mathrm{high}}$. In contrast, for the disassortative, random, and small-world networks there is comparable growth of both $F_{\mathrm{low}}$ and $F_{\mathrm{high}}$, though with higher $F_{\mathrm{low}}$ and a later cross-over for the disassortative network.
\begin{figure}
\begin{minipage}[htb]{0.45\textwidth}
\includegraphics[trim=10mm 10mm 6mm 6mm, clip,width=\textwidth]{Model_Mortality_4.pdf}
\caption{Mortality rate vs age for each of the networks. a) Disassortative scale-free network (purple circles), b) assortative scale-free network (green), c) WS small-world network (light blue), and d) ER random network (yellow). Computational results (circles) are averaged over $10$ randomly generated networks and error bars show the standard deviations. Black squares are observed human mortality rates \cite{Arias:2014}.}
\label{Mortality Rates}
\end{minipage}
\end{figure}
In Fig.~\ref{Mortality Rates} we plot the average mortality rates vs age for different network topologies, with colored circles showing the computational model results and colored lines for the corresponding mean-field model results. Black squares indicate observed mortality rates \cite{Arias:2014}. Similarly, in Fig.~\ref{Frailty Growth} we plot $\langle F_{\mathrm{high}} \rangle$ vs age $t$ for both observational data (black squares) and model data for different networks (coloured points).
Even without parameter adjustment, most of the network topologies approximately capture the observational data after $t=20$ years. Some differences are seen, particularly for the assortative scale-free network in the mortality rate. This agreement indicates that mortality and frailty data alone do not strongly constrain the network topology.
\begin{figure}
\begin{minipage}[htb]{0.45\textwidth}
\includegraphics[trim=10mm 10mm 6mm 6mm, clip,width=\textwidth]{FI_VsAge_4.pdf}
\caption{$\langle F_{\mathrm{high}} \rangle$ vs age for each of the networks, as indicated. a) Disassortative scale-free network (purple circles), b) assortative scale-free network (green), c) WS small-world network (light blue), and d) ER random network (yellow). Computational results (circles) are averaged over $10$ randomly generated networks and error bars show the standard deviations. Black squares are observed human clinical frailty \cite{Mitnitski:2013}.}
\label{Frailty Growth}
\end{minipage}
\end{figure}
From Fig.~\ref{Shuffled F vs F}, we observed early damage of $F_{\mathrm{low}}$ in the assortative network. Our MFT allows us to narrow down what aspects of the network lead to this behavior, since the only aspects of the network structure included are the degree distribution $P(k)$ and nn-degree correlations $P(k'|k)$.
Different network topologies are easily introduced provided $P(k)$ and $P(k'|k)$ are known. The exact $P(k)$ for our default shifted-linear preferential attachment networks \cite{Fotouhi:2013}, ER random networks, and WS small-world networks \cite{Barrat:2000} are known. (We remove zero degree nodes from the ER random degree distribution, so that $P_{k\neq 0}(k) = P(k)/\sum_{l \neq 0}P(l)$.) Using various $P(k'|k)$ we can then put different degree correlations into our MFT network. We include three types of degree correlations, uncorrelated (neutral), assortative, and disassortative \cite{Barabasi:2016}.
For a network with uncorrelated (neutral) connections, $P(k'|k) = k'P(k')/\langle k \rangle$. We then have $k_{\mathrm{nn}}(k) = \sum_{k'} k' P(k'|k) = \langle k^2 \rangle/\langle k \rangle$, so that all nodes have the same nn-degree. These correlations are used for ER random and WS small-world networks, and recover the approximately constant $k_{\mathrm{nn}}$ that we observed in Fig.~\ref{knn}.
In a network with assortative correlations, nodes tend to be connected to other nodes of similar degree. Assortative correlations that approximate those used in our computational model in Sec.~\ref{network structure} are \cite{Moreno:2003} $P(k'|k) = \alpha \delta_{k'k} + (1 - \alpha) k'P(k') /\langle k \rangle$. These lead to $k_{\mathrm{nn}}(k) = \sum_{k'} k' P(k'|k) = \alpha k + (1 - \alpha) \langle k^2 \rangle /\langle k \rangle$, which increases linearly with $k$ (see Fig.~\ref{knn}). Changing $\alpha$ modifies the amount of assortative correlation; we use $\alpha = 0.8$.
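The neutral and assortative correlation ansatzes can be compared directly for any degree distribution; the $P(k)$ below is illustrative, not our model's shifted-linear preferential attachment distribution:

```python
def knn_neutral(pk):
    """Uncorrelated wiring: k_nn(k) = <k^2>/<k>, the same for every k."""
    k1 = sum(k * p for k, p in pk.items())
    k2 = sum(k * k * p for k, p in pk.items())
    return k2 / k1

def knn_assortative(k, pk, alpha=0.8):
    """k_nn(k) = alpha * k + (1 - alpha) * <k^2>/<k>, linear in k."""
    return alpha * k + (1 - alpha) * knn_neutral(pk)

pk = {2: 0.5, 4: 0.3, 8: 0.2}  # hypothetical P(k)
print(knn_neutral(pk))                                 # flat in k
print(knn_assortative(2, pk), knn_assortative(8, pk))  # grows with k
```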
In a network with disassortative connections, nodes tend to be connected to other nodes of differing degree. The (disassortative) correlations for our default shifted-linear preferential attachment network are \cite{Fotouhi:2013},
\begin{gather} \nonumber
P(k'|k) = \frac{ \Gamma(k + \lambda + \alpha) \Gamma(k' + \lambda) }{ k \Gamma(m + \lambda) \Gamma(k + k' + 2\lambda+\alpha) } \\\times \Bigg[ \sum_{i=m+1}^k \frac{\Gamma(i + m + 2\lambda + \alpha - 1)}{\Gamma(i + \lambda + \alpha - 1)}\binom{k+k'-m-i}{k'-m} \\ \nonumber + \sum_{i = m+1}^{k'}\frac{\Gamma(i + m + 2\lambda + \alpha - 1)}{\Gamma(i + \lambda + \alpha - 1)}\binom{k+k'-m-i}{k-m} \Bigg], \label{default correlations}
\end{gather}
where $m = \langle k \rangle/2 = k_{\mathrm{min}}$ and $\lambda = m (\alpha - 3)$. This is exact in the limit $N \to \infty$ \cite{Fotouhi:2013}, and gives disassortative correlations where $k_{\mathrm{nn}}(k)$ decreases with $k$.
\begin{figure}
\begin{minipage}[thb]{0.45\textwidth}
\includegraphics[trim=3mm 5mm 4mm 3mm, clip,width=\textwidth]{MFM_FbotFtop_DisassortativeAssortativeRandomSmallWorld.pdf}
\caption{Average low-$k$ $\langle F_{\mathrm{low}}(t)\rangle$ vs average high-$k$ $\langle F_{\mathrm{high}}(t)\rangle$ from our mean-field model in Sec.~\ref{MFT}. The dashed black line shows the line $\langle F_{\mathrm{low}}(t)\rangle = \langle F_{\mathrm{high}}(t) \rangle$. A scale-free network with preferential attachment disassortative correlations (default network) in purple, scale-free network with assortative correlations in green, and a WS small-world network with neutral correlations in light blue.}
\label{Mean Field F vs F}
\end{minipage}
\end{figure}
In Fig.~\ref{Mean Field F vs F} we show the average low-$k$ FI vs the average high-$k$ FI, $\langle F_{\mathrm{low}}(t)\rangle$ vs $\langle F_{\mathrm{high}}(t)\rangle$ from our MFT. In purple we use the (default) preferential attachment disassortative correlations, in green we use assortative correlations, and in light blue we use a WS small-world network. We see qualitative agreement with the age-structure shown in Fig.~\ref{Shuffled F vs F} -- confirming that nn-degree correlations (included in our MFT) are important for the observed age-structure. [We have not shown MFT results for the ER random network since $\langle F_{\mathrm{low}} \rangle$ behaves poorly when it includes nodes with $k \leq 2$, due to their great variability of local frailty $f_i$.]
\subsection{Mutual information of FI with mortality}
We have seen that $F_{\mathrm{low}}$ damages earlier than $F_{\mathrm{high}}$ (Fig.~\ref{Model Time Structure}) and that the mutual information of poorly connected ($k=2$) nodes with large nearest-neighbor degree significantly overlaps with the informativeness of the most connected nodes (Fig.~\ref{Model Spectra}) in our (disassortative) scale-free network model. Because of these informative earlier-damaged nodes, we were interested in whether $F_{\mathrm{low}}$ could be more informative of mortality than $F_{\mathrm{high}}$, particularly at younger ages. In Fig.~\ref{Finformation} we show the difference in information between $F_{\mathrm{low}}$ and $F_{\mathrm{high}}$ for different mortality outcomes vs age. We find that $F_{\mathrm{low}}$ is slightly more informative at ages less than $\approx 65$, and is increasingly more informative than $F_{\mathrm{high}}$ at these younger ages for longer mortality outcomes. This is the result of $F_{\mathrm{low}}$ nodes damaging early but having a delayed effect on mortality, so that they are an early predictor of later mortality but less so of immediate mortality. The relatively large standard deviations for different randomly generated networks show that this result is affected by the particular randomly generated network.
\begin{figure}
\begin{minipage}[thb]{0.45\textwidth}
\includegraphics[trim=3mm 5mm 4mm 3mm, clip,width=\textwidth]{Model_BinaryMortality_Information.pdf}
\caption{The difference in mutual information of $F_{\mathrm{low}}$ and $F_{\mathrm{high}}$ ( $I(M;F_{\mathrm{low}}|t)- I(M;F_{\mathrm{high}}|t)$ ) vs age $t$ for different binary mortality outcomes. 5 year mortality outcomes are shown as turquoise circles and 10 year as orange squares. The dashed line shows when the information of both FIs are equal. Error bars represent the standard deviation between randomly generated networks. The purple down and green up triangles indicate the information difference for 10 and 5 year mortality, respectively, of $F_{\mathrm{low}}^{\mathrm{high}\text{-}k_{\mathrm{nn}}}$ which is constructed with $n=32$ nodes with $k=2$ that are randomly chosen from those with above-average $k_{\mathrm{nn}}$.}
\label{Finformation}
\end{minipage}
\end{figure}
While the observational NHANES and CSHA sample sizes are much smaller, a similar calculation shows a slightly lower $F_{\mathrm{lab}}$ mutual information ($-0.002 \pm 0.013$) compared to $F_{\mathrm{clin}}$ in the NHANES data for younger people ($65-75$ years) and a slightly higher $F_{\mathrm{lab}}$ mutual information ($+0.033 \pm 0.027$) compared to $F_{\mathrm{clin}}$ in the CSHA data. While we do not have sufficient data to vary our mortality outcome to determine whether $F_{\mathrm{lab}}$ is more predictive of later mortality outcomes, as we did in the model, we can see in the CSHA data that $F_{\mathrm{lab}}$ is more informative for younger people.
Since we found that the most informative low-connectivity nodes were those with large $k_{\mathrm{nn}}$, we also considered an FI constructed from $n=32$ randomly chosen nodes of lowest degree ($k=2$) from those that have above-average $k_{\mathrm{nn}}$. The information advantage of $F_{\mathrm{low}}^{\mathrm{high}\text{-}k_{\mathrm{nn}}}$ is indicated in Fig.~\ref{Finformation} with down and up triangles for 10 and 5 year mortality, respectively. The advantage over $F_{\mathrm{high}}$ is large and significant for ages below $t=80$ years, with a stronger advantage at earlier ages for later mortality. This will be an attractive avenue to pursue.
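The construction of $F_{\mathrm{low}}^{\mathrm{high}\text{-}k_{\mathrm{nn}}}$ can be sketched directly from an adjacency structure. The toy network, damage states, and the choice of comparing each node's $k_{\mathrm{nn}}$ to the network-wide average are our own illustrative assumptions, not the model's parameters:

```python
# Toy undirected network as an adjacency dict (a hypothetical stand-in
# for the model's scale-free network): a hub (node 0) plus a small chain.
adj = {0: [1, 2, 3, 4, 5], 1: [0, 2], 2: [0, 1], 3: [0, 4],
       4: [0, 3], 5: [0, 6], 6: [5, 7], 7: [6]}

degree = {i: len(nbrs) for i, nbrs in adj.items()}
# k_nn(i): mean degree of node i's nearest neighbors.
k_nn = {i: sum(degree[j] for j in nbrs) / len(nbrs) for i, nbrs in adj.items()}
mean_knn = sum(k_nn.values()) / len(k_nn)

# Lowest-degree (k = 2) nodes whose neighbors are better connected than average.
protected = sorted(i for i in adj if degree[i] == 2 and k_nn[i] > mean_knn)

def frailty_index(damage, nodes):
    """FI = fraction of damaged deficits among the selected nodes."""
    return sum(damage[i] for i in nodes) / len(nodes)

# Hypothetical binary damage states d_i for one individual.
damage = {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 0, 6: 1, 7: 0}
F_low_high_knn = frailty_index(damage, protected)
```

In the toy graph the selected nodes are exactly the $k=2$ nodes attached to the hub, i.e., the "protected" nodes discussed in the text.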
\section{Summary and Discussion}
The observational $F_{\mathrm{clin}}$ or $F_{\mathrm{lab}}$ respectively measure clinically observable damage that tends to occur late in life or pre-clinical damage that is typically observable in lab tests or biomarkers before clinical damage is seen. However, they are similarly informative of human mortality \cite{Howlett:2014, Blodgett:2017, Blodgett:2016}. Our analysis indicates that individual laboratory and clinical deficits have broad and overlapping ranges of mutual information.
Our working hypothesis is that clinical deficits correspond to high connectivity nodes of a complex network, while laboratory deficits correspond to lower connectivity nodes. With our network model of individual aging and mortality, we have confirmed that $F_{\mathrm{high}}$ and $F_{\mathrm{low}}$, formed from high and low connectivity nodes respectively, behave similarly to the observational $F_{\mathrm{clin}}$ and $F_{\mathrm{lab}}$.
Within the context of our aging model, we uncover the mechanisms of this observed behavior. In our model low-$k$ nodes tend to damage before high-$k$ nodes. This is because of the larger average damage rates of low-$k$ nodes compared to high-$k$ nodes (as calculated with our network mean field theory, and illustrated in Fig.~\ref{Damage Rates Figure}). At the same time, our information spectrum shows that information $I(A;D_i|t)$ increases with $k$. Roughly speaking, high-$k$ nodes need a larger local frailty $f$ to have damage rates comparable to those of low-$k$ nodes. Thus, damage of high-$k$ nodes is informative of high network damage, which also leads to mortality. This is why high-$k$ nodes both damage later and are informative of mortality (Fig.~\ref{Model Spectra}b).
However, some low-$k$ nodes also damage later and are highly informative of mortality. Information $I(A;D_i|t)$ increases with $k_{\mathrm{nn}}$ for the low-$k$ nodes, and low-$k$ high-$k_{\mathrm{nn}}$ nodes damage later. This can also be explained using the network structure. Low-$k$ nodes are protected from damage when they are connected to high-$k$ nodes. Rapidly damaging low-$k$ nodes without this protection tend to damage early for most individuals, giving these nodes a low information value of mortality. Conversely, protected nodes tend to damage only when their high degree neighbors start to damage, which only occurs when the network is heavily damaged and close to mortality. As a result, only the low-$k$ nodes with high-$k_{\mathrm{nn}}$ are highly informative (Fig.~\ref{Model Spectra}a). Interestingly these nodes still tend to damage before high-$k$ nodes, leading to an early predictor of mortality.
Degree correlations control the average degree of neighboring nodes and hence control the amount of protection in low-$k$ nodes. By modifying the degree correlations in the network in our computational model we have shown that this protection can be caused by disassortative correlations --- where low-$k$ nodes tend to attach to high-$k$ nodes. Conversely, eliminating low-$k$ high-$k_{\mathrm{nn}}$ nodes by modifying the network to introduce assortative correlations removes this protection, and we then find all low-$k$ nodes have low information (Fig.~\ref{Networks Fingerprint}b).
Our mean-field model allows us to explicitly modify the degree distribution and the degree correlations with the nearest-neighbor degree distribution $P(k'|k)$, and to include no other network features. In our mean-field model we see similar results to our computational model where, e.g., adding assortative correlations increases the rate at which $F_{\mathrm{low}}$ increases with respect to $F_{\mathrm{high}}$. This confirms that degree distribution and degree correlations largely determine the early damage of low-$k$ nodes that we observe in scale-free networks.
Degree distributions and correlations only weakly control the behavior of ER random and WS small-world networks. The low variation in $k$ and $k_{\mathrm{nn}}$ in those networks results in a lack of contrast between the damage rates of nodes. This leads to node information that is nearly constant throughout the network and to only small differences in the damage structure of low-$k$ and high-$k$ nodes (Fig.~\ref{Networks Fingerprint}c and d). This also leads to a low magnitude of mutual information per node, since nodes behave much more uniformly and ``randomly'' than in a scale-free network. However, we can still see some protection in low-$k$ nodes. This is particularly apparent in the ER random network when $F_{\mathrm{high}}$ surpasses $F_{\mathrm{low}}$ (Fig.~\ref{Shuffled F vs F}d).
The behavior of observational deficits seems to best resemble the behavior of the computational model with a scale-free network and disassortative correlations. Node information seen in the (default) scale-free disassortative network is a much better qualitative match of observational data, as compared with scale-free assortative, WS small-world, or ER random networks.
Our analogy between observational deficits and model nodes allows us to make predictions about the underlying network structure of observational health deficits, even though we cannot directly measure this network. The observational network should have a heavy-tail degree distribution, so that a large range of possible information values can be obtained. The network should also include disassortative correlations so that there are connections between high-$k$ and low-$k$ nodes, allowing low-$k$ nodes to be informative of mortality.
From observational data we find that clinical deficits that integrate many systems into their performance (e.g. functional disabilities, or social engagement) are very informative (Figs.~\ref{NHANES Information Fingerprint} and \ref{CSHA Information Fingerprint}). In contrast, single diagnoses, even ones strongly associated with age such as osteoporosis, on their own offer less value. The model interpretation of this is that these high information disability deficits have a higher connectivity than lower information clinical deficits. It intuitively makes sense for deficits that integrate many systems to have a large connectivity. In support of this, our partial network reconstruction (Fig.~\ref{Reconstructed Clinical Fingerprint}) shows that high information clinical deficits in both the NHANES and CSHA correspond to nodes with a high reconstructed degree.
We have shown that the age-structure of network damage is related to the network structure. Highly informative low-degree nodes (pre-clinical deficits) damaged early in life promote the damage of their high-degree neighbors, but the damage to their high-degree neighbors takes time and is not seen in the high-degree (clinical) FI until later ages. Indeed, we have shown that a $F_{\mathrm{low}}$ is slightly more informative at earlier ages, and is increasingly informative for longer mortality outcomes ($5$ year vs $10$ year) (see Fig.~\ref{Finformation}). Choosing more high-$k_{\mathrm{nn}}$ nodes in $F_{\mathrm{lab}}$ significantly enhances this effect. Low-$k$ nodes are informative of long-term mortality rather than short-term. Similar results are seen in the observational CSHA data, which indicates that $F_{\mathrm{lab}}$ could be used as an early measure of risk of future poor health.
Our network model is generic, without a specific mapping between model nodes and observed human deficits. This is because we have no reliable way of extracting a specific network from observational data, though we have shown that rank-ordering of high-connectivity nodes can be done. It is also because distinct parameterization of every node of such a network model would require enormous amounts of observational data, if it could be done at all. Nevertheless, we can use our generic model to explore robust qualitative phenotypes --- to uncover generic mechanisms, to predict behavior, and to improve the utility of the Frailty Index in human aging and mortality.
In this paper we have kept our model parameterization unchanged from the default parameters, though we have checked (data not shown) that our results are qualitatively robust to parameter variation. This has allowed us to explore the impact of network topology on mortality statistics (a small effect) and on mutual information between health deficits (a strong and distinctive effect). The model phenomenology of $F_{\mathrm{high}}$ and $F_{\mathrm{low}}$ is also affected by changes in network topology. This indicates that both $F_{\mathrm{high}}$ and $F_{\mathrm{low}}$ are usefully distinct characteristics of health in our network model. Our results provide insight into the mechanisms of the similarly useful and distinct observational $F_{\mathrm{clin}}$ and $F_{\mathrm{lab}}$ \cite{Blodgett:2016, Howlett:2014, Mitnitski:2015}.
\subsection{Acknowledgments}
We thank ACENET and Compute Canada for computational resources. ADR thanks the Natural Sciences and Engineering Research Council (NSERC) for operating grant RGPIN-2014-06245. KR is funded in this work by career support as the Kathryn Allen Weldon Professor of Alzheimer Research from the Dalhousie Medical Research Foundation, and with operating funds from the Canadian Institutes of Health Research (MOP-102544) and the Fountain Innovation Fund of the Queen Elizabeth II Health Science Foundation. SGF thanks NSERC for a CGSM fellowship.
\section{Proofs}
\subsection{Definitions}
\paragraph{Distribution re-weighting}
\begin{repdef}{def:valid_weight}
A function $w : \mathcal X \times \mathcal T \rightarrow \mathbb{R}_+$ is a valid re-weighting of $p_\mu$ if
$$
\mathbb{E}_{x, t\sim p_\mu}[w(x,t)] = 1 \;\; \mbox{and} \;\; p_\mu(x,t) > 0 \Rightarrow w(x,t) > 0.
$$
We denote the re-weighted density $p_\mu^w(x,t) := w(x,t)p_\mu(x,t)$.
\end{repdef}
\paragraph{Expected \& empirical risk}
We let the (expected) risk of $f$ measured by $\ell_f$ under $p_\mu$ be denoted
$$
R_\mu(f) = \mathbb{E}_{p_\mu}[\ell_f(x,t)]
$$
where $\ell_f$ is an appropriate loss function, and the empirical risk over a sample $D_\mu = \{(x_1, t_1, y_1), ..., (x_n, t_n, y_n)\}$ from $p_\mu$
$$
\hat{R}_\mu(f) = \frac{1}{n}\sum_{i=1}^n \ell_f(x_i, t_i, y_i) ~.
$$
We use the superscript $w$ to denote the re-weighted risks
$$
R^w_\mu(f) = \mathbb{E}_{p_\mu}[w(x, t)\ell_f(x, t)]
$$
$$
\hat{R}^w_\mu(f) = \frac{1}{n}\sum_{i=1}^n w(x_i, t_i) \ell_f(x_i, t_i, y_i)
$$
\begin{thmappdef}[Importance sampling]
For two distributions $p, q$ on $\mathcal Z$, of common support, $\forall z \in \mathcal Z : p(z) > 0 \iff q(z) > 0$, we call
$$
w_{IS}(z) := \frac{q(z)}{p(z)}
$$
the \emph{importance sampling} weights of $p$ and $q$.
\end{thmappdef}
\begin{repdef}{def:ipm}
The integral probability metric (IPM) distance, associated with the function family $\mathcal H$, between distributions $p$ and $q$ is defined by
$$
\mbox{\emph{IPM}}_\mathcal H(p,q) := \sup_{h : \|h\|_{\mathcal H} = 1} \left| \mathbb{E}_p[h] - \mathbb{E}_q[h] \right|
$$
\end{repdef}
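When $\mathcal H$ is the unit ball of a reproducing kernel Hilbert space, the IPM above is the maximum mean discrepancy (MMD), which admits a closed-form empirical estimate from kernel Gram matrices. A minimal sketch (the RBF bandwidth, sample sizes, and seed are illustrative):

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Pairwise squared distances via broadcasting: (n, m) Gram matrix.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd(X, Y, sigma=1.0):
    """Biased (V-statistic) estimate of IPM_H(p, q) = MMD for an RBF RKHS."""
    Kxx = rbf_kernel(X, X, sigma)
    Kyy = rbf_kernel(Y, Y, sigma)
    Kxy = rbf_kernel(X, Y, sigma)
    # ||mean embedding of X - mean embedding of Y||_H
    return float(np.sqrt(max(Kxx.mean() + Kyy.mean() - 2 * Kxy.mean(), 0.0)))

rng = np.random.default_rng(0)
same = mmd(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
shifted = mmd(rng.normal(size=(200, 2)), rng.normal(loc=2.0, size=(200, 2)))
```

Two samples from the same distribution give a small MMD, while a mean-shifted sample gives a clearly larger one, which is the behavior Theorem~\ref{thm:mmd_approx} quantifies.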
\subsection{Learning bounds}
\label{sec:bound_proofs}
We begin by bounding the expected risk under a distribution $p_\pi$ in terms of the expected risk under $p_\mu$ and a measure of the discrepancy between $p_\pi$ and $p_\mu$. Using Definition~\ref{def:ipm} we can show the following result.
\begin{replemma}{lem:ipm_bound}
For hypotheses $f$ with loss $\ell_f$ such that $\ell_f/\|\ell_f\|_{\mathcal H} \in \mathcal H$, and $p_\mu, p_\pi$ with common support, there exists a valid re-weighting $w$ of $p_\mu$, see Definition~\ref{def:valid_weight}, such that,
\begin{equation}
\begin{array}{ll}
R_\pi(f) & \leq R^w_\mu(f) + \|\ell_f\|_{\mathcal H}\mbox{\emph{IPM}}_{\mathcal H}(p_\pi, p^w_\mu) \\
& \leq R_\mu(f) + \|\ell_f\|_{\mathcal H}\mbox{\emph{IPM}}_{\mathcal H}(p_\pi, p_\mu)~.
\end{array}
\label{eq:app_ipm_bound}
\end{equation}
The first inequality is tight for importance sampling weights, $w(x,t) = p_\pi(x, t) / p_\mu(x, t)$. The second inequality is not tight for general $f$, even if $\ell_f \in \mathcal H$, unless $p_\pi = p_\mu$.
\label{lem:app_ipm_bound}
\end{replemma}
\begin{proof}
The result follows from the definition of $\mbox{IPM}$. Since $\ell_f/\|\ell_f\|_{\mathcal H} \in \mathcal H$,
\begin{align*}
R_\pi(f) - R_\mu^w(f) &=
\mathbb{E}_{\pi}[\ell_f(x, t)] - \mathbb{E}_{\mu}[w(x, t) \ell_f(x, t)] \\
& \leq \|\ell_f\|_{\mathcal H} \sup_{h : \|h\|_{\mathcal H} = 1} \left| \mathbb{E}_{\pi}[h(x, t)] - \mathbb{E}_{\mu}[w(x, t) h(x, t)] \right| \\
& = \|\ell_f\|_{\mathcal H}\, \mbox{IPM}_{\mathcal H}(p_\pi, p_\mu^w)~.
\end{align*}
The second inequality in \eqref{eq:app_ipm_bound} follows by taking $w \equiv 1$. Further, for the importance sampling weights $w_{IS}(x,t) = \frac{p_\pi(x,t)}{p_\mu(x,t)}$ and any $h\in \mathcal H$,
\begin{align*}
\mathbb{E}_{\pi}[h(x, t)] - \mathbb{E}_{\mu}[w_{IS}(x, t) h(x, t)]
=
\mathbb{E}_{\pi}[h(x, t)] - \mathbb{E}_{\mu}\left[\frac{p_\pi(x, t)}{p_\mu(x, t)} h(x, t)\right] = 0~,
\end{align*}
so the first inequality holds with equality in this case.
\end{proof}
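The tightness of the first inequality for importance sampling weights can be checked numerically. The sketch below is our own illustration: the two Gaussian densities and the test function $h$ are arbitrary choices, and the identity $\mathbb{E}_{\mu}[w_{IS}\, h] = \mathbb{E}_{\pi}[h]$ holds up to Monte Carlo error.

```python
import numpy as np

rng = np.random.default_rng(1)

# Source p_mu = N(0,1) and target p_pi = N(1,1): common support.
def p_mu(x): return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
def p_pi(x): return np.exp(-0.5 * (x - 1.0)**2) / np.sqrt(2 * np.pi)

x = rng.normal(size=200_000)       # draws from p_mu
w_is = p_pi(x) / p_mu(x)           # importance sampling weights

h = lambda z: np.sin(z) + z**2     # an arbitrary test function
# E_pi[h] in closed form: E[sin X] = sin(mu) e^{-1/2}, E[X^2] = mu^2 + 1 = 2.
lhs = np.sin(1.0) * np.exp(-0.5) + 2.0
rhs = float(np.mean(w_is * h(x)))  # E_mu[w_IS * h], Monte Carlo
```

The re-weighted source expectation `rhs` matches the target expectation `lhs` up to sampling noise, which is exactly the zero-gap statement in the proof.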
We could apply Lemma~\ref{lem:ipm_bound}\ to bound the loss under a distribution $q$ based on the weighted loss under $p$. Unfortunately, bounding the expected risk in terms of another expectation is not enough to reason about generalization from an empirical sample. To do that we use Corollary~2 of \citet{cortes2010learning}, restated as a Theorem below.
\begin{thmappthm}[Generalization error of re-weighted loss~\citep{cortes2010learning}]
For a loss function $\ell_h$ of any hypothesis $h \in \mathcal H \subseteq \{h' : \mathcal X \rightarrow \mathbb{R}\}$, such that $d = \text{\emph{Pdim}}(\{\ell_h : h\in \mathcal H\})$ where $\text{\emph{Pdim}}$ is the pseudo-dimension, and a weighting function $w(x)$ such that $\mathbb{E}_p[w] = 1$, with probability $1-\delta$ over a sample $(x_1, ..., x_n)$, with empirical distribution $\hat{p}$,
\begin{align*}
R^w_p(h) & \leq \hat{R}^w_p(h) \\
& + 2^{5/4} V_{p,\hat{p}}[w(x)l(x)]\left(\frac{d \log \frac{2ne}{d} + \log \frac{4}{\delta}}{n}\right)^{3/8}
\end{align*}
with
\begin{align*}
& V_{p,\hat{p}}[w(x)l(x)] \\
& = \max(\sqrt{\mathbb{E}_p[w^2(x)\ell_h^2(x)]}, \sqrt{\mathbb{E}_{\hat{p}}[w^2(x)\ell_h^2(x)]}) ~.
\end{align*}
With
$$
\mathcal C^{\mathcal H}_n = 2^{5/4} \left(d \log \frac{2ne}{d} + \log \frac{4}{\delta}\right)^{3/8}
$$
we get the simpler form
$$
R^w_p(h) \leq \hat{R}^w_p(h) + V_{p,\hat{p}}[w(x)l(x)]\frac{\mathcal C^{\mathcal H}_n}{n^{3/8}}~.
$$
\label{thm:cortes_bound}
\end{thmappthm}
We will also need the following result about estimating $\mbox{IPM}$s from finite samples from~\citet{sriperumbudur2009integral}.
\begin{thmappthm}[Estimation of IPMs from empirical samples~\citep{sriperumbudur2009integral}]
Let $M$ be a measurable space. Suppose $k$ is a measurable kernel such that $\sup_{x\in M} k(x,x) \leq C < \infty$ and $\mathcal H$ the reproducing kernel Hilbert space induced by $k$, with $\nu := \sup_{x\in M,f\in \mathcal H}f(x) < \infty$. Then, with $\hat{p}, \hat{q}$ the empirical distributions of $p, q$ from $m$ and $n$ samples respectively, and with probability at least $1-\delta$,
\begin{align*}
\left|\mbox{\emph{IPM}}_{\mathcal H}(p,q) - \mbox{\emph{IPM}}_{\mathcal H}(\hat{p}, \hat{q}) \right| \\
\leq \sqrt{18 \nu^2 \log \frac 4 \delta C} \left(\frac{1}{\sqrt m} + \frac{1}{\sqrt n} \right)
\end{align*}
\label{thm:mmd_approx}
\end{thmappthm}
We consider learning twice-differentiable, invertible representations $\Phi : \mathcal X \rightarrow \mathcal Z$, where $\mathcal Z$ is the representation space, and $\Psi : \mathcal Z \rightarrow \mathcal X$ is the inverse representation, such that $\Psi(\Phi(x)) = x$ for all $x$. Let $\mathcal E$ denote the space of such representation functions. For a design $\pi$, we let $p_{\pi,\Phi}(z,t)$ be the distribution induced by $\Phi$ over $\mathcal Z \times \mathcal T$, with $p^w_{\pi,\Phi}(z,t) := p_{\pi,\Phi}(z,t)w(\Psi(z),t)$ its re-weighted form and $\hat{p}^w_{\pi,\Phi}$ its re-weighted empirical form, following our previous notation. Note that we do not include $t$ in the representation itself, although this could be done in principle. Let $\mathcal G \subseteq \{h : \mathcal Z\times \mathcal T \rightarrow \mathcal Y\}$ denote a set of hypotheses $h(\Phi,t)$ operating on the representation $\Phi$ and let $\mathcal{F}$ denote the space of all compositions, $\mathcal{F} = \{f = h(\Phi(x), t) : h \in \mathcal G, \Phi \in \mathcal E\}$. We now restate and prove Theorem~\ref{thm:main}.
\begin{reptheorem}{thm:main}
Given is a labeled sample $D_\mu = \{(x_1, t_1, y_1), ..., (x_n, t_n, y_n)\}$ from $p_\mu$, and an unlabeled sample $D_\pi = \{(x'_1, t'_1), ..., (x'_m, t'_m)\}$ from $p_\pi$, with corresponding empirical measures $\hat{p}_{\mu}$ and $\hat{p}_{\pi}$. Suppose that $\Phi$ is a twice-differentiable, invertible representation, that $h(\Phi, t)$ is a hypothesis, and $f = h(\Phi(x),t) \in \mathcal{F}$. Define $m_t(x) = \mathbb{E}_Y[Y\mid X=x,T=t]$, let $\ell_{h, \Phi}(\Psi(z), t) := L(h(z, t), m_t(\Psi(z)))$ where $L$ is the squared loss, $L(y,y') = (y-y')^2$, and assume that there exists a constant $B_\Phi > 0$ such that $\ell_{h, \Phi}/B_\Phi \in \mathcal H \subseteq \{h : \mathcal Z\times \mathcal T \rightarrow \mathcal Y\}$, where $\mathcal H$ is a reproducing kernel Hilbert space of a kernel $k$ such that $k((z,t),(z,t)) < \infty$. Finally, let $w$ be a valid re-weighting of $p_{\mu, \Phi}$. Then with probability at least $1-2\delta$,
\begin{equation}
\begin{array}{ll}
R_\pi(f) & \leq \hat{R}^w_\mu(f) + B_\Phi\text{\emph{IPM}}_{\mathcal H}(\hat{p}_{\pi,\Phi}, \hat{p}_{\mu,\Phi}^w) \\
& +
V_{\mu}(w, \ell_f)\frac{\mathcal C_{n,\delta}^{\mathcal{F}}}{n^{3/8}} \\
& + \mathcal D^{\Phi,\mathcal H}_{\delta}\left(\frac{1}{\sqrt{m}} + \frac{1}{\sqrt{n}}\right) + \sigma_Y^2
\end{array}
\label{eq:app_thm_main}
\end{equation}
where $\mathcal C_{n,\delta}^{\mathcal{F}}$ measures the capacity of $\mathcal{F}$ and has only logarithmic dependence on $n$, $\mathcal D^{\Phi,\mathcal H}_{\delta}$ measures the capacity of $\mathcal H$, $\sigma^2_Y$ is the expected variance in potential outcomes, and
\begin{align*}
& V_{\mu}(w, \ell_f) \\
& = \max(\sqrt{\mathbb{E}_{p_\mu}[w^2(x,t)\ell_f^2(x,t)]}, \sqrt{\mathbb{E}_{\hat{p}_\mu}[w^2(x,t)\ell_f^2(x,t)]})~.
\end{align*}
A similar bound exists where $\mathcal H$ is the family of functions with Lipschitz constant at most 1, but with worse sample complexity.
\end{reptheorem}
\begin{proof}
We have by definition
\begin{align*}
& R_\pi(f) - R^w_\mu(f) = \mathbb{E}_{\pi}[\ell_f(x,t,y)] - \mathbb{E}_{\mu}[w(x,t)\ell_f(x,t,y)] \\
& = \int_{x,t,y} \ell_f(x,t,y)p(y\mid t,x) (p_\pi(x,t)-p^w_\mu(x,t))dxdtdy
\end{align*}
Define $\ell_{h,\Phi}(x,t) = L(h(\Phi(x),t), m_t(x))$ where $m_t(x) := \mathbb{E}[Y \mid T=t, X=x]$. Then, with $L$ the squared loss, $L(y,y') = (y-y')^2$, we have,
$$
\mathbb{E}_{\pi}[\ell_{f}(x,t,y)] = \mathbb{E}_{\pi}[\ell_{h,\Phi}(x,t)] + \sigma^2_\pi
$$
where $\sigma^2_\pi = \mathbb{E}_{p_\pi}[(Y - m_t(x))^2]$, and analogously for $\mu$. We get that
\begin{align*}
& R_\pi(f) - R^w_\mu(f) = \\
& \int_{\substack{z\in \mathcal Z\\ t \in \mathcal T}} \ell_{h,\Phi}(x,t)(p_\pi(x,t)-p^w_\mu(x,t))dxdt + \sigma^2_\pi + \sigma^2_\mu \\
& = \int_{\substack{z\in \mathcal Z\\ t \in \mathcal T}} \ell_{h,\Phi}(\Psi(z),t)(p_{\pi,\Phi}(z,t)-p^w_{\mu,\Phi}(z,t))|J_\Psi(z)| dzdt \\
& \;\;\;\; + \sigma^2_\pi + \sigma^2_\mu \\
& \leq A_\Phi \int_{\substack{z\in \mathcal Z\\ t \in \mathcal T}} \ell_{h,\Phi}(\Psi(z),t)(p_{\pi,\Phi}(z,t)-p^w_{\mu,\Phi}(z,t)) dzdt \\
& \;\;\;\; + \sigma^2_\pi + \sigma^2_\mu \\
& \leq A_\Phi \|\ell_{h,\Phi}\|_\mathcal H \sup_{h \in \mathcal H} \left| \int_{\substack{z\in \mathcal Z\\t\in \mathcal T}} h(\Psi(z),t)\left(p_{\pi,\Phi}(z,t)-p^w_{\mu,\Phi}(z,t)\right) dzdt \right| \\
& \;\;\;\; + \sigma^2_\pi + \sigma^2_\mu \\
& = B_\Phi\cdot \mbox{IPM}_\mathcal H(p_{\pi,\Phi}, p^w_{\mu,\Phi}) + \sigma^2_\pi + \sigma^2_\mu
\end{align*}
where $J_\Psi(z)$ is the Jacobian matrix of $\Psi$ evaluated at $z$, $A_\Phi \geq |J_\Psi(z)|$ for all $z \in \mathcal Z$, where $|J|$ is the absolute determinant of $J$, and the last equality uses $B_\Phi := A_\Phi \|\ell_{h,\Phi}\|_{\mathcal H}$.
By application of Theorem~\ref{thm:cortes_bound} we have with probability at least $1-\delta$,
$$
R^w_\mu(f) \leq \hat{R}^w_\mu(f) + V_{\mu}(w, \ell_f)\frac{\mathcal C_{n,\delta}^{\mathcal{F}}}{n^{3/8}}~.
$$
and by applying Theorem~\ref{thm:mmd_approx}, we have with probability at least $1-\delta$,
\begin{align*}
& \left|\mbox{IPM}_{\mathcal H}(p_{\pi,\Phi}, p^w_{\mu,\Phi}) - \mbox{IPM}_{\mathcal H}(\hat{p}_{\pi,\Phi}, \hat{p}^w_{\mu,\Phi}) \right| \\
& \leq \sqrt{18 \nu^2 \log \frac 4 \delta C} \left(\frac{1}{\sqrt m} + \frac{1}{\sqrt n} \right)
\end{align*}
We let $\sigma^2_Y = \sigma^2_\pi + \sigma^2_\mu$ and
$$
\mathcal D^{\Phi,\mathcal H}_{\delta} := B_\Phi \sqrt{18 \nu^2 \log \frac 4 \delta C}
$$
Combining these results, observing that $(1-\delta)^2 \geq 1-2\delta$, we obtain the desired result.
\end{proof}
\subsection{Asymptotics}
\begin{reptheorem}{thm:asymptotics}
Suppose $\mathcal H$ is a reproducing kernel Hilbert space given
by a bounded kernel. Suppose weak overlap holds in that
$\mathbb E[(p_\pi(x,t)/p_\mu(x,t))^2]<\infty$.
Then,
$$
\min_{h,\Phi,w}\mathcal L_\pi(h, \Phi, w; \beta) \leq \min_{f\in\mathcal F}R_\pi(f) + O(1/\sqrt{n}+1/\sqrt{m}) ~.
$$
\end{reptheorem}
\begin{proof}
Let $f^*=h^*\circ\Phi^*\in\argmin_{f\in\mathcal F}R_\pi(f)$
and let $w^*(x,t) = p_{\pi, \Phi^*}(\Phi^*(x), t)/p_{\mu, \Phi^*}(\Phi^*(x), t)$.
Since $\min_{h,\Phi,w}\mathcal L_\pi(h, \Phi, w; \beta)\leq\mathcal L_\pi(h^*, \Phi^*, w^*; \beta)$,
it suffices to show that $\mathcal L_\pi(h^*, \Phi^*, w^*; \beta) = R_\pi(f^*) + O(1/\sqrt{n}+1/\sqrt{m})$. We will work term by term:
\begin{align*}
& \mathcal L_\pi(h^*, \Phi^*, w^*; \beta) = \underbrace{\frac{1}{n}\sum_{i=1}^n w^*_i \ell_{h^*}(\Phi^*(x_i), t_i)}_{\encircle{A}} \\
& + \lambda_h\ \underbrace{\frac{\mathcal{R}(h^*)}{\sqrt{n}}}_{\encircle{B}} + \alpha\ \underbrace{\mbox{IPM}_G(\hat{q}_{\Phi^*}, \hat{p}^{w^*}_{\Phi^*})}_{\encircle{C}} + \lambda_{w}\ \underbrace{\frac{\|w^*\|_2}{n}}_{\encircle{D}}.
\end{align*}
For term $\encircle{D}$,
letting $w_i^*=w^*(x_i,t_i)$,
we have that by weak overlap
$$
\encircle{D}^2=\frac{1}{n}\times\frac1n\sum_{i=1}^n(w_i^*)^2=O_p(1/n),
$$
so that $\encircle{D}=O_p(1/\sqrt{n})$.
For term $\encircle{A}$, under ignorability, each term in the sum has expectation equal to $R_\pi(f^*)$, so by weak overlap and bounded second moments of the loss we have $\encircle{A}=R_\pi(f^*)+O_p(1/\sqrt{n})$. For term $\encircle{B}$, since $h^*$ is fixed we have deterministically that $\encircle{B}=O(1/\sqrt{n})$.
Finally, we address term $\encircle{C}$, which when expanded can be written as
$$\sup_{\|h\|\leq1}(\frac 1 m \sum_{i=1}^m h(\Phi^*(x'_i),t'_i)-\frac 1 n \sum_{i=1}^n w_i^* h(\Phi^*(x_i),t_i)).$$
Let $x''_i,t''_i$ for $i=1,\dots,m$ and $x'''_i,t'''_i$ for $i=1,\dots,n$ be new iid replicates of $x_1',t_1'$, i.e., new ghost samples drawn from the target design. By Jensen's inequality,
\begin{align*}
\mathbb{E}[\encircle{C}^2] & =
\mathbb{E}[\sup_{\|h\|\leq 1} (\frac 1 m \sum_{i=1}^m h(\Phi^*(x'_i),t_i') - \frac 1 n \sum_{i=1}^n w_i^*h(\Phi^*(x_i),t_i))^2] \\
& =
\mathbb{E}[\sup_{\|h\| \leq 1}(\frac 1 m \sum_{i=1}^m (h(\Phi^*(x'_i),t'_i) - \mathbb{E}[h(\Phi^*(x''_i),t''_i)]) \\
& \phantom{=}- \frac 1 n \sum_{i=1}^n (w_i^*h(\Phi^*(x_i),t_i) - \mathbb{E}[h(\Phi^*(x'''_i),t'''_i)]))^2] \\
& \leq
\mathbb{E}[\sup_{\|h\|\leq 1}(\frac 1 m \sum_{i=1}^m (h(\Phi^*(x'_i),t'_i)-h(\Phi^*(x''_i),t''_i)) \\
& \phantom{=}- \frac 1 n \sum_{i=1}^n (w_i^*h(\Phi^*(x_i),t_i)-h(\Phi^*(x'''_i),t'''_i)))^2] \\
& \leq
2\mathbb{E}[\sup_{\|h\|\leq1}(\frac 1 m \sum_{i=1}^m (h(\Phi^*(x'_i),t'_i) - h(\Phi^*(x''_i),t''_i)))^2] \\
& \phantom{=}+ 2\mathbb{E}[\sup_{\|h\|\leq1}(\frac 1 n \sum_{i=1}^n (w_i^*h(\Phi^*(x_i),t_i)-h(\Phi^*(x'''_i),t'''_i)))^2]
\end{align*}
Let $\xi_i(h)=h(\Phi^*(x'_i),t'_i)-h(\Phi^*(x''_i),t''_i)$ and let $\zeta_i(h)=w_i^*h(\Phi^*(x_i),t_i)-h(\Phi^*(x'''_i),t'''_i)$. Note that for every $h$, $\mathbb{E}[\zeta_i(h)]=\mathbb{E}[\xi_i(h)]=0.$ Moreover, $\mathbb{E}[\|\xi_i\|^2]\leq 4\mathbb{E}[K(\Phi^*(x'_i),t'_i,\Phi^*(x'_i),t'_i)]\leq M$. Similarly, $\mathbb{E}[\|\zeta_i\|^2]\leq 2\mathbb{E}[(w_i^*)^2]M+2M\leq M'<\infty$ because of weak overlap. Let $\zeta_i'$ for $i=1,\dots,n$ be iid replicates of $\zeta_i$ (a ghost sample) and let $\epsilon_i$ be iid Rademacher random variables. Because $\mathcal H$ is a Hilbert space, we have that $\sup_{\|h\|\leq 1}(A(h))^2=\|A\|^2=\left<A,A\right>$. Therefore, by Jensen's inequality,
\begin{align*}
& \mathbb{E}[\sup_{\|h\|\leq1}(\frac 1 n \sum_{i=1}^n(w_i^*h(\Phi^*(x_i),t_i)-h(\Phi^*(x'''_i),t'''_i)))^2] \\
& = \mathbb{E}[\sup_{\|h\|\leq1}(\frac1n\sum_{i=1}^n\zeta_i(h))^2] \\
& = \mathbb{E}[\sup_{\|h\|\leq1}(\frac1n\sum_{i=1}^n(\zeta_i(h)-\mathbb{E}[\zeta'_i(h)]))^2] \\
& \leq \mathbb{E}[\sup_{\|h\|\leq1}(\frac1n\sum_{i=1}^n(\zeta_i(h)-\zeta'_i(h)))^2] \\
& = \mathbb{E}[\sup_{\|h\|\leq1}(\frac 1 n \sum_{i=1}^n\epsilon_i(\zeta_i(h)-\zeta'_i(h)))^2] \\
& \leq \frac4{n^2}\mathbb{E}[\sup_{\|h\|\leq1}(\sum_{i=1}^n\epsilon_i\zeta_i(h))^2] \\
& = \frac4{n^2}\mathbb{E}[\|\sum_{i=1}^n\epsilon_i\zeta_i\|^2] \\
& = \frac4{n^2}\mathbb{E}[\sum_{i,j=1}^n\epsilon_i\epsilon_j\left<\zeta_i,\zeta_j\right>] \\
& = \frac4{n^2}\mathbb{E}[\sum_{i=1}^n\|\zeta_i\|^2] \\
& = \frac4{n^2}\sum_{i=1}^n \mathbb{E}[\|\zeta_i\|^2] \\
& \leq \frac{4M'}n
\end{align*}
An analogous argument applies to the $\xi_i$, showing that $\mathbb{E}[\encircle{C}^2]=O(1/n+1/m)$ and hence $\encircle{C}=O_p(1/\sqrt{n}+1/\sqrt{m})$ by Markov's inequality.
\end{proof}
\section{Implementation}
\label{app:model}
We implemented all neural network models (IPM-WNN, RCFR{}) in TensorFlow as feed-forward fully-connected networks with ELU activations. All architectures have a representation with two hidden layers of 32 and 16 hidden units, and hypotheses (one for each outcome) of 1 layer of 16 hidden units. The networks were trained using stochastic gradient descent with the ADAM optimizer with a learning rate of $10^{-3}$. The batch size was 128. Representations were normalized by dividing by the norm. Weight functions were implemented as 2 hidden layers of 32 units each, as functions of the representation $\Phi$. $\sigma$ in the RBF kernel was set to 1.0. $\lambda_w$ was set to 0.1 and $\lambda_h$ to $10^{-4}$.
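For concreteness, the quantity the networks are trained to minimize can be sketched in numpy as a schematic version of \eqref{eq:emp_loss}: a weighted factual risk, an RBF-MMD$^2$ imbalance penalty between target and re-weighted source representations, and a weight-norm penalty. This is our own simplified sketch evaluated on fixed inputs, not the TensorFlow training code described above; the toy data and hyperparameter values are illustrative.

```python
import numpy as np

def rbf_gram(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def objective(h_pred, y, w, Z_mu, Z_pi, alpha=1.0, lam_w=0.1, sigma=1.0):
    """Schematic RCFR-style objective:
    weighted squared loss + alpha * weighted MMD^2 + lam_w * ||w||_2 / n."""
    n = len(y)
    risk = np.mean(w * (h_pred - y) ** 2)          # re-weighted factual risk
    Kss = rbf_gram(Z_mu, Z_mu, sigma)
    Ktt = rbf_gram(Z_pi, Z_pi, sigma)
    Kst = rbf_gram(Z_mu, Z_pi, sigma)
    # ||(1/n) sum_i w_i phi(z_i) - (1/m) sum_j phi(z'_j)||^2 in the RKHS.
    mmd2 = (w @ Kss @ w) / n**2 + Ktt.mean() - 2 * (w @ Kst).mean() / n
    penalty = lam_w * np.linalg.norm(w) / n        # discourages extreme weights
    return risk + alpha * max(mmd2, 0.0) + penalty

rng = np.random.default_rng(0)
Z_mu, Z_pi = rng.normal(size=(50, 2)), rng.normal(loc=0.5, size=(50, 2))
y = Z_mu.sum(1)
w = np.ones(50)                                    # uniform weights, mean 1
loss_val = objective(y, y, w, Z_mu, Z_pi)          # zero risk, nonzero penalties
```

With perfect predictions and identical source/target representations the objective reduces to the weight-norm penalty alone, which matches the role of terms $\encircle{A}$--$\encircle{D}$ in the proof of Theorem~\ref{thm:asymptotics}.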
\section{Experiments}
\subsection{Synthetic}
We use a two-layer MLP with ELU units and layer sizes 10, 10 as parameterization of the sample weights. Weights are normalized by dividing by the mean.
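Dividing by the empirical mean makes the learned scores a valid re-weighting in the sense of Definition~\ref{def:valid_weight} on the sample. A minimal sketch (the raw scores are hypothetical):

```python
import numpy as np

def normalize_weights(raw):
    """Rescale positive raw scores so their empirical mean is 1."""
    raw = np.asarray(raw, dtype=float)
    return raw / raw.mean()

w = normalize_weights([0.2, 1.0, 3.0, 0.8])
```

After normalization the weights remain positive and average to one, so the re-weighted empirical risk stays on the same scale as the unweighted one.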
\subsection{IHDP}
\label{app:exp}
In Figures~\ref{fig:app_ihdp_cate_vs_w-reg}--\ref{fig:ihdp_cate_vs_factual}, we see two different views of the IHDP results.
\begin{figure}[tbp!]
\centering
\includegraphics[width=.9\textwidth]{fig/ihdp_almost_linear.pdf}
\caption{\label{fig:app_ihdp_cate_vs_w-reg} Error in CATE estimation on IHDP as a function of re-weighting regularization strength $\lambda_w$. We see that a) for small imbalance penalties $\alpha$, re-weighting (low $\lambda_w$) has no effect, b) for moderate $\alpha$, less uniform re-weighting (smaller $\lambda_w$) improves the error, c) for large $\alpha$, weighting helps, but overall error increases.}
\end{figure}
\begin{figure}[tbp!]
\centering
\includegraphics[width=.9\textwidth]{fig/ihdp_almost_linear_gap_vs_factual_label.pdf}
\caption{\label{fig:ihdp_cate_vs_factual} Source prediction error on IHDP. We compare the ratio of CATE error to source error. Color represents $\alpha$ (see left) and size $\lambda_w$. We see that for large $\alpha$, the source error is more representative of CATE error, but does not improve in absolute value without weighting. Here, $\alpha$ was fixed. Best viewed in color.}
\end{figure}
\section{Experiments}
\label{sec:exp}
\subsection{Synthetic experiments for domain adaptation}
\label{sec:synth}
We create a synthetic domain adaptation experiment to highlight the benefit of using a learned re-weighting function to minimize weighted risk over using importance sampling weights $w^*(x) = p_\pi(x)/p_\mu(x)$ for small sample sizes. We observe $n$ labeled source samples, distributed according to $p_{\mu}(x) = \mathcal{N}(x; m_{\mu}, I_d)$ and predict for $n$ unlabeled target samples drawn according to $p_{\pi}(x) = \mathcal{N}(x; m_{\pi}, I_d)$ where $I_d$ is the $d$-dimensional identity matrix, $m_{\mu} = \boldsymbol{1}_d/2$, $m_{\pi} = -\boldsymbol{1}_d/2$ and $\boldsymbol{1}_d$ is the $d$-dimensional vector of all ones, here with $d=10$. We let $\beta \sim \mathcal{N}(\boldsymbol{0}_d,1.5I_d)$ and $c \sim \mathcal{N}(0,1)$ and let $y = \sigma(\beta^\top x + c)$ where $\sigma(z) = 1/(1+e^{-z})$. Importance sampling weights, $w^*(x) = p_{\pi}(x)/p_{\mu}(x)$, are known. In experiments, we vary $n$ from 10 to 600. We fit (misspecified) linear models---the identity representation $\Phi(x) = x$ is used for both approaches---$f(x) = \beta^\top x + \gamma$ to the logistic outcome, and compare minimizing a weighted source risk by a) parameterizing sample weights as a small feed-forward neural network to minimize \eqref{eq:emp_loss} (ours) b) using importance sampling weights (baseline), both using gradient descent. For our method, we add a small variance penalty, $\lambda_w = 10^{-3}$, to the learned weights, use MMD with an RBF-kernel of $\sigma=1.0$ as IPM, and let $\alpha=10$. We compare to exact importance sampling weights (IS) as well as clipped IS weights (ISC), $w_M(x) = \min(w(x), M)$ for $M \in \{5, 10\}$, a common way of reducing variance of re-weighting methods~\citep{swaminathan2015counterfactual}.
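The data-generating process and the IS/ISC baselines above can be sketched as follows (the sample size, seed, and the clipping level $M=5$ are illustrative; for isotropic unit-variance Gaussians the density ratio is available in closed form):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 200
m_mu, m_pi = 0.5 * np.ones(d), -0.5 * np.ones(d)

X_src = rng.normal(loc=m_mu, size=(n, d))   # labeled source sample ~ p_mu
X_tgt = rng.normal(loc=m_pi, size=(n, d))   # unlabeled target sample ~ p_pi

beta = rng.normal(scale=np.sqrt(1.5), size=d)
c = rng.normal()
y_src = 1.0 / (1.0 + np.exp(-(X_src @ beta + c)))   # logistic outcome

def log_density_ratio(X):
    # log p_pi(x) - log p_mu(x) for N(m_pi, I_d) vs N(m_mu, I_d).
    return ((X - m_mu[None]) ** 2 - (X - m_pi[None]) ** 2).sum(1) / 2.0

w_is = np.exp(log_density_ratio(X_src))     # exact IS weights
w_clip = np.minimum(w_is, 5.0)              # clipped variant, M = 5
```

With the means separated by $\|\boldsymbol{1}_d\| = \sqrt{10}$, the exact weights are extremely heavy-tailed at these sample sizes, which is the variance problem that clipping (and, in our method, learned weights) mitigates.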
\begin{figure}[tbp!]
\centering
\includegraphics[width=.92\textwidth]{fig/finite_dim10_N100_clip_log.pdf}
\caption{\label{fig:finite}Target prediction error on synthetic domain adaptation experiment, comparing learned re-weighting (RCFR) and exact/clipped importance sampling weights (IS/ISC). Variance of IS hurts performance for small sample sizes.}
\end{figure}
In Figure~\ref{fig:finite}, we see that our proposed method behaves well at small sample sizes compared to importance sampling methods. The poor performance of exact IS weights is expected at smaller samples, as single samples are given very large weight, resulting in hypotheses that are highly sensitive to the training set. While clipped weights alleviate this issue, they do not preserve the relevance ordering of high-weight samples, as many are given the truncation value $M$, in contrast to the re-weighting learned by our method. True domain densities are known only to IS methods.
\subsection{Conditional average treatment effects --- IHDP}
\label{sec:exp_cate}
We evaluate our framework in the CATE estimation setting, see Section~\ref{sec:cate}. Our task is to predict the expected difference between potential outcomes conditioned on pre-treatment variables, \emph{for a held-out sample} of the population. We compare our results to ordinary least squares (OLS) (with one regressor per outcome), OLS-IPW (re-weighted OLS according to a logistic regression estimate of propensities), Random Forests, Causal Forests~\citep{wager2017estimation}, BART~\citep{chipman2010bart}, and CFRW~\citep{shalit2016estimating} (with Wasserstein penalty). Finally, we use as baseline (IPM-WNN): first weights are found by IPM minimization in the input space~\citep{gretton2009covariate,kallus2016generalized}, then used in a re-weighted neural net regression, with the same architecture as our method.
Our implementation, dubbed RCFR{} for \emph{Re-weighted CounterFactual Regression}, parameterizes representations $\Phi(x)$, weighting functions $w(\Phi,t)$ and hypotheses $h(\Phi,t)$ using neural networks, trained by minimizing \eqref{eq:emp_loss}. We use the RBF-kernel maximum mean discrepancy as the IPM~\citep{gretton2012kernel}. For a description of the architecture, training procedure and hyperparameters, see the Appendix. We compare results using uniform $w=1$ and learned weights, setting the balance parameter $\alpha$ either fixed, by an oracle (test-set error), or adaptively using the heuristic described in Section~\ref{sec:model}. To pick other hyperparameters, we split training sets into one part used for function fitting and one used for early stopping and hyperparameter selection. Hyperparameters for regularization are chosen based on the empirical loss on a held-out source (factual) sample.
The Infant Health and Development Program (IHDP) dataset is a semi-synthetic binary-treatment benchmark~\citep{hill2011bayesian}, split into training and test sets by~\citet{shalit2016estimating}. IHDP has a set of $d=25$ real-world continuous and binary features describing $n=747$ children and their mothers, a real-world binary treatment made non-randomized through biased subsampling by~\citet{hill2011bayesian}, and a synthesized continuous outcome that can be used to compute the ground-truth CATE error. Average results over 100 different realizations/settings of the outcome are presented in Table~\ref{tab:ihdp}. We see that our proposed method achieves state-of-the-art results, and that adaptively choosing $\alpha$ does not hurt performance much. Furthermore, we see a substantial improvement from using non-uniform sample weights. In Figure~\ref{fig:ihdp} we take a closer look at the behavior of our model as we vary its hyperparameters on the IHDP dataset. Between the two plots we can draw the following conclusions: a) For moderate to large $\alpha \in [10, 100]$, we observe a marginal gain from using the IPM penalty. This is consistent with the observations of \citet{shalit2016estimating}. b) For large $\alpha \in [10, 1000]$, we see a large gain from using a non-uniform re-weighting (small $\lambda_w$). c) While large $\alpha$ makes the factual error more representative of the counterfactual error, using it without re-weighting results in higher absolute error. We believe that the moderate sample size of this dataset is one of the reasons for the usefulness of our method. See the Appendix for a complementary view of these results.
\begin{table}[tbp]
\caption{Causal effect estimation on IHDP. CATE error $\mbox{RMSE}(\hat{\tau})$, target prediction error $\hat{R}_\pi(f)$ and std errors. Lower is better.}\label{tab:ihdp}
\begin{tabular}{lcc}
\hline
& $\mbox{RMSE}(\hat{\tau})$ & $\hat{R}_\pi(f)$ \\
\hline
OLS & $2.3 \pm .11$ & $1.1 \pm .05$ \\
OLS-IPW & $2.4 \pm .11$ & $1.2 \pm .05$ \\
Random For. & $6.6 \pm .30$ & $4.1 \pm .18$ \\
Causal For. & $3.8 \pm .18$ & $1.8 \pm .08$ \\
BART & $2.3 \pm .10$ & $1.7 \pm .07$ \\
IPM-WNN & $1.2 \pm .12$ & $.65 \pm .02$ \\
CFRW & $.76 \pm .02$ & $.46 \pm .01$ \\
\hline
RCFR{} {\small{Oracle $\alpha$, $w=1$}} & $.81 \pm .07$ & $.47 \pm .03$ \\
RCFR{} {\small{Oracle $\alpha$}} & $.65 \pm .04$ & $.38 \pm .01$ \\
RCFR{} {\small{Adapt. $\alpha$}} & $.67 \pm .05$ & $.37 \pm .01$ \\
\hline
\end{tabular}
\end{table}
\begin{figure}[tbp!]
\centering
\includegraphics[width=.92\textwidth]{fig/ihdp_almost_linear.pdf}
\caption{\label{fig:ihdp} For small imbalance penalties $\alpha$, re-weighting (low $\lambda_w$) has no effect; for moderate $\alpha$, non-uniform re-weighting (smaller $\lambda_w$) lowers error; for large $\alpha$, weighting helps, but overall error increases. Best viewed in color.}
\end{figure}
\section{Discussion}
We have proposed a theory and an algorithmic framework for learning to predict outcomes of interventions under shifts in design---changes in both intervention policy and feature domain. The framework combines representation learning and sample re-weighting to balance source and target designs, emphasizing information from the source sample relevant for the target. Existing re-weighting methods either use pre-defined weights or learn weights based on a measure of distributional distance in the input space. These approaches are highly sensitive to the choice of metric used to measure balance, as the input may be high-dimensional and contain information that is not predictive of the outcome. In contrast, by learning weights to achieve balance in representation space, we base our re-weighting only on information that is predictive of the outcome. In this work, we apply this framework to causal effect estimation, but emphasize that joint representation learning and re-weighting is a general idea that could be applied in many applications with design shift.
Our work suggests that distributional shift should be measured and adjusted for in a representation space relevant to the task at hand. Joint learning of this space and the associated re-weighting is attractive, but several challenges remain, including improving optimization of the proposed bound and relaxing the invertibility constraint on representations. For example, variable selection methods are not covered by our current theory, as they induce a non-invertible representation, but a similar intuition holds there---only predictive attributes should be used when measuring imbalance. We believe that addressing these limitations is a fruitful path forward for future work.
\subsubsection*{Acknowledgements}
This work was supported by Office of Naval Research Award No. N00014-17-1-2791 (DS \& FJ) and by the National Science Foundation under Grant No. 1656996 (NK).
\section{Introduction}
A long-term goal in artificial intelligence is for agents to learn how to \emph{act}. This endeavor relies on accurately predicting and optimizing for the outcomes of actions, and fundamentally involves estimating counterfactuals---what would have happened if the agent acted differently? In many applications, such as the treatment of patients in hospitals, experimentation is infeasible or impractical, and we are forced to learn from \emph{biased}, observational data. Doing so requires adjusting for the \emph{distributional shift} that exists between groups of patients that received different treatments. A related kind of distributional shift arises in unsupervised domain adaptation, the goal of which is to learn predictive models for a target domain, observing ground truth only in a source domain.
In this work, we pose both domain adaptation and treatment effect estimation as special cases of prediction across shifting \emph{designs}, referring to changes in both action policy and feature domain. We separate policy from domain as we wish to make \emph{causal} statements about the policy, but not about the domain.
For example, to learn a treatment policy from observational data, personalizing the choice between medication $A$ and $B$, one must adjust for the fact that treatment $A$ was systematically given to patients of different characteristics from those who received treatment $B$. We call this predicting \emph{under a shift in policy}. Furthermore, if all of our observational data comes from hospital $P$, but we wish to predict counterfactuals for patients in hospital $Q$, with a population that differs from $P$, an additional source of distributional shift is at play. We call this \emph{a shift in domain.} Together, we refer to the combination of domain and policy as the \emph{design}. The design for which we observe ground truth is called the \emph{source}, and the design of interest the \emph{target}.
The two most common approaches for addressing distributional shift are to learn shift-invariant representations of the data~\citep{ajakan2014domain} or to perform sample re-weighting or matching~\citep{shimodaira2000improving, kallus2016generalized}.
Representation learning approaches attempt to extract only information from the input that is invariant to a change in design \emph{and} predictive of the variable of interest. Such representations are typically learned by fitting deep neural networks in which activations of deeper layers are regularized to be distributionally similar across designs~\citep{ajakan2014domain,long2015learning}.
Although representation learning can be shown to reduce the error associated with distributional shift \citep{long2015learning} in some cases, standard approaches are biased, even in the limit of infinite data, as they also penalize the use of predictive information.
In contrast, re-weighting methods correct for distributional shift by assigning higher weight
to samples from the source design that are representative of the target design, often using importance sampling. This idea has been well studied in, for example, causal inference~\citep{rosenbaum1983central}, domain adaptation~\citep{shimodaira2000improving} and reinforcement learning~\citep{precup2001off}. For example, in causal effect estimation, importance sampling is equivalent to re-weighting units by the inverse probability of observed treatments (treatment propensity). Re-weighting with knowledge of importance sampling weights often leads to asymptotically unbiased estimators of the target outcome, but may suffer from high variance in finite samples~\citep{swaminathan2015counterfactual}.
A significant hurdle in applying re-weighting methods is that optimal weights are rarely known in practice. Weights can be estimated as the inverse of estimated feature or treatment densities \citep{rosenbaum1983central,freedman2008weighting}, but this plug-in approach can lead to highly unstable estimates. More stable methods learn weights by minimizing distributional distance metrics~\citep{gretton2009covariate,kallus2016generalized,kallus2017optimal,zubizarreta2015stable}. Closely related, matching \citep{stuart2010matching} produces weights by finding units in the source design that are similar in some metric to units in the target design. Specifying a distributional or unit-wise metric is challenging, especially if the input space is high-dimensional, where no metric incorporating all features can be made small through weighting. This has inspired heuristics such as first performing variable selection and then balancing or finding matches only in the selected covariates.
In this work, we bring together shift-invariant representation learning and re-weighting methods. We show that existing representation learning approaches minimize an upper bound on the generalization under design-shift, implicitly using uniform sample weights, and that there exist weights that improve the tightness of these bounds. Our key algorithmic contribution is to jointly learn a representation $\Phi$ of the input space and a weighting function $w(\Phi)$ to minimize a) the re-weighted empirical risk and b) a re-weighted measure of distributional shift between designs. This is useful also for the identity representation $\Phi(x) = x$, as it allows for principled control of the variance of estimators through regularization of the re-weighting function $w(x)$, mitigating the issues of exact importance sampling methods. Further, this allows us to evaluate $w$ on hold-out samples to select hyperparameters or do early stopping. Finally, letting $w$ depend on $\Phi$ alleviates the problem of choosing a metric by which to optimize sample weights, as $\Phi$ is trained to extract information predictive of the outcome. We apply our theory and algorithmic framework for generalization error under a shift in design to the case of treatment effect estimation.
\paragraph{Main contributions}
We bring together two techniques used to overcome distributional shift between designs---re-weighting and representation learning, with complementary robustness properties, generalizing existing methods based on either technique.
We give finite-sample generalization bounds for prediction under design shift, \emph{without} assuming access to importance sampling weights \emph{or} to a well-specified model, and develop an algorithmic framework to minimize these bounds.
We propose a neural network architecture that jointly learns a representation of the input and a weighting function to improve balance across changing settings. Finally, we apply our proposed algorithm to the task of predicting causal effects from observational data, achieving state-of-the art results on a widely used benchmark.
\section{Joint learning of representations and sample weights}
\label{sec:model}
Motivated by the theoretical insights of Section~\ref{sec:theory}, we propose to jointly learn a representation $\Phi(x)$, a re-weighting $w(x,t)$ and an hypothesis $h(\Phi, t)$ by minimizing a bound on the risk under the target design, see \eqref{eq:thm_main}. This approach improves on previous work in two ways: it alleviates the bias of \citet{shalit2016estimating} when sample sizes are large, see Section~\ref{sec:theory}, and it increases the flexibility of the balancing method of \cite{gretton2009covariate} by learning the representation to balance.
For notational brevity, we let $w_i = w(x_i, t_i)$. Recall that $\hat{p}^w_{\pi,\Phi}$ is the re-weighted empirical distribution of representations $\Phi$ under $p_\pi$. The training objective of our algorithm is the RHS of \eqref{eq:thm_main}, with hyperparameters $\beta = (\alpha, \lambda_h, \lambda_w)$ substituted for model (and representation) complexity terms,
\begin{align}
\mathcal L_\pi(h, \Phi, w; \beta) & = \underbrace{\frac{1}{n}\sum_{i=1}^n w_i \ell_h(\Phi(x_i), t_i) + \frac{\lambda_h}{\sqrt{n}} \mathcal{R}(h)}_{\mathcal L_\pi^h(h, \Phi, w; D, \alpha, \lambda_h)} \nonumber \\
& + \underbrace{\alpha\ \mbox{IPM}_G(\hat{p}_{\pi, \Phi}, \hat{p}^{w}_{\mu, \Phi}) + \lambda_{w} \frac{\|w\|_2}{n}}_{\mathcal L_\pi^w( \Phi, w; D, \alpha, \lambda_w)}
\label{eq:emp_loss}
\end{align}
where $\mathcal{R}(h)$ is a regularizer of $h$, such as $\ell_2$-regularization. We can show the following result.
\begin{thmthm}
Suppose $\mathcal H$ is a reproducing kernel Hilbert space given
by a bounded kernel. Suppose weak overlap holds in that
$\mathbb E[(p_\pi(x,t)/p_\mu(x,t))^2]<\infty$.
Then,
$$
\min_{h,\Phi,w}\mathcal L_\pi(h, \Phi, w; \beta) \leq \min_{f\in\mathcal F}R_\pi(f) + O_p(1/\sqrt{n}+1/\sqrt{m}) ~.
$$
Consequently, under the assumptions of Thm.~\ref{thm:main}, for sufficiently large $\alpha$ and $\lambda_w$,
$$
R_\pi(\hat f_n)\leq \min_{f\in\mathcal F}R_\pi(f) + O_p(1/n^{3/8}+1/\sqrt{m}).
$$
In words, the minimizers of \eqref{eq:emp_loss} converge to the representation and hypothesis that minimize the counterfactual risk, in the limit of infinite samples.
\label{thm:asymptotics}
\end{thmthm}
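To make the term structure of \eqref{eq:emp_loss} concrete, the objective can be assembled from precomputed quantities as below (our own illustrative sketch; in practice each term is computed on mini-batches and back-propagated through):

```python
import numpy as np

def objective(weighted_losses, reg_h, ipm_value, w, alpha, lam_h, lam_w):
    """Evaluate the bound-inspired objective:
    (1/n) sum_i w_i l_i  +  lam_h/sqrt(n) * R(h)
                         +  alpha * IPM  +  lam_w * ||w||_2 / n.
    `weighted_losses` holds the products w_i * l_i; `w` is used for the norm."""
    n = len(weighted_losses)
    risk = float(np.mean(weighted_losses))
    return (risk + lam_h / np.sqrt(n) * reg_h
            + alpha * ipm_value + lam_w * np.linalg.norm(w) / n)
```

The variance penalty $\lambda_w \|w\|_2 / n$ discourages concentrating weight on few samples, which is the failure mode of exact importance sampling at small $n$.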
\paragraph{Implementation}
Minimization of $\mathcal L_\pi(h, \Phi, w; \beta)$ over $h, \Phi$ and $w$ is, while motivated by Theorem~\ref{thm:asymptotics}, a difficult optimization problem to solve in practice. For example, adjusting $w$ to minimize the empirical risk term may result in overemphasizing ``easy'' training examples, resulting in a poor local minimum. Perhaps more importantly, ensuring invertibility of $\Phi$ while maintaining good accuracy is non-trivial for many representation learning frameworks, such as deep neural networks. In our implementation, we deviate from the theory on these points by fitting the re-weighting $w$ based only on the imbalance and variance terms, and by not explicitly enforcing invertibility. As a heuristic, we split the objective \eqref{eq:emp_loss} in two and use only the IPM term and regularizer to learn $w$. In short, we adopt the following alternating procedure.
\begin{align}
h^{k}, \Phi^{k} & = \argmin_{h, \Phi} \;\; \mathcal L_\pi^h(h, \Phi, w; D, \alpha, \lambda_h), \\
w^{k+1} & = \argmin_{w} \;\; \mathcal L_\pi^w( \Phi^k, w; D, \alpha, \lambda_w)
\end{align}
\begin{figure}[tbp!]
\centering
\includegraphics[width=.9\textwidth]{fig/architecture_gen.pdf}
\caption{Architecture for predicting outcomes under design shift. A re-weighting function $w$ is fit jointly with a representation $\Phi$ and hypothesis $h$ of the potential outcomes, minimizing a bound on the target risk. Dashed lines are not back-propagated through. Regularization not shown.}\label{fig:architecture}
\end{figure}
The re-weighting function $w(x,t)$ could be represented by one free parameter per training point, as it is only used to learn the model, not for prediction. However, we propose to let $w$ be a parametric function of $\Phi(x)$. Doing so ensures that information predictive of the outcome is used for balancing, and lets us compute weights and the objective on a hold-out set, to perform early stopping or select hyperparameters. This is not possible with existing re-weighting methods such as~\citet{gretton2009covariate,kallus2016generalized}.
An example architecture for the treatment effect estimation setting is presented in Figure~\ref{fig:architecture}. By Proposition~\ref{prop:cate}, estimating treatment effects involves predicting under the two constant policies---treat-everyone and treat-no-one. In Section~\ref{sec:exp}, we evaluate our method in this task.
As noted by \citet{shalit2016estimating}, choosing hyperparameters for counterfactual prediction is fundamentally difficult, as we cannot observe ground truth for counterfactuals. In this work, we explore setting the balance parameter $\alpha$ adaptively. $\alpha$ is used in \eqref{eq:emp_loss} in place of $B_\Phi$, a factor measuring the complexity of the loss and representation function as functions of the input, a quantity that changes during training. As a heuristic, we use an approximation of the Lipschitz constant of $\ell_f$, with $f = h(\Phi(x),t)$, based on observed examples: $\alpha_{h,\Phi} = \max_{i,j \in [n]} |\ell_f(x_i,t_i,y_i) - \ell_f(x_j,t_j,y_j)|/\|x_i - x_j\|_2$. We use a moving average to improve stability.
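The heuristic for $\alpha_{h,\Phi}$ can be sketched as follows (our own $O(n^2)$ pairwise version for clarity; function and argument names are ours, and in practice one would subsample pairs):

```python
import numpy as np

def adaptive_alpha(losses, x, prev=None, decay=0.9):
    """Approximate Lipschitz constant of the loss over observed examples:
    max_{i,j} |l_i - l_j| / ||x_i - x_j||_2, smoothed by a moving average
    with the previous estimate `prev` to improve stability."""
    n = len(losses)
    best = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.linalg.norm(x[i] - x[j])
            if dist > 1e-12:
                best = max(best, abs(losses[i] - losses[j]) / dist)
    return best if prev is None else decay * prev + (1 - decay) * best
```

For a loss that is linear in $x$ with slope 2, the estimate recovers the true Lipschitz constant 2 exactly.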
\section{Predicting outcomes under design shift}
\label{sec:problem}
The goal of this work is to accurately predict outcomes of interventions $T \in \mathcal T$ in contexts $X \in \mathcal X$ drawn from a \emph{target design} $p_\pi(X, T)$.
The result of intervening with $t\in\mathcal T$ is the potential outcome $Y(t)\in\mathcal Y$ \citep[Ch. 1--2]{imbens2015causal}, which has a stationary distribution $p(Y(t) \mid X)$ given context $X$. Assuming a stationary outcome is akin to the \emph{covariate shift} assumption~\citep{shimodaira2000improving}, often used in domain adaptation.\footnote{Equivalently, we may write $p_\pi(Y(t)\mid X) = p_\mu(Y(t)\mid X)$.}
For example, in the binary intervention setting, $Y(1)$ represents the outcome under treatment and $Y(0)$ the outcome under control.
The target design consists of two components: the \emph{target policy} $p_\pi(T\mid X)$, which describes how one intends to map observations of contexts (such as patient prognostics) to interventions (such as pharmacological treatments) and the \emph{target domain} $p_\pi(X)$, which describes the population of contexts to which the policy will be applied.
The target design is known to us only through $m$ unlabeled samples $(x'_1, t'_1), \dots, (x'_m, t'_m)$ from $p_\pi(X, T)$.
Outcomes are only available to us in labeled samples from a source
domain:
$(x_1, t_1, y_1), \dots, (x_n, t_n, y_n)$, where $(x_i,t_i)$ are draws from a source design $p_\mu(X,T)$ and $y_i=y_i(t_i)$ is a draw from $p_T(Y\mid X)$, corresponding only to the \emph{factual} outcome $Y(T)$ of the treatment administered.
Like the target design, the source design consists of a domain of contexts for which we have data and a policy, which describes the (unknown) historical administration of treatment in the data.
Only the factual outcomes of the treatments administered are observed, while the counterfactual outcomes $y_i(t)$ for $t\neq t_i$ are, naturally, unobserved.
Our focus is the \emph{observational}, or \emph{off-policy}, setting, in which interventions in the source design depend on the attributes $X$, $p_\mu(T\mid X) \neq p_\mu(T)$, and in which the covariate marginals are shifted in general, $p_\mu(X) \neq p_\pi(X)$. This encapsulates both the covariate shift often observed between treated and control populations in observational studies and the covariate shift between the domain of the study and the domain of an eventual wider intervention.
Examples of this problem are plentiful: in addition to the example given in the introduction, consider predicting the return of an advertising policy based on the historical results of a different policy, applied to a different population of customers.
We stress that we are interested in the \emph{causal effect} of an intervention $T$ on $Y$, conditioned on $X$. As such, we cannot think of $X$ and $T$ as a single variable.
Without additional assumptions, it is impossible to deduce the effect of an intervention based on observational data alone~\citep{pearl2009causality}, as it amounts to disentangling correlation and causation. Crucially, for any unit $i$, we can observe the potential outcome $y_i(t)$ of at most one intervention $t$. In our analysis, we make the following standard assumptions.
\begin{thmasmp}[Consistency, ignorability and overlap]%
For any unit $i$, assigned to intervention $t_i$, we observe $Y_i = Y(t_i)$. Further, $\{Y(t)\}_{t \in \mathcal T}$ and the data-generating process $p_\mu(X, T, Y)$ satisfy strong ignorability: $\{Y(t)\}_{t \in \mathcal T} \independent T \mid X$ and overlap: $\Pr_{p_\pi}(p_\mu(T\mid X)>0)=1$.%
\label{asmp:std}%
\end{thmasmp}
Assumption~\ref{asmp:std} is a sufficient condition for \emph{causal identifiability}~\citep{rosenbaum1983central}. Ignorability is also known as the \emph{no hidden confounders} assumption, indicating that all variables that cause both $T$ and $Y$ are assumed to be measured. Under ignorability therefore, any domain shift in $p(X)$ cannot be due to variables that causally influence $T$ and $Y$, other than through $X$. Under Assumption~\ref{asmp:std}, potential outcomes equal conditional expectations: $\mathbb{E}[Y(t) \mid X = x] = \mathbb{E}[Y \mid X=x, T=t]$, and we may predict $Y(t)$ by regression.
We further assume common domain support, $\forall x\in \mathcal X: p_\pi(X=x) > 0 \Rightarrow p_\mu(X=x) > 0$. Finally, we adopt the notation $p(x) := p(X = x)$.
\subsection{Re-weighted risk minimization}
We attempt to learn predictors $f : \mathcal X \times \mathcal T \rightarrow \mathcal Y$ such that $f(x, t)$ approximates $\mathbb{E}[Y \mid X=x, T=t]$. Recall that under Assumption~\ref{asmp:std}, this conditional expectation is equal to the (possibly counterfactual) potential outcome $Y(t)$, conditioned on $X$. Our goal is to ensure that hypotheses $f$ are accurate under a design $p_\pi$ that deviates from the data-generating process, $p_\mu$. This is unlike standard supervised learning for which $p_\pi = p_\mu$.
We measure the (in)ability of $f$ to predict outcomes under $\pi$, using the expected risk,
\begin{equation}
R_\pi(f) := \mathbb{E}_{\substack{x, t, y \sim p_\pi}}[\ell_f(x, t, y)]
\label{eq:risk}
\end{equation}
based on a sample from $\mu$, $D_\mu^n = \{(x_i, t_i, y_i) \sim p_\mu; i=1, ..., n\}$. Here, $\ell_f(x, t, y) := L(f(x,t), y)$ is an appropriate loss function, such as the squared loss, $L(y,y') := (y - y')^2$, or the log-loss, depending on application. As outcomes under the target design $p_\pi$ are not observed, even through a Monte Carlo sample, we cannot directly estimate \eqref{eq:risk} using the empirical risk under $p_\pi$. A common way to solve this is to use \emph{importance sampling}~\citep{shimodaira2000improving}---the observation that if $p_\mu$ and $p_\pi$ have common support, with $w^*(x,t) = p_\pi(x,t)/p_\mu(x,t)$,
\begin{equation}
R^{w^*}_\mu(f) := \mathbb{E}_{\substack{x, t, y \sim p_\mu}}[w^*(x,t)\ell_f(x, t, y)] = R_\pi(f)~.
\label{eq:wrisk}
\end{equation}
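Written out, the identity \eqref{eq:wrisk} follows by multiplying and dividing by the source density, using that $p(y \mid x, t)$ is shared across designs:

```latex
\begin{align*}
R^{w^*}_\mu(f)
&= \int \frac{p_\pi(x,t)}{p_\mu(x,t)}\, \ell_f(x,t,y)\; p_\mu(x,t)\, p(y \mid x,t)\; dx\, dt\, dy \\
&= \int \ell_f(x,t,y)\; p_\pi(x,t)\, p(y \mid x,t)\; dx\, dt\, dy
\;=\; R_\pi(f)~.
\end{align*}
```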
Hence, with access to $w^*$, an unbiased estimator of $R_\pi(f)$ may be obtained by re-weighting the (factual) empirical risk under $\mu$,
\begin{equation}
\hat{R}^{w^*}_\mu(f) := \frac{1}{n}\sum_{i=1}^n {w^*}(x_i, t_i) \ell_f(x_i, t_i, y_i)~.
\label{eq:emr}
\end{equation}
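As a concrete instance with the squared loss, the estimator \eqref{eq:emr} reduces to a weighted mean (our own sketch):

```python
import numpy as np

def weighted_risk(w, y_pred, y):
    """Re-weighted empirical risk (1/n) sum_i w_i * L(f(x_i,t_i), y_i)
    with squared loss L(y', y) = (y' - y)^2."""
    w = np.asarray(w, float)
    l = (np.asarray(y_pred, float) - np.asarray(y, float)) ** 2
    return float(np.mean(w * l))
```

With uniform weights $w_i = 1$ this recovers the ordinary (factual) empirical risk.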
Unfortunately, importance sampling weights can be very large when $p_\pi$ is large and $p_\mu$ small, resulting in large variance in $\hat{R}^{w^*}_\mu(f)$~\citep{swaminathan2015counterfactual}. More importantly, $p_\mu(x,t)$ is rarely known in practice, and neither is $w^*$. In principle, however, any re-weighting function $w$ with the following property yields a valid risk under the re-weighted distribution $p_\mu^w$.
\begin{thmdef}
A function $w : \mathcal X \times \mathcal T \rightarrow \mathbb{R}_+$ is a valid re-weighting of $p_\mu$ if
$$
\mathbb{E}_{x, t\sim p_\mu}[w(x,t)] = 1 \;\; \mbox{and} \;\; p_\mu(x,t) > 0 \Rightarrow w(x,t) > 0.
$$
We denote the re-weighted density $p_\mu^w(x,t) := w(x,t)p_\mu(x,t)$.
\label{def:valid_weight}
\end{thmdef}
A natural candidate in place of $w^*$ is an estimate $\hat{w}^*$ formed by estimating the densities $p_\pi(x,t)$ and $p_\mu(x,t)$. In this work, we adopt a different strategy, learning parametric re-weighting functions $w$ from observational data that minimize an upper bound on the risk under $p_\pi$.
\subsection{Conditional treatment effect estimation}
\label{sec:cate}
An important special case of our setting is when treatments are binary, $T \in \{0, 1\}$, often interpreted as treating ($T=1$) or not treating ($T=0$) a unit, and the domain is fixed across designs, $p_\mu(X) = p_\pi(X)$. This is the classical setting for estimating treatment effects---the effect of choosing one intervention over another~\citep{morgan2014counterfactuals}.\footnote{Effects for non-binary interventions are not considered here.} The effect of an intervention $T=1$ in context $X$, is measured by the \emph{conditional average treatment effect} (CATE),
$
\tau(x) = \mathbb{E}\left[Y(1) - Y(0) \mid X=x \right]
$.
Predicting $\tau$ for unobserved units typically involves prediction of both potential outcomes\footnote{This is sufficient but not necessary.}. In a clinical setting, knowledge of $\tau$ is necessary to assess which medication should be administered to a certain individual. Historically, the (population) \emph{average treatment effect}, $\mbox{ATE} = \mathbb{E}_{x \sim p}[\tau(x)]$, has received far more attention~\citep{rosenbaum1983central}, but it is inadequate for personalized decision making.
Using predictors $f(x, t)$ of potential outcomes $Y(t)$ in contexts $X=x$, we can estimate the CATE by $\hat{\tau}(x) = f(x, 1) - f(x, 0)$ and measure the quality using the mean squared error (MSE),
\begin{equation}
\mbox{MSE}(\hat{\tau}) = \mathbb{E}_{p}\left[ (\hat{\tau}(x) - \tau(x))^2 \right]
\label{eq:cateloss}
\end{equation}
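The CATE estimator and its error metric can be sketched directly from these definitions (our own illustration, on a hypothetical linear potential-outcome model):

```python
import numpy as np

def cate_rmse(f, x, tau_true):
    """Estimate tau_hat(x) = f(x, 1) - f(x, 0) from a potential-outcome
    predictor f(x, t), and score it with RMSE(tau_hat) = sqrt(MSE)."""
    tau_hat = np.array([f(xi, 1) - f(xi, 0) for xi in x])
    return float(np.sqrt(np.mean((tau_hat - np.asarray(tau_true, float)) ** 2)))

# Toy model f(x, t) = x + 2t: the estimated effect is tau_hat(x) = 2 everywhere.
f = lambda xi, t: xi + 2.0 * t
```

The root of \eqref{eq:cateloss} is the quantity reported as $\mbox{RMSE}(\hat{\tau})$ in Table~\ref{tab:ihdp}.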
In Section~\ref{sec:theory_cate}, we argue that estimating CATE from observational data requires overcoming distributional shift with respect to the treat-all and treat-none policies, in predicting each respective potential outcome, and show how this can be used to derive generalization bounds for CATE.
\section{Related work}
A large body of work has shown that under assumptions of ignorability and having a well-specified model, various regression methods for counterfactual estimation are asymptotically consistent~\citep{chernozhukov2017double,athey2016recursive,belloni2014inference}. However, consistency results like these provide little insight into the case of model misspecification. Under model misspecification, regression methods may suffer from additional bias when generalizing across designs due to distributional shift.
A common way to alleviate this is \emph{importance sampling}, see Section~\ref{sec:problem}. This idea is used in propensity-score methods~\citep{austin2011introduction}, that use the observed treatment policy to re-weight samples for causal effect estimation, and more generally in re-weighted regression, see e.g.~\citep{swaminathan2015counterfactual}. A major drawback of these methods is the assumption that the design density is known. To address this, others~\citep{gretton2009covariate,kallus2016generalized}, have proposed \emph{learning} sample weights $w$ to minimize a distributional distance between samples under $p_\pi$ and $p^w_\mu$, but rely on specifying the data representation a priori, without regard for which aspects of the data matter for outcome prediction.
On the other hand, \citet{johansson2016learning,shalit2016estimating} proposed \emph{learning representations} for counterfactual inference, inspired by work in unsupervised domain adaptation~\citep{mansour2009domain}. The drawback of this line of work is that the generalization bounds of \citet{shalit2016estimating} and \citet{long2015learning} are loose---even if infinite samples are available, they are not guaranteed to converge to the lowest possible error. Moreover, these approaches do not make use of important information that can be estimated from data: the treatment/domain assignment probabilities.
\section{Generalization under design shift}
\label{sec:theory}
We give a bound on the risk in predicting outcomes under a target design $p_\pi(T, X)$ based on unlabeled samples from $p_\pi$ and labeled samples from a source design $p_\mu(T, X)$. Our result combines representation learning, distribution matching and re-weighting, resulting in a tighter bound than the closest related work, \citet{shalit2016estimating}. The predictors we consider are compositions $f(x,t) = h(\Phi(x),t)$ where $\Phi$ is a representation of $x$ and $h$ an hypothesis. We first give an upper bound on the risk in the general design shift setting, then show how this result can be used to bound the error in prediction of treatment effects. In Section~\ref{sec:model} we give a result about the asymptotic properties of the minimizers of this upper bound.
\paragraph{Risk under distributional shift} Our bounds on the risk under a target design capture the intuition that if either a) the target design $\pi$ and source design $\mu$ are close, or b) the true outcome is a simple function of $x$ and $t$, the gap between the target risk and the re-weighted source risk is small. These notions can be formalized using integral probability metrics (IPM)~\citep{sriperumbudur2009integral} that measure distance between distributions w.r.t. a normed vector space of functions $\mathcal H$.
\begin{thmdef}
The integral probability metric (IPM) distance, associated with a normed vector space of functions $\mathcal H$, between distributions $p$ and $q$ is,
$
\mbox{\emph{IPM}}_\mathcal H(p,q) := \sup_{h \in \mathcal H} \left| \mathbb{E}_p[h] - \mathbb{E}_q[h] \right|
$.
\label{def:ipm}
\end{thmdef}
Important examples of IPMs include the Wasserstein distance, for which $\mathcal H$ is the family of functions with Lipschitz constant at most 1, and the Maximum Mean Discrepancy for which $\mathcal H$ are functions in the norm-1 ball in a reproducing kernel Hilbert space. Using definitions~\ref{def:valid_weight}--\ref{def:ipm}, and the definition of re-weighted risk, see \eqref{eq:wrisk}, we can state the following result (see the Appendix for a proof).
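For instance, the squared RBF-kernel MMD used in our experiments admits a simple plug-in estimate (our own sketch of the standard biased V-statistic; names are ours):

```python
import numpy as np

def mmd_rbf(x, z, sigma=1.0):
    """Biased V-statistic estimate of the squared MMD with an RBF kernel:
    MMD^2(p, q) = E[k(x,x')] + E[k(z,z')] - 2 E[k(x,z)]."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return float(k(x, x).mean() + k(z, z).mean() - 2 * k(x, z).mean())
```

Identical samples give a discrepancy of zero, while well-separated samples approach the maximum value of two.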
\begin{thmlem}
For hypotheses $f$ with loss $\ell_f$ such that $\ell_f/\|\ell_f\|_{\mathcal H} \in \mathcal H$, and $p_\mu, p_\pi$ with common support, there exists a valid re-weighting $w$, see Definition~\ref{def:valid_weight}, such that,
\begin{align}
\arraycolsep=1.4pt\def\arraystretch{1.4}
\begin{array}{rcl}
R_\pi(f) & \leq & R^w_\mu(f) + \|\ell_f\|_{\mathcal H}\mbox{\emph{IPM}}_{\mathcal H}(p_\pi, p^w_\mu) \\
& \leq & R_\mu(f) + \|\ell_f\|_{\mathcal H}\mbox{\emph{IPM}}_{\mathcal H}(p_\pi, p_\mu)~.
\end{array}
\label{eq:ipm_bound}
\end{align}
The first inequality is tight for importance sampling weights, $w(x,t) = p_\pi(x, t) / p_\mu(x, t)$. The second inequality is not tight for general $f$, even if $\ell_f/\|\ell_f\|_{\mathcal H} \in \mathcal H$, unless $p_\pi = p_\mu$.
\label{lem:ipm_bound}
\end{thmlem}
The bound of Lemma~\ref{lem:ipm_bound} is tighter if $p_\mu$ and $p_\pi$ are close (the IPM is smaller), and if the loss lives in a small family of functions $\mathcal H$ (the supremum is taken over a smaller set). Lemma~\ref{lem:ipm_bound} also implies that there exist weighting functions $w(x,t)$ that achieve a tighter bound than the uniform weighting $w(x,t) = 1$, implicitly used by~\citet{shalit2016estimating}. While importance sampling weights result in a tight bound in expectation, neither the design densities nor their ratio are known in general. Moreover, exact importance weights often result in large variance in finite samples~\citep{cortes2010learning}. Here, we will search for a weighting function $w$ that minimizes a finite-sample version of \eqref{eq:ipm_bound}, trading off bias and variance. We examine the empirical value of this idea alone in Section~\ref{sec:synth}.
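A small numerical illustration of the bias--variance trade-off above: with exact importance sampling weights $w = p_\pi/p_\mu$, the re-weighted source expectation of a loss matches the target-design risk, but the weighted losses have a much larger variance. The two Gaussian designs, the stand-in loss, and the sample size below are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Source design p_mu = N(0,1) over a 1-D covariate; target p_pi = N(1,1).
mu_x = rng.normal(0.0, 1.0, n)
loss = lambda x: (x - 0.5) ** 2                  # stand-in for ell_f

# Importance sampling weights w(x) = p_pi(x) / p_mu(x).
log_w = -0.5 * ((mu_x - 1.0) ** 2 - mu_x ** 2)
w = np.exp(log_w)

target_risk = np.mean(loss(rng.normal(1.0, 1.0, n)))  # Monte-Carlo R_pi
reweighted = np.mean(w * loss(mu_x))                  # R^w_mu estimate

print(abs(target_risk - reweighted) < 0.1)            # True: weights remove bias
print(float(np.std(w * loss(mu_x))) > float(np.std(loss(mu_x))))  # True: but raise variance
```

This is exactly the tension the factor $V_\mu(w,\ell_f)$ in Theorem~\ref{thm:main} quantifies.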
\paragraph{Representation learning} The idea of learning representations that reduce distributional shift in the induced space, and thus the source-target generalization gap, has been applied in domain adaptation~\citep{ajakan2014domain}, algorithmic fairness~\citep{zemel2013learning} and counterfactual prediction~\citep{shalit2016estimating}. The hope of these approaches is to learn predictors that predominantly exploit information that is common to both source and target distributions. For example, a face detector should be able to recognize the structure of human features even under highly variable environmental conditions, by ignoring background, lighting, etc. We argue that re-weighting (e.g. importance sampling) should also be done only with respect to features that are predictive of the outcome. Hence, in Section~\ref{sec:model}, we propose using re-weightings that are functions of learned representations.
We follow the setup of \citet{shalit2016estimating}, and consider learning twice-differentiable, invertible representations $\Phi : \mathcal X \rightarrow \mathcal Z$, where $\mathcal Z$ is the representation space, and $\Psi : \mathcal Z \rightarrow \mathcal X$ is the \emph{inverse} representation, such that $\Psi(\Phi(x)) = x$ for all $x$. Let $\mathcal E$ denote the space of such representation functions. For a design $\pi$, we let $p_{\pi,\Phi}(z,t)$ be the distribution induced by $\Phi$ over $\mathcal Z \times \mathcal T$, with $p^w_{\pi,\Phi}(z,t) := p_{\pi,\Phi}(z,t)w(\Psi(z),t)$ its re-weighted form and $\hat{p}^w_{\pi,\Phi}$ its re-weighted empirical form, following our previous notation.
Finally, we let $\mathcal G \subseteq \{h : \mathcal Z\times \mathcal T \rightarrow \mathcal Y\}$ denote a set of hypotheses $h(\Phi,t)$ operating on the representation $\Phi$, and let $\mathcal{F}$ be the space of all compositions, $\mathcal{F} = \{f = h(\Phi(x), t) : h \in \mathcal G, \Phi \in \mathcal E\}$. We can now relate the expected target risk $R_\pi(f)$ to the re-weighted empirical source risk $\hat{R}^w_\mu(f)$.
\begin{thmthm}
Given is a labeled sample $(x_1, t_1, y_1), ..., (x_n, t_n, y_n)$ from $p_\mu$, and an unlabeled sample $(x'_1, t'_1), ..., (x'_m, t'_m)$ from $p_\pi$, with empirical measures $\hat{p}_{\mu}$ and $\hat{p}_{\pi}$. Suppose that $\Phi$ is a twice-differentiable, invertible representation, that $h(\Phi, t)$ is a hypothesis, and $f = h(\Phi(x),t) \in \mathcal{F}$. Define $m_t(x) = \mathbb{E}_Y[Y\mid X=x,T=t]$, let $\ell_{h, \Phi}(\Psi(z), t) := L(h(z, t), m_t(\Psi(z)))$ where $L$ is the squared loss, $L(y,y') = (y-y')^2$, and assume that there exists a constant $B_\Phi > 0$ such that $\ell_{h, \Phi}/B_\Phi \in \mathcal H \subseteq \{h : \mathcal Z\times \mathcal T \rightarrow \mathcal Y\}$, where $\mathcal H$ is the reproducing kernel Hilbert space of a kernel $k$ such that $k((z,t),(z,t)) < \infty$. Finally, let $w$ be a valid re-weighting of $p_{\mu, \Phi}$. Then with probability at least $1-2\delta$,
\begin{align}
R_\pi(f) & \leq \hat{R}^w_\mu(f) + B_\Phi\text{\emph{IPM}}_{\mathcal H}(\hat{p}_{\pi,\Phi}, \hat{p}_{\mu,\Phi}^w)
\label{eq:thm_main} \\
& + V_{\mu}(w, \ell_f)\frac{\mathcal C_{n,\delta}^{\mathcal{F}}}{n^{3/8}} + \mathcal D^{\Phi,\mathcal H}_{\delta}\left(\frac{1}{\sqrt{m}} + \frac{1}{\sqrt{n}}\right) + \sigma_Y^2 \nonumber
\end{align}
where $\mathcal C_{n,\delta}^{\mathcal{F}}$ is a function of the pseudo-dimension of $\mathcal{F}$ and $\mathcal D^{\Phi,\mathcal H}_{\delta}$ a function of the kernel norm of $\mathcal H$, both with only logarithmic dependence on $n$ and $m$, $\sigma^2_Y$ is the expected variance in $Y$, and
$$
V_{\mu}(w, \ell_f) = \max\left(\sqrt{\mathbb{E}_{p_\mu}[w^2\ell_f^2]}, \sqrt{\mathbb{E}_{\hat{p}_\mu}[w^2\ell_f^2]}\right)~.
$$
A similar bound exists where $\mathcal H$ is the family of functions with Lipschitz constant at most 1, and $\mbox{\emph{IPM}}_{\mathcal H}$ the Wasserstein distance, but with worse sample complexity.
\label{thm:main}
\end{thmthm}
See the Appendix for a proof of Theorem~\ref{thm:main}, which involves applying finite-sample generalization bounds to Lemma~\ref{lem:ipm_bound}, as well as a change of variables to the space induced by the representation $\Phi$.
Theorem~\ref{thm:main} has several implications: non-identity feature representations, non-uniform sample weights, and variance control of these weights can all contribute to a lower bound. Using uniform weights $w(x,t) = 1$ in \eqref{eq:thm_main} results in a bound similar to those of \citet{shalit2016estimating} and \citet{long2015learning}. When $\pi\neq \mu$, minimizing uniform-weight bounds results in biased hypotheses, even in the asymptotic limit, as the IPM term does not vanish with increased sample size. This is an undesirable property, as even $k$-nearest-neighbor classifiers are consistent in the limit of infinite samples. We consider minimizing \eqref{eq:thm_main} with respect to $w$, improving the tightness of the bound.
Further, Theorem~\ref{thm:main} indicates that even though importance sampling weights $w^*$ yield estimators with small bias, they can suffer from high variance, as captured by the factor $V_{\mu}(w, \ell_f)$.
The factor $B_\Phi$ in \eqref{eq:thm_main} is not known in general as it depends on the true outcome, and is determined by $\|\ell_f\|_\mathcal H$ as well as the determinant of the Jacobian of $\Psi$, see the Appendix for proofs. Qualitatively, $B_\Phi$ measures the joint complexity of $\Phi$ and $\ell_f$ and is sensitive to the scale of $\Phi$---as the scale of $\Phi$ vanishes, $B_\Phi$ blows up. To prevent this in practice, we normalize $\Phi$. As $B_\Phi$ is unknown, \citet{shalit2016estimating} substituted a hyperparameter $\alpha$ for $B_\Phi$, but discussed the difficulties of selecting its value without access to counterfactual labels. In our experiments, we explore a heuristic for adaptively choosing $\alpha$, based on measures of complexity of the observed held-out loss as a function of the input. Finally, the term $\mathcal C_{n,\delta}^{\mathcal{F}}$ follows from standard learning theory results~\citep{cortes2010learning} and $\mathcal{F}$, and $\mathcal D^{\Phi, \mathcal H}_\delta$ from concentration results for estimating IPMs~\citep{sriperumbudur2012empirical}, see the Appendix.
Theorem~\ref{thm:main} is immediately applicable to the case of unsupervised domain adaptation in which there is only a single potential outcome of interest, $\mathcal T = \{0\}$. In this case, $p_\mu(T \mid X) = p_\pi(T \mid X) = 1$. Another important special case is where $p_\mu(X) = p_\pi(X)$, such as in the classical setting of causal effect estimation.
\paragraph{Conditional average treatment effects}
\label{sec:theory_cate}
A simple argument shows that the error in predicting the conditional average treatment effect, $\mbox{MSE}(\hat{\tau})$ can be bounded by the sum of risks under the constant treat-all and treat-none policies. As in Section~\ref{sec:cate}, we consider the case of a fixed domain $p_\pi(X) = p_\mu(X)$ and binary treatment $\mathcal T = \{0, 1\}$. Let $R_{\pi_t}(f)$ denote the risk under the constant policy $\pi_t$ such that $\forall x\in \mathcal X: p_{\pi_t}(T=t\mid X=x) = 1$.
\begin{thmprop}
We have with $\mbox{\emph{MSE}}(\hat{\tau})$ as in \eqref{eq:cateloss} and $R_{\pi_t}(f)$ the risk under the constant policy $\pi_t$,
\begin{equation}
\mbox{\emph{MSE}}(\hat{\tau}) \leq 2(R_{\pi_1}(f) + R_{\pi_0}(f)) - 4\sigma^2
\label{eq:t_decomp}
\end{equation}
where $\sigma$ is such that $\forall t\in \mathcal T, x\in \mathcal X: \sigma_Y(x,t) \geq \sigma$, and $\sigma^2_Y(x,t)$ is the variance of $Y(t)$ conditioned on $X=x$.
\label{prop:cate}
\end{thmprop}
The proof involves the relaxed triangle inequality and the law of total probability. By Proposition~\ref{prop:cate}, we can apply Theorem~\ref{thm:main} to $R_{\pi_1}$ and $R_{\pi_0}$ separately to obtain a bound on $\mbox{MSE}(\hat{\tau})$. For brevity, we refrain from stating the full result, but emphasize that it follows from Theorem~\ref{thm:main}. In Section~\ref{sec:exp_cate}, we evaluate our framework in treatment effect estimation, minimizing this bound.
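The pointwise inequality behind \eqref{eq:t_decomp}, $(a-b)^2 \leq 2a^2 + 2b^2$, is easy to check numerically. In the sketch below, the two error distributions are arbitrary stand-ins for the per-unit prediction errors of the two potential outcomes (noiseless case, $\sigma = 0$); they are assumptions of this example, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Stand-ins for per-unit prediction errors under the two constant policies.
e1 = rng.normal(size=n)    # f(x,1) - m_1(x)
e0 = rng.laplace(size=n)   # f(x,0) - m_0(x)

mse_tau = np.mean((e1 - e0) ** 2)                 # error in the predicted effect
bound = 2.0 * (np.mean(e1 ** 2) + np.mean(e0 ** 2))
print(mse_tau <= bound)  # True: (a-b)^2 <= 2a^2 + 2b^2 holds pointwise
```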
\section{Introduction}
Nitrogen-vacancy centers are formed in diamond when a nitrogen atom substitutes for one of the carbon atoms in the lattice and pairs with a vacancy. A center can acquire an electron to form NV$^-$, which we hereafter denote NV. The defect center can also be electrically neutral (NV$^0$) or carry a positive charge as the optically inactive NV$^+$.\cite{NVPlus}
NV centers have relatively long-lived spin coherence features that can be initialized and detected with visible light.\cite{dohertyReview} Recent studies of single NV centers are motivated by applications including quantum computing \cite{Childress} and nanoscale sensing of magnetic \cite{bala} and electric\cite{Dolde} fields.
The electronic ground state $^3A_2$ is an orbital singlet and consists of an $m_s=0$ level separated from nearly degenerate $m_s=\pm 1$ levels by $D_{gs}$ $\approx$ 2.87 GHz (Fig. \ref{fig:EnergyLevels}). The NV center can absorb off-resonant 532 nm light, after which it fluoresces from the $^3$E excited state. The zero-phonon line (ZPL) is at 637 nm, but significant fluorescence is observed at wavelengths up to 800 nm. A competing non-radiative pathway via intermediate singlet states also exists for the excited state and is stronger for the excited state $m_s = \pm 1$ levels. Because of this non-radiative pathway, the overall fluorescence depends on the excited state $m_s = \pm 1$ population, which in turn depends on the ground state $m_s = \pm 1$ population. This allows the spin state of the NV center's ground state to be determined by monitoring the brightness of the emitted fluorescence.\cite{dohertyReview}
Either CW or pulsed microwave techniques can be used to examine magnetic dipole transitions from the $m_s=0$ to $m_s=\pm$1 levels in the ground state with high spectral resolution.\cite{dohertyReview} This technique of resolving the ground state electronic structure is called optically detected magnetic resonance (ODMR) and we use it in this work to measure DC Stark shifts.
Recent work\cite{Dolde} has shown how a single NV can be used to measure the transverse component of a DC electric field, and how NV ensembles can be used to measure electric fields applied across a 300~$\mu$m wide diamond.\cite{Braje}
We are interested in developing the capability to measure electric and magnetic fields near the interaction region of experiments to measure the electric dipole moment of the neutron.\cite{nEDM} Such experiments seek to push the boundaries of the Standard Model.
It is critical that the probe makes only small perturbations to the fields that it is measuring. In this regard, NV diamonds show promise, as the diamond is electrically and magnetically inert. While this work uses microwaves, previous work has shown that magnetic\cite{AcostaEIT}$^,$\cite{Sharma} and electric field\cite{Tamarat} effects can be detected with all-optical NV diamond probes. In this paper, we present measurements of strain and external electric fields using ODMR in NV ensembles.
To compare our results with theory, we model the ground state Hamiltonian, $H_{gs}$ as\cite{Doherty}
\begin{equation}
\hat{H}_{gs} = \hat{H}_{hfs} + \hat{H}_{es} \, .
\end{equation}
Here $ \hat{H}_{hfs}$ is the zero-field Hamiltonian which includes the hyperfine interactions with the $^{14}$N (spin = 1) nucleus. $ \hat{H}_{es}$ is the electronic spin Hamiltonian which includes the strain, magnetic and electric field interactions.
The hyperfine interaction Hamiltonian is given by
\begin{equation}
\hat{H}_{hfs} = \dfrac{1}{\hbar^2} [D_{gs} S_z^2 + A_\parallel S_z I_z +A_\perp (S_xI_x + S_yI_y) + PI_z^2] \, .
\end{equation}
$D_{gs} \approx 2.87$ GHz is the ground state zero-field splitting, which arises from the spin-spin interaction of the electrons and splits the $m_s = 0$ and $m_s = \pm 1$ sublevels as shown in Fig. \ref{fig:EnergyLevels}. $A$ and $P$ are the magnetic hyperfine parameter and the nuclear quadrupole parameter, respectively. The solution to Eq.~(1) includes 9 eigenstates, 3 associated with the $m_s = 0$ level and 6 associated with the $m_s = \pm 1$ levels. Pairs of these six undergo avoided crossings, or level repulsions, at $B_{axial} = 0, \pm 77~\mu$T, as described below.
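As an illustration, the 9-level structure can be reproduced by diagonalizing $\hat{H}_{hfs}$ with spin-1 matrices. The hyperfine constants below are commonly quoted literature values, and their signs follow one convention; both are assumptions of this sketch rather than values fitted in this work.

```python
import numpy as np

# Spin-1 operators for the electron (S) and the 14N nucleus (I), hbar = 1.
sz = np.diag([1.0, 0.0, -1.0])
sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
one = np.eye(3)

# Ground-state parameters in MHz (literature values; signs are a convention
# assumed for this sketch).
D, A_par, A_perp, P = 2870.0, -2.16, -2.7, -4.95

H_hfs = (D * np.kron(sz @ sz, one)
         + A_par * np.kron(sz, sz)
         + A_perp * (np.kron(sx, sx) + np.kron(sy, sy))
         + P * np.kron(one, sz @ sz))

levels = np.sort(np.linalg.eigvalsh(H_hfs))
print(len(levels))  # 9 eigenstates: 3 + 6
print(round(float(levels[3:].mean() - levels[:3].mean())))  # 2870 (MHz splitting)

# First-order estimate of the axial field at which a pair of m_s = -/+1
# hyperfine sublevels becomes degenerate: gamma_e * B = |A_par|.
gamma_e = 0.028  # electron gyromagnetic ratio, MHz per micro-tesla
print(round(abs(A_par) / gamma_e))  # 77 (micro-tesla), as quoted above
```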
The electronic spin component of the ground state energy is given by
\begin{multline}
\hat{H}_{es} = \dfrac{1}{\hbar^2} d_\parallel \Pi_z S_z^2 + \dfrac{\mu_B}{\hbar} \vec{S} \cdot \bar{g} \cdot \vec{B} - \dfrac{1}{\hbar^2} d_\perp \Pi_y (S_x^2 - S_y^2) \\ - \dfrac{1}{\hbar^2} d_\perp \Pi_x (S_xS_y + S_yS_x)
\end{multline}
Here, $\vec{\Pi} = \vec{E} + \vec{\sigma}$ is the total effective electric field, which also includes contributions from the strain, $\vec{\sigma}$. The ground state electric dipole moment, $d$, has a much larger transverse component, $d_{\perp} \approx 17$ Hz cm/V, than longitudinal component, $d_{\parallel} \approx 0.3$ Hz cm/V. The effective g-factor tensor, $\bar{g}$, has a longitudinal, $g_\parallel$, and a transverse, $g_\perp$, component, as detailed in Ref.~6.
\begin{figure}
{\includegraphics[scale = 0.15]{"Diamond_electrodes".png}}
\caption{a) Four orientations of NV axes as seen looking down the [100] crystallographic axis, and the electrodes printed on the (100) face of the diamond. b) Magnitude of the optical coupling strength, measured as the change in ODMR contrast (arb. units), for the two pairs of NV center orientations (directions labelled 1+2 and 3+4) as a function of the polarization of the exciting green laser. c) ODMR spectrum for horizontally (orange) and vertically (blue) polarized green excitation light.}
\label{fig:Electrodes}
\end{figure}
\section{Experimental Details}
Diamonds formed by chemical vapor deposition with a nominal nitrogen content of $\approx$ 100 ppb were obtained from Element 6.\cite{E6} The diamonds are approximately $5\times5\times0.5$ mm, cut so the $5\times5$ mm$^2$ faces are (100) planes and the $0.5\times5$ mm$^2$ edges are (011) planes, as shown in Fig. \ref{fig:Electrodes}. The diamonds were irradiated with an electron beam with an energy of 2 MeV and a fluence of 10$^{17}$ cm$^{-2}$ to generate the vacancies. They were then annealed for 2 hours at 875 K to encourage the pairing of vacancies with nitrogen atoms.\cite{anneal} By integrating the visible absorption spectrum of the diamonds,\cite{conc} the density of NV states after processing was estimated to be 20 ppb. Two chromium electrodes, a few hundred nanometers in thickness, were photolithographically deposited on the top surface of one diamond. The electrodes are separated by a gap that is about 80~$\mu$m wide at the center of the surface, growing to about 350~$\mu$m near the edges. The patterned electrodes create a slowly varying electric field in the diamond.
\begin{figure}
{\includegraphics[scale = 0.28]{"Schematic".png}}
\caption{Experimental setup for electrometry using NV ensembles. The collected photoluminescence is spatially filtered to constrain the confocal volume being investigated. A high voltage source is used to apply a voltage of $\pm$ 3000 V across the electrodes. }
\label{fig:Schematic}
\end{figure}
\begin{figure}
{\includegraphics[scale = 0.32]{"StrainVsDepth".png}}
\caption{a) The green curve shows the magnitude of the laser light that is reflected from the top and bottom faces of the diamond. The red curves represent the intensity of the 637 nm ZPL. Curves are rescaled for clarity. The blue data points show the strain measured as a function of depth in the diamond. b) Difference in fluorescence at 0 V(blue), 2500 V (red) and their difference (yellow) }
\label{fig:StrainVsDepth}
\end{figure}
The experimental apparatus is shown schematically in Fig. \ref{fig:Schematic}. A specially constructed confocal microscope is used to investigate the ground state structure of NV ensembles. Green light from a frequency-doubled Nd:YAG laser\cite{laserGlow} can be adjusted in power with a variable attenuator, while the angle of the linear polarization can be adjusted with a half-wave plate. The light beam is brought onto the optical axis of a long working distance, 50x microscope objective by means of a dichroic mirror. The diamond is mounted in vacuum in a cryostat, with one electrode grounded to the cryostat while the potential of the other is varied from 0 to $\pm$ 3000~V DC. The rear, ($\bar{1}00$) face of the diamond is also grounded. The light is focused into the gap between the electrodes using the microscope objective located outside the cryostat. Fluorescence is collimated by the objective and filtered through the dichroic beam combiner and by an optical longpass filter (Andover 590FG05, passing $\lambda > 600$ nm).\cite{Andover} The light is spatially filtered with a 50~$\mu$m pinhole before being sent to an avalanche photodiode.\cite{Avalanche}
The microwave source is a digitally-synthesized oscillator (WindFreak Synth NV) \cite{SynthNV} amplified to 25 dBm\cite{Amplifier}.
The microwaves are coupled to the diamond using a resonant loop of wire, about 1 cm from the diamond, located outside the cryostat.
Since there are four possible crystallographic orientations of NV axes, we see four different sets of resonances from within the ensemble. The magnetic dipole transitions have the following selection rules:\cite{AcostaThesis} $\Delta m_s = \pm 1$ and $\Delta m_I = 0$; this leads to 6 ground state resonances. A total of 24 resonances (= 4 crystallographic orientations x 6 hyperfine transitions) can be observed in the ODMR spectrum. Polarization of the green laser controls the coupling strength\cite{Polarization} among the four NV orientations. This technique allows the selective excitation of NV centers, as shown in Fig.~\ref{fig:Electrodes}.
\begin{figure}
{\includegraphics[scale = 0.29]{"StrainVsPolarB".png}}
\caption{a) Schematic illustrating the two NV orientations used to measure the strain direction. The white ellipses represent the plane in which the transverse magnetic field ($B_{\perp} \approx 1$ mT) is rotated. The strain direction, as determined by the polar plots of the Stark shift, is drawn in orange. The green arrows represent the initial direction of the magnetic field ($\theta = 0^{\circ}$ in the graphs below). b) Change in the strain-induced Stark shift as a function of the polar angle of the transverse magnetic field for the two NV orientations. Dashed black lines show the lines connecting the vacancy to the three adjacent carbon atoms. Here the strain-induced Stark shifts are measured in kV/cm. The orange vector represents the strain as it projects on the transverse plane and the shaded region represents the uncertainty associated with the strain measurement. Simulated data is shown as the blue curve.}
\label{fig:StrainVsPolarB}
\end{figure}
\section{Results}
The axial resolution is measured by moving the diamond through the focus of the microscope objective. As shown in Fig. 4a, the axial resolution is about 80~$\mu$m. The transverse resolution, measured using a similar method, is about 1~$\mu$m.
By measuring the fluorescence intensity at the NV$^-$ (637 nm) and NV$^0$ (575 nm) ZPLs, we can measure changes in their relative concentrations. Closer to the surface, in regions of large electric field (E~$>$~40~kV/cm - see results below), there was a marked decrease in the concentration of NV$^0$ centers with a simultaneous increase in NV$^-$ centers (up to 100\%). This effect was attenuated as we moved towards the grounded face of the diamond. We believe this effect is related to the charge state conversion observed in single NV centers because of field-induced band bending.\cite{ChargeStateConversion}
As shown in Ref. 5, by solving the electronic Hamiltonian presented in Eq. (3), we can calculate the magnetic transition frequencies, $\omega_\pm$, between the states $m_s = 0$ and $m_s = \pm 1$
\begin{multline}
\hbar \omega_\pm = D_{gs} + d_{gs}^\parallel \Pi_z \pm [ (\mu_B g_e B_z)^2 + (d_{gs}^\perp \Pi_{\perp})^2 \\ - \dfrac{(\mu_B g_e B_\perp)^2 }{D_{gs}} d_{gs}^\perp \Pi_{\perp} \cos(2 \theta_B + \theta_\Pi) + \dfrac{(\mu_B g_e B_\perp)^4 }{4D_{gs}^2} ]^{1/2}
\end{multline}
Here, $\theta_\Pi$ and $\theta_B$ are the polar angles of the non-axial electric and magnetic fields, respectively. As shown below, we can use this dependence on the polar angles of the transverse magnetic and strain-induced electric fields to estimate the direction of the strain field.
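As a rough numerical illustration of this dependence, the sketch below evaluates the square-root term of Eq.~(4) as a 1 mT transverse field is rotated; the strain magnitude, field strength and strain angle are representative assumptions, not the fitted values reported later.

```python
import numpy as np

# Evaluate the angular term of Eq. (4) for a rotating transverse field.
# All values are representative assumptions (in Hz), not fitted numbers.
D = 2.87e9                  # zero-field splitting D_gs
beta = 28e9 * 1e-3          # mu_B g_e B_perp / h for B_perp = 1 mT
stark = 17.0 * 21e3         # d_perp * Pi_perp for a ~21 kV/cm strain field
theta_Pi = 0.0              # strain polar angle (assumed)

theta_B = np.linspace(0.0, 2.0 * np.pi, 361)
shift = np.sqrt(stark**2
                - (beta**2 / D) * stark * np.cos(2.0 * theta_B + theta_Pi)
                + beta**4 / (4.0 * D**2))

# The shift is periodic in theta_B with period pi (the cos(2 theta_B) term),
# which is what the polar plots of Fig. 4 exploit to locate the strain axis.
print(bool(np.allclose(shift[:180], shift[180:360])))  # True
```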
\subsection{Strain}
Since the total electric field measured in these experiments is the sum of the externally applied electric and effective electric field produced by the intrinsic strain, it is important to characterize the strain magnitude and direction for electric field measurements. As shown in Fig. 4a, the axial variation in the strain magnitude was measured by scanning the confocal microscope in the (100) crystallographic direction.
In order to measure the strain direction, the strain-induced Stark shift was calculated for NVs along all four crystallographic directions. A largely orthogonal magnetic field with a small axial component of 77~$\mu$T was applied to isolate NV centers oriented along the four different (111) crystal directions. The magnitude of the strain was measured to be equal for all four axes. This narrowed the strain direction down to the three possible degenerate directions - (100), (010) and (001). Given this result, there is also the possibility that the strain is isotropic.
To complete our strain analysis, we used the dependence on the magnetic field polar angle, $\theta_B$, in Eq.~(4) for a suitably large orthogonal magnetic field (1 mT), generated by a pair of Helmholtz coils and ceramic magnets. As the magnetic field was rotated in the non-axial plane, changes in the strain-induced Stark shift were recorded. The polar plot from this experiment is shown in Fig. \ref{fig:StrainVsPolarB}. Fitting the graph to the energy shifts predicted by the Hamiltonian shows that the strain points in the (100) direction, out of the large face of the diamond. This direction is also parallel to the crystal growth direction. The total strain was determined to be $\sigma = 21 \pm 4$ kV/cm.
\subsection{Electric Field}
The ODMR transitions are shifted by the component of the electric field in the plane perpendicular to the NV axis. The Stark shift is typically small, a few Hz per kV/cm, but the shift is enhanced when the magnetic field is adjusted so that levels of the appropriate symmetry are nearly degenerate. As described in other work,\cite{Dolde} the shift is enhanced in the transitions to the $| m_S= +1,m_I = 0\rangle$ and $| m_S= -1,m_I = 0\rangle$ states when the axial magnetic field is zero. From the Hamiltonian given by Doherty,\cite{Doherty} we can also predict similar enhancements near $B_{z} = 77~\mu$T, where the $|-1, -1\rangle$ and $|+1,-1\rangle$ levels cross. Likewise, the $|-1, +1\rangle$ and $|+1,+1\rangle$ levels cross at $B_{z} = -77~\mu$T. Stark shift enhancement at the avoided crossings of these hyperfine sublevels is discussed in our previous work.\cite{Sharma}
\begin{figure}
{\includegraphics[scale = 0.32]{"Depth_figure".png}}
\caption{Electric field measured as a function of depth in the diamond. The simulated data is obtained from finite element analysis done in COMSOL.\cite{COMSOL} }
\label{ElectricFielDepth}
\end{figure}
The electrodes were patterned to produce variation in the electric field. A map of the electric field was produced by moving the focal spot axially into the diamond and measuring the Stark shift for the same voltage difference across the electrodes, as shown in Fig.~\ref{ElectricFielDepth}.
The electric field measurements were obtained by subtracting the strain-induced Stark shift ($\vec{\sigma}$) from the total Stark shift ($\vec{\Pi} = \vec{E} + \vec{\sigma}$).
The largest contributor to the systematic error was the uncertainty in the electric dipole moment, $d_{\perp} = 17 \pm 3$~Hz~cm/V. With just statistical errors, the typical uncertainty in the electric field measurement was $\sigma = 2$~kV/cm. After including systematic uncertainties, the cumulative error was 5~kV/cm. This shows that we can map electric fields throughout the whole volume of the diamond.
Future work will show how this technique can use NV center ensembles to serve as vector electrometers. The orthogonal component of the electric field from each of the four NV axes can be used to recreate the electric field. We also plan to investigate the influence of electric and magnetic fields on decoherence rates of the NV center.
\section{Acknowledgements}
This material is based upon work supported by the Department of Energy under Award Number DE-SC0011266.\cite{Disclaimer} This work was also supported in part by NSF PHY-1506416 (Sharma, S. and Beck, D.H.). This work was carried out, in part, at the Frederick Seitz Materials Research Laboratory Central Research Facilities, University of Illinois.
\pagebreak
\section{Motivation}
\label{sec:intro}
The particle-in-cell (PIC) method was first proposed by \citet{Harlow:1964} for modeling compressible fluid flows.
PIC combines particles which follow material motion and carry conserved quantities such as mass and momentum, with a grid on which the equations of motion are solved.
Computational particles in PIC can be regarded as a moving, refined grid; hence the method is useful for modeling highly distorted flows and interface flows.
Early implementations were cumbersome (especially in treating boundary conditions), memory-hungry, and unable to treat certain physical problems, e.g., stagnating flows, which caused the method to fall into disuse.
However, in the last two decades the method got its second wind thanks to the Cloud-in-Cell (CIC) algorithm \citep{Langdon:Birdsall:1970} for kinetic modeling of plasmas, and the Material Point Method (MPM) applied in solid mechanics \citep{Sulsky:1995}.
It is common nowadays to treat PIC solely as a method for kinetic plasma modeling, thereby narrowing its scope to CIC \citep{Brackbill:2005}.
Both MPM and CIC delivered impressive new results and are further developing; we refer the reader to the reviews of the applications of CIC in space plasma modeling \citep{Lapenta:2012}; MPM in geophysics \citep{Sulsky:etal:2007,Fatemizadeh:Moormann:2015}, engineering \citep{NME:NME910}, and 3D graphics \citep{Stomakhin:etal:2013}.
Motivated by the above success we proposed a new PIC algorithm for magnetized fluids \citep{Bacchini:2017} which clarified and simplified many aspects of the original FLIP-MHD algorithm by \citet{Brackbill:1991}.
This article discusses the realization of our algorithm, \href{https://bitbucket.org/volshevsky/slurm}{Slurm}, implemented in C++ with OpenMP multi-threading.
We will walk you through all steps of the typical computational cycle of Slurm, its initialization, and its main solver loop, explaining, where appropriate, the data structures, architecture, and physical models used in the code.
\section{Physical model}
\subsection{MHD equations}
Slurm is designed (but not limited) to numerically advance in time a discretized set of conventional MHD equations in Lagrangian formulation which describe the dynamics of plasmas or magnetized fluids
\begin{eqnarray}
\label{eq:continuity} \frac{d\rho}{dt} = - \rho \frac{\partial u_l}{\partial x_l}, \\
\nonumber \rho\frac{du_j}{dt} = -\frac{\partial p}{\partial x_j} + \rho g_j + \frac{\partial}{\partial x_i}\left[ B_i B_j - \frac{1}{2}B^2\delta_{ij} \right] + \\
\label{eq:momentum} \frac{\partial}{\partial x_i}\left[ \mu\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right) + \left( \mu_v - \frac{2}{3}\mu\right)\frac{\partial u_l}{\partial x_l}\delta_{ij} \right], \\
\label{eq:energy} \rho\frac{de}{dt} = -p\frac{\partial u_l}{\partial x_l} + \mu\left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} - \frac{2}{3}\frac{\partial u_l}{\partial x_l}\delta_{ij} \right)^2 + \mu_v\left( \frac{\partial u_l}{\partial x_l} \right)^2 + \eta J^2,
\end{eqnarray}
where summation is implied over the repeating indices $i, j, l$, which denote vector components; $d/dt=\partial/\partial t + u_j \partial / \partial x_j$ is the convective derivative.
The plasma is described by density $\rho$, velocity $\mathbf{u}$, pressure $p$, internal energy per unit mass $e$, and magnetic field $\mathbf{B}$; $\mathbf{g}$ represents external forces (gravity); $\mu=\nu\rho$ is the dynamic shear viscosity and $\mu_v=\nu_v \rho$ is the dynamic bulk viscosity\footnote{the notation and terminology follows the excellent book by \citet{Kundu:Kohen:2012}}; $\eta$ is the resistivity, and $\mathbf{J}=\nabla\times\mathbf{B}$ is the electric current density; $\delta_{ij}$ is the Kronecker delta.
In most cases reported in this paper, the adiabatic equation of state is assumed
\begin{equation}
e = \frac{1}{\rho}\frac{p}{\gamma - 1},
\label{eq:EOS}
\end{equation}
where $\gamma$ is the prescribed adiabatic index (ratio of specific heats).
\subsection{Magnetic field equation}
To close the above system of MHD equations, a condition must be imposed on the evolution of $\mathbf{B}$.
As a proxy for magnetic field, Slurm uses electromagnetic potentials $\mathbf{A}$ and $\phi$, where
\begin{equation}
\label{eq:BfromA}
\mathbf{B}=\mathbf{\nabla}\times\mathbf{A}.
\end{equation}
Using vector potential as a proxy for magnetic field, on a staggered grid (Yee lattice), the $\nabla\cdot\mathbf{B}=0$ condition is satisfied to machine precision \citep{Bacchini:2017}.
The staggered grid also makes our method second-order accurate in space.
In Slurm, the magnetic field is specified on the grid's cell centers, and electromagnetic vector potential is given on cell vertices (nodes).
The evolution of $\mathbf{A}$ is governed by the equation
\begin{equation}
\label{eq:A}
\frac{d\mathbf{A}}{dt} = \mathbf{u}\times\mathbf{B} + \left( \mathbf{u}\cdot\mathbf{\nabla} \right)\mathbf{A} - \nabla\phi - \eta \mathbf{J},
\end{equation}
where $\phi$ can be chosen freely to satisfy a certain gauge condition.
Although the choice of gauge has no effect on the solenoidality of the magnetic field, appropriate measures must be taken to preserve the gauge condition throughout the simulation.
\citet{Tricco:Price:2012,Tricco:Price:2016} have introduced the constrained hyperbolic divergence cleaning of $\nabla\cdot\mathbf{B}$ in smoothed particle hydrodynamics, which was used later by \citet{Stasyszyn:Elstner:2015} to clean the divergence of $\mathbf{A}$.
According to this strategy, the scalar potential $\phi$ is evolved as
\begin{equation}
\label{eq:phi}
\frac{d\phi}{dt} = -c_h^2\nabla\cdot\mathbf{A}-\frac{\sigma c_h}{h}\phi,
\end{equation}
where $c_h$ is the fast MHD wave speed, $h$ is the smoothing length (particle size), and $\sigma\approx1$ is a constant.
This way, the Coulomb gauge $\nabla\cdot\mathbf{A}=0$ or any other initial gauge could be preserved throughout the simulation.
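For illustration, Eq.~\ref{eq:phi} reduces to a simple per-cell update; the following minimal C++ sketch shows one explicit Euler step (the function name and the explicit discretization are ours, not necessarily what Slurm uses):

```cpp
#include <cmath>
#include <cassert>

// Hypothetical sketch: one explicit Euler step of
// d(phi)/dt = -c_h^2 (div A) - (sigma c_h / h) phi  for a single cell.
double advancePhi(double phi, double divA, double c_h, double h,
                  double sigma, double dt)
{
    return phi + dt * (-c_h * c_h * divA - sigma * c_h / h * phi);
}
```

With $\nabla\cdot\mathbf{A}=0$ the update simply damps $\phi$, driving the solution back toward the initial gauge.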
Interestingly, in two dimensions
\begin{eqnarray}
B_x=\frac{\partial A_z}{\partial y}, \\
B_y=-\frac{\partial A_z}{\partial x},
\end{eqnarray}
and by definition
\begin{equation}
\label{eq:Az2D}
\frac{d A_z}{dt}=0,
\end{equation}
hence $A_z$ defined on a moving grid (particles), does not change over time \citep{Bacchini:2017}.
Therefore in 2D (with no out-of-plane $B$ field) the gauge condition is preserved to roundoff, and no divergence cleaning is necessary.
\subsection{Artificial viscosity}
The PIC method has very low numerical dissipation, and in many problems involving shocks a numerical bulk viscosity $\mu_v$ is useful to stabilize the solution.
Among the many options we tested, we obtained satisfactory results with Kuropatenko's form of artificial viscosity \citep{Kuropatenko:1966,Chandrasekhar:1961}.
This viscosity is non-zero only in compressing grid cells, i.e., those for which $\nabla\cdot\mathbf{u}<0$.
As given by \citet{Caramana:1998},
\begin{equation}
\mu_v = \rho\left[ c_2\frac{\gamma+1}{4} \left| \Delta\mathbf{u} \right| + \sqrt{c_2^2\left( \frac{\gamma+1}{4} \right)^2 \left( \Delta\mathbf{u} \right)^2 + c_1^2 c_s^2} \right] \left| \Delta\mathbf{u} \right|,
\end{equation}
where $\left|\Delta\mathbf{u}\right| = \left| \Delta u_x + \Delta u_y + \Delta u_z \right|$ is the velocity jump across the grid cell, $c_s = \sqrt{\gamma p / \rho}$ is the adiabatic sound speed, $c_1$ and $c_2$ are constants close to $1$.
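For reference, the expression above translates directly into code; the following C++ sketch (function and variable names are ours) evaluates the viscosity for a given velocity jump:

```cpp
#include <cmath>
#include <cassert>

// Sketch of the Kuropatenko artificial bulk viscosity in the form given
// above (names are ours); in Slurm it is applied only in compressing cells.
double kuropatenkoViscosity(double rho, double du, double cs,
                            double gamma, double c1, double c2)
{
    double a = c2 * (gamma + 1.0) / 4.0 * std::fabs(du);
    return rho * (a + std::sqrt(a * a + c1 * c1 * cs * cs)) * std::fabs(du);
}
```

In the strong-shock limit ($|\Delta\mathbf{u}| \gg c_s$) the viscosity scales as $\rho\,\Delta u^2$, while for weak jumps it reduces to the linear form $\rho c_1 c_s |\Delta\mathbf{u}|$.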
\section{Main classes and data structures}
\label{sec:architecture}
Traditional implementations of PIC (or, rather, CIC) algorithms use conventional domain decomposition for parallelizing the computation, which can lead to strong processor load imbalance.
Different strategies were proposed to overcome it, such as particle splitting/merging \citep{Beck:Frederiksen:2016} or grid refinement and temporal sub-stepping \citep{Innocenti:etal:2015CPC}.
Particle methods, instead, provide an excellent opportunity for a task- or event-based parallel approach.
The latter has been discussed by, e.g., \citet{Karimabadi:etal:2005JCP}, but to our knowledge no production implementation has been reported yet.
Slurm embraces a task-based approach in which all particle operations are split between a user-defined number of OpenMP threads.
Distributing tasks, not computational sub-domains, is advantageous in problems with strong density imbalances.
Particles and computational grid exchange information several times during each computational cycle, but otherwise grid operations and particle operations are independent.
Therefore two main classes, \lstinline|Grid| and \lstinline|ParticleManager|, were chosen to manage grid elements (cells and nodes) and particles, respectively.
\subsection{Basic players: cells, nodes, and particles.}
We use a staggered grid, therefore three types of basic entities are represented by the \lstinline|GridCell|, \lstinline|GridNode|, and \lstinline|Particle| classes, which share some common properties as illustrated in Fig.~\ref{fig:grid_elm}.
Particles carry the following physical quantities: volume $V$, mass $m$, momentum $m\mathbf{u}$, internal energy $\epsilon=em$, and electromagnetic potentials $\mathbf{A}$ and $\phi$ (the latter is solely used to preserve the gauge of $\mathbf A$).
Each grid element is assigned a mass $m$ and a volume $V$, hence the density is computed (upon interpolation from particles) as $\rho=m/V$, for both nodes and cells.
Grid nodes keep the fluid velocity $\mathbf{u}$ and the vector potential $\mathbf{A}$, while grid cells keep the internal energy density $e$, the magnetic field $\mathbf{B}$ and the scalar potential $\phi$.
In addition, all grid elements carry connectivity information as explained below.
\begin{figure}
\begin{centering}
\includegraphics[width=0.5\textwidth]{grid_elements.pdf}
\caption{Inheritance diagram for the basic entities in Slurm: points, grid elements and particles. Letters in each box indicate basic physical quantities carried by the corresponding item.
}
\label{fig:grid_elm}
\end{centering}
\end{figure}
\subsection{Grid}
\label{sub:grid}
The \lstinline|Grid| object is initialized by the solver first.
Slurm supports rectilinear grids with $n_{xc}\times n_{yc}\times n_{zc}$ cells in each dimension, and $n_{xn}\times n_{yn}\times n_{zn}$ nodes in each dimension ($n_{x,y,z\,n} = n_{x,y,z\,c} + 1$).
At each boundary, there is one layer of ghost cells and one layer of ghost nodes.
Each cell and each node stores information about its role (`general', `boundary', or `ghost') and pointers to its neighbors.
Besides geometrical neighbors, logical neighbors are also computed.
Logical neighbors denote the grid elements which must be used for interpolating to/from this grid element.
For instance, a `logical neighbor node' of a boundary cell on a periodic boundary is the node on the opposite boundary.
Since the number of particles is much larger than the number of grid elements, the connectivity information does not lead to excessive memory use.
At the same time, it saves a substantial amount of resources and provides capabilities for implementing irregular grids.
\subsection{Particle manager}
\label{sub:PM}
Class \lstinline|ParticleManager| stores a doubly-linked list of \lstinline|Particle| objects, which makes adding and removing particles efficient and trivial to implement.
The list could easily be processed in parallel using OpenMP directives, e.g.,
\begin{lstlisting}
#pragma omp parallel
{
    int thread_num = omp_get_thread_num();
    // Make sure particles were divided between threads
    // according to the current number of threads
    if (omp_get_num_threads() != IteratorsNumberOfThreads)
    {
        if (thread_num == 0)
            IteratorsNumberOfThreads = omp_get_num_threads();
        #pragma omp barrier
        computeOMPThreadRange();
    }
    // Process particles which belong to this thread.
    for (ParticleIterator pi = IteratorsBegin[thread_num];
         pi != IteratorsEnd[thread_num]; ++pi)
    {
        // Do something
    }
}
\end{lstlisting}
where \lstinline|IteratorsNumberOfThreads| is the number of threads for which the particle ranges \lstinline|IteratorsBegin| and \lstinline|IteratorsEnd| were computed.
All expensive procedures (interpolation from particles to grid, interpolation from grid to particles, and the particle push) are handled by the \lstinline|ParticleManager| object and are parallelized in the above manner.
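The particle ranges themselves are recomputed only when the number of threads changes; a minimal illustration of how such a partition can be computed (a hypothetical stand-in for \lstinline|computeOMPThreadRange|, not the actual implementation):

```cpp
#include <cstddef>
#include <vector>
#include <cassert>

// Hypothetical sketch: split nParticles into nThreads contiguous chunks of
// nearly equal size; chunk t would then define the iterator range
// [IteratorsBegin[t], IteratorsEnd[t]) over the particle list.
std::vector<std::size_t> rangeSizes(std::size_t nParticles, int nThreads)
{
    std::vector<std::size_t> sizes(nThreads, nParticles / nThreads);
    for (std::size_t t = 0; t < nParticles % nThreads; ++t)
        ++sizes[t];   // spread the remainder over the first threads
    return sizes;
}
```

The actual iterator ranges are then obtained by advancing a list iterator by the successive chunk sizes.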
Strong scaling tests performed on 36-core Intel Xeon processor E5-2600 v4 (Broadwell) have shown that parallel performance improves with the number of particles per cell (Figure~\ref{fig:scaling}).
To compute the parallel speedup, we ran simulations on the same $32^3$ grid, increasing the number of threads from $1$ to $36$.
The parallel speedup is defined as the ratio of the average time of one computational cycle on a single thread to that on $N$ threads, $s(N)=\left<T(1)\right> / \left<T(N)\right>$.
On 36 threads the code executes 18 times faster than on a single thread, i.e., parallel efficiency reaches 50\%.
Note that the code has been neither optimized nor tuned for performance.
We used GNU C++ compiler 6.1 with the following options: \lstinline|-std=c++11 -O3 -fopenmp -lpthread|.
\begin{figure}
\begin{centering}
\includegraphics[width=0.99\textwidth]{scaling_mhd.png}
\caption{Strong scaling on Intel Xeon E5-2600 v4 (Broadwell). The black line denotes ideal scaling; each color line corresponds to the number of particles per cell: from 27 to 216. The more particles per cell, the better the scaling is.
}
\label{fig:scaling}
\end{centering}
\end{figure}
\section{Main solver loop}
The main solver loop of an explicit PIC algorithm consists of four steps: (1) interpolation particles$\rightarrow$grid, (2) grid advancement, (3) interpolation grid$\rightarrow$particles, and (4) particle mover, as illustrated in Figure~\ref{fig:algorithm}.
In a typical PIC simulation the number of computational particles far exceeds the number of grid cells, and traversing all particles is the most time-consuming operation in the cycle.
In Slurm, particles are traversed only twice per cycle: the first loop performs step (1), the interpolation to the grid.
The second loop invokes both grid$\rightarrow$particle interpolation and the particle push, so steps (3) and (4) are combined.
In the same loop, particles are checked against the boundary conditions and marked for deletion if needed.
\begin{figure}
\begin{centering}
\includegraphics[width=0.99\textwidth]{slurm_cycle.pdf}
\caption{Four-step explicit particle-in-cell algorithm implemented by Slurm.
}
\label{fig:algorithm}
\end{centering}
\end{figure}
\subsection{Step 1. Interpolation from particles to grid.}
\label{sec:interpolation}
Projection of data from particles to grid begins with setting the corresponding quantities ($m$, $V$, $e$, $\phi$ for cells and $m$, $V$, $\mathbf{u}$, $\mathbf{A}$ for nodes) to zero.
Interpolation is implemented in the \lstinline|ParticleManager| class in one loop over all particles.
It fetches the required information from each particle and adds the corresponding contribution to the surrounding grid nodes and grid cells.
When all particles have been processed, the accumulated values of the physical quantities on each grid element are normalized to the total interpolated weight at that grid element.
Information from each particle ($\epsilon$, $m$, $V$, $V\phi$) is projected onto 27 cells: the cell that encloses the particle, and the 26 cells that share a face, an edge, or a vertex with it.
Interpolation weights are given by the second-order b-spline
\begin{equation}
w_{c,x} = \begin{cases}
-x^2 + \frac{3}{4}, & 0 \leq \left|x\right| < \frac{1}{2}, \\
\frac{1}{2} x^2 - \frac{3}{2} \left| x \right| + \frac{9}{8}, & \frac{1}{2} \leq \left| x \right| < \frac{3}{2},
\end{cases}
\end{equation}
where $x=\left(x_c - x_p\right) / \Delta x$ is the difference between the cell center coordinate $x_c$ and the particle's coordinate $x_p$ normalized to the cell's extent $\Delta x$.
The total weight is the product of the three weights in each dimension
\begin{equation}
w_c = w_{c,x} \cdot w_{c,y} \cdot w_{c,z}.
\end{equation}
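These weights form a partition of unity: for any particle position, the contributions to the three nearest cells in each dimension sum to one. A small C++ sketch of the one-dimensional weight (function name is ours):

```cpp
#include <cmath>
#include <cassert>

// Second-order b-spline weight as a function of the normalized distance
// x = (x_c - x_p) / dx between cell center and particle (name is ours).
double w2(double x)
{
    double a = std::fabs(x);
    if (a < 0.5) return -x * x + 0.75;
    if (a < 1.5) return 0.5 * x * x - 1.5 * a + 1.125;
    return 0.0;
}
```

For a particle at normalized offset $0.3$, for example, the three nearest cell centers lie at normalized distances $-1.3$, $-0.3$, and $0.7$, and the three weights sum to one.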
Conserved quantities to be projected on the nodes include $\mathbf{u}m$, $m$, $V$.
These are interpolated onto 8 vertices (nodes) of the cell which encloses this particle.
To compute interpolation weights, the conventional first order b-spline is used as described in \citet{Bacchini:2017}
\begin{equation}
w_n = \left(1 - \left|x_n - x_p\right|\right)\cdot\left(1 - \left|y_n - y_p\right|\right)\cdot\left(1 - \left|z_n - z_p\right|\right),
\end{equation}
where $x_n$, $y_n$, $z_n$ are the coordinates of this vertex (node).
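The trilinear weights likewise sum to unity over the eight vertices of the enclosing cell; a minimal sketch with coordinates normalized to a unit cell (names are ours):

```cpp
#include <cmath>
#include <cassert>

// First-order (trilinear) weight of the node at (xn, yn, zn) for a particle
// at (xp, yp, zp); all coordinates normalized to the unit enclosing cell.
double w1(double xn, double yn, double zn, double xp, double yp, double zp)
{
    return (1.0 - std::fabs(xn - xp)) * (1.0 - std::fabs(yn - yp))
         * (1.0 - std::fabs(zn - zp));
}

// Sum of the weights over the 8 corners of the unit cell.
double cornerWeightSum(double xp, double yp, double zp)
{
    double s = 0.0;
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 2; ++k)
                s += w1(i, j, k, xp, yp, zp);
    return s;
}
```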
A different procedure is used to interpolate $\mathbf{A}$ from particles to grid.
Electromagnetic vector potential is not an additive conserved quantity, but rather a smooth vector function of three coordinates.
We interpolate $\mathbf{A}$ to each grid node from an arbitrary hexahedron formed by the node's closest particles.
Each grid node keeps information about the eight closest particles, hence the interpolation is realized in the \lstinline|Grid| class.
It is called when all conserved quantities have been projected.
Interpolation from irregular grids is quite straightforward in 2D\footnote{\href{https://www.particleincell.com/2012/quad-interpolation/}{https://www.particleincell.com/2012/quad-interpolation}}, but in three dimensions it is more sophisticated, and requires the iterative solution of a nonlinear system.
For the sake of completeness, we provide the detailed description of the interpolation procedure adopted from NASA's b4wind User's Guide\footnote{\href{https://www.grc.nasa.gov/WWW/winddocs/utilities/b4wind_guide/trilinear.html}{https://www.grc.nasa.gov/WWW/winddocs/utilities/b4wind\_guide/trilinear.html}} in \ref{app:interpolation}.
Note that the same interpolation approach may be used for projecting the scalar potential $\phi$ onto the cells; however, none of our tests indicated that this is beneficial.
\subsection{Step 2. Advance the grid}
\subsubsection{Boundary conditions}
Boundary conditions should be applied to all quantities interpolated from particles, and to all derived quantities such as $\mathbf{B}$ and $\mathbf{J}$.
All ghost and boundary grid cells and nodes carry the references to the applicable \lstinline|BoundaryCondition| objects.
They implement methods which modify the given cell or node quantities according to the specific rule.
For instance, for a reflective boundary:
\begin{lstlisting}
// class BoundaryConditionReflective : public BoundaryCondition
void BoundaryConditionReflective::imposeOnNode(GridNode * n, SlurmDouble t)
{
    for (SlurmInt i = 0; i < 3; i++)
        if (i == normal)
            n->set_u(normal, 0);
        else
            n->set_u(i, 2 * n->get_u(i));
    n->set_m(2 * n->get_m());
}
\end{lstlisting}
\subsubsection{Directional derivatives}
To evolve the equations of MHD in time, and to compute certain derived quantities such as $\mathbf{B}$, $\mathbf{J}$, etc., the gradients or directional derivatives of the discretized quantities should be computed.
In Slurm, a directional derivative of a node quantity $q$ in the direction $x$ is cell-centered.
It is computed as the average over the four corresponding edges of that cell.
Using three indices $\textrm{i}$, $\textrm{j}$, $\textrm{k}$, corresponding to the $x$, $y$, $z$ coordinate dimensions, we designate the node-centered value as $q_{\textrm{ijk}}$, where each index is either $0$ for the `left' (bottom, front) or $1$ for the `right' (top, back) corner of the given cell.
Then the discrete directional derivatives are equal to
\begin{eqnarray}
\frac{\partial q}{\partial x} = \frac{q_{\textrm{100}} - q_{\textrm{000}} + q_{\textrm{110}} - q_{\textrm{010}} + q_{\textrm{101}} - q_{\textrm{001}} + q_{\textrm{111}} - q_{\textrm{011}}}{4 \Delta x}, \\
\frac{\partial q}{\partial y} = \frac{q_{\textrm{010}} - q_{\textrm{000}} + q_{\textrm{110}} - q_{\textrm{100}} + q_{\textrm{011}} - q_{\textrm{001}} + q_{\textrm{111}} - q_{\textrm{101}}}{4 \Delta y}, \\
\frac{\partial q}{\partial z} = \frac{q_{\textrm{001}} - q_{\textrm{000}} + q_{\textrm{101}} - q_{\textrm{100}} + q_{\textrm{011}} - q_{\textrm{010}} + q_{\textrm{111}} - q_{\textrm{110}}}{4 \Delta z},
\end{eqnarray}
where $\Delta x$, $\Delta y$, and $\Delta z$ are the cell's extents in the corresponding dimension.
The gradients of cell-centered quantities are node-centered.
They are computed in exactly the same way, except that $\Delta x$, $\Delta y$, and $\Delta z$ should represent the distances between the corresponding neighbor cell centers.
This way, the second derivative of a node quantity is node-centered, and the second derivative of a cell quantity is cell-centered.
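These stencils are exact for linear fields, which provides a convenient sanity check; a small C++ sketch of the cell-centered $x$-derivative (array layout and names are ours):

```cpp
#include <cmath>
#include <cassert>

// Cell-centered x-derivative of a node quantity q[i][j][k], where i, j, k
// in {0,1} index the cell's corners in x, y, z (layout is ours).
double ddx(const double q[2][2][2], double dx)
{
    return (q[1][0][0] - q[0][0][0] + q[1][1][0] - q[0][1][0]
          + q[1][0][1] - q[0][0][1] + q[1][1][1] - q[0][1][1]) / (4.0 * dx);
}

// Helper: sample the linear field q(x,y,z) = a x + b y + c z on a unit cell
// and return its discrete x-derivative (should equal a exactly).
double ddxLinear(double a, double b, double c)
{
    double q[2][2][2];
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 2; ++k)
                q[i][j][k] = a * i + b * j + c * k;
    return ddx(q, 1.0);
}
```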
\subsubsection{Solution of MHD equations}
A semi-implicit scheme is used to march the MHD equations on the grid in time.
First, from the cell-centered properties at the previous time step $\textrm{n}$, the new velocity is computed on each node.
\begin{equation}
\mathbf{u}^{\textrm{n+1}} = \mathbf{u}^{\textrm{n}} + \Delta t Q^{\textrm{n}},
\end{equation}
where $Q^{\textrm{n}}$ is the right-hand side of the discretized momentum equation (Eq.~\ref{eq:momentum}) computed with the `old' grid values.
Other grid properties are advanced using the new velocity according to the discretized Eqs.~\ref{eq:continuity}, \ref{eq:energy}, \ref{eq:A}, and \ref{eq:phi}, after the boundary conditions are imposed on $\mathbf{u}^{\textrm{n+1}}$
\begin{equation}
q^{\textrm{n+1}} = q^{\textrm{n}} + \Delta t Q\left(\mathbf{u}^{\textrm{n+1}}, \rho^{\textrm{n}}, p^{\textrm{n}}, \mathbf{A}^{\textrm{n}}, \phi^{\textrm{n}}\right),
\end{equation}
where $q$ is one of $\rho$, $e$, $\mathbf{A}$, or $\phi$, and $Q$ is the right-hand side of the corresponding MHD equation.
Note that boundary conditions need to be taken care of before interpolating the updated information to the particles.
The numerical scheme described above is a special case of the implicit scheme analyzed by \citet{Brackbill:Ruppel:1986}.
It is stable when the Courant condition is met, and offers better accuracy than a simple forward Euler method.
\subsection{Steps 3 and 4. Interpolate on, and push the particles}
The weights used to interpolate from grid to particles are the same as those used for particles to grid projection (Section~\ref{sec:interpolation}).
From grid cells to particles we interpolate the change of specific internal energy $\Delta e = e^{\textrm{n+1}} - e^{\textrm{n}}$, the cell-centered velocity gradient $\nabla\mathbf{u}^{\textrm{n+1}}$, and the change of scalar potential $\Delta \phi = \phi^{\textrm{n+1}} - \phi^{\textrm{n}}$.
From grid nodes, the following quantities are interpolated: $\mathbf{u}^{\textrm{n+1}}$, $\Delta\mathbf{u} = \mathbf{u}^{\textrm{n+1}} - \mathbf{u}^{\textrm{n}}$, and $\Delta\mathbf{A} = \mathbf{A}^{\textrm{n+1}} - \mathbf{A}^{\textrm{n}}$.
Physical quantities carried by the $p$-th particle are updated as
\begin{eqnarray}
\epsilon_p^{\textrm{n+1}} = \epsilon_p^{\textrm{n}} + m_p \Delta e, \\
\mathbf{u}_p^{\textrm{n+1}} = \mathbf{u}_p^{\textrm{n}} + \Delta\mathbf{u},\\
\mathbf{A}_p^{\textrm{n+1}} = \mathbf{A}_p^{\textrm{n}} + \Delta\mathbf{A}, \\
\phi_p^{\textrm{n+1}} = \phi_p^{\textrm{n}} + \Delta \phi.
\end{eqnarray}
The velocity gradient is used to advance the particle's volume according to either the strategy described by \citet{Bacchini:2017}, or in a simpler way:
\begin{equation}
V_p^{\textrm{n+1}} = V_p^{\textrm{n}} \left( 1 + \Delta t \Tr \nabla\mathbf{u}^{\textrm{n+1}} \right),
\end{equation}
where
\begin{equation}
\Tr \nabla\mathbf{u}^{\textrm{n+1}} \equiv \nabla\cdot\mathbf{u}^{\textrm{n+1}} = \nabla\mathbf{u}^{\textrm{n+1}}_{11} + \nabla\mathbf{u}^{\textrm{n+1}}_{22} + \nabla\mathbf{u}^{\textrm{n+1}}_{33}
\end{equation}
is the trace of the interpolated velocity gradient tensor.
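The simpler update is a one-line operation per particle; a minimal sketch (names are ours, with the trace passed explicitly):

```cpp
#include <cmath>
#include <cassert>

// Sketch of the simpler particle-volume update:
// V^{n+1} = V^n (1 + dt * tr(grad u)), diagonal entries passed explicitly.
double updateVolume(double V, double dt, double g11, double g22, double g33)
{
    return V * (1.0 + dt * (g11 + g22 + g33));
}
```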
Note that particle quantities are updated using the interpolated changes in the grid quantities ($\Delta q$), which ensures very low numerical diffusivity of the method.
The interpolated velocity $\mathbf{u}^{\textrm{n+1}}$ is used only to advance the particle's position
\begin{equation}
\mathbf{r}^{\textrm{n+1}} = \mathbf{r}^{\textrm{n}} + \mathbf{u}^{\textrm{n+1}} \Delta t.
\end{equation}
Once the new position is found, it is checked against boundary conditions.
If necessary, the particle is either reflected from the wall, flipped around the periodic boundary, or marked for deletion if it crosses an open boundary.
When all particles have been updated, the list of particles is walked through one more time, and all particles marked for deletion are removed.
Finally, if necessary, new particles are injected and the pre-computed particle ranges for each thread are updated.
\section{Examples}
Most test cases considered in this section can be directly compared with the Athena code validation paper by \citet{Stone:2008}.
Where appropriate, we provide references to the corresponding figures in that paper.
\subsection{Two interacting blast waves}
\begin{figure}
\begin{centering}
\includegraphics[width=0.99\textwidth]{two_blasts.pdf}
\caption{Simulations of the two interacting blast waves at two resolutions, with $2400$ cells (solid line), and with $200$ cells (crosses), at $t=0.038$.
}
\label{fig:two-blasts}
\end{centering}
\end{figure}
A simple one-dimensional shock tube problem proposed by \citet{Woodward:Colella:1984} is widely used to study the shock-capturing and stability properties of numerical codes.
In this problem, the initial state possesses a uniform density $\rho=1$, a ratio of specific heats $\gamma=1.4$, zero velocity, and three different pressures.
In the leftmost tenth of the domain $p=1000$, in the rightmost tenth $p=100$, and in between $p=0.01$.
The shock tube evolves quickly and produces multiple shock waves at high Mach numbers which are reflected from the walls and interact with each other.
This test is very hard for Eulerian codes to handle; Slurm, however, deals with it rather easily.
A step-by-step description of the complex evolution can be found in \citet{Woodward:Colella:1984}; here we only present the final result at time $t=0.038$.
We performed two runs, with $2400$ and $200$ cells; both with $40$ particles/cell and the same $\Delta t=4\cdot10^{-6}$.
The CFL condition computed on the fine grid using the largest initial value of the sound speed $c_s=\sqrt{\gamma p/\rho}=37.4$ suggests $\Delta t < 1.1\cdot 10^{-5}$.
The density and velocity profiles at $t=0.038$ in both simulations are shown in Figure~\ref{fig:two-blasts}.
These plots can be compared to the reference solutions in Figure~2h of the original paper \citep{Woodward:Colella:1984}.
Even at low grid resolution (200 cells), the contact discontinuity at $x=0.6$ (Fig.~\ref{fig:two-blasts}b) is very sharp.
However, even at high resolution there is an artificial spike at the density discontinuity at $x=0.8$.
This spike neither grows nor becomes a source of fictitious oscillations, even when the simulation is run further.
More sophisticated forms of hyperviscosity might be needed to get rid of it in our simple explicit first-order numerical scheme.
\subsection{Brio \& Wu shock tube}
\begin{figure}
\begin{centering}
\includegraphics[width=0.99\textwidth]{brio_wu.pdf}
\caption{Simulations of Brio \& Wu shock tube at two resolutions, with $3200$ cells (solid line), and with $400$ cells (crosses), at $t=0.1$.
}
\label{fig:brio-wu}
\end{centering}
\end{figure}
\citet{Brio:Wu:1988} proposed an extension of the Sod shock tube to MHD.
There are two initial states in the domain of extent $L_x=1$: the left state has $\rho=1$, $p=1$, and $B_y=1$; the right state has $\rho=0.125$, $p=0.1$, and $B_y=-1$.
In the whole domain, $B_x=0.75$, $\gamma=2$, and all other quantities are zero.
In terms of vector potential, the initial field is given by
\begin{equation}
A_z = \begin{cases}
-x, & 0 \leq x < 0.5, \\
x-1, & 0.5 \leq x \leq 1.
\end{cases}
\end{equation}
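A quick consistency check of this initialization: away from the discontinuity at $x=0.5$, a central difference of $A_z$ recovers $B_y=-\partial A_z/\partial x=\pm 1$, matching the two initial states. A small C++ sketch (function names are ours):

```cpp
#include <cmath>
#include <cassert>

// Initial A_z profile of the Brio & Wu tube, and the resulting B_y
// obtained by a central difference (function names are ours).
double AzBrioWu(double x)
{
    return (x < 0.5) ? -x : (x - 1.0);
}

double ByFromAz(double x, double dx)
{
    return -(AzBrioWu(x + dx) - AzBrioWu(x - dx)) / (2.0 * dx);
}
```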
At each computational cycle, when $\mathbf{B}$ is computed on grid cells from the interpolated $\mathbf{A}$ according to Eq.~\ref{eq:BfromA}, $B_x=0.75$ is added in each cell.
The aim of this test is to check how well the code distinguishes and handles different MHD shocks.
We present here the results of two runs: with $3200$ cells, $9$ particles/cell, $\Delta t=10^{-5}$, and with $400$ cells, $36$ particles/cell, $\Delta t=10^{-4}$.
In Figure~\ref{fig:brio-wu} the plots of $\rho$, $B_y$, $u_x$, and $u_y$ are shown at $t=0.1$.
These plots can be compared with Figure~2 of the original paper \citep{Brio:Wu:1988}.
Slurm accurately reproduces two fast rarefaction waves (FR), a slow compound wave (SM), a contact discontinuity (C), and the slow shock (SS).
There is still some numerical noise around the slow shock boundary, similar to a density spike in the interacting blast waves simulation.
This noise is efficiently damped by the Kuropatenko hyper-viscosity and does not affect other cells of the grid.
It neither grows in amplitude nor propagates further when the simulation runs well beyond $t=0.1$.
In this simulation, high spatial resolution is essential to correctly handle the shocks.
\subsection{Orszag-Tang}
\begin{figure}
\begin{centering}
\includegraphics[width=0.99\textwidth]{Fig6.pdf}
\caption{Orszag-Tang vortex simulation at $t=0.5$. Left: pressure along two cuts, at $y=0.3125$ (a) and at $y=0.4277$ (b) in the simulations with $256\times256$ cells (solid) and $128\times128$ cells (crosses).
Right: magnetic field amplitude.
This figure may be compared with Figures~5 and 6 in \citet{Stasyszyn:etal:2013MNRAS}, and Figure~23 in \citet{Stone:2008}.
}
\label{fig:orszag-tang}
\end{centering}
\end{figure}
The Orszag-Tang vortex represents a complex interaction of different MHD shocks in 2D, with a transition to MHD turbulence, and is often used as a reference test case for MHD code validation.
Initial pressure and density in a periodic domain with $L_x=L_y=1$ are uniform, $\rho=25/\left(36\pi\right)$, and $p=5/\left(12\pi\right)$.
Initial velocity components are $u_x=-\sin\left(2\pi y\right)$ and $u_y=\sin\left(2\pi x\right)$.
The magnetic field is given by $A_z=\left(B_0/4\pi\right)\cos\left(4\pi x\right) + \left(B_0/2\pi\right)\cos\left(2\pi y\right)$ with $B_0=1/\sqrt{4\pi}$.
Simulations at two grid resolutions, with $256\times256$ cells, $\Delta t=10^{-4}$, and with $128\times128$ cells, $\Delta t=5\cdot10^{-4}$, both using 25 particles/cell, are compared in Figure~\ref{fig:orszag-tang}.
Panels on the left represent gas pressure along two cuts, $y=0.3125$ (a) and at $y=0.4277$ (b), and may be used for quantitative comparison with reference results provided in Figure~23 of \citet{Stone:2008}, and in Figure~6 of \citet{Stasyszyn:etal:2013MNRAS}.
Slurm simulation with $256\times 256$ grid cells reproduces all major features of the reference solutions, except a few sharpest shock interfaces.
This is also confirmed by a snapshot of the magnetic field amplitude $B$ shown in the right panel of Fig.~\ref{fig:orszag-tang}.
This image could be compared with Fig.~5 in \citet{Stasyszyn:etal:2013MNRAS}.
\subsection{Rayleigh-Taylor}
\begin{figure}
\begin{centering}
\includegraphics[width=0.99\textwidth]{Fig7.pdf}
\caption{Snapshots of Rayleigh-Taylor instability simulation at time moments $t=5$, $t=10$, $t=12.5$, and $t=15$.
Each little cube corresponds to a computational particle (only $1$ out of every $27$ particles is shown).
Purple particles represent the heavy fluid, initially on top of the lighter fluid (green particles), and completely mixed at the end of the simulation.
Three quarters of the box in front of each snapshot has been clipped to visualize the motion inside the domain.
}
\label{fig:rayleigh-taylor}
\end{centering}
\end{figure}
The Rayleigh-Taylor instability, together with the Kelvin-Helmholtz instability, is a key ingredient in fluid modeling of many astrophysical phenomena, from solar convection to the solar wind-magnetosphere interaction.
As shown by \citet{Bacchini:2017}, particle volume evolution enables Slurm to successfully model the Kelvin-Helmholtz instability, even with explicit time-stepping.
Rayleigh-Taylor instability simulated by different numerical codes was compared by \cite{Liska:Wendroff:2003}.
There is no unique solution to the problem in the nonlinear regime, therefore we do not attempt to reproduce results of other codes.
Instead, we use the three-dimensional Rayleigh-Taylor problem to test the reflecting boundaries of our code and the symmetry of the solution, and to demonstrate its unique capability to study fluid mixing.
No pure grid-based code is able to track individual fluid particles, and thus none can be used self-consistently to investigate the mixing of different fluids or plasmas.
The computational domain is a $0.5\times0.5\times1.5$ box with periodic X and Y boundaries, and reflective Z boundaries.
In the top half of the box the fluid is heavier with $\rho=2$, while in the bottom half $\rho=1$.
The gravitational acceleration $g_z=-0.1$ and the ratio of specific heats $\gamma=1.4$ are constant throughout the domain.
The initial pressure is given by the hydrostatic equilibrium $p(z)=2.5+g_z\rho z$.
The instability is excited by a single-mode velocity perturbation of the form
\begin{equation}
\nonumber
u_z=u_0\frac{\left[1 + \cos\left(4\pi\left(x - 0.25\right)\right)\right] \left[1 + \cos\left(4\pi\left(y - 0.25\right)\right)\right] \left[1 + \cos\left(3\pi\left(z - 0.75\right)\right)\right]}{8},
\end{equation}
where $u_0=0.01$.
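This perturbation attains its maximum $u_0$ at the box center $(0.25, 0.25, 0.75)$ and vanishes at the periodic $x$ and $y$ boundaries; a small C++ sketch (function name is ours):

```cpp
#include <cmath>
#include <cassert>

// Single-mode velocity perturbation used to seed the Rayleigh-Taylor
// instability, exactly as given in the text (function name is ours).
double uzPerturbation(double x, double y, double z, double u0)
{
    const double pi = std::acos(-1.0);
    return u0 * (1.0 + std::cos(4.0 * pi * (x - 0.25)))
              * (1.0 + std::cos(4.0 * pi * (y - 0.25)))
              * (1.0 + std::cos(3.0 * pi * (z - 0.75))) / 8.0;
}
```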
Figure~\ref{fig:rayleigh-taylor} shows the nonlinear evolution of the instability simulated in a $32\times32\times96$ grid with $27$ particles/cell and $\Delta t=10^{-3}$.
The flow exhibits classical features of the Rayleigh-Taylor instability with secondary Kelvin-Helmholtz instabilities and growing turbulence on the edges of the flow.
Thanks to the computational particles we can catch these features even at rather low grid resolution, and also track how initially different fluids mix together.
\subsection{3D magnetic loop}
\begin{figure}
\begin{centering}
\includegraphics[width=0.99\textwidth]{Fig8.pdf}
\caption{Current density magnitude $J=\left|\nabla\times\mathbf{B}\right|$ in the beginning (left), during a boundary crossing (middle), and after two crossings (right).
}
\label{fig:ML3D}
\end{centering}
\end{figure}
As mentioned above, in 2D simulations Slurm preserves magnetic topology exactly (Eq.~\ref{eq:Az2D}), and two-dimensional magnetic loop test is only useful to track possible bugs.
Here we show that Slurm handles a 3D magnetic loop equally easily.
In this test, the electromagnetic vector potential inside a magnetic loop, inclined by $45^{\circ}$ to the vertical axis, is given by $A_x=A_z=A_0\left( r-R \right)$, and is zero outside the loop, $r>R$, where $r$ is the distance to the loop's axis, $R=0.3$ is the loop radius, and $A_0=0.001/\sqrt{2}$.
The domain's extent is $1$ in all dimensions, and the loop is advected in all three dimensions with speed $u_x=u_y=u_z=\sqrt{3}$.
The loop's magnetic field strength is very small to ensure high plasma beta and the absence of pressure-imbalance effects.
The results of the simulation on a grid of $100^3$ cells with $27$ particles/cell are shown in Fig.~\ref{fig:ML3D}.
The loop is formed by two current sheets: one infinitely thin (as thin as the numerical scheme allows) on the axis, and one cylindrical on the outer boundary of the loop.
The color in the figure depicts the current density $J$ in the loop.
The right snapshot, taken after two crossings, is indistinguishable from the left snapshot which is taken in the beginning of the simulation, hence Slurm perfectly preserves magnetic topology, electric current and magnetic energy of the advected loop.
This figure could be compared with Figure~34 of \citet{Stone:2008}.
\section{Summary}
We have successfully implemented and tested a particle-in-cell MHD model, Slurm.
Several features distinguish it from all previously known implementations of fluid PIC:
\begin{itemize}
\item Magnetic field evolution strategy. Fluid particles carry the electromagnetic potential $\mathbf{A}$, which is projected onto grid nodes from the closest particles. This way, the solenoidality of the magnetic field is ensured to machine precision, there is no diffusion of magnetic energy, and the magnetic topology is preserved accurately.
\item Particle volume evolution is taken into account. The volume is used to normalize the quantities interpolated between the grid and the particles. Volume evolution damps the numerical ringing instability and allows us to model key hydrodynamic instabilities such as Kelvin-Helmholtz and Rayleigh-Taylor.
\item Efficient bulk hyper-viscosity allows us to resolve shocks and accurately model reference problems.
\item Particles are stored as a doubly-linked list of objects, so they can be deleted and inserted on-the-fly without reducing the code's performance. This is particularly important for space weather simulations, where open boundaries are crucial.
\item The code uses a task-based parallel approach in which particles are split among multiple OpenMP threads operating in shared memory.
\end{itemize}
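The particle-storage choice above can be illustrated with a short sketch (hypothetical class and field names, not Slurm's actual data structures): a doubly-linked list gives $O(1)$ insertion and deletion of particles during the run, e.g.\ at open boundaries, without shifting an array.

```python
class Particle:
    """Node in a doubly-linked particle list; the payload is illustrative."""
    def __init__(self, x, volume):
        self.x, self.volume = x, volume
        self.prev = self.next = None

class ParticleList:
    """Doubly-linked list of particles supporting O(1) insert and remove."""
    def __init__(self):
        self.head = self.tail = None
        self.count = 0

    def append(self, p):
        p.prev, p.next = self.tail, None
        if self.tail:
            self.tail.next = p
        else:
            self.head = p
        self.tail = p
        self.count += 1
        return p

    def remove(self, p):
        # O(1): relink the neighbours, no search over the list is needed.
        if p.prev:
            p.prev.next = p.next
        else:
            self.head = p.next
        if p.next:
            p.next.prev = p.prev
        else:
            self.tail = p.prev
        self.count -= 1

    def __iter__(self):
        n = self.head
        while n:
            yield n
            n = n.next
```

Deleting a particle that leaves the domain then costs a constant number of pointer updates, independent of the total particle count.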
Future developments of Slurm include open boundary conditions for complex physical models such as the solar wind, optimization of parallel performance, and additional parallelization for distributed-memory systems.
Slurm is fully functional and tested, and is available online at Bitbucket\footnote{\href{https://bitbucket.org/volshevsky/slurm}{https://bitbucket.org/volshevsky/slurm}. The code will be made fully open; at the moment, please send us an e-mail to get access to the repository.}.
Although Slurm's primary application is envisioned in space weather modeling, the range of problems for which it is suitable includes interface flows, surface flows, and fluid mixing.
\vspace{5mm}
\noindent{\bf Acknowledgements}
This work is conducted under the Air Force Office of Scientific Research, Air Force Materiel Command, USAF under Award No. FA9550-14-1-0375.
V.O. and F.B. are thankful to Craig DeForest for useful discussions during the 2017 AFOSR $B_z$ meeting.
\section{Introduction}
Please follow the steps outlined below when submitting your manuscript to
the IEEE Computer Society Press. This style guide now has several
important modifications (for example, you are no longer warned against the
use of sticky tape to attach your artwork to the paper), so all authors
should read this new version.
\subsection{Language}
All manuscripts must be in English.
\subsection{Dual submission}
By submitting a manuscript to WACV, the authors assert that it has not been
previously published in substantially similar form. Furthermore, no paper
which contains significant overlap with the contributions of this paper
either has been or will be submitted during the WACV 2018 review period to
{\bf either a journal} or any conference (including WACV 2018) or any
workshop.
{\bf Papers violating this condition will be rejected.}
If there are papers that may appear to the reviewers
to violate this condition, then it is your responsibility to: (1)~cite
these papers (preserving anonymity as described in Section 1.6 below),
(2)~argue in the body of your paper why your WACV paper is non-trivially
different from these concurrent submissions, and (3)~include anonymized
versions of those papers in the supplemental material.
\subsection{Paper length}
Consult the call for papers for page-length limits.
Overlength papers will simply not be reviewed. This includes papers
where the margins and formatting are deemed to have been significantly
altered from those laid down by this style guide. Note that this
\LaTeX\ guide already sets figure captions and references in a smaller font.
The reason such papers will not be reviewed is that there is no provision for
supervised revisions of manuscripts. The reviewing process cannot determine
the suitability of the paper for presentation in eight pages if it is
reviewed in eleven. If you submit 8 for review expect to pay the added page
charges for them.
\subsection{The ruler}
The \LaTeX\ style defines a printed ruler which should be present in the
version submitted for review. The ruler is provided in order that
reviewers may comment on particular lines in the paper without
circumlocution. If you are preparing a document using a non-\LaTeX\
document preparation system, please arrange for an equivalent ruler to
appear on the final output pages. The presence or absence of the ruler
should not change the appearance of any other content on the page. The
camera ready copy should not contain a ruler. (\LaTeX\ users may uncomment
the \verb'\wacvfinalcopy' command in the document preamble.) Reviewers:
note that the ruler measurements do not align well with lines in the paper
--- this turns out to be very difficult to do well when the paper contains
many figures and equations, and, when done, looks ugly. Just use fractional
references (e.g.\ this line is $095.5$), although in most cases one would
expect that the approximate location will be adequate.
\subsection{Mathematics}
Please number all of your sections and displayed equations. It is
important for readers to be able to refer to any particular equation. Just
because you didn't refer to it in the text doesn't mean some future reader
might not need to refer to it. It is cumbersome to have to use
circumlocutions like ``the equation second from the top of page 3 column
1''. (Note that the ruler will not be present in the final copy, so is not
an alternative to equation numbers). All authors will benefit from reading
Mermin's description of how to write mathematics.
\subsection{Blind review}
Many authors misunderstand the concept of anonymizing for blind
review. Blind review does not mean that one must remove
citations to one's own work---in fact it is often impossible to
review a paper unless the previous citations are known and
available.
Blind review means that you do not use the words ``my'' or ``our''
when citing previous work. That is all. (But see below for
techreports)
Saying ``this builds on the work of Lucy Smith [1]'' does not say
that you are Lucy Smith, it says that you are building on her
work. If you are Smith and Jones, do not say ``as we show in
[7]'', say ``as Smith and Jones show in [7]'' and at the end of the
paper, include reference 7 as you would any other cited work.
An example of a bad paper just asking to be rejected:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of our
previous paper [1], and show it to be inferior to all
previously known methods. Why the previous paper was
accepted without this analysis is beyond me.
[1] Removed for blind review
\end{quote}
An example of an acceptable paper:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of the
paper of Smith \etal [1], and show it to be inferior to
all previously known methods. Why the previous paper
was accepted without this analysis is beyond me.
[1] Smith, L and Jones, C. ``The frobnicatable foo
filter, a fundamental contribution to human knowledge''.
Nature 381(12), 1-213.
\end{quote}
If you are making a submission to another conference at the same time,
which covers similar or overlapping material, you may need to refer to that
submission in order to explain the differences, just as you would if you
had previously published related work. In such cases, include the
anonymized parallel submission~\cite{Authors06} as additional material and
cite it as
\begin{quote}
[1] Authors. ``The frobnicatable foo filter'', F\&G 2011 Submission ID 324,
Supplied as additional material {\tt fg324.pdf}.
\end{quote}
Finally, you may feel you need to tell the reader that more details can be
found elsewhere, and refer them to a technical report. For conference
submissions, the paper must stand on its own, and not {\em require} the
reviewer to go to a techreport for further details. Thus, you may say in
the body of the paper ``further details may be found
in~\cite{Authors06b}''. Then submit the techreport as additional material.
Again, you may not assume the reviewers will read this material.
Sometimes your paper is about a problem which you tested using a tool which
is widely known to be restricted to a single institution. For example,
let's say it's 1969, you have solved a key problem on the Apollo lander,
and you believe that the WACV 70 audience would like to hear about your
solution. The work is a development of your celebrated 1968 paper entitled
``Zero-g frobnication: How being the only people in the world with access to
the Apollo lander source code makes us a wow at parties'', by Zeus \etal.
You can handle this paper like any other. Don't write ``We show how to
improve our previous work [Anonymous, 1968]. This time we tested the
algorithm on a lunar lander [name of lander removed for blind review]''.
That would be silly, and would immediately identify the authors. Instead
write the following:
\begin{quotation}
\noindent
We describe a system for zero-g frobnication. This
system is new because it handles the following cases:
A, B. Previous systems [Zeus et al. 1968] didn't
handle case B properly. Ours handles it by including
a foo term in the bar integral.
...
The proposed system was integrated with the Apollo
lunar lander, and went all the way to the moon, don't
you know. It displayed the following behaviours
which show how well we solved cases A and B: ...
\end{quotation}
As you can see, the above text follows standard scientific convention,
reads better than the first version, and does not explicitly name you as
the authors. A reviewer might think it likely that the new paper was
written by Zeus \etal, but cannot make any decision based on that guess.
He or she would have to be sure that no other authors could have been
contracted to solve problem B.
FAQ: Are acknowledgements OK? No. Leave them for the final copy.
\begin{figure}[t]
\begin{center}
\fbox{\rule{0pt}{2in} \rule{0.9\linewidth}{0pt}}
\end{center}
\caption{Example of caption. It is set in Roman so that mathematics
(always set in Roman: $B \sin A = A \sin B$) may be included without an
ugly clash.}
\label{fig:long}
\label{fig:onecol}
\end{figure}
\subsection{Miscellaneous}
\noindent
Compare the following:\\
\begin{tabular}{ll}
\verb'$conf_a$' & $conf_a$ \\
\verb'$\mathit{conf}_a$' & $\mathit{conf}_a$
\end{tabular}\\
See The \TeX book, p165.
The space after \eg, meaning ``for example'', should not be a
sentence-ending space. So \eg is correct, {\em e.g.} is not. The provided
\verb'\eg' macro takes care of this.
When citing a multi-author paper, you may save space by using ``et alia'',
shortened to ``\etal'' (not ``{\em et.\ al.}'' as ``{\em et}'' is a complete word.)
However, use it only when there are three or more authors. Thus, the
following is correct: ``
Frobnication has been trendy lately.
It was introduced by Alpher~\cite{Alpher02}, and subsequently developed by
Alpher and Fotheringham-Smythe~\cite{Alpher03}, and Alpher \etal~\cite{Alpher04}.''
This is incorrect: ``... subsequently developed by Alpher \etal~\cite{Alpher03} ...''
because reference~\cite{Alpher03} has just two authors. If you use the
\verb'\etal' macro provided, then you need not worry about double periods
when used at the end of a sentence as in Alpher \etal.
For this citation style, keep multiple citations in numerical (not
chronological) order, so prefer \cite{Alpher03,Alpher02,Authors06} to
\cite{Alpher02,Alpher03,Authors06}.
\begin{figure*}
\begin{center}
\fbox{\rule{0pt}{2in} \rule{.9\linewidth}{0pt}}
\end{center}
\caption{Example of a short caption, which should be centered.}
\label{fig:short}
\end{figure*}
\section{Formatting your paper}
All text must be in a two-column format. The total allowable width of the
text area is $6\frac78$ inches (17.5 cm) wide by $8\frac78$ inches (22.54
cm) high. Columns are to be $3\frac14$ inches (8.25 cm) wide, with a
$\frac{5}{16}$ inch (0.8 cm) space between them. The main title (on the
first page) should begin 1.0 inch (2.54 cm) from the top edge of the
page. The second and following pages should begin 1.0 inch (2.54 cm) from
the top edge. On all pages, the bottom margin should be 1-1/8 inches (2.86
cm) from the bottom edge of the page for $8.5 \times 11$-inch paper; for A4
paper, approximately 1-5/8 inches (4.13 cm) from the bottom edge of the
page.
\subsection{Margins and page numbering}
All printed material, including text, illustrations, and charts, must be kept
within a print area 6-7/8 inches (17.5 cm) wide by 8-7/8 inches (22.54 cm)
high. Page numbers should be placed in the footer, centered, .75 inches from
the bottom of the page, and should start at your assigned starting page
number rather than the 4321 in the example. To do this, find the line (around
line 23)
\begin{verbatim}
\setcounter{page}{4321}
\end{verbatim}
where the number 4321 is your assigned starting page.
Make sure the first page is numbered by commenting out, on line 46, the
setting that leaves the first page empty
\begin{verbatim}
\end{verbatim}
\subsection{Type-style and fonts}
Wherever Times is specified, Times Roman may also be used. If neither is
available on your word processor, please use the font closest in
appearance to Times to which you have access.
MAIN TITLE. Center the title 1-3/8 inches (3.49 cm) from the top edge of
the first page. The title should be in Times 14-point, boldface type.
Capitalize the first letter of nouns, pronouns, verbs, adjectives, and
adverbs; do not capitalize articles, coordinate conjunctions, or
prepositions (unless the title begins with such a word). Leave two blank
lines after the title.
AUTHOR NAME(s) and AFFILIATION(s) are to be centered beneath the title
and printed in Times 12-point, non-boldface type. This information is to
be followed by two blank lines.
The ABSTRACT and MAIN TEXT are to be in a two-column format.
MAIN TEXT. Type main text in 10-point Times, single-spaced. Do NOT use
double-spacing. All paragraphs should be indented 1 pica (approx. 1/6
inch or 0.422 cm). Make sure your text is fully justified---that is,
flush left and flush right. Please do not place any additional blank
lines between paragraphs.
Figure and table captions should be 9-point Roman type as in
Figures~\ref{fig:onecol} and~\ref{fig:short}. Short captions should be centred.
\noindent Callouts should be 9-point Helvetica, non-boldface type.
Initially capitalize only the first word of section titles and first-,
second-, and third-order headings.
FIRST-ORDER HEADINGS. (For example, {\large \bf 1. Introduction})
should be Times 12-point boldface, initially capitalized, flush left,
with one blank line before.
SECOND-ORDER HEADINGS. (For example, { \bf 1.1. Database elements})
should be Times 11-point boldface, initially capitalized, flush left,
with one blank line before, and one after. If you require a third-order
heading (we discourage it), use 10-point Times, boldface, initially
capitalized, flush left, preceded by one blank line, followed by a period
and your text on the same line.
\subsection{Footnotes}
Please use footnotes\footnote {This is what a footnote looks like. It
often distracts the reader from the main flow of the argument.} sparingly.
Indeed, try to avoid footnotes altogether and include necessary peripheral
observations in
the text (within parentheses, if you prefer, as in this sentence). If you
wish to use a footnote, place it at the bottom of the column on the page on
which it is referenced. Use Times 8-point type, single-spaced.
\subsection{References}
List and number all bibliographical references in 9-point Times,
single-spaced, at the end of your paper. When referenced in the text,
enclose the citation number in square brackets, for
example~\cite{Authors06}. Where appropriate, include the name(s) of
editors of referenced books.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Method & Frobnability \\
\hline\hline
Theirs & Frumpy \\
Yours & Frobbly \\
Ours & Makes one's heart Frob\\
\hline
\end{tabular}
\end{center}
\caption{Results. Ours is better.}
\end{table}
\subsection{Illustrations, graphs, and photographs}
All graphics should be centered. Please ensure that any point you wish to
make is resolvable in a printed copy of the paper. Resize fonts in figures
to match the font in the body text, and choose line widths which render
effectively in print. Many readers (and reviewers), even of an electronic
copy, will choose to print your paper in order to read it. You cannot
insist that they do otherwise, and therefore must not assume that they can
zoom in to see tiny details on a graphic.
When placing figures in \LaTeX, it's almost always best to use
\verb+\includegraphics+, and to specify the figure width as a multiple of
the line width as in the example below
{\small\begin{verbatim}
\usepackage[dvips]{graphicx} ...
\includegraphics[width=0.8\linewidth]
{myfile.eps}
\end{verbatim}
}
\subsection{Color}
Color is valuable, and will be visible to readers of the electronic copy.
However ensure that, when printed on a monochrome printer, no important
information is lost by the conversion to grayscale.
\section{Final copy}
You must include your signed IEEE copyright release form when you submit
your finished paper. We MUST have this form before your paper can be
published in the proceedings.
{\small
\bibliographystyle{ieee}}
\section{Introduction}
Five billion people in the world lack access to safe surgical care \cite{alkire2015global}. According to the World Health Organization, there is a global mortality rate of 0.5-5\% for major procedures, and up to 25\% of patients undergoing operations that require a hospital stay suffer complications \cite{weiser2008estimation}. Many of these complications are caused by poor individual and team performance, in which inadequate training and feedback play an important role \cite{gawande1999incidence,healey2002complications,kable2002adverse}. These outcomes can be improved, as studies show that half of all adverse surgical events are preventable.\par
\begin{figure}
\centering
\includegraphics[scale=0.36]{overview.png}\\
\caption{\small{Real-time automated surgical video analysis could facilitate objective and efficient assessment of surgical skill and provide feedback on surgical performance. In this work, we introduce an approach to: (1) automatically detect, classify, and localize surgical instruments in real-world laparoscopic surgical videos; and (2) efficiently analyze surgical performance based on the extracted tool information.}}
\label{fig:overview}
\end{figure}
One major challenge to improving these outcomes is that surgeons currently lack individualized, objective feedback on their surgical technique and how to improve it. Manual assessment of surgeon performance requires expert supervision, which is both subjective and time-consuming, with many operations lasting hours. Real-time automated surgical video analysis could provide a way to objectively and efficiently assess surgical skill.
A number of works address frame-level tool detection in laparoscopic surgical videos, including~\cite{sahu2016m2cai,twinanda2016single,raju2016m2cai,twinanda2016endonet}, as part of the 2016 M2CAI Tool Presence Detection Challenge, which introduced the m2cai16-tool dataset \cite{m2cai}. However, rich analysis of surgeon performance involves analyzing tool movement, such as movement range and economy of motion, and requires detecting the spatial bounds of tools in addition to their presence.
In this work, we address the task of spatial tool detection in laparoscopic surgical videos, which to our knowledge has not been previously studied. To do so, we introduce a new dataset, m2cai16-tool-locations, which extends the m2cai16-tool dataset with spatial bounds of tools. We develop an approach leveraging region-based convolutional neural networks (R-CNNs) to perform spatial detection of tools, and show that our method both effectively detects the spatial bounds of tools and significantly outperforms previous work on detecting tool presence.
Finally, because our deep learning approach allows for tool localization in addition to frame-level detection, it enables richer analysis of tool movements. We demonstrate the ability of our method to assess surgical quality through analysis of tool usage patterns, movement range, and economy of motion (Figure \ref{fig:overview}). We extract key quantitative and qualitative metrics that are proven to reflect surgical skill \cite{stylopoulosu2003celts}. We collaborated with surgeons who manually reviewed each surgical video; their assessments substantiated our findings, validating the ability of our method to efficiently and accurately assess surgical quality. This not only avoids the subjectivity inherent to human assessment, but also significantly reduces the time it takes to analyze a procedure.
In summary, we introduce m2cai16-tool-locations, a new dataset extending the m2cai16-tool dataset with spatial bounds of the tools. We use an approach leveraging region-based convolutional neural networks to effectively perform spatial detection of the tools, and show that converting these detections to frame-level presence detections also significantly outperforms the state-of-the-art on that task. Importantly, we demonstrate that our spatial detections in these real-world laparoscopic surgical videos enable automatic assessment of surgical quality through analysis of tool usage patterns, movement range, and economy of motion.
\section{Related Work}
Early work on surgical tool detection, categorization, and tracking includes approaches based on radio frequency identification (RFID) tags~\cite{kranzfelder2013rfid}; segmentation, contour processing, and 3D modelling~\cite{speidel2009automatic}; and the Viola-Jones detection framework~\cite{lalys2012framework}.
Deep learning approaches based on convolutional neural networks have shown impressive performance on computer vision tasks~\cite{russakovsky2015imagenet}, and works including \cite{sahu2016m2cai,twinanda2016single,raju2016m2cai,twinanda2016endonet} leverage deep learning architectures to achieve state-of-the-art performance on surgical tool presence detection and phase recognition. The M2CAI 2016 Tool Presence Detection Challenge~\cite{m2cai} introduced a benchmark for surgical tool presence detection.
While most existing studies address frame-level presence detection, Sarikaya \textit{et al.} perform surgical tool localization in videos of robot-assisted surgical training tasks, using multimodal convolutional neural networks~\cite{sarikaya2017}. Automated surgical scene understanding and skill assessment are further areas of study, and several works have addressed specific components of surgical video analysis, including surgical phase recognition and activity recognition. As a part of the M2CAI 2016 Surgical Workflow Challenge, works including~\cite{jin2016workflow, cadene2016workflow, twinanda2016endonet, stauder2017workflow} address surgical phase recognition in cholecystectomy videos. Additionally, Zia \textit{et al.} analyze task-specific suturing and knot-tying videos, using symbolic, texture, and frequency features \cite{zia2016}. Similarly, Lalys \textit{et al.} propose a framework using Hidden Markov Models and visual features, such as shape, color, and texture, to identify surgical tasks~\cite{lalys2012framework}. Lea \textit{et al.} use a skip-chain conditional random field and handcrafted features to classify actions in short segments of robotic surgery training tasks, including suturing and knot tying~\cite{lea2015}, and Reiter \textit{et al.} perform surgical tool localization and pose estimation; however, their approach is limited to robotic arms that return kinematic data \cite{reiter}. Lin \textit{et al.} demonstrate the effectiveness of a Bayes classifier in differentiating the skill level of intermediate and expert surgeons on a suturing task, processing features extracted from motion data output by the da Vinci surgical system~\cite{lin2010}. Finally, Reiley \textit{et al.} use Hidden Markov Models at the task and gesture levels to assess robotic laparoscopic surgery~\cite{reiley2009}.
A limitation of these task-specific studies is that isolated surgical tasks and gestures differ substantially from real-world surgeries and do not accurately reflect operative performance.
Our work builds on these prior contributions and uses region-based convolutional neural networks~\cite{ren2015faster} to detect the spatial bounds of tools, enabling richer, more comprehensive assessment of surgical quality in real-world laparoscopic cholecystectomies (minimally invasive surgical removal of the gallbladder). Moreover, while prior studies often use only short segments of procedures or videos of surgeons performing simulated training tasks to analyze surgical skill, our work leverages unedited, full-length surgical operations. This distinction is crucial for performance assessment: real-world operations involve smoke, lens fogging, variable anatomy, and usage patterns not found in simulation scenarios, while limited segments of real operations may not give a comprehensive picture of surgical performance. Both simulation and video segments of actual operations are therefore limited in their ability to facilitate assessment of surgical performance. In contrast, our extracted metrics are generated from comprehensive post-operative assessment of full surgical procedures.
They have the added benefit that they can be correlated with post-operative clinical results, providing a link between surgical skill and outcomes.
\section{Dataset}
\begin{figure*}
\centering
\includegraphics[scale=0.375]{7tools.png}\\\centering
\vspace{2ex}
\captionsetup{justification=centering}
\caption{\small{Top: the seven tools in m2cai16-tool-locations. Bottom: example frames with their spatial tool annotations. Color of the bounding box corresponds to tool identity.}}
\label{fig:7tools}
\end{figure*}
There are few existing datasets on automated tool identification. Most center around frame-level tool presence detection, including m2cai16-tool, which was released for the M2CAI 2016 Tool Presence Detection Challenge, and Cholec80~\cite{twinanda2016endonet}. Both m2cai16-tool and Cholec80 contain cholecystectomy surgical videos performed at the University Hospital of Strasbourg in France, where each surgical video frame is labeled with binary annotations indicating tool presence. Sarikaya \textit{et al.} also address tool localization in videos of robot-assisted surgical training tasks~\cite{sarikaya2017}. We expand upon these earlier works and introduce a new dataset to study the task of surgical tool localization in real-world laparoscopic surgeries and to enable higher-level analysis of surgical videos.
\begin{table}
\centering
\scalebox{0.84}{
\begin{tabular}{@{}cc@{}}
\toprule
\rowcolor[HTML]{EFEFEF}
\textbf{Tool} & \textbf{Number of annotated instances} \\ \midrule
\textbf{Grasper} & $923$ \\
\rowcolor[HTML]{EFEFEF}
\textbf{Bipolar} & $350$ \\
\textbf{Hook} & $308$ \\
\rowcolor[HTML]{EFEFEF}
\textbf{Scissors} & $400$ \\
\textbf{Clipper} & $400$ \\
\rowcolor[HTML]{EFEFEF}
\textbf{Irrigator} & $485$ \\
\textbf{Specimen Bag} & $275$ \\
\rowcolor[HTML]{EFEFEF}
\textbf{Total} & \textbf{$3141$} \\
\textbf{Number of Frames} & \textbf{$2532$} \\
\bottomrule
\end{tabular}}
\caption{\small{Number of annotated tool instances per class in m2cai16-tool-locations.}}
\label{table:toolannot}
\end{table}
In order to study this task, a dataset containing annotations of spatial bounds of tools is required. However, to the best of our knowledge, no such dataset currently exists for real-world laparoscopic surgical videos. We therefore collect and introduce a new dataset, m2cai16-tool-locations, which extends the m2cai16-tool dataset \cite{m2cai} with spatial annotations of tools. We will publicly release this dataset.
m2cai16-tool consists of 15 videos of cholecystectomy procedures recorded at 25 fps. Videos 1 to 10 are used for training the R-CNN and videos 11 to 15 for testing the model. The videos, whose durations range from 20 to 75 minutes, are downsampled to 1 fps for processing, yielding a dataset of 23,000 frames labeled with binary annotations indicating the presence or absence of seven surgical tools: grasper, bipolar, hook, scissors, clip applier, irrigator, and specimen bag.
In m2cai16-tool-locations, we label 2532 of the frames, under supervision and spot-checking from a surgeon, with the coordinates of spatial bounding boxes around the tools, using 50\%, 30\%, and 20\% of these frames for the training, validation, and test splits. The 2532 frames were selected from among the 23,000 total frames: we first annotated all frames containing just one tool, then increased the number of annotated instances per tool class by additionally labeling frames with two and three tools. The breakdown of the number of annotations per tool class is detailed in Table~\ref{table:toolannot}. Figure \ref{fig:7tools} shows each tool in the dataset, its number of spatial annotations, and examples of annotations in a number of frames.
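The 50/30/20 split can be sketched as follows. This is an illustrative sketch only: the paper does not specify how frames were assigned to splits, so the random shuffle and the \texttt{split\_frames} helper are assumptions.

```python
import random

def split_frames(frame_ids, seed=0):
    """Partition annotated frame ids into 50/30/20 train/val/test subsets.
    Illustrative only: the shuffling strategy is an assumption, not the
    procedure used to build m2cai16-tool-locations."""
    ids = list(frame_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for reproducibility
    n_train = int(0.5 * len(ids))
    n_val = int(0.3 * len(ids))
    return (ids[:n_train],
            ids[n_train:n_train + n_val],
            ids[n_train + n_val:])

# The 2532 annotated frames yield 1266 / 759 / 507 frames per split.
train, val, test = split_frames(range(2532))
```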
\section{Approach}
\begin{figure*}
\centering
\includegraphics[scale=0.47]{faster_rcnn.png}
\captionsetup{justification=centering}
\caption{\small{Faster R-CNN architecture. The input to the network is a frame from a surgical video. The base network of Faster R-CNN is a VGG-16 convolutional neural network. This is connected to a region proposal network (RPN) that shares convolutional features with object detection networks. For each input image, the RPN generates region proposals likely to contain an object, and features are pooled over these regions before being passed to a final classification and bounding box refinement network. The output is the spatial bounding box positions of detected surgical tools in the video frame.}}
\label{fig:faster_rcnn}
\end{figure*}
Our approach for spatial detection of surgical tools is based on Faster R-CNN~\cite{ren2015faster}, a region-based convolutional neural network described in more detail below. The input is a video frame, and the output is the spatial coordinates of bounding boxes around any of the seven surgical instruments in Figure \ref{fig:7tools}. This output allows us to perform qualitative and quantitative analyses of tool movements, from tracking tool usage patterns to evaluating motion economy, and to correlate these measures with surgical skill, ultimately setting the stage for higher-level analysis of surgical performance.
The Faster R-CNN architecture we use is shown in Figure \ref{fig:faster_rcnn}. The base network is a VGG-16 convolutional neural network~\cite{simonyan2014very} with 16 convolutional layers, which extracts powerful visual features. On top of this network is a region proposal network (RPN) that shares convolutional features with object detection networks. For each input image, the RPN generates region proposals likely to contain an object, and features are pooled over these regions before being passed to a final classification and bounding box refinement network. The use of the RPN enables significant computational gains over related previous work, R-CNN \cite{girshick2014rich} and Fast R-CNN \cite{girshick2015fast}.
The RPN is trained by optimizing the following loss function for each image:
\begin{equation}
\begin{split}
\label{eq1}
L(\{p_i\},\{t_i\}) & =\frac{1}{N_{cls}}\sum_{i}L_{cls}(p_i,p_i^*) \\ & +\lambda\frac{1}{N_{reg}}\sum_{i}p_i^*L_{reg}(t_i,t_i^*)
\end{split}
\end{equation}
Here \(i\) indexes ``anchors'' corresponding to each sliding window position of the input feature map, \(p_i\) is the anchor's objectness probability, and \(t_i\) is the coordinates of the predicted bounding box. \(p_i^*\) is the ground-truth label of whether an anchor is a true object location based on Intersection over Union (IoU) with ground-truth annotations, and \(t_i^*\) is the coordinates of the ground-truth box corresponding to a positive anchor. The loss function is therefore a weighted combination of a classification loss \(L_{cls}\) for the binary objectness label and a regression loss \(L_{reg}\) for bounding box coordinates. \(N_{cls}\) and \(N_{reg}\) are normalization constants and \(\lambda\) weights the contributions of classification and regression. The classification and bounding box refinement networks on top of the pooled regions of interest are trained using standard cross-entropy and regression loss functions.
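As a concrete illustration, the loss in Equation (\ref{eq1}) can be written out in a few lines of NumPy. The specific choices below are assumptions following the Faster R-CNN defaults rather than values reported in this text: binary cross-entropy for \(L_{cls}\), smooth-L1 for \(L_{reg}\), \(\lambda = 10\), and both normalizers set to the number of anchors for simplicity.

```python
import numpy as np

def smooth_l1(x):
    """Smooth-L1 (Huber) penalty used for L_reg in Faster R-CNN."""
    ax = np.abs(x)
    return np.where(ax < 1.0, 0.5 * x ** 2, ax - 0.5)

def rpn_loss(p, p_star, t, t_star, lam=10.0):
    """Evaluate Eq. (1): normalized objectness cross-entropy plus box
    regression summed over positive anchors only (p_star == 1).
    p: predicted objectness per anchor; t, t_star: box coords (N x 4)."""
    p = np.clip(p, 1e-7, 1 - 1e-7)  # avoid log(0)
    n_cls = n_reg = len(p)          # simplified N_cls, N_reg
    l_cls = -(p_star * np.log(p) + (1 - p_star) * np.log(1 - p)).sum() / n_cls
    l_reg = (p_star[:, None] * smooth_l1(t - t_star)).sum() / n_reg
    return l_cls + lam * l_reg
```

A perfect prediction drives both terms to (numerically) zero, while mislocalized positive anchors are penalized through the regression term only.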
While Faster R-CNN has shown impressive performance on detection of everyday objects, the domain of surgical videos and surgical tools has quite different visual characteristics. We pre-train the network on the ImageNet dataset~\cite{russakovsky2015imagenet}, where a large amount of data is available to learn general visual features, and then fine-tune the network on our m2cai16-tool-locations dataset, where a smaller amount of data is labeled with the surgical tools of interest.
To train the RPN, we assign a binary objectness label to each anchor at each sliding window position of the feature map. We assign a positive label to anchors with an Intersection over Union (IoU) greater than 0.8 with a ground-truth box or, if no such anchors exist, to the anchor or anchors with the highest IoU, and a negative label to anchors with an IoU of less than 0.3.
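The anchor-labeling rule above can be sketched as follows. This is a minimal NumPy sketch: the \texttt{iou} and \texttt{label\_anchors} helpers are ours, and marking anchors between the two thresholds as 0 (ignored during training) is our reading of the standard Faster R-CNN convention.

```python
import numpy as np

def iou(a, b):
    """Intersection over Union of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def label_anchors(anchors, gt_boxes, pos_thresh=0.8, neg_thresh=0.3):
    """Return +1 (positive), -1 (negative), or 0 (ignored) per anchor."""
    best = np.array([max(iou(a, g) for g in gt_boxes) for a in anchors])
    labels = np.zeros(len(anchors))
    labels[best >= pos_thresh] = 1
    if not (labels == 1).any():           # fallback: no anchor clears 0.8,
        labels[best == best.max()] = 1    # so take the highest-IoU anchor(s)
    labels[(best < neg_thresh) & (labels != 1)] = -1
    return labels
```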
We fine-tune the VGG-16 network using stochastic gradient descent, modifying the classification layer of the network to output softmax probabilities over the seven tools. All layers are fine-tuned for 40K iterations with a mini-batch size of 50, and a 3$\times$3 kernel size is used. We perform data augmentation by randomly flipping frames horizontally. The learning rate is initialized at 10$^{-3}$ for all layers and decreased by a factor of 10 every 10K iterations. Total training time was approximately two days on an NVIDIA Tesla K40 GPU, and the network's processing speed at deployment is 5 fps, enabling real-time surgical tool detection.
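The step schedule described above (initialized at $10^{-3}$, divided by 10 every 10K of the 40K iterations) amounts to a one-line rule:

```python
def learning_rate(iteration, base_lr=1e-3, step=10_000, gamma=0.1):
    """Step schedule: start at 1e-3 and decay by 10x every 10K iterations."""
    return base_lr * gamma ** (iteration // step)
```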
\section{Results}
In this section, we quantitatively evaluate our approach on the tasks of spatial detection and frame-level presence detection of surgical tools, using m2cai16-tool-locations and m2cai16-tool, respectively. We demonstrate strong performance on the new task of spatial detection, and by leveraging spatial annotations, we significantly outperform existing works on presence detection. Finally, we qualitatively illustrate the ability of our approach to analyze tool usage patterns, movement range, and economy of motion for assessment of surgical performance.
\subsection{Spatial detection}
\begin{figure*}
\centering
\includegraphics[scale=0.5]{sample.png}
\captionsetup{justification=centering}
\caption{\small{Example frames of spatial detection results. Bounding box color corresponds to predicted tool identity. Correct predictions are boxed in green (top), and mistakes are boxed in red (bottom). The model is able to successfully detect, classify, and localize surgical instruments despite varying tool positions and angles, and despite some parts of the tools being occluded, as shown in column (c).}}
\label{fig:sample}
\end{figure*}
To the best of our knowledge, this study is the first to perform surgical tool localization in real-world laparoscopic surgical videos, setting the stage for richer analysis of surgical performance. Table \ref{table:spatialAP} presents performance, measured by average precision (AP), on spatial detection of the surgical tools in m2cai16-tool-locations.
\begin{table}[h!]
\centering
\scalebox{0.84}{
\begin{tabular}{@{}cc@{}}
\toprule
\rowcolor[HTML]{EFEFEF}
\textbf{Tool} & \textbf{AP} \\ \midrule
\textbf{Grasper} & $48.3$ \\
\rowcolor[HTML]{EFEFEF}
\textbf{Bipolar} & $67.0$ \\
\textbf{Hook} & $78.4$ \\
\rowcolor[HTML]{EFEFEF}
\textbf{Scissors} & $67.7$ \\
\textbf{Clipper} & $86.3$ \\
\rowcolor[HTML]{EFEFEF}
\textbf{Irrigator} & $17.5$ \\
\textbf{Specimen Bag} & $76.3$ \\
\rowcolor[HTML]{EFEFEF}
\textbf{mAP} & $\mathbf{63.1}$ \\ \bottomrule
\end{tabular}}
\caption{\small{Spatial detection average precision (AP) per-class and mean average precision (mAP) in m2cai16-tool-locations. Clipper achieves highest performance, likely due to its well-visualized usage pattern. Irrigator has notably low performance, likely due to its generic shape (it looks like the pole of every other instrument) and its lack of use in every procedure.}}
\label{table:spatialAP}
\end{table}
A detection with an Intersection over Union (IoU) of at least 0.5 with a ground-truth bounding box of the same class is considered correct. The overall mAP is 63.1, indicating strong performance. Clipper is the highest performing tool, likely due to its usage pattern: while only present for a specific step in cholecystectomies, surgeons make sure it is well visualized the entire time it is being used, and this consistent usage pattern may have contributed to its high performance. The irrigator, on the other hand, is difficult, likely due to its generic shape (it looks like the pole of every other instrument) and the fact that it is not used in every procedure.
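For reference, per-class AP under this protocol can be computed roughly as below. This is a simplified sketch of standard detection evaluation, not the authors' exact code: detections are greedily matched to unmatched ground-truth boxes in order of decreasing score, and precision is integrated over recall with a plain rectangle rule rather than the interpolation some benchmarks use.

```python
import numpy as np

def box_iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def average_precision(detections, gt, iou_thresh=0.5):
    """detections: list of (frame_id, score, box); gt: frame_id -> [boxes].
    A detection is a true positive when its IoU with an unmatched
    ground-truth box of the class is at least iou_thresh."""
    n_gt = sum(len(v) for v in gt.values())
    matched = {f: [False] * len(v) for f, v in gt.items()}
    tps = []
    for frame, _score, box in sorted(detections, key=lambda d: -d[1]):
        ious = [box_iou(box, g) for g in gt.get(frame, [])]
        j = int(np.argmax(ious)) if ious else -1
        if j >= 0 and ious[j] >= iou_thresh and not matched[frame][j]:
            matched[frame][j] = True
            tps.append(1.0)
        else:
            tps.append(0.0)
    tp = np.cumsum(tps)
    recall = tp / n_gt
    precision = tp / np.arange(1, len(tps) + 1)
    ap, prev_r = 0.0, 0.0
    for p_i, r_i in zip(precision, recall):  # rectangle rule over recall
        ap += p_i * (r_i - prev_r)
        prev_r = r_i
    return ap
```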
Figure~\ref{fig:sample} provides example frames of detection results. Columns (a) through (d) display frames with increasing numbers of tools present; from left to right, 1, 2, or 3 tools are present per frame. We find that our model is able to successfully detect, classify, and localize surgical instruments despite varying tool positions and angles, and despite some parts of the tools being occluded, as shown in column (c). Examples (1) through (4) present incorrect or partially incorrect detections. In (1), the clipper is mistaken for a scissors, perhaps because the angle of the tool renders its identity ambiguous. In (2), the center grasper bounding box is a false positive; the structure and shape of the gallbladder and liver form an outline that may appear to be a grasper to the model. Interestingly, in both (3) and (4), the poles of the hook and grasper, respectively, are identified as an irrigator. This mistake is not surprising, as the irrigator tip takes on the generic shape of any tool pole. Additionally, (4) presents a false negative, as the grasper in the center of the frame is left undetected by the model. Overall, our model performs strongly on the task of spatial tool detection.
\subsection{Frame-level presence detection}
The spatial detections output from our model can also be converted to frame-level presence predictions, and evaluated on the m2cai16-tool presence detection benchmark. Table~\ref{table:frameAP} presents AP performance on frame-level presence detection of tools in m2cai16-tool.
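One plausible conversion, which we assume here since the text does not spell out the rule, is to take the maximum detection score per class in each frame and threshold it; the \texttt{presence\_from\_detections} helper and the 0.5 threshold are illustrative choices:

```python
def presence_from_detections(detections, num_frames, num_classes=7, thresh=0.5):
    """detections: list of (frame_id, class_id, score, box).
    Returns a num_frames x num_classes boolean presence table, where a
    class is marked present if its best detection score clears thresh."""
    scores = [[0.0] * num_classes for _ in range(num_frames)]
    for frame, cls, score, _box in detections:
        scores[frame][cls] = max(scores[frame][cls], score)
    return [[s >= thresh for s in row] for row in scores]
```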
\begin{table}[h!]
\centering
\scalebox{0.84}{
\begin{tabular}{@{}cc@{}}
\toprule
\rowcolor[HTML]{EFEFEF}
\textbf{Tool} & \textbf{AP} \\ \midrule
\textbf{Grasper} & $87.2$ \\
\rowcolor[HTML]{EFEFEF}
\textbf{Bipolar} & $75.1$ \\
\textbf{Hook} & $95.3$ \\
\rowcolor[HTML]{EFEFEF}
\textbf{Scissors} & $70.8$ \\
\textbf{Clipper} & $88.4$ \\
\rowcolor[HTML]{EFEFEF}
\textbf{Irrigator} & $73.5$ \\
\textbf{Specimen Bag} & $82.1$ \\
\rowcolor[HTML]{EFEFEF}
\textbf{mAP} & $\mathbf{81.8}$ \\ \bottomrule
\end{tabular}}
\caption{\small{Frame-level presence detection average precision (AP) per-class and mean average precision (mAP) in m2cai16-tool. Hook achieves the highest performance, perhaps due to its distinct tip shape. Scissors has lowest performance, most likely due to its brief usage during the operation and common two-pronged tip shape.}}
\label{table:frameAP}
\end{table}
Hook achieves the highest performance; one possible explanation is that its distinct tip shape makes it easily distinguishable from other tools. Clipper, grasper, and specimen bag also perform well, while bipolar and irrigator are more frequently misidentified, perhaps due to their generic tip shapes as well as their sparse and irregular appearances, as usage of these tools is not essential in every cholecystectomy. For the bipolar and irrigator, usage is conditional on the real-time circumstances of the operation. The bipolar, an instrument for dissection and hemostasis, is used to prevent likely bleeding, or to cauterize bleeding when it does occur. The irrigator, an instrument used for flushing and suctioning an area, is typically used during a cholecystectomy procedure when blood accumulates in the surgical field. Scissors has the lowest performance, most likely due to its brief usage during the operation and its common two-pronged tip shape.
Figure~\ref{fig:comptable} compares mAP performance with winners of the 2016 M2CAI Tool Presence Detection Challenge~\cite{m2cai}. By leveraging the new spatial annotations during training, our approach is able to significantly outperform all previous work. In comparison to the challenge studies, we achieve a 28\% relative improvement, increasing mAP from 63.8 to 81.8, using just 2,500 of the 23,000 frames (roughly a tenth of the data) in m2cai16-tool, which the previous studies used in its entirety to train their tool presence detection models. We also surpass EndoNet~\cite{twinanda2016endonet}, a CNN architecture based on AlexNet~\cite{krizhevsky2012alexnet}, on the tool presence detection task.
\begin{figure}
\centering
\includegraphics[scale=0.38]{comptable.png}
\captionsetup{justification=centering}
\caption{\small{Comparison with winners of the M2CAI Tool Presence Detection Challenge~\cite{m2cai} on frame-level presence detection in m2cai16-tool. By leveraging the new spatial annotations during training, our approach is able to significantly outperform all previous work.}}
\vspace{-4ex}
\label{fig:comptable}
\end{figure}
\subsection{Assessment of Surgical Performance}
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.55]{fig.png}
\caption{\small{Timelines (top), heat maps (middle), and trajectories (bottom) of tool usage for the testing videos 1 through 4 in m2cai16-tool. In the timelines, (a)-(g) correspond to Grasper, Bipolar, Hook, Scissors, Clipper, Irrigator, and Specimen Bag, respectively. These metrics effectively measure bimanual dexterity, efficiency, and overall operative skill and enable us to efficiently examine back and forth switching of instruments, movement range, and motion patterns of tools. We find that testing video 2 correlates with the most well-executed surgery, reflecting focused and skillful execution of each step of the surgical procedure. In contrast, the surgeons in the other testing videos have much less economy of motion, handle the instruments with less dexterity, and struggle with certain parts of the procedure.}}
\vspace{-2ex}
\label{fig:fig}
\end{figure*}
The current standard for surgical performance assessment is to have an expert surgeon observe the operation in its entirety and provide feedback to the surgeon performing the surgery. The Global Operative Assessment of Laparoscopic Skills (GOALS) rating system is a validated rubric for grading surgeons on laparoscopic surgery performance. For each of the five assessment domains of GOALS (depth perception, bimanual dexterity, efficiency, tissue handling, and autonomy), surgeons are rated on a scale of 1 to 5, from least to most technically proficient, for a final score out of 25.
Operative skill significantly impacts patient outcomes. Birkmeyer \textit{et al.}~\cite{birkmeyer2013surgical} have demonstrated that performance on standardized ratings of surgical skill is correlated with patient complication rates and hospital readmissions. Despite this, most surgeons receive no formal feedback on their operative performance, and the process of manually rating surgeries is time-consuming and subject to bias. Motivated by these factors, we use the output of our deep learning model, which consists of frame-by-frame tool presence, identity, and location information, to extract key metrics that are proven to reflect surgical skill, such as instrument usage times and path length~\cite{stylopoulosu2003celts}. We also create visual depictions of the progression of the operation, such as instrument usage timelines and tool trajectory maps, to parallel the GOALS rating system.
By working with a group of surgeons who each independently reviewed and rated each testing video manually, we are able to validate our approach to assess surgical performance on the five testing videos in our dataset. In the following paragraphs, we interweave the surgeons' independent ratings and comments with our objective analyses of the surgical performance in each video.
To assess instrument usage patterns, we generate timelines displaying tool usage over the course of each of the five testing videos (Figure \ref{fig:fig}, top). In examining the timelines, we find that testing video 2 corresponds to the most well-executed surgery. The surgery itself is efficient, and the number of times that different instruments are switched out for one another is minimal, reflecting focused and skillful execution of each step of the surgical procedure. On the contrary, the timeline for testing video 3 reflects inefficient and poorer technique, with more frequent switching back and forth of instruments.
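Concretely, each timeline row can be built by collapsing a tool's per-frame presence vector (at 1 fps) into usage intervals; the \texttt{usage\_intervals} helper below is our own sketch, not part of the paper's pipeline:

```python
def usage_intervals(presence):
    """Collapse a per-frame boolean presence vector into half-open
    (start, end) intervals, in frame indices, for a timeline plot."""
    intervals, start = [], None
    for t, present in enumerate(presence):
        if present and start is None:
            start = t                      # tool appears
        elif not present and start is not None:
            intervals.append((start, t))   # tool leaves the field
            start = None
    if start is not None:                  # tool still present at video end
        intervals.append((start, len(presence)))
    return intervals
```

The number of intervals summed across tools then gives a simple proxy for how often instruments are switched out.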
Moreover, the instrument usage timelines and bar graphs quantifying the total time each instrument is used reflect the level of technical proficiency in handling tissue (Figure \ref{fig:toolusagetime}). The longer presence and periodic appearance of the bipolar in testing video 3 indicate rough tissue handling resulting in tissue damage, as the bipolar is used to stop bleeding.
\begin{figure}
\centering
\includegraphics[scale=0.38]{toolusagetime.png}
\caption{\small{Total instrument usage times, by video. Reflecting level of skill in handling tissue, the longer presence of the bipolar in testing video 3 indicates tissue damage, as the bipolar is used to stop bleeding.}}
\vspace{-3ex}
\label{fig:toolusagetime}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.38]{totaldistance.png}
\caption{\small{Total distance traveled by the tools during the clipping phase, by video. Testing video 2 reflects the most technically excellent surgery, with the greatest economy of motion and most deft tool handling.}}
\vspace{-3ex}
\label{fig:totaldistance}
\end{figure}
To study the movement range of the surgical instruments, we generate heat maps of bounding box occurrences and locations (Figure \ref{fig:fig}, middle). As an indicator of technical proficiency, especially of bimanual dexterity and efficiency, the heat maps once again reveal the surgical skill demonstrated in testing video 2 to be the highest among the test set. Better surgeons generally handle instruments in a more focused region of the operative field, exhibiting greater precision and economy of motion.
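A heat map of this kind can be accumulated directly from the detected boxes. In this sketch, incrementing every pixel a box covers is our assumption about how occurrences are aggregated:

```python
import numpy as np

def movement_heatmap(boxes, height, width):
    """Accumulate bounding-box occurrences over a video: each detected box,
    given as (x1, y1, x2, y2) in pixels, increments the pixels it covers."""
    h = np.zeros((height, width))
    for x1, y1, x2, y2 in boxes:
        h[int(y1):int(y2), int(x1):int(x2)] += 1
    return h
```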
Finally, our deep learning model for automated tool detection and localization enables tool tracking to study motion patterns. To gain further insight into bimanual dexterity and efficiency, we generated tool trajectory maps for the clipping phase of each of the five testing videos. We chose to examine this phase in particular because placing clips on the cystic artery and cystic duct is one of the most important steps of the cholecystectomy procedure: if these clips are placed in the wrong place, or if they come loose, the patient can suffer devastating complications.
The clipper (blue) and grasper (red) trajectories in testing video 2 reflect the deftness with which the surgeon placed the clips. The trajectory maps for testing videos 2 and 3 also provide valuable information for evaluating bimanual dexterity. While the clipper is used to clip the cystic artery and cystic duct, the grasper is used to hold the gallbladder in place and facilitate clip application. Based on the trajectory maps, the surgeons in testing videos 2 and 3 adeptly maneuver the grasper such that they do not need to constantly adjust position, tension, or grip, in contrast to the trajectory maps for testing videos 1, 4, and 5. Indeed, when surgeons manually reviewed each of the testing videos to rate them with GOALS, they found that the surgeon in testing video 1 struggled to place the clips properly: he or she placed an additional clip, had difficulty prying it loose, and ultimately removed it.
We also quantitatively measured efficiency by computing total distance traveled by the tools during the clipping phase (Figure \ref{fig:totaldistance}). Once again, testing video 2 reflects the most technically excellent surgery, with the greatest economy of motion and most deft tool handling.
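Total distance traveled can be approximated by summing per-frame displacements of each tool's bounding-box center. This is a simplified sketch under our own assumptions (center-point tracking, no smoothing or gap handling):

```python
import math

def path_length(centers):
    """Total Euclidean path length of a tool, given its per-frame
    bounding-box centers as (x, y) pairs; frames with missed
    detections are assumed to have been dropped upstream."""
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(centers, centers[1:]))
```

Shorter path lengths for the same task indicate greater economy of motion.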
\subsection{Validation of approach with GOALS}
In addition to the blinded, subjective assessments of the test videos embedded in the preceding paragraphs, the three surgeons independently rated four of the test videos using a modified version of the GOALS assessment rubric. For this study, we omitted the autonomy domain, which can only be assessed in person and is not applicable to every surgery; thus, the total possible score for each video was 20. Test videos 1-4 received average composite scores of 10.00, 18.67, 9.33, and 13.33, respectively. The scores and subcomponent scores of GOALS, such as the efficiency score, directly correlated with the metrics extracted by our algorithm, such as the heat maps and tool trajectories. Table~\ref{table:goals} presents the GOALS surgeon ratings for each of the testing videos.
\begin{table}[]
\centering
\scalebox{0.84}{
\begin{tabular}{@{}ccccc@{}}
\toprule
\rowcolor[HTML]{EFEFEF}
\textbf{} & \textbf{Video 1} & \textbf{Video 2} & \textbf{Video 3} & \textbf{Video 4} \\ \midrule
\textbf{Depth Perception} & $2.67$ & $4.67$ & $2.33$ & $3.67$ \\
\rowcolor[HTML]{EFEFEF}
\textbf{Bimanual Dexterity} & $3.00$ & $4.67$ & $2.00$ & $3.33$ \\
\rowcolor[HTML]{FFFFFF}
\textbf{Efficiency} & $2.00$ & $4.67$ & $2.33$ & $3.00$ \\
\rowcolor[HTML]{EFEFEF}
\textbf{Tissue Handling} & $2.33$ & $4.67$ & $2.67$ & $3.33$ \\
\rowcolor[HTML]{FFFFFF}
\textbf{Total} & $\mathbf{10.00}$ & $\mathbf{18.67}$ & $\mathbf{9.33}$ & $\mathbf{13.33}$ \\ \bottomrule
\end{tabular}}
\caption{\small{Average composite GOALS ratings for the four testing videos, including average subcomponent scores, as rated independently by the three surgeons. According to these ratings, the surgeon in testing video 2 is the most skilled, with a high score of 18.67 out of a maximum of 20.00, while the surgeons in testing videos 1 and 3 need to improve their operative technique, with scores of 10.00 and 9.33, respectively. These findings are consistent with our analyses based on the extracted assessment metrics, suggesting our approach offers a much more efficient way to review surgeon performance.}}
\label{table:goals}
\end{table}
\section{Conclusion}
In this work, we present a new dataset, m2cai16-tool-locations, and an approach based on region-based convolutional neural networks to address the task of spatial tool detection in real-world laparoscopic surgical videos. We show that our method achieves strong performance on the new spatial detection task, outperforms previous work on frame-level presence detection, and runs in real time at 5 fps. Furthermore, it can be used to extract rich surgical assessment metrics such as tool usage patterns, movement range, and economy of motion, which directly correlate with independent assessments of the same videos made by experienced surgeons. Future work includes continuing to build on the types of meaningful information that can be automatically extracted from surgical videos, including smoothness of motion, tissue damage, repeated movements, and phase of procedure, as well as developing an automated GOALS rating system.
\section{Introduction}
\label{sec:intro}
Astronomers have made use of visual galaxy morphologies to understand the dynamical structure of these systems for nearly ninety years
\citep[e.g.,][]{Hubble1936,
deVauc1959,
Sandage1961,
vandenBergh1976,
NairAbraham2010,
Baillard2011}.
The division between early-type and late-type systems corresponds, for example, to a wide range of parameters from mass and luminosity, to environment, colour, and star formation history
\citep[e.g.,][]{Kormendy1977,
Dressler1980,
Strateva2001,
Blanton2003,
Kauffman2003,
Nakamura2003,
Shen2003,
Peng2010};
while detailed observations of morphological features such as bars and bulges
provide information about the history of their host systems
\citep[e.g., reviews by][]{KK04,
Elmegreen2008,
Sheth2008,
Masters2010,
Simmons2014}.
Modern studies of morphology divide systems into broad classes
\citep[e.g.,][]{Conselice2006,
Lintott2008,
Kartaltepe2015,
Peth2016},
but a wealth of information can be gained from identifying new and often rare classes, such as low redshift clumpy galaxies \citep[e.g.,][]{Elmegreen2013}, polar-ring galaxies \citep[e.g.,][]{Whitmore1990}, and the green peas \citep{Cardamone2009}.
The Galaxy Zoo project has provided a solution that scales visual classification for current surveys by harnessing the combined power of thousands of volunteers \citep{Lintott2008, Lintott2011, Willett2013, Willett2017, Simmons2017}, producing a prolific amount of scientific output \citep[e.g.,][]{Land2008, Bamford2009, Darg2010, Schawinski2014, Galloway2015, Smethurst2016}. However, upcoming surveys such as~\textit{LSST} and \textit{Euclid} will require a different approach, as they will image more than a billion new galaxies \citep{LSST, Euclid}. If detailed morphologies can be extracted for just 0.1\% of this imaging, we will have millions of images to contend with. A project of this magnitude would take more than sixty years to classify at Galaxy Zoo's current rate and configuration. Standard visual morphology methods will thus be unable to cope with the scale of data.
Another approach has been the automated extraction of morphologies with the development of parametric \citep{Sersic1968, Odewahn2002, Peng2002}, and non-parametric \citep{Abraham1994, Conselice2003, Abraham2003, Lotz2004,
Freeman2013} structural indicators. While these scale well to large samples
\citep[e.g.,][]{Simard2011,
Griffith2012,
Casteels2014,
Holwerda2014,
Meert2016},
they often fail to capture detailed structure and can provide only statistical morphologies with large uncertainties \citep[e.g.,][]{Abraham1996, Bershady2000}.
\begin{figure*}[ht!]
\plotone{f1.pdf}
\caption{Schematic of our hybrid system. Humans provide classifications of galaxy images via a web interface. We simulate this with the Galaxy Zoo 2 classification data described in Section~\ref{sec: data}. Human classifications are processed with an algorithm described in Section~\ref{sec: SWAP}. Subjects that pass a set of thresholds are considered human-retired (fully classified) and provide the training sample for the machine classifier as described in Section~\ref{sec: machine}. The trained machine is applied to all subjects not yet retired. Those that pass an analogous set of machine-specific thresholds are considered machine-retired. The rest remain in the system to be classified by either human or machine. This procedure is repeated nightly. Our results are reported in Section~\ref{sec: results}. \label{fig: schematic}}
\end{figure*}
Machine learning techniques, another automated approach, are becoming increasingly popular for classification and image-processing tasks. These generally work by defining a set of features that describes the morphology in an $N$-dimensional space; the location in this morphology space defines a morphological type for each galaxy. The morphology space can be learned through algorithms such as Support Vector Machines \citep{HuertasCompany2008} or Principal Component Analysis \citep{Watanabe1985, Scarlata2007}. Another approach is deep learning, a machine learning technique that attempts to model high-level abstractions. Algorithms such as convolutional and artificial neural networks (CNNs, ANNs) have been used for galaxy morphology classification with impressive accuracy \citep{Ball2004,
Banerji2010,
Dieleman2015,
HuertasCompany2015}.
A drawback to all machine learning classification techniques is the need for
standardized training data, with more complex algorithms requiring more data. Furthermore, these data must be consistent for each survey: differences in resolution and depth can be implicitly learned by the algorithm, making application to disparate surveys challenging.
In this work we present a system that preserves the best features of both visual and automatic classifications, developing for the first time a framework that brings both human and machine intelligence to the task of galaxy morphology classification in order to handle the scale and scope of next-generation data. We demonstrate the effectiveness of such a system through a re-analysis of visual galaxy morphology classifications collected during the Galaxy Zoo 2 project, and combine these with a Random Forest machine learning algorithm that trains on a suite of non-parametric morphology indicators widely used for automated morphologies. The primary goal of this paper is to generalize how such a system would work in the context of upcoming surveys like LSST and Euclid.
As a proof of concept, we focus on the first question of the Galaxy Zoo decision tree. We demonstrate that our current implementation provides at least a factor of 8 increase in the rate of galaxy morphology classification while maintaining at least 93.5\% classification accuracy as compared to Galaxy Zoo 2 published data. We first present an overview of our framework, which also serves as a blueprint for this paper.
\added{\subsection{Galaxy Zoo Express Overview}}
The Galaxy Zoo Express (GZX) framework combines human and machine to increase morphological classification efficiency, both in terms of the classification rate and required human effort. Figure~\ref{fig: schematic} presents a schematic of GZX including section numbers as a shortcut for the reader. We note that transparent portions of the schematic represent areas of future work which we explore in Section~\ref{sec: visions}. Any system combining human and machine classifications will have a set of generic features: a group of human classifiers, at least one machine classifier, and a decision engine which determines how these classifications should be combined.
In this work we demonstrate our system through a re-analysis of Galaxy Zoo 2 (GZ2) crowd-sourced classifications \added{as described in Section \ref{sec: data}. We compute ``ground truth'' labels for each galaxy in the GZ2 sample from the published GZ2 classification catalogue (Section \ref{sec: ground truth}).} The GZ2 data allow us to create simulations of human classifiers \deleted{(described in Section~\ref{sec: data})} whose classifications are used most effectively when processed with SWAP, a Bayesian code first developed for the Space Warps gravitational lens discovery project~\citep{Marshall2016} and described in Section~\ref{sec: SWAP}. \replaced{Galaxy images classified through SWAP (hereafter, \textit{subjects}) provide the machine's training sample.}{SWAP aggregates the crowd-sourced classifications of galaxy images (hereafter, \textit{subjects}) producing a final label for each subject (Section \ref{sec: fiducial}). \added{We show that SWAP produces significant gains in classification efficiency as well as a reduction of human effort in Sections \ref{sec: swap is faster} and \ref{sec: less human effort}.} In Section \ref{sec: swap gz2 disagree} we compare these labels to the ``ground truth'' labels computed from GZ2's traditional crowd-sourced classification method. Subjects classified by SWAP then provide the machine's training sample.}
In Section~\ref{sec: machine}, we incorporate a machine classifier. We develop a Random Forest algorithm that trains on measured morphology indicators such as Concentration, Asymmetry, Gini coefficient and \M{20}, well-suited for the top-level question of the GZ2 decision tree, discussed below. \added{Section \ref{sec: decision engine} discusses the decision engine we develop that delegates tasks between human classification and the Random Forest.} After a sufficient number of subjects have been classified by humans \added{via SWAP}, the machine is trained and its performance assessed through cross-validation. This procedure is repeated nightly and the machine's performance increases with the size of the training sample, albeit with a performance limit. Once the machine reaches an acceptable level of performance it is applied to the remaining galaxy sample \added{as explored in Section \ref{sec: machine shop}}.
\added{The results of our combined GZX system are provided in Section \ref{sec: results}.} Even with this simple description, one can see that the classification process will progress in three phases. First, the machine will not yet have reached an acceptable level of performance; only humans contribute to subject classification. Second, the machine's performance will improve; both humans and machine will be responsible for classification. Finally, machine performance will slow; remaining images will likely need to be classified by humans. This result is detailed in Section \ref{sec: who retires what}. Furthermore, in Section \ref{sec: machine performance}, we find evidence that the Random Forest may be capable of correctly identifying subjects that humans miss, providing a complementary approach to galaxy classification. \deleted{These results are explored in Section~\ref{sec: results}.} This blueprint allows even modest machine learning routines to make significant contributions alongside human classifiers and removes the need for ever-increasing performance in machine classification. \added{Discussion and conclusions are presented in Section \ref{sec: visions}.}
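The three-phase workflow above can be summarised in a minimal sketch of one nightly cycle. All names and thresholds here are hypothetical illustrations rather than the actual GZX implementation, and the machine's nightly retraining is elided (its scores are passed in directly):

```python
def nightly_cycle(posteriors, machine_scores, t_lo=0.004, t_hi=0.99,
                  m_lo=0.004, m_hi=0.99, min_train=100):
    """One simulated night of a hybrid human/machine classifier.

    posteriors: {subject_id: SWAP posterior P(Featured)} after the
    day's human votes; machine_scores: {subject_id: machine P(Featured)}.
    Returns (human_retired, machine_retired, still_pending).
    """
    human, machine, pending = {}, {}, []
    for sid, p in posteriors.items():
        if p >= t_hi:
            human[sid] = 'Featured'
        elif p <= t_lo:
            human[sid] = 'Not'
        else:
            pending.append(sid)          # stays in the system
    if len(human) >= min_train:          # machine acts only once trained
        for sid in list(pending):
            q = machine_scores.get(sid)
            if q is not None and (q >= m_hi or q <= m_lo):
                machine[sid] = 'Featured' if q >= m_hi else 'Not'
                pending.remove(sid)
    return human, machine, pending
```

Early in a project `human` dominates; once the training sample grows, the machine retires an increasing share of the pending subjects, leaving the hardest cases to humans.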
\section{Galaxy Zoo 2 Classification Data}
\label{sec: data}
Our simulations utilize original classifications made by volunteers during the GZ2 project. These data\footnote{\url{data.galaxyzoo.org}} are described in detail in~\cite{Willett2013}, though we provide a brief overview here. The GZ2 subject sample consists of 285,962 galaxies identified as the brightest 25\% ($r$-band magnitude $< 17$) residing in the SDSS North Galactic Cap region from Data Release 7, and includes subjects with both spectroscopic and photometric redshifts out to $z < 0.25$. Subjects were shown as colour composite images via a web-based interface\footnote{\url{www.galaxyzoo.org}} wherein volunteers answered a series of questions pertaining to the morphology of the subject. With the exception of the first question, subsequent queries were dependent on volunteer responses to the previous task, creating a complex decision tree\footnote{A visualization of this decision tree can be found at \url{https://data.galaxyzoo.org/gz_trees/gz_trees.html}}. Using GZ2 nomenclature, a \textit{classification} is the total amount of information about a subject obtained by completing all tasks in the decision tree. A subject is \textit{retired} after it has achieved a sufficient number of classifications.
For our current analysis, we choose the first task in the tree: ``Is the galaxy simply smooth and rounded, with no sign of a disk?" to which possible responses include ``smooth", ``features or disk", or ``star or artifact". This choice serves two purposes: 1) this is one of only two questions in the GZ2 decision tree that is asked about every subject, thus maximizing the amount of data we have to work with, and 2) our analysis assumes a binary task, and this question is simple enough to cast as such. Specifically, we combine ``star or artifact" responses with ``features or disk" responses.
\added{\subsection{``Ground truth'' labels}}
\label{sec: ground truth}
We assign each subject a descriptive label in order to validate our classification output against that of GZ2. GZ2 classifications are composed of volunteer vote fractions for each response to every task in the decision tree, denoted as $f_{\mathrm{response}}$. \replaced{They are derived from the fraction of volunteers who voted for a particular response and are thus approximately continuous.}{The most basic of these is computed simply as $f_{\mathrm{r}} = n_{\mathrm{r}}/n_{\mathrm{t}}$, that is, the number of votes of response $r$ divided by the total number of votes for task $t$. Vote fractions are thus approximately continuous.} A common technique is to place a threshold on these vote fractions to select samples with an emphasis on purity or completeness, depending on the science case. For our current analysis we choose a threshold of 0.5, that is, if $f_{\mathrm{featured}}$+$f_{\mathrm{artifact}}$~$ >$ $f_{\mathrm{smooth}}$, the galaxy is labelled~`Featured', otherwise it is labelled~`Not'. We note that only 512 subjects in the GZ2 catalogue have a majority $f_{\mathrm{artifact}}$, contributing less than half a percent contamination when combining the ``star or artifact" with ``features or disk" responses.
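This labelling rule is a one-line threshold; the function below is our own notation for it, not code from the GZX pipeline:

```python
def gz2_raw_label(f_smooth, f_featured, f_artifact):
    """Descriptive label at the 0.5 threshold used in the text;
    'star or artifact' votes are folded into 'features or disk'."""
    return 'Featured' if f_featured + f_artifact > f_smooth else 'Not'
```
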
The GZ2 catalogue publishes three types of vote fractions for each subject: raw, weighted, and debiased. Debiased vote fractions are calculated to correct for redshift bias, a task that GZX does not perform. The weighted vote fractions account for inconsistent volunteers. The SWAP algorithm (described below) also has a mechanism to weight volunteer votes, however, the two methods are in stark contrast. For consistency, we thus derive labels from the \replaced{raw vote fractions (GZ2$_{\text{raw}}$); those that have received no post-processing whatsoever.}{simple ``raw'' vote fractions defined above, and designate the resulting labels as GZ2$_{\text{raw}}$.} In total, the data consist of over 14 million classifications from 83,943 individual volunteers.
The \added{GZ2$_{\text{raw}}$} labels we compute from GZ2 vote fractions are used solely to validate our classification method and are thus considered ``ground truth,'' though this is, of course, subjective. Furthermore, we envision our framework being applied to never-before-classified image sets for which ``ground truth" labels would not yet exist. Nevertheless, in Appendix~\ref{sec: vary threshold} we show how different choices of our descriptive GZ2 labels change the perceived quality of our classification system and demonstrate that our method yields robust galaxy classifications.
\section{Efficiency through intelligent human-vote aggregation}
\label{sec: SWAP}
Galaxy Zoo 2 did not have a predictive retirement rule, rather each galaxy received a median of 44 independent classifications. Once the project reached completion, inconsistent volunteers were down-weighted~\citep{Willett2013}, a process that does not make efficient use of those who are exceptionally skilled. To intelligently manage subject retirement and increase classification efficiency, we adapt an algorithm from the Zooniverse project Space Warps~\citep{Marshall2016}, which searched for and discovered several gravitational lens candidates in the CFHT Legacy Survey~\citep{More2016}. Dubbed SWAP (Space Warps Analysis Pipeline), this algorithm computed the probability that an image contained a gravitational lens given volunteers' classifications and experience after being shown a training sample consisting of simulated lensing events. We provide \replaced{a brief}{an} overview here; \added{ interested readers are encouraged to refer to \cite{Marshall2016} for additional details}.
\added{\subsection{The SWAP algorithm}}
\added{SWAP evaluates the accuracy of individual classifiers based on their responses to subjects where the true classification is known, and applies those evaluations to the consensus classifications of subjects where the true classification is unknown in order to improve classification efficiency and reduce the classification effort required to complete a project. In order to achieve this,} SWAP assigns each volunteer an \textit{agent} which interprets that volunteer's classifications. Each agent assigns a 2$\times$2 confusion matrix to their volunteer which encodes that volunteer's probability to correctly identify feature \textit{A}~given that the subject exhibits feature \textit{A}; and the probability to correctly identify the absence of feature \textit{A}~(denoted \textit{N}) given that the subject does not exhibit that feature. The agent updates these probabilities by estimating them as
\begin{equation}
P(``X" | X, \mathbf{d}) \approx \frac{\mathcal{N}_{``X"}}{\mathcal{N}_{X}}
\end{equation}
where $X$ is the true classification of the subject and ``$X$" is the classification made by the volunteer upon viewing the subject. Thus $\mathcal{N}_X$ is the number of subjects the volunteer has seen that were actually of type $X$, $\mathcal{N}_{``X"}$ is the number of those subjects that the volunteer correctly labelled ``$X$", and $\mathbf{d}$ represents the history of the volunteer, i.e., all subjects they have seen. The confusion matrix for a single volunteer therefore goes as
\begin{eqnarray}
\mathcal{M} & = & \left[
\begin{array}{cc}
P(``A"|N, \mathbf{d}) ~~& P(``A" | A, \mathbf{d}) \\[0.3em]
P(``N"|N, \mathbf{d})~~& P(``N"|A, \mathbf{d}) \\[0.3em]
\end{array}\right]
\end{eqnarray}
where probabilities are normalised such that $P(``A"|A) = 1- P(``N" | A) $.
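A sketch of how an agent could maintain these two free matrix entries from a volunteer's history on subjects of known type, using the 0.5 initialisation of our fiducial run; the function name and data layout are our assumptions:

```python
def confusion_entries(history):
    """Estimate (P("A"|A), P("N"|N)) from a volunteer's gold-standard
    history: a list of (truth, vote) pairs, each entry 'A' or 'N'.
    An entry stays at the 0.5 prior until the volunteer has seen a
    subject of that type."""
    def rate(truth):
        votes = [v for t, v in history if t == truth]
        return sum(v == truth for v in votes) / len(votes) if votes else 0.5
    return rate('A'), rate('N')
```
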
Each subject is assigned a prior probability that it exhibits feature \textit{A}: $P(A) = p_0$. When a volunteer makes a classification, Bayes' theorem is used to compute how that subject's prior probability should be updated into a posterior using elements of the agent's confusion matrix. As the project progresses, each subject's posterior probability is updated after every volunteer classification, nudged higher or lower depending on volunteer input. Upper and lower probability thresholds can be set such that when a subject's posterior crosses the upper threshold it is highly likely to exhibit feature \textit{A}; while if it crosses the lower threshold it is highly likely that feature \textit{A}~is absent. Subjects whose posteriors cross either of these thresholds are considered retired.
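The per-classification update can be made concrete with a short function; this is a sketch in our own notation, applying Bayes' theorem with the two free confusion-matrix entries:

```python
def update_posterior(prior, vote, p_aa, p_nn):
    """Update a subject's P(A) after one classification.

    vote is 'A' or 'N'; p_aa = P("A"|A) and p_nn = P("N"|N) come from
    the volunteer's confusion matrix, with P("A"|N) = 1 - p_nn and
    P("N"|A) = 1 - p_aa by normalisation.
    """
    if vote == 'A':
        like_a, like_n = p_aa, 1.0 - p_nn      # likelihood of "A" under A, N
    else:
        like_a, like_n = 1.0 - p_aa, p_nn      # likelihood of "N" under A, N
    evidence = like_a * prior + like_n * (1.0 - prior)
    return like_a * prior / evidence
```

A random classifier ($p_{aa} = p_{nn} = 0.5$) leaves the prior untouched, which is why such volunteers contribute no information, while a skilled volunteer nudges the posterior toward the appropriate retirement threshold.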
\subsection{Gold-standard sample}\label{sec: training sample}
A key feature of the original Space Warps project was the training of
individual volunteers through the use of simulated images. These were interspersed with real imaging and were predominantly shown at the beginning of a volunteer's engagement with the project, allowing that volunteer's agent time to update before classifying real data. Volunteers were provided feedback in the form of a pop-up comment after classifying a training image. GZ2 did not train volunteers in such a way, presenting a challenge when applying SWAP to GZ2 classifications. Though we cannot retroactively train GZ2 volunteers, we develop a gold standard sample and arrange the order of gold standard classifications in order to mimic the Space Warps system.
\begin{figure}[t!]
\includegraphics[width=3.45in]{f2.pdf}
\caption{Confusion matrices for 1000 randomly selected GZ2 volunteers after fiducial SWAP assessment. Circle size is proportional to the number of gold standard subjects each volunteer classified. The histograms on top and right represent the distribution of each component of the confusion matrix for all volunteers. A quarter of GZ2 volunteers are ``Astute": they correctly identify both `Featured'~and `Not'~subjects more than 50\% of the time. The peaks at 0.5 in both distributions are due primarily to volunteers who see only one training image: only half of their confusion matrix is updated. \label{fig: volunteer training}}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=3.25in]{f3.pdf}
\caption{Posterior probabilities for GZ2 subjects. The top panel depicts the probability trajectories of 200 randomly selected GZ2 subjects. All subjects begin with a prior of 0.5 denoted by the arrow. Each subject's probability is nudged back and forth with each volunteer classification. From left to right the dotted vertical lines show the `Not'~threshold, prior probability, and `Featured'~threshold. Different colours denote different types of subjects. The bottom panel shows the distribution in probability for all GZ2 subjects by the end of our simulation, where the y axis is truncated to show detail. \label{fig: subject probabilities}}
\end{figure}
We create a gold standard sample of 3496 SDSS galaxies, representative of the relative abundance of T-Types (a numerical index of a galaxy's stage along the Hubble sequence) at $z\sim0$, by selecting galaxies that overlap with the~\cite{NairAbraham2010} catalogue, a collection of $\sim$14K galaxies classified by eye into T-Types. We generate new expert labels for these galaxies that are consistent with the labels we defined for GZ2 classifications. These are provided by 15 professional astronomers, including members of the Galaxy Zoo science team, through the Zooniverse platform.\footnote{The Project Builder template facility can be found at \url{http://www.zooniverse.org/lab}.} The question posed was identical to the original top-level GZ2 question, and at least five experts classified each galaxy. Votes are aggregated, and a simple majority provides an expert label for each subject. This ensures that our expert labels are defined in exactly the same manner as the labels we assign the rest of the GZ2 sample. Our final dataset consists of the GZ2 classifications made by those volunteers who classified at least one of these gold standard subjects. We thus retain for our simulation 12,686,170 classifications from 30,894 unique volunteers. When running SWAP, classifications of gold standard subjects are always processed first.
\subsection{Fiducial SWAP simulation}
\label{sec: fiducial}
Before we run a simulation, a number of SWAP parameters must be chosen: the initial confusion matrix for each volunteer's agent, ($P(``F"|F)$, $P(``N"|N)$); the subject prior probability, $p_0$; and the retirement thresholds, $t_F$~and $t_N$. For our fiducial simulation we initialize all confusion matrices at (0.5, 0.5), and set the subject prior probability, $p_0$~$= 0.5$. We set the~`Featured'~threshold, $t_F$, i.e., the minimum probability for a subject to be retired as~`Featured', to $0.99$. Similarly, we set the~`Not'~threshold, $t_N$~$= 0.004$. In Appendix~\ref{sec: tweaking swap} we show that varying these parameters has only a small effect on the SWAP output. To simulate a live project, we run SWAP on a time step of $\Delta t = 1$ day, during which SWAP processes all volunteer classifications with timestamps within that range. This is performed for three months' worth of GZ2 classification data. Hereafter, we refer to this as \textit{GZ2 project time}, where $0$ marks the first day of the original GZ2 project.
Figure~\ref{fig: volunteer training} (adapted from Figure 4 of~\citealt{Marshall2016}) demonstrates the volunteer assessment we achieve \added{at the end of our simulation}, and shows confusion matrices for 1000 randomly selected volunteers. The circle size is proportional to the number of gold standard subjects each volunteer classified. \added{If we were to examine this figure immediately prior to the start of classifications, it would show all points as small circles stacked precisely at the center of the figure since each volunteer is initially assigned a confusion matrix of (0.5, 0.5). As the simulation progresses, each volunteer's green circle is updated in both location and size according to their assessment of gold standard subjects until arriving at the figure shown here.} The histograms represent the distribution of each component of the confusion matrix for all volunteers. Nearly 25\% of volunteers are considered ``Astute", indicating they correctly identify both `Featured'~and `Not'~subjects more than 50\% of the time. Furthermore, as long as a volunteer's confusion matrix is different from a random classifier, they provide useful information to the project. The spikes at $0.5$ in the histograms are due to volunteers who see gold standard subjects of only one type (e.g.,~`Featured'), leaving their probability for the other type (`Not') unchanged. Additionally, 4\% of volunteers have a confusion matrix of (0.5, 0.5), indicating these volunteers classified two gold standard subjects of the same type, one correctly and one incorrectly.
\begin{figure}[t!]
\centering
\includegraphics[width=3.2in]{f4.pdf}
\caption{Confusion matrix for comparing \replaced{GZ2 classifications to our method.}{our method to GZ2 which we consider to be ``ground truth'' as discussed in Section \ref{sec: ground truth}.} True positives (TP) and true negatives (TN) indicate that the predictions from our method agree with GZ2 for subjects labelled `Featured'~and `Not', respectively. When the two classification methods disagree, the result is a sample of false negatives (FN) and false positives (FP). This allows us to easily compute quality metrics like accuracy, completeness, and purity with respect to GZ2 as shown in Equations \ref{eqn: metrics}.}
\label{fig: confusionmatrix}
\end{figure}
Figure~\ref{fig: subject probabilities} (adapted from Figure 5 of \citealt{Marshall2016}) demonstrates how subject posterior probabilities are updated with each classification. The arrow in the top panel denotes the prior probability, $p_0$~$=0.5$. With each classification, that prior is updated into a posterior probability creating a trajectory through probability space for each subject. The blue and orange lines show the trajectories of a random sample of `Featured'~and `Not'~subjects from our gold standard sample, while the black lines show the trajectories of a random sample of GZ2 subjects that were not part of the gold standard sample. The blue and orange dashed lines correspond to the retirement thresholds, $t_F$~and $t_N$. The lower panel shows the full distribution of GZ2 subject posteriors at the end of our simulation, where the y-axis has been truncated to show detail. An overwhelming majority of subjects cross one of these retirement thresholds: of all subjects that SWAP ``sees", i.e., processes at least one classification, only 8\% have not reached retirement by the end of our simulation.
Our goal is to increase the efficiency of galaxy classification. We therefore use as a metric the cumulative number of retired subjects as a function of GZ2 project time. We define a subject as GZ2-retired once it achieves at least 30 volunteer votes, encompassing 98.6\% of GZ2 subjects (this definition is quantified and its implications explored in Section~\ref{sec: swap is faster}). In contrast, a subject is considered SWAP-retired once its posterior probability crosses either of the retirement thresholds defined above.
However, it is important not to prioritize efficiency at the expense of quality.
Because we have a binary classification, we can construct a confusion matrix from which we can compute the quality metrics of accuracy, completeness and purity as a function of GZ2 project time by comparing our predicted labels to the GZ2$_{\text{raw}}$~labels. Figure~\ref{fig: confusionmatrix} graphically ascribes semantic interpretations for the elements of this confusion matrix. From these we compute:
\begin{align*}\label{eqn: metrics}
\mathrm{accuracy} &= \frac{TP + TN}{TP + FP + TN + FN} \\
\mathrm{completeness} &= \frac{TP}{TP +FN }\tag{3} \\
\mathrm{purity} &= \frac{TP}{TP + FP}
\end{align*}
Thus a 100\% complete sample recovers \textit{all} subjects labelled `Featured'~by GZ2, whereas a 100\% pure sample recovers \textit{only} subjects labelled `Featured'~by GZ2. For example, by Day 20, SWAP retires 120K subjects with 96\% accuracy, 99.7\% completeness, and 92\% purity.
Figure \ref{fig: fiducial run} and Table~\ref{tab: summary} detail the results of our fiducial SWAP simulation (``SWAP only") compared to the original GZ2 project. The bottom panel shows the cumulative number of retired subjects as a function of GZ2 project time. By the end of our simulation, GZ2 (dashed dark blue) retires $\sim$50K subjects while SWAP (solid light blue) retires 226,124 subjects. We thus classify 80\% of the entire GZ2 sample in three months. Processing volunteer classifications through SWAP represents nearly a factor of 5 increase in classification efficiency. The top panel of Figure~\ref{fig: fiducial run} demonstrates the quality of those classifications as a function of time and establishes that our full SWAP-retired sample is 95.7\% accurate, 99\% complete, and 86.7\% pure. We discuss these small discrepancies in Section~\ref{sec: swap gz2 disagree}.
\begin{figure}[t!]
\includegraphics[width=3.35in]{f5.pdf}
\caption{Fiducial SWAP simulation demonstrates a factor of 4.7 increase in the rate of subject retirement as a function of GZ2 project time (bottom panel, light blue) compared with the original GZ2 project (dashed dark blue). After 92 days, SWAP retires over 226K subjects, while GZ2 retires $\sim$48K. The top panel displays the quality metrics (greys). These are calculated by comparing labels predicted by SWAP to~GZ2$_{\text{raw}}$~labels (Section~\ref{sec: data}) for the subject sample retired by that day of the simulation. Thus, on the final day, SWAP retires 226,124 subjects with 95.7\% accuracy, and with completeness and purity of~`Featured'~subjects at 99\% and 86.7\% respectively. The decrease in purity as a function of time is due, in part, to the fact that more difficult to classify subjects are retired later in the simulation (see Section~\ref{sec: swap is faster}).
\label{fig: fiducial run}}
\end{figure}
\begin{figure*}[t!]
\includegraphics[width=7in]{f6.pdf}
\caption{SWAP's intelligent retirement mechanism requires only 30\% of the classifications that GZ2 needs for the top-level question due to SWAP's ability to retire easier subjects quickly, while more difficult subjects remain in the system to accrue additional classifications.
\textit{Top panels:} The top left panel shows~$f_{\mathrm{smooth}}$~for the entire GZ2 sample (orange), the subjects retired by SWAP (blue), and subjects that SWAP has not yet retired by the end of our simulation (red). The latter distribution peaks at $f_{\mathrm{smooth}}$~$\sim 0.6$, which can intuitively be understood as marking the most difficult-to-classify subjects: those with $f_{\mathrm{smooth}}$~$\le 0.5$ are easily identified as `Featured', while those with $f_{\mathrm{smooth}}$~$\ge 0.8$ are more obviously `Not'. The top right panel provides additional evidence, showing the number of votes at retirement for both the original GZ2 project (solid lines) and our SWAP simulation (dashed blue). The left-skew inherent in the red SWAP-not-yet-retired sample is due to difficult-to-classify subjects that received only 30-40 classifications during the GZ2 project. Even after processing all available classifications, SWAP cannot retire these subjects without additional volunteer input.
\textit{Bottom panel:} Here we compare SWAP to results of simulations of GZ2 run with a lower retirement limit in order to evaluate whether or not GZ2's considerable number of votes per subject is necessary solely to populate subqueries. Solid bars show the number of classifications required to retire the same number of galaxies as SWAP (dark grey) for different fixed retirement limits in GZ2 (light grey). The heights of the bars are normalised to show the counts relative to the highest simulated GZ2 retirement limit we test ($N=35$, right vertical axis). The accuracy of the classifications for these simulated GZ2 runs, measured against the full GZ2 project, is shown as red points (left vertical axis). If GZ2 retirement were set at a level ($N=10$) that reproduces the total number of classifications logged by SWAP, the accuracy would be below 90\% (versus SWAP's 96\%). Instead, GZ2 requires, at minimum, 3.5 times as many votes to approach the same accuracy (95\%) as SWAP. Simulated GZ2 sessions were run 100 times, randomly selecting subsamples with the same number of galaxies as were retired during our fiducial SWAP simulation. Quantities shown are averages of these trials; statistical error bars are too small to be seen.}
\label{fig: swap is faster}
\end{figure*}
\subsection{Intelligent subject retirement}
\label{sec: swap is faster}
That SWAP achieves a classification rate nearly 5 times faster than GZ2 comes with a caveat: we consider only the top-level question of the GZ2 decision tree. It can be argued that GZ2 accrued $\sim$40 votes per subject not because the top-level question required such dense sampling, but to achieve adequate sampling of the subqueries. The top-level question might therefore have been accurately resolved with far fewer classifications. In order to put SWAP and GZ2 on equal footing, we determine the minimum number of votes, $N$, that the GZ2 project would need in order to replicate the original GZ2 outcome for the top-level classification task for a canonical 95\% of its sample.
We compute the raw vote fractions ($f_{\mathrm{featured}}$, $f_{\mathrm{smooth}}$, and $f_{\mathrm{artifact}}$) for every subject in the GZ2 sample using only the first $N$ classifications for $N \in [10, 15, 20, 25, 30, 35]$. From this, we compute descriptive labels as described in Section~\ref{sec: ground truth}. Our SWAP simulation did not retire every subject in the GZ2 sample. We therefore select 100 random subsamples each consisting of 226,124 subjects, and compute the accuracy and the total number of GZ2 classifications necessary to retire each subsample. These results are shown in the bottom panel of Figure~\ref{fig: swap is faster} for each value of $N$ along with the accuracy and total classifications for our SWAP simulation. We see that GZ2 needs at least 35 votes per subject in order to achieve consistent class labels 95\% of the time, a full 3.5 times more classifications than SWAP needs to achieve the same accuracy. Furthermore, this justifies our choice of defining a subject as GZ2-retired once it reaches at least 30 classifications.
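The first-$N$ vote-fraction computation can be sketched as below; the function names are our own, and the labelling rule (a 0.5 threshold on $f_{\mathrm{featured}}$+$f_{\mathrm{artifact}}$, as quoted in Figure~\ref{fig: SWAP sucks}) is an illustrative assumption rather than the pipeline's actual code:

```python
def vote_fractions(votes, n=None):
    """Raw vote fractions for one subject from its chronologically
    ordered responses; use only the first n classifications if given."""
    first = votes if n is None else votes[:n]
    total = len(first)
    return {k: first.count(k) / total
            for k in ('smooth', 'featured', 'artifact')}

def raw_label(fracs):
    """Descriptive label: `Featured' if f_featured + f_artifact > 0.5."""
    return 'Featured' if fracs['featured'] + fracs['artifact'] > 0.5 else 'Not'
```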
SWAP's performance can be explained through its retirement mechanism. GZ2 did not have a predictive retirement rule; rather, the project was declared complete when the median classification count for the ensemble reached a value that was deemed sufficient for accurate characterization of the classification. In contrast, SWAP retires ``easier" subjects first while harder subjects remain in the system for longer (requiring many more votes to nudge that subject's posterior across a retirement threshold). Evidence for this can be seen in the top two panels of Figure~\ref{fig: swap is faster}. The top left panel shows the distribution of $f_{\mathrm{smooth}}$~for the entire GZ2 sample (orange), the SWAP-retired sample (blue), and the sample of subjects which SWAP has not yet retired, of which there are $\sim$19K at the end of our simulation. The SWAP-retired sample generally follows the same distribution as GZ2-full except for the noticeable dip around $f_{\mathrm{smooth}}$$=0.6$. In contrast, the SWAP-not-yet-retired sample peaks at $f_{\mathrm{smooth}}$$=0.6$. These subjects can be interpreted as being the most difficult to classify, which can be understood intuitively: galaxies with $f_{\mathrm{smooth}}$~$\le 0.5$ are easily identified as having features, while galaxies with $f_{\mathrm{smooth}}$~$\ge 0.8$ are more obviously elliptical.
This is further corroborated in the top right panel of Figure \ref{fig: swap is faster} which shows the distribution of the number of classifications a subject had at the time of retirement. The solid lines show this distribution from the original GZ2 project for the same subsamples as the top left panel. For comparison, the dashed line shows the number of classifications at retirement realized during our SWAP simulation. Again, we see that the SWAP-retired sample is representative of GZ2 as a whole. However, the distribution for the SWAP-not-yet-retired sample is skewed toward fewer total classifications.
\begin{figure}[t!]
\centering
\includegraphics[width=3.5in]{f7.pdf}
\caption{
SWAP's volunteer-weighting mechanism provides a factor of three reduction in the human effort required to retire GZ2 subjects. The filled histograms show the number of volunteer classifications per subject achieved during our SWAP simulation broken down by class label, where the solid black line is the total. The dashed histograms are results from our toy model in which we simulate volunteers with fixed confusion matrices, effectively disengaging SWAP's volunteer-weighting mechanism. These broad distributions require $\sim$3 times more classifications per subject to reach the same retirement thresholds. } \label{fig: swap vote distributions}
\end{figure}
To understand this, consider the following: GZ2 served subject images at random with the exception that, towards the end of the project, subjects with low numbers of classifications were shown at a higher rate \citep{Willett2013}. The median number of classifications was 44 with the full distribution shown in orange in the top right panel of Figure~\ref{fig: swap is faster}. Our SWAP simulation processes these classifications in the same order as the original project (with the exception that gold-standard subject classifications are processed first as described in Section~\ref{sec: training sample}). Because our simulations cycle through only 92 days of GZ2 data, there are three general scenarios for why a subject has not yet been retired through SWAP: 1) SWAP has seen only a few of the many classifications for a given subject and it is not yet enough to retire it, 2) SWAP has seen many of the classifications for a subject but that subject is difficult; if we ran the simulation longer to process the remaining GZ2 classifications, SWAP would eventually retire it, and 3) SWAP has seen most or all of the classifications for a subject but it is difficult and there are few or no remaining GZ2 classifications; without additional volunteer input, these subjects will never be retired by SWAP.
It is this third category that skews the red distribution towards fewer GZ2 votes. These are difficult-to-classify subjects that have only 30 - 40 GZ2 classifications, all of which are processed by SWAP, but these subjects remain unretired. This is an indication that such subjects should have continued to accrue classifications in order to reach strong consensus.
We have demonstrated that SWAP retires subjects intelligently: quickly retiring easy-to-classify subjects while allowing those that are more difficult to collect additional classifications. SWAP thus requires only 30\% of the votes that GZ2 needs and retires nearly 5 times as many subjects during the three months of GZ2 project time that we include in our simulation.
\subsection{Reducing human effort}
\label{sec: less human effort}
SWAP's intelligent retirement mechanism is characterised, in large part, by the way SWAP estimates volunteer classification ability. This in turn allows for a dramatic reduction in the amount of human effort (votes) required. To see this more clearly, we consider a toy model wherein we simulate volunteers with fixed confusion matrices. We simulate 1000 `Featured'~subjects and 1000 `Not'~subjects, each with prior $p_0 = 0.5$. We simulate 100 volunteer agents all with the same fixed confusion matrix of (0.63, 0.65), where these values are computed as the average $P(``F"|F)$~and $P(``N"|N)$~from our assessment of real volunteers, excluding the spikes at 0.5. We generate volunteer classifications based on this confusion matrix (i.e., volunteers will correctly identify `Featured'~subjects 63\% of the time) and update the subject's posterior probability with each classification. We track how many classifications are required for each subject's posterior to cross either the `Featured'~or `Not'~retirement thresholds.
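The toy model can be sketched as follows. The posterior update is the standard SWAP Bayesian update for a volunteer with confusion matrix $(P(``F"|F), P(``N"|N))$; the retirement thresholds used here (0.99 and 0.01) are illustrative placeholders, since the actual thresholds are set earlier in the paper:

```python
import random

def bayes_update(p, vote, p_ff, p_nn):
    """One Bayesian update of P(`Featured') given a single vote, for a
    volunteer with confusion matrix (P("F"|F), P("N"|N)) = (p_ff, p_nn)."""
    if vote == 'F':
        return p_ff * p / (p_ff * p + (1.0 - p_nn) * (1.0 - p))
    return (1.0 - p_ff) * p / ((1.0 - p_ff) * p + p_nn * (1.0 - p))

def votes_to_retire(is_featured, p_ff=0.63, p_nn=0.65, p0=0.5,
                    hi=0.99, lo=0.01, rng=random):
    """Simulate one subject: draw votes from the fixed confusion
    matrix and count how many are needed for the posterior to cross
    either retirement threshold."""
    p, n = p0, 0
    while lo < p < hi:
        if is_featured:
            vote = 'F' if rng.random() < p_ff else 'N'
        else:
            vote = 'N' if rng.random() < p_nn else 'F'
        p = bayes_update(p, vote, p_ff, p_nn)
        n += 1
    return n
```

With these placeholder thresholds, subjects classified by such 63--65\% volunteers typically need a few tens of votes to retire, broadly in line with the dashed distributions of Figure~\ref{fig: swap vote distributions}.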
The results are presented in Figure~\ref{fig: swap vote distributions}. The filled blue and orange histograms show the number of classifications per subject achieved from our SWAP simulation, where volunteer agent confusion matrices are those from Figure~\ref{fig: volunteer training}. The dashed blue and orange distributions are the results from our toy model. When SWAP accounts for volunteer ability, most subjects are retired with between 6 and 15 votes, with a median of 9 votes. In contrast, when every volunteer is given equal weighting, subjects require 16 to 45 votes with a median of 30 votes before crossing one of the retirement thresholds. Thus the volunteer weighting scheme embedded in SWAP can reduce the amount of human effort required to retire subjects by a factor of three.
This reduction will be, in part, a function of the number of gold standard subjects each volunteer sees. Our gold standard sample was chosen to be representative of morphology rather than evenly distributed among GZ2 volunteers. We thus find that half of our volunteers classify only one or two gold standard subjects. That we achieve a factor of three reduction when only half of our volunteer pool has seen $\ge 2$ gold standard subjects suggests that an additional reduction of human effort is possible with more extensive volunteer training.
\begin{figure}[t!]
\includegraphics[width=3.5in]{f8.pdf}
\caption{Distribution of GZ2 $f_{\mathrm{featured}}$+$f_{\mathrm{artifact}}$~vote fractions for subjects correctly identified by SWAP (dotted grey), along with those identified as false positives (solid purple), and false negatives (dashed teal). The false positives and false negatives are scaled by factors of 10 and 100 respectively for easier comparison. From Section~\ref{sec: data}, subjects with values $> 0.5$ are defined as~`Featured', however, the teal distribution indicates that SWAP labels them as~`Not'. This is not necessarily a flaw of SWAP: 68.9\% of incorrectly identified subjects have $0.4 \le $~$f_{\mathrm{featured}}$ +$f_{\mathrm{artifact}}$~$ \le 0.6$, nearly the same range as a 68\% confidence interval around our chosen threshold. The overlap between the false positives and negatives is due to subjects that are exactly 50-50; by default these are labelled~`Not'. \label{fig: SWAP sucks}}
\end{figure}
\subsection{Disagreements between SWAP and GZ2}
\label{sec: swap gz2 disagree}
Galaxy Zoo's strength comes from the consensus of dozens of volunteers voting on each subject. Processing votes with SWAP reduces the number of classifications to reach consensus. Though we typically recover the~GZ2$_{\text{raw}}$~label, SWAP disagrees about 5\% of the time. We thus examine the false positives (subjects SWAP labels as~`Featured'~but~GZ2$_{\text{raw}}$~labels as~`Not') and false negatives (subjects SWAP labels as~`Not'~but~GZ2$_{\text{raw}}$~labels as~`Featured'). We explore these subjects in redshift, magnitude, physical size, and concentration but find no correlation with any of these variables, suggesting that, at least for this galaxy sample, the reliability of morphology depends on factors that are not captured by these coarse measurements. This is perhaps unsurprising since GZ2 subjects were selected from the larger GZ1 sample to be the brightest, largest and nearest galaxies: precisely those subjects most accessible for visual classification.
Instead we consider the stochastic nature of GZ2 vote fractions, which can be estimated as binomial. Let success be a response of ``smooth'' and failure be any other response. The $68\%$ confidence interval on a subject with $f_{\mathrm{smooth}}$~$=0.5$ is then $(0.42, 0.57)$ assuming 40 classifications, each with a probability of 0.5. Figure~\ref{fig: SWAP sucks} shows the distribution of~$f_{\mathrm{featured}}$+$f_{\mathrm{artifact}}$~for the false positives (solid purple), and the false negatives (dashed teal) compared to the subjects where SWAP and GZ2 agree (dotted grey). Recall that if this value is greater than 0.5, the subject is labelled~`Featured'. The majority of disagreements between SWAP and GZ2 are for subjects that have $0.4 <$~$f_{\mathrm{featured}}$+$f_{\mathrm{artifact}}$~$< 0.6$. It is thus unsurprising that SWAP and GZ2 disagree most within the approximate confidence interval of our selected GZ2 threshold. We note that the distribution overlap between false positives and false negatives is due to subjects that do not have a majority; these are labelled~`Not'~by default.
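The quoted interval can be reproduced with a short stdlib-only sketch; the exact-CDF method below is our assumption, as the text does not state how the interval was computed:

```python
from math import comb

def binom_interval(n, p, conf=0.68):
    """Central confidence interval on a binomial fraction k/n,
    read off the exact binomial CDF."""
    tail = (1.0 - conf) / 2.0
    cum, lo, hi = 0.0, None, None
    for k in range(n + 1):
        cum += comb(n, k) * p**k * (1.0 - p)**(n - k)
        if lo is None and cum >= tail:
            lo = k
        if hi is None and cum >= 1.0 - tail:
            hi = k
            break
    return lo / n, hi / n

# binom_interval(40, 0.5) gives (0.425, 0.575)
```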
Two other effects contribute to the disagreement between SWAP and GZ2. First, as the number of classifications used to retire a galaxy decreases, the likelihood of misclassification by random chance increases. Second, disagreement arises due to expert-level volunteers whose confusion matrices are close to 1.0. These volunteers are essentially more strongly weighted, allowing a subject's posterior to cross a retirement threshold in as few as two classifications. In rare cases, despite training, some expert-level volunteers disagree with the gold-standard labels. These issues can be mitigated by requiring that each subject reach a minimum number of classifications in addition to its posterior probability crossing a retirement threshold, thus combining the best qualities of GZ2 and SWAP.
\subsection{Summary}
We demonstrate nearly a factor of five increase in the classification rate and a reduction of at least a factor of three in the human effort necessary to maintain that increased rate, all while maintaining 95\% accuracy, nearly perfect completeness of~`Featured'~subjects, and a purity that can be controlled by careful selection of input parameters to be better than 90\% (see Appendix~\ref{sec: tweaking swap}). Exploring those subjects wherein SWAP and GZ2 disagree, we conclude that the majority of this disagreement stems from the stochastic nature of~GZ2$_{\text{raw}}$~labels. We now turn our focus towards incorporating a machine classifier utilizing these SWAP-retired subjects as a training sample.
\section{Efficiency through incorporation of machine classifiers} \label{sec: machine}
We construct the full Galaxy Zoo Express by incorporating supervised learning, the machine learning task of inference from labelled training data. The training data consist of a set of training examples, each of which must include an input feature vector and a desired output label. Generally speaking, a supervised learning algorithm analyses the training data and produces a function that maps new examples to output labels. A properly optimized algorithm will correctly determine class labels for unseen data. By processing human classifications through SWAP, we obtain a set of binary labels with which we can train a machine classifier. We briefly outline the technical details of our machine below before turning to the decision engine we develop in Section~\ref{sec: decision engine}.
\subsection{Random Forests}
We use a Random Forest (RF) algorithm~\citep{Breiman2001}, an ensemble classifier that operates by bootstrapping the training data and constructing a multitude of individual decision tree algorithms, one for each subsample. An individual decision tree works by deciding which of the input features best separates the classes. It does this by performing splits on the values of the input feature that minimize the classification error. These feature splits proceed recursively. Decision trees alone are prone to over-fitting, precluding them from generalising well to new data. Random Forests mitigate this effect by combining the output labels from a multitude of decision trees. Specifically, we use the \texttt{RandomForestClassifier} from the Python module \texttt{scikit-learn} \citep{scikit-learn}.
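A minimal usage sketch, with synthetic stand-ins for the $D=5$ morphology features (the data and settings here are illustrative, not our production configuration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for the D=5 feature vectors (concentration,
# asymmetry, Gini, M20, ellipticity): two separated classes,
# 1 = `Featured', 0 = `Not'.
X = np.vstack([rng.normal(0.0, 1.0, size=(500, 5)),
               rng.normal(3.0, 1.0, size=(500, 5))])
y = np.array([0] * 500 + [1] * 500)

rf = RandomForestClassifier(n_estimators=30, random_state=0)
rf.fit(X, y)

# Class predictions and P(`Featured') for unseen subjects
X_new = rng.normal(3.0, 1.0, size=(10, 5))
labels = rf.predict(X_new)
probs = rf.predict_proba(X_new)[:, 1]
```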
\subsection{Grid Search and Cross-validation}
Of fundamental importance is the task of choosing an algorithm's hyperparameters, values which determine how the machine learns. For a RF, key quantities include the maximum depth of individual trees (\texttt{max\_depth}), the number of trees in the forest (\texttt{n\_estimators}), and the number of features to consider when looking for the best split (\texttt{max\_features}). These values cannot generally be chosen \textit{a priori}; the goal is to determine which values will optimize the machine's performance. We perform a grid search with $k$-fold cross-validation whereby the training sample is split into $k$ subsamples. One subsample is withheld to estimate the machine's performance while the remaining data are used to train the machine. This is performed $k$ times and the average performance value is recorded. The entire process is repeated for every combination of the hyperparameters in the grid space and the values that optimize the output are chosen. In this work we let $k=10$; however, we leave this as an adjustable input parameter. In the interest of computational speed, we set \texttt{n\_estimators} $=30$ and perform the grid search for \texttt{max\_depth} over the range $[5,16]$, and \texttt{max\_features} over the range $[\sqrt{D}, D]$, where $D$ is the number of features in the feature vector, described below.
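This grid search maps onto \texttt{scikit-learn} roughly as follows (synthetic placeholder data; the hyperparameter ranges are those quoted above):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # placeholder feature vectors
y = (X.sum(axis=1) > 0).astype(int)      # placeholder binary labels

D = X.shape[1]
param_grid = {
    "max_depth": list(range(5, 17)),                      # [5, 16]
    "max_features": list(range(int(np.sqrt(D)), D + 1)),  # [sqrt(D), D]
}
search = GridSearchCV(RandomForestClassifier(n_estimators=30, random_state=0),
                      param_grid, cv=10)  # k = 10 folds
search.fit(X, y)
best = search.best_params_               # hyperparameters used for training
```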
\subsection{Feature Representation and Pre-Processing}
The feature vector on which the machine learns is composed of $D$ individual numeric quantities associated with the subject that the machine uses to discern that subject from others in the training sample. To segregate~`Featured'~from~`Not', we draw on ZEST \citep{Scarlata2007} and compute concentration, asymmetry, Gini coefficient, and M$_{20}$, the second-order moment of light for the brightest 20\% of galaxy pixels, as measured from SDSS DR12 $i$-band imaging (see Appendix \ref{sec: measuring morphology}). Coupled with SExtractor's measurement of ellipticity \citep{sextractor}, we provide the machine with a $D=5$ dimensional morphology parameter space. These non-parametric diagnostics have long been used to distinguish between early- and late-type galaxies in an automated fashion \cite[e.g.,][]{Abraham1996, Bershady2000, Conselice2000, Abraham2003, Conselice2003, Lotz2004, Snyder2015}. Because the RF algorithm handles a variety of input formats, the only pre-processing step we perform is the removal of poorly-measured morphological indicators, i.e. catastrophic failures.
\begin{figure}[t!]
\includegraphics[width=3.25in]{f9.pdf}
\caption{Learning curve for a Random Forest with fixed hyperparameters. These curves show the mean accuracy computed during cross-validation and on the training sample, where the shaded regions denote the standard deviation. When the training sample size is small, the machine accurately identifies its own training sample but is unable to generalize to unseen data as evidenced by a low cross-validation score. This score increases with the size of the training sample but eventually plateaus indicating that larger training samples provide little in additional performance. \label{fig: learning curve}}
\end{figure}
\subsection{Decision Engine}
\label{sec: decision engine}
A number of decisions must be addressed before attempting to train the machine.
In particular, which subjects should be designated as the training sample?
When should the machine attempt its first training session?
When has the machine's performance been optimized such that it will successfully
generalize to unseen subjects? The field of machine learning provides few hard rules
for answering these questions, only guidelines and best practices.
Here we briefly discuss our approach for the development of our decision engine.
As discussed in detail in Section~\ref{sec: SWAP}, SWAP yields a probability that a subject exhibits the feature of interest. While some machine algorithms can accept continuous input labels, the RF requires distinct classes. We thus use only those subjects which have crossed either of the retirement thresholds. Though we find that SWAP consistently retires 35-40\%~`Featured'~subjects on any given day of the simulation, a balanced ratio of~`Featured'~to~`Not'~is not guaranteed. Highly unbalanced training samples should be resampled to correct the imbalance; however, as ours exhibits only a mild imbalance, we allow the machine to train on all SWAP-retired subjects.
SWAP retires a few hundred subjects during the first days of the simulation.
In principle, a machine can be trained with such a small sample, but will be unable
to generalize to unseen data. We estimate a minimum number of training samples
and the machine's ability to generalize by considering a learning curve, an illustration
of a machine's performance with increasing sample size for fixed hyperparameters.
Figure~\ref{fig: learning curve} demonstrates such a curve wherein we plot
the accuracy from both the 10-fold cross-validation, and the trained machine
applied to its own training sample for a random sample of GZ2 subjects
required to be balanced between~`Featured'~and~`Not'.
We fix the RF's hyperparameters as follows: \texttt{max\_depth} $=8$,
\texttt{n\_estimators } $=30$, and \texttt{max\_features} $=2$.
When the sample size is small, the cross-validation score is low and the training
score is high, a clear sign of over-fitting. However, as the training
sample size increases, the cross-validation score increases and eventually plateaus,
indicating that larger training sets will yield little additional gain.
We estimate this plateau begins when the training
sample reaches 10,000 subjects and require SWAP retire at least this many
before the machine attempts its first training. To ensure this plateau
has been reached, we deem the machine sufficiently trained once the
cross-validation score fluctuates by less than 1\% over three
consecutive nights of training.
This requires that we record the machine's training performance each night,
including how well it scores on the training sample, the
cross-validation score, and the best hyperparameters.
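The convergence criterion amounts to a simple check on the recorded nightly cross-validation scores; a sketch (our own formulation of the 1\%/three-night rule, not the recorded bookkeeping code):

```python
def machine_optimized(cv_scores, tol=0.01, nights=3):
    """True once the nightly cross-validation score has fluctuated by
    less than `tol` over the last `nights` consecutive training
    sessions, i.e. the learning curve has plateaued."""
    if len(cv_scores) < nights:
        return False
    recent = cv_scores[-nights:]
    return max(recent) - min(recent) < tol
```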
\begin{table*}[]
\centering
\caption{Summary of key quantities for GZ2 and our various simulations. All quality metrics are calculated using~GZ2$_{\text{raw}}$~labels.}
\label{tab: summary}
\let\mc\multicolumn
\begin{tabular}{lcccccc}
\mc7c{ \textbf{Simulation Summary} } \\
\hline \hline
& Days & Subjects Retired & Human Effort & Accuracy & Purity & Completeness\\
\mc2c{} & & (classifications) & (\%) & (\%) & (\%) \\
\hline
Galaxy Zoo 2 & 430 & 285,962 & 14,144,142 & -- & -- & -- \\
SWAP only & 92 & 226,124 & 2,298,772 & 95.7 & 86.7 & 99.0 \\
SWAP+RF & 32 & 210,803 & 936,887 & 93.1 & 83.2 & 94.0 \\
\hline
\end{tabular}
\end{table*}
\begin{figure*}[t!]
\centering
\includegraphics[width=5.5in]{f10.pdf}
\caption{By incorporating a machine classifier, GZX (red) increases the classification rate by an order of magnitude compared to GZ2 (dashed dark blue) and out-performs the SWAP-only run (light blue), retiring more than 200K subjects in just 27 days of GZ2 project time. The dashed black line marks the first night the machine trains. After several additional nights of training, it is deemed optimized and allowed to retire subjects. Both humans and machine then contribute to retirement. We end the simulation after 32 days having retired over 210K galaxies. See Table~\ref{tab: summary} for details. \label{fig: money}}
\end{figure*}
\subsection{The Machine Shop}
\label{sec: machine shop}
We can now describe a full GZX simulation, which begins with human classifications processed through SWAP for several days. Once at least 10K subjects have been retired, their feature vectors are passed to the machine for its inaugural training. A suite of performance metrics are recorded by a machine agent, similar in construction to SWAP's agents. This agent determines when the machine has trained sufficiently by assessing the variation in performance metrics for all previous nights of training. Once the machine has been optimized, the agent introduces it to the test sample consisting of any subject that has not yet reached retirement through SWAP and is not part of the gold standard sample.
Analogous to SWAP, we generate a retirement rule for machine-classified subjects. In addition to the class prediction, the RF algorithm computes the probability for each subject to belong to each class. This probability is simply the average of the probabilities of the individual decision trees, where the probability from a single tree is the fraction of training subjects of a given class in the leaf node reached by that subject. Only subjects that receive a class prediction of `Featured'~with $p_{\mathrm{machine}} \ge 0.9$ ($p_{\mathrm{machine}} \le 0.1$ for `Not') are considered retired. The remaining subjects have the possibility of being classified by humans or the machine on a future night of the simulation. This constitutes the core of our passive feedback mechanism. Subjects that are not retired by the machine can instead be retired by humans, thus providing the machine a more fully sampled morphology parameter space on future training sessions.
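The machine's retirement rule amounts to thresholding $p_{\mathrm{machine}}$; a sketch:

```python
import numpy as np

def machine_retire(p_machine, hi=0.9, lo=0.1):
    """Partition subjects by the RF's P(`Featured'): machine-retired
    as `Featured' (p >= hi), machine-retired as `Not' (p <= lo), or
    left active for humans or the machine on a later night."""
    p = np.asarray(p_machine)
    featured = p >= hi
    not_feat = p <= lo
    active = ~(featured | not_feat)
    return featured, not_feat, active
```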
\section{Results}
\label{sec: results}
We perform a full GZX simulation incorporating our RF with the fiducial SWAP run discussed in Section~\ref{sec: fiducial}. The machine attempts its first training on Day 8 with an initial training sample of $\sim$20K subjects. It undergoes several additional nights of training, each time with a larger training sample. By Day 12, SWAP has provided over 40K subjects for training and the machine's agent has deemed the machine optimized. The machine predicts class labels for the remaining 230K GZ2 subjects. Of those, the machine retires over 70K, dramatically increasing the number of retired subjects. We end the simulation after 32 days, having retired $\sim$210K subjects as detailed in Table~\ref{tab: summary}.
We present these results in Figure~\ref{fig: money} where subject retirement with GZX (red) is compared to our fiducial SWAP-only run (light blue) and GZ2 (dashed dark blue). Using the~GZ2$_{\text{raw}}$~labels as before, we compute our usual quality metrics on the full sample of GZX-retired subjects, reported in Table~\ref{tab: summary}. Accuracy and purity remain within a few percent of the SWAP-only run at 93.1\% and 83.2\% respectively. However, we see a 5\% decline in completeness. While the SWAP-only run identified 99\% of~`Featured'~subjects, incorporation of the machine seems to miss a significant portion, dropping GZX completeness to 94.0\%. We discuss this behaviour below.
By dynamically generating a training sample through a more sophisticated analysis of human classifications coupled with a machine classifier, we retire more than 200K GZ2 subjects in just 27 days. Our GZX simulation processes a total of 936,887 visual classifications. As presented in Section~\ref{sec: swap is faster}, GZ2 requires at least 35 votes per subject to obtain galaxy classifications that are consistent 95\% of the time. At best, GZ2 could have retired 26,768 subjects with the classifications we process during our GZX run. This implies that we have increased the classification rate by at least a factor of 8, while requiring only 13\% as many human classifications. We next explore the composition of those classifications.
\begin{figure}[t!]
\includegraphics[width=3.35in]{f11.pdf}
\caption{Contributions to subject retirement by both classifying agents of GZX: human (SWAP, orange) and machine (RF, teal). The top panel shows cumulative subject retirement for GZX as a whole (solid black), along with that attributed to the RF and SWAP. The dotted grey line shows the fiducial SWAP-only run for comparison. Retirement totals for humans and machine are nearly equal over the course of the simulation but display different behaviours: SWAP's retirement rate is almost constant while the RF contributes substantially after its initial application and then plateaus. The bottom panels show what fraction of GZ2 subjects are retired, separated by class label. Overall, GZX retires 73.7\% of the entire GZ2 sample in 32 days, retiring the same proportion of~`Featured'~and~`Not'~subjects as indicated by the black lines. However, humans retire 30\% more~`Featured'~subjects than the machine, while both components retire a similar proportion of~`Not'~subjects. \label{fig: gzx components}}
\end{figure}
\begin{figure*}[t!]
\centering
\includegraphics[width=6.25in]{f12.pdf}
\caption{A random subsample of subjects identified as false positives: labelled by machine as `Featured'~but as `Not'~according to GZ2$_{\text{raw}}$. We display $f_{\mathrm{featured}}$~in the lower left corner, that is, the fraction of volunteers who classified the subject as `Featured'. Values are typically under 0.35 indicating that GZ2 volunteers strongly believed these to be `smooth' (`Not'). Fortunately, the machine is able to identify these subjects as~`Featured'~due to their measured morphology diagnostics.}
\label{fig: machine false pos}
\end{figure*}
\subsection{Who retires what, when?}
\label{sec: who retires what}
In the top panel of Figure~\ref{fig: gzx components} we explore the individual contributions to GZX subject retirement from the RF (dash-dotted teal) and SWAP (dashed orange). The solid black line shows the total GZX retirement (SWAP+RF), while the dotted grey line depicts the fiducial SWAP-only run from Section~\ref{sec: fiducial} for reference. Two things are immediately obvious. First, each component shoulders approximately half of the retirement burden, with the machine and SWAP responsible for \replaced{$\sim$$100$K and $\sim$$110$K}{$\sim$$98$K and $\sim$$112$K} subjects respectively. Second, the rates of retirement exhibited by the two components are in stark contrast. SWAP retires at a relatively constant rate while the machine retires dramatically at the beginning of its application, quickly surpassing the human contribution, and plateaus thereafter. We thus clearly see three epochs of subject retirement. In the first phase, humans are the only contributors to subject retirement. Once the machine is optimized, it immediately contributes more to retirement than humans. However, the machine's performance plateaus quickly; the third phase is again dominated by human classifications.
In the bottom panels of Figure~\ref{fig: gzx components}, we consider the class
composition of subjects retired by SWAP and the RF. The left (right) panel shows the retired fraction of GZ2 subjects identified as~`Featured'~(`Not') according to their~GZ2$_{\text{raw}}$~labels as a function of GZ2 project time. Overall, GZX retires 73.7\% of the GZ2 subject sample and this is evenly distributed between~`Featured'~and~`Not'~subjects as indicated by the solid black lines in both panels. However, SWAP retires more than 50\% of all~`Featured'~subjects while the machine retires only \replaced{18\%}{20\%}. This divergence does not exist for~`Not'~subjects where each component contributes \replaced{33-37\%}{33-34\%}.
What is the source of this discrepancy? Each night the machine trains on a sample composed consistently of 30-40\%~`Featured'~subjects but does not retire a similar proportion, indicating that the 30\% of non-retired~`Featured'~subjects do not receive high~$p_{\mathrm{machine}}$. In the following section we explore whether this is an artefact of our choice in machine or in the human-machine combination implemented here.
\subsection{Machine performance}
\label{sec: machine performance}
Throughout our analysis we have defined `Featured'~and `Not'~subjects by their GZ2$_{\text{raw}}$~labels as this was the most compatible choice for comparison with SWAP output. However, the machine does not learn in the same way, nor is it presented with the same information. Machine and human classifications each provide valuable and complementary information for identifying `Featured'~galaxies.
We isolate the \replaced{6127}{7060} subjects that were deemed false positives, i.e., galaxies retired by the machine as~`Featured'~that have~`Not'~GZ2$_{\text{raw}}$~labels, a sample that comprises only \replaced{6.25\%}{7.2\%} of all subjects the machine retires. We visually examine several hundred and assess that, to the expert eye, a majority are, in fact, `Featured'. A random sample is shown in Figure~\ref{fig: machine false pos}.
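As a sanity check (an illustrative computation we add here, using only figures quoted in this section), the false-positive count and rate pin down the total number of machine retirements:

```python
# Consistency check using figures quoted in this section (values from text).
false_positives = 6127   # machine-retired `Featured' subjects with `Not' GZ2 labels
false_pos_rate = 0.0625  # these comprise 6.25% of everything the machine retires
machine_retired = false_positives / false_pos_rate
print(f"machine retired ~{machine_retired:,.0f} subjects")  # ~98,032
```

This is consistent with the roughly half of GZX retirements ($\sim$$100$K subjects) attributed to the machine in Figure~\ref{fig: gzx components}.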
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{f13.pdf}
\caption{The RF is trained on a 5-dimensional morphology parameter space. We show the distribution of each morphology indicator for machine-retired~`Featured'~(blue) and~`Not'~(orange) subjects compared to the full GZ2 subject sample (black). The difference between~`Featured'~and~`Not'~subjects is stark for all distributions except, perhaps, \M{20}. \label{fig: morph params}}
\end{figure*}
That the machine strongly identifies these galaxies as `Featured'~($p_{\mathrm{machine}}$~$\ge 0.9$) where humans instead classify them as `Not'~($f_{\mathrm{featured}}$~$< 0.5$) has several contributing factors: 1) as discussed in Section~\ref{sec: swap gz2 disagree}, the threshold we chose carries with it a confidence interval such that subjects with $0.4 <$~$f_{\mathrm{featured}}$+$f_{\mathrm{artifact}}$~$< 0.6$ are most likely to receive disagreeing labels from other classifying agents, 2) the first task of the GZ2 decision tree asks a question that does not necessarily correlate with a split between early- and late-type galaxies, and 3) the machine learns on morphology diagnostics that are very different from visual inspection.
We find that \replaced{41.4\%}{40\%} of these false positives have $0.4 \le$~$f_{\mathrm{featured}}$+$f_{\mathrm{artifact}}$$<0.5$, indicating that the disagreement between humans and machine is likely due to the labels we assign at our given threshold. However, we also find that \replaced{43.5\%}{45\%} of false positives have $f_{\mathrm{featured}}$+$f_{\mathrm{artifact}}$~$\le0.35$, and this discrepancy is not as easily explained. In Figure~\ref{fig: machine false pos} we examine a random sample of false positives in this regime where, for clarity, we display only the $f_{\mathrm{featured}}$~value in the lower left corner. The majority of these subjects are discs lacking features such as spiral arms or strong bars. Whether this is the reason the majority of volunteers classify these objects as ``smooth" is beyond the scope of this paper; however, this behaviour might be modified by providing actual training images and live feedback as performed in \cite{Marshall2016}. We suggest that, at least for this particular question, if either human or machine identifies a subject as `Featured', it is likely the subject is discy and worth further investigation.
\begin{figure}[t]
\includegraphics[width=3.2in]{f14.pdf}
\caption{The RF's ranked feature importance averaged over all nights of training with black bars indicating the standard deviation. A larger value corresponds to higher importance. The machine computes feature importance according to how much each feature increases the purity of the resulting split averaged over all trees in the forest. The RF places great importance in the Gini coefficient though we note that it can under-represent the importance of highly correlated features such as concentration.\label{fig: feature importance}}
\end{figure}
This suggests that, in some cases, the morphology indicators we measure are sufficient for the machine to recognize~`Featured'~galaxies regardless of the labels humans provide. Figure~\ref{fig: morph params} shows the distribution of each morphology indicator for all subjects the machine retires as~`Featured'~(blue) and~`Not'~(orange) compared to the full GZ2 subject set. The difference between~`Featured'~and~`Not'~is stark in all but the \M{20} distribution. This can be seen explicitly in Figure~\ref{fig: feature importance} in which we show the RF's ranked feature importances, where large values indicate higher importance. Feature importance is computed as how much each feature decreases the impurity of a split in a tree. The impurity decrease from each feature is then averaged over all trees and ranked. We show the feature importance averaged over all nights of training with black bars indicating the standard deviation. The machine finds the Gini coefficient most important for class prediction, placing little emphasis on \M{20}. It is well known that the Gini coefficient is more sensitive to noise than other diagnostics; however, we point out that when a machine is faced with two or more correlated features, any of them can be used as the predictor. Once chosen, the importance of the others is reduced. This explains why Concentration is ranked much lower than Gini even though they are strongly correlated as seen in Figure~\ref{fig: morph thresh}. That the machine relies heavily on these two morphology diagnostics is unsurprising as concentration has long been used as an automated discriminator between early- and late-type galaxies~\citep{Abraham1994, Abraham1996, Shen2003}.
The complementary nature of human and machine classification can
best be utilized by a feedback mechanism in which a portion of machine-retired
subjects are reviewed by humans. Subjects that display excessive disagreement
should be verified by an expert (or expert-user). In the same way that
humans increase the machine's training sample over time, subjects that the
machine properly identifies can become part of the humans' training sample.
\section{Looking Forward}
\label{sec: visions}
We have demonstrated the first practical framework for combining human and machine intelligence in galaxy morphology classification tasks. While we focus below on a brief discussion of our next steps and potential applications to large upcoming surveys, we note that our results have implications for the future of citizen science and Galaxy Zoo in particular.
GZX is perhaps one of the simplest ways to combine human and machine intelligence and its impressive performance motivates a higher level of sophistication. A first step will be an implementation of SWAP that can handle a complex decision tree. We also envision multiple forms of active feedback to complement our passive feedback mechanism. SWAP allows us to leverage the most skilled volunteers to review galaxies difficult for either human or machine to classify. Additionally, machine-retired subjects should contribute to the training sample for humans in an analogous fashion to what we have already implemented.
Secondly, our RF can be improved by providing it information equal to what humans receive: multi-band morphology diagnostics will be included in our future feature vector. However, the Random Forest algorithm is not easily adapted to handle measurement errors or class labels with continuous distributions. A key feature of GZ2 vote fractions is their use in determining the strength of a morphological feature. Although both SWAP and our RF provide class predictions that are continuous, we apply thresholds to discretize the classification. To fully utilize the information provided, sophisticated algorithms should be considered such as deep convolutional neural networks (CNN) or Latent Dirichlet allocation (LDA), an algorithm that is frequently used in document processing. Furthermore, there is no reason to limit ourselves to a single machine. As hinted at in Figure~\ref{fig: schematic}, several machines could train simultaneously, their predictions aggregated through SWAP, creating an on-the-fly machine ensemble.
With the above upgrades implemented, we expect performance of both the
classification rate and quality to further increase. However, even our current
implementation can cope with upcoming data volumes from large surveys.
By some estimates, \textit{Euclid} is expected to obtain measurable morphology with its
visual instrument (VIS) for approximately $10^6 - 10^7$ galaxies~\citep{Euclid}.
Visual classification of such a sample at the rate achieved with Galaxy Zoo today
would require 12--120 years.\footnote{We note that the classification
rate of GZ2 was 4 times higher than GZ's current steady rate.}
If the \textit{Euclid} sample is on the high end, GZX as currently implemented
could classify the brightest 20\% during the six years of its observing mission.
As currently implemented, we obtain accuracy around 95\% potentially leaving
hundreds of thousands of galaxies with unreliable classifications.
In a companion paper that seeks to identify supernovae, \cite{Wright2017}
demonstrate a dramatic increase in accuracy through an entirely different human-machine
combination whereby the
scores from human and machine are averaged together with the combined score
yielding the most reliable classification. Again, a combination of both
approaches will allow us to take full advantage of legacy output from large scale surveys.
\subsection{Conclusions}
In this paper we design and test Galaxy Zoo Express, an innovative system\footnote{Our code can be found at \url{https://github.com/melaniebeck/GZExpress}} for the efficient classification of galaxy morphology tasks that integrates the native ability of the human mind to identify the abstract and novel with machine learning algorithms that provide speed and brute force. We demonstrate for the first time that the SWAP algorithm, originally developed to identify rare gravitational lenses in the Space Warps project, is robust for use in galaxy morphology classification. We show that by implementing SWAP on GZ2 classification data we can increase the rate of classification by a factor of 4-5, requiring only 90 days of GZ2 project time to classify nearly 80\% of the entire galaxy sample.
Furthermore, we have implemented and tested a Random Forest algorithm and developed a decision engine that delegates tasks between human and machine. We show that even this simple machine is capable of providing significant gains in the classification rate when combined with human classifiers: GZX retires over 70\% of GZ2 galaxies in just 32 days of GZ2 project time. This represents a factor of at least 8 increase in the classification rate as well as nearly an order of magnitude reduction in human effort compared to the original GZ2 project. This is achieved without sacrificing the quality of classifications as we maintain $\sim$94\% accuracy throughout our simulations. Additionally, we have shown that training on a 5-dimensional parameter space of traditional non-parametric morphology indicators allows the machine to identify subjects that humans miss, providing a complementary approach to visual classification. The gain in classification speed allows us to tackle the massive amount of data promised from large surveys like \textit{LSST} and \textit{Euclid}.
\acknowledgements
\added{We are grateful to the anonymous referees for helpful comments and suggestions which greatly improved this manuscript.} MB thanks Steven Bamford and Boris H{\"a}u{\ss}ler for insightful discussions on citizen science and Galaxy Zoo; and John Wallin and Marc Huertas-Company for several enlightening conversations on machine learning and classification.
We are grateful to Elisabeth Baeten, Micaela Bagley, Karlen Shahinyan, Vihang Mehta, Steven Bamford, Kevin Schawinski, and Rebecca Smethurst for providing expert classifications in addition to those provided by the authors. PJM acknowledges Aprajita Verma and Anupreeta More for their ongoing collaboration on the Space Warps project.
MB, CS, LF, KW, and MG gratefully acknowledge partial support from the US National Science Foundation (NSF) Grant AST-1413610. LF and DW also gratefully acknowledge partial support from NSF IIS 1619177. MB acknowledges additional support
through New College and Oxford University's Balzan Fellowship as well as the University
of Minnesota Doctoral Dissertation Fellowship. Travel funding was supplied
to MB, in part, by the University of Minnesota Thesis Research Travel Grant. CJL recognizes support from a grant from the Science \& Technology Facilities Council (ST/N003179/1).
BDS acknowledges support from Balliol College, Oxford, and the National Aeronautics and Space Administration (NASA) through Einstein Postdoctoral Fellowship Award Number PF5-160143 issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060. The work of PJM is supported by the U.S. Department of Energy under contract number DE-AC02-76SF00515.
\added{This publication uses data generated via the Zooniverse.org platform, development of which is funded by generous support, including a Global Impact Award from Google, and by a grant from the Alfred P. Sloan Foundation.}
\software{scikit-learn \citep{scikit-learn}, Astropy \citep{astropy}, TOPCAT \citep{topcat}}
\section{Introduction}\label{secIntro}
An action of a compact torus $G$ on a topological space $X$ is a
classical object of study~\cite{Br}. For a point $x\in X$ let
$G_x\subset G$ denote the stabilizer subgroup and $Gx$ the orbit
of $x$. Let $p\colon X\to X/G$ be the projection to the orbit
space. Let $S(G)$ denote the set of all closed subgroups of $G$
endowed with the lower interval topology. There is a continuous map
\[
\tilde{\lambda}\colon X/G\to S(G)
\]
which maps an orbit $x\in X/G$ to the stabilizer subgroup $G_x$,
see \cite{BuchRay}.
The classical idea in the study of torus actions is the following.
It is assumed that the projection map $p\colon X\to X/G$ admits a
section. Then, given the orbit space $Q=X/G$, and the continuous
map $\tilde{\lambda}\colon Q\to S(G)$ one builds a topological
model
\[
X_{(Q,\tilde{\lambda})}=(Q\times G)/\sim
\]
which is equivariantly homeomorphic to the original space $X$. The
method of constructing model spaces was used by Davis and
Januszkiewicz \cite{DJ} for the classification of manifolds which
are now called quasitoric \cite{BPnew}. This idea traces back to
the works of Vinberg \cite{Vin}.
The method can be naturally extended to the locally standard
actions of $G\cong T^n$ on $2n$-manifolds \cite{Yo}. In this case
the projection may not admit a global section; however, it
always admits local sections, and there exists a topological
model space for such an action.
Buchstaber--Terzi\'{c} \cite{BTober,BT,BT2} introduced a theory of
$(2n,k)$-manifolds in order to study the orbit spaces of more
general torus actions and to obtain topological models for such
actions. Grassmann manifolds and flag manifolds are important
families of $(2n,k)$-manifolds. In this theory a manifold is
subdivided into strata $X_\sigma$, so that the action has the same
stabilizer $T_\sigma$ for all points of a stratum. It is essential
in the definition of $(2n,k)$-manifold that there is a convex
polytope $P^k$ and a $T^k$-equivariant generalized moment map
$X^{2n}\to P^k$. Every stratum $X_\sigma$ is then represented as a
principal $T/T_\sigma$-bundle over the product $P_\sigma^\circ\times
M_\sigma$, where $P_\sigma$ is a certain subpolytope of $P$ and
$M_\sigma$ is an auxiliary space of dimension $2(n-k)$ called the
space of parameters. Therefore the orbit space $X^{2n}/T^k$ is
represented as the union $\bigsqcup_\sigma P_\sigma\times
M_\sigma$. The theory of $(2n,k)$-manifolds provides specific
methods to describe the topology of this union.
Whenever a compact $k$-torus acts effectively on a $2n$-manifold
we call the number $n-k$ the \emph{complexity of the action}.
While actions of complexity zero are well studied, the actions of
positive complexity constitute a harder problem. It is generally
believed that actions of complexity $\geqslant 2$ are extremely
complicated. The actions of complexity one take an
intermediate position: they were studied from several different
viewpoints. The algebraic theory of complexity one actions was
developed in the works of many authors; in particular, the
classification of such actions, even in the nonabelian case, was
given by Timash\"{e}v \cite{Tim,Tim2}. Hamiltonian complexity one
actions on symplectic manifolds are also well studied: see e.g.
the work of Karshon--Tolman \cite{KT} and references therein.
Circle actions on $4$-manifolds are a classical subject, see e.g.
\cite{ChL,Fint,OR}.
In this paper we study complexity one actions from the topological
viewpoint. Our approach is different from the one used in
\cite{BuchRay} and \cite{BT}. Instead of trying to stratify the
manifold so that the action on each stratum admits a section, we
partition the manifold by orbit types. Under two restrictions we
prove that the orbit space $Q=X/T$ is a topological manifold, see
Theorem \ref{thmManifold} for the precise statement. Note that for
this result it is not required that the stabilizers of the action
are connected. Such a restriction was imposed in the theory of
$(2n,k)$-manifolds; however, there exist natural examples of
actions with finite stabilizers whose orbit space is
still a manifold.
We make a remark on the main difference from situations considered
in toric topology: the typical action of complexity one does not
admit a section, even locally.
Natural examples of complexity one actions which we keep in mind
are the following.
\begin{enumerate}
\item The $T^3$ action on the complex Grassmann manifold
$G_{4,2}$.
\item The $T^2$ action on the manifold $F_3$ of full complex flags in
$\mathbb{C}^3$.
\item Quasitoric manifolds $X_{(P,\Lambda)}^{2n}$ with the induced action of
a generic subtorus $T\subset G$, $\dim T=n-1$.
\item The space of isospectral periodic tridiagonal Hermitian
matrices of size $n\geqslant 3$.
\end{enumerate}
Using the theory of $(2n,k)$-manifolds, Buchstaber--Terzi\'{c}
proved that the orbit space of the Grassmann manifold $G_{4,2}$ is
$S^5$, and the orbit space of the flag manifold $F_3$ is $S^4$.
These two examples motivated our study. In
Theorem~\ref{thmQToricReduc} we prove that the orbit space of a
quasitoric manifold by the action of $T$ is also homeomorphic to a
sphere $S^{n+1}$.
A space of isospectral tridiagonal $n\times n$-matrices is a more
interesting object. This space will be studied in detail in the
subsequent paper \cite{AyzMatr}. This space depends on the
spectrum and for some degenerate spectra it is not smooth.
However, if it is a smooth manifold, we will prove that its orbit
space is $S^4\times T^{n-3}$. In \cite{AyzMatr} we describe the
non-free part of the torus action using the regular permutohedral
tiling of the space. This allows us to understand the topology of the
whole space, not just its orbit space.
The study of the space of periodic tridiagonal matrices raised
several questions about actions of complexity one. One of the
questions is the topological classification of such actions. In
this paper we prove that under certain restrictions the space $X$
with complexity one action is determined by the orbit manifold
$Q=X/T$, the set of non-free orbits $Z\subset Q$, and the weights
of tangent representations at fixed points. See Theorem
\ref{thmClassSphere} and Proposition~\ref{propSpecialHomeo}. The
set of non-free orbits has a specific topology, which we
axiomatize in the notion of \emph{sponge}. Sponges seem to be
objects of independent interest.
\section{Appropriate actions of complexity
one}\label{secGeneralDefins}
In the following, $T$ usually denotes the compact torus of
dimension $n-1$ and $G$ denotes compact tori of other dimensions.
We refer to the classical monograph of Bredon \cite{Br} for
general information on group actions on manifolds.
Let us specify the type of actions to be considered in the paper.
For a smooth action of $G$ on a smooth manifold $X$ define the
\emph{fine partition} on $X$ by orbit types
\[
X=\bigsqcup_{H\in S(G)}X^H.
\]
Here $H$ runs over all closed subgroups of $G$ and
$X^H=\tilde{\lambda}^{-1}(H)=\{x\in X\mid G_x=H\}$.
\begin{defin}
An effective action of $G$ on a compact smooth manifold $X$ is
called \emph{appropriate} if
\begin{itemize}
\item the fixed points set $X^G$ is finite;
\item (adjoining condition) the closure of every connected component of a
partition element $X^H$, $H\neq G$, contains a point $x'$ with
$\dim G_{x'}>\dim H$.
\end{itemize}
If, moreover, the stabilizer subgroup of every point is a torus,
we call the action \emph{strictly appropriate}.
\end{defin}
\begin{rem}
The adjoining condition implies that whenever a subset $X^H$ is
closed in the topology of $X$, then it is the fixed point set
$X^G$.
\end{rem}
\begin{rem}
A subgroup $H$ of a torus has the form $H_t\times H_f$, where
$H_t$ is a torus and $H_f$ is a finite abelian group. For strictly
appropriate actions the finite components $H_f$ of all stabilizers
vanish. In other words, a strictly appropriate action is an action
with all stabilizers being connected.
\end{rem}
\begin{ex}
Let an algebraic torus $(\mathbb{C}^{\times})^k$ act algebraically on a smooth
variety $X$ with finitely many fixed points. Then the induced
action of a compact subtorus $T^k\subset (\mathbb{C}^{\times})^k$ on $X$ is
appropriate, as follows from the Bialynicki-Birula method \cite{BB}.
Indeed, for a given point $x\in X\setminus X^T$ consider the
1-dimensional algebraical torus $\mathbb{C}^{\times}\subset (\mathbb{C}^{\times})^k$ which
acts on $x$ nontrivially. Consider the point $x'=\lim_{t\to 0}tx$,
where $0<t\leqslant 1$, $t\in \mathbb{C}^\times$. The point $x'$ is
connected with $x$ and has a stabilizer of bigger dimension (since
$x'$ is stabilized by $(\mathbb{C}^{\times})^k_x$ as well as by $\mathbb{C}^\times$).
Iterating this procedure, we arrive at some fixed point.
In particular, the action of a compact torus on a complex
GKM-manifold (see \cite{GKM}) is appropriate.
\end{ex}
\begin{ex}
The effective action of $T^{n-1}$ on $F_n$, the manifold of
complete complex flags in $\mathbb{C}^n$ is strictly appropriate. The
effective action of $T^{n-1}$ on a Grassmann manifold $G_{n,k}$ of
complex $k$-planes in $\mathbb{C}^n$ is also strictly appropriate.
\end{ex}
\begin{ex}
Let the action of $G\cong T^n$ on a smooth manifold $X^{2n}$ be
locally standard (see definition in Section
\ref{secLocStandActions}). The orbit space $P=X^{2n}/G$ is a
manifold with corners. This action is appropriate whenever every
face of $P$ contains a vertex. If it is appropriate, then it is
strictly appropriate. In particular, quasitoric manifolds provide
examples of strictly appropriate torus actions.
\end{ex}
\begin{ex}
Let the action of $G$ on $X$ be appropriate, and suppose the induced
action of a subtorus $T\subset G$ on $X$ has the same fixed point
set. Then the action of $T$ is also appropriate. Indeed, the
partition element $(X')^K$ of the $T$-action for $K\subseteq T$
has the form
\[
(X')^K=\bigcup_{H\subseteq G, H\cap T=K} X^H.
\]
Therefore the adjoining condition for $G$-action implies the
adjoining condition for the induced $T$-action.
\end{ex}
Now we restrict to actions of complexity one, that is to the case
$\dim T=n-1$, $\dim X=2n$. Let $x\in X^T$ be a fixed point, and
\[
\alpha_1,\ldots,\alpha_n\in N = \Hom(T,S^1)\cong \mathbb{Z}^{n-1}
\]
be the weights of the tangent representation at $x$. This means,
\[
T_xX\cong V(\alpha_1)\oplus\cdots\oplus V(\alpha_n),
\]
where $V(\alpha)$ is the standard 1-dimensional complex
representation given by
\[
tz=\alpha(t)\cdot z,\quad z\in \mathbb{C}.
\]
If there is no complex structure on $X$, then we have an ambiguity
in choice of signs of $\alpha_i$. These signs do not affect the
following definitions.
\begin{defin}
A representation of $T^{n-1}$ on $\mathbb{C}^n$ is called in general
position if every $n-1$ of its $n$ weights are linearly
independent. An action of $T=T^{n-1}$ on $X=X^{2n}$ is called
\emph{an action in general position} if its tangent representation
at any fixed point is in general position.
\end{defin}
\begin{rem}
For a given $n$-tuple of weights $\alpha_1,\ldots,\alpha_n$ there
is a relation $c_1\alpha_1+\cdots+c_n\alpha_n=0$ in $N\cong
\mathbb{Z}^{n-1}$.
The action is in general position if $c_i\neq 0$ for
$i=1,\ldots,n$.
\end{rem}
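As an illustrative aside (not part of the original text), the general-position condition is straightforward to verify numerically: every $(n-1)$-subset of the $n$ weights must be linearly independent. A minimal sketch, with example weights chosen for illustration:

```python
# Numerical check of the general-position condition (illustrative): every
# (n-1)-subset of the n weights in Z^{n-1} must be linearly independent.
import numpy as np
from itertools import combinations

def in_general_position(weights):
    """weights: n integer vectors in Z^{n-1}."""
    W = np.asarray(weights, dtype=float)
    n = W.shape[0]
    return all(
        abs(np.linalg.det(W[list(idx)])) > 1e-9
        for idx in combinations(range(n), n - 1)
    )

# T^2 on C^3 with weights e_1, e_2, e_1+e_2: general position holds, and
# the relation alpha_1 + alpha_2 - alpha_3 = 0 has all c_i nonzero.
assert in_general_position([[1, 0], [0, 1], [1, 1]])
# Repeating e_1 breaks it: the subset {e_1, e_1} is linearly dependent.
assert not in_general_position([[1, 0], [0, 1], [1, 0]])
```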
\begin{thm}\label{thmManifold}
Consider an appropriate action of $T=T^{n-1}$ on $X=X^{2n}$ and
assume it is in general position. Then the orbit space $Q=X/T$ is
a topological manifold.
\end{thm}
\begin{proof}
First we prove the local statement near fixed points.
\begin{lem}\label{lemLocalQuotient}
For a representation of $T=T^{n-1}$ on $\mathbb{C}^n$ in general position
we have $\mathbb{C}^n/T\cong \mathbb{R}^{n+1}$.
\end{lem}
\begin{proof}
Consider the standard action of $G=T^n$ on $\mathbb{C}^n$ which rotates
the coordinates. The weights $e_1,\ldots,e_n$ of the standard action
form the standard basis of the character lattice
$\Hom(G,S^1)\cong \mathbb{Z}^n$. Consider the lattice homomorphism
$\phi\colon \mathbb{Z}^n\to N$ given by $\phi(e_i)=\alpha_i$,
$i=1,\ldots,n$. This homomorphism is induced by some homomorphism
$\phi^*\colon T\to G$ of tori. The given action of $T$ is the
composition of $\phi^*$ with the standard action.
Hence we may assume that there is an action of a subtorus
$T'=\phi^*(T)\subset G$, where $G$ acts in the standard way. The torus
$T'$ is given by $\{t_1^{c_1}\cdots t_n^{c_n}=1\}$, where
$(c_1,\ldots,c_n)$ is a linear relation on the weights $\alpha_i$
and $\gcd\{c_i\}=1$. The condition of general position implies
that all $c_i\neq 0$. Hence the intersection of $T'$ with each
coordinate circle in $G$ is a finite subgroup.
Let us denote the space $\mathbb{C}^n/T=\mathbb{C}^n/T'$ by $Q$. We have the map
$g\colon Q\to \mathbb{C}^n/G\cong \mathbb{R}_{\geqslant 0}^n$, which sends a $T$-orbit to its
$G$-orbit. For every $p\in \mathbb{R}_{>0}^n$ the preimage $g^{-1}(p)$ is a
circle $G/T'$. For every $p\in \partial\mathbb{R}_{\geqslant 0}^n$, the preimage $g^{-1}(p)$
is a single point, since the product of $T'$ with any nontrivial
coordinate subtorus generates the whole torus $G$. Therefore we
have $Q=\mathbb{R}_{\geqslant 0}^n\times S^1/\sim$, where $\sim$ collapses circles over
$\partial\mathbb{R}_{\geqslant 0}^n$. We have
\[
(\mathbb{R}_{\geqslant 0}^n\times S^1/\sim)\cong(\mathbb{R}^{n-1}\times\mathbb{R}_{\geqslant 0}\times
S^1)/\sim\cong \mathbb{R}^{n-1}\times\mathbb{C},
\]
which proves the lemma.
\end{proof}
We now prove the theorem by induction on the dimension of
stabilizer subgroup. If $\dim H=n-1$, that is $H=T$, Lemma
\ref{lemLocalQuotient} shows that $X/T$ is a manifold near the
fixed point set $X^T/T$. Now let $[x]\in X/T$ be an orbit such
that $T_x=H$, that is $x\in X^H$. Due to the adjoining condition,
there exists a point $x'$ such that the local representations at
$x$ and $x'$ coincide and $x'$ is close to a partition element
$X^{H'}$ with $\dim H'>\dim H$. Here by the local representation
we mean a representation of $T_x$ on the normal space
$T_xX/T_xT(x)$ to the orbit.
By induction, the space $X/T$ is a manifold near $X^{H'}/T$
therefore there exists a neighborhood of $[x']$ homeomorphic to
$\mathbb{R}^{n+1}$. Therefore there exists a neighborhood of $[x]$
homeomorphic to $\mathbb{R}^{n+1}$. Indeed, both neighborhoods are
homeomorphic to the orbit space of the local representation
according to slice theorem.
\end{proof}
\begin{rem}
Let $v_1,\ldots,v_{n-1}\in\mathbb{R}^{n-1}$ be the basis of a vector
space and $v_n=-v_1-\cdots-v_{n-1}$. Consider the subset $C$ of
$\mathbb{R}^{n-1}$ given by
\[
C=\bigcup_{I\subset [n],|I|=n-2} \Cone(v_i\mid i\in I).
\]
This subset is homeomorphic to the $(n-2)$-skeleton of the
standard nonnegative cone $\mathbb{R}_{\geqslant 0}^n$.
The subset $C$ is the $(n-2)$-skeleton of the simplicial fan
$\Delta_{n-1}$ of type $A_{n-1}$; it comes equipped with the
natural filtration
\[
C_0\subset\cdots\subset C_{n-2}=C
\]
where $C_k$ is the union of cones of dimension $k$ of
$\Delta_{n-1}$. This filtration can be defined topologically: we
say that $x\in \mathbb{R}^{n-1}$ has type $k$ if $C$ cuts a small disc
$B_x$ around $x$ into $n-k$ chambers. Then $C_k$ consists of all
points of type $\leqslant k$.
\end{rem}
Next we introduce the notion of a subspace in a topological
manifold which is locally modeled on the subset $C\subset
\mathbb{R}^{n-1}\subset \mathbb{R}^{n+1}$. Assume we are given a topological
manifold $Q$ of dimension $n+1$ and a subset $Z\subset Q$.
\begin{defin}
A subset $Z\subset Q$ is called a \emph{sponge} if, for any point
$x\in Z$, there is a neighborhood $U_x\subset Q$ such that
$(U_x,U_x\cap Z)$ is homeomorphic to $(V\times \mathbb{R}^2,(V\cap
C)\times\mathbb{R}^2)$, where $V$ is an open subset of the space
$\mathbb{R}^{n-1}\supset C$.
\end{defin}
Every sponge is filtered in a natural way compatible with the
filtration of $C$. We say that a point $x\in Z\subset Q$ has type
$k$ if $H^2(U_x\setminus Z;\mathbb{Z})\cong \mathbb{Z}^{n-k-1}$ for a small disc
neighborhood $x\in U_x\subset Q$. Then $Z_k$ consists of all
points of type at most $k$. Note that $\dim Z_k=k$. Informally
speaking, the sponge set is a collection of $(n-2)$-manifolds with
corners, and the corners are stacked together like maximal cones
in $C$. The case $n=4$ is shown in Fig.~\ref{figSponge}.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.2]{sponge.pdf}
\end{center}
\caption{Local structure of the sponge for
$n=4$.}\label{figSponge}
\end{figure}
\begin{con}
For a general action of a torus $G$, $\dim G=m$, on $X$ we can
consider the \emph{coarse filtration}:
\[
X_0\subset X_1\subset\cdots\subset X_m=X
\]
where $X_i=\bigcup_{\dim H\geqslant m-i}X^H$ is the union of all
orbits of dimension at most $i$. In particular, the set
$X\setminus X_{m-1}$ is the locus of the almost free action (``almost
free'' means that all stabilizers are finite). There is an
induced coarse filtration on $Q=X/G$:
\[
Q_0\subset Q_1\subset\cdots\subset Q_m
\]
\end{con}
\begin{rem}
The terms ``fine partition'' and ``coarse filtration'' refer to
the following distinction: the fine partition distinguishes different
subgroups of the torus, while the coarse filtration distinguishes
only their dimensions.
\end{rem}
\begin{prop}
For an appropriate action in general position of $T^{n-1}$ on
$X^{2n}$ we get a topological manifold $Q=X/T$. The coarse
filtration on $Q$ has the form
\[
Z_0\subset Z_1\subset\cdots\subset Z_{n-2}=Z\subset Q
\]
where $Z\subset Q$ is a sponge. The filtration by orbit dimensions
coincides with the sponge filtration defined topologically.
\end{prop}
\begin{proof}
The local statement near fixed points is proved in Lemma
\ref{lemLocalQuotient}. The global case follows from the adjoining
condition, similarly to the proof of Theorem \ref{thmManifold}.
\end{proof}
\section{Characteristic data}\label{secCharData}
Assume there is an appropriate action of $T=T^{n-1}$ in general
position on $X=X^{2n}$. We allow $X$ to have a boundary; in this
case we require that the action is free on $\partial X$. The
same arguments as before show that $Q=X/T$ is a topological
manifold with boundary and its boundary $\partial Q$ is $\partial X/T$.
In this section we assume that the actions are strictly
appropriate. This means that the stabilizer subgroups have no
finite components, and therefore the face partition of the coarse
filtration coincides with the fine partition on $Q$. To a
strictly appropriate action in general position we assign the
characteristic data $(Q,Z,\mu,e)$ consisting of the following
information:
\begin{itemize}
\item $Q=X/T$, the orbit space.
\item $Z\subset Q^\circ$, the sponge subset determined by the
action:
\[
Z_0\subset Z_1\subset\cdots\subset Z_{n-2}=Z,
\]
where $Z_i\subset Q$ is the set of orbits of dimension at most
$i$. The closure of a connected component of $Z_i\setminus
Z_{i-1}$ is called a \emph{face} of $Z$ of dimension $i$. Faces of
dimension $n-2$ are called \emph{facets}. Every face of dimension
$i$ is contained in exactly ${n-i\choose 2}$ different facets. The
stabilizer remains the same for all points in the interior of any
given face $F$, since no finite components are allowed in the
stabilizers. This stabilizer will be denoted $T_F$. Let
$\mathcal{F}=\{F_1,\ldots,F_m\}$ be the set of facets of $Z$.
\item $\mu$ is a \emph{characteristic map}
\[
\mu\colon \mathcal{F}\to\{1\mbox{-dimensional subgroups of
}T^{n-1}\}=\Hom(S^1,T^{n-1})\cong \mathbb{Z}^{n-1}
\]
It sends a facet $F_k$ to the oriented stabilizer subgroup
$T_{F_k}$ of any of its interior points (we introduce orientation
arbitrarily, see details in Section \ref{secOrientations}). For
any face $F$ of dimension $i$ we have $F=\bigcap_{j\in I}F_j$ for
a certain subset $I\subset [m]$, $|I|={n-i\choose 2}$. The
stabilizer $T_F$ is the product of the subgroups $\mu(F_j)=T_{F_j}$,
$j\in I$, inside the torus $T^{n-1}$. Note that this product is
generally not a direct product, since it has dimension $n-1-i$.
Nevertheless, the characteristic map $\mu$ determines the
stabilizers of all points.
\item $e\in H^2(Q\setminus Z; H_1(T))$ is the \emph{Euler class}
of the free part of the action. In detail, every orbit in
$Q\setminus Z$ is full-dimensional and there are no finite
stabilizer subgroups, therefore the free part of the action is a
principal $T$-bundle $p\colon X^{\free}\to Q\setminus Z$. This
bundle is classified by the homotopy class of a map
\[
Q\setminus Z\to BT\cong (\mathbb{C}P^\infty)^{n-1}\cong K(\mathbb{Z}^{n-1};2).
\]
Homotopy classes of such maps are in bijection with elements of the
second cohomology group of $Q\setminus Z$. Therefore, we have the
classifying element
\[
e\in H^2(Q\setminus Z;\mathbb{Z}^{n-1}),\mbox{ where }\mathbb{Z}^{n-1}\cong
H_2(BT;\mathbb{Z})\cong H_1(T;\mathbb{Z}).
\]
\end{itemize}
\begin{rem}
Note that, unlike in the case of half-dimensional torus actions, the
characteristic data of complexity one actions cannot be
arbitrary. It will be shown in this and the next section that the
Euler class $e$ and the characteristic function $\mu$ determine
each other to a large extent. Moreover, the Euler class of complexity
one actions is always nontrivial.
\end{rem}
Let $x\in Z\subset Q$ be a point of type $k\leqslant n-2$. Let
$U_x$ be a small neighborhood of $x$ in $Q$, homeomorphic to an
open disc. Let $i_x\colon U_x\to Q$ be the inclusion map. We have
an induced homomorphism
\[
H^2(Q\setminus Z;H_1(T))\to H^2(U_x\setminus Z;H_1(T))
\]
The image of $e\in H^2(Q\setminus Z;H_1(T))$ under this homomorphism
is denoted
\[
e_x\in H^2(U_x\setminus Z;H_1(T))\cong \mathbb{Z}^{n-k-1}\otimes H_1(T)
\]
and is called the \emph{local Euler class} at $x$. Recall that the type of
the point is defined by the rank of the second cohomology of
$U_x\setminus Z$; see Section \ref{secGeneralDefins}.
In particular, if $x$ has type $n-2$ (i.e.\ $x$ lies in the
interior of a facet), the neighborhood $U_x$ can be chosen in such
a way that $U_x\cap Z\cong \mathbb{R}^{n-2}$. In this case we have
$U_x\setminus Z\cong \mathbb{R}^{n+1}\setminus \mathbb{R}^{n-2}$ and
\[
H^2(U_x\setminus Z;H_1(T))\cong H^2(\mathbb{R}^{n+1}\setminus
\mathbb{R}^{n-2};H_1(T))\cong H_1(T).
\]
The last isomorphism is canonical provided $Q$ (hence $U_x$) is
oriented.
\begin{defin}
The Euler class $e$ and characteristic function $\mu$ are called
\emph{compatible} if the following condition holds: for any $x\in
Z$, the homomorphism $H_1(T)\to H_1(T/T_x)$ induced by the quotient
map $T\to T/T_x$ sends $e_x$ to zero.
\end{defin}
\begin{prop}
Assume there is an appropriate action in general position of
$T=T^{n-1}$ on a manifold $X=X^{2n}$. Then its characteristic data
$e$ and $\mu$ are compatible.
\end{prop}
\begin{proof}
As before, let $Q=X/T$ be the orbit space, $Z\subset Q$ the set of
orbits of dimensions $\leqslant n-2$, and $p\colon X\to Q$ the
projection map. Take any point $x\in Z\subset Q$. We can choose a
small neighborhood $U_x\ni x$, $U_x\subset Q$, such that the
stabilizer of any point $y\in U_x$ is contained in $T_x$ and
$U_x\cong \mathbb{R}^{n+1}$. Consider the map
\[
f\colon p^{-1}(U_x)/T_x\stackrel{T/T_x}{\longrightarrow} U_x
\]
taking the remaining quotient. Since all stabilizers of points in
$U_x$ are contained in $T_x$, the map $f$ is a principal
$T/T_x$-bundle. It is a trivial bundle since $U_x$ is
contractible, therefore the induced $T/T_x$-bundle over
$U_x\setminus Z$ is also trivial. Hence its Euler class vanishes.
However, this Euler class is the image of $e_x\in H^2(U_x\setminus
Z;H_1(T))$ under the coefficient map induced by $H_1(T)\to H_1(T/T_x)$, which
proves the statement.
\end{proof}
\begin{rem}
For a point $x$ in the interior of a facet $F_i$ the stabilizer
$T_x$ is one-dimensional. In this case the compatibility condition
says that $e_x$ is proportional to the fundamental class of
$T_x=\mu(F_i)$:
\[
e_x=k_i\mu(F_i)\in H_1(T;\mathbb{Z})\cong \Hom(S^1,T).
\]
The constants $k_i\in \mathbb{Z}$ can be determined from the weights of
the tangent representation at any fixed point adjacent to $F_i$.
It will be shown in Section \ref{secOrientations} that all these
constants are actually $\pm1$ for strictly appropriate actions.
\end{rem}
\begin{con}\label{constrModelSpace}
Let us construct a topological model space given abstract
compatible characteristic data. Assume a topological
$(n+1)$-manifold $Q$ is given, and let $Z\subset Q$ be a sponge
with facets $F_1,\ldots,F_m$. Let $\mu$ be a map assigning a
1-dimensional subgroup of $T=T^{n-1}$ to any facet $F_i$ with the
following property: if a $k$-dimensional face $F$ of a sponge lies
in facets $F_{i}$ with labels $i\in I$, $|I|={n-k\choose 2}$, then
\[
\dim\prod_{i\in I}\mu(F_i)=k.
\]
The subgroup $\prod_{i\in I}\mu(F_i)$ will be denoted $T_x$ if $x$
lies in the interior of $F$. If $x\in Q\setminus Z$ we set
$T_x=\{1\}\subset T$. Finally, fix a class $e\in H^2(Q\setminus Z;
H_1(T))$ compatible with $\mu$. With all this information fixed,
introduce a space $Y=Y_{(Q,Z,\mu,e)}$. As a set,
\[
Y=\bigsqcup_{x\in Q} T/T_x.
\]
The topology is introduced in two steps.
(1) The topology on a subset
\[
Y^{\free}=\bigsqcup_{x\in Q\setminus Z} T/T_x= \bigsqcup_{x\in
Q\setminus Z} T\subset Y
\]
is introduced in a way such that the natural projection
$Y^{\free}\to Q\setminus Z$ is the principal $T$-bundle classified
by $e\in H^2(Q\setminus Z;H_1(T))$.
(2) For points of $\bigsqcup_{x\in Z} T/T_x$ we specify a
neighborhood basis. Let $x\in Z$ and $t_x\in T/T_x\subset Y$. To
define the neighborhood basis at $t_x$, we fix a small open
neighborhood $U_x\subset Q$ of $x$ and for each $x'\in U_x$ take a
projection of tori $p_{x'}\colon T/T_{x'}\to T/T_x$. This is well
defined since $U_x$ is assumed small enough so that $T_x$ contains
any other stabilizer $T_{x'}$. Let $V$ be a neighborhood of $t_x$
in $T/T_x$. The subsets of the form
\[
\bigsqcup_{x'\in U_x} p_{x'}^{-1}(V)
\]
form a neighborhood basis at $t_x$. Note that, since $e$ and
$\mu$ are compatible, we have a trivial principal $T/T_x$-bundle
over $U_x\setminus Z$; therefore the topology defined in (2)
is compatible with the one defined in (1) on the subset
$U_x\setminus Z$.
Finally, define the $T$-action on each fiber $T/T_x$ via the
projection $T\to T/T_x$. It can be seen that $Y$ is a compact
Hausdorff topological space carrying the continuous action of $T$.
Its orbit space is homeomorphic to $Q$.
\end{con}
The constructed space $Y=Y_{(Q,Z,\mu,e)}$ is not necessarily a
manifold.
\begin{ex}
Assume $e_x=0$ for some point $x$ lying in the interior of a facet
$F_j$. Then $Y$ is not a manifold over $x$. Indeed, by
construction, the preimage of $U_x$ in $Y$ is homeomorphic to
$U_x\times T/\sim$, where $(x',t')\sim(x'',t'')$ whenever
$x'=x''\in Z$ and $t'(t'')^{-1}\in \mu(F_j)$. This subset is not a
manifold, which can be shown by computing its local homology
groups for points lying over $Z$.
\end{ex}
\begin{prop}\label{propHomeoToModel}
Let $X=X^{2n}$ be a manifold with strictly appropriate action of
$T=T^{n-1}$ in general position. Let $(Q,Z,\mu,e)$ be its
characteristic data. Let $Y$ be the model space constructed from
the data $(Q,Z,\mu,e)$. Then there is a $T$-equivariant
homeomorphism $h\colon Y\to X$ which induces the identity
homeomorphism on the orbit space $Q$:
\[
\xymatrix{ Y\ar[r]^h \ar[d]^{p_Y}& X \ar[d]^{p_X} \\
Q \ar@{=}[r]& Q}
\]
\end{prop}
\begin{proof}
An equivariant homeomorphism over $Q\setminus Z$ exists
immediately, since both $p_X^{-1}(Q\setminus Z)$ and
$p_Y^{-1}(Q\setminus Z)$ are principal $T$-bundles classified
by $e$. For a point $x\in Z\subset Q$, an equivariant
homeomorphism $h\colon p_Y^{-1}(U_x\setminus Z)\to
p_X^{-1}(U_x\setminus Z)$ extends uniquely to an equivariant
homeomorphism $h\colon p_Y^{-1}(U_x)\to p_X^{-1}(U_x)$. Indeed,
there is a unique equivariant homeomorphism $\tilde{h}\colon
p_Y^{-1}(U_x)/T_x\to p_X^{-1}(U_x)/T_x$ which extends the
homeomorphism $h/T_x\colon p_Y^{-1}(U_x\setminus Z)/T_x\to
p_X^{-1}(U_x\setminus Z)/T_x$, since both spaces are trivial
$(T/T_x)$-bundles over $U_x$ (by the compatibility
condition) and $U_x\setminus Z$ is dense in $U_x$. For a point
$t_x\in T/T_x\subset p_Y^{-1}(U_x)$ over $x$ there is a unique
point $\alpha\in p_X^{-1}(U_x)$ such that
$\tilde{h}([t_x])=[\alpha]$, since the projection map
$p_X^{-1}(U_x)\to p_X^{-1}(U_x)/T_x$ is a bijection over $x$.
Hence we can extend $h$ by putting $h(t_x)=\alpha$.
This procedure defines an equivariant continuous bijection between
compact spaces $Y$ and $X$. Since $Y$ is compact and $X$ is
Hausdorff, it is an equivariant homeomorphism.
\end{proof}
\section{Orientation issues and details}\label{secOrientations}
Consider a representation of $T=T^{n-1}$ on $\mathbb{C}^n$ in general
position. The weights $\alpha_1,\ldots,\alpha_n\in \Hom(T,S^1)$
are defined up to sign.
\begin{defin}
An \emph{omniorientation} is a choice of the orientation of $T$
(hence the orientation of the lattice $N=\Hom(T,S^1)$) and the
choice of signs of all vectors $\alpha_i$.
\end{defin}
\begin{con}
Assume there is a fixed basis in the lattice $N$, so that
$\alpha_j$ is written in coordinates
$\alpha_j=(\alpha_{j,1},\ldots,\alpha_{j,n-1})$. For each
$i=1,\ldots,n$ consider the determinant of the matrix formed by
$\alpha_j$ with $j\neq i$:
\[
\tilde{c}_i=(-1)^i\alpha_1\wedge\cdots\wedge
\widehat{\alpha_i}\wedge\cdots\wedge\alpha_n\in
\Lambda^{n-1}N\cong\mathbb{Z}
\]
Since the $\alpha_i$ are in general position, we have $\tilde{c}_i\neq 0$ for
all $i=1,\ldots,n$. Cramer's rule implies
\[
\tilde{c}_1\alpha_1+\cdots+\tilde{c}_n\alpha_n=0.
\]
Let $c_{\gcd}=\gcd(\tilde{c}_1,\ldots,\tilde{c}_n)$ and $c_i=\tilde{c}_i/c_{\gcd}$.
Let $G=T^n$ act on $\mathbb{C}^n$ in the standard way
\[
(t_1,\ldots,t_n)\cdot (z_1,\ldots,z_n)=(t_1z_1,\ldots,t_nz_n)
\]
and let $T'$ be the subtorus
\begin{equation}\label{eqTprime}
T'=\{t_1^{c_1}\cdots t_n^{c_n}=1\}\subset G
\end{equation}
The proof of Lemma \ref{lemLocalQuotient} implies that the orbit
space of the representation of $T$ on $\mathbb{C}^n$ coincides with the
orbit space of the induced action of $T'$ on $\mathbb{C}^n$; therefore we
need not distinguish between these two cases.
\end{con}
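To illustrate the construction, let $n=3$ and take the weights
$\alpha_1=(1,0)$, $\alpha_2=(0,1)$, $\alpha_3=(-1,-1)$ in
$N\cong\mathbb{Z}^2$. Then
\[
\tilde{c}_1=-\det\begin{pmatrix}0&1\\-1&-1\end{pmatrix}=-1,\qquad
\tilde{c}_2=\det\begin{pmatrix}1&0\\-1&-1\end{pmatrix}=-1,\qquad
\tilde{c}_3=-\det\begin{pmatrix}1&0\\0&1\end{pmatrix}=-1,
\]
so that
$\tilde{c}_1\alpha_1+\tilde{c}_2\alpha_2+\tilde{c}_3\alpha_3
=-(\alpha_1+\alpha_2+\alpha_3)=0$, $c_{\gcd}=1$, and all
$c_i=-1$. The corresponding subtorus is
$T'=\{t_1t_2t_3=1\}\subset T^3$.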
\begin{lem}\label{lemPMone}
The representation action of $T=T'$ on $\mathbb{C}^n$ in general position
is strictly appropriate if and only if all $c_i=\pm1$, that is,
all the parameters $\tilde{c}_i$ coincide up to sign.
\end{lem}
\begin{proof}
The point $(0,\ldots,0,1,0,\ldots,0)$ with $1$ at the $j$-th position
has the stabilizer $T'\cap G_j$, where $T'$ is given by
\eqref{eqTprime} and $G_j$ is the $j$-th coordinate circle of
$G\cong T^n$. This stabilizer is isomorphic to the cyclic group
$\mathbb{Z}_{c_j}$. If the action is strictly appropriate, then there are
no finite components in the stabilizer subgroups, so each $c_j$ is
necessarily $\pm1$.
The converse statement is proved similarly. The stabilizers of the
$T'$-action on $\mathbb{C}^n$ have the form $T'\cap G_I$ for all possible
coordinate subtori $G_I\subseteq G$, $I\subseteq [n]$. Such a group
has a finite component of order $\gcd(c_i\mid i\in I)$. Hence, if all $c_i$ are
$\pm1$, these finite components vanish.
\end{proof}
Recall that $C\subset \mathbb{R}^{n-1}\subset\mathbb{R}^{n+1}$ denotes the
$(n-2)$-skeleton of the fan of type $A_{n-1}$. This space is the
sponge of an appropriate representation action of $T$ on $\mathbb{C}^n$.
In the following we only consider strictly appropriate actions.
The facets $\{F_{i,j}\mid 1\leqslant i<j\leqslant n\}$ of $C$ are
labeled in such a way that $F_{i,j}$ is ``spanned'' by all the weights
except $\alpha_i$ and $\alpha_j$. Let us fix an orientation on
1-dimensional stabilizers of the action (this corresponds to the
choice of the signs of the characteristic values $\mu(F_{i,j})\in
\Hom(S^1,T)$). These orientations determine the orientation of the
orbit $Tx\cong T/\mu(F_{i,j})$ for $x\in F_{i,j}^\circ$. The
preimage of $F_{i,j}^\circ$ under the projection map has the form
$\{(z_1,\ldots,z_n)\in\mathbb{C}^n\mid z_i=z_j=0,\ z_k\neq 0$ for $k\neq
i,j\}$; this space has a canonical orientation determined by the
complex structure on $\mathbb{C}^n$. Therefore the orientations of the
stabilizer circles determine the orientations of facets $F_{i,j}$.
Finally, since the orientation on $\mathbb{C}^n/T\cong \mathbb{R}^{n+1}$ is
fixed, the orientation of $F_{i,j}$ determines the orientation of
a small 2-sphere $S_{i,j}^2$ around $F_{i,j}$. Let us describe the
Euler class of the free part of the action.
\begin{prop}\label{propPMone}
The Euler class $e\in H^2(Q\setminus Z;H_1(T))$ of a strictly
appropriate representation action of $T$ on $\mathbb{C}^n$ is given by
the condition
\[
\langle e, [S_{i,j}^2]\rangle = \frac{c_i}{c_j}\mu(F_{i,j})\in
H_1(T)\cong \Hom(S^1,T),
\]
for a small 2-sphere around facet $F_{i,j}$, $1\leqslant
i<j\leqslant n$.
\end{prop}
The constants $c_i$ were defined earlier in this section. Lemma
\ref{lemPMone} shows that for strictly appropriate actions
$c_i=\pm1$. Note that $\frac{c_i}{c_j}=\frac{\tilde{c}_i}{\tilde{c}_j}$, and the
parameters $\tilde{c}_i,\tilde{c}_j$ can be computed from the weight vectors.
\begin{proof}
Assume $i=1$, $j=2$ for simplicity. The preimage of a sphere
$S_{1,2}^2$ in the space $\mathbb{C}^n$ has the form
\[
M=\{(z_1,\ldots,z_n)\in\mathbb{C}^n\mid
|z_1|^2+|z_2|^2=\varepsilon,|z_k|=\varepsilon, k>2\}\mbox{ for
small }\varepsilon>0.
\]
The subtorus $T=\{t_1^{c_1}\cdots t_n^{c_n}=1\}\subset G$ acts
freely on $M$. The stabilizer $T_x=\mu(F_{1,2})$ for $x\in
F_{1,2}^{\circ}$ has the form
\[
T_x=\{t_1^{c_1}t_2^{c_2}=1,t_k=1,k>2\}.
\]
The induced action of $T/T_x$ on $M/T_x$ is a trivial principal
bundle, therefore the Euler class of the $T$-action on $M$ coincides with
the image of the Euler class of the $T_x$-action on $M$ under the
map induced by the inclusion $i_x\colon T_x\to T$. The $T_x$-action on $M$
gives the Hopf bundle if $c_1,c_2$ have the same sign, and the ``anti-Hopf''
bundle if $c_1,c_2$ have different signs. Its Euler class is
$\mu(F_{1,2})\in H^2(S^2_{1,2};H_1(T))$ in the first case and
$-\mu(F_{1,2})$ in the second case.
\end{proof}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.3]{localThree.pdf}
\end{center}
\caption{Orienting three facets with a common
face}\label{figFacets}
\end{figure}
\begin{rem}
Note that there exist relations on the cycles $[S_{i,j}]\in
H_2(Q\setminus C;\mathbb{Z})\cong \mathbb{Z}^{n-1}$. Every triple of indices
$i,j,k$ determines the $(n-3)$-face $F_{i,j,k}\subset C$ which lies in
facets $F_{i,j},F_{j,k},F_{i,k}$. If we choose a small circle
around $F_{i,j,k}\subset \mathbb{R}^{n-1}$ and orient the facets
$F_{i,j},F_{j,k},F_{i,k}$ consistently (see the schematic
Fig.~\ref{figFacets}), we get a relation in $H_2(Q\setminus
C;\mathbb{Z})$:
\[
[S_{i,j}]+[S_{j,k}]+[S_{i,k}]=0.
\]
It implies the cocycle relation for stabilizers:
\begin{equation}\label{eqCocycleReln}
\frac{c_i}{c_j}\mu(F_{i,j})+\frac{c_j}{c_k}\mu(F_{j,k})+
\frac{c_i}{c_k}\mu(F_{i,k})=0.
\end{equation}
This relation is not surprising. Indeed, the product of the circle
subgroups $\mu(F_{i,j}),\mu(F_{j,k}),\mu(F_{i,k})$ inside the
torus $T$ has dimension $2$, therefore there should be exactly
one linear relation on their fundamental classes.
\end{rem}
Proposition \ref{propPMone} implies that for strictly appropriate
torus actions we have $e_x=\pm[\mu(F)]$ for $x\in F^\circ$, since
this holds in the local chart around a fixed point.
\section{Reductions of locally standard
actions}\label{secLocStandActions}
A smooth manifold $X=X^{2n}$ with the action of $G=T^n$ is called
\emph{locally standard} if the action is locally modeled by the
standard representation of $G=T^n$ on $\mathbb{C}^n$. Since
$\mathbb{C}^n/T^n\cong \mathbb{R}_{\geqslant 0}^n$, the orbit space $P=X/G$ gets a natural
structure of a manifold with corners. Manifolds with locally
standard actions are classified up to equivariant homeomorphism
(see \cite{Yo}) by the following \emph{characteristic data}
\begin{enumerate}
\item The manifold with corners $P$, $\dim P=n$. There is a requirement that
every $k$-dimensional face of $P$ is contained in precisely $n-k$
facets. Such manifolds with corners are called \emph{nice} in
\cite{MasPan} or just \emph{manifolds with faces}.
\item The characteristic function $\lambda$ which maps a facet $F$ of
$P$ to a circle subgroup of $G$: the stabilizer of any interior
point of $F$. The characteristic function satisfies the celebrated
$(*)$-condition: whenever facets $F_1,\ldots,F_k$ intersect in $P$,
the subgroups $\lambda(F_1),\ldots,\lambda(F_k)$ form a direct
product inside $G$. Since every circle subgroup of $G$ determines
a primitive integral vector in $\Hom(S^1,G)\cong \mathbb{Z}^n$ up to
sign, it will be convenient to assume that $\lambda$ takes values
in the lattice.
\item The Euler class $e\in H^2(P^\circ;H_1(G))$,
which classifies the principal $G$-bundle $X^{\free}\to
X^{\free}/G=P^{\circ}$, where $X^{\free}$ is the free part of the
$G$-action.
\end{enumerate}
In the following we assume that every face of $P$ contains a
vertex so that the action is \emph{appropriate}.
A manifold $X$ with a locally standard action of $G$ is called a
\emph{quasitoric manifold} if $P=X/G$ is isomorphic to a simple
polytope as a manifold with corners. The free part of the action is a
trivial $G$-bundle, since $P$ is contractible. Therefore, the Euler
class vanishes for quasitoric manifolds.
A fixed point $v$ of a locally standard action of $G$ on $X$
corresponds to a vertex $v$ of $P$ (we denote them by the same
letter). We have $v=F_1\cap\cdots\cap F_n$ for some facets
$F_i\subset P$. Then the weights $\alpha_1,\ldots,\alpha_n\in
\Hom(G,S^1)=N$ of the tangent representation at $v$ form the basis
dual to $\lambda(F_1),\ldots,\lambda(F_n)\in \Hom(S^1,G)=N^*$.
Let $\{\alpha_{v,i}\}$ be the collection of all weights at all fixed
points. We can choose a generic homomorphism of the lattices
\[
\phi\colon \Hom(G,S^1)\cong\mathbb{Z}^n\to \mathbb{Z}^{n-1}
\]
such that the images
$\phi(\alpha_{v,1}),\ldots,\phi(\alpha_{v,n})\in \mathbb{Z}^{n-1}$ are in
general position for any fixed point $v$. The homomorphism $\phi$
corresponds to a homomorphism of tori $\phi^*\colon
T^{n-1}\to G$. Therefore the action of the subtorus
$T=\phi^*(T^{n-1})\subset G$ on $X$ is in general position.
\begin{thm}\label{thmQToricReduc}
Let $X=X^{2n}$ be a quasitoric manifold with the action of $G\cong
T^n$. Let $T\subset G$ be a subtorus of dimension $n-1$ such that
the induced action of $T$ on $X$ is an action in general position.
Then $X/T\cong S^{n+1}$.
\end{thm}
\begin{proof}
Denote the orbit space $X/T$ by $Q$ and the orbit space $X/G$ by
$P$. By the definition of a quasitoric manifold, $P$ is a simple
polytope, $\dim P=n$. We have a map $g\colon Q\to P$, which sends
a $T$-orbit to the $G$-orbit in which it lies. For any point $x$
in the interior of $P$ we have $g^{-1}(x)\cong S^1$. Since the
action is in general position, the preimage of a point $x\in\partial P$
is a single point (this fact was actually proved in Lemma
\ref{lemLocalQuotient} for a local chart). Since $P$ is
contractible, the map $g\colon Q\to P$ admits a section over
$P^\circ$. Therefore we have
\[
Q\cong P\times S^1/\sim
\]
where $\sim$ collapses circles over $\partial P$. Since $P$ is
homeomorphic to the $n$-disc $D^n$, we have
\[
Q\cong D^n\times S^1/\sim\cong \partial(D^n\times D^2)\cong S^{n+1},
\]
which proves the statement.
\end{proof}
We further investigate the characteristic data of the induced
action of $T\cong T^{n-1}$ on a quasitoric manifold. The arguments
in the proof of Theorem \ref{thmQToricReduc} imply the following
statement.
\begin{prop}
The sponge of the $T$-action on a quasitoric manifold $X$ has the
form
\[
Z\subset S^{n-1}\subset \Sigma^2S^{n-1}\cong Q,
\]
where $S^{n-1}$ is identified with the boundary of the polytope
$P$ and $Z$ is its $(n-2)$-skeleton. The facets of $Z$ are exactly
the faces of $P$ of codimension $2$.
\end{prop}
Note that the characteristic function $\lambda$ of the $G$-action
determines the characteristic function $\mu$ of the $T$-action.
Let $F$ be a codimension-2 face of $P$ (hence a facet of $Z$).
Then $F=F_1\cap F_2$, where $F_1,F_2$ are the facets of $P$. We
have
\[
\mu(F)=(\lambda(F_1)\times\lambda(F_2))\cap T.
\]
Here $\lambda(F_1)\times\lambda(F_2)$ is a 2-torus in $G$, and
since $T$ is a codimension-1 subtorus of $G$ in general position,
the intersection $(\lambda(F_1)\times\lambda(F_2))\cap T$ is a
1-dimensional subgroup, which is the stabilizer of the $T$-action
on the interior of $F$. If we want this subgroup to be a circle
(recall that the definition of a strictly appropriate action
requires that the stabilizers have no finite components), then the
subgroup $T\subset G$ is subject to some additional restrictions.
Namely, the subgroup $T\subset G$ determines the character
$\alpha_T\in \Hom(G,S^1)$, $\alpha_T\colon G\to G/T\cong S^1$. The next
lemma easily follows from Lemma \ref{lemPMone}:
\begin{lem}
The induced action of $T$ on a locally standard $G$-manifold $X$
is strictly appropriate if and only if
$\langle\alpha_T,\lambda(F_i)\rangle=\pm1$ for all facets $F_i$.
\end{lem}
\begin{ex}
Let $c\colon \{F_1,\ldots,F_m\}\to[n]$ be a proper coloring of the
facets of a simple polytope $P$. This means that whenever $F_i$ and
$F_j$ are adjacent, their colors $c(F_i), c(F_j)$ are different.
Given such a coloring we can construct a special characteristic
function $\lambda_c\colon\{F_1,\ldots,F_m\}\to\mathbb{Z}^n$ which
associates to $F_i$ the basis vector $\lambda_c(F_i)=
\epsilon_{c(F_i)}\in \mathbb{Z}^n$. Such characteristic functions and
corresponding quasitoric manifolds $X_{(P,\lambda_c)}$ were called
\emph{pullbacks from linear model} in \cite{DJ}. It can be seen
that the induced action of the subtorus
\[
T=\{t_1^{c_1}t_2^{c_2}\cdots t_n^{c_n}=1\}\subset G \qquad
c_i=\pm1,
\]
on $X_{(P,\lambda_c)}$ is strictly appropriate.
Note that there exist other examples of strictly appropriate
induced actions which do not come from colored characteristic
functions.
\end{ex}
The Euler class $e$ of the induced action of $T$ on a quasitoric
manifold $X$ determines the action.
\begin{thm}\label{thmClassSphere}
Let $X'$ and $X''$ be two manifolds with strictly appropriate
actions in general position. Let $(Q'\cong S^{n+1},Z',\mu',e')$
and $(Q''\cong S^{n+1},Z'',\mu'',e'')$ be their characteristic
data. Suppose there is a homeomorphism of pairs $(Q',Z')\cong
(Q'',Z'')$ and $e'_x=e''_x$ for any point $x$ in the sponge. Then
$X'$ and $X''$ are equivariantly homeomorphic.
\end{thm}
\begin{proof}
Taking $x$ in the interior of a facet $F$ of a
sponge $Z'\cong Z''$, we see that $\mu'(F)=\mu''(F)$ since $e_x'$
is the fundamental class of $\mu'(F)$ and $e_x''$ is the
fundamental class of $\mu''(F)$. Hence $\mu'=\mu''$.
Let $(Q,Z)$ be either $(Q',Z')$ or $(Q'',Z'')$ and let
$U=\bigcup_{x\in Z}U_x$ be a neighborhood of $Z$ in $Q$. As
before, $U_x$ denotes a small neighborhood of $x\in Z$
homeomorphic to an open ball. The local classes $e_x$ determine
classes $\hat{e}_x\in H^3(U_x,U_x\setminus Z;\mathbb{Z}^{n-1})$ via the
exact sequence
\[
\xymatrix{ H^2(U_x;\mathbb{Z}^{n-1})\ar[r] \ar@{=}[d]& H^2(U_x\setminus
Z;\mathbb{Z}^{n-1}) \ar[r] &
H^3(U_x,U_x\setminus Z;\mathbb{Z}^{n-1}) \ar[r]& H^3(U_x;\mathbb{Z}^{n-1})\ar@{=}[d]\\
0& e_x\ar@{|->}[r]& \hat{e}_x&0}
\]
By a Mayer--Vietoris argument, the classes $\{\hat{e}_x\mid x\in Z\}$
determine a unique element $\hat{e}\in H^3(U,U\setminus Z;\mathbb{Z}^{n-1})$
such that $i_x^*(\hat{e})=\hat{e}_x$ for the inclusion $i_x\colon
U_x\hookrightarrow U$. By excision, we can view $\hat{e}$ as an
element of $H^3(Q,Q\setminus Z;\mathbb{Z}^{n-1})\cong H^3(U,U\setminus
Z;\mathbb{Z}^{n-1})$. Recall that $Q\cong S^{n+1}$. From the exact
sequence
\[
\xymatrix{ H^2(Q;\mathbb{Z}^{n-1})\ar[r] \ar@{=}[d]& H^2(Q\setminus
Z;\mathbb{Z}^{n-1}) \ar[r] &
H^3(Q,Q\setminus Z;\mathbb{Z}^{n-1}) \ar[r]& H^3(Q;\mathbb{Z}^{n-1})\ar@{=}[d]\\
0& e\ar@{|->}[r]& \hat{e}&0}
\]
we extract a unique element $e\in H^2(Q\setminus Z;\mathbb{Z}^{n-1})$
which restricts to $e_x$ for every point $x\in Z$.
The characteristic data $(Q'\cong S^{n+1},Z',\mu',e')$ and $(Q''\cong
S^{n+1},Z'',\mu'',e'')$ therefore coincide, hence the spaces $X'$ and $X''$
are equivariantly homeomorphic to the model space by
Proposition \ref{propHomeoToModel}. Thus they are equivariantly
homeomorphic to each other.
\end{proof}
\begin{rem}
Instead of the equality $e_x'=e_x''$ one can require the equality
of characteristic functions $\mu'=\mu''$ and, for a small 2-sphere
around each facet $F$, specify the type of its preimage (whether it
is the Hopf or the anti-Hopf bundle, see Proposition \ref{propPMone}). If
the types agree for $X'$ and $X''$, then the equality $\mu'=\mu''$
implies the equality of the local classes $e_x'=e_x''$.
\end{rem}
In order to study certain examples, we need a modification of
Theorem \ref{thmClassSphere}. Let $M$ be a closed manifold of
dimension $n-1$. Assume there is a regular simple cell subdivision
of $M$; this means there is a given regular cell structure in
which every $k$-dimensional cell is contained in exactly $n-k$
maximal cells. Its $(n-2)$-skeleton $Z_M=M_{n-2}$ is a sponge.
Consider the manifold with boundary $Q_M=M\times D^2$. We consider
$M$ as the subset $M\times\{0\}\subset Q_M$.
\begin{prop}\label{propSpecialHomeo}
Let $(X,\partial X)$ be a $2n$-dimensional manifold with boundary, and
assume there is an appropriate action of $T=T^{n-1}$ on $X$ with
characteristic data $(Q_M,Z_M,\mu_M,e_M)$. We also assume that the
action is free on the boundary $\partial X$ and the principal
$T$-bundle $\partial X\to \partial X/T=\partial Q_M\cong M\times \partial D^2$ is
trivial. Then the class $e_M\in H^2(Q_M\setminus Z_M;\mathbb{Z}^{n-1})$
is uniquely determined by the local classes $e_x$, $x\in Z_M$.
\end{prop}
\begin{proof}
There is an exact sequence of the pair $(Q_M\setminus Z_M,\partial
Q_M)$:
\[
\xymatrix{ H^2(Q_M\setminus Z_M,\partial Q_M;\mathbb{Z}^{n-1})\ar[r] &
H^2(Q_M\setminus Z_M;\mathbb{Z}^{n-1}) \ar[r] &
H^2(\partial Q_M;\mathbb{Z}^{n-1})
}
\]
The class $e\in H^2(Q_M\setminus Z_M;\mathbb{Z}^{n-1})$ maps to zero
since the free part of the action is a trivial $T$-bundle over $\partial
Q_M$. Hence there exists $\tilde{e}\in H^2(Q_M\setminus Z_M,\partial
Q_M;\mathbb{Z}^{n-1})$ which maps to $e$, and $e$ is uniquely determined
by the class $\tilde{e}$. We have
\[
(Q_M\setminus Z_M)/\partial Q_M\simeq \Sigma^2(M\setminus Z_M)
\]
hence $H^2(Q_M\setminus Z_M,\partial Q_M;\mathbb{Z}^{n-1})\cong
\widetilde{H}^0(M\setminus Z_M;\mathbb{Z}^{n-1})$. The space $M\setminus Z_M$ is the disjoint
union of the open top-dimensional cells of $M$. It can be seen that
the cohomology classes in $H^2(Q_M\setminus Z_M,\partial Q_M;\mathbb{Z}^{n-1})$
are localized near $Z_M$ and thus are completely determined by the
local classes.
\end{proof}
\begin{cor}
Under the assumptions of Proposition \ref{propSpecialHomeo}, the
equivariant homeomorphism type of $X$ is determined by $(Q_M,Z_M)$
and the weights of all tangent representations at all fixed
points.
\end{cor}
\begin{con}\label{constrDiscTimesMfd}
Examples of the actions described above can be constructed in the
following way. We consider a manifold $P\cong M\times [0;1]$ with
boundary $\partial P=\partial_0P\sqcup\partial_1P$, $\partial_iP=M\times\{i\}$, and
endow it with the structure of a nice manifold with corners.
Namely, we subdivide the boundary component $\partial_0P$ according to
the subdivision of $M$ and do nothing with $\partial_1P$ (this boundary
component is considered a single face of dimension $n-1$). Now we
may take an abstract characteristic function satisfying the
$(*)$-condition:
\[
\lambda\colon \{\mbox{facets of }\partial_0P\}\to \Hom(S^1,G)\cong
\mathbb{Z}^n,
\]
and construct a topological manifold
\[
X=(P\times G)/\sim
\]
with boundary $\partial X=\partial_1P\times G$. Here $G\cong T^n$ and $\sim$
collapses tori over $\partial_0P$ according to characteristic function
(refer to \cite{DJ,Yo,BPnew} for details). These particular
manifolds with boundary were studied in \cite{AyzMexicana}.
We take a generic $(n-1)$-dimensional subtorus $T\subset G$ so
that the induced action of $T$ on $X$ is strictly appropriate and
in general position. It can be seen that the orbit space $Q=X/T$
is homeomorphic to
\[
Q\cong P\times S^1/\sim = (M\times[0;1])\times S^1/\sim
\]
where the circles collapse over $\partial_0P=M\times\{0\}$. Therefore
$Q\cong M\times D^2$. The sponge of the $T$-action is the
$(n-2)$-skeleton of $M=M\times \{0\}\subset M\times D^2$. Finally,
the free $T$-action over $M\times\partial D^2$ is a trivial bundle,
since the $G$-action is trivial over $\partial_1P$.
\end{con}
\section{Grassmann and flag manifolds}
Next we review two classical examples motivating our study.
\begin{ex}
The standard action of a compact torus $T^4$ on $\mathbb{C}^4$ induces
the action of $T^4$ on a Grassmann manifold $G_{4,2}$ of complex
$2$-planes in $\mathbb{C}^4$. This action has non-effective kernel
$\Delta(T^1)\subset T^4$, hence we have an effective action of
$T=T^4/\Delta(T^1)\cong T^3$ on $G_{4,2}$, $\dim_\mathbb{R} G_{4,2}=8$.
There are $6$ fixed points, and it is not difficult to find the
weights of their tangent representations. The easiest way to do
this is to look at the image of the moment map, which coincides
with a regular octahedron $\Delta_{4,2}$. Its vertices correspond
to the fixed points, and the primitive lattice vectors along the
edges of the octahedron correspond to the weights of the tangent
representation. For example, the primitive vectors along the edges emanating from the top vertex
$(0,0,1)$ of the octahedron are
\[
\alpha_1=(1,0,-1),\quad \alpha_2=(0,1,-1),\quad
\alpha_3=(-1,0,-1),\quad \alpha_4=(0,-1,-1).
\]
Any three of them are linearly independent, hence the action is in
general position. Moreover, the action is strictly appropriate.
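The general-position claim can be verified by a direct computation; the following short sketch checks that every three of the four weight vectors listed above are linearly independent.

```python
from itertools import combinations

# Weights of the tangent representation at the top vertex (0, 0, 1)
# of the octahedron: the primitive vectors along its edges.
weights = [(1, 0, -1), (0, 1, -1), (-1, 0, -1), (0, -1, -1)]

def det3(a, b, c):
    """Determinant of the 3x3 matrix with rows a, b, c."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

# Every three of the four weights are linearly independent
# (nonzero determinant), i.e. the action is in general position
# at this fixed point.
dets = [det3(*t) for t in combinations(weights, 3)]
print(dets)
assert all(d != 0 for d in dets)
```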
It was proved in \cite{BT} that the orbit space $G_{4,2}/T$ is
homeomorphic to $S^5$. The sponge $Z$ of the action is obtained by
taking the boundary $\partial\Delta_{4,2}$ of the octahedron and attaching
three squares along the equatorial cycles, as shown in
Fig.~\ref{figOctah}.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.3]{octahedron.pdf}
\end{center}
\caption{The sponge of $G_{4,2}$ consists of the boundary of the
octahedron with $3$ squares attached along the
equators}\label{figOctah}
\end{figure}
\end{ex}
\begin{ex}
The standard action of $T^3$ on $\mathbb{C}^3$ induces the effective
action of $T=T^3/\Delta(T^1)$ on the manifold $F_3$ of complete
complex flags in $\mathbb{C}^3$. We have $\dim T=2$, $\dim F_3=6$. There
are $6$ fixed points and the tangent representation at each point
is in general position. The action has no finite components in
stabilizers.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.2]{FlagGraph.pdf}
\end{center}
\caption{The sponge of complete flag manifold
$F_3$}\label{figFlag}
\end{figure}
Using the technique of \cite{BT} (see \cite{AyzMatr} for an
alternative proof), it can be shown that the orbit space $F_3/T$ is
homeomorphic to $S^4$. The sponge of the action has dimension $1$:
it is simply the GKM-graph of the action, which is well known.
This graph is shown in Fig.~\ref{figFlag}. As an abstract graph it
is the complete bipartite graph $K_{3,3}$. The figure on the right
shows how to realize this graph as the $1$-skeleton of a simple cell
structure on a $2$-torus $\mathcal{T}$. Actually, $\mathcal{T}$ can be embedded in
$S^4=F_3/T$ in a canonical way and the preimage of its small
neighborhood $U_\mathcal{T}$ under the projection map is described by
Construction \ref{constrDiscTimesMfd}. This subject will be
covered in detail in a subsequent paper \cite{AyzMatr}.
\end{ex}
Note the geometrical difference between these two examples and the
induced $T$-action on a quasitoric manifold. In the case of a $T$-action
on a quasitoric manifold, the sponge, which is an
$(n-2)$-dimensional complex, can be embedded in $S^{n-1}$ (since
it is the $(n-2)$-skeleton of a polytope). However, the sponges of
$G_{4,2}$ and $F_3$ do not embed in a sphere as codimension-one
complexes. In the case of $F_3$, the graph $K_{3,3}$ is well known to
be non-planar. The sponge of $G_{4,2}$, which is the octahedron
with $3$ squares attached, cannot be embedded in $\mathbb{R}^3$.
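The non-planarity of $K_{3,3}$ follows from the Euler-formula bound for bipartite planar graphs; a minimal numeric check:

```python
# K_{3,3} is the sponge (GKM-graph) of the flag manifold F_3.
v, e = 3 + 3, 3 * 3          # 6 vertices, 9 edges

# A simple planar bipartite graph on v >= 3 vertices has at most
# 2v - 4 edges (Euler's formula; bipartite graphs have no triangles,
# so every face has at least 4 sides).
print(e, 2 * v - 4)          # 9 > 8, so K_{3,3} is non-planar
assert e > 2 * v - 4
```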
\begin{rem}
Whenever the orbit space $Q=X/T$ is a sphere $S^{n+1}$, Alexander
duality implies $H^2(Q\setminus Z; R)\cong H_{n-2}(Z;R)$ for a
sponge $Z\subset Q$. The homology class corresponding to $e\in
H^2(Q\setminus Z;H_1(T))$ is represented by the chain
\[
\sigma=\sum_{F :\mbox{ facet of }Z}e_x\cdot[F]\in
C_{n-2}(Z;H_1(T)).
\]
Here $[F]$ is the fundamental class of a facet $F$ and $e_x\in
H_2(U_x;H_1(T))\cong H_1(T)$ is the local Euler class in an
interior point $x\in F^\circ$. The chain $\sigma$ is a cycle
according to relation \eqref{eqCocycleReln}.
\end{rem}
\section{Acknowledgements}
The author deeply appreciates the help of Victor Buchstaber,
especially his advice and explanations concerning the theory of
$(2n,k)$-manifolds. The author thanks Mikiya Masuda for a very
geometrical explanation of Bialynicki--Birula theory.
\section{Introduction}
\label{sec:introduction}
For a family of pseudo-disks $\EuScript{F}$ and a subset $\EuScript{P} \subset \EuScript{F}$, we denote by
$H(\EuScript{P},\EuScript{F})$ the hypergraph whose vertex set is $\EuScript{P}$ and whose
edges are all subsets of $\EuScript{P}$ of the form
$\{D \in \EuScript{P} \mid D \cap S \neq \emptyset\}$, with~$S$ a~pseudo-disk from~$\EuScript{F}$. That is, such a subset consists of all pseudo-disks in~$\EuScript{P}$ intersected by a fixed pseudo-disk of~$\EuScript{F}$.
Our main goal in this paper is to obtain an upper bound on the number of edges in~$H(\EuScript{P},\EuScript{F})$ of bounded cardinality. %
Specifically, we establish the following main property:
\begin{theorem}
\label{theorem:main_c}
Suppose $\EuScript{F}$ is a family of pseudo-disks in the plane and $\EuScript{P}$ is a finite subset of $\EuScript{F}$.
Let $k \ge 1$ be an integer parameter.
%
Then the number of edges of cardinality at most $k$ in $H(\EuScript{P},\EuScript{F})$ is $O(|\EuScript{P}|k^3)$, where the implied constant does not depend on the family $\EuScript{F}$.
%
%
%
\end{theorem}
Our proof technique exploits several ideas from the work of Buzaglo~\textsl{et~al.} \cite{BPR-13},
who studied the corresponding problem for points and pseudo-disks in the plane.
Specifically, Buzaglo~\textsl{et~al.} \cite{BPR-13} studied hypergraphs defined by \emph{points} and \emph{pseudo-disks enclosing them},
whereas we consider a hypergraph of \emph{pseudo-disks} and subsets of them
\emph{intersected by pseudo-disks}. Our result is a generalization of that in \cite{BPR-13},
as one can represent a point by a sufficiently small pseudo-disk.
Note that it is crucial to consider pseudo-disks rather than pseudo-circles
(that is, entire regions rather than just their boundaries).
Indeed, range spaces of pseudo-\emph{circles} and subsets of them met
by a pseudo-circle do not satisfy Theorem~\ref{theorem:main_c}:
Consider $n$ pairwise intersecting circles in general position and, for each of the ${n \choose 2}$ pairs of circles, place a tiny circle at one of their intersection points.
Each tiny circle meets exactly the two circles of its pair, so this construction yields
quadratically many edges of cardinality two, contradicting the linear bound asserted in Theorem~\ref{theorem:main_c} for $k=2$.
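The counting behind this counterexample can be sketched as follows; the concrete circle placement (equal-radius circles with centers on a small circle) is a hypothetical instance chosen so that every two circles intersect properly.

```python
from itertools import combinations
from math import cos, sin, pi, hypot, sqrt

def circle_intersection(c1, c2, r=1.0):
    """One of the two proper intersection points of equal-radius
    circles centered at c1 and c2 (assumes 0 < dist(c1, c2) < 2r)."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = hypot(dx, dy)
    h = sqrt(r * r - (d / 2) ** 2)
    mx, my = (c1[0] + c2[0]) / 2, (c1[1] + c2[1]) / 2
    return (mx + h * dy / d, my - h * dx / d)

n = 8
# n unit circles with centers on a circle of radius 0.1:
# any two of them intersect properly in two points.
centers = [(0.1 * cos(2 * pi * i / n), 0.1 * sin(2 * pi * i / n))
           for i in range(n)]

# For each pair, a sufficiently tiny circle placed (in general
# position) at one of its intersection points meets exactly that
# pair, giving a distinct edge of cardinality 2 for every pair.
tiny = [circle_intersection(a, b) for a, b in combinations(centers, 2)]
print(len(tiny))             # 28 = C(8, 2): quadratic in n, not linear
assert len(tiny) == n * (n - 1) // 2
```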
As an application of Theorem~\ref{theorem:main_c}, combined with the machinery of Chan~\textsl{et~al.}~\cite{CGKS-12},
we show that the \emph{dominating set} of smallest weight in a collection of pseudo-disks in the plane can be approximated
up to a constant factor in expected polynomial time; to the best of our knowledge, the result for the weighted version of this problem was previously unknown. The details are presented in Section~\ref{sec:dominating_set}.
\section{Proof of Theorem~\ref{theorem:main_c} }
\label{sec:main_result}
\subsection{Preliminaries}
\label{sec:prelim}
\paragraph*{Family of pseudo-disks.}
A family of pseudo-disks is a set of objects in the plane, where each object is bounded by a Jordan curve and any two object boundaries either are disjoint, cross properly exactly twice, or are tangent exactly once. No boundary overlaps are allowed. Several boundaries may meet at a common point.
\paragraph*{Arrangements and levels.}
Let $\EuScript{P} \subset \EuScript{F}$ be a finite family of pseudo-disks in the plane.
Let $\EuScript{A}(\EuScript{P})$ denote the \emph{arrangement} of $\EuScript{P}$ (see, e.g., \cite{AS-survey}).
The \emph{level} of an (open) face in this arrangement is the number of pseudo-disks containing it in their interior.
Well-known results by Kedem \textsl{et~al.} \cite{KLPS-86} and by Clarkson and Shor~\cite{CS-89} imply that
$\EuScript{A}(\EuScript{P})$ has $O(|\EuScript{P}|)$ level-2 faces and, more generally, $O(|\EuScript{P}|k)$ faces at level at most $k$.
\paragraph*{VC-dimension.}
Given a hypergraph $H$ with vertex set $X$,
we say that a subset $K \subseteq X$ is \emph{shattered} by $H$ if,
for every subset $Z$ of $K$,
$Z = K \cap e$ for some edge $e \in H$.
The \emph{VC-dimension} of $H$ is the size of the largest
finite shattered subset.
The rest of this section is organized as follows.
We first show that, for a family $\EuScript{F}$ of pseudo-disks and a finite subset $\EuScript{P} \subset \EuScript{F}$, the
VC-dimension of $H(\EuScript{P},\EuScript{F})$ is at most four and that this bound is optimal (Theorem~\ref{thm:vc_dim}).
Then we prove that the number of edges of
$H(\EuScript{P},\EuScript{F})$ of cardinality at most $k$ is linear in $|\EuScript{P}|$; the proof gives a super-polynomial dependency on $k$.
Finally, using the above bound on the VC-dimension and the proof
technique in \cite{BPR-13}, we are able to improve the dependency on $k$ and show that
the number of edges in $H(\EuScript{P},\EuScript{F})$ of cardinality at most $k$ is $O(|\EuScript{P}|k^3)$.
\bigskip
\subsection{The Analysis}
We first show:
\begin{theorem}
\label{thm:vc_dim}
A hypergraph $H(\EuScript{P},\EuScript{F})$ as defined above has VC-dimension at most four. This bound is the best possible.
\end{theorem}
We start by stating the following technical lemma from~\cite{BPR-13}
(see figure below): %
\begin{lemma}[Buzaglo~\textsl{et~al.} \cite{BPR-13}]
\label{lemma:even_crossings}
Let $\gamma$ and $\gamma'$ be arbitrary non-overlapping curves contained in
pseudo-disks $D$ and~$D'$, respectively. If the endpoints of
$\gamma$ lie outside of $D'$ and the endpoints of $\gamma'$ lie
outside of $D$, then $\gamma$ and $\gamma'$ cross an even
number of times (where tangency counts as two crossings).
\end{lemma}
\begin{figure}[h]
\centering
\includegraphics{lemma}
%
\end{figure}
We say that a set~$K$ of pseudo-disks is \emph{well-behaved} if every pseudo-disk in~$K$ has a point not covered
by the union of other pseudo-disks in $K$.
We begin with an auxiliary construction. Let $K$ be a finite well-behaved set of pseudo-disks.
We construct a graph $G = G(K)$ whose vertices correspond to
the pseudo-disks in $K$ and whose edges correspond to pseudo-disks in $\EuScript{F}$
that meet precisely two sets in $K$.
More specifically, we draw $G$ as follows:
\paragraph{Vertices of $G$:}
For each pseudo-disk $D \in K$, we fix a point $v(D)\in D$ (which need not lie on the boundary of $D$), not contained in any other pseudo-disk of $K$; it exists since $K$ is well behaved.
The points $\{v(D) \mid D \in K\}$ form the \emph{vertex set} of $G$.
\paragraph{Edges of $G$:}
Let $D_1, D_2 \in K$, $v_1 = v(D_1)$, and $v_2 = v(D_2)$. Suppose there exists
$S \in \EuScript{F}$ that intersects $D_1$ and $D_2$ and no other disk in $K$; fix one such $S$ (it is possible that $S \in K$). We will add an \emph{edge} $v_1v_2$ to $G$, drawn as described below. We call a connected portion of the edge contained in~$S$ a \emph{red} arc and such a portion outside $S$ a \emph{blue} arc. The edge $v_1v_2$ consists of at most one red arc and at most two blue arcs.
In the figures below, we use the convention of drawing pseudo-disks of $K$ in blue and the ``connecting'' pseudo-disk(s) from $\EuScript{F}$ in red.
\begin{description}
\item[\normalfont\emph{$S$ contains both $v_1$ and $v_2$}:]
Draw a red arc in~$S$ from $v_1$ to $v_2$.
This forms the edge $v_1v_2 \in G$.
See figure below.
\begin{figure}[h]
\centering
\includegraphics{case1a}\hspace*{2ex plus 0.5fil}\includegraphics{case1b}
%
%
\end{figure}
\item[\normalfont\emph{$S$ contains $v_1$, but not $v_{2}$}:]
Draw a red arc in $S$ that starts at $v_1$ and ends at the boundary of $S$
in~$D_2$. Now draw a blue arc in $D_2$ that starts at this point, ends at $v_2$ and lies completely outside $S$ otherwise. The concatenation of these two arcs forms the edge $v_1v_2$ of $G$.
%
See figure below.
\begin{figure}[h]
\centering
\includegraphics{case2a}%
\hspace*{1ex plus 0.5fil}\includegraphics{case2b}%
\hspace*{1ex plus 0.5fil}\includegraphics{case2c}
%
%
\end{figure}
\item[\normalfont\emph{$S$ contains neither $v_1$ nor $v_2$}:]
Draw a blue arc in $D_1$ that starts at $v_1$, ends at the boundary of $S$ in~$D_1$, and otherwise stays outside of $S$. From its endpoint, draw a red arc in~$S$ to a point of the boundary of $S$ in~$D_2$ and from there, draw the final blue arc outside $S$ in $D_2$ to the vertex $v_2$. The concatenation of these three arcs constitutes the edge $v_1v_2$.
See figure below.
\begin{figure}[h]
\centering
\includegraphics{case3a}\hspace*{2ex plus 0.5fil}\includegraphics{case3b}
%
%
\end{figure}
\end{description}
The simple but important observation that makes the construction above
possible is that, if $A$ and $B$ are two pseudo-disks, then both
$A \setminus B$ and $B \setminus A$ are arcwise connected.
By construction, for each arc of the constructed edge, either red or blue, there is a pseudo-disk that completely contains~it. We also assume that the arcs belonging to different edges of $G$ may intersect at a finite number of points, but do not overlap among themselves. Similarly, we will assume they do not overlap the boundaries of the finite number of pseudo-disks under consideration.
\begin{lemma}
\label{lemma:planar}
The graph $G = G(K)$ is planar.
\end{lemma}
\begin{proof}
We will prove $G$ is planar using the strong Hanani-Tutte theorem \cite{Tutte-Com-70}. Consider two edges~$e, e'$ that connect $v_1= v(D_1)$ to $v_2 = v(D_2)$, and $v_3 = v(D_3)$ to $v_4 = v(D_4)$ in $G(K)$, respectively, and do not share a vertex so that $D_1, D_2, D_3, D_4 \in K$ are pairwise distinct. We will prove that $e$ and $e'$ intersect an even
number of times, by considering their red and blue portions
separately. Let $S \in \EuScript{F}$ be the pseudo-disk intersecting $D_1$ and $D_2$
and no other disk in $K$ that was used to draw $e$, and let
$S' \in \EuScript{F}$ be the corresponding pseudo-disk intersecting only
$D_3$ and $D_4$ from the disks in $K$.
\textsc{Red-Blue Intersections:} Consider the red portion of $e$. This red arc is contained in $S$ and therefore does not meet any pseudo-disk of $K$ other than~$D_1, D_2$. As the blue portions of $e'$ lie inside $D_3, D_4$, this implies that the red arc of $e$ does not meet the blue portions of $e'$. Symmetrically, the red portion of $e'$ cannot intersect the blue portions of $e$.
\textsc{Red-Red Intersections:} The red arc $\alpha$ along $e$ lies entirely in $S$ and has one endpoint in~$D_1$ and the other in $D_2$. Similarly, the red arc $\alpha'$ along $e'$ lies entirely in $S'$ and has one endpoint in~$D_3$ and the other in $D_4$. As $S$ does not intersect $D_3$~and~$D_4$ and $S'$ does not intersect $D_1$~and~$D_2$, the endpoints of~$\alpha$ do not lie in $S'$ and the endpoints of~$\alpha'$ do not lie in $S$.
By Lemma~\ref{lemma:even_crossings}, $\alpha$ and~$\alpha'$ intersect an even number of times.
\textsc{Blue-Blue Intersections:} Consider blue arcs $\beta \subset e$ and $\beta' \subset e'$. The blue arc $\beta$ starts, say, at vertex $v_1$ of pseudo-disk $D_1$ and ends at $x$ in $D_1$ on the boundary of pseudo-disk $S$, and $\beta'$ starts, say, at vertex $v_3$ of pseudo-disk $D_3$ and ends at $x'$ in $D_3$ on the boundary of pseudo-disk $S'$.
By the construction of the vertices of $G$, we have
$v_1 \notin D_3$ and $v_3 \notin D_1$. Now, $x$ cannot lie in $D_3$ because $S$ meets only $D_1$ and $D_2$ and similarly $x'$ cannot lie in $D_1$. Hence, by Lemma~\ref{lemma:even_crossings} we deduce once again that $\beta$ and $\beta'$ intersect an even number of times.
There is a possibility that some edges of $G$ self-intersect, but such intersections can be removed using standard methods: see, for example, \cite{PSS07} and Figure~\ref{fig:self_intersect}.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{tangle}
\caption{How to undo self-intersections}
\label{fig:self_intersect}
\end{figure}
Thus, any two edges of $G$ that do not share an endpoint cross an even number of times, and therefore $G$ is planar by the strong Hanani-Tutte theorem \cite{Tutte-Com-70}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:vc_dim}]
Let $K \subseteq \EuScript{P}$ be a set shattered by $\EuScript{F}$.
Since $K$ is shattered, for every pseudo-disk $P \in K$ there
is a pseudo-disk $F \in \EuScript{F}$ that intersects $P$ and no other element of~$K$.
Therefore, $K$ is well-behaved.
For a well-behaved set $K$, $G(K)$ is planar, by Lemma~\ref{lemma:planar},
and therefore has at most $3|K| - 6$~edges (if $|K|<3$, we are already done). However, $K$ is shattered, so $G(K)$ is a complete graph with ${|K| \choose 2}$~edges. Therefore, ${|K| \choose 2} \leq 3|K| - 6$, implying $|K| \le 4$.
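The final arithmetic step can be checked mechanically: ${m \choose 2} \le 3m - 6$ holds only for $m \le 4$.

```python
from math import comb  # Python 3.8+

# A shattered set K forces G(K) to be a planar complete graph,
# hence C(|K|, 2) <= 3|K| - 6.  Check for which sizes this holds.
feasible = [m for m in range(3, 20) if comb(m, 2) <= 3 * m - 6]
print(feasible)   # [3, 4] -- so a shattered set has at most 4 elements
assert feasible == [3, 4]
```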
This proves that the VC-dimension of $H(\EuScript{P},\EuScript{F})$ is at most four.
Figure~\ref{fig:four_shattered} shows that this bound is the best possible,
\begin{figure}[h]
\centering
\includegraphics[scale=0.75]{4-shattered}
\caption{How to shatter a set of four pseudo-disk objects (shaded): the
pseudo-disks meeting all or none of the four objects are not shown.
The pseudo-disks meeting exactly one object are the objects
themselves.}
\label{fig:four_shattered}
\end{figure}
completing the proof of Theorem~\ref{thm:vc_dim}.
\end{proof}
Using the analysis above, %
we first show that the number of edges of cardinality two in $H(\EuScript{P},\EuScript{F})$ is linear in $|\EuScript{P}|$:
\begin{theorem}
\label{thm:pairs}
Let $\EuScript{F}$ be a family of pseudo-disks and let $\EuScript{P} \subset \EuScript{F}$. %
Then the number of edges of cardinality two in $H(\EuScript{P},\EuScript{F})$ is $O(|\EuScript{P}|)$.
\end{theorem}
\begin{proof}
First, consider the subset $K$ of $\EuScript{P}$ consisting of pseudo-disks $D$ with the property that $D$ contains a point $v(D)$ not covered by any other pseudo-disk of $\EuScript{P}$. $K$ is well-behaved, by construction, and consequently, by Lemma~\ref{lemma:planar}, the set of edges of cardinality two it induces in~$H(\EuScript{P},\EuScript{F})$ forms a planar graph, and therefore its cardinality must be $O(|\EuScript{P}|)$.
It remains to consider edges $e$ in $H(\EuScript{P},\EuScript{F})$ of the form $\{D_1,D_2\}$ with $D_1$ covered by other pseudo-disks of $\EuScript{P}$, without loss of generality. By definition of $H(\EuScript{P},\EuScript{F})$, there must exist a pseudo-disk $S \in \EuScript{F}$ that meets $D_1$, $D_2$, and no other pseudo-disk of $\EuScript{P}$. Notice that $D_1$ and $D_2$ must intersect, for otherwise, as $D_1$ is completely covered by other pseudo-disks of $\EuScript{P}$, it is impossible that $S$ intersects $D_1$, $D_2$, and no other pseudo-disks of $\EuScript{P}$ ($S$ would have to intersect one of the pseudo-disks covering~$D_1$, in addition to $D_1$ and $D_2$).
Since $D_1$ is completely covered by other pseudo-disks of $\EuScript{P}$, there must exist a point $p$ of $D_1\cap D_2$ contained in $S$ and no other pseudo-disks of $\EuScript{P}$.
If $D_{1} \cap D_{2}$ contains an (open) face~$f$ of level~two in the arrangement $\EuScript{A}(\EuScript{P})$, then we charge the edge~$e$ to~$f$ (then $p$ can be chosen to lie in~$f$). At most one edge is charged to~$f$.
Recalling that the number of faces of level two in $\EuScript{A}(\EuScript{P})$ is $O(|\EuScript{P}|)$, we conclude that the number of such edges $e$ is $O(|\EuScript{P}|)$.
Now suppose that $e$ is not charged to any face of level two in $\EuScript{A}(\EuScript{P})$. Then
the point $p$ chosen above must lie on the boundary of $D_1\cap D_2$ and not be contained in any other pseudo-disk of $\EuScript{P}$. In particular, in $\EuScript{A}(\EuScript{P})$, it must either (a)~coincide with a vertex of level~zero ($p$ lies on the boundary of both $D_1$ and $D_2$ and is not contained in any other pseudo-disk of $\EuScript{P}$) or (b)~lie in an (open) edge of level~one ($p$ must be either contained in the interior of $D_1$ and on the boundary of $D_2$, or vice versa).
Now consider a neighborhood~$U$ of $p$ sufficiently small to avoid all other pseudo-disks of $\EuScript{P}$.
In case~(b), it is easy to check that within~$U$ the edge of $\EuScript{A}(\EuScript{P})$ containing $p$ would have to bound a level-two face contained in $D_1\cap D_2$, a situation that we have already excluded above. In case~(a), examining all possibilities (the boundaries of~$D_1$ and of~$D_2$ may properly cross or touch at~$p$; $D_1$~and~$D_2$ may touch externally or internally), $U$ must meet a level-two face contained in~$D_1\cap D_2$ (excluded above) or a level-one face contained in~$D_1$ (also excluded, as we assumed $D_1$ is fully covered by other pseudo-disks of $\EuScript{P}$), or both. Therefore neither case~(a) nor~(b) arises,
thereby concluding the proof of the theorem.
\end{proof}
We next show:
\begin{theorem}
\label{thm:triples_quads}
Let $\EuScript{F}$ be a family of pseudo-disks, let $\EuScript{P} \subset \EuScript{F}$ be a subset of
$\EuScript{F}$, and let $k \geq 2$ be a~fixed integer.
Then the number of edges in $H(\EuScript{P},\EuScript{F})$ of cardinality at most $k$ is $O_k(|\EuScript{P}|)$.
\end{theorem}
In order to prove Theorem~\ref{thm:triples_quads}, we first need the following key lemma:
\begin{lemma}
\label{lemma:pairs}
Let $k \geq 2$ be a fixed integer. Let $\EuScript{F}$ be a family of pseudo-disks in
the plane. Let $\EuScript{H}$ be a subfamily of $m$ pseudo-disks from $\EuScript{F}$.
We call a pair of pseudo-disks $\{D_{1},D_{2}\}$ from $\EuScript{H}$ \emph{$k$-good}
if there exists a pseudo-disk in $\EuScript{F}$ that intersects $D_{1}$, $D_{2}$, and at most $k-2$ additional pseudo-disks from $\EuScript{H}$,
for a total of at most $k$ pseudo-disks from $\EuScript{H}$.
Then the number of $k$-good pairs in~$\EuScript{H}$ is at most $c_{k}m$, where $c_{k}$ is an absolute constant depending only on~$k$.
\end{lemma}
\begin{proof}
We prove the lemma by induction on $k$. The base case $k=2$ is precisely Theorem~\ref{thm:pairs}.
Suppose $k \geq 3$. We choose each pseudo-disk in $\EuScript{H}$ independently with probability $p = 1/2$ (but we keep $\EuScript{F}$ intact).
We denote the resulting sample of pseudo-disks by $\EuScript{H}'$.
We say that a $k$-good pair $\{D_1, D_2\}$ from $\EuScript{H}$ \emph{survives} if $D_1, D_2 \in \EuScript{H}'$
and there is a pseudo-disk in $\EuScript{F}$ that intersects $D_{1}, D_{2}$, and a total of at most $k-1$ pseudo-disks in $\EuScript{H}'$.
In other words, after sampling, $\{D_{1},D_{2}\}$ becomes \emph{$(k-1)$-good}.
We observe that a $k$-good pair $\{D_{1},D_{2}\}$ in $\EuScript{H}$ survives with probability at least $1/8$.
Indeed, because $\{D_{1}, D_{2}\}$ is a $k$-good pair, there exists $F \in \EuScript{F}$ such that $F$ intersects $D_{1}$ and $D_{2}$
and a total of $\ell \leq k$ pseudo-disks in $\EuScript{H}$. If $\ell \leq k-1$, then $\{D_{1}, D_{2}\}$
is $(k-1)$-good as soon as both $D_{1}$ and $D_{2}$ are in~$\EuScript{H}'$; this happens with probability $1/4$.
If $\ell=k$, let $S \in \EuScript{H}$ be a pseudo-disk other than $D_{1}$ and $D_{2}$ intersected by $F$. If $D_{1}$ and~$D_{2}$ are in
$\EuScript{H}'$ and $S$ is not in $\EuScript{H}'$, then $\{D_{1},D_{2}\}$ becomes $(k-1)$-good.
This happens with probability $1/8$; there may be other ways for $\{D_1,D_2\}$ to become $(k-1)$-good.
Therefore the expected number of $(k-1)$-good pairs in $\EuScript{H}'$ is at least
$\frac18$ of the number of $k$-good pairs in $\EuScript{H}$.
By the inductive hypothesis on $\EuScript{H}'$, %
there are at most $c_{k-1}|\EuScript{H}'|$ $(k-1)$-good pairs of pseudo-disks in $\EuScript{H}'$.
Therefore, the expected number of $(k-1)$-good pairs of pseudo-disks in $\EuScript{H}'$
is at most $c_{k-1} m/2$.
Combining the two estimates, the number of $k$-good pairs in $\EuScript{H}$ is at most $4 c_{k-1} m$, as claimed.
%
\end{proof}
Theorem~\ref{thm:triples_quads} is then proved using the following result from \cite{BPR-13}:
\begin{lemma}[Buzaglo~\textsl{et~al.} \cite{BPR-13}]
\label{lem:l_cycles}
Consider a graph $G$ on $m$ vertices, with the property that, in any subgraph induced by a subset $V$ of vertices,
the number of edges is at most $c |V|$, where $c > 0$ is an absolute constant. Then, for any $k \ge 2$, the number of copies of $K_k$
(the complete graph on~$k$~vertices) in $G$ is at most $d_k m$,
where $d_k = \frac{(2c)^{k-1}}{k!}$.
\end{lemma}
\begin{proof}[Proof of Theorem~\ref{thm:triples_quads}]
We follow the approach in~\cite{BPR-13}.
We define a graph $G$ whose vertex set is~$\EuScript{P}$. Two pseudo-disks in $\EuScript{P}$
form an edge in $G$ if they belong to an edge of $H(\EuScript{P},\EuScript{F})$
of cardinality $k$.
By Lemma~\ref{lemma:pairs}, if $G'$ is any induced subgraph of $G$, then the number of edges in $G'$ is $O(|V(G')|)$,
where $V(G')$ is the set of vertices of $G'$.
We can now use Lemma~\ref{lem:l_cycles} and conclude that the number of
copies of $K_{k}$, the complete graph on $k$ vertices,
in $G$ is $O(|V(G)|)$. This suffices to prove the assertion of Theorem~\ref{thm:triples_quads},
since every edge of cardinality exactly $k$ in $H(\EuScript{P},\EuScript{F})$
gives rise to a unique copy of $K_{k}$ in $G$, as is easily verified; repeating the argument for every cardinality $j \le k$ (together with the trivial bound of $|\EuScript{P}|$ on the number of edges of cardinality one) yields the bound for edges of cardinality at most $k$.
\end{proof}
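To see where the fast-growing constant in Theorem~\ref{thm:triples_quads} comes from, one can trace the constants through the proofs above: Lemma~\ref{lemma:pairs} gives the recurrence $c_k \le 4c_{k-1}$, and Lemma~\ref{lem:l_cycles} is then applied with $c = c_k$. A rough numerical sketch (the normalization $c_2 = 1$ is purely illustrative):

```python
from math import factorial

c2 = 1.0   # illustrative normalization of the constant from thm:pairs

def c(k):
    """Solve the recurrence c_k <= 4 c_{k-1} from Lemma lemma:pairs."""
    return c2 * 4 ** (k - 2)

def d(k):
    """Constant from Lemma lem:l_cycles, applied with c = c_k."""
    return (2 * c(k)) ** (k - 1) / factorial(k)

for k in (2, 4, 8, 16):
    print(k, c(k), d(k))

# d_k grows super-exponentially in k -- exactly the blow-up that the
# refined O(|P| k^3) bound of Theorem theorem:main_c avoids.
assert d(16) > 2 ** 16
```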
Next we would like to prove Theorem \ref{theorem:main_c},
namely, to show that the number of edges
of cardinality at most $k$ in $H(\EuScript{P},\EuScript{F})$ is $O(|\EuScript{P}|k^3)$.
The bound in Theorem~\ref{thm:triples_quads} is linear in $|\EuScript{P}|$
but at the cost of a~multiplicative constant that grows extremely fast (super-exponentially) in~$k$.
In order to overcome this problem and improve the dependence on~$k$, we use Theorem~\ref{thm:triples_quads} and
a fundamental property shown in~\cite{BPR-13}, namely, that in a set system of bounded VC-dimension
every set has a unique small signature. Specifically:
\begin{theorem}[Buzaglo~\textsl{et~al.} \cite{BPR-13}]
\label{thm:signature}
Let $\EuScript{S} = \{S_1, \ldots, S_m\}$ be a set family
%
with VC-dimension $d$. Then it is possible to assign to each set
$S \in \EuScript{S}$ a subset $S^{*} \subseteq S$ (its \emph{signature}), of cardinality at~most~$d$,
so that distinct sets from~$\EuScript{S}$ are assigned distinct signatures.
\end{theorem}
Given this machinery we are ready to prove Theorem \ref{theorem:main_c}.
We follow almost verbatim the random sampling approach in~\cite{BPR-13}.
By Theorem~\ref{thm:vc_dim}, the VC-dimension of $H(\EuScript{P},\EuScript{F})$
is at most four. Applying Theorem~\ref{thm:signature},
we assign to each $e \in H(\EuScript{P},\EuScript{F})$ a unique subset $B_{e} \subseteq e$ of
cardinality at most four.
Let $0 < q < 1/2$ be a parameter to be fixed shortly.
We now select each pseudo-disk in~$\EuScript{P}$ independently with probability $q$.
Let $\EuScript{P}'$ be the resulting sample, and consider the induced hypergraph $H(\EuScript{P}',\EuScript{F})$.
We say that $e \in H(\EuScript{P},\EuScript{F})$ \emph{survives} if all the pseudo-disks in $B_{e}$
are in $\EuScript{P}'$ but none of the remaining pseudo-disks in $e \setminus B_{e}$ are in $\EuScript{P}'$.
It is easy to verify that, if $e$ has cardinality
at most $k$, then $e$ survives with probability
$$
q^{|B_{e}|} (1-q)^{|e| - |B_{e}|} \ge q^{|B_{e}|} (1-q)^{k - |B_{e}|} \ge q^{4} (1-q)^{k-4} ,
$$
where the first inequality follows from the assumption $|e| \le k$, and the second from the fact that $q < 1/2$.
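These inequalities can be sanity-checked numerically; the values $q = 0.2$ and $k = 10$ below are illustrative.

```python
# Sanity check of the survival-probability chain
#   q^{|B_e|}(1-q)^{|e|-|B_e|} >= q^{|B_e|}(1-q)^{k-|B_e|}
#                              >= q^4 (1-q)^{k-4},
# assuming 0 < q < 1/2, |B_e| <= 4, and |B_e| <= |e| <= k.
q, k = 0.2, 10
for e_size in range(1, k + 1):              # |e| <= k
    for b in range(0, min(4, e_size) + 1):  # |B_e| <= min(4, |e|)
        lhs = q ** b * (1 - q) ** (e_size - b)
        mid = q ** b * (1 - q) ** (k - b)
        rhs = q ** 4 * (1 - q) ** (k - 4)
        assert lhs >= mid >= rhs
print("survival-probability bounds hold for q=0.2, k=10")
```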
By Theorems~\ref{thm:pairs} and~\ref{thm:triples_quads},
the number of edges in $H(\EuScript{P}',\EuScript{F})$ of cardinality two, three, and four is $O(|\EuScript{P}'|)$, with an absolute constant of proportionality.
Clearly, the number of edges in $H(\EuScript{P}',\EuScript{F})$
of cardinality one is at most $|\EuScript{P}'|$.
It thus follows that the number of surviving edges from $H(\EuScript{P},\EuScript{F})$ is~$O(|\EuScript{P}'|)$, by Theorem~\ref{thm:signature}.
Taking expectations, we see that the expected number of surviving edges
from $H(\EuScript{P},\EuScript{F})$ is $O(|\EuScript{P}'|)=O(q|\EuScript{P}|)$.
On the other hand, the expected number of surviving edges of $H(\EuScript{P},\EuScript{F})$
with cardinality at most $k$ is at least
$q^{4} (1-q)^{k-4}Z$, where $Z$ is the number of edges in $H(\EuScript{P},\EuScript{F})$
of cardinality at most $k$.
Therefore, $q^{4} (1-q)^{k-4}Z = O(q|\EuScript{P}|)$.
By setting $q = 1/k$, we obtain $Z = O(|\EuScript{P}| k^{3})$, as asserted.
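The choice $q = 1/k$ can be checked numerically: the proof gives $Z = O\bigl(q^{-3}(1-q)^{-(k-4)}|\EuScript{P}|\bigr)$, and with $q = 1/k$ the factor $q^{-3}(1-q)^{-(k-4)}$ stays within a constant multiple of $k^3$.

```python
from math import e

# With q = 1/k, the blow-up factor relative to k^3 stays bounded:
# (1 - 1/k)^{-(k-4)} approaches e from below as k grows.
for k in (5, 10, 100, 1000):
    q = 1.0 / k
    factor = 1.0 / (q ** 3 * (1 - q) ** (k - 4))
    print(k, factor / k ** 3)   # bounded; tends to e = 2.718...
    assert factor <= e * k ** 3
```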
%
%
%
This completes the proof of Theorem~\ref{theorem:main_c}.
%
\section{An application to the weighted dominating set problem}
\label{sec:dominating_set}
\paragraph*{Problem statement.}
We are given a finite collection $\EuScript{P}$ of \emph{pseudo-disks} in the plane.
We define the \emph{intersection graph} $G$ of $\EuScript{P}$ in the standard manner, that is,
the vertex set is $\EuScript{P}$ and there is an edge between two pseudo-disks if their intersection is non-empty.
The \textsc{dominating set} problem for $G$ is to find a smallest subset $\EuScript{D} \subseteq \EuScript{P}$, such that
each vertex in $G$ is either in $\EuScript{D}$ or is adjacent to a vertex in $\EuScript{D}$. In other words, this is a smallest
subset of $\EuScript{P}$ such that any pseudo-disk in $\EuScript{P}$ appears in the subset or is intersected by a pseudo-disk
in it.
In the \textsc{weighted dominating set} problem, each element of $\EuScript{P}$ is assigned a non-negative weight,
and the goal is to find a dominating set of smallest total weight.
\paragraph*{Related work.}
It is beyond the scope of this paper to report all previous studies related to the dominating set problem.
We only mention that the abstract problem for general graphs is
NP-hard to solve \cite{GJ-79, Karp-72}, and that the standard greedy algorithm yields a
$(1+\ln n)$-approximation factor \cite{Chvatal-79, LOVASZ-75},
where $n$~is the size of the vertex set.
The problem remains NP-hard
in more specialized settings, such as unit disk graphs and growth-bounded graphs \cite{CCJ-90}.
However, the approximation factors achievable in polynomial time tend to be better.
Specifically, the \textsc{dominating set} problem admits a polynomial-time approximation scheme~(\Term{PTAS}\xspace)
for the aforementioned settings \cite{HMRRRS-98, NH-06}; see also \cite{EM-09} for a constant-factor
approximation for the \textsc{weighted dominating set} problem on unit disk graphs.
The current state-of-the-art for pseudo-disk graphs is a \Term{PTAS}\xspace for the unweighted case, recently
introduced by Govindarajan~\textsl{et~al.}~\cite{GRRR-16}.
See also the earlier studies by Erlebach and van Leeuwen \cite{EL-08} for special forms of triangles and for axis-parallel
rectangles, and by Gibson and Pirwani~\cite{GP-10}, who obtained a \Term{PTAS}\xspace for the case of disk graphs, and a constant-factor
approximation for the weighted problem.
The latter result was strengthened by Chan~\textsl{et~al.}~\cite{CGKS-12}, who also presented a simple
reduction from \textsc{set cover} to \textsc{dominating set}, considerably simplifying the approach taken in \cite{GP-10}.
For a more detailed discussion, we refer the reader to~\cite{GP-10, GRRR-16} and the references therein.
In this section we deduce the following main result, using the assertions in Theorem~\ref{theorem:main_c}, combined with
the recent machinery of Chan~\textsl{et~al.} \cite{CGKS-12}:
\begin{theorem}
\label{thm:dominating_set}
There is a randomized expected polynomial-time algorithm that, given a set~$\EuScript{P}$ of~pseudo-disks in the plane,
each with a non-negative weight,
computes a dominating set~$\EuScript{D} \subseteq \EuScript{P}$ of~weight~$O(\textsc{Opt})$,
where $\textsc{Opt}$ is the smallest total weight of such a dominating set.
\end{theorem}
We first show the connection between the dominating set problem and the \emph{hitting-set} problem, and then describe the machinery
of Chan~\textsl{et~al.} \cite{CGKS-12} and how to apply it in the scenario of our problem.
\paragraph*{Hitting sets and dominating sets.}
Fix any family $\EuScript{P}$ of pseudo-disks in the plane.
Consider the intersection graph $G$ of $\EuScript{P}$ as defined above.
In $G$, the \emph{neighborhood} of a pseudo-disk is the set of pseudo-disks intersecting
it, where we include the pseudo-disk itself in its neighborhood; this set spans a star subgraph of $G$ centered at that pseudo-disk.
The family of all neighborhoods defines a hypergraph $H(\EuScript{P})$, which is a special case of the hypergraph $H(\EuScript{P}, \EuScript{F})$ defined above,
as in this case we have $\EuScript{F} = \EuScript{P}$.
We now observe that a dominating set in $G$ is, in fact, a \emph{hitting set} for $H(\EuScript{P})$, where the latter
refers to a subset $\EuScript{D} \subseteq \EuScript{P}$ that meets all edges of $H(\EuScript{P})$.
That is, $\EuScript{D}$ dominates all objects in~$\EuScript{P}$ if and only if each neighborhood in the
intersection graph (that is, an edge of $H(\EuScript{P})$) is hit by an element of~$\EuScript{D}$.
In particular, a \textsc{minimum hitting set} for $H(\EuScript{P})$ corresponds to a \textsc{minimum dominating set} of $G$,
and this property holds in the weighted setting as well.
See Figure~\ref{fig:example} for an example.
\begin{figure} %
\centering
\subfigure[]{\includegraphics{example_disks}\label{fig:disks}}%
\qquad
\subfigure[]{\includegraphics{example_int_graph}\label{fig:graph}}
\caption{The intersection graph in \subref{fig:graph} induced by the pseudo-disks $P_1, \ldots, P_6$, depicted in \subref{fig:disks},
is a ``star'' centered at $P_1$. The smallest dominating set is $\{ P_1 \}$.
The underlying hypergraph is $H(\EuScript{P})$, where $\EuScript{P} = \{P_1, \ldots, P_6\}$, and the edges are
$\{ \{P_1, \ldots, P_6\}, \{P_1, P_2\} , \{P_1, P_3\}, \{P_1, P_4\}, \{P_1, P_5\}, \{P_1, P_6\} \}$.
Finally, $\{ P_1 \}$ is the smallest hitting set for this hypergraph.}
\label{fig:example}
\end{figure}
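The correspondence above can be checked mechanically. The sketch below, using our own (hypothetical) adjacency encoding, builds the neighborhood hypergraph and tests a candidate hitting set; on the star of Figure~\ref{fig:example}, $\{P_1\}$ hits every neighborhood.

```python
def neighborhood_hypergraph(adj):
    """One edge per pseudo-disk: its closed neighborhood in the
    intersection graph given by the adjacency dict `adj`."""
    return [{v} | set(adj[v]) for v in adj]

def is_hitting_set(D, edges):
    """D is a hitting set iff it meets every edge, i.e. iff it is a
    dominating set of the underlying intersection graph."""
    return all(edge & D for edge in edges)
```
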
Chan~\textsl{et~al.} \cite{CGKS-12} showed the existence of small approximation factors (achievable in expected polynomial time)
for the weighted hitting-set problem in favorable scenarios. Specifically, they showed:
\begin{theorem}[Chan~\textsl{et~al.} \cite{CGKS-12}]
\label{thm:chan_weighted_cover}
Let $H(V,E)$ be a hypergraph representing a hitting set instance, where the number of edges of cardinality $k$
for any restriction of $H$ to a subset $V' \subseteq V$ is at most $O(|V'| k^{c})$, where $c > 0$ is an absolute
constant and $k \le |V'|$ is an integer parameter.\footnote{In~\cite{CGKS-12} this property is referred to as ``shallow cell complexity,'' although we do not define it formally in this paper.}
%
Then there exists a randomized polynomial-time $O(1)$-approximation algorithm for the weighted hitting set problem for $H(V,E)$.
\end{theorem}
It is now easy to verify that the statement in Theorem~\ref{thm:dominating_set} follows by combining
Theorem~\ref{thm:chan_weighted_cover}, the equivalence between hitting sets and dominating
sets discussed above, and our main result established in Theorem~\ref{theorem:main_c}.
\iffalse %
\paragraph*{Examples of pseudo-disk families.}
In Section~\ref{sec:introduction} we mentioned two geometric settings that give rise to pseudo-disks. The first is a collection of homothets of a fixed convex region in the plane.
We now describe the second setting in more detail; this is a standard
construction; we follow the exposition of Section~4.3 in \cite{AH-08}.
Fix a finite point set $Q$ of sites in the plane and consider its
standard Euclidean Voronoi diagram $\EuScript{V}(Q)$, that is the
decomposition of the plane into \emph{Voronoi} cells $V(q)$, for each
$q \in Q$, where
\[
V(q) = \{x \in {\mathbb{R}}^2 \ \mid \ d(x,q) \le d(x,q'), \forall q' \neq q, q' \in Q \} ,
\]
and $d(\cdot, \cdot)$ is the Euclidean distance.
A well-known transformation \cite{ES-86} maps each point $q(q_x,q_y)\in Q$ to a plane $\pi(q) \subset \mathbb{R}^3$ tangent to the standard paraboloid $z=x^2+y^2$ at the point $(q_x,q_y,q_x^2+q_y^2)$, so that the upper envelope of the resulting planes, when projected to the $xy$-plane, coincides with $\EuScript{V}(Q)$. Equivalently, the set of points lying on or above all the planes $\pi(q)$, $q \in Q$, forms an unbounded polyhedron $H$ whose faces are (or, more precisely, vertically project to) the Voronoi regions.
Adding a new point $p(p_x,p_y)$ to $Q$ corresponds to introducing an additional plane $\pi(p)$ tangent to the paraboloid at the point $(p_x,p_y,p_x^2+p_y^2)$. Its intersection with $H$ projects precisely to the Voronoi cell~$V(p)$ of~$p$ in~$\EuScript{V}(Q\cup\{p\})$. Two such cells $V(p)$, $V(p')$ behave as pseudo-disks, since the intersection of their boundaries corresponds to the intersection of the line $\pi(p)\cap \pi(p')$ with the boundary of the convex set~$H$, and is therefore either empty or consists of exactly two points.
The pseudo-disks $V(p)$,~$V(p')$ cannot nest, as that would contradict the Voronoi cells being vertical projections of $\pi(p) \cap H$ and $\pi(p') \cap H$, respectively, and the planes $\pi(p)$,~$\pi(p')$ being tangent to the standard paraboloid which is contained in $H$.
\fi %
\section*{Discussion}
\label{sec:discussion}
An earlier version of this work was presented at the 2015 Fall Workshop on Computational Geometry in Buffalo, NY (\url{https://www.cse.buffalo.edu/fwcg2015/assets/pdf/FWCG_2015_paper_15.pdf}). Bal\'azs Keszegh has recently pointed out to us that a construction essentially identical to the one in Lemma~\ref{lemma:planar} has appeared independently in \cite{balazs:arXiv}. He also noted that, just as in \cite{balazs:arXiv}, Theorem~\ref{theorem:main_c} and, by extension, Theorem~\ref{thm:dominating_set}, also apply to the following, more general setting, unmodified: We once again consider the intersection hypergraph~$H(\EuScript{P},\EuScript{F})$, but allow $\EuScript{P}$~and~$\EuScript{F}$ to be two completely unrelated families of pseudo-disks.
$\EuScript{P}$ forms the ground set as above, and the hypergraph edges are formed by subsets of pseudo-disks from $\EuScript{P}$ intersected by a pseudo-disk from $\EuScript{F}$. We believe our analysis extends to this case as well, although we have not verified it in full detail.
\subsection{Measuring the Entropy of Vectors}
\label{subsec:entropy}
The first step toward practical chip identification is to pick high-entropy vectors. We set the clock period for each adder style to achieve a 1\% error rate and simulate 200,000 random input vectors on 50 instances of RCAs, CLAs and HCAs. We calculate the entropy of each applied vector according to Eq.~\ref{eq:entropy}. The average entropies for RCA, CLA and HCA are 0.01227, 0.01187 and 0.03260, respectively. In the RCA, 97.78\% of all input vectors induce the same result on all 50 chips, while in the HCA the corresponding figure is only 90.73\%. This is because in an RCA a much higher percentage of vectors cause errors on no chips, or, in other words, sensitize no paths with delay comparable to or exceeding the clock period. On the other hand, an HCA, which is a tree adder, tends to have a variety of paths with similar nominal delays, and a much lower percentage of vectors are error-free across all chips at the chosen clock period. Another view of this result is as follows: when considering, for each adder type, the set of random vectors that caused an error on one or more of the 50 instances, we find that each such vector causes errors in about 71\% of RCA chips, versus only 24\% for CLA and 10\% for HCA adders. If each adder type is operated at the same error rate, the error-causing input vectors will be less unique on the RCA. Note, however, that non-unique input vectors do not mean that the output vectors are less unique to each chip; it only means that the inputs that {\itshape induce} the erroneous outputs are less unique to each chip.
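The statistics reported in this paragraph can be reproduced from raw simulation data along the following lines; the `results` layout and helper names are our own illustration, not the authors' code.

```python
from collections import Counter
from math import log2

def vector_entropy(outputs):
    """Empirical entropy (Eq. for H(X|a_j)) for one input vector:
    `outputs` holds the result each chip produced for that vector."""
    n = len(outputs)
    return -sum(c / n * log2(c / n) for c in Counter(outputs).values())

def identifiability_stats(results):
    """results[j][k]: output of chip k on input vector j (assumed layout).
    Returns the mean per-vector entropy and the fraction of vectors
    that produce the same result on every chip."""
    ents = [vector_entropy(row) for row in results]
    same = sum(len(set(row)) == 1 for row in results) / len(results)
    return sum(ents) / len(ents), same
```

A vector that yields the identical output on every chip contributes zero entropy, which is why adders with many same-output vectors score low on average.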
\subsection{Identification Results} \label{sssec:identification}
Next we explore identification of chip instances using their outputs. For this experiment, we simulate 40,000 vectors on 50 instances of each adder type operating at their respective clock periods for 1\% error (Tab.~\ref{tab:clock_frequency}). To measure similarity or lack of similarity between the outputs produced, we use a metric of {\itshape Matching Distance}. The matching distance for any two adders of the same type is the number of outputs that differ when the same (40,000) input vectors are applied. The histograms of between-class and within-class matching distances are shown in Fig.~\ref{fig:histogram}. The within-class bars correspond to the matching distance between two trials of the same chip in the presence of noise (see Sec.~\ref{sec:methodology}), and the between-class bars correspond to pairings of two different chips. The less the between-class and within-class distributions overlap, the better one can tell whether two sets of outputs are from the same chip, and therefore the better one can identify a chip. A chip can always be identified using some matching distance as a decision threshold if the between-class and within-class distances are non-overlapping.
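A minimal sketch of the Matching Distance computation might look as follows; the function names are ours, and the two output streams are assumed to be aligned to the same input sequence.

```python
def matching_distance(out_a, out_b):
    """Matching Distance: the number of input vectors on which the two
    chips' output streams differ."""
    return sum(x != y for x, y in zip(out_a, out_b))

def perfectly_separable(within, between):
    """A single threshold identifies every chip correctly iff the
    within-class and between-class distance populations do not overlap."""
    return max(within) < min(between)
```
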
\begin{figure}[htb!]
\centering
\setlength{\belowcaptionskip}{0pt}
\captionsetup[subfloat]{farskip=20pt,captionskip=0pt}
\subfloat[Matching distance of ripple carry adder]{
\includegraphics[width=0.8\columnwidth]{histogram_RCA_1p.png}
}\newline
\subfloat[Matching distance of carry lookahead adder]{
\includegraphics[width=0.8\columnwidth]{histogram_CLA_1p.png}
}\newline
\subfloat[Matching distance of Han-Carlson adder]{
\includegraphics[width=0.8\columnwidth]{histogram_HanCarlson_1p.png}
}
\hfill
\caption{Matching distance based on outputs produced for 40,000 random input vectors for each adder type.}
\label{fig:histogram}
\end{figure}
{\bfseries ROC Curve:}
A Receiver Operating Characteristic (ROC) curve is used to measure the performance of chip identification. Each point of an ROC curve corresponds to a single decision threshold and depicts the trade-off between true positives and false positives at that decision threshold. In an ideal case where between-class and within-class distances are separable, the ROC curve will be a step function~\cite{Jain19991371}, as this would indicate that there exists some decision threshold that can correctly identify all true positives (within-class pairings) without accepting any false positives (between-class pairings). Fig.~\ref{fig:ROC_1p} shows the ROC curve for the three adder styles; the RCA is easily the most identifiable of the three adder styles in this case.
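One way to compute AUC values such as those in Fig.~\ref{fig:ROC_1p} from the two distance populations is the rank-statistic formulation below. This is a sketch under the assumption that a smaller matching distance predicts "same chip"; it is not the authors' actual evaluation code.

```python
def roc_auc(within, between):
    """AUC as a rank statistic: the probability that a random within-class
    (same-chip) distance falls below a random between-class one, with
    ties counted as 1/2.  An AUC of 1.0 corresponds to a step-function
    ROC, i.e. perfectly separable distance distributions."""
    pairs = len(within) * len(between)
    wins = sum((w < b) + 0.5 * (w == b) for w in within for b in between)
    return wins / pairs
```
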
\begin{figure}[htb!]
\centering
\setlength{\belowcaptionskip}{-7pt}
\captionsetup[subfloat]{farskip=20pt,captionskip=2pt}
\includegraphics[width=0.65\columnwidth]{ROC_all_1p.png}
\caption{ROC curve of three adders when 40,000 vectors are simulated on 50 instances of each adder using a clock period for 1\% error rate. The AUCs for RCA, CLA, and HCA are 0.99, 0.89 and 0.81 respectively.}
\label{fig:ROC_1p}
\end{figure}
\par
\subsection{Impact of Error Rate}
There is usually a trade-off between the number of errors in the outputs and the power/performance improvement in the system. While accepting a higher error rate can be more attractive for efficiency, our results show that a higher error rate can increase the identifiability of a circuit. In this experiment we set the clock period such that average error rates of 1\%, 2\% and 5\% are observed in the output results. The results of this experiment are shown in Fig.~\ref{fig:ROC_higher}.
\begin{figure}[htb!]
\centering
\setlength{\abovecaptionskip}{-7pt}
\captionsetup[subfloat]{farskip=20pt,captionskip=2pt}
\subfloat[ROC of RCA]{
\includegraphics[width=0.3\columnwidth]{ROC_all_RCA.png}
\label{fig:ROC_RCA}
}
\subfloat[ROC of CLA]{
\includegraphics[width=0.3\columnwidth]{ROC_all_CLA.png}
\label{fig:ROC_CLA}
}
\subfloat[ROC of HCA]{
\includegraphics[width=0.3\columnwidth]{ROC_all_HCA.png}
\label{fig:ROC_HCA}
}
\hfill
\newline \vspace{10pt}
\caption{ROC curve of each adder type at different error rates}
\label{fig:ROC_higher}
\end{figure}
\subsection{Error Pattern as a Chip Fingerprint}
Overscaling-based approximate computing relaxes clock period constraints, allowing signals on the long combinational paths of a circuit to not fully propagate within the clock period. In this case, the register at the end of the path may capture intermediate (wrong) results on the clock edge.
Because of process variation, the critical paths of different chips will have different delays. For example, recent works report 12\% frequency variation at 1.1V in 45nm technology \cite{7298264} and 30\% for sub-90nm technologies~\cite{Borkar:2003}. The variable path delays will cause different erroneous outputs in approximate computation.
\par
{\bfseries Example: }
We now give a concrete example to show how gate delays can lead to different results at overscaled operating points. Fig.~\ref{fig:FullAdder_complete} shows an example 8-bit ripple carry adder that has two 8-bit input signals $\{a_7 \dots a_0\}$ and $\{b_7 \dots b_0\}$, and a 9-bit output signal $\{c_{out} s_7 \dots s_0\}$. If $\{a_7 \dots a_0\}=8'b11111111$ and $\{b_7 \dots b_0\}=8'b00000001$, a carry signal has to propagate all the way from FA0 to FA7 in order to generate the correct result. We now focus on what occurs after the carry has propagated through the first seven full adders and signal $c_7$ rises on the input to FA7.
Letting the delay of gate $i$ in FA7 (see Fig.~\ref{fig:FullAdder_complete}) be denoted $d_i$, when the value of $c_7$ rises, the output $s_7$ will fall after time $d_2$. The critical path to $c_{out}$ goes through gates 3 and 5. Therefore, it takes $d_3+d_5$ from the time $c_7$ changes for $c_{out}$ to rise.
The value captured on $c_{out}$ and $s_7$ will depend on the delays of the gate instances. In the presence of process variation, some gates might be faster or slower on one chip than another. If all gates are slow relative to the clock period, then the rising transition on $c_7$ may propagate to neither $c_{out}$ nor $s_7$ before the clock edge, and the output will be $c_{out}s_7=01$. If gates 2,3, and 5 are all fast, then the correct value of $c_{out}s_7=10$ will be captured on the clock edge; this is depicted in Fig.~\ref{fig:timing1}. If gate 2 is slow, and gates 3 and 5 are fast, then output $s_7$ will not have fallen before the capturing clock edge, and the captured value will be $c_{out}s_7=11$ (Fig.~\ref{fig:timing2}). If gate 2 is fast and gate 3 or 5 is slow, then output $s_7$ will have fallen but $c_{out}$ will not have risen, and the captured output value will be $c_{out}s_7=00$ (Fig.~\ref{fig:timing3}). This example shows that variations in gate delays can lead to different erroneous outputs in approximate computing; this is the reason that overscaled approximate computing may lead to device identifiability.
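The four captured-value cases enumerated above can be condensed into a tiny timing model; the delay parameters and helper name below are hypothetical, not taken from the paper's simulation setup.

```python
def captured_fa7(d2, d35, slack):
    """Value pair (c_out, s7) latched at the clock edge after c7 rises.
    s7 falls once gate 2 settles (delay d2); c_out rises once gates 3
    and 5 settle (combined delay d35).  `slack` is the time remaining
    until the capturing clock edge.  The correct result is (1, 0)."""
    s7 = 0 if d2 <= slack else 1        # stale 1 if gate 2 misses the edge
    c_out = 1 if d35 <= slack else 0    # stale 0 if gates 3/5 miss the edge
    return c_out, s7
```

Varying the three delays reproduces all four outcomes described in the text, which is exactly how per-chip delay variation turns into per-chip error patterns.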
\begin{figure}[htb!]
\setlength{\belowcaptionskip}{-8pt}
\centering
\includegraphics[width=0.7\columnwidth]{FullAdder_complete.pdf}
\caption{An 8-bit ripple carry adder with full-adder blocks.}
\label{fig:FullAdder_complete}
\end{figure}
\begin{figure}[htb!]
\centering
\setlength{\belowcaptionskip}{-10pt}
\captionsetup[subfloat]{farskip=20pt,captionskip=0pt}
\subfloat[]{
\label{fig:timing1}
\includegraphics[width=0.25\columnwidth, height=2.5cm]{timing1.pdf}
}
\subfloat[]{
\includegraphics[width=0.25\columnwidth, height=2.5cm]{timing2.pdf}
\label{fig:timing2}
}
\subfloat[]{
\centering
\includegraphics[width=0.25\columnwidth, height=2.5cm]{timing3.pdf}
\label{fig:timing3}
}
\caption{Timing diagram for FA7 of the ripple carry adder depicted in Fig.~\ref{fig:FullAdder_complete}}
\label{fig:timings}
\end{figure}
\par
{\bfseries Entropy of Input Vectors: }
Note that for the above example, we only considered a portion of a small circuit. For a large circuit, each gate instance has its own delay, and many different output results can be generated for some inputs. A good input vector for identification is one that can distinguish chips whose path delays differ because of process variation.
When applying random input vectors to a circuit, the majority of vectors will not sensitize long paths and will therefore produce deterministic, error-free outputs.
To distinguish useful vectors from non-useful ones, we use the metric of conditional entropy. When an input vector $a_j$ is applied across a large number of devices at a particular operating point, let the probability of observing output $x_i$ be denoted $Pr(x_i |a_j)$. The entropy associated with the response to input $a_j$ is given by Eq.~\ref{eq:entropy}. The input vectors with high entropy may be specific to the style of adder and operating point. Although entropy can be estimated from the outputs of adders when viewed as a black box, the entropy associated with different inputs to each adder type implicitly depends on the distribution of path lengths and the path diversity inside of the adder.
\begin{equation}\label{eq:entropy}
H(X|a_j) = - \sum_{i} Pr(x_i|a_j)\log_{2}Pr(x_i|a_j)
\end{equation}
\vspace{-10pt}
If an input vector has high entropy on a particular style of adder, it will induce different results for many of the considered chips.
However, in practice, an input vector usually produces the same results for many chips. Furthermore, noise can diminish the usefulness of high-entropy inputs. Nonetheless, entropy is a useful metric that can provide insights about the identifying ability of each adder, as will be discussed in Sec.~\ref{subsec:entropy}.
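Eq.~\ref{eq:entropy} translates directly into code once the conditional output distribution has been estimated; the probability-table interface below is an assumed illustration, not from the paper.

```python
from math import log2

def conditional_entropy(pr):
    """H(X | a_j) per the entropy equation: `pr` maps each observed
    output x_i to its estimated probability Pr(x_i | a_j); the values
    are assumed to sum to 1."""
    return -sum(p * log2(p) for p in pr.values() if p > 0)
```

A deterministic vector (one output with probability 1) has zero entropy, while a vector whose outcome is uniform over $2^k$ possible outputs has entropy $k$.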
\subsection{Threat Model}
In this work, we consider how approximate computing can compromise privacy of a device or of a device-bearer.
We assume that an adversary can apply chosen operands to the processor and observe the computed, and possibly identifying, erroneous results.
{\bfseries Contributions: } The specific contributions we make in the paper are as follows:
\begin{itemize}
\itemsep -1pt
\item We show, for the first time, that results from overscaled approximate computations can reveal the identity of the chip that performed the computation.
\item We compare and contrast the identifying ability of the outputs of three popular styles of 32-bit adders.
\end{itemize}
The remainder of this paper is organized as follows: Section~\ref{sec:related_work} provides related work on approximate computing to give context to our contribution. Section~\ref{sec:explanation} explains how approximate computational results can reveal device identity. Section~\ref{sec:methodology} addresses methodology. Section~\ref{sec:evaluation} presents simulation results showing how privacy leakage varies with design and clock frequency. Section~\ref{sec:conclusion} concludes the paper.
\section{Introduction}
\label{sec:introduction}
\input{introduction}
\section{Background and Related Work}
\label{sec:related_work}
\input{related-work}
\section{Identification from Overscaling}
\label{sec:explanation}
\input{explanation}
\section{Methodology}
\label{sec:methodology}
\input{methodology}
\section{Evaluation}
\label{sec:evaluation}
\input{evaluation}
\section{Conclusion}
\label{sec:conclusion}
\input{conclusion}
\thispagestyle{empty}
\footnotesize
\bibliographystyle{acm}